1. Introduction
The global higher education landscape is shaped by globalization, technological advances (particularly in artificial intelligence), and economic, social, and political factors [1]. In Mexico, private institutions have contributed to increased higher education enrollment, but coverage, growth, offerings, and quality vary widely among these institutions [2]. However, the proliferation of Higher Education Institutions (HEIs) does not automatically ensure growth and development. Legal recognition legitimizes their operations but does not guarantee the delivery of quality education that addresses the challenges emphasized by the United Nations Educational, Scientific and Cultural Organization (UNESCO).
The educational quality of HEIs should involve prioritizing their capacity to meet external evaluation and accreditation systems [3]. HEIs should transform their vision and culture towards providing an education that contributes to social, local, and global development. In Mexico, the education sector has been slow to adapt to change, characterized by inertia and a reluctance to embrace new approaches, especially in the learning assessment process [4,5]. Today, assessment must be participatory, empowering, and emancipatory for those being assessed, with a comprehensive, socio-critical, and interpretative approach grounded in constructivism [6,7,8]. The question remains whether schools, which emerged over 200 years ago with structures and practices that are now largely anachronistic, can adapt to alternative assessment approaches that place responsibility and decision-making about learning in the hands of the student.
Evaluation is a strategy that supports formative learning, provides feedback, and drives improvement as one element of the teaching–learning–assessment trinomial [9,10]. The Joint Committee on Standards for Educational Evaluation [11] defines four quality standards: (1) propriety standards, which make evaluation legal and ethical; (2) utility standards, which ensure that practical information is provided; (3) feasibility, which refers to time, resources, and participation; and, finally, (4) accuracy, which indicates that the evaluation is technically adequate.
The forms of learning, the time devoted to it, and the competencies acquired are directly conditioned by the evaluation methods, the faculty member’s evaluation capacity, and, more generally, by the evaluation system followed [12]. Hess and co-authors [13] suggest that faculty members use different strategies to ensure effective assessments, including continuous evaluation of student progress, familiarity with evaluation models and tools, and constant adaptation and innovation of evaluation practices.
This study aims to understand university professors’ perspectives on the learning assessment process, including the importance they place on evaluation in the training process, how they conceptualize it, and the considerations that shape their practices.
1.1. Literature Review
Evaluation is a significant global concern for educational, research, and training policies. Although there has been some progress, there is still a long way to go before evaluation becomes the expected practice in the classroom. In European countries, assessment is a standard faculty competency that helps ensure quality in educational institutions. The European University Association [14] has identified challenges such as inclusive and fair assessment, supporting student agency, and faculty members’ professional development. Recommended measures to address these include aligning formative and summative assessments, promoting student co-responsibility, and implementing comprehensive approaches to assessment planning. However, experts note that innovative approaches are still needed to fully implement competencies and make equitable assessment a reality [15].
The ministries of education of western and northern Canada [16] have advocated new assessment methods that contribute to learning. However, implementing modern assessment practices remains challenging due to limited learning opportunities and barriers such as time and resources. Different United States (US) educational organizations have emphasized the importance of formative assessment, but its implementation faces socio-cultural and political resistance to change [17].
In Mexico, reports have pointed to areas for improvement in learning assessment. The Organisation for Economic Co-operation and Development (OECD) [18] has outlined challenges including exam-oriented teaching, a lack of consistency between classwork and grades, and the absence of classroom assessment support. In 2019, the OECD recommended policy changes to foster innovation and quality in higher education [19]. Reviews of studies conducted by educational bodies and researchers on the reality of assessment show that, even though assessment has become more integrated into the learning process, loose ends and grey areas still need to be studied to improve assessment practice.
In this regard, a gap has been found [20] between advances in the field of assessment (approaches, principles, and methodologies that are clearly explained in the literature) and faculty assessment practices, which continue to follow a traditional approach. This was the conclusion of a study of a group of law professors from a Mexican HEI. Similar results were reported by Jaffar et al. [21], who studied the perceptions of 468 students from 10 universities in Pakistan regarding assessment tasks and their influence on the learning approach students apply. They concluded that students developed superficial learning strategies due to a negative perception of the assessment task, focused on obtaining a grade rather than achieving learning.
In Spain, researchers compared the perceptions of students who underwent formative assessment with those of students in the same subject whose assessment was traditional and grade-based, resting predominantly on a final exam [22]. The results were both encouraging and worrying: with formative assessment, the perception of learning was more favorable (including involvement in tasks) than with traditional assessment; however, it took time for students to understand the system, which they considered complex. Furthermore, the student is an actor in the scene, and their appreciation of the evaluation methods can exert an imperceptible tension.
As the studies in these countries make clear, one of the loose ends is faculty competence to carry out the evaluation process, together with their reflections and considerations about its concept and practice. Other researchers speak of an acculturation of evaluation among faculty. The key lies in the faculty member’s knowledge and beliefs, willingness to incorporate new evaluation methods, and the conditions for doing so [23,24].
1.2. Theoretical Framework
In the approach used in this research, the concept of quality is important, so it is necessary to understand this term. Quality is a word that abounds in business texts; it is usually associated with customer satisfaction, cost reduction, error prevention, and continuous improvement [25]. There is no single concept referred to when talking about the quality of evaluation. For this research, we have not defined the concept in advance, since the aim is to recognize the attributes that denote quality from the perspective and practice of faculty members. However, it is recognized that the quality of assessment in the classroom is not the same as that required for large-scale evaluation, in which validity and reliability indicators relate to the test and its items.
Table 1 details the concept that some scholars have proposed, expanding the quality indicators of large-scale assessment to classroom assessment.
Their definitions include evaluation that drives learning, is practical and efficient, and is not conducted excessively. Unlike large-scale evaluation, validity is associated with the evaluator’s interpretation, and reliability with having enough information to support that interpretation.
These concepts allow for an understanding of the meaning of quality in evaluation. It is a broad conceptual framework of quality that transforms what is inherited from psychometrics and adjusts it to the characteristics of classroom assessment.
1.3. Research Questions
Improving educational assessment practices is a global need that requires changing perspectives and teaching practices. While evaluation presents opportunities for improvement, it still carries the legacy of psychometrics. Considering the context of this study, this research seeks to answer the research questions in Table 2.
3. Results and Discussion
The initial coding in the theoretical sample yielded 86 codes (Appendix S4). Feedback was the main code; it was linked to other codes to a greater extent than any other. Some codes can be associated with approaches that conceptualize assessment as learning or for learning, and others with traditional aspects, such as grading, summative assessment, and exams. The co-occurrence matrix of the ten main codes shown in Table 4 highlights the highest totals for continuous assessment (59) and feedback (86), that is, the number of times each of these codes co-occurred with the others.
Similarly, Table 4 shows that the greatest co-occurrence between any two codes is for feedback with continuous assessment (23 times) and feedback with formative assessment that serves learning (18 times). The co-occurrence coefficient between feedback and continuous assessment is 0.23, while between feedback and formative evaluation it is 0.21. Although these indicators are far from a coefficient of 1.00, which represents the greatest strength, they have the highest intensity within the present analysis.
Another way to examine the co-occurrence of codes is with a Sankey diagram. Figure 2 shows the breadth of the line linking feedback and continuous assessment, which co-occur more often than any other pair of codes. To a very limited extent, they co-occur with the exam code. Faculty members give feedback to the student during the learning process, not at a summative moment. The low co-occurrence with difficulties could indicate that feedback is not complicated for them; however, it will later be shown why they associate difficulties with feedback. This way of visualizing co-occurrences allows more than one code to be linked.
Figure 3 shows which codes appear together with confidence, which denotes the faculty member’s sense of security about the evaluation, and which codes occur simultaneously with difficulties, a code applied to quotations that mention evaluation as a difficult, stressful, or even conflict-generating process. This shows that the same actions that create difficulty also give the instructor confidence to judge the student’s learning or provide feedback. For example, time is a condition that can determine confidence in evaluation, to the extent that it allows the instructor to know the student better; when time is insufficient, it represents a difficulty. Faculty members perceive confidence in the evaluation when they use instruments and various evaluation strategies, manage to minimize subjectivity, and provide feedback. However, the same empirical situations are also related to difficulties.
Codes were grouped in the process of systematizing the information; from the 86 that emerged from the open or initial coding [43], 14 groups of codes were defined (Table 5), through which the faculty members’ perspectives were analyzed and theorized.
The group with the greatest empirical extension as a category is the Formative Feedback group, in which codes are gathered to show the evaluation process as an aid to learning. The next group is the Confidence group, which is named because it encompasses codes that reflect actions associated with the faculty’s security and certainty about the result.
3.1. Evaluation from the Perspective of the Faculty
The faculty members’ concepts of assessment were analyzed. The approach implicit in their definitions and the opportunities and difficulties encountered in their practice were reviewed. Next, we present the different topics identified in this analysis.
In the interviews, faculty members repeatedly said that assessment is learning and that it strengthens the student’s self-analysis and self-management skills. Their points of view coincide with those of other authors [44,45,46], whose definitions agree that assessment implies a value judgment and that it is applied to improve what is evaluated.
Table 6 gathers the words of some faculty members who define the evaluation process, interpreted from their implicit approach.
Learning evaluation and the learning process are both verification exercises. Evaluation provides feedback on the process, and the faculty member’s assessment approach is modern and not limited to measurements or summative exams.
Faculty members make a clear distinction between formative assessment: “I evaluate on an ongoing basis, without punitive purposes… so that they see that this helps them know what they can improve… I try to get them to ask to be evaluated”, and summative evaluation: “even those that have many points, I try to make them see that the number is a stigma, I encourage them to analyze their grade and see in it what their achievements are and their opportunities to improve” (P7). Professor P8 says that the student should know that they will be evaluated “both to receive feedback only and to obtain a grade”. Other faculty members, such as Professor P9, assign a few points to the progress of a project but ask students to heed the suggestions she makes in the full delivery, where she evaluates in a final and global way.
Faculty members credit formative assessment with the ability to influence teaching: “it determines if I have to redirect what I have planned” (P3); “… it allows me to know if I should underpin the knowledge that students need… if you have time (he clarifies it with an inflection of voice)” (P4). This shows they recognize that formative assessment provides information they can use to improve teaching, in line with authors who state that faculty members can transform their teaching using continuous and formative assessment information [46].
3.2. How and for What Purpose Do Faculty Members Evaluate?
Evaluating through day-to-day situations lends authenticity to the evaluation and is ideal for competencies or skills; in that sense, it is better than a written test or knowledge exam [47]. Owing to its realism, its cognitive challenge, and the evaluative judgment it entails [48], students see that authentic evaluation adds value to their future professional work. Villarroel and Bruna [49] conducted an extensive literature search and reported that authentic assessment impacts the quality and depth of learning, develops higher-order skills, generates confidence and autonomy, and motivates and increases the capacity for self-regulation and reflection.
Hernán et al. [50] reviewed the current literature and found that authentic assessment is defined by activities similar to real life and a context like the one the student would encounter in the profession. The faculty members interviewed mentioned that students value the challenges they experience according to the learning they gain from them and their link with the professional reality of environments that represent future job possibilities.
The results show that faculty members use situational techniques and diversify their strategies, which is an advantage according to recent assessment approaches, particularly when they aim to develop higher cognitive competencies or processes. Despite this, they focus mostly on products: “I have an evaluation plan; it almost always has activities related to the topics we are looking at. I try to ensure that at the end of each topic, we have some deliverables, which I grade and that adds to their grade. Ultimately, a project or broader work always recapitulates or applies everything they learned in the course” (P10). This is the typical structure of faculty members’ evaluation plans; not in every case, but it is apparent that the observation of performance as assessable, gradable evidence is left out. In addition, faculty members repeatedly cited the high number of students in a group as an impediment to observing performance and reviewing individual actions within teamwork, considering only the review of the product or deliverable prepared as a team.
Evaluation in an authentic (and formative) approach requires the faculty member to observe performances, actions, and attitudes. Those who said they did so find it more comfortable when the activity does not carry a grade and is purely formative: “I am more of an observer of performance during class with activities that have no value” (P11); “I like to observe how they behave, how they work on their project, what each one contributes, what they do and what they do not do as well” (P12).
Observing on-site performance is paramount in authentic evaluation and key to feedback. However, it loses relevance when only the physical, tangible deliverables derived from the process are considered. This encourages students to devote their full attention to the deliverable, especially if it carries a high point value, reinforcing an orientation towards the numerical grade and prioritizing it over real, deep learning.
Each student experiences their training process in their own way, and their learning differs accordingly. This implies that evaluation is individual, even when students work as a team. In these cases, exercises designed to observe the student’s actions are necessary, assessing not only what they know but also how they apply that knowledge and transfer it to real or simulated situations. However, when these exercises are done “at home,” and only the resulting product is reviewed, the faculty member loses the opportunity to evaluate the student individually.
What is common in teamwork is that the same grade is assigned to all members because, in reality, what is being evaluated is the deliverable and not the learning of each student. “You are going to investigate, you are going to share, … they are going to work as a team, and that team activity is the one that will be graded” (P13).
Several faculty members consider it difficult to evaluate learning individually when teamwork is requested. Although they would like to do so, the belief is: “I cannot control if they do it in a team if it is done by someone else … I cannot control that part, can I? … Then each one hands it over, that is how the activity goes, but well… how do you know if he did it with someone else?” (P14). They lack procedures that allow the assessment of individual performance and of the student’s contributions to the group’s process and product [51,52]. Deliverables or products provide evidence of what students achieve, individually or in teams, and are regularly used for a summative assessment carrying a high percentage of the grade.
Faculty members use various authentic and formative assessment strategies but are limited by traditional expectations and lack of methodologies for individual assessment in teamwork. They have evolved their assessment practices towards continuous assessment with frequent feedback and contextual relevance for students. However, there are shortcomings in systematic follow-ups for formative assessment and collaborative work evaluation.
3.3. What Do Faculty Members Evaluate?
This section examines how faculty members consider the criteria formally established in their program when defining evaluation strategies and tasks. A valid assessment reflects a clear purpose and correspondence with learning purposes and teaching strategies. For De la Orden [53], this coherence is a virtue of evaluation; therefore, it is a powerful factor in educational quality and in the effectiveness of the axiological system that sustains it.
Although the professors focus the evaluation on specific content or on the day-to-day activities they ask of students, they do not lose sight of the fact that these derive from the formal objectives of the program: “If, as I said, I evaluate by the topics that we are seeing, they have to do with the objectives and, in the model of our institution, with the competencies that they have to develop” (P10). When this coherence does not exist, not only the virtue but also the teleological attribute of the evaluation is fragmented, and its validity is questioned; the assessment is reduced to an instrument of control, through which the faculty member achieves a false tranquility if the student does what is asked. Moreover, what then matters to the student is accreditation. Moreno Olivos [54] points out that the process leans towards a positivist perspective when tasks that lack relevance to the purposes are evaluated.
Designing the evaluation based on the aims or objectives helps ensure that the evidence, or evaluation tasks, are not judged by external criteria of a different nature, such as compliance, form, or punctuality. The validity of the assessment supports the interpretive judgment of what it assesses [55]: “If the objective of the module is for them to design a didactic sequence to apply elements of an instructional design, then my logic would tell me that I have to do a project, … that will be evidence, I review the progress of the project to know what they are learning” (P15).
With this alignment and collection of evidence, Professor P8 recognizes in the strategy and the instrument “that the evaluation reflects that (the objectives) from the design of the instrument, (he keeps thinking, he wants to give examples) not to ask them how much it is, that is, that they make calculations, but that the instrument itself is by the vision of the course, it is about solving problems, analyzing information in context, and so on, not mechanizing a procedure”.
For Kane and Wools [56], a systematic and effective approach to achieving validity involves three actions: consistency in the interpretation and use of results, evidence that supports the interpretation and intended uses, and what could be called a meta-evaluation, i.e., an evaluation of how well the evidence supports the interpretation. These same authors propose that the functional perspective, which focuses on the usefulness of the evidence and the information collected, is the main one in classroom evaluation.
Up to this point, evaluation has been analyzed from its teleological perspective, since validity is established in the alignment with the ends; however, it should be made visible that the quality of evaluation can also be understood from other perspectives: methodological, axiological, and psychological [57], each representing a different approach in which what is expected rests on the method, on values, or on the student’s representation of what they have learned.
Faculty members recognize that discipline dictates different methods of evaluation; their experiences point to this: “a course where the student has investigated or worked based on problems, then his evaluation, I would expect it to be also based on problems, based on similar situations and not just an exam” (P3); “… they have to do readings and part of the evaluation is the questions they ask about those texts, they ask them in class and I take it as participation” (P16).
Although, in the evaluation processes they implement, faculty members take care that their strategies and instruments evaluate what is expected from the teleological perspective, they recognize substantial difficulties: “the truth is that it is difficult to know everything that the student has learned” (P17). This faculty member refers particularly to the fact that the student usually learns more than the formal curriculum establishes; the informal and hidden curricula also influence their learning.
Whether the validity of the evaluation relates to the alignment with aims or purposes, to the methodology, or, in general, to didactic management, it represents a challenge for faculty members, as they point out: “yes, and because I believe that in any educational model evaluation is what causes the most, the most conflict” (P3); “evaluation is a very difficult subject, I think it is the most difficult part of the teaching job. Preparing and teaching is a pleasure… evaluation is the serious and difficult part” (P6); “Evaluation is complex, we need to learn how to do it more objectively… (looks for another word) integral and be sure that what we observe is really what the student has learned” (P2).
Faculty members prioritize formative purposes for evaluation, which adds quality and validity from a teleological perspective. However, they face challenges as traditional methods cannot measure the complex learning construct. Qualitative and personalistic approaches are necessary. Education proposes models, but faculty members remain skeptical when grading student achievements.
3.4. What Do Faculty Members Evaluate for?
The reality studied confirms positions recognizing that the assessment-for-learning approach has become more visible in teaching practice, consistent with an educational process focused on learning [58,59]. However, it is not specified how to develop it concretely so that it impacts learning.
This research shows that the educational institution’s design and curricular strategy weigh heavily, as they trigger messages that the faculty incorporates as part of their dialogue at the individual and collegiate levels, creating institutionalized thinking. The environment in which the research was carried out is immersed in a process of change that promotes a high level of didactic training according to the educational model.
Sometimes, the changes that occur from the “top-down” are not well received [60], but, as Fullan [61] notes, strategies in both directions are necessary. This author argues that initiatives that emerge from “above” and are accompanied by timely capacity building that transforms faculty practice can guarantee change, even radical change.
Within the environment where the research was carried out, feedback is indeed a key element that evolved: first through an institutional “mandate,” then through the genuine commitment faculty members assumed in their role as trainers, becoming part of the reality of the evaluation process, underpinned by the belief that it promotes student learning.
Feedback is an action that links assessment with learning. In this research, feedback, the code with the greatest groundedness and density, shows that faculty members recognize assessment as a process that contributes to learning. Even though assessment persists as measurement, faculty members in this context are clearly moving towards formative assessment.
For faculty members, the important thing is to provide feedback on a day-to-day basis. As Professor P14, from the business area, mentions, “…. well… At times, I took time to go with them to the laboratories, so in the laboratory, what I did is I walked among them, the doubts I had were resolved, (silence)… that is, in that interaction, I knew and realized what they were doing”.
Professor P11, from the humanities area, commented that it is not easy to provide effective feedback and, therefore “… it is important to observe them, to give them continuous feedback (upward inflection in the voice), on partial tasks, to help them improve”. As Professor P4, from the engineering area, said, “I evaluate everything given to me. If the student gives me something, I will give it feedback! (raises the tone of voice). I cannot keep something that they give me and not tell them if it was right or wrong, whatever they do, you must give them feedback, that is why I say that… (pause) … It takes a long time, but it is the greatest value I see in this process”.
This type of practice responds to an evaluation approach whose fundamental purpose is to achieve learning through its results, observing behaviors and commenting on them in an environment of confidence. Santos Guerra [62] recommends turning evaluation from a threat into an aid. It was also observed that feedback is not a mechanical practice carried out by instruction of the educational model; faculty members highlight its importance and usefulness in their own conceptualizations. Professor P4 mentions: “I see the evaluation process as this moment in which you give feedback to the student on their areas of opportunity, not to make a criticism that indicates the student is wrong, but rather, to tell them that you are standing at this point and you need to take these actions to get to this other place, which is the goal that was set from the beginning, at the beginning of the course, workshop or whatever”. Professor P16, from the area of social sciences, mentions: “… assessment encourages learning, I do believe it (he seems to say it to ensure it). I often use evaluation to encourage them, so I like this feedback”.
However, faculty members are aware of the conditions of classroom work, the limitations that the number of students per group may represent, and even the students’ lack of interest. They expressed this in an interview: “I find it difficult to conduct qualitative evaluations of each student, (pause) we have large groups, and that does not help. In addition, as professors, we are always involved in projects, research, and administrative activities; even if you want to, you must try to be practical”—he mentions that several evaluation strategies cannot always give them all the feedback (P17). Or the thoughts of Professor P2: “Feedback is important, but not everyone can receive it in depth; it is not possible because it is not always of interest to the student. Some seek it; others do not; perhaps they do not see the value (accompanied by a gesture in which they raise their shoulders as if indicating indifference), or perhaps they cannot listen and not feel offended or criticized”. She adds that because of her number of groups and students, “I cannot always give immediate or timely feedback”.
The data spark debate about what the theory, the model, the design, and the conditions of reality dictate. A possible interpretation is that faculty members are on a journey: in this context, they have undertaken a change, which is not a model but a path, with uncertainties, apparent setbacks, and inertia; even so, it reflects a real shift, as what many of the faculty members interviewed said makes clear.
This research did not aim to analyze the issue of change management in education. However, each approach in the theoretical sampling revealed that contexts, understood as intersubjective constructs designed in the institution–faculty member–student interaction [63], could not be overlooked, and each of these contexts shapes the faculty’s narrative. Institutional change and dialogue are (de)constructed by each faculty member; from that perspective, they discuss and implement their students’ evaluation and feedback process. For this reason, the understanding given by each faculty member’s narrative shows how they assimilate, interpret, and bring feedback to life, an element inherent to assessment and relevant to learning, but also their recognition of the conditions and circumstances that some find limiting and that allow others to succeed. Everyone is on a journey.
3.5. Implications of the Findings
Using a grounded theory approach, we have examined how a group of academics in Mexico perceive the purpose and significance of evaluation. It is essential to connect these findings with broader studies on how teachers in compulsory and higher education view the purpose, function, and nature of assessment processes.
Our work examines the ethical aspects of assessment methods used in education. It emphasizes the need for assessment practices to align with educational institutions’ goals and society’s expectations. Transparency, academic integrity, and adherence to educational institutions’ values and standards are crucial for establishing a fair and effective assessment process that benefits everyone involved.
Our work also emphasizes the importance of student feedback in the assessment process, as it can offer valuable insights that may differ from institutional beliefs and policies, enriching the overall educational experience. We also consider the impact of timing and the duration of assessments on professors’ perceptions, especially in a modern academic environment that includes many additional responsibilities beyond teaching duties.
Other researchers have also studied teachers' perceptions in different contexts. For example, in exploring teachers' beliefs about assessment, Barnes and colleagues [64] explain that beliefs encompass personal truths, lived experiences, emotions, and memories. They also suggest that beliefs form part of a dynamic mental structure consisting of rules, concepts, ideas, and preferences; in essence, knowledge. While some teachers neglect assessment because they prioritize measurability, cultural differences also shape views on assessment. The authors stress that ethical considerations around power and assessment warrant further research.
Fulmer et al. [
65] also undertook a study to identify the factors that shape teachers’ adoption of new instructional practices in the classroom. They categorized these factors into three levels of influence: the individual, the school, and society. The study found a significant research gap in understanding how school and societal factors influence teachers’ practices, emphasizing the need for further investigation. It also highlighted the importance of considering contextual factors in assessments and teachers’ extracurricular experiences in shaping teaching values.
Xu and Brown [
66] analyzed how teachers perceive and use assessments, considering factors like the school environment. Their study introduced the Teacher Assessment Literacy in Practice (TALiP) framework, which underscores the need for teachers to reflect on their assessment practices and adapt to improve. The research highlighted the impact of teachers’ self-perception as assessors and the influence of school environments on assessment practices. It addressed teachers’ challenges in implementing preferred assessment methods due to institutional regulations and the school environment.
In a study by Bonner et al. [
67], researchers used self-determination theory to show how teachers’ beliefs interact with external mandates. The study found that teachers’ beliefs and self-perceptions affect their alignment with standards-based systems. It suggested that professional development should focus on providing resources to help teachers develop assessments aligned with external standards. Schools do not always establish norms coherent with the external system, leading to difficulties in implementing alternative assessments desired by teachers. In this regard, teachers expressed resentment and fear due to feeling compelled to comply with the limited structure of the system, impacting their professional autonomy and beliefs.
Pastore and Andrade [
68] studied the assessment knowledge and skills that teacher education should develop. They introduced a model with three components: understanding concepts, putting knowledge into practice, and attending to social and emotional aspects. The model emphasizes the practical, conceptual, and socio-emotional dimensions of assessment and received positive feedback from 35 experts. The study aims to spark conversations and future research on assessment knowledge and skills that can shape teacher education and professional development programs.
In their study, Estaji et al. [
69] examined the skills and knowledge teachers need for assessing students in digital environments, focusing on Teacher Assessment Literacy in Digital Environments (TALiDE). They discussed the use of digital assessments in education, the challenges that arose during COVID-19, academic integrity, and the skills needed for digital assessments. The study emphasized the importance of clear criteria and rubrics for scoring digital assessments to ensure fairness and to address concerns about academic honesty in digital evaluations.
These studies analyze how educators’ beliefs about assessment influence their teaching methods. They explore the factors that significantly affect teachers’ assessment practices, such as the school environment, extracurricular activities, and external policies. They also introduce theoretical frameworks, which help educators to understand and apply assessment methods in traditional and digital classroom settings. The literature emphasizes the practical challenges teachers face when implementing preferred assessment strategies within existing institutional and regulatory frameworks. Additionally, it highlights the role of professional development programs in assisting teachers to align their assessment practices with external standards while emphasizing the tension between educators’ autonomy and standardized educational systems.
Considering prior research, it is essential to merge the ethical considerations of assessment practices with the theoretical frameworks presented in this study. Linking these ethical principles with constructs such as TALiP and TALiDE yields a more comprehensive understanding of how such frameworks can incorporate ethical considerations into practical assessment methods. Additionally, the impact of professors' additional responsibilities, as highlighted above, should be examined in relation to the implementation of these frameworks. This work should also include recommendations for aligning professional development with ethical practices and for considering student perspectives, ultimately bridging the gap between theory and practice in assessment.
4. Conclusions
Faculty members’ views on evaluation differ and are shaped by their expertise and personal and professional experiences. However, their approach is also influenced by the educational context, the curriculum design, teaching methods, and the institution’s evaluation policies. This demonstrates that evaluation extends beyond mere content retention measurements or end-of-subject exams.
Educators assess students’ performance to determine whether they have met the expected standards. They understand the importance of aligning assessment with learning objectives to ensure a fair and accurate evaluation, and they strive to uphold objectivity and avoid disadvantaging students with poorly designed assessments. Therefore, they value not only correctly answered tests but also activities that demonstrate critical thinking and the application of developed skills to real-world problems.
This study emphasized instructors’ ongoing assessments and their focus on feedback. They understand that their comments to students direct them toward improvement and impact their motivation, performance, and self-direction.
Based on the analysis and discussion of the results, it can be concluded that faculty members recognize the enrichment of assessment methodologies. They no longer rely solely on traditional, summative, retention-based, punitive approaches; instead, they embrace new evaluation methods, acknowledge the principles of the underlying theoretical frameworks, adopt an approach of evaluating to learn rather than learning to be evaluated, and infuse creativity into their evaluation practices.
It is important to continue developing and enhancing instructors’ assessment skills. Different authors suggest the knowledge, actions, and qualities educators should possess regarding evaluation. It is recommended that programs be created that provide them with conceptual clarity, inspire them to develop effective evaluation methods that consider the specificities of their educational environment, and urge them to engage in reflective and constructive discussions about learning, teaching, and assessment.
4.1. Limitations and Future Work
The primary constraint of this study is the small number of participating professors: of the 60 professors invited, only half took part in the research. Additionally, the study’s focus on a single university in Mexico may restrict the applicability of its findings to other contexts.
Drawing from the insights gained through the theoretical sample, several research avenues are recommended:
The moments and ways in which formative and summative assessments coexist. Educators ask themselves the following questions: When is the ideal time to conduct a summative evaluation? How far should the formative process of the course advance before an evaluation of learning takes place? If one accepts that we evaluate to learn rather than learn to be evaluated, should summative evaluation be conducted only once the student recognizes that they have already achieved the learning?
The impact of numerical grading on assessment. The faculty members’ concerns are reflected in these questions: What characterizes valid procedures for awarding a grade based on the information provided by the evaluation? What is the role of numerical grading in an assessment-for-learning approach? How can we ensure that the grade does not condition the student’s decisions about what to do?
The quality of the evaluation process in large groups. This line of research responds to faculty members’ strong concerns about how to implement formative assessment in large groups: What student-to-teacher ratios, group sizes, and feedback turnaround times are workable? What strategies are appropriate for evaluation in large groups?
4.2. Contribution of the Research to the Educational Field
The research makes contributions in two key areas. First, it shows how the results can inform educational decision-making and action. Second, it advances the investigation of the phenomenon itself.
The value of this study lies in recognizing that substantial improvements in teaching assessment practices can be achieved through structural changes supported by administrative, academic, and technological processes. Furthermore, systematic research on the evaluation process reveals potential gaps in conceptual models, which often evolve faster than they can be implemented in education. This research is pertinent for designing interventions that enhance educators’ assessment competence and improve overall assessment quality.
Further exploration is possible in applying grounded theory to educational research, particularly in the context of classroom assessment. Grounded theory methodology challenges established paradigms, fosters reflection, and encourages researchers’ creativity and self-awareness. This approach enhances the transparency and credibility of findings, thereby adding reliability to the study. Emphasizing a qualitative and interpretative approach, it acknowledges evaluation as a dialogic and intersubjective space. This perspective enriches the educational environment for the benefit of faculty members and students, going beyond the mere quantification of learning.