Article

A Qualitative Study of the Experiences and Perceptions of Korean Undergraduates Regarding Two-Stage Examinations

by Hyewon Jang 1,*, Junaid Rashid 2 and Joohee Lee 3
1 Office of Educational Innovation, Sejong University, Seoul 05006, Republic of Korea
2 Department of Artificial Intelligence and Data Science, Sejong University, Seoul 05006, Republic of Korea
3 Department of Climate and Energy, Sejong University, Seoul 05006, Republic of Korea
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(18), 8273; https://doi.org/10.3390/su16188273
Submission received: 14 August 2024 / Revised: 12 September 2024 / Accepted: 19 September 2024 / Published: 23 September 2024
(This article belongs to the Section Sustainable Education and Approaches)

Abstract

Researchers have recognized collaborative problem-solving as a key competency for addressing sustainability challenges through creative and holistic approaches. Nevertheless, transforming traditional individual assessments into collaborative examinations that improve collaborative problem-solving is challenging. This study examines the experiences and perceptions of Korean undergraduates regarding two-stage examinations comprising individual and team components. Semi-structured interviews with nine students yielded eighteen subthemes categorized into four themes: process, positive aspects, negative aspects, and action items for support. Participants experienced a dynamic, collaborative problem-solving process through two-stage examinations; reported positive aspects, such as improved grades, learning opportunities, immediate feedback, and reduced test anxiety; and noted negative aspects, such as the burden of teamwork. As the first qualitative study of students’ perceptions of two-stage examinations, this research explores transformative assessment practices that enhance the collaborative problem-solving skills crucial for addressing global sustainability challenges. Furthermore, to promote collaborative assessments, this study discusses implications for educators planning to use two-stage examinations and offers suggestions for future research.

1. Introduction

Building collaborative problem-solving skills is crucial for human society’s sustainable development. Challenges that impede sustainable development, such as the climate crisis, are characterized by socio-economic complexity, uncertainty, ethical issues, and difficulties in reaching consensus, and therefore require cooperation [1,2]. Educators in Western countries, such as the United States and Canada, have recognized the importance of collaborative problem-solving skills and implemented two-stage examinations [3,4,5]. Two-stage examinations combine a traditional individual test with a group problem-solving activity. Their application has positively impacted academic achievement, motivation, test anxiety, peer relationships, long-term retention, and learning experiences [3,4,5,6,7,8,9,10,11,12,13,14]. However, it remains uncertain whether two-stage examinations can be successfully accepted in East Asian universities, which, as previous studies have noted, traditionally emphasize a competitive academic environment. This study aimed to explore the experiences and perceptions of Korean university students regarding the implementation of two-stage examinations at their institution. With the recent emergence of generative artificial intelligence (AI), the limitations of traditional assessment methods have become more apparent, increasing interest in alternative assessments. As most two-stage examinations have been applied in Western countries, this study contributes to the global academic discourse on two-stage examinations in higher education.
Traditional individual assessments have been used to measure concepts and knowledge learned at specific points in time despite their limitations. These limitations include a lack of feedback, stress induction, test anxiety, and vulnerability to cheating. First, traditional assessments often fail to provide feedback [15,16]. Feedback is a key component of learning, and all students should have the opportunity to receive feedback and improve [17,18]. However, traditional assessments frequently provide only a final score without constructive feedback [19,20]. Second, traditional assessments can cause test anxiety and stress [21,22]. Test anxiety is the emotional difficulty students experience in evaluation situations, including worry about failure and physical reactions such as heart palpitations [23]. These reactions may stem from comparisons with others, low confidence, or doubts about their own abilities [24]. Studies have reported that students often experience emotional distress owing to the pressure of tests; in East Asian countries in particular, intense entrance examination competition has been correlated with high suicide rates [25]. Third, cheating is frequent in traditional assessments [26,27]. The rise of online classrooms, widespread smartphone use, and advances in AI technology have made it easier for students to cheat on assignments and tests [27,28,29]. These limitations are clear and highlight the need for alternative, improved assessment methods in higher education.
Two-stage examinations are an alternative assessment approach that promotes collaboration and learning while maintaining traditional individualized assessment [5]. The procedure comprises three steps: first, students take an individual test; second, they form groups of three or four to attempt a new test paper and complete their answers collaboratively; third, each student’s final score is calculated as the sum of their individual and team scores [5]. Studies have demonstrated that two-stage examinations have multiple positive effects, including enhanced long-term memory retention, improved content comprehension, increased learning motivation, greater student engagement, development of team-building and teamwork skills, improved communication skills, more positive peer relationships, and reduced test anxiety [4,30]. The efficacy of two-stage assessments can be attributed to two primary factors: the cooperative learning effect [31,32,33] and the just-in-time feedback effect [34]. The cooperative learning effect is reflected in team scores that are significantly higher than individual scores; notably, 50% of teams in which no member initially answered correctly ultimately reach the correct answer within three response opportunities [3]. Furthermore, two-stage assessment has been effective in increasing academic achievement for students across all levels [3,4,5,6]. When team members with diverse perspectives collaborate to solve a problem, the probability of arriving at a correct answer increases, and learning occurs throughout this process [35].
For the sustainable development of human society, fostering collaborative problem-solving skills in education is essential [1,2,36,37]. A research team proposed a framework of key competencies in sustainability, emphasizing interpersonal competence as a crucial component [38]. This competence is defined as the capacity to “motivate, enable, and facilitate collaborative and participatory sustainability research and problem solving” [38]. It encompasses a range of advanced skills, including collaboration, leadership, empathy, negotiation, pluralistic thinking, and effective communication. The development of these skills is essential not only for equipping students to navigate the complexities of sustainability challenges but also for fostering creative and holistic approaches to addressing these issues. In essence, interpersonal competence can be understood as a comprehensive form of collaborative problem-solving ability, specifically tailored to sustainability contexts [39].
However, in cultures with competitive admissions, such as Korea, the importance of individual assessment is widely accepted [25,40]. This is particularly true given that admission to a prestigious university is considered an honor for students, their families, and their schools [40]. Despite the positive impact of two-stage examinations on student learning, their implementation can be challenging if student experiences and perceptions are negative. College students may be particularly sensitive to assessment methods because their grades are closely tied to scholarships and employment opportunities. Consequently, any proposed change in assessment should involve discussions about the need for change, its rationale, and the specific measures to be implemented [41]. Assessment innovation should be guided by a comprehensive review of student and instructor perceptions following pilot implementation [42]. Additionally, instructors need proper guidance to introduce new assessment methods effectively [41]. This study explores student experiences and perceptions of two-stage assessments, providing a valuable reference for assessment innovation. The findings may have implications not only for Korean education but also for educators in other cultures where competitive academic environments are prevalent.
This study explored the experiences and perceptions of participants through semi-structured interviews after piloting two-stage assessments in three Korean university classes. Several methodological strategies were employed to ensure the reliability and rigor of this study [43]. First, the two-stage assessment was piloted in three different courses, and the study participants were limited to students who had participated in it, to ensure the authenticity of the results. Second, the semi-structured interview questionnaire was revised through expert review, and pilot interviews were conducted before the main interviews to ensure appropriateness. Third, the interview data were recorded and transcribed in the field, and peer reviews were conducted among the researchers during the coding and analysis processes to ensure the validity of the interpretation.
In this qualitative study, participants’ actual statements were used to describe the findings and convey the vividness of their experiences [43]. Triangulation was conducted to enhance the reliability and validity of the research: in addition to the interview data, other data, such as observations, were reviewed, and student survey sheets were analyzed. Furthermore, the results of the study were shared with the participants to confirm the appropriateness of the interpretation. These methodological approaches reduced bias and increased the rigor and reliability of the findings.
The research questions for this study were as follows: What experiences do Korean university students have while participating in two-stage assessments? How do Korean university students perceive two-stage assessments? This study contributes to sustainable development in several ways. First, it can help cultivate the human resources required to solve wicked problems such as climate change [2,38,39]. Many of the challenges faced by humanity can be overcome through cooperation. This study reports the case and results of a qualitative, in-depth exploration of university student perceptions after applying two-stage examinations to three courses, and it is expected to contribute to expanded discussions on collaborative problem-solving in higher education innovation. Second, this study contributes to a shift toward assessment for learning, the importance of which has been emphasized [44]. Two-stage assessment provides timely feedback and learning opportunities [30,34]. As this study is the first to apply two-stage examinations at a Korean university, it extends the discussion on assessment for learning in both cultural and methodological dimensions. Third, this study suggests how to apply two-stage assessments effectively from a student perspective, which will be of practical help to educators in the field as well as to two-stage assessment scholars [45].

2. Literature Review

2.1. Two-Stage Examinations

The application of two-stage examinations began with team-based learning, an instructional strategy proposed by Larry Michaelsen in the 1970s to facilitate meaningful learning in teams [46]. Team-based learning comprises three phases: preparation, readiness assurance, and application-focused exercises. Individual and team readiness assurance, which are components of the two-stage examinations, occur in the readiness assurance phase. Two-stage examinations have been used in diverse disciplines, such as physics, chemistry, biology, engineering, nursing, anatomy, and teacher education, to facilitate discussion and idea exchange in small groups within a testing environment, allowing students to learn from their peers and ultimately improve their learning [5,6,47,48,49]. Studies on cooperative learning [31,32] and just-in-time feedback [34] support the effectiveness of two-stage testing in improving learning.
Two-stage examinations combine traditional assessments with a collaborative element that enables students to learn from their peers, fill knowledge gaps, and reduce individual burdens [4,5,33]. They have been shown to positively impact academic achievement, conceptual understanding, retention, academic interest, and self-efficacy [3,4,5,6,7,8,9,10,11,12,13,14,30,50]. Moreover, two-stage examinations can reduce test anxiety [4,33]. Over 70% of participants have reported that two-stage testing helped them learn more than traditional assessments [5,33].

2.2. Student Perceptions on Two-Stage Examinations

Studies on assessment perceptions in higher education have examined the impact of assessment methods on learning. For example, the author of [51] argued that every assessment sends a message to students about what they need to learn and how to learn it. Education scholars view assessment as a powerful determinant of the hidden curriculum [52]. Considering the impact of assessment perceptions on student performance, it is important to study student perceptions of two-stage assessments [49,51].
Moreover, two-stage examination scholars have explored student perceptions, reporting both positive and negative responses based on surveys [4,33]. A previous study reported that student experience surveys indicated that two-stage assessments were conducive to learning and reduced stress [33]. Students experienced fair and collaborative teamwork, feedback, and error correction during the two-stage examination [33]. However, some students found that immediate feedback and realizing that they had made mistakes made them feel less confident and more anxious about their abilities [4,33]. Moreover, unequal contributions within groups and disagreements in the consensus process were issues. Despite these concerns, more than 70% of the students favored the two-stage assessment over traditional individual assessments. Another study surveyed student perceptions of two-stage examinations using open-ended long-answer questions [4]. Students reported several benefits, including improved test performance, immediate feedback, a better understanding of the material, less pressure, increased engagement, and reduced test anxiety. In that study, immediate feedback was rated both positively and negatively, highlighting the complexity of student perceptions. These studies have provided valuable insights into student perceptions of two-stage examinations [4,33]. However, they did not explore student experiences in depth or propose plans to improve those experiences. This limitation suggests the need for more comprehensive qualitative research to acquire a deeper understanding of student experiences and perceptions of two-stage assessments.

3. Methods

3.1. Participants

The participants for this study were recruited from students enrolled in three courses in which two-stage examinations were implemented, as presented in Table 1.
The recruitment process was as follows. First, the researcher asked lecturers and teaching assistants to inform students about the study and distribute an application questionnaire to potential participants. Second, telephonic interviews were conducted with applicants to identify suitable candidates. Third, participants were selected based on the following criteria: a minimum of three years of university experience and the ability to articulate their experiences in detail. The selection process yielded nine participants: five men and four women, comprising four third-year and five fourth-year students from five majors. This purposive sampling ensured that the selected participants had sufficient academic experience to provide in-depth insights into their perceptions of two-stage examinations.

3.2. Application Cases of Two-Stage Examinations

As presented in Table 2, two-stage examinations were administered as the midterm and final examinations for each of the three classes. Each examination comprised 10–20 questions, with the first stage lasting less than 60 min. The question types were true/false, multiple-choice, short answer, short essay, and programming. The assessment questions were the same for both the individual and team examinations. Teams comprised 4–5 members, balanced to account for experience, knowledge level, gender, grade level, and major, which can affect achievement [46]. The final score was a weighted combination of 70% individual and 30% team grades [45]. To ensure the rigor of the study, the researcher conducted an orientation for instructors, which included a brief introduction to the concept, purpose, and methods of the two-stage examinations, as well as team formation. Thereafter, one of the researchers assisted with the team formation survey for each course. All instructors provided orientation to the students regarding the two-stage examinations.
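To illustrate the weighting described above, a minimal worked example (assuming both stages are scored on a 100-point scale, which the text does not specify) expresses each student’s final score as

$$\text{Final score} = 0.7 \times S_{\text{individual}} + 0.3 \times S_{\text{team}}$$

For instance, a student who scores 80 individually and whose team scores 95 would receive 0.7 × 80 + 0.3 × 95 = 84.5. Because the weighted score exceeds the individual score only when the team score is higher, the team stage can raise a student’s grade without dominating it.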

3.3. Protocol Development

Semi-structured in-depth interview questions were developed to capture student experiences and perceptions of two-stage examinations. The interview protocol was based on a structured reflection process adapted from the reflective cycle [53], which supports structured debriefing. The interview questions were organized into four main phases: recalling, reliving, reinterpreting, and responding to the experience [53]. Examples of interview questions include the following: “Tell me about your activities in the previous two-stage examinations.”; “What were the roles of the team members, including yourself?”; “How do you reflect on your individual and team problem-solving experiences?”; “What was a positive or negative aspect of two-stage examinations, and why?”; “What do you need to continue developing your experience with the two-stage examination?” This approach ensured that the interviews comprehensively covered student experiences and reflected on their perceptions of the two-stage examinations.

3.4. Data Analysis

The data collected from the participants were analyzed using a combination of inductive approaches: reflexive thematic analysis [54] and the pattern-matching technique proposed in [55]. Reflexive thematic analysis was selected because it focuses on analyzing the data and inductively deriving a description of the participants’ shared experience [55]. The authors of this paper are faculty members with experience of team-based learning in science education, educational administration, and engineering; the data analysis was therefore examined through this lens. The pattern-matching technique [55] was applied to discuss the findings by comparing them with the patterns identified in previous studies.
The analysis comprised the following steps [54,55]: transcribing interview data into text, performing open coding to derive initial categories and themes, exploring relationships between codes, and grouping similar codes to identify overarching themes. Consensus among the researchers was achieved through an iterative review and discussion, and the themes and categories were refined. This involved identifying the expected patterns from previous research, comparing the observed patterns in our data with these expected patterns, and returning to the raw data to verify the matching patterns. This comprehensive analytical approach allowed for a thorough exploration of participants’ experiences [54] and perceptions while situating the findings within the existing literature on two-stage examinations [55].

3.5. Validity and Reliability

To enhance the validity and reliability of this study, the standards proposed in [43] were used to examine three dimensions: credibility, fittingness, and auditability. First, to enhance credibility, the researchers used a reflective cycle in the interview process [53], semi-structured open-ended questions in individual interviews, verbatim recording and transcription of the interviews, and inductive data analysis based on reflexive thematic analysis [54]. For potential implementers of this evaluation method, it is worth noting that reflexive thematic analysis involves six phases: familiarization with the data, initial coding, searching for themes, reviewing themes, defining and naming themes, and producing the report. This process allows for deep engagement with the data and ensures that the themes truly reflect participants’ experiences [54]. Moreover, member checking was employed to ensure an accurate representation of participants’ experiences and perceptions, and data were triangulated using multiple sources, such as interview data, observation notes, researcher notes, and the relevant literature [43]. Second, to ensure fittingness, participants were recruited from courses in which the two-stage examinations had been applied, and participants who could freely express their experiences in detail were selected. Third, to ensure auditability, the researchers engaged two external researchers with expertise in qualitative research to validate the findings and provide feedback [43]. Furthermore, to minimize bias, the researchers shared their experiences related to the research topic with the external researchers. Through these rigorous processes, the researchers endeavored to explore the participants’ experiences in real-life situations as comprehensively and accurately as possible, thereby enhancing the overall validity and reliability of the study.

4. Results

The current study identified four themes and eighteen subthemes of experiences and perceptions of Korean undergraduates regarding two-stage examinations, which are listed in Table 3.

4.1. Process

All participants in this study were new to two-stage examinations. During the orientation, they came to expect higher final scores because of the team phase. They found the experience of team problem-solving in the two-stage examination process intriguing. The classroom atmosphere became dynamic during the transition from the individual to the team phase, as students had to collaborate to submit their final answers.
Participant H:
This is the first time I have ever taken this type of test in college. I solved problems together in high school, but I never solved a problem individually and then solved it a second time as a team. I would say it was lively in a different way than usual. It was interesting for me to share my opinions and compromise, although it was a test.
In the two-stage examinations, participants answered the same questions twice, first individually and then as a team. Team answers were reached through consensus among teammates. During this process, team members shared their knowledge, problem-solving approaches, and learning contexts. The two-stage examinations encouraged students to be self-reflective. By reflecting on their knowledge and comparing it with others’ approaches, they became aware of their level of expertise. Participants shared not only their problem-solving methods but also their learning approaches with their teammates.
Participant E:
Team problem-solving made me realize that we all have different approaches, which I would not have noticed if I had not experienced a two-stage assessment. Each team member has a different way of thinking and solving problems. Through this discussion, I learned much about what I missed and what I should have studied further.
Participants sought an advantageous approach to the two-stage examinations. Anticipating differences in team scores, they strategized how to respond to various scenarios and allocated resources based on the question types. For example, time was allocated according to the level of difficulty because it was necessary to move rapidly through easy questions and discuss challenging questions intensively. During team evaluation, a more confident student would often lead, or everyone would take turns explaining.
Participant F:
I knew the team grade would be higher than the best individual grade. However, I was also worried. As our team’s grades increased, the other teams’ grades also increased. I contemplated strategies to make our team competitive.
The two-stage examination experience was new and intriguing, during which students shared their solutions with their teammates, reflected on their solutions, and reached a final answer through consensus, unlike in traditional assessments. Moreover, they had to strategize their time allocation, problem-solving approach, and role distribution to outperform the other teams.

4.2. Positive Aspects

Participants reported positive experiences with the two-stage assessment, citing several key benefits. All participants were optimistic about their performance because of the team phase; the opportunity to improve their scores through teamwork led to increased engagement and a sense of accomplishment. Students were relieved to improve their scores during the team phase. The second stage’s collaborative nature provided students with immediate feedback and new learning opportunities as they discussed and refined their answers with peers. This process facilitated deeper conceptual understanding and exposed students to diverse problem-solving strategies. Unlike traditional university exams, where feedback is often delayed or absent, two-stage exams allowed students to identify and correct misconceptions promptly.
Participant H:
I benefited from this test as I obtained higher points. When I took traditional tests, I did not know whether I had answered correctly. Two-stage examinations were beneficial because as soon as we solved a problem, we could compare our answers with those of our teammates and immediately check whether they were correct or incorrect. In most tests, one does not receive any feedback; therefore, one does not know the mistakes. However, with the two-stage examinations, one can receive feedback immediately, which helps to study.
Participants discovered that discussions with their peers helped them better understand concepts and exposed them to a range of problem-solving strategies. They also realized that the immediate feedback received during these discussions created new learning opportunities, allowing them to identify and correct misconceptions promptly.
Participant E:
When solving problems with my team members, I found it quite amazing how I could suddenly see the mistakes I had made. It really stuck in my mind. After solving the problems individually and then immediately working on them as a team, I could tell right away whether my answers were correct or not. It felt like I was working through an error correction notebook in real-time. The experience of immediately knowing if my initial thoughts were right or wrong was incredibly helpful.
Participants reported increased motivation for learning. The team assessment phase facilitated the exchange of study methods and learning strategies. When team members presented differing answers, they naturally shared their study approaches to persuade others, providing insights into peers’ learning techniques. This exchange of information on time investment, resources, and study strategies led to increased self-awareness among participants regarding their learning habits. Consequently, many students expressed a renewed motivation to enhance their study efforts, inspired by their peers’ dedication and diverse approaches to learning.
Participant C:
During the explanation process, I noticed that team members naturally revealed their study methods. As they justified their answers, they ended up describing how and to what extent they had studied the material. I realized I hadn’t studied as thoroughly, and I was amazed at how detailed some of my teammates’ study approaches were.
Unlike traditional assessments, the participants felt reassured that they had someone to trust and felt less anxious about the test. Participants anticipated that even questions they found confusing or did not fully understand could be resolved during the team phase. Interestingly, participants reported a paradoxical effect: while experiencing reduced test anxiety, they simultaneously expressed a stronger motivation to prepare more diligently for the tests. They attempted to contribute to the team’s success. Even students who were not normally proactive prepared more out of a sense of “responsibility” to not let the team down. As teammates shared their study strategies, the participants felt inspired to study harder in the future.
Participant H:
Having teammates allows me to rely on them. I am more relieved than anxious when my scores improve through team assessments.
Participants believed the two-stage assessment contributed to a collaborative learning culture because it required students to work together to achieve a common goal. In lecture-type courses, it is common for students to have limited opportunities to get to know their classmates; however, two-stage examinations require collaboration.
Participant F:
I usually do not pay much attention to my classmates in class. In a class that adopted the two-stage examination, I greeted my teammates whenever I encountered them. We worked together. We had discussions, and even an icebreaker. I believe that the two-stage assessment can help establish a culture of team-based learning.

4.3. Negative Aspects

In some cases, two-stage examinations, like any other type of group work, burdened some students, particularly introverted team members, because of the need to collaborate with teammates. Team activities were a psychological burden when no interactions with teammates had occurred prior to the team phase. Immediate feedback was helpful for learning; however, it placed a psychological strain on some students. When they realized that their answers were incorrect, they felt “stunned” and demoralized.
Participant F:
The first time we performed a team activity, it was awkward because we did not know each other very well. I felt psychological pressure because of the extensive task ahead of us, and I felt overwhelmed.
Participant H:
I answered incorrectly, and I was discouraged. I was confident when I solved it on my own. However, it was difficult to speak to other team members because I did not have sufficient knowledge.
Participants emphasized the importance of being active, not only for themselves but also for the entire team. If a student is passive, it affects the rest of the team and lowers overall morale. The two-stage examinations took longer than the individual assessments, particularly when students had to discuss the solution process, such as for algorithm questions. Extended test duration was identified as a negative aspect of two-stage examinations. However, most students did not consider this a major issue, as there were only two such tests: the midterm and the final.
Participant B:
We had to discuss the problem from the beginning to the end until we all agreed. Therefore, we ran out of time. As everyone had different ways of solving the problem, it took some time for them to explain and communicate with each other. However, I believe this helped us learn.

4.4. Action Items for Support

The participants suggested several approaches to support the effective implementation of two-stage examinations. First, debate training would be helpful. The team phase involves rapidly exchanging information and determining the best answer within a limited time, so communication skills are required to share information effectively. If teammates have different answers, they must persuade one another; therefore, debate training would be beneficial.
Participant C:
We didn’t have a lot of experience solving problems together, so I felt that it was difficult to know how to solve problems together. To do that well, we had to explain well and convince the other person. I think it would be beneficial if we could be trained in advance.
Second, providing an environment that supports team activities is important. Educators should consider settings conducive to teamwork, such as team rapport building and roundtables where team members can gather and discuss. If the course is not team-based, such as a lecture-style course, an orientation to the two-stage examinations and ice-breaking activities to build team rapport are essential before the examination is applied. A balanced team composition is also required to ensure fairness.
Third, more challenging questions should be used in two-stage examinations. When solving simple memorization questions, teams shared answers without discussing them together, whereas for more challenging problem-solving questions, teams shared concepts and approaches to solve the problem. Participants suggested that more challenging problem-solving and discussion questions would be appropriate for two-stage examinations.
Participant A:
My answers to the memorization questions were correct; however, I was not learning. I believe I was learning by working together on difficult problem-solving questions. Students would be more engaged with more difficult questions that have something to discuss than easier ones.
Fourth, participants suggested conducting peer evaluations to encourage active participation and to factor it into the final grade. They emphasized the negative impact of passive students on team morale and the need to measure team members’ contributions. This would help ensure that everyone plays a responsible role and feels comfortable voicing their opinions, and it would also promote fair evaluation.
Participant E:
Some of the team members were not able to explain because they were not confident, and they were just listening to the other team members explain. We need to do peer review to get them to actively participate. A more detailed peer assessment system can be implemented in this course to encourage active participation.

5. Discussion

This study aimed to explore the experiences and perceptions of Korean undergraduates who participated in two-stage examinations, using a qualitative approach. Four distinct themes and eighteen subthemes were identified. This section discusses the results in comparison with those of previous studies and their educational implications.

5.1. Comparisons to Previous Studies

The findings of this study reveal both similarities and differences with those of previous studies. First, consistent with prior studies, students reported positive perceptions, such as improved grades, enhanced discussion opportunities, immediate feedback, and reduced test anxiety [4,30,33]. However, a novel finding of this study was the identification of students’ strategic approaches. One possible explanation lies in methodological differences: previous studies predominantly employed surveys to explore topics such as mastery of knowledge, test anxiety, and format preference [4,33]. In contrast, this study used interviews to inductively derive subthemes, which may have facilitated the emergence of new subthemes such as students’ strategic approaches.
Beginning at the orientation, participants in this study demonstrated strategic approaches to the two-stage examinations, such as planning various scenarios to benefit their team, allocating time according to task difficulty, and distributing roles among team members. Studies such as [56] have described how team effectiveness is influenced by cognitive, emotional, and behavioral factors, including team members’ competencies, team mental models, and resource allocation. Future research could examine team member interactions and strategic approaches in more detail, drawing on team effectiveness research.
Second, in both the literature and the present study, some participants expressed negative perceptions of immediate feedback and the burden of teamwork. A few participants believed that it was not always pleasant to receive immediate feedback on their mistakes, and others found it burdensome to share their answers with teammates [4,31]. Psychological theories such as Self-Determination Theory (SDT) suggest that immediate feedback can influence learners’ sense of competence and autonomy. SDT posits that humans have innate needs for competence, autonomy, and relatedness, suggesting that feedback can significantly influence learners’ motivation. While some learners may initially perceive feedback as a threat to their autonomy, constructively delivered feedback can enhance their sense of competence and, ultimately, their intrinsic motivation [57]. Moreover, Feedback Intervention Theory (FIT) highlights that feedback can occasionally divert attention from task-related learning to self-focused evaluation [58]. This shift in focus may contribute to the negative reactions often observed among learners with perfectionist tendencies, as they may be more prone to critical self-evaluation [59]. In this study, a few high-achieving participants were more likely to be self-critical of their mistakes. Those with high levels of perfectionism or performance orientation often find it difficult to tolerate errors and tend to hide or avoid them [60,61]. Moreover, for Generation Z learners, who are used to studying online, face-to-face discussions with teammates can be intimidating [62]. During the orientation, instructors should emphasize the importance of fostering a sense of psychological safety within teams [46]. They can guide the creation of an environment where mistakes are welcomed and shared rather than judged and hidden, enabling learning to occur through constructive feedback [63,64].
Third, the participants mentioned the negative impact of passive participants on the team during the team phase. The majority of participants emphasized the importance of active participation by all team members. There are two possible explanations for these findings. First, there may have been several passive participants: owing to their lack of experience with two-stage examinations, students may not have been confident and may therefore have participated passively. Second, participants may have been particularly conscious of the impact of others on their scores. Hofstede’s theory posits that Eastern cultures exhibit stronger collectivist tendencies than Western cultures, potentially leading to significant differences in attitudes toward evaluation [65]. In Eastern societies, others’ perceptions and social evaluations exert a more profound influence on individual identity. Consequently, students from Eastern cultures may be more sensitive to how their performance outcomes are perceived, as these evaluations have a greater impact on their self-concept and social standing [65]. An Asian cultural background may make students more competitive and more likely to value outperforming others [25], making them more sensitive to the impact of passive participants on the team. Asian parents have traditionally been reported to be strict with their children, emphasizing academics [66,67]. The Confucian culture of Asia affects students’ academic achievement, test anxiety, and self-doubt [68]. This difference suggests that experiences and perceptions of two-stage assessments may vary across cultures [65]. As this study inductively derived themes from interviews, it could identify differences in perceptions that previous studies did not. Future research could investigate differences in student perceptions of two-stage assessments across cultures.

5.2. Educational Implications

This study demonstrated that a two-stage assessment can potentially overcome the limitations of traditional assessment methods. In higher education, assessment methods have been reported to significantly influence learning; assessment is a powerful determinant of the hidden curriculum [52]. In this study, participants reported that they worked individually and in teams, and that they shared, reflected on, and reached a consensus about the problem-solving process in a dynamic atmosphere. Participants perceived the two-stage assessment positively: it improved grades, promoted conceptual understanding, facilitated problem-solving, provided immediate feedback, reduced test anxiety, increased accountability, enhanced academic motivation, and fostered a collaborative culture. This suggests that two-stage examinations can address the limitations of traditional testing, such as limited feedback, test anxiety, stress, and cheating.
Two-stage assessments can transform traditional individual assessments into assessments for learning by providing immediate feedback. Students recognize the importance of feedback in assessments [51,69]. Students, particularly those in Asian cultures, face many psychological challenges in competitive assessment environments [60]. Two-stage assessments can help alleviate these challenges. Interestingly, the participants in this study suggested that two-stage examinations are applicable not only to colleges but also to K-12 education, across the sciences and humanities. Future pilot studies could explore the impact on students’ academic and psychological well-being across different levels and disciplines.
In this study, the participants proposed several strategies to enhance the efficacy of two-stage examinations: debate training, fostering a supportive team environment, developing challenging questions, and peer evaluation of participation. First, participants emphasized the necessity of debate training. Team activity in the second stage involves knowledge-sharing and mutual persuasion to reach an optimal solution within a limited time. Studies [70,71,72] have demonstrated that students perform better in guided group discussions, and implementing debate training for students with limited experience could enhance team performance [72]. Second, two-stage examinations require a supportive environment, as do other team-based activities [46]. Educators should consider various factors during orientation to ensure fair and safe team activities, including balanced team formation and rapport building [46]. Physical environments, such as chairs, tables, and whiteboards that facilitate face-to-face discussion, should also support collaboration [45]. Third, students expressed a preference for challenging questions in the team phase of the two-stage examination. While students typically select tasks based on their perceived likelihood of success [73], the participants in this study favored more demanding items for team problem-solving. This finding suggests that two-stage examinations encourage students to engage with complex problems. Such engagement with challenging, collaborative problem-solving tasks can potentially foster the skills necessary to address the intricate, multifaceted issues inherent in sustainability challenges. Fourth, the participants recognized the need to reflect on individual contributions through peer assessment, as active participation is crucial for successful teamwork. Incorporating peer evaluations into the final grading can enhance the perceived fairness of an assessment [45,46].

5.3. Limitations

This study had certain limitations. First, the participants were undergraduate students attending a private metropolitan university in Seoul, Korea. Therefore, the results cannot be generalized to Korean undergraduate students from diverse educational systems and cultural backgrounds. As an exploratory study, the results can serve as a reference for understanding perceptions of two-stage examinations among undergraduate students in Korean universities; however, further research may be necessary to validate the findings across diverse contexts.
Second, as a qualitative study, this study focused on a few pilot cases, providing an in-depth understanding of student perceptions; however, it is challenging to generalize the results. The findings are based on three courses and nine students, indicating that additional research may be required for a broader generalization. Moreover, as this was a qualitative analysis, the researcher’s perspective and interpretation may have influenced the findings. Consequently, while the results of this study can be used as a reference for the future implementation of a two-stage assessment, it is crucial to carefully consider various environmental factors and appropriately account for cultural and social contexts in subsequent studies.

6. Conclusions

Collaborative problem-solving is a critical competency for sustainable development and for addressing complex global challenges [1,2,36,37,38,39], and it can be fostered through well-designed two-stage examinations. Student perceptions of assessments significantly affect their approach to learning [49,50,51,52]. This study aims to contribute to educational innovation in universities, potentially catalyzing the development of the collaborative skills essential for sustainable human development [1,2].
This qualitative study piloted two-stage examinations, comprising individual and team evaluations, with Korean university students to explore their experiences and perceptions. The results indicate that the participants experienced a dynamic, collaborative problem-solving process, consistent with previous studies. Students positively perceived grade improvement, enhanced understanding of concepts, immediate feedback, and reduced test anxiety. However, a few participants reported psychological burdens associated with immediate feedback and teamwork challenges, in line with previous studies [4,33]. Notably, this study identified the following novel findings: strategic approaches to the process, the impact of passive students on team dynamics, and specific suggestions for effective implementation. These findings are discussed in relation to learners’ cultural backgrounds and research on team effectiveness, suggesting avenues for further theoretical exploration.
This study contributes to human resource development for a sustainable society and extends the discussion of two-stage assessments across geographical and methodological dimensions. Moreover, it advances innovations in assessment for learning. Considering that students’ evaluation perceptions significantly impact their approach to learning [52,74], this study aims to contribute to sustainable development through educational innovation by applying collaborative assessment methods to higher education and proposing effective implementation strategies.

Author Contributions

Conceptualization, H.J.; methodology, H.J.; validation, H.J., J.L. and J.R.; investigation, H.J. and J.R.; writing—original draft preparation, H.J.; writing—review and editing, H.J., J.R. and J.L.; supervision, H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the faculty research fund of Sejong University in 2023 (No. 20230431) and by the Korea Ministry of Environment (MOE) as ‘Graduate School specialized in Climate Change’.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and was approved by the Institutional Review Board of Sejong University (protocol code SU-2023-015-02 on 12 June 2023).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

The data are not publicly available due to ethical issues.

Acknowledgments

The authors would like to thank the students who participated in the interviews, the professors who provided the opportunity to apply the two-stage examinations, and the researchers who assisted in analyzing the data.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Khojasteh, D.; Davani, E.; Shamsipour, A.; Haghani, M.; Glamore, W. Climate change and COVID-19: Interdisciplinary perspectives from two global crises. Sci. Total Environ. 2022, 844, 157142. [Google Scholar] [CrossRef] [PubMed]
  2. A Wicked Problem: Controlling Global Climate Change. Available online: https://www.worldbank.org/en/news/feature/2014/09/30/a-wicked-problem-controlling-global-climate-change (accessed on 14 January 2024).
  3. Jang, H.; Lasry, N.; Miller, K.; Mazur, E. Collaborative exams: Cheating? Or learning? Am. J. Phys. 2017, 85, 223–227. [Google Scholar] [CrossRef]
  4. Rempel, B.P.; Dirks, M.B.; McGinitie, E.G. Two-stage testing reduces student-perceived exam anxiety in introductory chemistry. J. Chem. Educ. 2021, 98, 2527–2535. [Google Scholar] [CrossRef]
  5. Wieman, C.E.; Rieger, G.W.; Heiner, C.E. Physics exams that promote collaborative learning. Phys. Teach. 2014, 52, 51–53. [Google Scholar] [CrossRef]
  6. Gilley, B.H.; Clarkston, B. Collaborative testing: Evidence of learning in a controlled in-class study of undergraduate students. J. Coll. Sci. Teach. 2014, 43, 83–91. [Google Scholar] [CrossRef]
  7. Giuliodori, M.J.; Lujan, H.L.; DiCarlo, S.E. Collaborative group testing benefits high-and low-performing students. Adv. Physiol. Educ. 2008, 32, 274–278. [Google Scholar] [CrossRef]
  8. Leight, H.; Saunders, C.; Calkins, R.; Withers, M. Collaborative testing improves performance but not content retention in a large-enrollment introductory biology class. CBE Life Sci. Educ. 2012, 11, 392–401. [Google Scholar] [CrossRef]
  9. Meseke, C.A.; Nafziger, R.E.; Meseke, J.K. Student course performance and collaborative testing: A prospective follow-on study. J. Manip. Physiol. Ther. 2008, 31, 611–615. [Google Scholar] [CrossRef]
  10. Webb, N.M. Collaborative group versus individual assessment in mathematics: Processes and outcomes. Educ. Assess. 1993, 1, 131–152. [Google Scholar] [CrossRef]
  11. Yuretich, R.F.; Khan, S.A.; Leckie, R.M.; Clement, J.J. Active-learning methods to improve student performance and scientific interest in a large introductory oceanography course. J. Geosci. Educ. 2011, 49, 111–119. [Google Scholar] [CrossRef]
  12. Zipp, J.F. Learning by exams: The impact of two-stage cooperative tests. Teach. Sociol. 2007, 35, 62–76. [Google Scholar] [CrossRef]
  13. Meaders, C.L.; Vega, Y. Collaborative Two-Stage Exams Benefit Students in a Biology Laboratory Course. J. Microbiol. Biol. Educ. 2023, 24, e00138-22. [Google Scholar] [CrossRef] [PubMed]
  14. Prescott, W.A., Jr.; Maerten-Rivera, J.; Anadi, I.S.; Woodruff, A.E.; Fusco, N.M. Impact of Collaborative Testing on Academic Performance in Pharmacy Education. Am. J. Pharm. Educ. 2024, 88, 100738. [Google Scholar] [CrossRef] [PubMed]
  15. Sato, B.K.; Dinh-Dang, D.; Cruz-Hinojoza, E.; Denaro, K.; Hill, C.F.C.; Williams, A. The impact of instructor exam feedback on student understanding in a large-enrollment biology course. BioScience 2018, 68, 601–611. [Google Scholar] [CrossRef]
  16. Vojdanoska, M.; Cranney, J.; Newell, B.R. The testing effect: The role of feedback and collaboration in a tertiary classroom setting. Appl. Cogn. Psychol. 2010, 24, 1183–1195. [Google Scholar] [CrossRef]
  17. Wilson, A. Feedback as a transformative tool. In Marking Time: Leading and Managing the Development of Assessment in Higher Education; Coleman, K., Flood, A., Eds.; Common Ground: Champagne, IL, USA, 2013; pp. 193–200. [Google Scholar]
  18. Ahea, M.M.A.B.; Ahea, M.R.K.; Rahman, I. The Value and Effectiveness of Feedback in Improving Students’ Learning and Professionalizing Teaching in Higher Education. J. Educ. Pract. 2016, 7, 38–41. [Google Scholar]
  19. Fyfe, G. The final examination: A squandered opportunity for feedback to students or a poor use of time? In Proceedings of the ATN Assessment Conference 2010: Assessment, Sustainability, Diversity and Innovation, Sydney, Australia, 18–19 November 2010. [Google Scholar]
  20. Scoles, J.; Huxham, M.; McArthur, J. No longer exempt from good practice: Using exemplars to close the feedback gap for exams. Assess. Eval. High. Educ. 2013, 38, 631–645. [Google Scholar] [CrossRef]
  21. Chapell, M.S.; Blanding, Z.B.; Silverstein, M.E.; Takahashi, M.; Newman, B.; Gubi, A.; McCann, N. Test anxiety and academic performance in undergraduate and graduate students. J. Educ. Psychol. 2005, 97, 268–274. [Google Scholar] [CrossRef]
  22. DordiNejad, F.G.; Hakimi, H.; Ashouri, M.; Dehghani, M.; Zeinali, Z.; Daghighi, M.S.; Bahrami, N. On the relationship between test anxiety and academic performance. Procedia Soc. Behav. Sci. 2011, 15, 3774–3778. [Google Scholar] [CrossRef]
  23. Daly, A.L.; Chamberlain, S.; Spalding, V. Test anxiety, heart rate and performance in A-level French speaking mock exams: An exploratory study. Educ. Res. 2011, 53, 321–330. [Google Scholar] [CrossRef]
  24. Owens, M.; Stevenson, J.; Hadwin, J.A.; Norgate, R. When does anxiety help or hinder cognitive test performance? The role of working memory capacity. Br. J. Psychol. 2014, 105, 92–101. [Google Scholar] [CrossRef] [PubMed]
  25. Zeng, K.; Le Tendre, G. Adolescent suicide and academic competition in East Asia. Comp. Educ. Rev. 1998, 42, 513–528. [Google Scholar] [CrossRef]
  26. Newton, P.M. How common is commercial contract cheating in higher education and is it increasing? A systematic review. Front. Educ. 2018, 3, 67. [Google Scholar] [CrossRef]
  27. Lancaster, T.; Cotarlan, C. Contract cheating by STEM students through a file sharing website: A Covid-19 pandemic perspective. Int. J. Educ. Integr. 2021, 17, 3. [Google Scholar] [CrossRef]
  28. Harper, R.; Bretag, T.; Rundle, K. Detecting contract cheating: Examining the role of assessment type. High. Educ. Res. Dev. 2021, 40, 263–278. [Google Scholar] [CrossRef]
  29. Lee, V.R.; Pope, D.; Miles, S.; Zárate, R.C. Cheating in the age of generative AI: A high school survey study of cheating behaviors before and after the release of ChatGPT. Comput. Educ. Artif. Intell. 2024, 7, 100253. [Google Scholar] [CrossRef]
  30. Lee, T.R.C.; Pye, M.; Lilje, O.; Nguyen, H.D.; Hockey, S.; de Bruyn, M.; van den Berg, F.T. Two-stage Examinations in STEM: A Narrative Literature Review. Int. J. Innov. Sci. Math. Educ. 2022, 30, 73–90. [Google Scholar] [CrossRef]
  31. Herrmann, K. The impact of cooperative learning on student engagement: Results from an intervention. Act. Learn. High. Educ. 2013, 14, 175–187. [Google Scholar] [CrossRef]
  32. Johnson, D.W.; Johnson, R.T. Making cooperative learning work. Theory Pract. 1999, 38, 67–73. [Google Scholar] [CrossRef]
  33. Levy, D.; Svoronos, T.; Klinger, M. Two-stage examinations: Can examinations be more formative experiences? Act. Learn. High. Educ. 2023, 24, 79–94. [Google Scholar] [CrossRef]
  34. Gibbs, G.; Simpson, C. Conditions under which assessment supports students’ learning. Learn. Teach. High. Educ. 2005, 1, 3–31. [Google Scholar]
  35. Heller, P.; Keith, R.; Anderson, S. Teaching problem solving through cooperative grouping. Part 1: Group versus individual problem solving. Am. J. Phys. 1992, 60, 627–636. [Google Scholar] [CrossRef]
  36. Fiore, S.M.; Graesser, A.; Greiff, S.; Griffin, P.; Gong, B.; Kyllonen, P.; Massey, C.; O’Neil, H.; Pellegrino, J.; Rothman, R.; et al. Collaborative Problem Solving: Considerations for the National Assessment of Educational Progress; National Center for Education Statistics: Alexandria, VA, USA, 2017. [Google Scholar]
  37. Mo, J. Collaborative problem solving. In PISA in Focus; OECD Publishing: Paris, France, 2017. [Google Scholar]
  38. Wiek, A.; Withycombe, L.; Redman, C.L. Key competencies in sustainability: A reference framework for academic program development. Sustain. Sci. 2011, 6, 203–218. [Google Scholar] [CrossRef]
  39. Lambrechts, W.; Mulà, I.; Ceulemans, K.; Molderez, I.; Gaeremynck, V. The integration of competences for sustainable development in higher education: An analysis of bachelor programs in management. J. Clean. Prod. 2013, 48, 65–73. [Google Scholar] [CrossRef]
  40. Lim, W.; Yoon, H.; Bae, Y.; Kwon, O.N. The development of sociomathematical norms in the transition to tertiary exam-oriented individualistic mathematics education in an East Asian context. Educ. Stud. Math. 2023, 113, 57–78. [Google Scholar] [CrossRef]
  41. Gardner, J.; Harlen, W.; Hayward, L.; Stobart, G. Engaging and Empowering Teachers in Innovative Assessment Practice. In Assessment Reform in Education: Education in the Asia-Pacific Region: Issues, Concerns and Prospects; Berry, R., Adamson, B., Eds.; Springer: Dordrecht, The Netherlands, 2011; Volume 14, pp. 105–119. [Google Scholar]
  42. Cassano, R.; Costa, V.; Fornasari, T. An effective national evaluation system of schools for sustainable development: A comparative European analysis. Sustainability 2019, 11, 195. [Google Scholar] [CrossRef]
  43. Sandelowski, M. The problem of rigor in qualitative research. Adv. Nurs. Sci. 1986, 8, 27–37. [Google Scholar] [CrossRef]
  44. Black, P.; Wiliam, D. Assessment and classroom learning. Assess. Educ. 1998, 5, 7–74. [Google Scholar] [CrossRef]
  45. Jang, H. Introduction to Two-Stage Exams; Ssesmul: Seoul, Republic of Korea, 2023; pp. 37–66. [Google Scholar]
  46. Michaelsen, L.K.; Sweet, M. The Essential Elements of Team-Based Learning; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2008; pp. 7–27. [Google Scholar]
  47. Rieger, G.W.; Heiner, C.E. Examinations that support collaborative learning: The students’ perspective. J. Coll. Sci. Teach. 2014, 43, 41–47. [Google Scholar] [CrossRef]
  48. Shindler, J.V. “Greater than the Sum of the Parts?” Examining the Soundness of Collaborative Exams in Teacher Education Courses. Innov. High. Educ. 2004, 28, 273–283. [Google Scholar]
  49. Yu, B.; Tsiknis, G.; Allen, M. Turning exams into a learning experience. In Proceedings of the 41st ACM Technical Symposium on Computer Science Education, Milwaukee, WI, USA, 10 March 2010. [Google Scholar]
  50. Martin, D.; Friesen, E.; De Pau, A. Three heads are better than one: A mixed methods study examining collaborative versus traditional test-taking with nursing students. Nurse Educ. Today 2014, 34, 971–977. [Google Scholar] [CrossRef] [PubMed]
  51. Boud, D. Enhancing Learning through Self-Assessment; Kogan Page: London, UK, 1995. [Google Scholar]
  52. Sambell, K.; McDowell, L. The construction of the hidden curriculum: Messages and meanings in the assessment of student learning. Assess. Eval. High. Educ. 1998, 23, 391–402. [Google Scholar] [CrossRef]
  53. Gibbs, G. Learning by Doing: A Guide to Teaching and Learning Methods; Further Education Unit: Oxford, UK, 1998. [Google Scholar]
  54. Braun, V.; Clarke, V. One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qual. Res. Psychol. 2021, 18, 328–352. [Google Scholar] [CrossRef]
  55. Yin, R.K. Case Study Research: Design and Methods; Sage: Thousand Oaks, CA, USA, 2009; Volume 5. [Google Scholar]
  56. Kozlowski, S.W.; Ilgen, D.R. Enhancing the effectiveness of work groups and teams. Psychol. Sci. Public Interest 2006, 7, 77–124. [Google Scholar] [CrossRef] [PubMed]
  57. Ryan, R.M.; Deci, E.L. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am. Psychol. 2000, 55, 68–78. [Google Scholar] [CrossRef]
  58. Kluger, A.N.; DeNisi, A. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol. Bull. 1996, 119, 254. [Google Scholar] [CrossRef]
  59. Stoeber, J.; Otto, K. Positive conceptions of perfectionism: Approaches, evidence, challenges. Pers. Soc. Psychol. Rev. 2006, 10, 295–319. [Google Scholar] [CrossRef]
  60. Dweck, C.S.; Leggett, E.L. A social-cognitive approach to motivation and personality. Psychol. Rev. 1988, 95, 256. [Google Scholar] [CrossRef]
  61. Stoeber, J.; Rennert, D. Perfectionism in school teachers: Relations with stress appraisals, coping styles, and burnout. Anxiety Stress Coping 2008, 21, 37–53. [Google Scholar] [CrossRef]
  62. Cuambot, J.I. The Moderating Role of Social Networking Use on the Relationship of Subjective Loneliness and Psychological Well-Being among Generation Z Adults during the COVID-19 Pandemic. Psychol. Educ. Multidiscip. J. 2022, 5, 416–422. [Google Scholar]
  63. Van Dyck, C.; Frese, M.; Baer, M.; Sonnentag, S. Organizational error management culture and its impact on performance: A two-study replication. J. Appl. Psychol. 2005, 90, 1228. [Google Scholar] [CrossRef] [PubMed]
  64. Edmondson, A. Psychological safety and learning behavior in work teams. Adm. Sci. Q. 1999, 44, 350–383. [Google Scholar] [CrossRef]
  65. Hofstede, G. Cultural differences in teaching and learning. Int. J. Intercult. Relat. 1986, 10, 301–320. [Google Scholar] [CrossRef]
  66. Chao, R.K. Beyond parental control and authoritarian parenting style: Understanding Chinese parenting through the cultural notion of training. Child Dev. 1994, 65, 1111–1119. [Google Scholar] [CrossRef]
  67. Kim, S.Y.; Wang, Y.; Orozco-Lapray, D.; Shen, Y.; Murtuza, M. Does “tiger parenting” exist? Parenting profiles of Chinese Americans and adolescent developmental outcomes. Asian Am. J. Psychol. 2013, 4, 7–18. [Google Scholar] [CrossRef]
  68. Stankov, L. Unforgiving Confucian culture: A breeding ground for high academic achievement, test anxiety and self-doubt? Learn. Individ. Differ. 2010, 20, 555–563. [Google Scholar] [CrossRef]
  69. Drew, S. Perceptions of what helps learn and develop in education. Teach. High. Educ. 2001, 6, 309–331. [Google Scholar] [CrossRef]
  70. Rezaei, A. Groupwork in Active Learning Classrooms: Recommendations for Users. J. Learn. Spaces 2020, 9, 1–21. [Google Scholar]
  71. Hogan, K.; Nastasi, B.K.; Pressley, M. Discourse Patterns and Collaborative Scientific Reasoning in Peer and Teacher-Guided Discussions. Cogn. Instr. 1999, 17, 379–432. [Google Scholar] [CrossRef]
  72. Semlak, W.D.; Shields, D.C. The effect of debate training on students participating in the Bicentennial Youth Debates. J. Am. Forensic Assoc. 1977, 13, 192–196. [Google Scholar] [CrossRef]
  73. Wigfield, A.; Eccles, J.S. Expectancy–value theory of achievement motivation. Contemp. Educ. Psychol. 2000, 25, 68–81. [Google Scholar] [CrossRef] [PubMed]
  74. Struyven, K.; Dochy, F.; Janssens, S. Students’ perceptions about evaluation and assessment in higher education: A review. Assess. Eval. High. Educ. 2005, 30, 325–341. [Google Scholar] [CrossRef]
Table 1. Background Information of Participants.

ID | University Year | Gender | Course | Major
A | Senior | Male | B | Education
B | Junior | Female | A | History
C | Senior | Male | B | Education
D | Senior | Male | A | Business Administration
E | Junior | Male | B | Business Administration
F | Junior | Female | A | Education
G | Senior | Male | C | Data Science
H | Senior | Female | A | Hospitality Management
I | Junior | Female | A | Business Administration
Table 2. Two-Stage Examination Details in Three Courses.

Course | N | Number of Examinations Conducted | Number of Questions | Question Type 1 | Time in Minutes (Individual Stage/Team Stage) | Team Formation | Weight (Individual Stage/Team Stage)
A | 46 | 2 | 20 | MCQ, SAQ, Short essays | 30/30 | Balanced | 70/30
B | 19 | 2 | 15 | True/False, MCQ, SAQ | 40/30 | Balanced | 70/30
C | 15 | 2 | 10 | SAQ, Programming | 50/40 | Balanced | 70/30
1 MCQ (Multiple-Choice Questions), SAQ (Short Answer Questions).
Table 3. Themes and subthemes of experiences and perceptions of participants.

Theme (Number of Subthemes) | Subtheme | Previous Studies in Line with Our Findings
Process (4) | Active classroom | [5]
 | Double problem solving | [3,5,35]
 | Sharing, reflection, and consensus | [3,4,5,6,7,8,9,10,11,12,13,14,35]
 | Strategic approaches | -
Positive aspects (6) | Improved test performance | [3,4,5,6,7,8,9,10,11,12,13,14,30,35]
 | Learning opportunity | [3,35]
 | Instant feedback | [3,30,33,35]
 | Increased motivation | [12]
 | Decreased test anxiety | [4,33]
 | Collaborative culture | [4,33]
Negative aspects (4) | Burden of teamwork | [4,33]
 | Instant feedback | [4,33]
 | Passive participants | -
 | Extra test timing | -
Action items for support (4) | Debate training | -
 | Team-supportive environment | -
 | High-level questioning | -
 | Peer evaluation | -