1. Introduction
A common assumption among college and university professors is that students are able to decide how, and how much, they need to prepare to do well on an exam [1]. Often, study guides are made available to students to assist them in assessing how well they understood the course material, and review sessions are offered prior to a major test [2]. Yet, learners are ultimately entrusted with, and thus held responsible for, self-assessment of their preparedness. As students gauge what grade they expect, they determine where to focus their study efforts in order to perform well [3]. Decisions may vary from targeting particular contents to adopting strategies that promote specific types of learning (e.g., retrieval of newly acquired concepts or their use as problem-solving tools), but they are all consequential [4]. Professors’ pedagogical effectiveness is thus complemented by students’ test grade predictions and their confidence in those predictions, which, by conditioning study behavior as a self-regulatory activity, can affect their performance (test grade).
It is evident that the causal chain linking prediction to test preparation and grades is not equally beneficial to all students [5,6]. Many professors have encountered students who were disappointed by their test grade because they thought they had done better, who may not have studied enough to prepare because they thought they had a higher level of mastery of the material than they actually did, or who were perhaps overly confident in their abilities and mastery of the materials. To wit, if students are not in a position to accurately predict their expected grades and level of preparedness, they are not likely to engage in adequate test preparation [7]. How can we explain that some students are more successful in predicting their grades accurately and in identifying effective ways to prepare, while others are less so? It may be that some students have a general sense of self-efficacy, that is, confidence in their ability to solve a problem or complete a task [8]. At the same time, students may also vary in the way they assign responsibility for their performance. They may credit or blame their own competency or resolve, fault or acknowledge the contribution of professors to their learning, or find that their social environment (friends, family, etc.) is either a distraction or a powerful ally. Individual differences in the causal attribution habits that guide the assignment of responsibility for test performance are consequential [9]. For instance, if learners do not believe that they are in a position to affect their grades through preparation, they will also be unlikely to engage in effective test preparation activities. Thus, awareness of one’s knowledge, as well as habits involving self-efficacy and the attribution of agency to specific causal factors, can all influence academic success. Their influence is especially notable when students assess the extent to which they are prepared for an upcoming test, since self-assessment, views of human agency applied to academic endeavors, and confidence together guide test preparation.
Here, we explore several interrelated questions to further our understanding of the mechanisms underlying students’ test performance. We begin with the assumption that students vary in their ability to correctly estimate their test grade [5,10], and in the subjective confidence they place in their prediction [11,12,13]. We further assume that students may vary in the overall confidence they place in their ability to solve problems in everyday life (general self-efficacy [14]), and in their attribution of causality for the desirable and undesirable outcomes they may face [15]. Based on these assumptions, our study investigates four issues related to performance in a course’s final examination. First, we analyze whether students’ ability to accurately predict their grade and their subjective confidence in this prediction may account for their test grade. Second, we ask whether students at different levels of performance vary in their ability to accurately predict their performance, and if so, whether subjective confidence also differs. Third, we ask whether the accuracy and confidence of learners’ predictions are informed by self-efficacy beliefs and causal attribution habits, both of which serve as indices of motivation for test preparation. And fourth, we ask whether different causal attribution preferences contribute to self-efficacy. We analyze these questions using data from an introductory general education course at a large public university in the United States. Our findings suggest that students’ ability to predict their grades accurately varies significantly with their level of performance.
2. Students’ Self-Efficacy, Confidence, Causal Attribution Habits, and Test Grades
The key rationale for the current study rests on evidence that the accuracy of grade prediction and subjective confidence are related to academic performance; however, the nature of this relationship is a matter of dispute. Some scholars [7,16] suggest that students who are unlikely to perform well on a test tend to overestimate their performance before the test and to be confident in the validity of their inflated predictions. According to this view, poor performers are “blissfully unaware” [17,18] of their deficiencies, thereby facing two hurdles simultaneously: deficient mastery that is difficult to overcome, and a lack of awareness of their gaps in knowledge or skills, which exacerbates the challenge of overcoming them. As such, poor performers are presumably under the spell of the “illusion of knowing” phenomenon [19]. Namely, they believe that knowledge has been attained when, in fact, knowledge acquisition has failed. Other scholars [13,20] suggest that poor performers are also likely to overestimate their future performance, but that they are aware of their knowledge gaps, as their inflated predictions are made with little subjective confidence. These students are assumed to be under the spell of the “optimism bias” [21,22], thereby envisioning a future much brighter than the one that is likely to be faced, perhaps as a way of softening the blow of an undesirable outcome. Clearly, some students are more confident in their test grade prediction than others. Subjective confidence makes a difference as to whether a prediction has an impact on pre-test activities and, ultimately, on performance. For example, students who expect a good grade but are not sure that this prediction is realistic may well think of strategies to enhance their competencies, while students who are certain that a “good grade” prediction is accurate may be less likely to do so. If students vary not just in the accuracy of their prediction, but also in their subjective confidence that the prediction made is valid, then a mismatch between subjective assessment and objective reality is only one side of the coin for poor performers. As a result, both factors must be considered to understand poor self-assessment. The obvious reason is that the understanding of the key mechanisms of flawed self-assessment held by professors, instructional staff, advisors, and counselors informs their adoption of particular remedial actions, and thus shapes instructional effectiveness [23,24,25,26].
In contrast to the controversy that has engulfed prediction accuracy and subjective confidence, learners’ ability to benefit from experience is an individual-difference factor that has, with a few exceptions, been largely neglected in the literature. On the one hand, if poor performers are truly “blissfully unaware” of their knowledge gaps, performance predictions made after completing a test may be expected to be as optimistic as those made before the test, without any change in subjective confidence. On the other hand, if poor performers are merely adopting an “optimistic outlook” prior to the test, predictions made after completing the test may be expected to be more realistic, because these learners are aware of their deficiencies and test experience has made such deficiencies an undeniable fact. The available evidence for either argument is not only scarce but also mixed. For instance, Hacker et al. [23] found no difference between poor performers’ pre-test and post-test predictions, but good performers showed an improvement in predicting their test grade, whereas Hacker, Bol, and Bahbahani [27] reported that although good performers exhibited greater prediction accuracy both before and after the test, neither good nor poor performers showed significant improvements in accuracy from before to after the test. Subjective confidence regarding the accuracy of the test prediction was not directly assessed in either study, though. Miller and Geraci [13], who measured subjective confidence before and after a final test, found a decline in confidence for all students after the test. In our study, we further examine whether students who vary in their attainment level can use test experience to improve the accuracy of their predictions and/or alter their subjective confidence.
Determining whether poor students suffer from the illusion of knowing or an optimism bias brings with it the question of whether students’ predictions of performance and related subjective confidence reflect individual differences in an underlying disposition, such as self-efficacy, or explanatory habits, such as causal attribution preferences. These psychological dimensions, whose role will be explained in the next paragraphs, can help us answer the rhetorical question that all engaged and devoted educators are bound to ask themselves when faced with students who perform poorly. Namely, why do students vary in their demonstrated mastery of course contents assessed through tests? Certainly, a variety of factors contributes to the academic performance of college students. Learners may differ in their academic preparation for a given class, motivation, aptitude, demands on their time that restrict the amount of effort they can commit to studying, knowledge and availability of academic support systems and resources, understanding of the requirements of a course, and so on. Yet, agreement exists that one important factor for all students is their ability to accurately assess the extent to which the demands of an upcoming test fit their preparation (i.e., prediction accuracy [23,27]). In fact, if students believe that they have mastered all required content and are well prepared to get a good grade without additional effort, they are unlikely to engage in extended study sessions to prepare for a test. To be clear, the overestimation of test grades can lead to unexpected underperformance, whose impact on future behavior depends on the particular explanation that students give for bad grades [28,29]. That is, the selected explanation may further discourage learners from exerting effort towards academic achievement, foster inertia, or even stimulate behavioral change. Students may attribute outcomes [30,31] to a variety of factors. Common dimensions are locus (internal/dispositional causes, such as ability and effort, or external/situational causes, such as the nature of the test or the quality of the instruction received) and stability (causes invariant over time, such as intelligence and systemic discrimination, or variable, such as mood and luck [32]). To wit, locus refers to whether learners believe an outcome is their responsibility or can be ascribed to other people or circumstances. Stability refers to the extent to which learners see permanence in the selected cause. For instance, if responsibility for a bad grade is attributed to factors internal to the learner as well as stable (e.g., “I am inept”), students may be discouraged and feel defeated. If responsibility is given to internal but transient factors (e.g., “I did not put enough effort into preparing for this test”), behavioral change may be pursued in future endeavors. However, if factors external to the learner are deemed responsible (e.g., the professor did not explain the test materials well, most students did not do well because the test was unnecessarily difficult, the professor was not willing to give out good grades, etc.), inertia may take root, supported by the conviction that circumstances can change in the future. Alternatively, if responsibility for a good grade is attributed to factors internal to the learner as well as stable (e.g., “I am bright”), students may feel elated but put little effort into preparing for the next test, thereby increasing the chances of being blindsided by future adverse outcomes. If responsibility is given to internal but transient factors (e.g., “I worked hard”), past activities may be replicated in forthcoming endeavors. However, if factors external to the learner are deemed responsible (e.g., the professor did explain the test materials well, the professor gave an easy test, etc.), inertia may take hold of the learners, based on the conviction that favorable circumstances will replicate in the future.
Clearly, if students feel that the subject tested aligns with their abilities, the test is fair, the professor is a competent instructor, a helpful study group can be joined, and so on, but the grade is nonetheless poor, changes in effort are likely to be reflected in future pre-exam activities. Changes can be quantitative (e.g., more time devoted to studying), qualitative (e.g., studying activities are restructured with attention to specific study materials and the required type of information processing), or both. Yet, the extent to which some explanations will be preferred over others may depend on the learners’ overall belief in their capabilities to solve the problems that life may present, also known as general self-efficacy. In essence, self-efficacy is confidence in oneself. It embodies the extent to which learners trust that their actions can enhance the likelihood that problems can be solved or tasks can be completed [8]. According to Abramson, Seligman, and Teasdale [33], a sense of failure, as opposed to a sense of confidence, entails attributing undesirable outcomes to oneself. Learners have been reported to vary in their degree of self-efficacy, with learners ranking high on self-efficacy ascribing positive outcomes to internal factors, such as their own abilities and efforts, while blaming negative outcomes on external factors. In contrast, low-efficacy learners have been found to hold internal causes, such as their own deficiencies, responsible for failures (see [34,35]). However, Hirschy and Morris [36] and Camgoz, Tektas, and Metin [37] did not find evidence linking self-efficacy and causal attribution habits. Thus, in the present study, we explore the relationship between general self-efficacy beliefs and causal attribution preferences to determine whether the accuracy of grade prediction, subjective confidence in the prediction made, and the explanation for the test outcome may be traced to the learner’s general self-efficacy beliefs.
3. The Study: Materials and Methods
3.1. Research Questions
The present study is guided by four interrelated questions. First, is the accuracy of students’ prediction of their test grade a better predictor of their actual grade, or is their confidence in the accuracy of their prediction more important? Second, do students at different performance levels vary in their abilities to predict test outcomes? Does their confidence in such predictions also differ? Third, if differences exist in predictive abilities, can these differences be traced to self-efficacy or causal attribution habits (both serving as indices of motivation)? And fourth, do causal attribution habits differentially contribute to self-efficacy?
3.2. Data and Methods
We analyzed these questions using a sample of undergraduate students enrolled in an introductory course, American National Government, during the spring 2020 semester. The course contributes to the university’s general education program (GEP) and is attended by students from across the university. A general education course enrolls students with diverse academic backgrounds, thereby giving us the ability to sample students who were not linked to a specific major. In most of the extant literature, students enrolled in major-specific courses were sampled (e.g., cognitive psychology, educational psychology, etc.), which makes it difficult to generalize findings. Furthermore, the number of students taking the course every semester was large enough to allow for meaningful statistical analysis; students were unlikely to participate in more than one course section that was part of the study; and the learning objectives for the course were consistent across all sections, even though four different faculty members taught the 11 sections of the course. The University of Central Florida is a public university that is required to abide by specific learning objectives set by the state of Florida’s Department of Education. Conformity with these objectives is thus required for the selected course to comply with the state’s civic education guidelines. All sections were included in the study.
We employed a convenience sample. Students over the age of 18 had the opportunity to volunteer to participate in the study and were offered extra credit for participation (alternative extra credit options existed). Students who consented to participate completed a series of surveys distributed online through their respective course website. All subjects gave their informed consent for inclusion before they participated in the study. Participation complied with the guidelines of the Office for Human Research Protections of the U.S. Department of Health and Human Services and with the American Psychological Association’s ethical standards in the treatment of human subjects. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Institutional Review Board of the University of Central Florida (STUDY00001215).
Out of a total of 621 students enrolled in all sections of the course, 258 students completed the study and constituted our sample. Compared with those who did not participate, participants could be considered a sample of students who were motivated to pass the course, as they deemed the extra credit activity sufficiently enticing. The sample reflected the diverse student body of the university, including men (28.3%) and women (71.7%) from a variety of racial and ethnic backgrounds: Asian (6.2%), Black (10.5%), Hispanic (31.4%), White (42.2%), and multi-racial (1.9%); 7% identified as international and 0.8% did not identify. Students, whose mean age was 19.68 years, were distributed across all four years of college (first-year, sophomore, junior, senior). Their mean GPA was 3.35 (range: 1.81–4.00). These data were provided by the University of Central Florida’s Office of Institutional Knowledge Management.
At the beginning of the semester, we collected baseline data to establish students’ self-efficacy and causal attribution habits. We used the New General Self-Efficacy (NGSE) inventory [14,38] to measure students’ self-reported general confidence in their ability to deal with a variety of life challenges. Students rated their agreement with eight statements regarding general confidence on a scale from strongly disagree (0) to strongly agree (4). The survey statements include, for example, “I will be able to achieve most of the goals that I have set for myself”; “When facing difficult tasks, I am certain that I will accomplish them”; and “I will be able to successfully overcome many challenges.” The inventory was selected for its demonstrated psychometric properties, such as internal consistency, unidimensionality, and stability [38]. In the present study, the reliability of the inventory, as measured by Cronbach’s alpha, was α = 0.865.
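For readers who wish to compute the same reliability index for their own instrument, the calculation behind Cronbach’s alpha is simple to reproduce. The sketch below is illustrative only and uses made-up ratings rather than our survey responses; the function name and the example matrix are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item ratings."""
    k = items.shape[1]                          # number of items (8 for the NGSE)
    item_vars = items.var(axis=0, ddof=1)       # variance of each item across respondents
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical agreement ratings (0 = strongly disagree, 4 = strongly agree)
# for five respondents on the eight NGSE statements.
ratings = np.array([
    [3, 4, 3, 3, 4, 3, 3, 4],
    [2, 2, 3, 2, 2, 3, 2, 2],
    [4, 4, 4, 3, 4, 4, 4, 4],
    [1, 2, 1, 2, 1, 2, 1, 1],
    [3, 3, 3, 3, 3, 3, 3, 3],
])
print(round(cronbach_alpha(ratings), 3))
```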
A second survey addressed students’ causal attributions for their best and worst grades [39]. The survey asked students to think back to the time when they received their best grade or worst grade on a test and rate seven potential causes for the best and worst grade, respectively, on a scale from 0 (not at all) to 6 (entirely): ability, effort, test (either difficulty or ease), luck, family’s influence, instructor’s influence, and friends’ influence.
In all course sections, professors administered a final exam at the end of the semester. We asked students to predict their final exam grade and to rate their confidence in this prediction shortly before the final exam was administered. Subjective confidence was expressed on a scale from 0 (not at all confident) to 4 (extremely confident). We again asked them to predict their final exam grade and rate their confidence in their prediction after they completed the exam but before they had received their grade. Students were asked to produce grade estimates twice to determine the extent to which test experience could puncture unrealistic expectations. That is, we were interested in assessing whether students differed in their ability to use test experience to improve the accuracy of their predictions and/or alter their confidence in the predictions made.
Given the questions motivating this field study, the primary variables of interest were students’ test performance (grade), the (in)accuracy of the grade estimates made before and after the final test, and subjective confidence in the estimates made before and after the final exam, as well as students’ self-efficacy beliefs and causal attribution habits.
4. Results
We conducted separate analyses for each of our research questions. We considered results at the 0.05 level as significant. In these analyses, we arranged students’ final test grades into four categories, corresponding to performance levels: Poor (0–69%), C (70–79%), B (80–89%), and A (90–100%). This grouping corresponds to the commonly assigned grades of A, B, and C, with D and F grades combined into a single “unsuccessful” category. In higher education, student “success” is commonly defined as A, B, or C grades, while D and F grades are generally interpreted as failing grades since they are below the minimum grade point average (commonly a 2.0, equivalent to a C grade) required for graduating from college. Note that W (withdrawal) grades are also commonly grouped with the D and F grades in the “unsuccessful” category (see [40]). Our study only analyzes test grades rather than course grades, and all students in the study completed the final exam and finished the course. We operationally defined inaccuracy of grade estimation as the discrepancy between the estimated performance and the actual performance. That is, a student’s estimation of the final grade could be an underestimation (−3, −2, or −1), an overestimation (+3, +2, or +1), or accurate (0). For instance, if a student predicted a B and obtained a C, the inaccuracy score would be +1, which reflects an overestimation of one level. In contrast, if a student predicted a B and obtained an A, the inaccuracy score would be −1, which reflects an underestimation of one level.
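As an illustration of this scoring scheme only, the band assignment and the inaccuracy score can be expressed in a few lines of Python; the cut-offs follow the categories just described, while the function names and example values are hypothetical.

```python
LEVELS = {"Poor": 0, "C": 1, "B": 2, "A": 3}

def performance_level(score: float) -> str:
    """Map a percentage test score to the four performance bands used in the study."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "Poor"

def inaccuracy(predicted_level: str, actual_score: float) -> int:
    """Positive values = overestimation, negative = underestimation, 0 = accurate."""
    return LEVELS[predicted_level] - LEVELS[performance_level(actual_score)]

print(inaccuracy("B", 75))   # predicted B, scored a C -> +1 (overestimation by one level)
print(inaccuracy("B", 93))   # predicted B, scored an A -> -1 (underestimation by one level)
```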
4.1. Can Prediction Inaccuracy or Confidence Forecast Performance?
Test performance relies on self-regulatory activities. An important aspect of such activities is the learners’ ability to assess their preparation prior to a test. If students vary in the accuracy of their test grade predictions and in their subjective confidence in those predictions, do these factors help explain their test grades? We conducted regression analyses with the inaccuracy of grade estimation and subjective confidence ratings as the predictors and final test performance as the outcome variable (see Table 1). Both before and after the test, prediction inaccuracy and subjective confidence contributed to performance, but differently. Performance declined as overestimation increased: the more students overestimated their expected grade, the lower their grade was. In contrast, performance declined along with subjective confidence: the lower students’ confidence in their predictions, the lower their test grade. The results for estimates and confidence ratings made after the test mirrored this pattern. The next analysis focuses more closely on these relationships.
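To make the analysis concrete, a regression of this form could be fitted as sketched below. This is not the study’s analysis script; the file and column names are hypothetical placeholders, and statsmodels is simply one convenient tool for ordinary least squares.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data file: one row per student, with the final test grade,
# estimation (in)accuracy, and subjective confidence measured before and after the test.
df = pd.read_csv("final_exam_survey.csv")

# Separate models for the pre-test and post-test measures, mirroring the structure of Table 1.
pre_model = smf.ols("test_grade ~ inaccuracy_pre + confidence_pre", data=df).fit()
post_model = smf.ols("test_grade ~ inaccuracy_post + confidence_post", data=df).fit()

print(pre_model.summary())
print(post_model.summary())
```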
4.2. Do Students at Different Performance Levels Vary in Their Ability to Predict Test Outcomes and in Their Subjective Confidence in Such Predictions?
We computed separate two-way analyses of variance on inaccuracy of estimation and on subjective confidence with performance level (poor, C, B, and A) and timing of assessment (before and after the test) as the independent variables. For estimation inaccuracy, we found a main effect of performance level, F(3, 254) = 108.87, MSE = 0.614, p < 0.001, ηp² = 0.563, indicating that as performance dropped, overestimation increased. There was also a main effect of timing of assessment, F(1, 254) = 14.52, MSE = 0.204, p < 0.001, ηp² = 0.054, indicating an overall shift towards more conservative estimates from before to after the test. The interaction term failed to reach significance, though (F = 2.59, ns). For subjective confidence, we found a main effect of performance level, F(3, 254) = 13.30, MSE = 1.20, p < 0.001, ηp² = 0.136, indicating that as performance dropped, confidence declined, and a main effect of timing of the confidence assessment, F(1, 254) = 21.54, MSE = 0.545, p < 0.001, ηp² = 0.078, illustrating greater confidence after the test. However, a significant interaction, F(3, 254) = 4.73, MSE = 0.545, p = 0.003, ηp² = 0.053, suggested that subjective confidence did not increase evenly across all performance levels.
Table 2 displays descriptive statistics.
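A mixed-design ANOVA of this kind (performance level as a between-subjects factor, timing as a within-subjects factor) can be reproduced with standard statistical packages. The sketch below uses the pingouin library on hypothetical long-format data; the file and column names are placeholders, not the study’s actual variables.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: two rows per student, one for each timing (before/after),
# with columns student_id, performance_level (Poor/C/B/A), timing, inaccuracy, confidence.
long_df = pd.read_csv("predictions_long.csv")

for dv in ("inaccuracy", "confidence"):
    aov = pg.mixed_anova(data=long_df, dv=dv, within="timing",
                         subject="student_id", between="performance_level")
    print(dv)
    print(aov[["Source", "F", "p-unc", "np2"]])  # F ratios, uncorrected p, partial eta squared
```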
To investigate these patterns further, tests of simple effects, subjected to the Bonferroni correction to control for experiment-wise alpha, were computed on estimation inaccuracies and subjective confidence separately for before and after the test. To determine whether students at different performance levels were able to take advantage of the experience of completing the exam, tests of simple effects also compared estimation inaccuracies and subjective confidence before and after the test (corrected alpha: 0.005).
We found a significant and consistent shift from overestimation to underestimation as performance increased. Significant differences between all contiguous levels, both before the test, ts ≥ 4.03, p < 0.001, and after the test, ts ≥ 3.80, p < 0.001, underlined this shift. That is, students at higher performance levels tended to underestimate their expected grades, while students at lower performance levels tended to overestimate them. However, subjective confidence did not change uniformly with performance level. Before the test, poor performers were less confident in their predictions than A-performers, t = 3.38, p = 0.001. After the test, poor performers were less confident in their estimates than both B-performers and A-performers, whereas C-performers and B-performers were less confident than A-performers, ts ≥ 3.03, p ≤ 0.001. Interestingly, only A-performers became more accurate in their predictions after the test, t = 5.92, p < 0.001. Both A- and B-performers improved their subjective confidence in the predictions made after the test, ts ≥ 3.41, p ≤ 0.001.
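The follow-up contrasts can be approximated with ordinary t tests evaluated against a Bonferroni-corrected threshold. The sketch below, again based on the hypothetical long-format file used above, compares contiguous performance levels on estimation inaccuracy; it illustrates the logic rather than reproducing the exact simple-effects procedure.

```python
import pandas as pd
from scipy import stats

long_df = pd.read_csv("predictions_long.csv")  # same hypothetical file as above
levels = ["Poor", "C", "B", "A"]
alpha_corrected = 0.005  # Bonferroni-corrected threshold used in the text

for timing in ("before", "after"):
    sub = long_df[long_df["timing"] == timing]
    for lower, upper in zip(levels, levels[1:]):  # contiguous performance levels
        x = sub.loc[sub["performance_level"] == lower, "inaccuracy"]
        y = sub.loc[sub["performance_level"] == upper, "inaccuracy"]
        t, p = stats.ttest_ind(x, y)
        print(f"{timing}: {lower} vs {upper}: t = {t:.2f}, significant = {p < alpha_corrected}")
```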
Taken together, the findings of these analyses yield an unmistakable pattern of individual differences linked to performance level. They indicate that students at a lower level of performance were less successful in accurately predicting their grades even after they took the test than students at higher levels of performance, but they were also less confident in their predictions than their peers who scored higher grades. The predictions and subjective confidence of poor performers did not benefit from the experience of taking the test. A-performers, instead, became more accurate as well as confident in their predictions after the test.
4.3. Can Differences in Predictive Abilities Be Traced to Self-Efficacy or Causal Attribution Habits?
To answer this question, we conducted regression analyses with causal attribution ratings and self-efficacy serving as the predictors and inaccuracy of estimation and subjective confidence before and after the test serving as the outcome variables. In these analyses (see Table 3 and Table 4), conducted separately for each outcome variable, self-efficacy was the main contributor to students’ estimation errors and subjective confidence. In addition, when explanations for good grades were considered, effort (an internal, variable cause) positively contributed to subjective confidence both before and after the test. In contrast, when explanations for bad grades were considered before the test, luck (an external, variable cause) was found to increase the inaccuracy of estimation, whereas ability was found to decrease confidence. The analysis regarding bad grades did not reveal a contribution of causal factors beyond self-efficacy after the test.
4.4. Do Causal Attribution Habits Differentially Contribute to Self-Efficacy?
To answer this question, we conducted regression analyses with causal attribution ratings as the predictors and self-efficacy as the outcome variable (see Table 5). We found that when explanations for good grades were considered, ability and luck both contributed to self-efficacy, but quite differently. Explanations involving one’s ability increased self-efficacy, whereas explanations involving luck decreased it. When bad grades were considered, explanations involving ability were linked to a decline in self-efficacy. That is, when students explained their good grades with their own abilities, they also displayed a higher level of self-efficacy; on the other hand, when students thought that their good grades were due to luck, an external, variable cause, their sense of self-efficacy was weakened. And, perhaps unsurprisingly, if students attributed their poor grades to their abilities, their sense of self-efficacy was low.
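A regression of this kind, with the seven attribution ratings predicting the NGSE score, could be specified as in the sketch below; the analogous models for Section 4.3 simply add self-efficacy to the predictors and swap the outcome variable. The file and column names are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical baseline-survey file: NGSE score plus the seven attribution ratings (0-6)
# given for the best ("good") grade; a parallel model can be fitted for the worst grade.
df = pd.read_csv("baseline_surveys.csv")

model = smf.ols(
    "self_efficacy ~ ability_good + effort_good + test_good + luck_good"
    " + family_good + instructor_good + friends_good",
    data=df,
).fit()
print(model.params)  # the sign of each coefficient indicates the direction of the contribution
```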
5. Discussion
Do students who do better on an exam differ from students who do less well in their ability to confidently predict their test grade? And what factors do students believe account for their performance? These questions are significant. If students cannot predict their performance correctly, and are not confident that their predictions are correct, they are less likely to engage in behaviors that maximize their chances of a good grade. Furthermore, if students believe that their role in attaining a certain grade is partial at best because factors outside their control are the primary drivers of their test grade, inertia rather than targeted studying is the likely consequence. Thus, understanding whether students are able to accurately and confidently predict a test grade, as well as what they believe their role in test preparation to be, may help educators guide students’ test preparation activities.
Our findings indicate that learners at a high level of attainment differ systematically and significantly from learners at lower levels. While students with good grades tend to underestimate their performance and are fairly confident in their predictions, their peers with lower grades tend to overestimate their performance but with less confidence. Having completed the exam does not substantially change these patterns. While the highest-performing students improve the accuracy of their test predictions, other students fail to take advantage of the experience of having completed the test.
Our findings are consistent with the notion that estimation is a complex process executed under a certain degree of uncertainty regarding the specific demands of the test and the extent to which the knowledge and skills that a learner possesses match such demands [16,41,42]. They are also consistent with earlier findings that poor performers are less accurate and confident in their predictions [13]. Thus, poor performers are not “blissfully ignorant” of their weaknesses, as is implied by the illusion of knowing phenomenon [16,17]. On the contrary, their predictions resemble expressions of hope, which may be used to shield, albeit temporarily, their self-image from the bad news they are about to encounter [43]. In this context, hope remains dysfunctional, though, as it may prevent learners from capitalizing on feedback and recalibrating their actions. In fact, in our study, there was no evidence that poor performers took advantage of the information gained from taking the final test to improve the accuracy of their predictions.
To understand potential underlying reasons for these findings, we probed the role of self-efficacy and causal attribution. Our findings suggest that self-efficacy contributes substantially to students’ inaccurate predictions of their exam grades as well as to their confidence in the predictions made. Avhustiuk et al. [44], however, found that students’ general self-efficacy increased with the accuracy of self-assessment, whereas Al Kuhayli et al. [45] did not find any relationship. It has been suggested that hope and self-efficacy are overlapping constructs [46]. Thus, it is not surprising that a “can do” attitude may have two sides, serving either as a propeller of realistic forecasts or as a supporter of unsubstantiated optimism. The particular conditions under which either side emerges, or neither does, are unclear at present. Beyond self-efficacy, students who credit good test results to effort are also more confident in their predictions. In contrast, students who think that luck plays a role in their undesirable performance demonstrate diminished prediction accuracy before the test; if they think that their abilities are to blame, their subjective confidence also decreases. These findings suggest that the interplay between causal attribution habits and self-efficacy can determine the impact of an outcome (a grade) on the learner’s current and future course of action. The exact nature of this interplay should be further explored in future research.
Despite the significance of these findings, limitations exist. Our research relies on surveys completed by 258 students enrolled in a general education course on American Government at a large public university in the United States. While students came from majors across the university and were from diverse backgrounds, they nonetheless constituted a convenience sample. While we assume that those students who completed the surveys were motivated to do well in the class, we do not know whether they differed systematically from those students who did not participate. Further research may include different classes and institutions of higher education to see whether our findings can be replicated on a broader scale.
Regardless, our initial results provide clues for professors and advisors interested in helping students succeed. For example, if students who perform poorly are not able to accurately and confidently predict their exam results, and if they believe that external factors are responsible for their performance, they are less likely to engage in behaviors that maximize their chances of success. To address this obstacle, professors can design low-stakes assessments that help students accurately understand their mastery of the material. Appropriate feedback may also help them identify where they need to focus their energy to study for the test [47]. Repeated practice with formative assessments that explain how they are doing and how they can improve may help them feel that they have a role to play in doing well rather than believing that external factors are primarily responsible for their grades.
As many students struggle to succeed in higher education, understanding why some students fail to engage in behaviors that increase their chances of success is critical [5,47]. This study adds to an existing body of literature demonstrating that it is not necessarily the material taught in college, or the particular pedagogy used, but perhaps students’ beliefs about themselves and their inability to correctly predict their performance that play a critical role in their academic success [13,45].