Article

Students’ Self-Efficacy, Causal Attribution Habits and Test Grades

by Kerstin Hamann 1,*, Maura A. E. Pilotti 2 and Bruce M. Wilson 1

1 School of Politics, Security, and International Affairs, University of Central Florida, Orlando, FL 32816, USA
2 College of Sciences and Human Studies, Prince Mohammad Bin Fahd University, Al Khobar 31952, Saudi Arabia
* Author to whom correspondence should be addressed.
Educ. Sci. 2020, 10(9), 231; https://doi.org/10.3390/educsci10090231
Submission received: 9 August 2020 / Revised: 21 August 2020 / Accepted: 26 August 2020 / Published: 2 September 2020

Abstract

Why do students vary in their performance on exams? It may be that their test preparation is insufficient because they overestimate their anticipated grade. Our study investigates four issues related to performance on a final examination. First, we analyze whether students’ ability to accurately predict their grade and their subjective confidence in this prediction may account for their grade. Second, we ask whether students at different levels of performance vary in their ability to accurately predict their grade, and if so, whether subjective confidence also differs. Third, we ask whether the accuracy and confidence of learners’ predictions are conditioned by self-efficacy beliefs and causal attribution habits, which serve as indices of motivation for test preparation. Fourth, we ask whether different causal attribution preferences contribute to self-efficacy. We use statistical analysis of data from a general education course at a large public university in the United States. Our results indicate that poor performers’ overestimates are likely to be wishful thinking as they are expressed with low subjective confidence. Self-efficacy is a significant contributor to the inaccuracy of students’ predicted grades and subjective confidence in such predictions. Professors’ understanding of learners’ forecasting mechanisms informs strategies devoted to academic success.

1. Introduction

A common assumption among college and university professors is that students are able to decide how, and how much, they need to prepare to do well on an exam [1]. Often, study guides are made available to students to assist them in assessing how well they understood the course material, and review sessions are offered prior to a major test [2]. Yet, learners are ultimately entrusted with, and thus held responsible for, self-assessment of their preparedness. As students gauge what grade they expect, they determine where to focus their study efforts in order to perform well [3]. Decisions may vary from targeting particular contents to adopting strategies that promote specific types of learning (e.g., retrieval of newly acquired concepts or their use as problem-solving tools), but they are all consequential [4]. Professors’ pedagogical effectiveness thus interacts with students’ test grade predictions and their confidence in those predictions, which, by conditioning study behavior as a self-regulatory activity, can affect performance (the test grade).
It is evident that the causal chain linking prediction to test preparation and grades does not benefit all students equally [5,6]. Many professors have encountered students who were disappointed by their test grade because they thought they had done better, who may not have studied enough because they thought they had a higher level of mastery of the material than they actually did, or who were perhaps overly confident in their abilities and mastery of the materials. To wit, if students are not in a position to accurately predict their expected grades and level of preparedness, they are not likely to engage in adequate test preparation [7]. How can we explain that some students are more successful in predicting their grades accurately and in identifying effective ways to prepare, while others are less so? It may be that some students have a general sense of self-efficacy, that is, confidence in their ability to solve a problem or complete a task [8]. At the same time, students may also vary in the way they assign responsibility for their performance. They may credit or blame their own competency or resolve, fault or acknowledge the contribution of professors to their learning, or find that their social environment (friends, family, etc.) is either a distraction or a powerful ally. Individual differences in the causal attribution habits that guide the assignment of responsibility for test performance are consequential [9]. For instance, if learners do not believe that they are in a position to affect their grades through preparation, they will also be unlikely to engage in effective test preparation activities. Thus, awareness of one’s knowledge, as well as habits involving self-efficacy and the attribution of agency to specific causal factors, can all influence academic success. Their influence is especially notable when students assess the extent to which they are prepared for an upcoming test, since self-assessment, views of human agency applied to academic endeavors, and confidence jointly guide test preparation.
Here, we explore several interrelated questions to further our understanding of the mechanisms underlying students’ test performance. We begin with the assumption that students vary in their ability to correctly estimate their test grade [5,10], and in the subjective confidence they place in their prediction [11,12,13]. We further assume that students may vary in the overall confidence they place in their ability to solve problems in everyday life (general self-efficacy [14]), and in their attribution of causality for the desirable and undesirable outcomes they may face [15]. Based on these assumptions, our study investigates four issues related to performance in a course’s final examination. First, we analyze whether students’ ability to accurately predict their grade and their subjective confidence in this prediction may account for their test grade. Second, we ask whether students at different levels of performance vary in their ability to accurately predict their performance, and if so, whether subjective confidence also differs. Third, we ask whether the accuracy and confidence of learners’ predictions are informed by self-efficacy beliefs and causal attribution habits, both of which serve as indices of motivation for test preparation. And fourth, we ask whether different causal attribution preferences contribute to self-efficacy. We analyze these questions using data from an introductory general education course at a large public university in the United States. Our findings suggest that students’ ability to predict their grades accurately varies significantly with their level of performance.

2. Students’ Self-Efficacy, Confidence, Causal Attribution Habits, and Test Grades

The key rationale for the current study rests on evidence that the accuracy of grade prediction and subjective confidence are related to academic performance; however, the nature of this relationship is a matter of dispute. Some scholars [7,16] suggest that students who are unlikely to perform well on a test tend to overestimate their performance before the test and to be confident in the validity of their inflated predictions. According to this view, poor performers are “blissfully unaware” [17,18] of their deficiencies, thereby facing two hurdles simultaneously: deficient mastery that is difficult to overcome, and a lack of awareness of their gaps in knowledge or skills, which exacerbates the challenge of overcoming them. As such, poor performers are presumably under the spell of the “illusion of knowing” phenomenon [19]. Namely, they believe that knowledge has been attained when, in fact, knowledge acquisition has failed. Other scholars [13,20] suggest that poor performers are also likely to overestimate their future performance, but that they are aware of their knowledge gaps, as their inflated predictions are made with little subjective confidence. These students are assumed to be under the spell of the “optimism bias” [21,22], thereby envisioning a future much brighter than the one they are likely to face, perhaps as a way of softening the blow of an undesirable outcome. Clearly, some students are more confident in their test grade prediction than others. Subjective confidence makes a difference as to whether a prediction has an impact on pre-test activities and, ultimately, on performance. For example, students who expect a good grade but are not sure that this prediction is realistic may well think of strategies to enhance their competencies, while students who are certain that a “good grade” prediction is accurate may be less likely to do so. If students vary not just in the accuracy of their prediction, but also in their subjective confidence that the prediction made is valid, a mismatch between subjective assessment and objective reality is only one side of the coin in the hands of poor performers. As a result, both factors must be considered to understand poor self-assessment. The reason is that how professors, instructional staff, advisors, and counselors understand the key mechanisms of flawed self-assessment informs their adoption of particular remedial actions, and thus shapes instructional effectiveness [23,24,25,26].
In contrast to the controversy that has engulfed prediction accuracy and subjective confidence, learners’ ability to benefit from experience is an individual-difference factor that has, with a few exceptions, been largely neglected in the literature. On the one hand, if poor performers are truly “blissfully unaware” of their knowledge gaps, performance predictions made after completing a test may be expected to be as optimistic as those made before the test, without any change in subjective confidence. On the other hand, if poor performers are merely adopting an “optimistic outlook” prior to the test, predictions made after completing the test may be expected to be more realistic because these learners are aware of their deficiencies and the test experience has made such deficiencies an undeniable fact. The available evidence for either argument is not only scarce but also mixed. For instance, Hacker et al. [23] found no difference between poor performers’ predictions before and after the test, whereas good performers showed an improvement in predicting their test grade; Hacker, Bol, and Bahbahani [27] reported that although good performers exhibited greater prediction accuracy both before and after the test, neither good nor poor performers yielded significant improvements in accuracy from before to after the test. Subjective confidence regarding the accuracy of the test prediction was not directly assessed in either study, though. Miller and Geraci [13], who measured subjective confidence before and after a final test, found a decline in confidence for all students after the test. In our study, we further examine whether students who vary in their attainment level can use test experience to improve the accuracy of their predictions and/or alter their subjective confidence.
Determining whether poor students suffer from the illusion of knowing or an optimism bias brings with it the question of whether students’ predictions of performance and related subjective confidence reflect individual differences in an underlying disposition, such as self-efficacy, or explanatory habits, such as causal attribution preferences. These psychological dimensions, whose role will be explained in the next paragraphs, can help us answer the rhetorical question that all engaged and devoted educators are bound to ask themselves when faced with students who perform poorly. Namely, why do students vary in their demonstrated mastery of course contents assessed through tests? Certainly, a variety of factors contributes to the academic performance of college students. Learners may differ in their academic preparation for a given class, motivation, aptitude, demands on their time that restrict the amount of effort they can commit to studying, knowledge and availability of academic support systems and resources, understanding of the requirements of a course, and so on. Yet, agreement exists that one important factor for all students is their ability to accurately assess the extent to which the demands of an upcoming test fit their preparation (i.e., prediction accuracy [23,27]). In fact, if students believe that they have mastered all required content and are well prepared to get a good grade without additional effort, they are unlikely to engage in extended study sessions to prepare for a test. To be clear, the overestimation of test grades can lead to unexpected underperformance whose impact on future behavior depends on the particular explanation that students give to bad grades [28,29]. That is, the selected explanation may further discourage learners from exerting efforts towards academic achievement, foster inertia, or even stimulate behavioral change. Students may attribute outcomes [30,31] to a variety of factors. Common dimensions are locus (internal/dispositional causes, such as ability and effort, or external/situational causes, such as the nature of the test or the quality of the instruction received) and stability (causes invariant over time, such as intelligence and systemic discrimination, or variable, such as mood and luck [32]). To wit, locus refers to whether learners believe an outcome is their responsibility or can be ascribed to other people or circumstances. Stability refers to the extent to which learners see permanence in the selected cause. For instance, if responsibility for a bad grade is attributed to factors internal to the learner as well as stable (e.g., “I am inept”), students may be discouraged and feel defeated. If responsibility is given to internal but transient factors (e.g., “I did not put enough effort into preparing for this test”), behavioral change may be pursued in future endeavors. However, if factors external to the learner are deemed responsible (e.g., the professor did not explain the test materials well, most students did not do well because the test was unnecessarily difficult, the professor was not willing to give out good grades, etc.), inertia may take root supported by the conviction that circumstances can change in the future. Alternatively, if responsibility for a good grade is attributed to factors internal to the learner as well as stable (e.g., “I am bright”), students may feel elated but will put little effort into preparing for the next test, thereby enhancing the chances of being blindsided by future adverse outcomes. 
If responsibility is given to internal but transient factors (e.g., “I worked hard”), past activities may be replicated in forthcoming endeavors. However, if factors external to the learner are deemed responsible (e.g., the professor did explain the test materials well, the professor gave an easy test, etc.), inertia may take hold of the learners based on the conviction that favorable circumstances will replicate in the future.
Clearly, if students feel that the subject tested aligns with their abilities, the test is fair, the professor is a competent instructor, a helpful study group can be joined, and so on, but the grade is nonetheless poor, changes in the exerted effort are likely to be reflected in future pre-exam activities. Changes can be quantitative (e.g., more time devoted to studying), qualitative (e.g., studying activities are restructured with attention to specific study materials and the required type of information processing), or both. Yet, the extent to which some explanations will be preferred over others may depend on the learners’ overall belief in their capabilities to solve problems that life may present, also known as general self-efficacy. In essence, self-efficacy is confidence in oneself. It embodies the extent to which learners trust that their actions can enhance the likelihood that problems can be solved or tasks can be completed [8]. According to Abramson, Seligman, and Teasdale [33], a sense of failure, as opposed to a sense of confidence, entails attributing undesirable outcomes to oneself. Learners have been reported to vary in their degree of self-efficacy, with those ranking high on self-efficacy ascribing positive outcomes to internal factors, such as their own abilities and efforts, while blaming negative outcomes on external factors. In contrast, low-efficacy learners have been found to hold internal causes, such as their own deficiencies, responsible for failures (see [34,35]). However, Hirschy and Morris [36] and Camgoz, Tektas, and Metin [37] did not find evidence linking self-efficacy and causal attribution habits. Thus, in the present study, we explore the relationship between general self-efficacy beliefs and causal attribution preferences to determine whether the accuracy of grade prediction, subjective confidence in the prediction made, and the explanation for the test outcome may be traced to the learner’s general self-efficacy beliefs.

3. The Study: Materials and Methods

3.1. Research Questions

The present study is guided by four interrelated questions. First, is the accuracy of students’ prediction of their test grade a better predictor of their actual grade, or is their confidence in the accuracy of their prediction more important? Second, do students at different performance levels vary in their abilities to predict test outcomes? Does their confidence in such predictions also differ? Third, if differences exist in predictive abilities, can these differences be traced to self-efficacy or causal attribution habits (both serving as indices of motivation)? And fourth, do causal attribution habits differentially contribute to self-efficacy?

3.2. Data and Methods

We analyzed these questions using a sample of undergraduate students enrolled in an introductory course, American National Government, during the spring 2020 semester. The course contributes to the university’s general education program (GEP) and is attended by students from across the university. A general education course enrolls students with diverse academic backgrounds, thereby allowing us to sample students who were not linked to a specific major. Most of the extant literature samples students enrolled in major-specific courses (e.g., cognitive psychology, educational psychology, etc.), which makes it difficult to generalize findings. Furthermore, the number of students taking the course every semester was large enough to allow for meaningful statistical analysis; students were unlikely to participate in more than one course section that was part of the study; and the learning objectives for the course were consistent across all sections, even though four different faculty members taught the 11 sections of the course. The University of Central Florida is a public university that is required to abide by specific learning objectives set by the state of Florida’s Department of Education. Conformity with these objectives is thus required for the selected course to comply with the state’s civic education guidelines. All sections were included in the study.
We employed a convenience sample. Students over the age of 18 had the opportunity to volunteer to participate in the study and were offered extra credit for participation (alternative extra credit options existed). Students who consented to participate completed a series of surveys distributed online through their respective course website. All subjects gave their informed consent for inclusion before they participated in the study. Participation complied with the guidelines of the Office for Human Research Protections of the U.S. Department of Health and Human Services and with the American Psychological Association’s ethical standards in the treatment of human subjects. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Institutional Review Board of the University of Central Florida (STUDY00001215).
Out of a total of 621 students enrolled in all sections of the course, 258 completed the study and constituted our sample. Compared with those who did not participate, participants could be considered a sample of students who were motivated to pass the course, as they deemed the extra credit activity sufficiently enticing. The sample reflected the diverse student body of the university, including men (28.3%) and women (71.7%) from a variety of racial and ethnic backgrounds: Asian (6.2%), Black (10.5%), Hispanic (31.4%), White (42.2%), and multi-racial (1.9%); 7% identified as international students, and 0.8% did not identify. Students, whose mean age was 19.68 years, were distributed across all four years of college (first-year, sophomore, junior, senior). Their mean GPA was 3.35 (range: 1.81–4.00). These data were provided by the University of Central Florida’s Office of Institutional Knowledge Management.
At the beginning of the semester, we collected baseline data to establish students’ self-efficacy and causal attribution habits. We used the New General Self-Efficacy (NGSE) inventory [14,38] to measure students’ self-reported general confidence in their ability to deal with a variety of life challenges. Students rated their agreement with eight statements regarding general confidence on a scale from strongly disagree (0) to strongly agree (4). The survey statements include, for example, “I will be able to achieve most of the goals that I have set for myself”; “When facing difficult tasks, I am certain that I will accomplish them”; “I will be able to successfully overcome many challenges.” The inventory was selected for its demonstrated psychometric properties, such as internal consistency, unidimensionality, and stability [38]. In the present study, the reliability of the inventory, as measured by Cronbach’s alpha, α, was 0.865.
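For readers who want to make the reliability check concrete, the sketch below shows one standard way to compute Cronbach’s alpha for an eight-item, 0–4 scale such as the NGSE. The ratings matrix and variable names are hypothetical; only the scale format (258 respondents, eight items, 0–4 response options) comes from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix:
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]                          # number of items (8 for the NGSE)
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 258 respondents x 8 NGSE items on a 0-4 agreement scale.
rng = np.random.default_rng(0)
ratings = rng.integers(0, 5, size=(258, 8)).astype(float)
print(round(cronbach_alpha(ratings), 3))  # the study reports alpha = 0.865 for its real data
```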
A second survey addressed students’ causal attributions for their best and worst grades [39]. The survey asked students to think back to the time when they received their best grade or worst grade on a test and rank seven potential causes for the best and worst grade, respectively, on a scale from 0 (not at all) to 6 (entirely): ability, effort, test (either difficulty or ease), luck, family’s influence, instructor’s influence, and friends’ influence.
In all course sections, professors administered a final exam at the end of the semester. Shortly before the final exam was administered, we asked students to predict their final exam grade and to rate their confidence in this prediction. Subjective confidence was expressed on a scale from 0 (not at all confident) to 4 (extremely confident). We again asked them to predict their final exam grade and rate their confidence in their prediction after they completed the exam but before they had received their grade. Students were asked to produce grade estimates twice to determine the extent to which test experience could puncture unrealistic expectations. That is, we were interested in assessing whether students differed in their ability to use test experience to improve the accuracy of their predictions and/or alter their confidence in the predictions made.
Given the questions motivating this field study, the primary variables of interest were students’ test performance (grade), the (in)accuracy of their grade estimates before and after the final test, and their subjective confidence in the estimates made before and after the final exam, as well as their self-efficacy beliefs and causal attribution habits.

4. Results

We conducted separate analyses for each of our research questions. We considered results at the 0.05 level as significant. In these analyses, we arranged students’ final test grades into four categories, corresponding to performance levels: Poor (0–69%), C (70–79%), B (80–89%), and A (90–100%). These categories correspond to the commonly assigned grades of A, B, and C, with D and F grades combined into one category of “unsuccessful” grades. In higher education, student “success” is commonly defined as A, B, or C grades, while D and F grades are generally interpreted as failing grades since they are below the minimum grade point average (commonly a 2.0, equivalent to a C grade) required for graduating from college. Note that W (withdrawal) grades are also commonly grouped with the D and F grades in the “unsuccessful” category (see [40]). Our study only analyzes test grades rather than course grades, and all students in the study completed the final exam and finished the course. We operationally defined inaccuracy of grade estimation as the discrepancy between the estimated performance and the actual performance. That is, a student’s estimate of the final grade could be an underestimation (−3, −2, or −1), an overestimation (+3, +2, or +1), or accurate (0). For instance, if a student predicted a B and obtained a C, the inaccuracy score would be +1, which reflects an overestimation of 1 level. In contrast, if a student predicted a B and obtained an A, the inaccuracy score would be −1, which reflects an underestimation of 1 level.
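The scoring scheme just described is easy to make concrete. The short sketch below, with hypothetical function names of our own, maps a percentage grade to the four performance levels and computes the inaccuracy score as the predicted level minus the obtained level, so that overestimation is positive.

```python
def performance_level(percent: float) -> int:
    """Map a final-test percentage to the study's levels:
    0 = Poor (0-69%), 1 = C (70-79%), 2 = B (80-89%), 3 = A (90-100%)."""
    if percent < 70:
        return 0
    if percent < 80:
        return 1
    if percent < 90:
        return 2
    return 3

def inaccuracy(predicted_level: int, obtained_level: int) -> int:
    """Predicted minus obtained level; range -3..+3, positive = overestimation."""
    return predicted_level - obtained_level

# Worked examples from the text: predicting a B (level 2) but earning a C (75%)
# gives +1 (overestimation); predicting a B but earning an A (95%) gives -1.
assert inaccuracy(2, performance_level(75)) == +1
assert inaccuracy(2, performance_level(95)) == -1
```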

4.1. Can Prediction Inaccuracy or Confidence Forecast Performance?

Test performance relies on self-regulatory activities. An important aspect of such activities is the learners’ ability to gauge their preparation prior to a test. If students vary in the accuracy of their test grade predictions and in their subjective confidence in those predictions, do these factors help explain their test grades? We conducted regression analyses with the inaccuracy of grade estimation and subjective confidence ratings as the predictors and final test performance as the outcome variable (see Table 1). Both before and after the test, prediction inaccuracy and subjective confidence contributed to performance, but differently. Performance declined as overestimation increased. That is, the more students overestimated their expected grade, the lower their grade was. In contrast, performance declined along with subjective confidence: the lower students’ confidence in their predictions was, the lower their test grade was. The results for estimates and confidence ratings made after the test mirrored this pattern. The next analysis focuses more closely on these relationships.
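A regression of the form reported in Table 1 can be sketched with the statsmodels library as follows. The toy data frame and column names below are ours, for illustration only, not the study’s data.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical rows: final-test grade (%), pre-test prediction inaccuracy (-3..+3),
# and pre-test subjective confidence (0-4) for a handful of students.
df = pd.DataFrame({
    "grade":             [58, 72, 85, 93, 77, 88, 65, 91],
    "inaccuracy_before": [ 2,  1,  0, -1,  1,  0,  2, -1],
    "confidence_before": [ 1,  2,  2,  3,  2,  3,  1,  3],
})

X = sm.add_constant(df[["inaccuracy_before", "confidence_before"]])
fit = sm.OLS(df["grade"], X).fit()
print(fit.params)   # unstandardized B coefficients, as in Table 1
print(fit.pvalues)  # significance of each predictor
```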

4.2. Do Students at Different Performance Levels Vary in Their Ability to Predict Test Outcomes and in Their Subjective Confidence in Such Predictions?

We computed separate two-way analyses of variance on inaccuracy of estimation and on subjective confidence with performance level (poor, C, B, and A) and timing of assessment (before and after the test) as the independent variables. For estimation inaccuracy, we found a main effect of performance level, F(3, 254) = 108.87, MSE = 0.614, p < 0.001, ηp2 = 0.563, indicating that as performance dropped, overestimation increased. There was also a main effect of timing of assessment, F(1, 254) = 14.52, MSE = 0.204, p < 0.001, ηp2 = 0.054, indicating an overall shift towards more conservative estimates from before to after the test. The interaction term failed to reach significance, though (F = 2.59, ns). For subjective confidence, we found a main effect of performance level, F(3, 254) = 13.30, MSE = 1.20, p < 0.001, ηp2 = 0.136, indicating that as performance dropped, confidence declined, and a main effect of timing of the confidence assessment, F(1, 254) = 21.54, MSE = 0.545, p < 0.001, ηp2 = 0.078, illustrating greater confidence after the test. However, a significant interaction, F(3, 254) = 4.73, MSE = 0.545, p = 0.003, ηp2 = 0.053, suggested that subjective confidence did not increase evenly across performance levels. Table 2 displays descriptive statistics.
To investigate these patterns further, tests of simple effects, submitted to the Bonferroni correction to control for experiment-wise alpha, were computed on estimation inaccuracies and subjective confidence separately for before and after the test. To determine whether students at different performance levels were able to take advantage of the experience of completing the exam, tests of simple effects also compared estimation inaccuracies and subjective confidence before and after the test (corrected alpha: 0.005).
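As an illustration, a mixed-design ANOVA of this kind (performance level between subjects, timing of assessment within subjects) can be run with the pingouin library, as sketched below on simulated data. The pg.mixed_anova and pg.pairwise_tests calls are our choice of tooling under stated assumptions, not necessarily what the authors used.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Simulated long-format data: one row per student per assessment time.
rng = np.random.default_rng(1)
levels = ["Poor", "C", "B", "A"]
base = {"Poor": 2, "C": 1, "B": 0, "A": -1}   # rough group means, for illustration only
rows = []
for sid in range(40):                          # 10 hypothetical students per level
    level = levels[sid % 4]
    for timing in ["before", "after"]:
        rows.append({"student": sid, "level": level, "timing": timing,
                     "inaccuracy": base[level] + int(rng.integers(-1, 2))})
df = pd.DataFrame(rows)

# Mixed ANOVA: timing is the within-subject factor, performance level the
# between-subjects factor; np2 is the partial eta-squared reported in the text.
aov = pg.mixed_anova(data=df, dv="inaccuracy", within="timing",
                     between="level", subject="student")
print(aov[["Source", "F", "p-unc", "np2"]])

# Bonferroni-corrected pairwise follow-ups, analogous to the simple-effects tests
# (in pingouin versions before 0.5.3, this function is named pairwise_ttests).
posthoc = pg.pairwise_tests(data=df, dv="inaccuracy", within="timing",
                            between="level", subject="student", padjust="bonf")
print(posthoc[["Contrast", "A", "B", "T", "p-corr"]])
```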
We found a significant and consistent shift from overestimation to underestimation as performance increased. Significant differences between all contiguous levels, both before the test, ts ≥ 4.03, p < 0.001, and after the test, ts ≥ 3.80, p < 0.001, underlined the shift. That is, students at higher performance levels tended to underestimate their expected grades, while students at lower performance levels tended to overestimate their grades. However, subjective confidence did not change uniformly with performance level. Before the test, poor performers were less confident in their predictions than A-performers, t = 3.38, p = 0.001. After the test, poor performers were less confident in their estimates than both B-performers and A-performers, whereas C-performers and B-performers were less confident than A-performers, ts ≥ 3.03, p ≤ 0.001. Interestingly, only A-performers became more accurate in their predictions after the test, t = 5.92, p < 0.001. Both A- and B-performers improved their subjective confidence in the predictions made after the test, ts ≥ 3.41, p ≤ 0.001.
Taken together, the findings of these analyses yield an unmistakable pattern of individual differences linked to performance level. They indicate that students at a lower level of performance were less successful in accurately predicting their grades even after they took the test than students at higher levels of performance, but they were also less confident in their predictions than their peers who scored higher grades. The predictions and subjective confidence of poor performers did not benefit from the experience of taking the test. A-performers, instead, became more accurate as well as confident in their predictions after the test.

4.3. Can Differences in Predictive Abilities Be Traced to Self-Efficacy or Causal Attribution Habits?

To answer this question, we conducted regression analyses with causal attribution ratings and self-efficacy serving as the predictors and inaccuracy of estimation and subjective confidence before and after the test serving as the outcome variables. In these analyses (see Table 3 and Table 4), conducted separately for each outcome variable, self-efficacy was the main contributor to students’ errors of estimation and to their subjective confidence. In addition, when explanations for good grades were considered, effort (an internal variable cause) positively contributed to subjective confidence both before and after the test. In contrast, when explanations for bad grades were considered before the test, luck (an external variable cause) was found to increase the inaccuracy of estimation, whereas ability was found to decrease confidence. After the test, the analysis regarding bad grades did not reveal any contribution of causal factors beyond self-efficacy.
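Because the four analyses in Tables 3 and 4 share one template (eight predictors, one outcome each), they can be scripted compactly, and the same template extends to Table 5’s model with self-efficacy as the outcome. The sketch below uses simulated data and our own column names, offered only as an assumed reconstruction of the analysis pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 258  # the study's sample size
predictors = ["ability", "effort", "test", "luck",
              "family", "instructor", "friends", "self_efficacy"]
outcomes = ["inaccuracy_before", "confidence_before",
            "inaccuracy_after", "confidence_after"]

# Simulated data: attribution ratings on 0-6, self-efficacy on 0-4, and the
# four outcome variables from the pre- and post-test surveys.
df = pd.DataFrame(rng.uniform(0, 6, size=(n, len(predictors))), columns=predictors)
df["self_efficacy"] = rng.uniform(0, 4, size=n)
for y in outcomes:
    df[y] = rng.normal(size=n)

# One OLS regression per outcome variable, mirroring Tables 3 and 4.
X = sm.add_constant(df[predictors])
for y in outcomes:
    fit = sm.OLS(df[y], X).fit()
    print(y, "self-efficacy B =", round(float(fit.params["self_efficacy"]), 3),
          "p =", round(float(fit.pvalues["self_efficacy"]), 3))
```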

4.4. Do Causal Attribution Habits Differentially Contribute to Self-Efficacy?

To answer this question, we conducted regression analyses with causal attribution ratings as the predictors and self-efficacy as the outcome variable (see Table 5). We found that when explanations for good grades were considered, ability and luck both contributed to self-efficacy, but quite differently. Explanations involving one’s ability increased self-efficacy, whereas explanations involving luck decreased it. When bad grades were considered, explanations involving abilities were linked to a decline in self-efficacy. That is, when students explained their good grades with their own abilities, they also displayed a higher level of self-efficacy; on the other hand, when students thought that their good grades were due to luck, an external variable cause, their sense of self-efficacy was weakened. And, perhaps unsurprisingly, if students attributed their poor grades to their abilities, their sense of self-efficacy was low.

5. Discussion

Do students who do better on an exam differ in their ability to confidently predict their test grade from students who do less well? And what factors do students believe account for their performance? These questions are significant. If students are not able to predict their performance correctly, and are not confident that their predictions are correct, they are less likely to engage in behaviors that maximize their chances of a good grade. Furthermore, if students believe that their role in attaining a certain grade is partial at best because factors outside of their control are the primary drivers of their test grade, inertia rather than targeted studying is the likely consequence. Thus, understanding whether students are able to accurately and confidently predict a test grade, as well as what they believe their role in test preparation to be, may help educators guide students’ test preparation activities.
Our findings indicate that learners at a high level of attainment differ systematically and significantly from learners at lower levels. While students with good grades tend to underestimate their performance and are fairly confident in their predictions, their peers with lower grades tend to overestimate their performance but with less confidence. Having completed the exam does not substantially change these patterns. While the highest-performing students improve the accuracy of their test predictions, other students fail to take advantage of the experience of having completed the test.
Our findings are consistent with the notion that estimation is a complex process executed under a certain degree of uncertainty regarding the specific demands of the test and the extent to which the knowledge and skills that a learner possesses match such demands [16,41,42]. They are also consistent with earlier findings that poor performers are less accurate and less confident in their predictions [13]. Thus, poor performers are not “blissfully ignorant” of their weaknesses, as implied by the illusion of knowing phenomenon [16,17]. On the contrary, their predictions resemble expressions of hope, which may be used to shield, albeit temporarily, their self-image from the bad news they are about to encounter [43]. Such hope remains dysfunctional, though, as it may prevent learners from making use of feedback and recalibrating action. In fact, in our study, there was no evidence that poor performers took advantage of the information gained from taking the final test to improve the accuracy of their predictions.
To understand potential underlying reasons for these findings, we probed the role of self-efficacy and causal attribution. Our findings suggest that self-efficacy contributes substantially to students’ inaccurate predictions of their exam grades as well as to their confidence in the predictions made. Avhustiuk et al. [44], however, found that students’ general self-efficacy increased with the accuracy of self-assessment, whereas Al Kuhayli et al. [45] did not find any relationship. It has been suggested that hope and self-efficacy are overlapping constructs [46]. Thus, it is not surprising that a “can do” attitude may have two sides, serving either as a propeller of a realistic forecast or as a supporter of unsubstantiated optimism. The particular conditions under which either side may emerge, or not emerge at all, are unclear at present. Over and above self-efficacy, students who credit good test results to effort are also more confident in their predictions. In contrast, students who think that luck plays a role in their undesirable performance demonstrate diminished prediction accuracy before the test; if they think that their abilities are to blame, their subjective confidence also decreases. These findings suggest that the interplay between causal attribution habits and self-efficacy can determine the impact of an outcome (a grade) on the learner’s current and future course of action. The exact nature of this interplay should be further explored in future research.
Despite the significance of these findings, limitations exist. Our research relies on surveys completed by 258 students enrolled in a general education course on American Government at a large public university in the United States. While students came from majors across the university and were from diverse backgrounds, they nonetheless constituted a convenience sample. While we assume that those students who completed the surveys were motivated to do well in the class, we do not know whether they differed systematically from those students who did not participate. Further research may include different classes and institutions of higher education to see whether our findings can be replicated on a broader scale.
Regardless, our initial results provide clues for professors and advisors interested in helping students to succeed. For example, if students who perform poorly are not able to accurately and confidently predict their exam results, and if they believe that external factors are responsible for their performance, they are less likely to engage in behaviors that maximize their chances of success. To address this obstacle, professors can design low-stakes assessments that help students accurately understand their mastery of the material. Appropriate feedback may also be able to help them identify where they need to focus their energy to study for the test [47]. Repeated practice with formative assessments that explain how they are doing and how they can improve may help them feel that they have a role to play in doing well rather than believing that external factors are primarily responsible for their grades.
As many students struggle to succeed in higher education, understanding why some students fail to engage in behaviors that increase their chances of success is critical [5,47]. This study adds to an existing body of literature that demonstrates that it is not necessarily the material taught in college, or the particular pedagogy used, but perhaps students’ beliefs about themselves and their inability to correctly predict their performance that play a critical role in their academic success [13,45].

Author Contributions

Conceptualization, K.H., M.A.E.P., and B.M.W.; methodology, K.H., M.A.E.P., and B.M.W.; formal analysis, M.A.E.P.; investigation, K.H., M.A.E.P., and B.M.W.; resources, K.H., M.A.E.P., and B.M.W.; data curation, K.H., M.A.E.P., and B.M.W.; writing—original draft preparation, K.H., M.A.E.P., and B.M.W.; writing—review and editing, K.H., M.A.E.P., and B.M.W.; project administration, K.H., M.A.E.P., and B.M.W.; funding acquisition, n/a. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank Robert Bass, Annabelle Conroy, Bruce Farcau, James Paradiso, Esther Wilkinson, Ronan Wilson, and the anonymous reviewers.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kitsantas, A. Test preparation and performance: A self-regulatory analysis. J. Exp. Educ. 2002, 70, 101–113. [Google Scholar] [CrossRef]
  2. Madaus, G.F. The distortion of teaching and testing: High-stakes testing and instruction. Peabody J. Educ. 1988, 65, 29–46. [Google Scholar] [CrossRef]
  3. Lee, Y.; Kim, B.; Shin, D.; Kim, J.; Baek, J.; Lee, J.; Choi, Y. Prescribing Deep Attentive Score Prediction Attracts Improved Student Engagement. Available online: https://arxiv.org/abs/2005.05021 (accessed on 27 August 2020).
  4. Bicak, B. Scale for test preparation and test taking strategies. Educ. Sci. Theory Pract. 2013, 13, 279–289. [Google Scholar]
  5. Pinquart, M.; Ebeling, M. Students’ expected and actual academic achievement—A meta-analysis. Int. J. Educ. Res. 2020, 100, 101524. [Google Scholar] [CrossRef]
  6. Serra, M.J.; DeMarree, K.G. Unskilled and unaware in the classroom: College students’ desired grades predict their biased grade predictions. Mem. Cogn. 2016, 44, 1127–1137. [Google Scholar] [CrossRef] [Green Version]
  7. Kennedy, E.J.; Lawton, L.; Plumlee, E.L. Blissful ignorance: The problem of unrecognized incompetence and academic performance. J. Mark. Educ. 2002, 24, 243–252. [Google Scholar] [CrossRef]
  8. Bandura, A. Human agency in social cognitive theory. Am. Psychol. 1989, 44, 1175–1184. [Google Scholar] [CrossRef]
  9. Zimmerman, B.J. Becoming a self-regulated learner: An overview. Theory Pract. 2002, 41, 64–70. [Google Scholar] [CrossRef]
  10. Maki, R.H.; Shields, M.; Wheeler, A.E.; Zacchilli, T.L. Individual differences in absolute and relative metacomprehension accuracy. J. Educ. Psychol. 2005, 97, 723–731. [Google Scholar] [CrossRef]
  11. Dunlosky, J.; Serra, M.J.; Matvey, G.; Rawson, K.A. Second-order judgments about judgments of learning. J. Gen. Psychol. 2005, 132, 335–346. [Google Scholar] [CrossRef]
  12. Händel, M.H.; de Bruin, A.B.H.; Dresel, M. Individual differences in local and global metacognitive judgments. Metacogn. Learn. 2020, 15, 51–75. [Google Scholar] [CrossRef] [Green Version]
  13. Miller, T.M.; Geraci, L. Unskilled but aware: Reinterpreting overconfidence in low-performing students. J. Exp. Psychol. Learn. Mem. Cogn. 2011, 37, 502–506. [Google Scholar] [CrossRef] [Green Version]
  14. Chen, G.; Gully, S.M.; Whiteman, J.A.; Kilcullen, R.N. Examination of relationships among trait-like individual differences, state-like individual differences, and learning performance. J. Appl. Psychol. 2000, 85, 835–847. [Google Scholar] [CrossRef]
  15. Simon, J.G.; Feather, N.T. Causal attributions for success and failure at university examinations. J. Educ. Psychol. 1973, 64, 46–56. [Google Scholar] [CrossRef]
  16. Dunning, D.; Heath, C.; Suls, J.M. Flawed self-assessment: Implications for health, education, and the workplace. Psychol. Sci. Public Interest 2004, 5, 69–106. [Google Scholar] [CrossRef] [Green Version]
  17. Dunning, D.; Johnson, K.; Ehrlinger, J.; Kruger, J. Why people fail to recognize their own incompetence. Curr. Dir. Psychol. Sci. 2003, 12, 83–87. [Google Scholar] [CrossRef]
  18. Williams, W.M. Blissfully incompetent. Psychol. Sci. Public Interest 2004, 5, i–ii. [Google Scholar] [CrossRef] [Green Version]
  19. Glenberg, A.M.; Wilkinson, A.C.; Epstein, W. The illusion of knowing: Failure in the self-assessment of comprehension. Mem. Cogn. 1982, 10, 597–602. [Google Scholar] [CrossRef] [Green Version]
  20. Al-Harthy, I.S.; Was, C.A.; Hassan, A.S. Poor performers are poor predictors of performance and they know it: Can they improve their prediction accuracy? J. Glob. Res. Educ. Soc. Sci. 2015, 4, 93–100. [Google Scholar]
  21. Lefebvre, G.L.; Lebreton, M.; Meyniel, F.; Bourgeois-Gironde, S.; Palminteri, S. Behavioural and neural characterization of optimistic reinforcement learning. Nat. Hum. Behav. 2017, 1, 1–9. [Google Scholar] [CrossRef]
  22. Sharot, T. The Optimism Bias. Curr. Biol. 2011, 21, 941–945. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Hacker, D.J.; Bol, L.; Horgan, D.D.; Rakow, E.A. Test prediction and performance in a classroom context. J. Educ. Psychol. 2000, 92, 160–170. [Google Scholar] [CrossRef]
  24. Miller, T.M.; Geraci, L. Training metacognition in the classroom: The influence of incentives and feedback on exam predictions. Metacogn. Learn. 2011, 6, 303–314. [Google Scholar] [CrossRef]
  25. Mills, C.M.; Keil, F.C. Knowing the limits of one’s understanding: The development of an awareness of an illusion of explanatory depth. J. Exp. Child Psychol. 2004, 87, 1–32. [Google Scholar] [CrossRef]
  26. Saenz, G.D.; Geraci, L.; Tirso, R. Improving metacognition: A comparison of interventions. Appl. Cogn. Psychol. 2019, 33, 918–929. [Google Scholar] [CrossRef]
  27. Hacker, D.J.; Bol, L.; Bahbahani, K. Explaining calibration accuracy in classroom contexts: The effects of incentives, reflection, and explanatory style. Metacogn. Learn. 2008, 3, 101–121. [Google Scholar] [CrossRef]
  28. Weiner, B. An attributional theory of achievement motivation and emotion. Psychol. Rev. 1985, 92, 548–573. [Google Scholar] [CrossRef]
  29. Weiner, B. Human Motivation: Metaphors, Theory, and Research; SAGE Publications: Newbury Park, CA, USA, 1992. [Google Scholar]
  30. Heider, F. Social perception and phenomenal causality. Psychol. Rev. 1944, 51, 358–374. [Google Scholar] [CrossRef]
  31. Heider, F. The Psychology of Interpersonal Relations; Wiley: New York, NY, USA, 1958. [Google Scholar]
  32. Weiner, B. Intrapersonal and interpersonal theories of motivation from an attributional perspective. Educ. Psychol. Rev. 2000, 12, 1–14. [Google Scholar] [CrossRef]
  33. Abramson, L.Y.; Seligman, M.E.; Teasdale, J.D. Learned helplessness in humans: Critique and reformulation. J. Abnorm. Psychol. 1978, 87, 49–74. [Google Scholar] [CrossRef]
  34. Silver, W.S.; Mitchell, T.R.; Gist, M.E. Responses to successful and unsuccessful performance: The moderating effect of self-efficacy on the relationship between performance and attributions. Organ. Behav. Hum. Decis. Process. 1995, 62, 286–299. [Google Scholar] [CrossRef]
  35. Bandura, A. Social Foundations of Thought and Action; Prentice-Hall: Upper Saddle River, NJ, USA, 1986. [Google Scholar]
  36. Hirschy, A.J.; Morris, J.R. Individual differences in attributional style: The relational influence of self-efficacy, self-esteem, and sex role identity. Personal. Individ. Differ. 2002, 32, 183–196. [Google Scholar] [CrossRef]
  37. Camgoz, S.M.; Tektas, O.O.; Metin, I. Academic attributional style, self-efficacy and gender: A cross-cultural comparison. Soc. Behav. Personal. Int. J. 2008, 36, 97–114. [Google Scholar] [CrossRef]
  38. Chen, G.; Gully, S.M.; Eden, D. Validation of a new general self-efficacy scale. Organ. Res. Methods. 2001, 4, 62–83. [Google Scholar] [CrossRef] [Green Version]
  39. McClure, J.; Meyer, L.H.; Garisch, J.; Fischer, R.; Weir, K.F.; Walkey, F.H. Students’ attributions for their best and worst marks: Do they relate to achievement? Contemp. Educ. Psychol. 2011, 36, 71–81. [Google Scholar]
  40. Moskal, P.D.; Dziuban, C.D. Cybereducation: The Future of Long-Distance Learning; Mary Ann Liebert: New Rochelle, NY, USA, 2001. [Google Scholar]
  41. Zell, E.; Krizan, Z. Do people have insight into their abilities? A metasynthesis. Perspect. Psychol. Sci. 2014, 9, 111–125. [Google Scholar] [CrossRef] [Green Version]
  42. García-Fernández, J.M.; Inglés-Saura, C.J.; Gonzálvez, M.V.C.; Gonzálvez, C. Relación entre autoeficacia y autoatribuciones académicas en estudiantes chilenos. Univ. Psychol. 2016, 15, 79–88. [Google Scholar]
  43. Kruglanski, A.W.; Jasko, K.; Friston, K. All thinking is ‘wishful’ thinking. Trends Cogn. Sci. 2020, 24, 413–424. [Google Scholar] [CrossRef] [PubMed]
  44. Avhustiuk, M.M.; Pasichnyk, I.D.; Kalamazh, R.V. The illusion of knowing in metacognitive monitoring: Effects of the type of information and of personal, cognitive, metacognitive, and individual psychological characteristics. Eur. J. Psychol. 2018, 14, 317–341. [Google Scholar] [CrossRef] [Green Version]
  45. Al Kuhayli, H.; Pilotti, M.A.E.; El-Alaoui, K.; Cavazos, S. An exploratory non-experimental design of self-assessment practice. Int. J. Assess. Eval. 2019, 26, 49–65. [Google Scholar] [CrossRef]
  46. Zhou, M.; Seng Kam, C.C. Hope and general self-efficacy: Two measures of the same construct? J. Psychol. 2016, 150, 543–559. [Google Scholar] [CrossRef] [PubMed]
  47. Feldman, D.B.; Kubota, M. Hope, self-efficacy, optimism, and academic achievement: Distinguishing constructs and levels of specificity in predicting college grade-point average. Learn. Individ. Differ. 2015, 37, 210–216. [Google Scholar] [CrossRef]
Table 1. Regression Analysis with Inaccuracy of Estimation and Subjective Confidence as the Predictors and Final Test Performance as the Outcome Variable.

                       B         SE       Beta      t          Sig.
Before the Final Test
Constant               78.815    1.595
Inaccuracy Before      −8.778    0.686    −0.610    −12.798    *
Confidence Before      4.126     0.707    0.278     5.834      *
After the Final Test
Constant               76.328    1.662
Inaccuracy After       −8.026    0.727    −0.513    −11.036    *
Confidence After       4.891     0.585    0.389     8.357      *

Note: Before the Final Test: R = 0.651; After the Final Test: R = 0.673. * indicates significance at 0.05 level.
Table 2. Percentage of Students at Each Performance Level that Underestimated (−1, −2, and −3), Correctly Estimated (0), or Overestimated (+1, +2, and +3) their Grades, Measured as the Difference between the Predicted and the Obtained Level. Mean Subjective Confidence and Standard Errors of the Mean (SEM) are Reported in the Last Column.

Level (Before)      −3      −2      −1      0       +1      +2      +3      Confidence (SEM)
0 Poor (0–69%)                              4.8     38.1    52.4    4.8     1.57 (0.19)
1 C (70–79%)                        3.2     29.0    54.8    12.9            1.87 (0.16)
2 B (80–89%)                        13.8    60.6    25.5                    2.06 (0.10)
3 A (90–100%)               10.7    37.5    51.8                            2.22 (0.08)
Level (After)
0 Poor (0–69%)                              4.8     33.3    42.9    19.0    1.62 (0.18)
1 C (70–79%)                        6.5     9.7     67.7    16.1            2.16 (0.16)
2 B (80–89%)                1.1     9.6     57.4    31.9                    2.47 (0.12)
3 A (90–100%)               2.7     20.5    76.8                            3.01 (0.09)
Table 3. Regression Analyses with Explanatory Preferences for Best Grades and Self-Efficacy Beliefs as the Predictors, and Estimation Inaccuracy and Subjective Confidence as the Outcome Variables.

Good Grades

Estimation Inaccuracy—Before Test
                 B         SE       Beta      t         Sig.
Constant         −1.245    0.474
Ability          −0.009    0.056    −0.010    −0.154    ns
Effort           −0.009    0.063    −0.009    −0.135    ns
Test             0.027     0.043    0.043     0.631     ns
Luck             0.037     0.042    0.065     0.865     ns
Family           −0.013    0.035    −0.025    −0.373    ns
Instructor       −0.039    0.042    −0.061    −0.933    ns
Friends          0.018     0.040    0.032     0.453     ns
Self-Efficacy    0.434     0.114    0.253     3.825     *

Subjective Confidence—Before Test
Constant         0.593     0.449
Ability          0.047     0.053    0.059     0.899     ns
Effort           0.140     0.060    0.159     2.330     *
Test             −0.047    0.041    −0.076    −1.153    ns
Luck             0.015     0.040    0.027     0.365     ns
Family           0.050     0.033    0.100     1.505     ns
Instructor       −0.063    0.040    −0.100    −1.573    ns
Friends          −0.056    0.038    −0.104    −1.481    ns
Self-Efficacy    0.289     0.108    0.174     2.686     *

Estimation Inaccuracy—After Test
Constant         −0.519    0.441
Ability          −0.004    0.052    −0.006    −0.085    ns
Effort           −0.046    0.059    −0.056    −0.787    ns
Test             0.016     0.040    0.027     0.399     ns
Luck             0.026     0.039    0.049     0.652     ns
Family           0.001     0.033    0.002     0.024     ns
Instructor       −0.020    0.039    −0.033    −0.498    ns
Friends          0.018     0.037    0.036     0.500     ns
Self-Efficacy    0.309     0.106    0.196     2.922     *

Subjective Confidence—After Test
Constant         0.419     0.534
Ability          0.008     0.063    0.008     0.121     ns
Effort           0.174     0.071    0.168     2.438     *
Test             0.012     0.049    0.017     0.257     ns
Luck             0.015     0.048    0.023     0.314     ns
Family           0.066     0.040    0.112     1.677     ns
Instructor       0.084     0.047    0.113     1.770     ns
Friends          −0.055    0.045    −0.086    −1.222    ns
Self-Efficacy    0.269     0.128    0.137     2.104     *

Note: Estimation Inaccuracy—Before Test: R = 0.250; Subjective Confidence—Before Test: R = 0.326; Estimation Inaccuracy—After Test: R = 0.199; Subjective Confidence—After Test: R = 0.297. * significant contribution at 0.05 level.
Table 4. Regression Analyses with Explanatory Preferences for Worst Grades and Self-Efficacy Beliefs as the Predictors, and Estimation Inaccuracy and Subjective Confidence as the Outcome Variables.

Worst Grades

Estimation Inaccuracy—Before Test
                 B         SE       Beta      t         Sig.
Constant         −1.429    0.453
Ability          −0.002    0.036    −0.004    −0.062    ns
Effort           0.012     0.035    0.021     0.327     ns
Test             −0.021    0.049    −0.028    −0.420    ns
Luck             0.103     0.042    0.181     2.450     *
Family           −0.042    0.050    −0.060    −0.838    ns
Instructor       −0.046    0.037    −0.084    −1.251    ns
Friends          0.082     0.046    0.129     1.798     ns
Self-Efficacy    0.456     0.107    0.266     4.266     *

Subjective Confidence—Before Test
Constant         1.240     0.448
Ability          −0.070    0.035    −0.129    −2.000    *
Effort           0.007     0.035    0.013     0.199     ns
Test             −0.003    0.049    −0.004    −0.067    ns
Luck             0.036     0.042    0.064     0.857     ns
Family           −0.007    0.050    −0.010    −0.141    ns
Instructor       −0.028    0.036    −0.052    −0.766    ns
Friends          0.007     0.045    0.012     0.165     ns
Self-Efficacy    0.345     0.106    0.207     3.262     *

Estimation Inaccuracy—After Test
Constant         −0.647    0.427
Ability          −0.016    0.033    −0.031    −0.472    ns
Effort           −0.006    0.033    −0.011    −0.171    ns
Test             −0.004    0.046    −0.006    −0.090    ns
Luck             0.072     0.040    0.137     1.814     ns
Family           −0.051    0.048    −0.079    −1.071    ns
Instructor       −0.031    0.035    −0.062    −0.905    ns
Friends          0.061     0.043    0.104     1.414     ns
Self-Efficacy    0.295     0.101    0.187     2.921     *

Subjective Confidence—After Test
Constant         1.535     0.529
Ability          0.005     0.041    0.007     0.115     ns
Effort           −0.058    0.041    −0.090    −1.404    ns
Test             0.032     0.057    0.037     0.555     ns
Luck             −0.075    0.049    −0.115    −1.529    ns
Family           0.049     0.059    0.061     0.834     ns
Instructor       0.077     0.043    0.123     1.800     ns
Friends          −0.016    0.053    −0.022    −0.300    ns
Self-Efficacy    0.335     0.125    0.171     2.687     *

Note: Estimation Inaccuracy—Before Test: R = 0.317; Subjective Confidence—Before Test: R = 0.258; Estimation Inaccuracy—After Test: R = 0.237; Subjective Confidence—After Test: R = 0.253. * significant contribution at 0.05 level.
Table 5. Regression Analyses with Explanatory Preferences for Best and Worst Grades as the Predictors and Self-Efficacy as the Outcome Variable.

Best Grades
                 B         SE       Beta      t         Sig.
Constant         2.391     0.216
Ability          0.077     0.031    0.160     2.514     *
Effort           0.065     0.035    0.122     1.846     ns
Test             −0.033    0.024    −0.090    −1.396    ns
Luck             −0.068    0.023    −0.205    −2.923    *
Family           0.024     0.019    0.079     1.225     ns
Instructor       0.022     0.023    0.059     0.954     ns
Friends          0.040     0.022    0.123     1.814     ns

Worst Grades
Constant         3.279     0.170
Ability          −0.044    0.021    −0.135    −2.128    *
Effort           −0.014    0.021    −0.044    −0.683    ns
Test             0.021     0.029    0.047     0.710     ns
Luck             −0.033    0.025    −0.101    −1.350    ns
Family           −0.023    0.030    −0.056    −0.778    ns
Instructor       −0.007    0.022    −0.022    −0.328    ns
Friends          −0.031    0.027    −0.084    −1.150    ns

Note: Best Grades: R = 0.375; Worst Grades: R = 0.268. * significant contribution at 0.05 level.
