Article

Calculus Self-Efficacy Inventory: Its Development and Relationship with Approaches to Learning

Department of Mathematical Sciences, University of Agder, 4630 Kristiansand S, Norway
*
Author to whom correspondence should be addressed.
Educ. Sci. 2019, 9(3), 170; https://doi.org/10.3390/educsci9030170
Submission received: 7 June 2019 / Revised: 21 June 2019 / Accepted: 25 June 2019 / Published: 3 July 2019

Abstract
This study was framed within a quantitative research methodology to develop a concise measure of calculus self-efficacy with sound psychometric properties. A survey research design was adopted in which 234 engineering and economics students rated their confidence in solving year-one calculus tasks on a 15-item inventory. The results of a series of exploratory factor analyses, using minimum rank factor analysis for factor extraction, oblique promin rotation, and parallel analysis for retaining extracted factors, revealed a one-factor solution of the model. The final 13-item inventory was unidimensional, with all factor loadings greater than 0.42, an average communality of 0.74, and 62.55% of the common variance of the items accounted for by the latent factor, i.e., calculus self-efficacy. The inventory was found to be reliable, with an ordinal coefficient alpha of 0.91. Using Spearman's rank coefficient, a significant positive correlation, ρ(95) = 0.27, p < 0.05 (2-tailed), was found between the deep approach to learning and calculus self-efficacy, and a significant negative correlation, ρ(95) = −0.26, p < 0.05 (2-tailed), was found between the surface approach to learning and calculus self-efficacy. These results suggest that students who adopt the deep approach to learning are confident in dealing with calculus exam problems, while those who adopt the surface approach to learning are less confident in solving such problems.

1. Introduction

Studies on meaningful learning experiences of students in higher education have taken various directions over the last decades. Many psychologists and sociologists have examined closely how students reflect on themselves as they learn [1,2,3]. One outcome of this insight into students’ learning is the identification of perceived self-efficacy as a good predictor of desirable learning outcomes [4]. Perceived self-efficacy, according to Bandura [5], refers to “beliefs in one’s capabilities to organize and execute the courses of action required to manage prospective situations” (p. 2). These internal convictions place an individual in a better position to approach a presented task and behave in a particular way. An individual will tend to engage in tasks for which they have perceived self-competence and to avoid those for which perceived self-competence is lower. Self-efficacy is a determinant factor that positively correlates with the amount of effort expended on a task, perseverance when faced with impediments, and resilience during challenging situations [1].
There has been a long-standing debate among educationists on appropriate ways of assessing self-efficacy, with some contending for the general perspective while others opt for the domain- or situation-specific perspective (e.g., [6,7]). The domain-specific perspective has influenced the conceptualization of self-efficacy in many fields. For example, mathematics self-efficacy has long been conceptualized as “a situational or problem-specific assessment of an individual’s confidence in her or his ability to successfully perform or accomplish a particular task or problem” [2]. In a similar manner, engineering self-efficacy has been defined as a “person’s belief that he or she can successfully navigate the engineering curriculum and eventually become a practicing engineer” [8]. Self-efficacy among engineering students has been investigated from conceptualization through the development of measuring instruments to correlation with other variables such as performance and anxiety [9,10]. It has been investigated in a similar way in mathematics and other science-based courses.
Despite studies on mathematics self-efficacy and performance being sparse, especially in higher education (HE), the available empirical evidence has established a remarkable relationship between mathematics self-efficacy and academic performance, with the former being a strong predictor of the latter [11,12,13,14]. For example, Peters [15] reported a quantitative empirical study on the relationship between self-efficacy and mathematics achievement including other constructs among 326 undergraduate students. Employing multi-level analysis, it was found that mathematics self-efficacy differed across genders, with boys taking the lead, and positively correlated with achievement. More recently, Roick and Ringeisen [16] found, in their longitudinal study, that mathematics self-efficacy exerted a great influence on performance and played a mediating role between learning strategies and mathematics achievement. Similar corroborative results can also be found in the quantitative study reported in [17].
A good number of educators have empirically shown and emphatically argued that the best way to achieve a higher predictive power of mathematics self-efficacy on students’ academic performance is through task-specific measures (e.g., [14]). Surprisingly, an extensive search of the literature revealed a lack of instruments for measuring students’ self-efficacy on year-one calculus tasks. This is despite the fact that calculus has been a compulsory part of most year-one Science, Technology, Engineering, and Mathematics (STEM) curricula at universities around the world. The current study therefore aimed at developing a measure with sound psychometric properties for assessing students’ self-efficacy on year-one calculus tasks. Furthermore, in order to enhance the predictive validity of the developed instrument, its relationship with approaches to learning was also investigated.

2. Literature Review

Albert Bandura is considered the first psychologist in the history of clinical, social, and counseling psychology to introduce the term “self-efficacy” (see [18]), referring to “the conviction that one can successfully execute the behavior required to produce the outcomes” [19]. However, some authors have contended that the “outcome expectancy” concept, which was extensively investigated prior to 1977, is equivalent to self-efficacy in theory, logic, and operationalization [20,21]. In his rebuttal of this criticism, Bandura elicited the conceptual differences between outcome and self-efficacy expectancies while maintaining that the kinds of outcomes people expect are strongly influenced by self-efficacy expectancies (see [22]). An overview of some of these controversies, including arguments, counterarguments, disparities, and agreements, can be found in the literature (e.g., [23,24]).
The basic tenet of the self-efficacy theory is that all psychological and behavioral changes occur as a result of modifications in the sense of efficacy or personal mastery of an individual [19,25]. In the words of Bandura [19], “people process, weigh, and integrate diverse sources of information concerning their capability, and they regulate their choice behavior and effort expenditure accordingly” (p. 212). In addition, Bandura’s theory posits that the explanation and prediction of psychological changes can be achieved through appraisal of the self-efficacy expectations of an individual. In other words, the mastery or coping expectancy of an individual is a function of outcome expectancy—the credence that a given behavior will or will not result in a given outcome—and self-efficacy expectancy—“the belief that the person is or is not capable of performing the requisite behavior” [23].
Furthermore, the applications of Bandura’s theory as suitable frameworks of conceptualization are numerous in cardiac rehabilitation studies [26], educational research, clinical nursing, music and educational practices [27,28,29,30]. In a study involving undergraduate students taking a biomechanics course in the United States, Wallace and Kernozek [31] demonstrated how the self-efficacy theory can be used by instructors to improve students’ learning experience and lower their anxiety towards the course. Moreover, Sheu et al. [32] reported a meta-analysis study on the contributions of self-efficacy theory in learning science, mathematics, engineering, and technology. The foregoing discussion points to the wide acceptance of Bandura’s self-efficacy theory not only among psychologists but also the educational community at large.
The different conceptualizations of self-efficacy, involving general and domain-specific perspectives, have recurring implications for the measurement of the construct. A look into the literature reveals that mathematics self-efficacy has been measured with instruments tailored towards general assessment (e.g., [16]), sources of efficacy (e.g., [33]), task-specific efficacy (e.g., [34]), and adaptations from other instruments or self-developed items (e.g., [35]). These instruments have their strengths and weaknesses. A brief account of each type of instrument is presented in the following paragraphs, accompanied by the justification for the approach chosen in the current study.
General assessment instruments have been developed to measure students’ self-reported ratings of their capabilities to perform in mathematical situations. Chan and Yen Abdullah [36] developed a 14-item mathematics self-efficacy questionnaire (MSEQ) in which respondents appraised their ability on a five-point Likert scale from 1 (never) to 5 (usually). The MSEQ had four sub-structures: three items each measuring general mathematics self-efficacy and “efficacy in future”, coupled with four items each measuring self-efficacy in class and in assignments. Evidence of validity was provided, and the internal consistency of the items was high, with a Cronbach’s alpha of 0.94. A similar result was also reported in an omnibus survey instrument developed by Wang and Lee [37], in which mathematics self-efficacy was a subcategory. These kinds of omnibus instruments have been reported to be problematic in their predictive relevance [38].
Closely related to the general assessment instruments are mathematics subcategory items adapted from other instruments. For example, in a longitudinal study involving 3014 students, You, Dang, and Lim [39] developed a mathematics self-efficacy measure by adapting items from the motivated strategies for learning questionnaire (MSLQ) developed by Pintrich, Smith, Garcia, and McKeachie [40]. Furthermore, in an attempt to operationalize mathematics self-efficacy, Y.-L. Wang et al. [35] developed an instrument adapted from the science learning self-efficacy questionnaire developed in [41] by substituting mathematics for science in the original instrument. Some authors have independently developed measures of mathematics self-efficacy in which the sources of their items are not disclosed. For example, Skaalvik, Federici, and Klassen [42] developed a four-item Norwegian mathematics self-efficacy measure as part of a survey instrument without any disclosure of the sources of their items. These instruments were not too different from the general academic self-efficacy measures in terms of their predictive power of performance [38].
Based on Bandura’s [3,5] theorized sources of self-efficacy—mastery experience, vicarious experience, verbal/social persuasions, physiological or affective states—some educationists have developed and investigated some measures [33,43,44]. In a quantitative empirical three-phase study, Usher and Pajares [33] developed a measure and investigated the sources of mathematics self-efficacy. The study started in Phase One with an 84-item measure and ended in Phase Three with a revised 24-item instrument. The final version contained six items in each of the mastery experience, vicarious experience, social persuasions, and physiological state subcategories with 0.88, 0.84, 0.88, and 0.87 Cronbach’s alpha coefficients as pieces of evidence of item internal consistency, respectively. The study confirmed the hypothesized mastery experience of Bandura [5] as the strongest predictor of learning outcome [33]. Other studies have also reported corroborative empirical evidence to confirm the hypothesized sources of mathematics self-efficacy using Usher and Pajares’ [33] instrument with either wording or language adaptations [45,46].
Apart from sources-of-self-efficacy measures, the most effective approach in terms of achieving high predictive power of learning outcomes is to assess mathematics self-efficacy through a task-specific measure [47]. The basic idea in developing a mathematics task-specific instrument is to conceptualize self-efficacy on predefined mathematical task(s) and tailor the instrument items towards the respondent’s self-capability to complete the tasks. An example of early instruments developed using this approach was the 52-item mathematics self-efficacy scale (MSES) by Betz and Hackett [34] to measure self-efficacy among mathematics college students. In the administration of this instrument, the respondents rated their confidence in successfully completing 18 mathematics tasks, solving 18 math-related problems, and achieving at least a “B” grade in each of 16 college mathematics-related courses like calculus, statistics, etc. Evidence of reliability was provided with Cronbach’s alpha coefficients of 0.90, 0.93, and 0.92 on the respective subscales as well as 0.96 on the full 52-item scale [34]. The MSES has been investigated, revised, and validated with items adapted to university mathematics tasks/problems as well as its rating scale reduced from a 10-point to a five-point Likert format [14,48].
A task-specific mathematics self-efficacy instrument was also utilized by the Programme for International Student Assessment (PISA) in their 2012 international survey across 65 countries as reported in [49]. The eight-item instrument measured students’ self-reported level of confidence in completing some mathematical tasks without solving the problems. The rating involved a five-point Likert scale ranging from “not at all confident” to “very confident” in which students were asked, for example, “how confident would they feel about solving an equation like 2(x + 3) = (x + 3)(x − 3)”? A Cronbach’s alpha coefficient of 0.83 was provided as evidence of reliability [49].

3. Methods

3.1. Item Development

The items of the calculus self-efficacy inventory (CSEI) were developed based on the recommendations of Bandura’s self-efficacy theory using the guidelines explained in the literature (e.g., [50]). The initial inventory used in the current study contained 15 items selected from old final examination questions in a year-one calculus course from the 2014/2015 to 2018/2019 academic sessions. Some of the topics covered in the course were functions, limits, continuity and differentiability, differentiation and its applications, integration and its applications, etc. The items varied in level of difficulty from procedural items (involving recall of facts, definitions, use of formulae, etc.) to conceptual items involving higher cognitive abilities such as application, analysis, and evaluation. The students were asked to rate their confidence in solving the tasks on a scale ranging from 0 (not confident at all), through 50 (moderate confidence), to 100 (very confident). The 100-point scale was used because it has been reported to enhance the predictive validity of self-efficacy inventories (see [50]). Sample questions are presented in Table 1.

3.2. Research Design and Participants

This study adopted a survey research design involving 234 year-one university students in engineering and economics programs taking a compulsory calculus course. The sample comprised 135 males and 99 females with ages ranging from 19 to 22 years. The multicollinearity and adequacy of the sample correlation matrix were checked using Bartlett’s test of sphericity, χ²(91) = 1632.2, p < 0.05 (N = 234), which was significant, with a Kaiser–Meyer–Olkin (KMO) measure of 0.88 and a determinant greater than 0.00001. These results confirmed the adequacy of the sample for factor analysis as well as the absence of multicollinearity in the data [51]. Moreover, the sample size was also within the ranges suggested in the literature for factor analysis of multiple-item instruments (e.g., [52]).
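As an aside, Bartlett's test of sphericity reported above can be computed directly from a correlation matrix. The sketch below is illustrative only: the function name is my own, and the toy 3-variable matrix is not the article's data (the article's test used the 15-item polychoric matrix with 91 degrees of freedom).

```python
import numpy as np

def bartlett_sphericity(R: np.ndarray, n: int) -> tuple[float, int]:
    """Bartlett's test of sphericity for a p x p correlation matrix R
    estimated from n observations:
    chi2 = -(n - 1 - (2p + 5)/6) * ln|R|, with p(p - 1)/2 degrees of
    freedom. A significant chi2 rejects the hypothesis that R is an
    identity matrix, i.e., the data are factorable."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return float(chi2), df

# Toy 3-variable correlation matrix (illustrative, not the article's data).
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
chi2, df = bartlett_sphericity(R, n=234)
```

The significance level can then be obtained from the chi-square distribution with `df` degrees of freedom.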

3.3. Materials

Two instruments were used in this study. The first was the 15-item CSEI described in the previous section entitled “Item Development”. The second instrument was a Norwegian version of the two-factor revised study process questionnaire (R-SPQ-2F) developed by Biggs, Kember, and Leung [53]. This version is a 19-item instrument that measures students’ approaches to learning on a five-point Likert scale, with 10 items measuring the deep approach to learning and 9 items measuring the surface approach to learning mathematics. The psychometric properties of this instrument were investigated elsewhere [54,55], and its reliability coefficients, computed using Raykov and Marcoulides’ [56] formula, ranged from 0.72 to 0.81.

3.4. Procedure

The data were collected using both electronic and paper versions of the two questionnaires. A total of 110 engineering students completed both the CSEI and the R-SPQ-2F, out of whom 95 gave their consent to link their scores on both scales. Economics students completed only the CSEI due to logistical problems and formed the remaining 124 of the sample. The collected data were screened for outlier cases and found to contain none. Responses on the CSEI were coded on an 11-point scale with 0 coded as 0, 0 < values ≤ 10 coded as 1, 10 < values ≤ 20 coded as 2, …, and 90 < values ≤ 100 coded as 10. Univariate and multivariate descriptive statistics revealed the presence of excess kurtosis and skewness, as both indices were greater than |1.0| on most of the items of the CSEI [57]. For this reason, the 11-point categories were further collapsed into five-point ones, and a polychoric correlation matrix was used in the factor analysis of the data using the FACTOR program version 10.8.04 [58]. The recoding into five-point categories was done in such a way that 0–2 were coded as 1, 3–4 were coded as 2, …, and 9–10 were coded as 5.
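The two-step recoding described above can be sketched as follows; the function names are my own, and the mapping assumes the category boundaries as stated in the text (0 maps to 0, each subsequent 10-point band to the next category, and the 11 categories collapse as 0–2 → 1, 3–4 → 2, 5–6 → 3, 7–8 → 4, 9–10 → 5).

```python
import math

def recode_confidence(score: float) -> int:
    """Map a 0-100 confidence rating onto the 11 ordinal categories 0..10:
    0 -> 0, (0, 10] -> 1, (10, 20] -> 2, ..., (90, 100] -> 10."""
    if not 0 <= score <= 100:
        raise ValueError("rating must lie in [0, 100]")
    return math.ceil(score / 10)

# Collapse the 11 categories into five: 0-2 -> 1, 3-4 -> 2,
# 5-6 -> 3, 7-8 -> 4, 9-10 -> 5.
FIVE_POINT = {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 3, 7: 4, 8: 4, 9: 5, 10: 5}

def collapse_to_five(category: int) -> int:
    """Collapse an 11-point category (0..10) to a five-point one (1..5)."""
    return FIVE_POINT[category]
```

For example, a rating of 50 (moderate confidence) falls in category 5 on the 11-point scale and category 3 on the five-point scale.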

4. Results

4.1. Factor Analysis of CSEI

An exploratory factor analysis (EFA) was run on the 15-item CSEI data to determine the factor structures of the inventory. As the data were found to contain excess kurtosis and skewness, instead of a Pearson correlation matrix, a polychoric correlation matrix was used to enhance analysis effectiveness [59]. Minimum rank factor analysis (MRFA) was used in extracting the common underlying factors of CSEI instead of maximum likelihood (ML), unweighted least squares, etc., due to its ability to optimally yield communalities of the sample covariance matrix [60]. The number of factors to retain was based on the optimized parallel analysis procedure [61,62] which has been confirmed to outperform the original Horn’s parallel analysis [63].
This procedure involves simulations of 500 datasets by permuting the sample data at random so that numbers of cases and variables are unchanged. On each of these datasets, EFA was conducted using MRFA, and the average eigenvalues of the extracted factors were then compared with the eigenvalues of the sample. Factors with eigenvalues greater than the average eigenvalues of the simulated datasets were then retained. This procedure has been shown to be an effective way of deciding the number of factors to retain in EFA and also outperformed Kaiser’s criteria of eigenvalues greater than 1 and use of scree plot [61]. The extracted factors were rotated using promin, an example of oblique rotations described in [64]. An oblique rotation was appropriate because the latent factors are assumed to be correlated contrary to the assumption of disjoint factors in the orthogonal rotations. The analysis was performed on both the 11-point and five-point coding of the data. However, results from the five-point coding are presented in Table 2 due to slightly higher precisions in estimating factor loadings and communalities of the items. Factor loadings less than or equal to |0.30| are excluded from Table 2.
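The permutation-based parallel analysis described above can be sketched in a few lines. This is an illustrative simplification under stated assumptions: it compares eigenvalues of the Pearson correlation matrix and uses the simulated mean as the retention threshold, whereas the article applies MRFA to a polychoric matrix and also considers the 95th percentile; the function name is my own.

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 500, seed: int = 0) -> int:
    """Horn-style parallel analysis: permute each column of the raw data
    independently so that the numbers of cases and variables are unchanged,
    compute eigenvalues for each permuted dataset, and retain the factors
    whose sample eigenvalues exceed the average simulated eigenvalues."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    sample_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sim_eigs = np.empty((n_sims, p))
    for s in range(n_sims):
        permuted = np.column_stack(
            [rng.permutation(data[:, j]) for j in range(p)])
        sim_eigs[s] = np.sort(
            np.linalg.eigvalsh(np.corrcoef(permuted, rowvar=False)))[::-1]
    # Count how many sample eigenvalues exceed the simulated averages.
    return int(np.sum(sample_eigs > sim_eigs.mean(axis=0)))
```

Applied to data generated from a single common factor, this procedure recommends retaining one factor, mirroring the one-factor recommendation reported for the CSEI.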
Table 2 presents rotated and unrotated factor loadings of a series of three exploratory factor analyses of the CSEI data. The first analysis column of Table 2 represents rotated factor loadings of a two-factor solution of the data. However, there was a gross misspecification in this model, with Items 07, 09, and 13 exhibiting substantial cross-loadings and out-of-range rotated factor loadings. The out-of-range factor loadings of Item 09 (−1.04) and Item 13 (1.02) are suggestive of negative error variance in the factor solutions of the items. Furthermore, a look at the polychoric correlation matrix (see Appendix A) also revealed that Item 09 had negative correlation coefficients with most other items, which is an indication of a negative variance. For this reason, Item 09 was deleted before the second EFA was run. Moreover, the results of the optimized parallel analysis (Table 3) recommended retaining a one-factor solution based on the 95th percentile and a two-factor solution based on the mean. However, the 95th percentile recommendation of parallel analysis has been reported to be more accurate than the recommendation based on the mean [61]. Therefore, the second analysis was run with a fixed one-factor solution of the model.
The second analysis column of Table 2 presents unrotated factor loadings and item communalities of a one-factor solution of the model with the exclusion of Item 09. This solution contained a Heywood case in the form of the communality of Item 01 being equal to 1. This means that all the variance of Item 01 is shared with other items in the model and that this item has no unique variance at all [51]. Item 01 was removed from the model for this reason, and the third analysis was run. The third analysis column of Table 2 presents unrotated factor loadings and item communalities of a one-factor solution of the model excluding Items 01 and 09. All factor loadings were greater than 0.42, and the average communality (0.74) was greater than the widely recommended 0.70, which is suggestive of a good model solution for the sample data [65]. The extracted factors accounted for a total of 62.55% of the common variance, as depicted in Table 4. This can be interpreted to mean that the one-factor model explained 62.55% of the common variance of the factor solution, which can be used to justify the goodness of fit of the model.

4.2. Reliability of the Instrument

There have been heated debates among methodologists on the appropriateness of using Cronbach’s alpha coefficients in estimating reliability of ordinal scale data. Some of these debates have been provoked by gross misuses and misinterpretations of Cronbach’s alpha especially in the presence of excess kurtosis and skewness, violations of the normality assumption, non-continuous item level of measurement, etc., inherent in ordinal data [66,67]. To circumvent this problem, alternative indices have been proposed for estimating the reliability of ordinal scales (e.g., [68,69]).
A widely used alternative estimate of reliability is the ordinal coefficient alpha proposed by Zumbo, Gadermann, and Zeisser [70]. Ordinal coefficient alpha is similar to Cronbach’s alpha coefficient in that both are computed using McDonald’s [71] formula (Equation (1)) for a one-factor factor analysis model. However, the former is based on polychoric correlation matrix estimates that are theoretically different from the Pearson correlation matrix estimates used in the latter. It has been shown both through simulation and raw data studies that ordinal coefficient alpha outperforms Cronbach’s alpha coefficient in estimating the reliability of scales measured using Likert formats of fewer than six categories (e.g., [70,72]).
$$\alpha = \frac{p}{p-1}\left[\frac{p\bar{\lambda}^{2} - \bar{c}}{p\bar{\lambda}^{2} + \bar{u}}\right] \quad (1)$$
In Equation (1), α is the ordinal coefficient, p is the number of items in the instrument, and λ̄, c̄, and ū (where ū = 1 − c̄) are the average factor loading, average communality, and average unique variance, respectively. Using the values of these parameters as presented in Table 5, the ordinal coefficient alpha can be calculated as follows:
$$\alpha = \frac{13}{13-1}\left[\frac{13 \times 0.67^{2} - 0.74}{13 \times 0.67^{2} + 0.26}\right] = 0.91.$$
This is suggestive of a highly reliable unidimensional instrument with an appropriate internal item consistency.
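The calculation above can be reproduced with a few lines of code; the function name is my own, and the parameter values are those reported for the final 13-item CSEI in Table 5.

```python
def ordinal_alpha(p: int, avg_loading: float, avg_communality: float) -> float:
    """Ordinal coefficient alpha via McDonald's one-factor formula
    (Equation (1)): p items, average factor loading, average communality;
    the average unique variance is u = 1 - c."""
    lam2 = p * avg_loading ** 2          # p * lambda-bar squared
    c = avg_communality
    u = 1.0 - c
    return (p / (p - 1)) * (lam2 - c) / (lam2 + u)

# Values reported for the final 13-item CSEI (Table 5).
alpha = ordinal_alpha(13, 0.67, 0.74)  # ≈ 0.91
```

Note that the same formula yields Cronbach's alpha when the loadings and communalities come from a Pearson rather than a polychoric correlation matrix.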

4.3. Correlation of Calculus Self-Efficacy with Approaches to Learning

In an effort to examine the predictive validity of the CSEI, a correlation between students’ scores on the inventory and their respective scores on the R-SPQ-2F was investigated. Scoring of the CSEI was accomplished by adding item scores on the final 13-item inventory while that of R-SPQ-2F was in line with the procedure described in [54]. Each of the 95 engineering students had scores on self-efficacy and deep and surface approaches to learning. These scores were explored using descriptive statistics and tested for normality assumptions before the correlation analysis. As shown in Table 6 and Figure 1, scores on both deep and surface approaches are normally distributed while scores on CSEI are not.
The non-normal distribution of the CSEI scores is evident from the significance of the Shapiro–Wilk test statistic, W(95) = 0.94, p < 0.05, as shown in Table 6. Furthermore, the CSEI scores also exhibited a negatively skewed distribution, as shown in the last diagram of Figure 1. For these reasons, a nonparametric bivariate Spearman rank correlation was used instead of the Pearson correlation to examine the relationship between the CSEI and R-SPQ-2F scores. The results revealed a significant positive correlation, ρ(95) = 0.27, p < 0.05 (2-tailed), between the deep approach to learning and calculus self-efficacy and a significant negative correlation, ρ(95) = −0.26, p < 0.05 (2-tailed), between the surface approach to learning and calculus self-efficacy. These results can be interpreted to mean, at the group level, that students who adopt the deep approach to learning are usually confident in dealing with calculus exam problems, while those who adopt the surface approach are less confident in solving such problems. This finding confirms the hypothesis of Bandura’s self-efficacy theory [4,6] and also corroborates the mediating role played by self-efficacy between learning strategies and performance reported in [16].
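The normality check followed by a rank correlation, as described above, can be sketched with SciPy. The scores below are hypothetical illustrative data, not the study's data.

```python
import numpy as np
from scipy.stats import shapiro, spearmanr

# Hypothetical illustrative scores (not the study's data): 95 paired
# deep-approach and calculus self-efficacy scores.
rng = np.random.default_rng(0)
deep = rng.normal(30, 5, size=95)
csei = 0.6 * deep + rng.normal(40, 6, size=95)

# Shapiro-Wilk normality check; a small p-value argues for a
# nonparametric correlation, as in the article.
w_stat, w_p = shapiro(csei)

# Spearman rank correlation is robust to non-normal marginals.
rho, pval = spearmanr(deep, csei)
```

Had both variables been normally distributed, a Pearson correlation would have been the conventional choice.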

5. Conclusions

Despite the abundant empirical evidence in the literature of the high predictive power of task-specific mathematics self-efficacy, an instrument for measuring it on year-one calculus tasks has been lacking [4,50]. The current study was framed within a quantitative research methodology to develop a concise measure of calculus self-efficacy with sound psychometric properties among year-one university students. Bandura’s self-efficacy theory provided a theoretical framework for the conceptualization and operationalization of items on the developed calculus self-efficacy inventory (CSEI). This theory posits that all psychological and behavioral changes occur as a result of modifications in the sense of efficacy or personal mastery of an individual [19,25]. On this basis, the accompanying guidelines and recommendations of this theory [50] were followed in constructing the CSEI items.
The initial instrument contained 15 items, on which 234 respondents rated their confidence in solving year-one calculus tasks using a 100-point rating scale. The results of the factor analysis using MRFA for factor extraction, promin rotation, and parallel analysis for retaining factors revealed a one-factor solution of the model. The final 13-item inventory was unidimensional, with all factor loadings greater than 0.42, an average communality of 0.74, and 62.55% of the common variance of the items accounted for by the latent factor, i.e., calculus self-efficacy. These results can be interpreted as evidence of construct validity in measuring students’ internal confidence in successfully solving calculus tasks. The CSEI has the following advantages over the mathematics self-efficacy scale (MSES) developed by Betz and Hackett [34] and its revisions (e.g., [48]): its concise length, task specificity, higher factor loadings, and higher communality.
Furthermore, the reliability coefficient of the CSEI was found to be 0.91 using the ordinal coefficient alpha with the formula described in [70]. This coefficient provides evidence of high internal consistency of the items in the inventory [63]. This reliability coefficient is higher than that of the mathematics task subscale of the MSES reported in [2,34], and it is within the ranges of the revised MSES reported in [14,48]. Objections to the use of the ordinal coefficient alpha for estimating scale reliability, as raised in [73], are acknowledged. However, the examples of item types provided in Chalmers’ own article are enough to justify the use of the ordinal coefficient alpha in the current study.
The results of the current study also provided an insight into the correlation between approaches to learning and calculus self-efficacy. The significant positive correlation between the deep approach and self-efficacy as well as the significant negative correlation between the surface approach and self-efficacy are indications of the predictive validity of the CSEI. This finding also confirms the hypothesis of Bandura’s self-efficacy theory [4,6] and corroborates the mediating role played by self-efficacy between learning strategies and performance reported in [16]. It is crucial to remark that no causal effect between calculus self-efficacy and approaches to learning is claimed with this finding. Rather, the results have only established a relationship between these constructs that can be explored further in future studies. The final 13-item instrument is available in English and Norwegian upon request from the corresponding author. This inventory is therefore recommended to university teachers for assessing students’ confidence in successfully solving calculus tasks.

Author Contributions

Y.F.Z. made contributions in Conceptualization, Data curation, Formal analysis, Methodology, Writing-original draft, and writing-review & editing. S.G. made contributions in Methodology, Resources, Supervision, Writing-review & editing. K.B. made contributions in Methodology, Resources, Supervision, Writing-review & editing. H.K.N. made contributions in Methodology, Resources, Supervision, Writing-review & editing.

Funding

The article processing charge (APC) was funded by the University of Agder library.

Acknowledgments

The Faculty of Engineering and Science, University of Agder, Kristiansand, Norway as well as the MatRIC, Centre for Research, Innovation and Coordination of Mathematics Teaching are acknowledged for supporting this research.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Standardized Variance/Covariance Matrix (Polychoric Correlation)

Variable    01      02      03      04      05      06      07      08      09      10      11      12      13      14      15
CSEI 01    1.000
CSEI 02    0.642   1.000
CSEI 03    0.682   0.427   1.000
CSEI 04    0.255   0.227   0.501   1.000
CSEI 05    0.365   0.210   0.409   0.368   1.000
CSEI 06    0.099   0.152   0.377   0.661   0.232   1.000
CSEI 07    0.502   0.314   0.543   0.339   0.266   0.367   1.000
CSEI 08    0.665   0.307   0.602   0.352   0.409   0.356   0.520   1.000
CSEI 09   -0.527  -0.136  -0.302   0.220   0.054   0.241  -0.064  -0.341   1.000
CSEI 10    0.463   0.328   0.499   0.366   0.435   0.323   0.383   0.612  -0.026   1.000
CSEI 11    0.373   0.253   0.477   0.492   0.266   0.440   0.517   0.376   0.200   0.398   1.000
CSEI 12    0.547   0.301   0.632   0.446   0.305   0.459   0.543   0.573  -0.014   0.459   0.756   1.000
CSEI 13    0.201   0.220   0.391   0.506   0.340   0.501   0.401   0.286   0.327   0.333   0.826   0.709   1.000
CSEI 14    0.684   0.346   0.623   0.280   0.444   0.153   0.502   0.667  -0.405   0.480   0.400   0.609   0.355   1.000
CSEI 15    0.393   0.331   0.502   0.422   0.365   0.387   0.433   0.536   0.030   0.361   0.375   0.555   0.395   0.601   1.000
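A strong common factor shows up in a correlation matrix like the one above as a large dominant eigenvalue. As a hedged illustration (this is not the minimum rank factor analysis procedure used in the study), the sketch below takes the symmetrized 4 × 4 submatrix for items 01–04 and estimates its leading eigenvalue by power iteration.

```python
# Leading eigenvalue of a small polychoric submatrix (items 01-04 above),
# estimated by power iteration. Illustrative only; the study used MRFA
# as implemented in the FACTOR program, not this procedure.
R = [
    [1.000, 0.642, 0.682, 0.255],
    [0.642, 1.000, 0.427, 0.227],
    [0.682, 0.427, 1.000, 0.501],
    [0.255, 0.227, 0.501, 1.000],
]

def leading_eigenvalue(M, iters=200):
    """Dominant eigenvalue by power iteration with max-norm scaling."""
    k = len(M)
    v = [1.0] * k
    lam = 1.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(k)) for i in range(k)]
        lam = max(abs(x) for x in w)  # current eigenvalue estimate
        v = [x / lam for x in w]      # renormalize the eigenvector estimate
    return lam

print(round(leading_eigenvalue(R), 2))
```

For these four items the leading eigenvalue is roughly 2.4 out of a maximum of 4, i.e., a single dimension already captures well over half of the submatrix variance.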

References

1. Pajares, F. Self-efficacy beliefs in academic settings. Rev. Educ. Res. 1996, 66, 543–578.
2. Hackett, G.; Betz, N.E. An exploration of the mathematics self-efficacy/mathematics performance correspondence. J. Res. Math. Educ. 1989, 20, 261–273.
3. Bandura, A. The explanatory and predictive scope of self-efficacy theory. J. Soc. Clin. Psychol. 1986, 4, 359–373.
4. Bandura, A. Perceived self-efficacy in cognitive development and functioning. Educ. Psychol. 1993, 28, 117–148.
5. Bandura, A. Self-Efficacy: The Exercise of Control; W. H. Freeman: New York, NY, USA, 1997.
6. Bandura, A. On the functional properties of perceived self-efficacy revisited. J. Manag. 2012, 38, 9–44.
7. Scherbaum, C.A.; Cohen-Charash, Y.; Kern, M.J. Measuring general self-efficacy: A comparison of three measures using item response theory. Educ. Psychol. Meas. 2006, 66, 1047–1063.
8. Jordan, K.L.; Sorby, S.; Amato-Henderson, S.; Donahue, T.H. Engineering Self-Efficacy of Women Engineering Students at Urban vs. Rural Universities. In Proceedings of the 41st ASEE/IEEE Frontiers in Education Conference, Rapid City, SD, USA, 12–15 October 2010.
9. Marra, R.M.; Schuurman, M.; Moore, C.; Bogue, B. Women Engineering Students' Self-Efficacy Beliefs—The Longitudinal Picture. In Proceedings of the American Society for Engineering Education Annual Conference and Exposition, Portland, OR, USA, 12–15 June 2005.
10. Carberry, A.R.; Lee, A.-S.; Ohland, M.W. Measuring engineering design self-efficacy. J. Eng. Educ. 2010, 99, 71–79.
11. Jaafar, W.M.W.; Ayub, A.F.M. Mathematics self-efficacy and meta-cognition among university students. Procedia-Soc. Behav. Sci. 2010, 8, 519–524.
12. Yusuf, M. The impact of self-efficacy, achievement motivation, and self-regulated learning strategies on students' academic achievement. Procedia-Soc. Behav. Sci. 2011, 15, 2623–2626.
13. Akin, A.; Kurbanoglu, I.N. The relationships between math anxiety, math attitudes, and self-efficacy: A structural equation model. Studia Psychol. 2011, 53, 263–273.
14. Pajares, F.; Miller, M.J. Mathematics self-efficacy and mathematics performances: The need for specificity of assessment. J. Couns. Psychol. 1995, 42, 190–198.
15. Peters, M.L. Examining the relationships among classroom climate, self-efficacy, and achievement in undergraduate mathematics: A multi-level analysis. Int. J. Sci. Math. Educ. 2013, 11, 459–480.
16. Roick, J.; Ringeisen, T. Students' math performance in higher education: Examining the role of self-regulated learning and self-efficacy. Learn. Individ. Differ. 2018, 65, 148–158.
17. Lin, L.; Lee, T.; Snyder, L.A. Math self-efficacy and STEM intentions: A person-centered approach. Front. Psychol. 2018, 9, 2033.
18. Kirsch, I. Early research on self-efficacy: What we already know without knowing we knew. J. Soc. Clin. Psychol. 1986, 4, 339–358.
19. Bandura, A. Self-efficacy: Toward a unifying theory of behavioral change. Psychol. Rev. 1977, 84, 191–215.
20. Kirsch, I. Self-efficacy and expectancy: Old wine with new labels. J. Personal. Soc. Psychol. 1985, 49, 824–830.
21. Eastman, C.; Marzillier, J.S. Theoretical and methodological difficulties in Bandura's self-efficacy theory. Cogn. Ther. Res. 1984, 8, 213–229.
22. Bandura, A. Recycling misconceptions of perceived self-efficacy. Cogn. Ther. Res. 1984, 8, 231–255.
23. Maddux, J.E.; Sherer, M.; Rogers, R.W. Self-efficacy expectancy and outcome expectancy: Their relationship and their effects on behavioral intentions. Cogn. Ther. Res. 1982, 6, 207–211.
24. Maddux, J.E.; Stanley, M.A. Self-efficacy theory in contemporary psychology: An overview. J. Soc. Clin. Psychol. 1986, 4, 244–255.
25. Bandura, A. Self-efficacy mechanism in human agency. Am. Psychol. 1982, 37, 122–147.
26. Jeng, C.; Braun, L.T. Bandura's self-efficacy theory: A guide for cardiac rehabilitation nursing practice. J. Holist. Nurs. 1994, 12, 425–436.
27. Montcalm, D.M. Applying Bandura's theory of self-efficacy to the teaching of research. J. Teach. Soc. Work 1999, 19, 93–107.
28. Artino, A.R. Academic self-efficacy: From educational theory to instructional practice. Perspect. Med. Educ. 2012, 1, 76–85.
29. Kardong-Edgren, S. Bandura's self-efficacy theory…Something is missing. Clin. Simul. Nurs. 2013, 9, e327–e328.
30. Hendricks, K.S. The sources of self-efficacy: Educational research and implications for music. Update Appl. Res. Music Educ. 2016, 35, 32–38.
31. Wallace, B.; Kernozek, T. Self-efficacy theory applied to undergraduate biomechanics instruction. J. Hosp. Leis. Sport Tour. Educ. 2017, 20, 10–15.
32. Sheu, H.-B.; Lent, R.W.; Miller, M.J.; Penn, L.T.; Cusick, M.E.; Truong, N.N. Sources of self-efficacy and outcome expectations in science, technology, engineering, and mathematics domains: A meta-analysis. J. Vocat. Behav. 2018, 109, 118–136.
33. Usher, E.L.; Pajares, F. Sources of self-efficacy in mathematics: A validation study. Contemp. Educ. Psychol. 2009, 34, 89–101.
34. Betz, N.E.; Hackett, G. The relationship of mathematics self-efficacy expectations to the selection of science-based college majors. J. Vocat. Behav. 1983, 23, 329–345.
35. Wang, Y.-L.; Liang, J.-C.; Lin, C.-Y.; Tsai, C.-C. Identifying Taiwanese junior-high school students' mathematics learning profiles and their roles in mathematics learning self-efficacy and academic performance. Learn. Individ. Differ. 2017, 54, 92–101.
36. Chan, H.Z.; Yen Abdullah, M.N.L. Validity and reliability of the mathematics self-efficacy questionnaire (MSEQ) on primary school students. Pertanika J. Soc. Sci. Humanit. 2018, 26, 2161–2177.
37. Wang, X.; Lee, Y.S. Investigating the psychometric properties of a new survey instrument measuring factors related to upward transfer in STEM fields. Rev. High. Educ. 2019, 42, 339–384.
38. Mamaril, N.A.; Usher, E.L.; Li, C.R.; Economy, D.R.; Kennedy, M.S. Measuring undergraduate students' engineering self-efficacy: A validation study. J. Eng. Educ. 2016, 105, 366–395.
39. You, S.; Dang, M.; Lim, S.A. Effects of student perceptions of teachers' motivational behavior on reading, English, and mathematics achievement: The mediating role of domain specific self-efficacy and intrinsic motivation. Child Youth Care Forum 2016, 45, 221–240.
40. Pintrich, P.R.; Smith, D.A.F.; Garcia, T.; McKeachie, W.J. Reliability and predictive validity of the motivated strategies for learning questionnaire (MSLQ). Educ. Psychol. Meas. 1993, 53, 801–813.
41. Lin, T.-J.; Tsai, C.-C. A multi-dimensional instrument for evaluating Taiwanese high school students' science learning self-efficacy in relation to their approaches to learning science. Int. J. Sci. Math. Educ. 2013, 11, 1275–1301.
42. Skaalvik, E.M.; Federici, R.A.; Klassen, R.M. Mathematics achievement and self-efficacy: Relations with motivation for mathematics. Int. J. Educ. Res. 2015, 72, 129–136.
43. Lent, R.W.; Lopez, F.G.; Bieschke, K.J. Mathematics self-efficacy: Sources and relation to science-based career choice. J. Couns. Psychol. 1991, 38, 424–430.
44. Joët, G.; Usher, E.L.; Bressoux, P. Sources of self-efficacy: An investigation of elementary school students in France. J. Educ. Psychol. 2011, 103, 649–663.
45. Yurt, E. The predictive power of self-efficacy sources for mathematics achievement. Educ. Sci. 2014, 39, 176.
46. Zientek, L.R.; Fong, C.J.; Phelps, J.M. Sources of self-efficacy of community college students enrolled in developmental mathematics. J. Furth. High. Educ. 2019, 43, 183–200.
47. Toland, M.D.; Usher, E.L. Assessing mathematics self-efficacy. J. Early Adolesc. 2015, 36, 932–960.
48. Kranzler, J.H.; Pajares, F. An exploratory factor analysis of the mathematics self-efficacy scale revised (MSES-R). Meas. Eval. Couns. Dev. 1997, 29, 215–228.
49. Borgonovi, F.; Pokropek, A. Seeing is believing: Task-exposure specificity and the development of mathematics self-efficacy evaluations. J. Educ. Psychol. 2018, 111, 268–283.
50. Bandura, A. Guide for constructing self-efficacy scales. In Self-Efficacy Beliefs of Adolescents; Pajares, F., Urdan, T., Eds.; Information Age Publishing: Greenwich, CT, USA, 2006; Volume 5, pp. 307–337.
51. Field, A. Discovering Statistics Using IBM SPSS, 5th ed.; SAGE Publications Ltd.: London, UK, 2018.
52. Gagne, P.; Hancock, G.R. Measurement model quality, sample size, and solution propriety in confirmatory factor models. Multivar. Behav. Res. 2006, 41, 65–83.
53. Biggs, J.B.; Kember, D.; Leung, D.Y.P. The revised two factor study process questionnaire: R-SPQ-2F. Br. J. Educ. Psychol. 2001, 71, 133–149.
54. Zakariya, Y.F.; Bjørkestøl, K.; Nilsen, H.K.; Goodchild, S.; Lorås, M. University students' learning approaches: An adaptation of the revised two-factor study process questionnaire to Norwegian. Stud. Educ. Eval. 2019, under review.
55. Zakariya, Y.F. Study approaches in higher education mathematics: Investigating the statistical behaviour of an instrument translated into Norwegian. Educ. Sci. 2019, under review.
56. Raykov, T.; Marcoulides, G.A. Scale reliability evaluation under multiple assumption violations. Struct. Equ. Model. A Multidiscip. J. 2016, 23, 302–313.
57. Muthén, B.O.; Kaplan, D. A comparison of some methodologies for the factor analysis of non-normal Likert variables: A note on the size of the model. Br. J. Math. Stat. Psychol. 1992, 45, 19–30.
58. Lorenzo-Seva, U.; Ferrando, P.J. FACTOR 9.2: A comprehensive program for fitting exploratory and semiconfirmatory factor analysis and IRT models. Appl. Psychol. Meas. 2013, 37, 497–498.
59. Holgado-Tello, F.P.; Chacón-Moscoso, S.; Barbero-García, I.; Vila-Abad, E. Polychoric versus Pearson correlations in exploratory and confirmatory factor analysis of ordinal variables. Qual. Quant. 2008, 44, 153–166.
60. Shapiro, A.; ten Berge, J.M.F. Statistical inference of minimum rank factor analysis. Psychometrika 2002, 67, 79–94.
61. Timmerman, M.E.; Lorenzo-Seva, U. Dimensionality assessment of ordered polytomous items with parallel analysis. Psychol. Methods 2011, 16, 209–220.
62. Buja, A.; Eyuboglu, N. Remarks on parallel analysis. Multivar. Behav. Res. 1992, 27, 509–540.
63. Baglin, J. Improving your exploratory factor analysis for ordinal data: A demonstration using FACTOR. Pract. Assess. Res. Eval. 2014, 19, 1–15.
64. Lorenzo-Seva, U. Promin: A method for oblique factor rotation. Multivar. Behav. Res. 1999, 34, 347–365.
65. Pituch, K.A.; Stevens, J.P. Applied Multivariate Statistics for the Social Sciences, 6th ed.; Routledge: New York, NY, USA; London, UK, 2016.
66. Sijtsma, K. On the use, the misuse, and the very limited usefulness of Cronbach's alpha. Psychometrika 2009, 74, 107–120.
67. Schmitt, N. Uses and abuses of coefficient alpha. Psychol. Assess. 1996, 8, 350–353.
68. Revelle, W.; Zinbarg, R.E. Coefficients alpha, beta, omega, and the glb: Comments on Sijtsma. Psychometrika 2008, 74, 145–154.
69. Zinbarg, R.E.; Revelle, W.; Yovel, I.; Li, W. Cronbach's α, Revelle's β, and McDonald's ωH: Their relations with each other and two alternative conceptualizations of reliability. Psychometrika 2005, 70, 123–133.
70. Zumbo, B.D.; Gadermann, A.M.; Zeisser, C. Ordinal versions of coefficients alpha and theta for Likert rating scales. J. Mod. Appl. Stat. Methods 2007, 6, 21–29.
71. McDonald, R.P. Factor Analysis and Related Methods; Erlbaum: Hillsdale, NJ, USA, 1985.
72. Gadermann, A.M.; Guhn, M.; Zumbo, B.D. Estimating ordinal reliability for Likert-type and ordinal item response data: A conceptual, empirical, and practical guide. Pract. Assess. Res. Eval. 2012, 17, 3.
73. Chalmers, R.P. On misconceptions and the limited usefulness of ordinal alpha. Educ. Psychol. Meas. 2018, 78, 1056–1071.
Figure 1. Normal distribution of scores on CSEI and R-SPQ-2F scales.
Table 1. Sample items on the calculus self-efficacy inventory (CSEI).

SN   How Confident are You that You can Solve Each of These Problems Right Now?   Confidence (0–100)
3    Calculate the limit:
         lim_{x→1} (1 - cos(1 - x²)) / (x² - 2x + 1)
7    A curve is given by x = y² - x²y - 1. Use implicit differentiation to find y′.
11   Evaluate the integral:
         ∫ (x - 7) / (x² + x - 6) dx
14   A surface is bounded by the function f(x) = (1/3)e^(x/2), where 0 ≤ x ≤ 2, and the x-axis. A vessel is made by rotating the surface about the x-axis. Find the resulting volume.
Table 2. Rotated and unrotated factor loadings and item communalities.

CSEI       First Analysis     Second Analysis        Third Analysis
           F1      F2         F1      Communality    F1      Communality
Item 01    ---     0.99       0.73    1.00           ---     ---
Item 02    ---     0.51       0.50    0.82           0.43    0.42
Item 03    ---     0.61       0.78    0.72           0.78    0.83
Item 04    0.79    ---        0.62    0.77           0.64    0.70
Item 05    0.34    ---        0.51    0.48           0.51    0.50
Item 06    0.83    ---        0.55    0.80           0.60    0.89
Item 07    0.40    0.35       0.66    0.60           0.65    0.55
Item 08    ---     0.71       0.76    0.82           0.73    0.75
Item 09    0.85   -1.04       ---     ---            ---     ---
Item 10    0.35    0.38       0.65    0.75           0.65    0.74
Item 11    0.38    ---        0.74    0.95           0.76    0.96
Item 12    0.69    ---        0.84    0.95           0.85    0.91
Item 13    1.02   -0.30       0.68    0.98           0.72    0.90
Item 14    ---     0.80       0.74    0.77           0.72    0.77
Item 15    0.45    ---        0.68    0.77           0.68    0.66
Table 3. Parallel analysis—minimum rank factor analysis (MRFA) results based on the polychoric correlation matrix.

Variable   Real-Data % of Variance   Mean of Random % of Variance   95th Percentile of Random % of Variance
1          50.09 **                  17.00                          19.33
2          17.04 *                   15.25                          17.18
3          6.46                      13.75                          15.24
4          5.47                      12.15                          13.33
5          5.27                      10.51                          11.84
6          4.25                      8.81                           10.19
7          4.18                      7.37                           8.79
8          2.92                      5.99                           7.26
9          2.17                      4.60                           5.83
10         1.23                      3.24                           4.38

** Advised number of dimensions when the 95th percentile is considered: 1. * Advised number of dimensions when the mean is considered: 2.
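The retention logic behind Table 3 compares the variance explained in the real data against thresholds obtained from random data of the same size. The sketch below is a simplified, PCA-style version of that logic in plain Python (the study itself used MRFA-based parallel analysis in the FACTOR program): it checks whether the first eigenvalue of the real data (5.98, Table 4) exceeds the approximate 95th percentile of leading eigenvalues from random normal data with the same dimensions (95 cases, 13 items).

```python
# Simplified parallel-analysis check for the first factor. Illustrative
# sketch only; not the MRFA-based procedure used in the paper.
import random

def corr_matrix(cols):
    """Pearson correlation matrix; cols is a list of equal-length variables."""
    n = len(cols[0])
    means = [sum(c) / n for c in cols]
    sds = [(sum((v - m) ** 2 for v in c) / n) ** 0.5 for c, m in zip(cols, means)]
    k = len(cols)
    R = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            cov = sum((cols[i][t] - means[i]) * (cols[j][t] - means[j])
                      for t in range(n)) / n
            R[i][j] = cov / (sds[i] * sds[j])
    return R

def leading_eigenvalue(M, iters=100):
    """Dominant eigenvalue by power iteration with max-norm scaling."""
    k = len(M)
    v = [1.0] * k
    lam = 1.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(k)) for i in range(k)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

random.seed(1)
n_cases, n_items, reps = 95, 13, 50
sims = sorted(
    leading_eigenvalue(corr_matrix(
        [[random.gauss(0, 1) for _ in range(n_cases)] for _ in range(n_items)]))
    for _ in range(reps)
)
threshold = sims[int(0.95 * reps) - 1]  # approximate 95th percentile
real_first_eigenvalue = 5.9848          # from Table 4
print(real_first_eigenvalue > threshold)  # True: retain the first factor
```

The random thresholds hover around 2; the real first eigenvalue of 5.98 clears them easily, which is the PCA analogue of the one-factor advice in Table 3.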
Table 4. Eigenvalues and proportion of explained variance.

Variable   Eigenvalue   Proportion of Common Variance   Cumulative Proportion of Variance   Cumulative Percentage of Variance
1          5.9848       0.6255                          0.6255                              62.55
2          1.2650       0.1322
3          0.7748       0.0810
4          0.4287       0.0448
5          0.3477       0.0363
6          0.3258       0.0341
7          0.1981       0.0207
8          0.1438       0.0150
9          0.0911       0.0095
10         0.0087       0.0009
11         0.0001       0.0000
12         0.0000       0.0000
13         0.0000       0.0000
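The proportion column of Table 4 can be verified directly: each eigenvalue divided by the total common variance (the sum of all thirteen eigenvalues) reproduces the reported proportions. A quick check using the tabulated values:

```python
# Reproduce the "Proportion of Common Variance" column of Table 4
eigenvalues = [5.9848, 1.2650, 0.7748, 0.4287, 0.3477, 0.3258,
               0.1981, 0.1438, 0.0911, 0.0087, 0.0001, 0.0000, 0.0000]
total_common_variance = sum(eigenvalues)
proportions = [ev / total_common_variance for ev in eigenvalues]
print(round(proportions[0], 4))  # ~0.6255, i.e., 62.55% for the first factor
```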
Table 5. Ordinal coefficient alpha reliability parameters.

CSEI       λ       c       u
Item 02    0.43    0.42    0.58
Item 03    0.78    0.83    0.17
Item 04    0.64    0.70    0.30
Item 05    0.51    0.50    0.50
Item 06    0.60    0.89    0.11
Item 07    0.65    0.55    0.45
Item 08    0.73    0.75    0.25
Item 10    0.65    0.74    0.26
Item 11    0.76    0.96    0.04
Item 12    0.85    0.91    0.09
Item 13    0.72    0.90    0.10
Item 14    0.72    0.77    0.23
Item 15    0.68    0.66    0.34
Average    0.67    0.74    0.26
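As described in [70,72], ordinal alpha is obtained by applying the coefficient alpha formula to the polychoric correlation matrix rather than to Pearson correlations. A minimal sketch of the standardized form, assuming a complete k × k correlation matrix R is already available (the toy matrix below is invented for illustration):

```python
def standardized_alpha(R):
    """Standardized coefficient alpha from a k x k correlation matrix.
    Applied to a polychoric matrix, this yields ordinal alpha [70,72]."""
    k = len(R)
    off_diagonal = [R[i][j] for i in range(k) for j in range(k) if i != j]
    mean_r = sum(off_diagonal) / len(off_diagonal)  # average inter-item correlation
    return k * mean_r / (1 + (k - 1) * mean_r)      # Spearman-Brown form of alpha

# Toy 3-item matrix with all inter-item correlations equal to 0.5
R_toy = [[1.0, 0.5, 0.5],
         [0.5, 1.0, 0.5],
         [0.5, 0.5, 1.0]]
print(standardized_alpha(R_toy))  # -> 0.75
```

Feeding the 13-item polychoric matrix from Appendix A (items 01 and 09 removed) into such a routine is the kind of computation that underlies the reported ordinal alpha of 0.90.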
Table 6. Descriptive statistics and Shapiro–Wilk's test of normality results.

                    Descriptive Statistics                        Skewness          Kurtosis          Shapiro–Wilk
                    N    Min.    Max.    Mean    Std. Dev.   Stat.   Std. Err.   Stat.   Std. Err.   Stat.   df    Sig.
Deep approach       95   1.20    4.70    2.82    0.68        0.03    0.25        0.10    0.49        0.99    95    0.82
Surface approach    95   1.00    4.00    2.42    0.63        0.15    0.25       -0.49    0.49        0.99    95    0.65
CSEI                95   13.00   65.00   46.07   11.43      -0.92    0.25        0.86    0.49        0.94    95    0.00 *

* Significant, p < 0.05.
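The standard errors in Table 6 depend only on the sample size. A short check using the usual formulas for the standard errors of sample skewness and excess kurtosis (assumed here, since the paper does not state which formulas SPSS applied) reproduces the tabulated 0.25 and 0.49 for n = 95:

```python
def se_skewness(n):
    # Standard error of sample skewness for a sample of size n
    return (6.0 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3))) ** 0.5

def se_kurtosis(n):
    # Standard error of sample excess kurtosis, derived from se_skewness
    return 2.0 * se_skewness(n) * ((n * n - 1) / ((n - 3) * (n + 5))) ** 0.5

print(round(se_skewness(95), 2), round(se_kurtosis(95), 2))  # -> 0.25 0.49
```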

