Article

An Examination of the Structural Validity of Instruments Assessing PE Teachers’ Beliefs, Intentions, and Self-Efficacy towards Teaching Physically Active Classes

School of Education, Faculty of Arts and Education, Deakin University, Burwood 3125, Australia
Educ. Sci. 2023, 13(8), 768; https://doi.org/10.3390/educsci13080768
Submission received: 1 June 2023 / Revised: 21 July 2023 / Accepted: 24 July 2023 / Published: 27 July 2023

Abstract

Schools, and in particular health and physical education (HPE) classes, have the potential to engage children and adolescents in health-enhancing physical activity. HPE teachers may enable or constrain this behaviour through appropriate classroom strategies and pedagogies that enhance not only student learning but also students' engagement in physical activity. Such practices are a result of a teacher's curricula beliefs, self-efficacy, and intention to teach physical activity as part of their HPE classes. The purpose of this study was to investigate the structural validity of six instruments designed to assess teachers' beliefs, self-efficacy, and intentions regarding physical activity in physical education using the Rasch Measurement Model (RMM). Data suggest that three of the six instruments demonstrated multidimensional characteristics (curriculum beliefs, self-efficacy, and subjective norm). Model fit values ranged from 0.5 to 1.61 for infit and from −2.58 to 3.20 for outfit statistics. Differential item functioning was present on only one item in the curriculum beliefs instrument. Person reliability was ≥0.55 and item reliability was ≥0.73. Qualitative interpretation of Wright maps demonstrated a very good spread of items. Overall, each instrument demonstrated appropriate structural validity when weighed against all of the components of the RMM.

1. Introduction

There has been increasing interest in teachers' beliefs and self-efficacy and how these impact student learning and engagement during school physical education classes [1,2,3,4]. According to Kern, Graber, Woods, and Templin [5], beliefs and their acquisition drive the instructional decision making of teachers, including when they attempt to make pedagogical change. Some examples of recent work on teachers' beliefs within physical education include studies that have examined the beliefs of teachers and the influence of different research interventions on these beliefs [2], teacher socialisation [6], inclusion [7], and technology [8]. Ennis [9] has highlighted that the instructional context is complex and may impact and influence teachers' ability to teach in a manner consistent with their beliefs. Contextually, within the state of Victoria, there has been a focus on the development of physical activity in line with the jurisdictional government policy known as Active Schools, Active Kids, Active Communities (hereafter the Active Schools Framework) [10]. Within the Active Schools Framework, physical education is positioned as one of six key priorities in the promotion of physical activity for physical, social, and psychological benefits. Two objectives, engagement in movement and physical activity and the development of knowledge, skills, and understandings regarding physical activity, are important in the development of children's and adolescents' lifelong physical activity participation and have been collectively referred to as physical literacy [11,12]. Physical literacy, according to Whitehead [13], is defined "as the motivation, confidence, physical competence, knowledge and understanding to value and take responsibility for engagement in physical activities for life" (p. 11). It is the latter part of this definition, knowledge and understanding, that is the focus of this article. In other words, what teachers believe and how they act on this belief has the potential to impact student engagement and student learning. It is argued that the Active Schools Framework and the Victorian Curriculum: Health and Physical Education (VCHPE) highlight the importance of learning about and participating in physical activity, and it is important to understand teachers' beliefs and self-efficacy towards teaching physically active classes given that their beliefs may impact their practices. Given that teachers are responsible for the development of lessons and units of work, understanding the intentions of teachers and how they promote physical activity through their physical education classes are important research areas.

Instruments Assessing Teacher Beliefs, Intentions, and Self-Efficacy

Jewett, Bain, and Ennis [14] suggested that teachers possess unique value systems that interact with diverse knowledge bases (see [15,16]), which in turn impact their decision making within the lessons that they teach. According to Kulinna and Cothran [3], "perhaps the largest body of literature specific to beliefs in PE is related to teachers' value orientations" (p. 531). In presenting the values orientation inventory, Ennis and Hooper [17] drew on diverse literature in education and psychology to formulate five value orientations: discipline mastery, learning processes, self-actualisation, ecological identity, and social responsibility. Discipline mastery, as one value orientation, comprises the subject matter of physical education (for example, physical activity participation, development of motor competences, and learning game strategies and concepts). However, the instrument used to assess these values collapses much of the disciplinary mastery content together, including fitness, movement competency, physical activity participation, and exercise testing. To provide more focus on content related to physical activity, as opposed to movement competency, Kulinna and Silverman [18,19] developed a valid and reliable instrument, underpinned by educational and psychological theories (e.g., social cognitive theory, self-efficacy), that was capable of assessing a teacher's attitude towards physical activity and fitness. Subsequent instrument validation work on the curricular outcome goals for physical education has been conducted by Adamakis and colleagues [20,21,22,23] and by Guan, McBride, and Xiang [24]. These studies confirmed the presence of a four-factor model (physical activity and fitness, self-actualisation, motor skill development, and social development) through internal consistency and confirmatory factor analysis. In terms of curricular outcomes, Guan et al. [24] examined Chinese teachers' attitudes toward physical activity and fitness and found that teachers scored significantly higher on self-actualisation and on physical activity and fitness than on motor or social skills development. The study by Adamakis et al. [23] of Greek PE students found that physical activity and fitness was the most important curricular goal, whereas motor skill development was ranked the lowest.
According to social cognitive theory (SCT) [25], the choices teachers make in their classes relate not only to their curricula beliefs but also to their self-efficacy. The current study examined the role that SCT, and in particular self-efficacy theory [25], plays in the promotion of beneficial physical activity during school health and physical education. According to Kulinna and Cothran [3], "self-efficacy is a subset of an individual's beliefs specific to one's ability to initiate, engage, and complete a behavior [sic] at a certain level of competence" (p. 537). Physical education teachers who feel more able to promote physical activity through their pedagogies in order to achieve positive, beneficial outcomes for students are more likely to do so than their peers who lack self-efficacy in the promotion of physical activity [26]. Developing an understanding of teachers' intentions to teach classes with higher physical activity levels is therefore important, given that teachers have the ability to promote or constrain the physical activity of students within their classes.
Previous content and construct validation work examining teachers' curricula beliefs [18,19], self-efficacy [26,27], and intentions [28,29] has demonstrated that these instruments possess valid and reliable psychometric properties. The commonality of all of these development and validation studies is that they employed traditional classical test theory (e.g., factor analytical techniques). Martin and Kulinna [27] developed and validated the Physical Education Teachers' Physical Activity Self-Efficacy Scale (PETPAS) with a sample (n = 309) of primary-/elementary- and middle-school physical education teachers. Originally, they developed a 26-item scale around four hypothesised factors: student, space, time, and institution. Following confirmatory factor analysis and the use of goodness-of-fit modelling, 10 items were removed, leaving the PETPAS instrument with 16 items. The hypothesised factor structure was confirmed and an acceptable fit to the data was achieved. Gencay's [26] validation of the PETPAS with a Turkish sample demonstrated that it was a valid and reliable instrument possessing a four-factor structure similar to the original. Findings suggest that gender differences did exist in that cohort, with female teachers perceiving space, time, and institution as less significant barriers to physical activity self-efficacy than male teachers.
Martin, Kulinna, Eklund, and Reed [29] developed the Physical Education Teachers' Intentions to Teach Physically Active Physical Education (PETITPAPE) instrument. Based on the theories of reasoned action, planned behaviour, and self-efficacy, the authors developed items related to behavioural intention, attitude, control, and subjective norm. Psychometric properties were examined using descriptive statistics and hierarchical correlation/regression statistics with a sample (n = 187) of physical education teachers, and the instruments demonstrated appropriate validity. Results demonstrated that while teachers possessed positive characteristics towards teaching high amounts of physical activity, their motivation did not match their actions. Both Gencay [26] and Martin and Kulinna [27] highlighted that validation work is an ongoing and continuous process in the development of psychometrically valid instruments for use in research and in clinical (i.e., teaching) settings. To continue this validation work, the current study utilised the Rasch Measurement Model (RMM), a form of item response theory, to further explore the structural validity of several instruments designed to assess PE teachers' beliefs, self-efficacy, and intentions towards physical activity in physical education. According to Newton and Shaw [30], structural validity can take the form of differential item functioning and dimensionality studies.
Valid and reliable measures of teacher beliefs, intentions, and self-efficacy in the promotion of physically active classes during school physical education are needed. This is due to the attention and focus by governments and researchers on the importance and promotion of physical activity behaviours amongst children and adolescents for both educative and health outcomes [10,31]. To date, only a few studies in the literature have examined these constructs [18,19,20,23]. Studying them is an important research area given the importance of students' physical activity engagement during school health and physical education, as well as the role of teachers' pedagogical practices in affording or constraining such engagement. Given this, the purpose of the study, in line with Martin and Kulinna's [27] call to continually examine the psychometric properties of instruments assessing these characteristics in teacher samples, was to examine their structural validity using the RMM in an Australian sample. To my knowledge, no published studies have examined the psychometric properties of these instruments using Rasch modelling. Therefore, the primary aim of this study was to examine the structural validity of six instruments related to the beliefs, self-efficacy, and intentions of teachers (curricula beliefs, self-efficacy, behavioural intention, behavioural control, attitude, and subjective norm) towards teaching physical activity during school physical education classes.

2. Materials and Methods

2.1. Recruitment

Three methods of participant recruitment were used: (a) the author posted an invitation to contribute to a research study on physical activity in physical education on social media (e.g., Twitter); (b) direct email contact with physical education teachers; and (c) an electronic advertisement in the e-newsletter of the professional learning association for physical education in Victoria (Australian Council for Health, Physical Education and Recreation [ACHPER]). Teachers who agreed to volunteer for the study were provided with a URL to a Qualtrics survey. Prior to completing the instruments, teachers were provided with an explanatory statement and an electronic consent form. The design of the Qualtrics survey prohibited participants from answering the instrument items without first agreeing to consent to the research. The study was approved by the University's Human Research Ethics Committee. Data from the instruments were analysed to examine trends and differences in teacher subgroups.

2.2. Participants and Setting

Originally, one-hundred-and-twenty-four teachers expressed interest in this study via completion of the electronic consent form. However, forty-seven participants did not complete the questionnaire and were therefore removed from the participant pool. This study reports on the analysis of data from 77 physical education teachers (42 males and 35 females) from the state of Victoria, Australia, who completed all questions on the instruments involved. The average participant age was 35.2 years (SD ± 8.5), and the average teaching experience was 10.6 years (SD ± 7.6), with a range from 1 to 33 years of experience. Most of the teachers surveyed had completed a bachelor's degree in Education/Teaching (n = 48, 62%) or an Applied Science/Human Movement/Exercise Science degree (n = 28, 36%), with one teacher holding a master's by research degree (n = 1, 1%). Thirty-nine percent of the sample taught only physical education, with the remaining teachers teaching across a diverse range of second and third teaching methods.

2.3. Instruments

2.3.1. Curriculum Beliefs Instrument

The instrument used in this study was initially constructed by Kulinna and Silverman [18]. Participants completed the Physical Education Teachers' Attitudes toward Teaching Physical Activity and Fitness instrument (curriculum beliefs instrument), which was loosely based on the values orientation inventory (VOI) developed by Ennis and Hooper [17]. The instrument is a 36-item, four-factor scale designed with the primary objective of measuring physical education teachers' attitudes towards teaching physical activity. The four domains include: (a) physical activity and fitness, (b) self-actualisation, (c) motor skill development, and (d) social development. According to Kulinna and Silverman [18,19], each of the four domains focuses on a different aspect and value orientation for physical education curricula. For example, the physical activity and fitness domain focuses on the importance of physical activity participation and engagement leading to lifelong physical activity participation. Self-actualisation, also called individual development, aims to develop students' self-esteem and self-confidence as well as their enjoyment of and self-efficacy toward physical activity participation. The motor skill development domain is focused on the development of motor competence and proficiency for successful engagement in a range of sports and physical activities. The final domain, social development, is concerned with the development of social skills, the development of an understanding of equal opportunities, and appreciating differences among individuals.

2.3.2. Self-Efficacy Instrument

The Physical Education Teachers' Physical Activity Self-Efficacy Scale (PETPAS) was designed by Martin and Kulinna [27] to measure the self-efficacy of physical education teachers in the promotion of physically active lessons during school health and physical education. The original scale possessed 26 items across four subdomains and was theoretically based on Bandura's [25] self-efficacy theory, a subset of social cognitive theory. The instrument utilised items across four subscales/domains: 6 items for the student domain (e.g., "my students do not enjoy being physically active during my classes"); 6 items for the space domain (e.g., "additional students are regularly added to my physical education classes"); 7 items for the time domain (e.g., "I spend too much time on class management"); and 7 items for the institution domain (e.g., "the budget for my physical education program is inadequate"). Participants were asked to read each statement and respond on a 4-point Likert scale (1 = strongly disagree with the statement; 4 = strongly agree with the statement).

2.3.3. Instruments to Assess Behavioural Intention, Behavioural Control, Attitude and Subjective Norm

For the purposes of this study, four instruments were used to collect data from PE teachers associated with their intention to offer physically active physical education classes. Each instrument was designed by Martin and Kulinna [27] in line with social cognitive theory and the theory of planned behaviour, and together they comprised the psychological constructs of behavioural intention (5 items), attitude (7 items), behavioural control (3 items), and subjective norm (8 items). Each instrument demonstrated acceptable reliability under classical test theory (Cronbach's α). Behavioural intention comprised 5 items on a 4-point Likert scale, α = 0.928; attitude was made up of 7 items on a 5-point scale, α = 0.912; behavioural control was made up of 3 items, α = 0.921; and subjective norm possessed 8 items, α = 0.811.
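For readers less familiar with this classical test theory statistic, the sketch below shows, in general terms, how Cronbach's α is computed from a persons-by-items matrix of Likert responses. The function and variable names are illustrative only and are not drawn from the original validation studies.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a persons-by-items array of Likert responses
    (generic sketch; item_scores is a hypothetical input array)."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                               # number of items
    item_variances = item_scores.var(axis=0, ddof=1)       # per-item sample variance
    total_variance = item_scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```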

2.4. Data Analysis

The Statistical Package for Social Sciences ([SPSS] Version 28.0.1.0, Chicago, IL, USA) was used for data entry, storage, and retrieval. The RMM computer program WINSTEPS (version 5.3.2.0) was used to complete the Rasch model analysis, using the Rasch rating scale model [32].

2.5. Rasch Analysis

Rasch model analysis is a form of item response theory that examines the measurement properties of instruments (e.g., dimensionality, hierarchical ordering, and differential item functioning [DIF]) and is increasingly used within the health sciences, exercise science, and physical education. A rationale and justification for its increasing use is that it conforms with the idea of fundamental measurement [32], that is, the measurement of a latent, underlying construct, and in doing so it overcomes several limitations of classical test theory approaches (e.g., factor analysis). Structural validity can be ascertained by considering the fit to the model of both the items within the scale and the participants [33]. The model is based on the principle that only two attributes determine how an individual responds to a scale item: the ability of the participant and the difficulty of the item, each expressed as an estimate on the underlying latent trait [32,34]. Analysis using the Rasch model is therefore a well-recognised method for examining whether the items on an instrument are all measuring the same skill, attribute, trait, or dimension [32].
The data were analysed using the Rasch Measurement Model [32,34,35]. The RMM is one type of statistical model that can be used to evaluate the structural validity and internal structure of an instrument [36]. The purpose of this paper was to examine the validity and reliability of six instruments that sought to examine the intentions of PE teachers in teaching physically active PE classes. Five approaches were used to determine this: (a) unidimensionality, (b) model fit statistics, (c) item/person reliability and separation, (d) item invariance/differential item functioning (DIF), and (e) a qualitative representation of the item–person (Wright) map.
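For reference, a minimal formal statement of the rating scale (Andrich) formulation of the Rasch model applied by WINSTEPS is given below. The notation is generic rather than drawn from the instruments themselves: theta_n is the person measure, delta_i the item difficulty, tau_j the rating-scale thresholds (with tau_0 = 0), and M the highest response category.

```latex
P(X_{ni} = k) =
  \frac{\exp\left[\sum_{j=0}^{k}\left(\theta_n - \delta_i - \tau_j\right)\right]}
       {\sum_{m=0}^{M}\exp\left[\sum_{j=0}^{m}\left(\theta_n - \delta_i - \tau_j\right)\right]},
  \qquad k = 0, 1, \ldots, M
```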

2.5.1. Unidimensionality

One of the core aspects of the Rasch measurement model is the idea that measurement focuses on one attribute or dimension at a time, known as unidimensionality. This is important because it means that other, unexpected dimensions do not confound the results. To assess this, a Rasch residual-based principal components analysis (PCA) was completed. This process is not the same as conventional factor analysis, as the components come from the residuals, or contrasts, and not from the original data [34]. According to Williams, Brown, and Boyle [37], at a minimum, 50% of the variance should be explained by the measures when a Rasch residual-based PCA is undertaken; this value represents how the items load on the respective factors. If items do not load on the factor under examination, or if large percentages of item residuals are not accounted for, the scale may exhibit multidimensionality. As highlighted by Wuang, Lin, and Su [38], first contrasts that explain more than five per cent of the variance or possess eigenvalues >2 typically indicate more than one dimension, or associations (dependencies) within the data.
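To illustrate the logic of this check (a sketch only, not the WINSTEPS procedure itself), standardised Rasch residuals can be submitted to a PCA and the eigenvalue of the first contrast inspected. The input arrays observed, expected, and variance are hypothetical persons-by-items matrices of observed ratings, model-expected ratings, and model variances.

```python
import numpy as np

def first_contrast_eigenvalue(observed, expected, variance):
    """Eigenvalue of the first contrast from a PCA of standardised Rasch
    residuals (sketch of the dimensionality screen described above)."""
    # Standardised residuals: (observed - expected) / sqrt(model variance)
    z = (observed - expected) / np.sqrt(variance)
    # Inter-item correlations of the residuals: the components come from
    # the residuals (contrasts), not from the original ratings
    r = np.corrcoef(z, rowvar=False)
    # Largest eigenvalue of the residual correlation matrix = first contrast
    return np.linalg.eigvalsh(r)[-1]

# Per the criteria cited above, a first contrast with an eigenvalue > 2
# (or explaining more than about 5% of the variance) flags a possible
# additional dimension.
```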

2.5.2. Model Data Fit

Model data fit evaluates how well the items fit the Rasch model. As part of a construct validation process, model data fit is initially assessed via infit mean square statistics (Infit MnSq) and outfit mean square statistics (Outfit MnSq). Infit statistics are sensitive to unexpected response patterns on items well targeted to respondents (for example, difficult items answered by higher performing individuals and easier items responded to by lower performing individuals), whereas outfit statistics are sensitive to unexpected responses on poorly targeted items (for example, difficult items answered by lower performing individuals) [39]. They are evaluated against statistical ranges of 0.70–1.30 for infit/outfit MnSq and infit/outfit standardised values (ZSTD) between −2.0 and 2.0. For rating scales using Likert/survey formats, Bond and Fox [32] recommend item MnSq ranges between 0.6 and 1.4. Items that fall outside these ranges do not fit the model and, in most cases, are discarded.
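The sketch below indicates how infit and outfit mean squares are typically computed from residuals; it reuses the same hypothetical observed, expected, and variance arrays as above and is illustrative only.

```python
import numpy as np

def item_fit_mean_squares(observed, expected, variance):
    """Infit (information-weighted) and outfit (unweighted) mean squares
    per item, from persons-by-items arrays (illustrative sketch)."""
    residual = observed - expected
    z_squared = residual**2 / variance     # squared standardised residuals
    outfit_mnsq = z_squared.mean(axis=0)   # unweighted mean: sensitive to off-target outliers
    infit_mnsq = (residual**2).sum(axis=0) / variance.sum(axis=0)  # information-weighted
    return infit_mnsq, outfit_mnsq

# Items with mean squares outside roughly 0.6-1.4 for Likert-type data
# would be flagged for closer inspection, as described above.
```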

2.5.3. Item/Person Reliability and Separation

The reliability and separation indices for items and persons illustrate the range of item difficulties and of participants' responses, respectively. A separation index of >1.5 is considered acceptable, >2.0 good, and >3.0 excellent. The item and person separation indices were converted into strata according to the formula [4(separation index) + 1]/3 [40]. This process identifies the number of discrete groups of items (based on difficulty) and of people (based on ability). It is expected that scales should separate items and people into at least two distinct groups [37,40]. Reliability data indicate the confidence that can be placed in the results, with values closer to 1.0 suggesting the most confidence [41]. Reliability below 0.5 was not considered acceptable, since it indicates substantial measurement error between items or respondents.
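As a worked example of the strata formula, the small sketch below applies [4(separation index) + 1]/3 to the person separation index of 2.82 reported later for the curriculum beliefs instrument.

```python
def strata(separation_index):
    """Number of statistically distinct levels: (4G + 1) / 3."""
    return (4 * separation_index + 1) / 3

# Curriculum beliefs person separation (see Results): G = 2.82
# (4 * 2.82 + 1) / 3 = 4.09, i.e., about four distinct person strata.
print(round(strata(2.82), 2))  # 4.09
```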

2.5.4. Item Invariance/Differential Item Functioning (DIF)

DIF is a measure of item invariance (item bias) that examines differences in logit scores between groups defined by a dichotomous variable (e.g., metropolitan/rural, healthy/unhealthy); in this study, gender (female/male) was used [32]. It indicates, for example, whether one group of participants understands an item in a similar or different way to the other group. Logit values were generated and examined in two steps: first, DIF contrasts were compared to identify differences >0.5 logits, and, second, these were examined for significance using t-test comparisons (p < 0.05) [32]. Item responses that are unexpectedly high or low represent abnormalities in item presentation, including poorly placed items in the sequence or poorly worded items [42].
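The two-step screen can be sketched as follows. The inputs are hypothetical per-item difficulty estimates and standard errors obtained by calibrating each gender group separately; the |t| > 2 cut-off is used here as a rough stand-in for p < 0.05, whereas WINSTEPS computes exact degrees of freedom from the joint response counts.

```python
import numpy as np

def dif_flags(diff_female, se_female, diff_male, se_male):
    """Flag items whose difficulty contrast between groups exceeds
    0.5 logits and is also statistically significant (illustrative sketch)."""
    contrast = diff_female - diff_male                 # DIF contrast in logits
    se_contrast = np.sqrt(se_female**2 + se_male**2)   # SE of the contrast
    t = contrast / se_contrast
    return (np.abs(contrast) > 0.5) & (np.abs(t) > 2.0)
```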

2.5.5. Item-Person Variable (Wright) Map

An examination of the item–person variable map, known as the Wright map, provides a qualitative assessment of the construct (validity) output from the Rasch model analysis [43]. Items are displayed along a scale in order of difficulty, expressed in logits, the basic unit of Rasch measurement; logit values are distributed around a mean of 0. The item–person variable map represents the relationship between the participants and the questionnaire items on a single map and is organised as two vertical histograms: the left side shows the participants and the right side shows the items [44]. The uppermost section of the left-hand side shows the highest-performing respondents (each marked with an X), descending to the lowest-performing respondents. On the right-hand side, the uppermost section shows the most difficult item and the bottom shows the easiest item [36].
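A Wright map can be approximated outside WINSTEPS by plotting person measures and item difficulties against a shared logit scale. The sketch below assumes hypothetical person_measures and item_difficulties arrays (in logits) and is intended only to illustrate the layout described above.

```python
import numpy as np
import matplotlib.pyplot as plt

def wright_map(person_measures, item_difficulties, item_labels=None):
    """Persons (left) and items (right) on a single logit scale."""
    fig, (ax_p, ax_i) = plt.subplots(1, 2, sharey=True, figsize=(6, 8))
    ax_p.hist(person_measures, bins=20, orientation="horizontal")
    ax_p.invert_xaxis()                  # person counts grow to the left
    ax_p.set_ylabel("Logits")
    ax_p.set_title("Persons")
    ax_i.scatter(np.zeros_like(item_difficulties), item_difficulties, marker="_")
    if item_labels is not None:
        for y, label in zip(item_difficulties, item_labels):
            ax_i.annotate(label, (0.02, y), fontsize=7)
    ax_i.set_title("Items")
    ax_i.set_xticks([])
    fig.tight_layout()
    return fig
```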

3. Results

The results in the following section will be presented by instrument (curriculum beliefs, self-efficacy, and behavioural control/intentions/attitudes/norm instruments) and then according to the Rasch Measurement Model parameters for each of the reported instruments: unidimensionality, model data fit, item invariance/differential item functioning, item reliability and separation, and item–person (Wright map). The Rasch Measurement Model parameters for structural validation can be found in Table 1.

3.1. Curriculum Beliefs Instrument

The fundamental assumption of the Rasch Measurement Model is unidimensionality, which was verified for each instrument in the study. For the purposes of this paper, unidimensionality was judged using a Rasch residual-based principal components analysis. For the 36-item curriculum beliefs instrument, the Rasch measures explained only 34.7% of the variance, and the first contrast possessed an eigenvalue of 6.80, accounting for 12.3% of the variance. Given that the first contrast was greater than the recommended 5%, this suggests the presence of multiple dimensions within the curriculum beliefs instrument. Contrast 2 accounted for 6.8% of the variance with an eigenvalue of 3.7, contrast 3 accounted for 4.3% of the variance with an eigenvalue of 2.39, contrast 4 accounted for 3.6% of the variance with an eigenvalue of 1.97, and contrast 5 accounted for 3.3% of the variance with an eigenvalue of 1.81.
Model data fit indicates how well the overall data fitted the model based on the items and the participants' responses, and is reported via infit mean square statistics (Infit MnSq) and outfit mean square statistics (Outfit MnSq) [32]. Two items were considered to misfit the model, conforming to neither the MnSq nor the ZSTD criteria: item 97_2 (how influential are the following factors in determining student participation in physical activity: the social, cultural, political and economic conditions an individual faces?) and item 93_2 (how important are the following as a programmatic focus for PE: promoting concern over gender equity and equal opportunity for all students to participate?).
Item and person reliability and separation indices were calculated. According to Teman [45], item separation refers to how well the instrument can distinguish the measured variable, whereas person separation refers to how accurately participants can be distinguished by the instrument. According to Wright and Masters [40], the item reliability coefficient should ideally be >0.80 and the item separation index (ISI) should be >3.0. The person reliability coefficient should ideally be >0.80, the person separation index (PSI) should be >2.0, and the person raw score reliability should be >0.80. According to Boone [46], these reliability coefficients can be interpreted in a similar manner to Cronbach’s alpha.
For the curriculum beliefs instrument, person separation reliability was 0.89 and item separation reliability was 0.95. Separation indices were good for persons (2.82) and excellent for items (4.37).
One item possessed DIF in this instrument (item 96, choice 2): "How important are the following objectives for physical education at the level you teach? Physical development of the students (e.g., fitness)". This indicates that the item may be problematic when completed by male and female respondents. However, the item was retained as its MnSq values remained within the acceptable ranges [37].
The left side of the Wright map qualitatively presents the distribution of the abilities of the respondents, and the right side of the map shows the distribution of the item difficulties (see Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6). Visually, there appears to be a good spread in the items, with the highest response at 2.60 logits (item 93_4: promoting regular physical activity habits in students) and the lowest, or most difficult, response at −1.62 logits (performs at optimal physical level during sport performance) (see Figure 1). Contrary to normal expectations of a Wright map, the item focusing on the programmatic focus of promoting regular physical activity habits in students is located at the top of the scale. Under normal circumstances, we would expect this response to be the lowest on a Wright map, but the data are inverted (lower scores on the items represent greater agreement with the statements), which is why this item appears at the top. Consequently, participants agreed that focusing on PA habits as a programmatic focus is important to physical education.

3.2. Self-Efficacy Instrument

A Rasch residual-based PCA was undertaken, and it was clear from the quantitative and qualitative data that at least two, and perhaps more, factors are present within the model. The Rasch measures explained only 36.6% of the variance, and the first contrast possessed an eigenvalue of 4.07, accounting for 11.7% of the variance. These values are outside the suggested ranges for unidimensionality, and given that the first contrast was greater than the recommended 5%, this suggests the presence of multiple dimensions within the PETPAS. Contrast 2 accounted for 8.6% of the variance with an eigenvalue of 2.98, contrast 3 accounted for 6.9% of the variance with an eigenvalue of 2.37, contrast 4 accounted for 4.7% of the variance with an eigenvalue of 1.62, and contrast 5 accounted for 4.6% of the variance with an eigenvalue of 1.60.
For individual items, it was noted that two items may potentially misfit the Rasch model based on the ZSTD criteria. These items were both in Factor 5 ("My activity space is used for other purposes"; "I have too many students in my physical education class") and possessed outfit ZSTD > 2. Given that all other criteria were appropriate, these items were not removed and remained in the analysis.
For the self-efficacy instrument, person separation reliability was 0.91 and item separation reliability was 0.85. Separation indices were excellent for persons (3.26) and good for items (2.33).
There was no item invariance/differential item functioning in the self-efficacy instrument.
Figure 2 presents the item–person (Wright) map for the PETPAS in this sample. Item 43 at 1.02 logits (Other teachers at my school do not highly value physical education) and item 32 at 0.91 logits (My activity space is used for other purposes) were the easiest items to endorse, due to the data being inverted, with lower scores representing greater agreement with the statements (e.g., a Likert scale where 1 = strongly agree and 4 = strongly disagree). Item 49 at −0.75 logits (Administrators frequently cancel my class due to other school related activities) and item 35 at −0.61 logits (My class sessions are too short in duration) were more difficult to endorse. These findings are conceptually sound and relate to findings in the literature and to ongoing discussions/debates about the roles and purposes of physical education at school.

3.3. Behavioural Intentions/Control, Attitude and Subjective Norm Instruments

Unidimensionality of the four subscales was assessed by examining whether the measures explained at least 50% of the variance in the standardised residual analysis [40] and, secondly, whether the eigenvalue of the first contrast of the standardised residuals fell below the criterion value of 2. Three of the four subscales (behavioural control, behavioural intention, and attitude) met these criteria. For subjective norm, the measures explained only 40.9% of the variance and the eigenvalue of the first contrast was 2.50, suggesting that more than one dimension is being assessed by this subscale. A follow-up analysis of the subjective norm instrument, examining the standardised residual first contrast plot, found that two dimensions existed, clustering around items 20, 21, 22, and 23 and around items 16, 17, 18, and 19.
One item in the behavioural intention scale (item 4—I plan to teach lessons that provide large amounts of physical activity (i.e., at least 50% of class time)) was found to exhibit properties that did not conform to the RMM (see Table 1). All other items from the other three scales were in the acceptable ranges of infit and outfit statistics.
Person separation reliabilities were poor for behavioural intention (0.55), moderate for behavioural control (0.68), very good for subjective norm (0.79), and excellent for attitude (0.87). It is likely that the low reliabilities can be partially explained by the small number of items in each subscale [47], although behavioural intention, with five items, still returned the lowest value. Another measure of instrument validation is the separation index. Person separation reflects the spread of person ability (that is, the extent to which highly able participants can be distinguished from low-ability participants), whereas item separation is a measure of the item hierarchy and how the items relate to the persons; low item separation could reflect the sample size. The attitude subscale did not have any issues for either person or item separation. However, behavioural intention, behavioural control, and subjective norm all possessed lower than expected item and person separation (see Table 1). Item reliabilities for all scales were 0.73 or higher, suggesting very good items.
The four other instruments were examined for item invariance/DIF based on gender (males and females), and no items across any of the subscales possessed DIF. This suggests that the items were understood appropriately by both males and females; in other words, there were no significant differences in how the items functioned, or how individuals performed, on any of the subscales.
Behavioural intention demonstrated a good spread between the items. Items 1 and 4 (both at 0.94 logits) possibly measure the same concept and were the most difficult to agree with. Item 3 was the easiest of the five items (−1.37 logits) (see Figure 3).
The three items for behavioural control are visually represented on the Wright map. Two items (items 13 and 14) appear to overlap; however, statistically, they differ: item 14 (0.82 logits) and item 13 (0.64 logits). Item 15 was more difficult to agree with (−1.46 logits) (see Figure 4).
Attitude demonstrated a good spread of items. Item 7 was the most difficult to agree with (2.09 logits), items 11 and 12 were the easiest to agree with (−1.25 logits), and there was a good spread among the other items (see Figure 5).
Subjective norm possessed a good spread of items. Item 18 was the most difficult to agree with (0.45 logits), whereas item 17 was the easiest for this sample (−0.59 logits). Several item pairs (16 and 21; 22 and 23) shared similar characteristics on the map, although they possessed different logit measures. On visual representation, these pairs may be assessing the same underlying concept; however, the statistical data do not confirm this, suggesting that they may instead be measuring different concepts within the broader subjective norm construct (see Figure 6).

4. Discussion

The purpose of this study was to examine the structural validity of six instruments related to teachers' curricula beliefs, self-efficacy, behavioural control/intention, attitude, and subjective norm towards teaching physical activity and fitness during physical education, using a sample of Australian physical education teachers. This process employed item response theory in the form of the Rasch measurement model. In following RMM processes, this study examined the dimensionality, model fit, differential item functioning, item and person hierarchies, and item and person separation indices as part of the ongoing structural and construct validation of these instruments within the field of physical education. As highlighted by Brown and Bonsaksen [48], the "body of evidence of a scale's validity is never static; it is cumulative, active process with contributions made by authors … plus other researchers in the cognate field of interest" (p. 2).
The findings indicated that three of the six instruments, namely curriculum beliefs, self-efficacy, and subjective norm, demonstrated multidimensional characteristics. For curriculum beliefs and self-efficacy, this is unsurprising given that the initial construct validation studies demonstrated that these instruments each possess four factors through the use of classical test theory (e.g., factor analysis) [19,26,27]. This study adds further support to these earlier studies, with the RMM further confirming the existence of multidimensional characteristics within these instruments. Further developmental work on the subjective norm subscale is warranted given that this scale, at least according to these statistics, was not unidimensional, with two possible constructs existing.
Infit and outfit statistics confirmed that the difficulties of nearly all items matched the participants' belief and attitude responses, with two items in the curriculum beliefs instrument and one item in the behavioural intention instrument not fitting the data. Future work should consider these items in light of these findings. For all of the instruments, item ordering and hierarchies were supported through visual representation of the item–person Wright maps, with two items in the behavioural intention and two items in the attitude subscales possibly demonstrating some overlap. Some discrepancies exist in the item and person reliabilities of the behavioural intention, behavioural control, and subjective norm instruments, with attitude the least problematic using this statistical procedure. For example, the model reliability is appropriate for this sample, although person reliability for behavioural intention was poor, with the other RMM indices being satisfactory. This could be due to the fact that a convenience sample was used; other explanations include the number of categories per item and the sample ability variance. One plausible explanation, given that all other structural and construct validity procedures produced positive outcomes using the RMM, is that the lower sample size most likely affected this statistic [32,49], and future studies must consider this. Given the small number of items, it might also be prudent for future researchers to consider developing additional items in each of these categories.
This study makes an important contribution to the literature, as the ongoing development and validation of these instruments provides researchers with confidence that the overall instruments and the items within them possess content, construct, and structural validity. To my knowledge, this is the first study that has examined the properties of these six instruments utilising the RMM. This study examined the structural validity of instruments designed to assess teachers' curricula beliefs, self-efficacy, attitudes, and values towards teaching physical activity and fitness, which is an important outcome of this research. Most other published validation studies have focussed on exploratory or confirmatory factor analysis (drawn from classical test theory), whereas this analysis used the RMM (item response theory) to determine the structure of the instruments using both item and person parameter indices. Whilst important conclusions can be drawn from this current work, there are several limitations that require caution when considering these findings, as the data were collected from a relatively small sample in one Australian state. A convenience sample was used to recruit participants, and while over one-hundred-and-twenty teachers began the instrument, only seventy-seven completed all questions on each of the six instruments. This sample was nonetheless greater than a reasonable target sample of 50 persons: according to Linacre [50], with such a sample there is a 99% confidence level that the estimated item difficulty is within ±1 logit of its stable value, which is close enough for most practical purposes, especially when persons take 10 or more items, as has been further confirmed by Azizan et al. [51]. For future research, this study could be replicated across Australia and beyond to collect representative data that can be used to make more conclusive inferences about HPE teachers' beliefs, attitudes, and values towards their intentions to teach physically active classes. Several findings, such as those related to item and person separation and reliability indices, can be affected by smaller samples, sample ability variance, length of scales, rating categories per item, and item difficulty variance [48]. As an advocate of mixed-method approaches, I note that other research approaches (e.g., in-depth interviews, focus groups, open-ended questionnaires/surveys) may provide opportunities to generate additional understandings about HPE teachers' beliefs, attitudes, and values regarding this subject.

5. Conclusions

The contribution that this study makes to the physical education literature is positive and warranted. Ongoing validation is an important research endeavour, both methodologically and in the use of appropriate measurement protocols such as the RMM. Understanding PE teachers' intentions to teach physically active physical education classes is an important research and pedagogical endeavour given the importance that movement and physical activity play in both the content and the context of learning in physical education. If a longer-term objective of physical education is engagement in lifelong physical activity participation, research that examines why and how teachers choose to develop or maintain physically active classes must continue.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Monash University (protocol code 15123 and 19 July 2018).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Ennis, C.D. Knowledge and beliefs underlying curricular expertise. Quest 1994, 46, 164–175. [Google Scholar] [CrossRef] [Green Version]
  2. Kern, B.D.; Killian, C.M.; Ellison, D.W.; Graber, K.C.; Belansky, E.; Cutforth, N. Teacher Beliefs and Changes in Practice Through Professional Development. J. Teach. Phys. Educ. 2021, 40, 606–617. [Google Scholar] [CrossRef]
  3. Kulinna, P.H.; Cothran, D.J. Teacher beliefs and efficacy. In Routledge Handbook of Physical Education Pedagogies; Ennis, C.D., Ed.; Routledge: Abington, UK, 2016; pp. 530–540. [Google Scholar]
  4. Xiong, Y.; Sun, X.-Y.; Liu, X.-Q.; Wang, P.; Zheng, B. The Influence of Self-Efficacy and Work Input on Physical Education Teachers’ Creative Teaching. Front. Psychol. 2020, 10, 2856. [Google Scholar] [CrossRef] [PubMed]
  5. Kern, B.D.; Graber, K.C.; Woods, A.M.; Templin, T. The Influence of Socializing Agents and Teaching Context Among Teachers of Different Dispositions Toward Change. J. Teach. Phys. Educ. 2019, 38, 252–261. [Google Scholar] [CrossRef]
  6. Richards, K.A.R.; Pennington, C.G.; Sinelnikov, O.A. Teacher socialization in physical education: A scoping review of literature. Kinesiol. Rev. 2019, 8, 86–99. [Google Scholar] [CrossRef]
  7. Hutzler, Y.; Meier, S.; Reuker, S.; Zitomer, M. Attitudes and self-efficacy of physical education teachers toward inclusion of children with disabilities: A narrative review of international literature. Phys. Educ. Sport Pedagog. 2019, 24, 249–266. [Google Scholar] [CrossRef]
  8. Krause, J.M.; O’Neil, K.; Jones, E. Technology in physical education teacher education: A call to action. Quest 2020, 72, 241–259. [Google Scholar] [CrossRef]
  9. Ennis, C.D. A model describing the influence of values and context on student learning. In Student Learning in Physical Education: Applying Research to Enhance Instruction; Silverman, S., Ennis, C.D., Eds.; Human Kinetics: Champaign, IL, USA, 1996; pp. 127–147. [Google Scholar]
  10. Department of Education and Training. Active Schools, Active Kids and Active Communities; Department of Education and Training: Melbourne, VIC, Australia, 2020.
  11. Society for Health and Physical Educators (SHAPE). National Standards for K-12 Physical Education; SHAPE America: Baltimore, MD, USA, 2013. [Google Scholar]
  12. Victorian Curriculum and Assessment Authority. Victorian Curriculum: Health and Physical Education; Victorian Curriculum and Assessment Authority: Melbourne, VIC, Australia, 2022.
  13. Whitehead, M. Physical literacy: Throughout the Lifecourse; Routledge: London, UK, 2010. [Google Scholar]
  14. Jewett, A.E.; Bain, L.L.; Ennis, C.D. The Curriculum Process in Physical Education; Brown & Benchmark: Cartersveille, GA, USA, 1995. [Google Scholar]
  15. Shulman, L.S. Those Who Understand: Knowledge Growth in Teaching. Educ. Res. 1986, 15, 4–14. [Google Scholar] [CrossRef]
  16. Shulman, L.S. Knowledge and Teaching: Foundations of the New Reform. Harv. Educ. Rev. 1987, 57, 1–23. [Google Scholar] [CrossRef]
  17. Ennis, C.D.; Hooper, L.M. Development of an instrument for assessing educational value orientations. J. Curric. Stud. 1988, 20, 277–280. [Google Scholar] [CrossRef] [Green Version]
  18. Kulinna, P.H.; Silverman, S. The Development and Validation of Scores on a Measure of Teachers’ Attitudes toward Teaching Physical Activity and Fitness. Educ. Psychol. Meas. 1999, 59, 507–517. [Google Scholar] [CrossRef]
  19. Kulinna, P.H.; Silverman, S. Teachers’ Attitudes toward Teaching Physical Activity and Fitness. Res. Q. Exerc. Sport 2000, 71, 80–84. [Google Scholar] [CrossRef] [PubMed]
  20. Adamakis, M. Physical Education students’ beliefs toward four important curricular outcomes: Results from three Greek faculties. J. Phys. Educ. Sport 2018, 18, 1001–1007. [Google Scholar] [CrossRef]
  21. Adamakis, M.; Dania, A. Are pre-service teachers’ beliefs toward curricular outcomes challenged by teaching methods modules and school placement? Evidence from three Greek physical education faculties. Eur. Phys. Educ. Rev. 2020, 26, 729–746. [Google Scholar] [CrossRef]
  22. Adamakis, M.; Zounhia, K. The impact of occupational socialization on physical education pre-service teachers’ beliefs about four important curricular outcomes: A cross-sectional study. Eur. Phys. Educ. Rev. 2016, 22, 279–297. [Google Scholar] [CrossRef]
  23. Adamakis, M.; Zounhia, K.; Hatziharistos, D.; Psychountaki, M. Greek pre-service physical education teachers’ beliefs about curricular orientations: Instrument validation and examination of four important goals. Acta Gymnica 2013, 43, 39–51. [Google Scholar] [CrossRef]
  24. Guan, J.; McBride, R.; Xiang, P. Chinese teachers’ attitudes toward teaching physical activity and fitness. Asia-Pac. J. Teach. Educ. 2005, 33, 147–157. [Google Scholar] [CrossRef]
  25. Bandura, A. Self-Efficacy: The Exercise of Control; W H Freeman/Times Books/ Henry Holt & Co: New York, NY, USA, 1997; pp. ix–604. [Google Scholar]
  26. Gencay, O. Validation of the Physical Education Teachers’ Physical Activity Self-efficacy Scale with a Turkish Sample. Soc. Behav. Personal. Int. J. 2009, 37, 223–230. [Google Scholar] [CrossRef]
  27. Martin, J.J.; Kulinna, P.H. The Development of a Physical Education Teachers’ Physical Activity Self-Efficacy Instrument. J. Teach. Phys. Educ. 2003, 22, 219–232. [Google Scholar] [CrossRef] [Green Version]
  28. Martin, J.J.; Kulinna, P.H. Self-efficacy theory and the theory of planned behavior: Teaching physically active physical education classes. Res. Q. Exerc. Sport 2004, 75, 288–297. [Google Scholar] [CrossRef]
  29. Martin, J.J.; Kulinna, P.H.; Eklund, R.C.; Reed, B. Determinants of teachers’ intentions to teach physically active physical education classes. J. Teach. Phys. Educ. 2001, 20, 129–143. [Google Scholar] [CrossRef] [Green Version]
  30. Newton, P.E.; Shaw, S.D. Disagreement over the best way to use the word ‘validity’ and options for reaching consensus. Assess. Educ. Princ. Policy Pract. 2016, 23, 178–197. [Google Scholar] [CrossRef]
  31. Dudley, D.; Weaver, N.; Cairney, J. High-Intensity Interval Training and Health Optimizing Physical Education: Achieving Health and Educative Outcomes in Secondary Physical Education—A Pilot Nonrandomized Comparison Trial. J. Teach. Phys. Educ. 2021, 40, 215–227. [Google Scholar] [CrossRef]
  32. Bond, T.G.; Fox, C.M. Applying the Rasch Model: Fundamental Measurement in the Human Sciences, 3rd ed.; Routledge: New York, NY, USA; London, UK, 2015. [Google Scholar]
  33. Wright, B.D.; Masters, G.N. Rating Scale Analysis; MESA Press: San Diego, CA, USA, 1982. [Google Scholar]
  34. Linacre, M. Practical Rasch Measurement-Tutorial 3. Investigating Test Functioning; Winsteps: Chicago, IL, USA, 2011. [Google Scholar]
  35. Rasch, G. Probabilistic Models for Some Intelligence and Attainment Tests; University of Chicago Press: Chicago, IL, USA, 1980. [Google Scholar]
  36. Tsuda, E.; Ward, P.; Ressler, J.D.; Wyant, J.; He, Y.; Kim, I.; Santiago, J.A. Basketball Common Content Knowledge Instrument Validation. Int. J. Kinesiol. High. Educ. 2022, 7, 35–47. [Google Scholar] [CrossRef]
  37. Williams, B.; Brown, T.; Boyle, M. Construct validation of the readiness for interprofessional learning scale: A Rasch and factor analysis. J. Interprofessional Care 2012, 26, 326–332. [Google Scholar] [CrossRef]
  38. Wuang, Y.-P.; Lin, Y.-H.; Su, C.-Y. Rasch analysis of the Bruininks–Oseretsky Test of Motor Proficiency-Second Edition in intellectual disabilities. Res. Dev. Disabil. 2009, 30, 1132–1144. [Google Scholar] [CrossRef]
  39. Tsuda, E.; Ward, P.; Kim, J.; He, Y.; Sazama, D.; Brian, A. The tennis common content knowledge measure validation. Eur. Phys. Educ. Rev. 2021, 27, 654–665. [Google Scholar] [CrossRef]
  40. Wright, B.D.; Masters, G.N. Number of Person or Item Strata. Rasch Meas. Trans. 2002, 16, 888. [Google Scholar]
  41. Linacre, J.M. A User’s Guide to Winsteps Ministep Rasch-Model Computer Program; Winsteps: Chicago, IL, USA, 2022. [Google Scholar]
  42. Wright, B.D.; Linacre, J.M. Observations are always ordinal; measurements, however, must be interval. Arch. Phys. Med. Rehabil. 1989, 70, 857–860. [Google Scholar] [PubMed]
  43. Cruickshank, V.; Hyndman, B.; Patterson, K.; Kebble, P. Encounters in a marginalised subject: The experiential challenges faced by Tasmanian Health and Physical Education teachers. Aust. J. Educ. 2021, 65, 24–40. [Google Scholar] [CrossRef]
  44. Lunz, M.E. Using the Very Useful Wright Map. Measurement Research Associates Test Insights, Chicago, IL, USA, 2010. Available online: https://www.rasch.org/mra/mra-01-10.htm (accessed on 20 October 2022).
  45. Teman, E.D. A Rasch analysis of the statistical anxiety rating scale. J. Appl. Meas. 2013, 14, 414–434. [Google Scholar] [PubMed]
  46. Boone, W.J. Rasch Analysis for Instrument Development: Why, When, and How? CBE—Life Sci. Educ. 2016, 15, rm4. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Marsh, H.W.; Martin, A.J.; Jackson, S. Introducing a Short Version of the Physical Self Description Questionnaire: New Strategies, Short-Form Evaluative Criteria, and Applications of Factor Analyses. J. Sport Exerc. Psychol. 2010, 32, 438–482. [Google Scholar] [CrossRef] [Green Version]
  48. Brown, T.; Bonsaksen, T. An examination of the structural validity of the Physical Self-Description Questionnaire-Short Form (PSDQ–S) using the Rasch Measurement Model. Cogent Educ. 2019, 6, 1571146. [Google Scholar] [CrossRef]
  49. Smith, A.B.; Rush, R.; Fallowfield, L.J.; Velikova, G.; Sharpe, M. Rasch fit statistics and sample size considerations for polytomous data. BMC Med. Res. Methodol. 2008, 8, 33. [Google Scholar] [CrossRef] [Green Version]
  50. Linacre, J.M. Sample Size and Item Calibration Stability. Rasch Meas. Trans. 1994, 7, 328. [Google Scholar]
  51. Azizan, N.H.; Mahmud, Z.; Rambli, A. Rasch rating scale item estimates using maximum likelihood approach: Effects of sample size on the accuracy and bias of the estimates. Int. J. Adv. Sci. Technol. 2020, 29, 2526–2531. [Google Scholar]
Figure 1. Item/Person (Wright) map—Curriculum beliefs.
Figure 2. Item/Person (Wright) map—Self-efficacy.
Figure 3. Item/Person (Wright) map—Behavioural intention.
Figure 4. Item/Person (Wright) map—Behavioural control.
Figure 5. Item/Person (Wright) map—Attitude.
Figure 6. Item/Person (Wright) map—Subjective norm.
Table 1. Subscales with Rasch Measurement Model (RMM) requirements.

| Parameter | RMM Requirement | Curriculum Beliefs (36 items) | Self-Efficacy (22 items) | Behavioural Intention (5 items) | Attitude (7 items) | Behavioural Control (3 items) | Subjective Norm (8 items) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Model fit: summary of items | | | | | | | |
| Item mean (SD) logits | 0.00 | 0.00 (0.19) | 0.00 (0.18) | 0.00 (0.45) | 0.00 (0.35) | 0.00 (0.44) | 0.00 (0.16) |
| Item reliability | >0.8 | 0.95 | 0.85 | 0.79 | 0.92 | 0.82 | 0.73 |
| Item separation index | >3.0 | 4.37 | 2.33 | 1.97 | 3.31 | 2.15 | 1.65 |
| Item strata | >3.0 | 6.16 | 3.44 | 2.96 | 4.74 | 3.2 | 2.53 |
| Item spread (maximum minus minimum item logit score) | | 4.22 | 0.78 | 0.43 | 2.58 | 2.28 | 0.85 |
| Item model fit: Infit MnSq range extremes | 0.60–1.40 | 0.65–1.46 (items 93_2, 97_2) | 0.62–1.31 | 0.50–1.14 | 0.73–1.24 | 0.73–1.13 | 0.74–1.28 |
| Item model fit: Infit ZSTD range extremes | −2.0 to 2.0 | −2.36 to 2.50 | −2.79 to 1.86 | −1.86 to 0.54 | −0.08 to 0.49 | −1.10 to 0.58 | −1.67 to 1.54 |
| Item model fit: Outfit MnSq range extremes | 0.60–1.40 | 0.67–1.61 (items 97_2, 93_2) | 0.61–1.31 | 0.24–1.25 | 0.75–1.07 | 0.42–0.74 | 0.77–1.23 |
| Item model fit: Outfit ZSTD range extremes | −2.0 to 2.0 | −2.21 to 3.22 (items 97_2, 96_4) | −2.58 to 2.21 | −2.27 to 0.73 | −0.72 to 0.34 | −1.04 to −0.31 | −1.45 to 1.30 |
| Model fit: summary of persons | | | | | | | |
| Person mean logits (SD) | | −2.33 (0.33) | 0.36 (0.38) | −3.50 (1.75) | 6.33 (1.32) | 5.75 (2.22) | 1.25 (0.62) |
| Person spread (maximum minus minimum person logit score) | | 5.06 | 9.24 | 3.34 | 10.91 | 16.47 | 4.54 |
| Measurement quality: reliability and targeting | | | | | | | |
| Person reliability | >0.80 for individual measurement | 0.89 | 0.91 | 0.55 [poor] | 0.87 | 0.68 | 0.79 |
| Person separation index | >1.5 | 2.82 | 3.26 | 1.11 | 2.58 | 1.47 | 1.96 |
| Number of separate person strata | >3.0 strata for individual measurement | 4.09 | 4.68 | 1.81 | 3.77 | 2.29 | 2.94 |
| Person raw score reliability | >0.80 for individual measurement | 0.91 | 0.96 | 0.93 | 0.91 | 0.92 | 0.82 |
| Unidimensionality | | | | | | | |
| Variance accounted for by 1st factor | >50% | 34.7% | 36.6% | 54.9% | 60.7% | 77.6% | 40.9% |
| PCA (eigenvalue for 1st contrast) | <2.0 | 6.80 | 4.07 | 1.6 | 1.94 | 1.62 | 2.50 |
| Differential item functioning | | | | | | | |
| DIF by gender | >0.5 logits; p < 0.05 | 1 item possessed DIF (item 96_2) | 0 items | 0 items | 0 items | 0 items | 0 items |

Figures in bold represent indices above or below RMM ranges.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
