Article

Rasch Analysis of General Self-Efficacy Among Individuals with Intellectual Disabilities

Department of Secondary Special Education, Jeonju University, Jeonju 55069, Republic of Korea
Behav. Sci. 2025, 15(12), 1639; https://doi.org/10.3390/bs15121639
Submission received: 4 September 2025 / Revised: 22 November 2025 / Accepted: 27 November 2025 / Published: 28 November 2025
(This article belongs to the Section Educational Psychology)

Abstract

The self-efficacy of individuals with intellectual disabilities is considered an important factor in the psychological adjustment process. The General Self-Efficacy Scale (GSES) is commonly used to measure self-efficacy. However, previous studies have not examined the psychometric properties of the GSES among individuals with intellectual disabilities. Therefore, this study investigated the psychometric properties of the GSES using the Rasch model based on item response theory. This study used secondary data from the Employment Panel Survey of Persons with Disabilities provided by the Korea Employment Agency for Persons with Disabilities. The panel survey collected data from individuals with intellectual disabilities in the Republic of Korea using the GSES. This study analyzed data from 232 individuals to determine GSES item fit, item difficulty, rating scale functioning, and reliability. The results revealed that eight of the ten GSES items exhibited appropriate fit. Item difficulty required modification, indicating the need for items with lower difficulty. The four-point Likert scale used for responses was appropriate. Person and item separation indices demonstrated good scale reliability. These findings suggest that the GSES is effective for measuring self-efficacy in people with intellectual disabilities; however, some adjustments, such as changes in difficulty level, are required.

1. Introduction

The American Association on Intellectual and Developmental Disabilities defines intellectual disability (ID) as a condition characterized by significant difficulties in intellectual functioning and adaptive behavior, affecting everyday social and practical skills, with onset before age 22 (Schalock et al., 2021). Individuals with ID exhibit difficulties in intellectual functioning, including reasoning, problem solving, planning, abstract thinking, judgment, academic learning, and learning from experience (APA, 2013). Their significant deficit in adaptive skills limits their ability to carry out age-appropriate daily activities (Boat et al., 2015). Due to their inadequately developed metacognitive skills, individuals with ID have an inhibited ability to memorize information, rehearse, organize materials, and systematically regulate their learning process (Erez & Peled, 2001; Hunt & Marshall, 2002).
Such cognitive and adaptive limitations often lead to repeated experiences of task failure, resulting in frustration and avoidance of new challenges, which negatively influence the development of self-efficacy. Individuals with ID are sometimes perceived as lacking motivation or overly passive in task performance (Cuskelly & Gilmore, 2014). This is attributable to anxiety and frustration stemming from repeated failures, which can hinder goal pursuit and reinforce “learned helplessness” (Gacek et al., 2017). As a result, their behaviors may be driven only by external rewards or reinforcement, rather than internal satisfaction, and they may be less likely to initiate actions or actively set goals (Beirne-Smith et al., 2002; Taylor et al., 2005). Considering these characteristics, it is important to improve the self-efficacy of individuals with ID. Self-efficacy refers to the belief in one’s ability to achieve desired goals (Bandura, 2006), exercise control over performance, and organize behavior in a specific situation (Patrick et al., 1997). Self-efficacy affects the way of thinking that determines the choice of behavior, the extent of effort exerted, the length of effort until the behavior is completed, and emotional responses (Bandura, 1977; Bandura & Schunk, 1981; Bandura et al., 1980).
Self-efficacy determines the amount of effort invested in a behavior and the degree to which the individual continues a behavior despite obstacles or aversive experiences. Therefore, stronger self-efficacy is associated with greater effort and longer duration of the selected behavior, thereby limiting other behaviors and leading to effort investment even in difficult tasks (Bandura et al., 1975). Studies have reported that high self-efficacy aids the recovery from, and treatment of, various disorders, such as phobias, obesity, smoking, and heart disease (Bandura, 1986; Litt, 1988; O’Leary, 1985). In addition to behavioral patterns, self-efficacy affects emotions. According to Bandura (1982), fear and anxiety are caused by a sense of helplessness. If the situation is controlled, fear and anxiety decrease. Self-efficacy is the belief that one can perform coping behaviors in a given situation; anxiety is negatively correlated with self-efficacy.
To date, there has been little research on self-efficacy in individuals with ID. However, studies have explored the relationship between self-efficacy and other psychological factors, such as physical activity participation. Peterson et al. (2008) used self-report scales to measure self-efficacy and social support for physical leisure activities in 152 adults with ID. They reported that social support and self-efficacy positively predicted physical activity participation, with self-efficacy mediating between social support and physical activity. Jo et al. (2018) found increases in muscular endurance, self-efficacy, and physical activity levels in 23 adults with ID following a 12-week exercise program. Relatedly, low self-efficacy levels in individuals with ID have been reported to result in low participation in programs that can benefit their physical and mental health (Stevens et al., 2018; Yu et al., 2022). Self-efficacy is an important predictor of mental health. Therefore, improving self-efficacy in individuals with ID could help to alleviate their tendency to struggle with mental health issues. Moreover, self-efficacy is closely associated with the quality of life, career paths, health, and social participation of individuals with ID (Nota et al., 2010; Peterson et al., 2008).
To accurately understand and predict the level of self-efficacy, the measurement scale must be reliable and valid, and knowledge and understanding of the scale characteristics must be supported by empirical research. The Glasgow Social Self-Efficacy Scale (GSSES), developed to assess social self-efficacy in individuals with ID, has shown strong test–retest reliability, acceptable internal reliability, and low concurrent validity (Payne & Jahoda, 2004). Thus, although larger studies are required to examine its psychometric properties, the GSSES is a promising tool (Payne & Jahoda, 2004). However, the GSSES is rarely utilized, and self-developed tools are often used. For instance, a modified physical self-efficacy scale, not the GSSES, was employed after implementing an exercise program (Jo et al., 2018). Another study examined the relationship between career interests and self-efficacy beliefs, which were measured by My Future Preferences (Nota et al., 2010). One study found that fitness and health education significantly increased exercise self-efficacy among adults with Down syndrome based on an original scale (Heller et al., 2004).
The General Self-Efficacy Scale (GSES) is the most commonly used instrument for measuring self-efficacy (Schwarzer et al., 1997). This widely used ten-item scale has shown good internal consistency, with values of 0.84 for German students, 0.81 for Costa Rican students, and 0.91 for Chinese student samples (Schwarzer et al., 1997). In South Korea, the GSES has been used to collect large-scale panel data. For instance, the Korea Employment Agency for Persons with Disabilities has compiled the Employment Panel Survey of Persons with Disabilities, which includes a representative sample of persons with ID. However, the psychometric properties of the GSES for individuals with ID have not been reported.
Factor analysis is a classical method of validating the psychometric properties of instruments. This statistical technique extracts common factors based on the correlations among variables, and is useful for confirming the existence of latent factors. Previous studies have primarily used factor analysis to verify the validity of the GSES and confirm its factor structure (Nel & Boshoff, 2016; Schwarzer et al., 1997). However, factor analysis has several limitations. First, researcher subjectivity is likely to be involved in determining the number of factors and their interpretations, which can lead to inconsistent results (Costello & Osborne, 2005). Second, reliable results are difficult to obtain without a sufficient sample size, which typically requires at least five to ten times the number of variables (Fabrigar & Wegener, 2012). Third, researcher judgment is required in selecting appropriate variables, which can impact the analysis results (DeVellis & Thorpe, 2021). Fourth, factor analysis assumes linear relationships between variables, making it difficult to reflect complex non-linear relationships (Preacher & MacCallum, 2003). These limitations hinder the ability to ensure the reliability and validity of measurement tools using factor analysis alone.
In contrast, the item response theory (IRT) is a statistical technique that analyzes subject ability and item characteristics at the individual item level, thereby enabling more precise measurements. First, it allows for the assessment of item difficulty, discrimination, and predictability, ensuring objective analyses of individual item quality (Embretson & Reise, 2013). Second, unlike factor analysis, which relies on the overall score of a subject, the IRT independently measures items and subject ability, providing a reliable tool across diverse populations (Baker & Kim, 2004). Third, the IRT allows computer-adaptive testing (CAT), which provides items tailored to the respondent’s abilities, enabling highly reliable measurements within a short period of time (Embretson & Reise, 2013). Fourth, this method eliminates unnecessary items and shortens the questionnaire length while maintaining high measurement accuracy, thereby reducing respondent fatigue and maximizing efficiency (Van der Linden & Glas, 2000). Finally, the IRT is not limited to a specific group; the same scale can be applied to various groups, thereby increasing the generalizability of the findings (Embretson & Reise, 2013). Due to these advantages, the IRT is increasingly used in assessment tool development and scale validation.
Therefore, this study aimed to investigate the psychometric properties of the GSES among individuals with ID using Rasch analysis based on the IRT. This study posed the following research questions:
  • What is the item fit of the GSES?
  • What is the item difficulty of the GSES?
  • What is the fit of the GSES rating scale?
  • What are the person and item separation indices of the GSES?

2. Materials and Methods

2.1. Data

This study analyzed data from the first year of the second wave of the Employment Panel Survey of Persons with Disabilities provided by the Korea Employment Agency for Persons with Disabilities. The purpose of this panel was to produce dynamic basic statistics on the overall economic activities of people with disabilities and to identify the personal and environmental factors that influence these activities, thereby providing useful baseline data for establishing and evaluating employment policies for people with disabilities. The first survey was conducted in 2016, targeting persons with disabilities registered under the Disabled Persons Welfare Act (aged 15–64) residing in Korea as of 15 May 2016. Stratified sampling was used first to divide the population according to various characteristics, such as residential area, type of disability, disability severity, and age. Next, probability proportional sampling was used to recruit participants.
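To make the two-stage design concrete, the sketch below illustrates generic stratified sampling with proportional allocation in Python. The strata, frame, and sample size are hypothetical and the code does not reproduce the agency's actual frame or its probability proportional selection procedure; it is only a simplified illustration of drawing respondents within strata defined by characteristics such as region and severity.

```python
import random
from collections import defaultdict

def stratified_sample(frame, stratum_of, n_total, seed=0):
    """Draw a proportionally allocated stratified sample from a sampling frame."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for record in frame:
        strata[stratum_of(record)].append(record)

    sample = []
    population = len(frame)
    for members in strata.values():
        # Allocate the sample to each stratum in proportion to its size.
        n_stratum = max(1, round(n_total * len(members) / population))
        sample.extend(rng.sample(members, min(n_stratum, len(members))))
    return sample

# Hypothetical frame stratified by residential area and disability severity.
frame = [{"region": region, "severity": severity}
         for region in ("big city", "small or medium-sized city", "rural area")
         for severity in ("grade 1", "grade 2", "grade 3")
         for _ in range(100)]

sample = stratified_sample(frame, lambda r: (r["region"], r["severity"]), n_total=90)
print(len(sample))  # about 90, spread proportionally across the nine strata
```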
This study used the Second Wave of the Panel Survey on Employment of Persons with Disabilities. The legal basis of this panel survey is Article 26 of the Act on Promotion of Employment and Vocational Rehabilitation of Persons with Disabilities (Survey on the Status of Persons with Disabilities). The survey was approved under Article 18 of the Statistics Act (Approval for Compilation of Statistics), with approval number 383003. All confidential information pertaining to individuals, corporations, or organizations was used only for the purpose of compiling statistics and was deleted during this process, in accordance with Article 33 of the Statistics Act. Under Article 16 of the Bioethics and Safety Act of Korea and Article 13 of its Enforcement Rule, IRB approval is not required for research that uses public information or does not collect or record personal identification information.
A total of 398 individuals with ID were identified in the dataset. After excluding 166 individuals with incomplete responses, 232 participants with ID were included in the final analysis. In Rasch analysis, the key criterion for determining the reliability of item calibration is the standard error (SE). When three criteria are met (a sufficient sample size N, respondents appropriately distributed across item levels, and items answered in the intended manner), the average correct response rate for an item generally falls within the range of 0.50 to 0.87. The stability of item difficulty is determined by the sample size, with the SE decreasing as the sample size increases. It has been suggested that the SE of an item in the Rasch model theoretically lies between 2/√N and 3/√N (B. D. Wright & Stone, 1979). Furthermore, to reliably estimate item difficulty, at least eight correct and eight incorrect responses (i.e., responses from participants with different characteristics) are required for each item. For item difficulty to be accurately estimated within ±1 logit, the SE must be approximately 0.38 logits or less, calculated based on a 99% confidence interval (±2.6 SE). Converting this to sample size, the minimum sample size for stable item difficulty estimation is between 27 and 61. A sample size as low as 30 is considered appropriate for preliminary or exploratory analyses (B. Wright & Panchapakesan, 1969). Therefore, this study’s sample size of 232 participants is sufficient to perform Rasch analysis and reliably estimate item difficulty.
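The sample-size reasoning above can be checked with a few lines of arithmetic. The sketch below, a minimal illustration rather than part of the original analysis, converts the theoretical SE bounds of 2/√N and 3/√N into the smallest N that keeps the SE at or below roughly 0.38 logits; the result (about 28 to 63) matches the cited 27 to 61 range up to rounding conventions.

```python
import math

def min_n_for_se(c, max_se):
    """Smallest N such that c / sqrt(N) <= max_se, i.e., N >= (c / max_se) ** 2."""
    return math.ceil((c / max_se) ** 2)

MAX_SE = 0.38  # SE needed to bound item difficulty within ±1 logit at a 99% CI (±2.6 SE)

print(min_n_for_se(2, MAX_SE))  # lower theoretical bound, SE = 2 / sqrt(N): about 28
print(min_n_for_se(3, MAX_SE))  # upper theoretical bound, SE = 3 / sqrt(N): about 63

# Expected SE range with the present sample of 232 respondents.
n = 232
print(2 / math.sqrt(n), 3 / math.sqrt(n))  # roughly 0.13 to 0.20 logits
```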
The participant characteristics are presented in Table 1. The sample included more men than women, and most participants were aged 15–29 years. In terms of disability severity, 50% of participants were classified as grade 3, which corresponds to relatively mild severity. Furthermore, most participants had a high school diploma and no comorbid disabilities. Regarding the types of comorbid disabilities, mental illness accounted for the highest proportion (6 people, 26.1%), followed by autism spectrum disorder (4 people, 17.4%); physical disability, brain damage, and speech impairment (3 people each, 13.0%); and visual impairment and epilepsy disorder (2 people each, 8.7%). Those without comorbid disabilities accounted for 90.1% of the total. The distribution of residences was relatively even across large cities, small or medium-sized cities, and rural areas. Moreover, 80.2% of the participants were unmarried.

2.2. Measure

Self-efficacy was measured using the GSES developed by Jerusalem and Schwarzer (2014). The scale comprised the following questions: (1) “I believe that if I try hard enough, I can achieve it”; (2) “I find it relatively easy to focus on my goals”; (3) “I am not embarrassed by difficulties because I believe in my abilities”; (4) “I can solve most problems if I put in the necessary effort”; (5) “I believe that I will find a way to solve difficulties when they arise”; (6) “I believe that I will eventually be able to do a task effectively, even if I did not expect the task at first”; (7) “I can deal well with unexpected situations using my abilities”; (8) “When problems arise, I usually find a solution”; (9) “Even if someone disagrees with my opinion, I can still find a way to do it the way I want”; and (10) “I believe that I can handle it, no matter what happens.” Scores were calculated as the mean of ten items. Responses were rated on a four-point Likert scale (1 = strongly disagree; 4 = strongly agree), with higher scores indicating higher self-efficacy. Multicultural validation studies with South Korean respondents have reported consistent evidence of associations between perceived self-efficacy and the variables under study, confirming the validity of the GSES (Luszczynska et al., 2005). The Cronbach’s α of the scale was 0.995 in this study. This high value indicates that the GSES items may be too similar.
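To make the scoring procedure concrete, the sketch below computes GSES total scores (the mean of the ten items on the 1–4 scale) and Cronbach’s α for a synthetic response matrix. The data are illustrative only and the α formula shown is the standard one; this is not a reanalysis of the panel data.

```python
import numpy as np

def gses_scores(responses):
    """GSES score per respondent: the mean of the ten items (1-4 Likert)."""
    return responses.mean(axis=1)

def cronbach_alpha(responses):
    """Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)."""
    k = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)
    total_variance = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Synthetic 232 x 10 response matrix on a 1-4 scale (for illustration only).
rng = np.random.default_rng(0)
trait = rng.integers(1, 5, size=(232, 1))                       # crude person-level tendency
responses = np.clip(trait + rng.integers(-1, 2, size=(232, 10)), 1, 4)

print(gses_scores(responses)[:5])
print(round(cronbach_alpha(responses), 3))
```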

2.3. Statistical Analysis

The GSES item fit was assessed using Rasch analysis, which is used to confirm the unidimensionality of a scale (Smith & Miao, 1994). Unidimensionality assumes that the concept being measured comprises a single attribute or factor. Only when this assumption is confirmed can the scale be considered to measure the target concept accurately and its total score be meaningfully calculated. Various studies have examined whether scales developed for the general population demonstrate unidimensionality in other clinical groups. This study employed the rating scale model of Rasch analysis (Andrich, 2016; Battisti et al., 2010). To evaluate fit to the unidimensionality assumption and to determine whether each item reflected the common construct, this study used two item fit indices: the infit (weighted) and outfit (unweighted) mean-square (MNSQ) values. The ideal fit value is 1.0 (Bond & Fox, 2013), and Linacre et al. (2002) suggested a maximum allowable value of 1.5. Because the infit MNSQ is sensitive to unexpected response patterns that are difficult to diagnose and therefore poses a greater threat to measurement, this study considered items unsuitable if the infit MNSQ was less than 0.6 or greater than 1.4 (Bond & Fox, 2013).
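As an illustration of the two fit indices, the sketch below computes infit and outfit MNSQ values from model residuals and applies the 0.6–1.4 screening rule described above. In practice the observed, expected, and variance arrays come from the estimated Rasch model (here, Winsteps); the input values shown are placeholders, not study data.

```python
import numpy as np

def item_fit(observed, expected, variance):
    """Infit and outfit mean-square (MNSQ) statistics for a single item.

    observed:  each person's response to the item
    expected:  model-expected score E_ni under the Rasch rating scale model
    variance:  model variance W_ni of each response
    """
    squared_residuals = (observed - expected) ** 2
    outfit = np.mean(squared_residuals / variance)    # unweighted mean square of residuals
    infit = squared_residuals.sum() / variance.sum()  # information-weighted mean square
    return infit, outfit

def misfits(infit_mnsq, low=0.6, high=1.4):
    """Screening rule adopted in this study (Bond & Fox, 2013)."""
    return infit_mnsq < low or infit_mnsq > high

# Placeholder values; real inputs come from the estimated model.
observed = np.array([1, 2, 2, 3, 4, 3, 2, 1], dtype=float)
expected = np.array([1.4, 2.1, 2.3, 2.8, 3.2, 3.0, 2.2, 1.6])
variance = np.array([0.5, 0.7, 0.7, 0.8, 0.6, 0.7, 0.7, 0.6])

infit, outfit = item_fit(observed, expected, variance)
print(round(infit, 2), round(outfit, 2), misfits(infit))
```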
The difficulty of the GSES items was assessed using the Wright map from the Rasch analysis as an indicator of construct validity. This map visually displays the distribution of respondents and items: respondents located toward the top have higher levels of the measured construct, whereas items located toward the top are more difficult for respondents. If the respondent mean was higher than the item mean, the items were interpreted as relatively easy for the respondents; conversely, if it was lower, the items were interpreted as relatively difficult (Bond & Fox, 2013).
To examine the appropriateness of the four-point Likert scale used for the GSES, a category function analysis was conducted under the Rasch model. The rating scale was evaluated using the average measure, structure measure, infit and outfit MNSQ of each category, and the thresholds. The average and structure measures were expected to increase as the category level increased; if the fit of any category was 1.5 or higher, its functioning was considered inadequate (Andrich, 2016). If the thresholds did not advance in an orderly manner across categories, the rating scale was considered not to reflect the construct well. Winsteps 3.6 was used for the analysis.
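For reference, the Andrich rating scale model underlying this category analysis can be written as follows, where θ_n is the person measure, δ_i the item difficulty, and τ_j the category thresholds (the structure measures reported in Table 5); under a well-functioning scale, the thresholds are expected to advance monotonically across categories. This is the standard form of the model cited above (Andrich, 2016), stated here only for clarity.

\[
P(X_{ni} = k) = \frac{\exp\!\left(\sum_{j=0}^{k} \left(\theta_n - \delta_i - \tau_j\right)\right)}{\sum_{m=0}^{M} \exp\!\left(\sum_{j=0}^{m} \left(\theta_n - \delta_i - \tau_j\right)\right)}, \qquad k = 0, 1, \ldots, M, \quad \tau_0 \equiv 0 .
\]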
The person and item separation indices were used to describe the reliability of the questionnaire in the Rasch analysis. A higher separation index indicates that the questionnaire can distinguish a greater number of distinct levels of person ability or item difficulty. A separation index higher than 2.0 was considered to reflect a good level of separation. Separation reliability was interpreted in the same way as Cronbach’s α (Boone et al., 2014).
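The separation index G and the separation reliability R are related by R = G²/(1 + G²), or equivalently G = √(R/(1 − R)). The short sketch below is provided only as a numerical check: a reliability of about 0.95 corresponds to a separation index of roughly 4.4, which is consistent with the values later reported in Table 6.

```python
import math

def separation_from_reliability(r):
    """Separation index G implied by a separation reliability R: G = sqrt(R / (1 - R))."""
    return math.sqrt(r / (1 - r))

def reliability_from_separation(g):
    """Separation reliability implied by a separation index G: R = G**2 / (1 + G**2)."""
    return g ** 2 / (1 + g ** 2)

print(round(separation_from_reliability(0.95), 2))  # about 4.36
print(round(reliability_from_separation(4.33), 2))  # about 0.95 (cf. person index, Table 6)
print(round(reliability_from_separation(4.50), 2))  # about 0.95 (cf. item index, Table 6)
```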

3. Results

3.1. Unidimensionality

The results of item appropriateness are presented in Table 2, Table 3 and Table 4. The item fit test determined the second item (“I find it relatively easy to focus on my goals”) to be unfit. After the second item was deleted and the scale was reanalyzed, the third item (“I am not embarrassed by difficulties because I believe in my abilities”) was determined to be unfit. Finally, eight items were evaluated as having an appropriate fit.
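The sequence of decisions can be traced directly from the published infit values. The sketch below simply replays the iterative removal using the statistics reported in Table 2, Table 3 and Table 4, dropping the single worst-fitting item at each pass; it does not re-estimate the model, which was fitted in Winsteps.

```python
# Infit MNSQ values reproduced from Table 2 (10 items), Table 3 (9 items), and Table 4 (8 items).
TABLE_INFIT = {
    frozenset(range(1, 11)): {1: 1.23, 2: 1.81, 3: 1.24, 4: 0.83, 5: 0.85,
                              6: 0.96, 7: 0.87, 8: 0.65, 9: 0.69, 10: 0.73},
    frozenset({1, 3, 4, 5, 6, 7, 8, 9, 10}): {1: 1.24, 3: 1.59, 4: 0.96, 5: 0.82,
                                              6: 1.00, 7: 1.02, 8: 0.67, 9: 0.75, 10: 0.74},
    frozenset({1, 4, 5, 6, 7, 8, 9, 10}): {1: 1.33, 4: 1.27, 5: 0.93, 6: 1.01,
                                           7: 0.92, 8: 0.82, 9: 0.70, 10: 0.78},
}

def fit_items(item_set):
    """Look up the published infit MNSQ values for a given item set."""
    return TABLE_INFIT[frozenset(item_set)]

kept = list(range(1, 11))
while True:
    infit = fit_items(kept)
    misfitting = {i: m for i, m in infit.items() if not 0.6 <= m <= 1.4}
    if not misfitting:
        break
    # Drop the item farthest from the ideal value of 1.0: item 2 first, then item 3.
    kept.remove(max(misfitting, key=lambda i: abs(misfitting[i] - 1.0)))

print(kept)  # [1, 4, 5, 6, 7, 8, 9, 10]
```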

3.2. Item Difficulty

Figure 1 shows a comparison of item difficulty and individual abilities. The item mean was higher than the person mean, indicating that some items were more difficult than the ability levels of the individuals with ID.

3.3. Suitability of the Rating Scale

The four-point Likert scale was suitable for measuring self-efficacy in individuals with ID (Table 5). The infit and outfit MNSQ values for each category were smaller than 1.5, and the structure measure increased with increasing levels.

3.4. Person and Item Separation Indices

The results of the person and item separation indices are presented in Table 6. Both separation indices were greater than 2.0, indicating good reliability.

4. Discussion

This preliminary study aimed to identify the need and direction for developing a self-efficacy measurement tool for people with ID by analyzing the reliability and validity of the GSES based on the Rasch model. The results demonstrated the unidimensionality of the GSES for individuals with ID. These findings not only confirm the psychometric validity of the GSES for this population but also deepen understanding of how the cognitive and contextual characteristics of individuals with ID influence their perceptions of self-efficacy. The item fit analysis revealed that Items 2 and 3 did not conform to the expectations of the Rasch model. This suggested that the cognitive characteristics of individuals with ID and their limitations in abstract self-awareness affected their interpretation of items and response consistency (McGrew & Evans, 2004; Schalock et al., 2021). These findings were consistent with those of Park and Park (2019), who analyzed the Rosenberg Self-Esteem Scale for individuals with ID and demonstrated the utility of the Rasch model for measuring the psychological characteristics of individuals with developmental disabilities, revealing a low fit for items containing abstract concepts. Measuring self-efficacy for individuals with ID requires considering both cognitive demands, such as sentence structure and readability, and contextual relevance, such as daily life or familiar environments. Therefore, strategies such as specificity, short sentence structure, and experience-based statements are required when selecting item wording (Kim & Gray, 2024). Alternative expressions should be developed, particularly for items that place high interpretive demands on respondents.
Jo et al. (2018) demonstrated that researchers can secure the reliability of measurement tools by considering differences in the comprehension abilities of individuals with ID and implementing explanation and response verification procedures. These results suggest the need for alternative wording and support procedures to enhance the interpretability of items in surveys targeting individuals with ID. In particular, panel surveys that include individuals with ID should consider an approach similar to that of Jo et al. to enhance data reliability, thereby enhancing the validity and usability of the results. Furthermore, the results revealed that the difficulty of the GSES exceeded the average abilities of individuals with ID. This finding provides valuable insight into how current standardized psychometric tools may inadvertently underestimate the strengths of individuals with ID. This was likely because the original scale was developed for the general population (Hampton & Mason, 2003). The confidence and self-efficacy that individuals with ID may possess in authentic or familiar settings may not be adequately captured by commonly used scales that rely on general statements requiring metacognitive reflection. Thus, the issue may not be low self-efficacy per se but rather a mismatch between the item format and respondents’ actual experiences: individuals with ID may have struggled with the levels of cognitive reasoning or emotional self-insight required by the GSES items. Similarly, Park and Park (2019) reported that discrepancies occurred between item difficulty and subject characteristics when existing scales did not sufficiently account for the developmental characteristics of individuals with ID. This finding is consistent with the developmental direction of the University of Washington Self-Efficacy Scale for individuals with disabilities, which uses items structured around the daily life and functions of individuals with disabilities, thereby enhancing the scale’s reliability and validity by presenting realistic items aligned with the respondents’ experiences (Amtmann et al., 2012). This highlights the importance of adopting a strengths-based assessment model (Thompson et al., 2016) that emphasizes the expression of self-efficacy based on experience and context.
Although the four-point Likert scale elicited relatively stable responses, suggesting its applicability for people with ID, the finding that only 1% of participants selected the highest option suggests that this option may not have been clearly understood or appropriately used. Finally, as the person and item separation indices were greater than 4, the GSES possessed sufficient sensitivity to discriminate between individuals with ID. This demonstrates the stability of the Rasch-based scale, as defined by B. D. Wright and Masters (1982), and confirms its ability to sensitively detect individual differences in self-efficacy among individuals with ID.
This study revealed that the GSES used in panel data could measure self-efficacy among individuals with ID. Most items showed good fit, confirming that they appropriately reflected the unidimensional structure of self-efficacy. However, certain adjustments are required. According to Bandura’s (1986) self-efficacy theory, individuals assess their abilities through a variety of cognitive appraisal processes, including task difficulty, prior success experiences, and environmental cues. The misfit of certain items in this study suggests that individuals with intellectual disabilities may engage in these cognitive processes differently from what the original GSES assumes, which helps explain the item-level misfit observed in the Rasch model. Gobeille et al. (2024) conducted a Rasch analysis of the GSES for older adults with visual impairment and highlighted the need for cultural and contextual appropriateness of existing self-efficacy scales. Similarly, this study found that some items failed to reflect the complexity and contextual dependence inherent in self-efficacy, suggesting the need to design items that fully reflect the life context of individuals with ID. The results of this study demonstrate that rather than simply applying existing theories to assess self-efficacy in people with intellectual disabilities, a theoretical restructuring that considers their cognitive characteristics and information-processing methods is necessary. In other words, while maintaining the core elements of self-efficacy, the items assessing it need to be simplified, made more specific, and action-oriented. Self-efficacy is a fundamental psychological resource for individuals with ID in various domains, including independence, career development, and social participation. Therefore, it is essential to accurately measure their self-efficacy and use it as a basis for interventions. Building on previous studies that developed and applied the My Future Preferences scale (Nota et al., 2010) and an exercise self-efficacy scale (Heller et al., 2004), future research should develop a self-efficacy scale that distinguishes subdomains such as social and professional self-efficacy.
Although the use of panel data collected through a systematic sampling process represents a methodological strength, several limitations of this study should be noted. First, it used a rating scale originally developed for the general population. Because the items were not designed specifically for individuals with ID, some contained abstract wording and required high levels of cognitive and verbal reasoning. Therefore, some items may have been difficult for some respondents to understand accurately. As these data were part of a panel survey that collected data from all individuals with disabilities, no alternative measure was available. Future research should develop tools to validly and reliably measure self-efficacy among individuals with ID. After developing a customized tool for people with ID, we hope to develop a method for utilizing the GSES in panel surveys that target individuals with ID. Second, the generalizability of the sample was limited. While the sample size of 232 was sufficient for Rasch analysis, exceeding the recommended minimum of 30 to 100 participants (Linacre, 1994), a larger sample size reduces the SE and leads to more stable estimation. To address this, further studies with larger samples are needed to generate more stable and reliable results. Moreover, this study used data from a panel study targeting individuals aged 15 years and older, which prevents generalizing the results to the entire population of individuals with ID. The GSES should be tested in practical contexts using a representative sample of individuals with ID of various ages to develop a more effective tool for this population. Third, bias may have been present in the self-reported responses. Of the 398 individuals with ID, only 232 were able to self-report and were included in the analysis. The remaining 166 were excluded from the analysis due to missing data. This may have led to an over-representation of the characteristics of individuals with milder ID who could self-report. Future studies should confirm the validity of the tool for people with severe ID, using additional support materials. Lastly, because this study was conducted in the Republic of Korea, the results reflect cultural characteristics unique to South Korea, such as the disability support system. Therefore, caution is needed when interpreting the findings and considering their applicability to other countries and regions. In this regard, cross-cultural studies are recommended.

5. Conclusions

This study analyzed items measuring self-efficacy in individuals with ID using the Rasch model. After deleting two of the ten items, the remaining eight items of the GSES showed good fit, confirming that they adequately reflected the unidimensional structure of self-efficacy. The results of this study suggest that when analyzing the self-efficacy of people with ID using national panel data, it is more appropriate to use only the eight items verified through Rasch analysis, rather than the results from all ten items. Self-efficacy is a key psychological resource for individuals with ID in various aspects, including independence, career development, and social participation. Therefore, precisely measuring self-efficacy and utilizing it as a basis for interventions is crucial. Future research should develop a self-efficacy scale that distinguishes subdomains, such as social and career self-efficacy. Furthermore, this scale should be developed using a representative sample of individuals with ID of various ages and validated in practical settings to develop a more effective tool.

Funding

This research was supported by the Regional Innovation System & Education (RISE) program through the Jeonbuk RISE Center, funded by the Ministry of Education (MOE) and the Jeonbuk State, Republic of Korea (2025-RISE-13-JJU).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data can be requested at: https://edi.kead.or.kr/BoardType17.do?bid=18&mid=37 (accessed on 25 November 2024).

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

GSES: General Self-Efficacy Scale
ID: Intellectual disabilities
GSSES: Glasgow Social Self-Efficacy Scale
IRT: Item response theory
CAT: Computer-adaptive testing
MNSQ: Mean square

References

  1. Amtmann, D., Bamer, A. M., Cook, K. F., Askew, R. L., Noonan, V. K., & Brockway, J. A. (2012). University of Washington self-efficacy scale: A new self-efficacy scale for people with disabilities. Archives of Physical Medicine and Rehabilitation, 93(10), 1757–1765. [Google Scholar] [CrossRef] [PubMed]
  2. Andrich, D. (2016). Rasch rating-scale model. In Handbook of item response theory (pp. 75–94). Chapman & Hall/CRC. [Google Scholar]
  3. APA (American Psychiatric Association). (2013). Diagnostic and statistical manual of mental disorders (5th ed). APA. [Google Scholar]
  4. Baker, F. B., & Kim, S. H. (2004). Item response theory: Parameter estimation techniques. CRC Press. [Google Scholar]
  5. Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215. [Google Scholar] [CrossRef]
  6. Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37(2), 122–147. [Google Scholar] [CrossRef]
  7. Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory (pp. 23–28). Prentice-Hall Inc. [Google Scholar]
  8. Bandura, A. (2006). Guide for constructing self-efficacy scales. Self-Efficacy Beliefs of Adolescents, 5(1), 307–337. [Google Scholar]
  9. Bandura, A., Adams, N. E., Hardy, A. B., & Howells, G. N. (1980). Tests of the generality of self-efficacy theory. Cognitive Therapy and Research, 4(1), 39–66. [Google Scholar] [CrossRef]
  10. Bandura, A., Jeffery, R. W., & Gajdos, E. (1975). Generalizing change through participant modeling with self-directed mastery. Behaviour Research and Therapy, 13(2–3), 141–152. [Google Scholar] [CrossRef]
  11. Bandura, A., & Schunk, D. H. (1981). Cultivating competence, self-efficacy, and intrinsic interest through proximal self-motivation. Journal of Personality and Social Psychology, 41(3), 586–598. [Google Scholar] [CrossRef]
  12. Battisti, F. D., Nicolini, G., & Salini, S. (2010). The Rasch model in customer satisfaction survey data. Quality Technology and Quantitative Management, 7(1), 15–34. [Google Scholar] [CrossRef]
  13. Beirne-Smith, M., Ittenbach, R. F., & Patton, J. R. (2002). Mental retardation (6th ed.). Merrill. [Google Scholar]
  14. Boat, T. F., & Wu, J. T. (Eds.). (2015). Clinical characteristics of intellectual disabilities. In Mental disorders and disabilities among low-income children (Committee to Evaluate the Supplemental Security Income Disability Program for Children with Mental Disorders; National Academies of Sciences, Engineering, and Medicine). National Academies Press (US). [Google Scholar]
  15. Bond, T. G., & Fox, C. M. (2013). Applying the Rasch model: Fundamental measurement in the human sciences. Psychology Press. [Google Scholar]
  16. Boone, W. J., Staver, J. R., & Yale, M. S. (2014). Person reliability, item reliability, and more. In Rasch analysis in the human sciences (pp. 217–234). Springer. [Google Scholar] [CrossRef]
  17. Costello, A. B., & Osborne, J. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research and Evaluation, 10(1), 7. [Google Scholar] [CrossRef]
  18. Cuskelly, M., & Gilmore, L. (2014). Motivation in children with intellectual disabilities. Research and Practice in Intellectual and Developmental Disabilities, 1(1), 51–59. [Google Scholar] [CrossRef]
  19. DeVellis, R. F., & Thorpe, C. T. (2021). Scale development: Theory and applications. Sage Publications. [Google Scholar]
  20. Embretson, S. E., & Reise, S. P. (2013). Item response theory for psychologists. Psychology Press. [Google Scholar]
  21. Erez, G., & Peled, I. (2001). Cognition and metacognition: Evidence of higher thinking in problem solving of adolescents with mental retardation. Education and Training in Mental Retardation and Developmental Disabilities, 36, 83–93. [Google Scholar] [CrossRef]
  22. Fabrigar, L. R., & Wegener, D. T. (2012). Exploratory factor analysis. Oxford University Press. [Google Scholar]
  23. Gacek, M., Smoleń, T., & Pilecka, W. (2017). Consequences of learned helplessness and recognition of the state of cognitive exhaustion in persons with mild intellectual disability. Advances in Cognitive Psychology, 13(1), 42. [Google Scholar] [CrossRef]
  24. Gobeille, M., Bittner, A. K., Malkin, A. G., Ho, J., Idman-Rait, C., Estabrook, M., Ross, N. C., & CARE Study Team. (2024). Rasch analysis of the new general self-efficacy scale: An evaluation of its psychometric properties in older adults with low vision. Health and Quality of Life Outcomes, 22(1), 90. [Google Scholar] [CrossRef]
  25. Hampton, N. Z., & Mason, E. (2003). Learning disabilities, gender, sources of efficacy, self-efficacy beliefs, and academic achievement in high school students. Journal of School Psychology, 41(2), 101–112. [Google Scholar] [CrossRef]
  26. Heller, T., Hsieh, K., & Rimmer, J. H. (2004). Attitudinal and psychosocial outcomes of a fitness and health education program on adults with Down syndrome. American Journal of Mental Retardation: AJMR, 109(2), 175–185. [Google Scholar] [CrossRef] [PubMed]
  27. Hunt, N., & Marshall, K. (2002). Exceptional children and youth: An introduction to special education. Houghton Mifflin. [Google Scholar]
  28. Jerusalem, M., & Schwarzer, R. (2014). Self-efficacy as a resource factor in stress appraisal processes. In Self-efficacy (pp. 195–214). Taylor & Francis. [Google Scholar]
  29. Jo, G., Rossow-Kimball, B., & Lee, Y. (2018). Effects of 12-week combined exercise program on self-efficacy, physical activity level, and health related physical fitness of adults with intellectual disability. Journal of Exercise Rehabilitation, 14(2), 175–182. [Google Scholar] [CrossRef] [PubMed]
  30. Kim, J., & Gray, J. A. (2024). Measuring palliative care self-efficacy of intellectual and developmental disability staff using Rasch models. Palliative and Supportive Care, 22(1), 146–154. [Google Scholar] [CrossRef]
  31. Linacre, J. M. (1994). Sample size and item calibration stability. Rasch Measurement Transactions, 7(4), 328. Available online: https://www.rasch.org/rmt/rmt74m.htm (accessed on 26 November 2025).
  32. Linacre, J. M., Stone, M. H., William, J., Fisher, P., & Tesio, L. (2002). Rasch measurement. Rasch Measurement Transactions, 16, 871. [Google Scholar]
  33. Litt, M. D. (1988). Self-efficacy and perceived control: Cognitive mediators of pain tolerance. Journal of Personality and Social Psychology, 54(1), 149–160. [Google Scholar] [CrossRef]
  34. Luszczynska, A., Scholz, U., & Schwarzer, R. (2005). The general self-efficacy scale: Multicultural validation studies. The Journal of Psychology, 139(5), 439–457. [Google Scholar] [CrossRef] [PubMed]
  35. McGrew, K. S., & Evans, J. (2004). Expectations for students with cognitive disabilities: Is the cup half empty or half full? Can the cup flow over? (Synthesis Report 55). Institute on Community Integration, University of Minnesota. [Google Scholar]
  36. Nel, P., & Boshoff, A. (2016). Evaluating the factor structure of the general self-efficacy scale. South African Journal of Psychology, 46(1), 37–49. [Google Scholar] [CrossRef]
  37. Nota, L., Ginevra, M. C., & Carrieri, L. (2010). Career interests and self-efficacy beliefs among young adults with an intellectual disability. Journal of Policy and Practice in Intellectual Disabilities, 7(4), 250–260. [Google Scholar] [CrossRef]
  38. O’Leary, A. (1985). Self-efficacy and health. Behaviour Research and Therapy, 23(4), 437–451. [Google Scholar] [CrossRef] [PubMed]
  39. Park, J.-Y., & Park, E. Y. (2019). The Rasch analysis of Rosenberg self-esteem scale in individuals with intellectual disabilities. Frontiers in Psychology, 10. [Google Scholar] [CrossRef]
  40. Patrick, H., Hicks, L., & Ryan, A. M. (1997). Relations of perceived social efficacy and social goal pursuit to self-efficacy for academic work. The Journal of Early Adolescence, 17(2), 109–128. [Google Scholar] [CrossRef]
  41. Payne, R., & Jahoda, A. (2004). The Glasgow Social self-efficacy Scale—A new scale for measuring social self-efficacy in people with intellectual disability. Clinical Psychology & Psychotherapy, 11(4), 265–274. [Google Scholar] [CrossRef]
  42. Peterson, J. J., Lowe, J. B., Peterson, N. A., Nothwehr, F. K., Janz, K. F., & Lobas, J. G. (2008). Paths to leisure physical activity among adults with intellectual disabilities: Self-efficacy and social support. American Journal of Health Promotion: AJHP, 23(1), 35–42. [Google Scholar] [CrossRef]
  43. Preacher, K. J., & MacCallum, R. C. (2003). Repairing Tom Swift’s electric factor analysis machine. Understanding Statistics, 2(1), 13–43. [Google Scholar] [CrossRef]
  44. Schalock, R. L., Luckasson, R., & Tassé, M. J. (2021). An overview of intellectual disability: Definition, diagnosis, classification, and systems of supports (12th ed.). American Journal on Intellectual and Developmental Disabilities, 126(6), 439–442. [Google Scholar] [CrossRef]
  45. Schwarzer, R., Bäßler, J., Kwiatek, P., Schröder, K., & Zhang, J. X. (1997). The assessment of optimistic self-beliefs: Comparison of the German, Spanish, and Chinese versions of the general self-efficacy scale. Applied Psychology, 46(1), 69–88. [Google Scholar] [CrossRef]
  46. Smith, R. M., & Miao, C. Y. (1994). Assessing unidimensionality for Rasch measurement. Objective Measurement: Theory into Practice, 2, 316–327. [Google Scholar]
  47. Stevens, G., Jahoda, A., Matthews, L., Hankey, C., Melville, C., Murray, H., & Mitchell, F. (2018). A theory-informed qualitative exploration of social and environmental determinants of physical activity and dietary choices in adolescents with intellectual disabilities in their final year of school. Journal of Applied Research in Intellectual Disabilities, 31, 52–67. [Google Scholar] [CrossRef]
  48. Taylor, R. L., Richards, S. B., & Brady, M. P. (2005). Mental retardation: Historical perspectives, current practices, and future directions. Allyn & Bacon. [Google Scholar]
  49. Thompson, J. R., Shogren, K. A., & Wehmeyer, M. L. (2016). Supports and support needs in strengths-based models of intellectual disability. In Handbook of research-based practices for educating students with intellectual disability (pp. 39–57). Routledge. [Google Scholar]
  50. Van der Linden, W. J., & Glas, C. A. W. (2000). Computerized adaptive testing: Theory and practice. Kluwer Academic Publishers. [Google Scholar]
  51. Wright, B., & Panchapakesan, N. (1969). A procedure for sample-free item analysis. Educational and Psychological Measurement, 29(1), 23–48. [Google Scholar] [CrossRef]
  52. Wright, B. D., & Masters, G. N. (1982). Rating scale analysis. MESA Press. [Google Scholar]
  53. Wright, B. D., & Stone, M. H. (1979). Best test design: Rasch measurement. MESA Press. [Google Scholar]
  54. Yu, S., Wang, T., Zhong, T., Qian, Y., & Qi, J. (2022). Barriers and facilitators of physical activity participation among children and adolescents with intellectual disabilities: A scoping review. Healthcare, 10(2), 233. [Google Scholar] [CrossRef]
Figure 1. Item difficulty map. # indicates the item number.
Table 1. Participant characteristics.

Category | n | % | χ²
Gender
  Men | 147 | 63.4 | 18.589 **
  Women | 85 | 36.6 |
Age (years)
  15–29 | 107 | 46.1 | 136.879 **
  30–39 | 63 | 27.2 |
  40–49 | 29 | 12.5 |
  50–59 | 29 | 12.5 |
  60–64 | 4 | 1.7 |
Education level
  No education | 16 | 6.9 | 352.017 **
  Elementary school graduate | 26 | 11.2 |
  Junior high school graduate | 27 | 11.6 |
  High school graduate | 144 | 62.1 |
  College graduate | 11 | 4.7 |
  University graduate | 8 | 3.4 |
Severity of disability
  Grade 1 | 40 | 17.2 | 27.379 **
  Grade 2 | 78 | 32.8 |
  Grade 3 | 116 | 50.0 |
Comorbid disability
  Yes | 23 | 9.9 | 149.121 **
  No | 209 | 90.1 |
Residential area
  Big city | 83 | 35.8 | 6.060 *
  Small or medium-sized city | 60 | 25.9 |
  Rural area | 89 | 38.4 |
Marital status
  Unmarried | 186 | 80.2 | 383.017 **
  Married | 31 | 13.4 |
  Divorced | 10 | 4.3 |
  Bereaved | 5 | 2.2 |
Note. * p < 0.05; ** p < 0.001; n = number of participants; Grade 1 = IQ < 35 and requires lifelong supervision; Grade 2 = IQ 35–49 and requires continuous supervision or specific training; Grade 3 = IQ 50–70 and capable of social and vocational participation with appropriate support.
Table 2. Item fit statistics for ten items.

Item No. | Measure | SE | Infit MNSQ | Infit Z-Value | Outfit MNSQ | Outfit Z-Value
1 | 31.21 | 1.70 | 1.23 | 2.1 | 1.39 | 2.8
2 * | 51.89 | 1.75 | 1.81 | 6.0 | 1.69 | 4.2
3 | 49.15 | 1.74 | 1.24 | 2.1 | 1.16 | 1.2
4 | 51.89 | 1.75 | 0.83 | −1.6 | 0.72 | −2.2
5 | 52.19 | 1.75 | 0.85 | −1.4 | 0.75 | −1.9
6 | 51.58 | 1.75 | 0.96 | −0.4 | 0.91 | −0.6
7 | 54.13 | 1.76 | 0.87 | −1.2 | 0.87 | −0.9
8 | 50.67 | 1.75 | 0.65 | −3.7 | 0.56 | −3.8
9 | 54.66 | 1.76 | 0.69 | −3.1 | 0.60 | −3.3
10 | 52.62 | 1.76 | 0.73 | −2.7 | 0.64 | −3.0
Note: * Items did not fit the Rasch model (infit MNSQ outside the 0.60–1.40 range). SE, standard error; MNSQ, mean square.
Table 3. Item fit statistics for nine items.

Item No. | Measure | SE | Infit MNSQ | Infit Z-Value | Outfit MNSQ | Outfit Z-Value
1 | 30.95 | 1.61 | 1.24 | 2.4 | 1.34 | 2.5
3 * | 48.92 | 1.66 | 1.59 | 5.0 | 1.45 | 3.0
4 | 51.41 | 1.67 | 0.96 | −0.4 | 0.87 | −0.9
5 | 52.25 | 1.67 | 0.82 | −1.8 | 0.69 | −2.5
6 | 52.53 | 1.67 | 1.00 | 0.0 | 0.94 | −0.4
7 | 54.60 | 1.69 | 1.02 | 0.3 | 1.05 | 0.4
8 | 51.97 | 1.67 | 0.67 | −3.6 | 0.55 | −3.9
9 | 54.22 | 1.68 | 0.75 | −2.6 | 0.65 | −2.8
10 | 53.17 | 1.68 | 0.74 | −2.7 | 0.64 | −2.9
Note: * Items did not fit the Rasch model (infit MNSQ outside the 0.60–1.40 range). SE, standard error; MNSQ, mean square.
Table 4. Item fit statistics for eight items.

Item No. | Measure | SE | Infit MNSQ | Infit Z-Value | Outfit MNSQ | Outfit Z-Value
1 | 29.46 | 1.69 | 1.33 | 3.0 | 1.50 | 3.2
4 | 51.55 | 1.72 | 1.27 | 2.5 | 1.16 | 1.1
5 | 51.85 | 1.72 | 0.93 | −0.6 | 0.85 | −1.0
6 | 51.85 | 1.72 | 1.01 | 0.1 | 0.93 | −0.5
7 | 55.19 | 1.74 | 0.92 | −0.8 | 0.86 | −0.9
8 | 51.55 | 1.72 | 0.82 | −1.9 | 0.72 | −2.1
9 | 56.02 | 1.74 | 0.70 | −3.2 | 0.59 | −3.2
10 | 52.53 | 1.73 | 0.78 | −2.3 | 0.68 | −2.4
Note: SE, standard error; MNSQ, mean square.
Table 5. Rating scale analysis of the GSES.

Category Level | Observed Count | Observed Rate (%) | Average Measure | Infit MNSQ | Outfit MNSQ | Structure Measure
1 | 225 | 16 | −75.18 | 0.84 | 0.67 | None
2 | 807 | 57 | −31.98 | 0.93 | 0.98 | −67.33
3 | 365 | 26 | 12.06 | 1.07 | 1.07 | −2.75
4 | 17 | 1 | 55.95 | 1.31 | 1.40 | 70.07
Table 6. Person and item separation indices of the GSES.

Category | Separation Index | Reliability
Person | 4.33 | 0.95
Item | 4.50 | 0.95