Article

Generative Artificial Intelligence in Education: Insights from Rehabilitation Sciences Students

Robert Dekerlegand, Alison Bell, Malachy J. Clancy, Erin R. Pletcher and Travis Pollen
1 Department of Physical Therapy, Thomas Jefferson University, Philadelphia, PA 19107, USA
2 Department of Occupational Therapy, Thomas Jefferson University, Philadelphia, PA 19107, USA
3 Department of Exercise Science, Thomas Jefferson University, Philadelphia, PA 19107, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(3), 380; https://doi.org/10.3390/educsci15030380
Submission received: 3 February 2025 / Revised: 11 March 2025 / Accepted: 15 March 2025 / Published: 19 March 2025
(This article belongs to the Section Higher Education)

Abstract

Little is known about how students in rehabilitation sciences accept and use generative artificial intelligence (GenAI) in their educational endeavors. We explored student perceptions, acceptance, and utilization of GenAI in school and their beliefs about its usefulness and ease of use. A cross-sectional online survey was conducted of adult students enrolled in rehabilitation sciences programs within a private urban academic university. The survey was based on the Technology Acceptance Model with questions specific to the use of GenAI in school. A total of 196 responses were included in the analysis (32.9% response rate), with responses received across all rehabilitation sciences programs. Half the respondents (50%) reported using GenAI “some of the time” in school, and 6.1% indicated frequent usage of “most of” or “all” the time. Users reported using GenAI to explain and review concepts (n = 49), to generate content or ideas (n = 20), and for grammatical support (n = 21). Users perceived GenAI as more useful and easier to use than non-users. Over half of rehabilitation sciences students use GenAI for school; however, only 6.1% report routine usage. Given the expected growth and potential of GenAI, faculty should explore strategies to facilitate the acceptance and appropriate use of this innovative technology.

1. Introduction

Artificial intelligence (AI), defined as the simulation of human-like intelligence by machines (Davenport & Kalakota, 2019), has broad application and the capacity to drive innovation in many different fields. Generative artificial intelligence (GenAI) is a specific subset of AI capable of generating new images, text, and audio in response to a prompt by the user (Fui-Hoon Nah et al., 2023). Within healthcare, GenAI is beginning to reshape comprehensive patient care, from clinician training and education to patient engagement, diagnosis, and treatment planning (Wójcik et al., 2023). The proper integration of GenAI into medical education is essential in order to ensure that future health professionals do not fall behind the rapidly evolving healthcare landscape.
Consumer awareness of, and access to, GenAI is relatively new, with novel applications and/or software continuing to be developed. ChatGPT (GPT-3.5 Turbo) was the first text generator widely available and marketed to consumers at no cost (Cotton et al., 2024). Introduced in 2022 by OpenAI, ChatGPT was quickly adopted, gaining 100 million active users in only two months (Hu, 2023). Within academia, the initial commentary focused on concerns related to academic integrity, with academicians discussing strategies to detect and limit the use of GenAI in student submissions (Huang, 2023). The discourse around GenAI is evolving from a restrictive stance to a more proactive and constructive approach. Literature reviews have begun to document the emerging implementation of this technology in healthcare education, revealing preliminary usage patterns among educators that include using GenAI for developing curriculum and assessment materials (Almansour & Alfhaid, 2024; Furey et al., 2024; Hale et al., 2024; Janumpally et al., 2025). Educators are now exploring how this technology can be strategically integrated into the teaching and learning processes instead of primarily focusing on preventing GenAI’s presence (Mansour & Wong, 2024; Vaughn et al., 2024). Rather than viewing GenAI as a threat to or replacement for traditional educational methods, the emphasis is shifting to understanding the opportunities and capabilities of GenAI to facilitate its responsible use (Sallam, 2023). Nevertheless, the successful integration of GenAI in education requires student acceptance and use.
There is an emerging understanding of how health science and medical education students are using GenAI to support their learning through the use of personalized tutoring systems that adapt to their needs (Almansour & Alfhaid, 2024; Furey et al., 2024; Hale et al., 2024; Janumpally et al., 2025; Preiksaitis & Rose, 2023), as writing assistants for academic assignments and research papers (Hale et al., 2024; Janumpally et al., 2025; Preiksaitis & Rose, 2023), as ideation tools (Hale et al., 2024; Janumpally et al., 2025; Preiksaitis & Rose, 2023), and as support systems for developing clinical reasoning skills (Furey et al., 2024; Janumpally et al., 2025). These reviews highlight the potential for GenAI to help develop the next generation of healthcare professionals and reflect the growing importance of this topic. However, these reviews primarily used a narrative or scoping methodology (Almansour & Alfhaid, 2024; Furey et al., 2024; Janumpally et al., 2025; Preiksaitis & Rose, 2023), and there is a need for additional original research given the constant evolution of GenAI. Though data support the positive impact of GenAI on academic outcomes (Roganović, 2024; Sun & Zhou, 2024), the prevalence of GenAI use among college students can vary (Baek et al., 2024; Golding et al., 2024). A gap remains in the understanding of health professions students’ actual usage patterns and adoption rates, and even less is known about the perceptions of non-users, who may need support to adopt this technology.
The complexity and depth of healthcare curricula increasingly challenge traditional pedagogical approaches. There is growing acceptance of AI in clinical practice, highlighting the importance of integrating this technology into student education (Alsobhi et al., 2022). Although the threats of GenAI to academic integrity (Kazley et al., 2024) and, predominantly, academicians’ attitudes toward it (Severin & Gagnon, 2023) have been explored, less is known about students’ broader perceptions, acceptance, and utilization of GenAI in healthcare education. Specific to its use in school, key questions remain about (1) how health professions students perceive the usefulness and ease of use of GenAI, (2) their current usage patterns, (3) barriers and facilitators to adopting GenAI, and (4) factors that differentiate users from non-users.
To address these questions, a mixed-methods survey study of undergraduate and graduate rehabilitation sciences students was conducted across disciplines in health sciences to describe their perceptions, acceptance, and utilization of GenAI in their education. To better understand students’ patterns, we explored differences between GenAI users’ and non-users’ beliefs about GenAI’s usefulness and ease of use as well as barriers and facilitators impacting its adoption. The survey approach was guided by the Technology Acceptance Model (TAM) (Davis, 1989; Davis et al., 1989) and the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003). These well-established models offer structured methodologies for analyzing technology acceptance and use. The insights gained from this study can be used to help promote the acceptance and responsible use of GenAI in healthcare education.

2. Materials and Methods

2.1. Study Design

This mixed-methods, cross-sectional study surveyed a sample of students enrolled in undergraduate and graduate rehabilitation sciences programs at a private academic university in the northeast region of the United States. The anonymous survey was conducted from 11 September 2024 through 4 October 2024. The initial invitation to participate in the survey was distributed through the Canvas learning management system in a non-course-related student group. To enhance recruitment, verbal announcements of the study opportunity were made in neutral courses to avoid undue influence, and a reminder invitation was electronically sent one week later. All study materials and methods were approved as exempt by the Institutional Review Board of the parent university.

2.2. Participants

All students actively enrolled in Athletic Training (AT), Exercise Science (ES), Occupational Therapy (OT), Physical Therapy (PT), or Speech Language Pathology (SLP) programs within the parent university were eligible to participate in the survey. All were graduate-level programs, except for the ES and OT Assistant programs, which were at the undergraduate and associate levels, respectively. To incentivize survey completion, participants were given the option to enter a raffle to win one of ten $25 gift cards to the campus store. Students under 18 years of age were excluded. No other specific exclusion criteria were applied.

2.3. Survey Development

The survey was grounded in theory and developed by the study investigators, who have experience in both survey development and GenAI. To maximize validity, the survey was reviewed by three external faculty members with expertise in GenAI in education. Based on this external feedback, survey questions were revised and subsequently tested in a small group of eligible students from each of the targeted programs to ensure alignment with the researchers’ intent. Using this feedback, the final survey was created to be completed in 5–10 min and was divided into three parts that explored participants’ perceived usefulness and ease of use of GenAI, facilitators of use and perceived accuracy of information generated, and participant demographics. Survey questions specific to GenAI included a combination of 4-point Likert scale, dichotomous (yes/no), select-all, and open-text entry questions. Using branching logic, two survey versions were created using nearly identical items with adjusted verbiage to compare individuals who use GenAI versus those who do not. In addition, qualitative data were obtained exploring how participants perceive the usefulness of GenAI in education and their familiarity with specific GenAI tools. The survey was distributed via Qualtrics™ (SAP, Provo, UT, USA). A copy of the survey can be found in the Supplementary Materials.
The development of survey questions to assess the perceived usefulness and ease of use of GenAI was grounded in the TAM, a validated scale that measures factors influencing the likelihood of individuals adopting new technology based on their perceptions (Davis, 1989). For the purposes of this study, the language of the TAM-based survey questions was modified to focus on the use of GenAI specifically in school. Additional questions were developed based on the Unified Theory of Acceptance and Use of Technology (UTAUT) to explore the influence of the social environment and facilitating conditions on the use of GenAI. The UTAUT expanded on the TAM by describing facilitating conditions and social influences as additional factors that influence the actual use of the technology (Venkatesh et al., 2003). Both models (TAM and UTAUT) are validated tools used to describe factors related to the acceptance and use of novel technology within diverse healthcare settings (AlQudah et al., 2021). The final section of the survey sought general demographic data, including the student’s program, year of study, undergraduate major, age, gender identity, race and ethnicity, and approximate household income during childhood.

2.4. Data Management and Analysis

Survey data were exported from Qualtrics into SPSS version 29.0.2.0 (IBM Corporation, Somers, NY, USA), and a 5% random sample was reviewed for completion and accuracy. All data were summarized with appropriate descriptive statistics to describe the sample and their responses. Continuous data were summarized by mean (SD) or median [range], and categorical data were summarized by frequency counts and percentages. Because few respondents selected “strongly agree” or “strongly disagree” on the 4-point Likert scale, responses were combined into two categories (agree/strongly agree and disagree/strongly disagree) for ease of interpretation. Given the exploratory nature of the study, data were primarily presented and analyzed descriptively. Where indicated, differences between users and non-users of GenAI were compared using chi-square analyses for categorical data. Significance was set to α = 0.05. Missing data were excluded from analyses, and the number of responses for each item is reported.
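To make the analytic steps above concrete, the sketch below is a hedged illustration (the authors used SPSS; this Python version uses hypothetical counts, not study data, and whether a continuity correction was applied in the original analysis is not reported). It shows how a 4-point Likert item can be collapsed into two categories and compared between users and non-users with a chi-square test.

# Hedged illustration only: hypothetical counts, not study data.
from scipy.stats import chi2_contingency

# Hypothetical 4-point Likert counts for one item (strongly disagree, disagree, agree, strongly agree)
users_likert = [9, 28, 55, 17]
non_users_likert = [20, 35, 24, 4]

# Collapse into disagree/strongly disagree vs. agree/strongly agree
users = [sum(users_likert[:2]), sum(users_likert[2:])]
non_users = [sum(non_users_likert[:2]), sum(non_users_likert[2:])]

# 2 x 2 contingency table: rows = group, columns = disagree vs. agree
observed = [users, non_users]

# correction=False skips Yates' continuity correction; the original SPSS settings are unknown
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
print("significant at alpha = 0.05" if p < 0.05 else "not significant at alpha = 0.05")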
Open-ended questions were analyzed using summative qualitative content analysis. Content analysis is a qualitative method of placing text data in categories to create meaning from the responses (Hsieh & Shannon, 2005). Study authors reviewed the free text data and collaboratively developed a coding framework with descriptions and rules (Supplementary Materials). Free text responses were coded independently (percent agreement = 83%). Conflicts were discussed, and a consensus was reached between the coders.
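For clarity, percent agreement here is simply the proportion of free-text responses assigned the same code by both coders; the toy sketch below uses hypothetical codes (not the study’s actual coding framework) to show the calculation.

# Hedged illustration; the codes below are hypothetical, not the study's framework.
coder_a = ["explain_concepts", "grammar", "generate_ideas", "study_aid", "explain_concepts"]
coder_b = ["explain_concepts", "grammar", "study_aid", "study_aid", "explain_concepts"]

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
percent_agreement = 100 * agreements / len(coder_a)
print(f"Percent agreement: {percent_agreement:.0f}%")  # 80% in this toy example; disagreements go to consensus discussion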

3. Results

3.1. Participants

A total of 220 participants responded to the survey. Data were excluded if participants completed less than 15% of the survey (n = 24). A total of 196 responses (32.9% response rate) were included in the final analysis, with responses received from all programs within the college. Table 1 illustrates the demographic data for the participants included. Respondents were from the OT (n = 85), PT (n = 56), AT (n = 12), ES (n = 18), and SLP (n = 3) programs, with 22 individuals not indicating their associated program. The mean age of respondents was 23.0 ± 2.7 years old. Of those who reported their gender identity, 81.1% identified as women (n = 142), 16% as men (n = 28), 1.1% as non-binary/third gender (n = 2), and 1.7% preferred not to say (n = 3). Regarding race and ethnicity, 132 respondents identified as white, 22 as Asian, 12 as Black or African American, 11 as Hispanic/Latino/Spanish, 1 as Arabic/Middle Eastern, 6 preferred not to say, and 1 selected “other”.

3.2. Use of GenAI

Over half (56.1%) of the respondents indicated using GenAI at least “some of the time” in school, with only 6.1% reporting frequent use as either “most of” or “all” the time. Forty-four percent indicated that they do not use GenAI in school. Of those who have used GenAI for school, ChatGPT was the most common platform (n = 81), followed by Google/Gemini (n = 32), Grammarly (n = 9), ChatPDF (n = 4), Claude (n = 4), Microsoft Copilot/Bing Chat (n = 4), Adobe Firefly (n = 3), Meta Llama (n = 2), Packback (n = 2), SciSpace (n = 2), and Snapchat AI, Jungle, Quizlet, Testportal AI, and Quillbot (n = 1 each). In non-users of GenAI, ChatGPT was also the most recognized product (n = 76), followed by Google/Gemini (n = 59), Microsoft Copilot/Bing Chat (n = 31), and Adobe Firefly (n = 15). Only two respondents stated they pay to use GenAI for school, with one subscribing to ChatGPT and the other selecting “other”.
In response to the open-ended question, 85% of users identified how they “actually” use GenAI to support their schoolwork, and 66% of non-users identified how they “could” use GenAI in their schoolwork. Most respondents identified more than one use of GenAI, resulting in higher frequency counts than the number of respondents. Among GenAI users, the most frequent uses were explaining and reviewing concepts (n = 49), generating content or ideas for schoolwork (n = 20), and obtaining grammatical support (n = 21). The most frequently identified potential uses of GenAI by non-users were explaining and reviewing concepts (n = 24), generating content or ideas for schoolwork (n = 18), and serving as a study aid (n = 16) (Figure 1).
Two students who used GenAI, and five students who did not, reported concerns about the negative impact of GenAI use on deeper learning. Illustrative comments included the following:
“I think overreliance on AI can be detrimental to the learning process. While it can be used as a tool for templates, at this point I prefer not to use it” (Non-user 1 response).
“I can see how using it on certain classes that are “easier” but have a higher workload, like the heavy writing classes. But, in the cases of classes like orgo, calc, physics, etc, just using AI wouldn’t help you learn if you’re only using it for answering homework. Yes, getting 100% on your homework grade boosts your grade, but failing the quizzes and tests reflects harder on your grade. Only if using the AI for learning how to do certain problems would I view it as a helpful resource in education. But that’s what the textbook is for” (Non-user 2 response).
“Generative AI helps me brainstorm concepts in classwork/assignments that can then be used to expand upon individually. I see the tool as a starting point, rather something to rely on for the entirety of a task. If it is utilized too often or without thought/consideration, I feel as though it is doing a disservice to my education and ability to absorb material effectively” (User 1 response).

3.3. Perceptions of GenAI

The majority of the survey participants agreed with all TAM statements that GenAI is both useful (Table 2) and easy to use (Table 3). Users more frequently perceived GenAI as useful and easy to use compared to non-users in most categories. Significant differences were noted in five of the seven questions pertaining to usefulness and five of the six questions pertaining to ease of use. Specifically, more users believed that GenAI could improve performance (p < 0.001), increase productivity (p = 0.015), enhance their effectiveness (p < 0.001), help their ability to learn (p < 0.001), and is overall useful (p < 0.001) in school as compared to non-users. In addition, more users found GenAI easy to learn to use (p = 0.038), easy to use (p < 0.001), easy to get to do the intended task (p < 0.001), clear and understandable (p < 0.001), and flexible to interact with (p = 0.016) as compared to non-users.

3.4. Facilitators and Knowledge of GenAI

Survey responses regarding facilitators of using GenAI and its perceived accuracy are shown in Table 4. A larger percentage of users (88.1%) reported that peers they respect use GenAI as compared to only 57.7% of non-users (p < 0.001). Most respondents (87.2%) either perceived “some” faculty as supportive of GenAI use or were uncertain of faculty views, with no difference between users and non-users in perceptions of faculty support. In terms of resources, a greater percentage of users (82.4%) agreed that they had the resources needed to use GenAI as compared to 64.1% of non-users (p = 0.005). Nearly all respondents (98.9%) identified that GenAI does not always provide accurate information. No respondent believed they bore no responsibility for the accuracy of information from GenAI, with 81.8% of non-users accepting full responsibility as compared to 69.3% of users (p = 0.057). More users (50.5%) believed they knew how to cite the use of GenAI as compared to 28.6% of non-users (p = 0.003); however, more non-users (65.3%) indicated they would always disclose the use of GenAI if used as compared to 24.0% of users (p < 0.001).

4. Discussion

To our knowledge, this was the first investigation to explore patterns of GenAI acceptance and utilization among rehabilitation sciences students using a mixed-methods approach. The combined quantitative and qualitative analyses enrich the understanding of factors differentiating non-users from users. Users exhibited more favorable views of GenAI’s utility and accessibility across multiple TAM dimensions than non-users and reported stronger peer influence. Though both groups identified similar applications for GenAI, our theory-driven cross-sectional survey revealed differences in perceptions and practices that may influence the adoption of this technology. Non-users demonstrated greater commitment to information accuracy, accountability, and disclosure of GenAI use. These contrasting perspectives, alongside shared concerns about impacts on deeper learning, provide a better understanding of the perceptions and usage of GenAI among health science students that can guide the successful integration of this educational technology into their education (Kazley et al., 2024).
Few students are frequent users of GenAI: 50% of students reported using GenAI tools “some of the time” for school purposes, but only 6.1% reported using GenAI “most of” or “all” the time. This is less than the usage rates identified in the general college population (Amoozadeh et al., 2024; Baek et al., 2024). When students use GenAI, they largely use it as a tool to aid their learning rather than to replace their active involvement in school. The main uses of GenAI reported by students in the current study are consistent with reports from other health science disciplines (Almansour & Alfhaid, 2024; Furey et al., 2024; Hale et al., 2024; Janumpally et al., 2025; Kazley et al., 2024; Preiksaitis & Rose, 2023). These results are encouraging, as respondents identified multiple ways to harness the opportunities and capabilities of GenAI to support their learning, which aligns with the efforts of faculty who aim to encourage GenAI use. Nonetheless, a large percentage of health sciences students were not using GenAI and may subsequently be unprepared to interact with this technology in their profession.
Users of GenAI were more likely to perceive GenAI as both useful and easy to use, consistent with the predictions of the TAM. Though users perceived GenAI as easier to use, there was no difference in the perceived ability to become “skillful” in its use. In addition, users were more likely to perceive GenAI as useful in school, though there was no statistically significant difference between users and non-users in the perception of GenAI’s ability to accomplish tasks and make school easier. This suggests that non-users may perceive a potential use for GenAI but have concerns about the impact on their learning. The free-text comments amplified the concern about a negative effect on learning. In combination, these findings add to the current evidence by suggesting specific perspectives of health sciences students that can be addressed to foster the appropriate adoption and continued use of GenAI. Instructing non-users in GenAI basics and appropriate use to support learning, while building the skills of current users, may advance the awareness and mastery of this educational technology.
Despite the wide range of GenAI products available, both users and non-users were most familiar with ChatGPT. In contrast to prior work specific to ChatGPT usage (Baek et al., 2024), our respondents identified other GenAI platforms, illustrating the need for faculty to be versed in various platforms to meet student needs. Across all of the different platforms being used, nearly all respondents acknowledged that GenAI does not always provide accurate information, and a similarly high percentage of users acknowledged that they would not always be able to recognize inaccurate information. These findings highlight that some students understand there is still a need for human supervision and approval of GenAI output to ensure accuracy and reliability. Interestingly, non-users had a greater awareness of their responsibility for the accuracy of information provided by GenAI than users. These observations should be investigated further to ensure current users understand their responsibilities, given the potential for inaccurate and biased information produced by GenAI.
An understanding of the barriers and facilitators associated with GenAI use is needed for the effective integration of GenAI in the educational environment. Consistent with prior work and the UTAUT model, the social influences of peers and faculty appear to affect student acceptance and use of GenAI for school (Gado et al., 2022). More users reported that peers they respected were using GenAI compared to non-users, and non-users had more uncertainty about faculty acceptance of GenAI. Interestingly, respondents in the current survey do not believe that faculty support the use of GenAI, which may contribute to the lower prevalence of GenAI use in the present data as compared to other reports (Amoozadeh et al., 2024; Baek et al., 2024). While it has been reported that 72% of instructors have used GenAI as an instructional tool, a standard practice has not been established, and many faculty members still restrict its use (Ruediger et al., 2024). It is unclear if this perception truly reflects faculty beliefs around GenAI. Previous work has found that students perceive faculty as not giving enough guidance on allowable use, and there are varying perceptions of what constitutes cheating with the use of GenAI (Kazley et al., 2024). Regardless of their stance on GenAI usage, faculty should share their informed decision with students to help them distinguish between the appropriate and inappropriate use of GenAI in a learning environment.
The results of the present study provide insights that can facilitate and guide the proper integration of GenAI in health sciences education. Higher learning institutions are developing use policies to address the rapid shift in pedagogical practices caused by GenAI. Nevertheless, the literature lacks recommendations and a methodology for encouraging the responsible adoption of GenAI for learning. Consistent with the predictions of the TAM and UTAUT, developing competent users of GenAI will require addressing perceptions of the technology’s ease of use and usefulness and incorporating social influences into that training. For non-users, the concern about a negative impact on learning must be addressed by providing training on how to use GenAI to support learning, along with data demonstrating its positive effects. In contrast, current users may require advanced training to develop the skillful application and understanding of this educational technology.

5. Limitations

The observations reported in the present study should be considered within the context of its limitations. We employed both purposive and convenience sampling, and these results represent rehabilitation science students at a private urban university. Though these results may generalize to similar settings and disciplines, caution should be used when extrapolating to different students or settings. While our response rate was 32.9%, students may not have been fully transparent in their responses due to the ethical concerns around using GenAI within higher education and limited guidance on its use within the parent institution. Though the data suggest students perceive GenAI as a tool to support their active learning, students who may use it as an “alternative” to learning may have been less likely to complete this survey and expose their improper usage. Despite the wide range of GenAI options presented in our survey, most of the respondents only use ChatGPT, limiting the broader representation of the use and adoption of other GenAI technologies. As a descriptive study, the data presented do not clearly explain the rationale for student adoption and usage patterns. Though differences were noted between users and non-users, the etiology of these differences could not be identified. Future work should include statistical modeling to better understand the determinants of use. This manuscript provides justification for using the TAM and the UTAUT to guide a future study exploring the predictive factors that influence the use of GenAI.
In addition, the survey did not ensure that students understood the difference between artificial intelligence and GenAI. During the coding of the open-ended questions, nine students identified themselves as using Grammarly. Grammarly is a service that incorporates both artificial intelligence and GenAI within its software, and the study authors could not differentiate between the features used. Finally, though students perceive GenAI as potentially useful and easy to learn, the survey did not objectively measure actual knowledge and abilities. GenAI models are complex, and advanced prompting may elicit superior output. It is unclear if students truly grasp the full capabilities of GenAI or if a superficial understanding of this technology influences their perceptions.

6. Conclusions

Our findings expand insights into GenAI use among rehabilitation sciences students, revealing that over half of respondents appear to utilize these tools appropriately for concept explanation, content generation, and grammatical support. Equally important, the present data identify a significant number of students who may be reluctant to utilize GenAI due to their perceptions that it is difficult to use, a concern that it may negatively impact learning, and ambiguity in perceived faculty support. Given the inevitable global impact of GenAI, there appears to be an opportunity for the thoughtful integration of these tools into rehabilitation science education to ensure students are prepared for the evolving field of healthcare. Faculty can leverage these insights to develop targeted approaches that capitalize on students’ existing comfort with GenAI while addressing their shared concerns about academic integrity and deeper learning. Future research should explore faculty perceptions of GenAI within rehabilitation sciences programs to determine their alignment with student perspectives on improving learning outcomes.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/educsci15030380/s1, Supplementary S1: Survey used to assess perceptions of generative artificial intelligence in school. Supplementary S2: Coding framework.

Author Contributions

Conceptualization, R.D., A.B., M.J.C., E.R.P. and T.P.; funding acquisition, R.D.; methodology: R.D., A.B., M.J.C., E.R.P. and T.P.; writing—original draft preparation, R.D., A.B., M.J.C., E.R.P. and T.P.; writing—reviewing and editing, R.D., A.B., M.J.C., E.R.P. and T.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. Internal funding was partially provided by Thomas Jefferson University College of Rehabilitation Sciences.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Thomas Jefferson University, Philadelphia, PA (#iRISID-2024-0914) on 19 August 2024.

Informed Consent Statement

Informed consent was obtained from all subjects as part of the survey completion process.

Data Availability Statement

The data supporting the conclusions of this article will be made available by the authors on reasonable request.

Acknowledgments

The authors would like to acknowledge the following individuals from Thomas Jefferson University (Philadelphia, PA) for their contributions to the development of the final survey used in this study: Juan Leon, PhD, MA, Director of Online Learning, Demi Harte, BA, Instructional Design Specialist, and Leah Miller, MEd, Instructional Design Specialist. In addition, we would like to acknowledge The Jefferson College of Rehabilitation Sciences for providing the incentive for subject participation.

Conflicts of Interest

The authors declare no conflicts of interest related to this work.

References

  1. Almansour, M., & Alfhaid, F. M. (2024). Generative artificial intelligence and the personalization of health professional education: A narrative review. Medicine, 103(31), e38955. [Google Scholar] [CrossRef] [PubMed]
  2. AlQudah, A. A., Al-Emran, M., & Shaalan, K. (2021). Technology acceptance in healthcare: A systematic review. Applied Sciences, 11(22), 10537. [Google Scholar] [CrossRef]
  3. Alsobhi, M., Khan, F., Chevidikunnan, M. F., Basuodan, R., Shawli, L., & Neamatallah, Z. (2022). Physical therapists’ knowledge and attitudes regarding artificial intelligence applications in health care and rehabilitation: Cross-sectional study. Journal of Medical Internet Research, 24(10), e39565. [Google Scholar] [CrossRef]
  4. Amoozadeh, M., Daniels, D., Nam, D., Kumar, A., Chen, S., Hilton, M., Srinivasa Ragavan, S., & Alipour, M. A. (2024, March 20–23). Trust in generative AI among students: An exploratory study. 55th ACM Technical Symposium on Computer Science Education V. 1 (pp. 67–73), Portland, OR, USA. [Google Scholar] [CrossRef]
  5. Baek, C., Tate, T., & Warschauer, M. (2024). “ChatGPT seems too good to be true”: College students’ use and perceptions of generative AI. Computers and Education: Artificial Intelligence, 7, 100294. [Google Scholar] [CrossRef]
  6. Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. [Google Scholar] [CrossRef]
  7. Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94–98. [Google Scholar] [CrossRef]
  8. Davis, F. D. (1989). Perceived usefulness, perceived ease of use and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. [Google Scholar] [CrossRef]
  9. Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003. [Google Scholar] [CrossRef]
  10. Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3), 277–304. [Google Scholar] [CrossRef]
  11. Furey, P., Town, A., Sumera, K., & Webster, C. A. (2024). Approaches for integrating generative artificial intelligence in emergency healthcare education within higher education: A scoping review. Critical Care Innovations, 7(2), 34–54. [Google Scholar] [CrossRef]
  12. Gado, S., Kempen, R., Lingelbach, K., & Bipp, T. (2022). Artificial intelligence in psychology: How can we enable psychology students to accept and use artificial intelligence? Psychology Learning & Teaching, 21(1), 37–56. [Google Scholar] [CrossRef]
  13. Golding, J. M., Lippert, A., Neuschatz, J. S., Salomon, I., & Burke, K. (2024). Generative AI and college students: Use and perceptions. Teaching of Psychology, 25(3), 277–304. [Google Scholar] [CrossRef]
  14. Hale, J., Alexander, S., Wright, S. T., & Gilliland, K. (2024). Generative AI in undergraduate medical education: A rapid review. Journal of Medical Education and Curricular Development, 11, 1–15. [Google Scholar] [CrossRef]
  15. Hsieh, H.-F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288. [Google Scholar] [CrossRef]
  16. Hu, K. (2023). ChatGPT sets record for fastest-growing user base—Analyst note. Reuters. Available online: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ (accessed on 24 January 2025).
  17. Huang, K. (2023). Alarmed by A.I. chatbots, universities start revamping how they teach. The New York Times. Available online: https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html (accessed on 24 January 2025).
  18. Janumpally, R., Nanua, S., Ngo, A., & Youens, K. (2025). Generative artificial intelligence in graduate medical education. Frontiers in Medicine, 11, 1525604. [Google Scholar] [CrossRef] [PubMed]
  19. Kazley, A. S., Andresen, C., Mund, A., Blankenship, C., & Segal, R. (2024). Is use of ChatGPT cheating? Students of health professions perceptions. Medical Teacher, 1–5. [Google Scholar] [CrossRef]
  20. Mansour, T., & Wong, J. (2024). Enhancing fieldwork readiness in occupational therapy students with generative AI. Frontiers in Medicine, 11, 1485325. [Google Scholar] [CrossRef] [PubMed]
  21. Preiksaitis, C., & Rose, C. (2023). Opportunities, challenges, and future directions of generative artificial intelligence in medical education: Scoping review. JMIR Medical Education, 9, e48785. [Google Scholar] [CrossRef]
  22. Roganović, J. (2024). Familiarity with ChatGPT features modifies expectations and learning outcomes of dental students. International Dental Journal, 74(6), 1456–1462. [Google Scholar] [CrossRef]
  23. Ruediger, D., Blankstein, M., & Love, S. (2024). Generative AI and postsecondary instructional practices (Vol. 320892). JSTOR. [Google Scholar] [CrossRef]
  24. Sallam, M. (2023). ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare, 11(6), 887. [Google Scholar] [CrossRef]
  25. Severin, R., & Gagnon, K. (2023). An early snapshot of attitudes toward generative artificial intelligence in physical therapy education. Journal of Physical Therapy Education, 10.1097. [Google Scholar] [CrossRef] [PubMed]
  26. Sun, L., & Zhou, L. (2024). Does generative artificial intelligence improve the academic achievement of college students? A meta-analysis. Journal of Educational Computing Research, 62(7), 1896–1933. [Google Scholar] [CrossRef]
  27. Vaughn, J., Ford, S. H., Scott, M., Jones, C., & Lewinski, A. (2024). Enhancing healthcare education: Leveraging ChatGPT for innovative simulation scenarios. Clinical Simulation in Nursing, 87, 101487. [Google Scholar] [CrossRef]
  28. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. [Google Scholar] [CrossRef]
  29. Wójcik, S., Rulkiewicz, A., Pruszczyk, P., Lisik, W., Poboży, M., & Domienik-Karłowicz, J. (2023). Beyond ChatGPT: What does GPT-4 add to healthcare? The dawn of a new era. Cardiology Journal, 30(6), 1018–1025. [Google Scholar] [CrossRef]
Figure 1. Frequency counts of reported student uses of generative artificial intelligence in school.
Table 1. Participant demographics and use of generative artificial intelligence in school.

Characteristic | All | Users | Non-Users
Use of AI, n (%) | N = 196 | n = 110 | n = 86
  All the time | 2 (1.0%) | 2 (1.8%) | -
  Most of the time | 10 (5.1%) | 10 (9.1%) | -
  Some of the time | 98 (50%) | 98 (89.1%) | -
  None of the time | 86 (43.9%) | - | 86 (100%)
Age (years), mean (SD) | N = 169 | n = 96 | n = 73
  | 23.0 (2.7) | 22.9 (2.5) | 23.1 (2.8)
Gender identity, n (%) | N = 175 | n = 100 | n = 75
  Man | 28 (16%) | 18 (18%) | 10 (13.3%)
  Woman | 142 (81.1%) | 80 (80%) | 62 (82.7%)
  Non-binary/third gender | 2 (1.1%) | 0 (0%) | 2 (2.7%)
  Prefer not to say | 3 (1.7%) | 2 (2.0%) | 1 (1.3%)
Race and ethnicity, n *
  White | 132 | 69 | 63
  Black or African American | 12 | 8 | 4
  American Indian/Alaskan Native | 0 | 0 | 0
  Asian | 22 | 14 | 8
  Native Hawaiian/Pacific Islander | 0 | 0 | 0
  Hispanic/Latino/Spanish | 11 | 8 | 3
  Arabic/Middle Eastern | 1 | 1 | 0
  Prefer not to say | 6 | 4 | 2
  Other | 1 | 1 | 0
Annual family income growing up, n (%) | N = 175 | n = 98 | n = 77
  Less than $50,000 | 14 (8.0%) | 9 (9.2%) | 5 (6.5%)
  $50,000–$99,999 | 48 (27.4%) | 18 (18.4%) | 20 (26.0%)
  $100,000–$149,999 | 39 (22.3%) | 26 (26.5%) | 13 (16.9%)
  Greater than $150,000 | 35 (20.0%) | 17 (17.3%) | 18 (23.4%)
  Prefer not to answer | 12 (7.4%) | 6 (6.1%) | 7 (9.1%)
  I am unaware | 26 (14.9%) | 12 (12.2%) | 14 (18.2%)
* Data reported as frequency count only, as the question allowed respondents to select multiple options.
Table 2. Perceived usefulness of generative artificial intelligence in school.

Question Domain * | All (N = 192) | Users (n = 109) | Non-Users (n = 83) | Chi-Square p-Value
Accomplishes tasks (%)
  Agree/Strongly Agree | 69.3% | 73.4% | 63.8% | 0.156
  Disagree/Strongly Disagree | 30.7% | 26.7% | 36.1%
Improves performance (%)
  Agree/Strongly Agree | 51.0% | 66.1% | 31.3% | <0.001
  Disagree/Strongly Disagree | 49.0% | 34.0% | 68.7%
Increases productivity (%)
  Agree/Strongly Agree | 55.7% | 63.3% | 45.8% | 0.015
  Disagree/Strongly Disagree | 44.3% | 36.7% | 54.2%
Enhances effectiveness (%)
  Agree/Strongly Agree | 52.6% | 66.9% | 33.7% | <0.001
  Disagree/Strongly Disagree | 47.4% | 33.1% | 66.3%
Makes school easier (%)
  Agree/Strongly Agree | 67.2% | 67.0% | 67.5% | 0.942
  Disagree/Strongly Disagree | 32.8% | 33.0% | 32.5%
Is useful (%)
  Agree/Strongly Agree | 76.6% | 91.7% | 56.6% | <0.001
  Disagree/Strongly Disagree | 23.4% | 8.2% | 43.3%
Helps my ability to learn (%)
  Agree/Strongly Agree | 59.9% | 76.1% | 38.5% | <0.001
  Disagree/Strongly Disagree | 40.1% | 23.8% | 61.5%
* The general question domain is provided in the table. See Supplementary Materials for the full question stem, as verbiage was adapted for users versus non-users.
Table 3. Perceived ease of use of generative artificial intelligence in school.

Question Domain * | All (N = 181) | Users (n = 103) | Non-Users (n = 78) | Chi-Square p-Value
Learning to use is easy (%)
  Agree/Strongly Agree | 87.8% | 92.3% | 82.0% | 0.038
  Disagree/Strongly Disagree | 12.2% | 7.8% | 18.0%
Easy to get it to do intention (%)
  Agree/Strongly Agree | 70.7% | 81.5% | 56.4% | <0.001
  Disagree/Strongly Disagree | 29.3% | 18.4% | 43.6%
Interaction is clear and understandable (%)
  Agree/Strongly Agree | 79.0% | 88.4% | 66.7% | <0.001
  Disagree/Strongly Disagree | 21.0% | 11.7% | 33.3%
Flexible to interact with (%) a
  Agree/Strongly Agree | 77.8% | 84.4% | 69.3% | 0.016
  Disagree/Strongly Disagree | 22.2% | 15.7% | 30.8%
Easy to become skillful in GenAI (%)
  Agree/Strongly Agree | 77.3% | 79.6% | 74.4% | 0.403
  Disagree/Strongly Disagree | 22.7% | 20.4% | 25.6%
Easy to use (%) b
  Agree/Strongly Agree | 88.3% | 96.0% | 78.2% | <0.001
  Disagree/Strongly Disagree | 11.7% | 4.0% | 21.8%
* The general question domain is provided in the table. See Supplementary Materials for the full question stem, as verbiage was adapted for users versus non-users. a n = 102 for user responses; b n = 101 for user responses.
Table 4. Facilitators and knowledge of generative artificial intelligence (GenAI) usage.

Question Domain * | All (N = 180) | Users (n = 102) | Non-Users (n = 78) | Chi-Square p-Value
Peers I respect use GenAI for school (%) a
  Agree/Strongly Agree | 74.9% | 88.1% | 57.7% | <0.001
  Disagree/Strongly Disagree | 25.1% | 11.9% | 42.3%
Faculty support use of GenAI for school (%)
  All | 1.7% | 2.0% | 1.3% | 0.083
  Most | 7.2% | 11.8% | 1.3%
  Some | 53.3% | 52.9% | 53.8%
  None | 3.9% | 3.9% | 3.8%
  Unsure | 33.9% | 29.4% | 39.7%
I have the resources I need to use GenAI for school (%)
  Agree/Strongly Agree | 74.4% | 82.4% | 64.1% | 0.005
  Disagree/Strongly Disagree | 25.6% | 17.6% | 35.9%
GenAI provides accurate information (%) a,b
  All of the time | 1.1% | 1.0% | 1.3% | 0.847
  Most/Some/None of the time | 98.9% | 99.0% | 98.7%
I am responsible for the accuracy of GenAI (%) a,b
  Fully responsible | 74.7% | 69.3% | 81.8% | 0.057
  Share responsibility/Unsure | 25.3% | 30.7% | 18.2%
I know how to indicate the use of GenAI (%) a,b
  Yes | 41.0% | 50.5% | 28.6% | 0.003
  No | 59.0% | 49.5% | 71.4%
I indicate when I use GenAI (%) c
  All of the time | 41.7% | 24.0% | 65.3% | <0.001
  Most/Some/None of the time | 58.3% | 76.0% | 34.7%
* The general question domain is provided in the table. See Supplementary Materials for the full question stem, as verbiage was adapted for users versus non-users. a n = 101 for user responses. b n = 77 for non-user responses. c n = 100 for user responses and 75 for non-user responses.
