Article

Measuring the Impact of Large Language Models on Academic Success and Quality of Life Among Students with Visual Disability: An Assistive Technology Perspective

by
Ibrahim A. Elshaer
1,2,*,
Sameer M. AlNajdi
2,3 and
Mostafa A. Salem
2,4
1
Department of Management, School of Business, King Faisal University, Al-Ahsaa 31982, Saudi Arabia
2
King Salman Center for Disability Research, Riyadh 11614, Saudi Arabia
3
Education Technology Department, Faculty of Education and Arts, University of Tabuk, Tabuk 71491, Saudi Arabia
4
Deanship of Development and Quality Assurance, King Faisal University, Al-Ahsaa 31982, Saudi Arabia
*
Author to whom correspondence should be addressed.
Bioengineering 2025, 12(10), 1056; https://doi.org/10.3390/bioengineering12101056
Submission received: 2 September 2025 / Revised: 23 September 2025 / Accepted: 28 September 2025 / Published: 30 September 2025

Abstract

In today's rapidly evolving digital era, artificial intelligence (AI) tools have progressively come to shape the educational environment. In this context, large language models (LLMs) (i.e., ChatGPT 4.0 and Gemini 2.5) have emerged as powerful applications for academic inclusion. This paper investigated how using and trusting LLMs can impact the academic success and quality of life (QoL) of visually impaired university students. Quantitative research was conducted, obtaining data from 385 visually impaired university students through a structured survey design. Partial Least Squares Structural Equation Modelling (PLS-SEM) was implemented to test the study hypotheses. The findings revealed that trust in LLMs significantly predicts LLM usage, which in turn can improve QoL. While LLM usage did not directly support the academic success of visually impaired students, its impact was mediated through QoL, suggesting that enhancements in well-being can contribute to higher academic success. The results highlight the importance of promoting trust in AI applications, along with developing an accessible, inclusive, and student-centred digital environment. The study offers practical contributions for educators and policymakers, shedding light on the importance of LLM applications for both the QoL and academic success of visually impaired university students.

Graphical Abstract

1. Introduction

Artificial intelligence (AI)-based solutions now hold significant potential in terms of overcoming long-standing barriers faced by students with disabilities (particularly those who are blind or visually impaired), such as limited physical assistance and restricted access to educational content and public services [1]. When aligned with universal design principles, AI can support the development of adaptable, user-centred technologies that enhance quality of life and promote active participation in academic, social, and professional contexts [2]. Although many international frameworks advocate disability inclusion, specific guidance on the implementation of AI remains limited. The World Health Organization (WHO), for example, emphasises accessibility in its disability and health guidelines, yet lacks detailed protocols for integrating AI-based solutions (WHO, 2023) [3]. This gap underscores the urgent need for targeted technology policies and inclusive standards that cater to the specific needs of blind and visually impaired learners within the context of digital transformation [4].
The integration of artificial intelligence into accessibility solutions is advancing rapidly. Technologies such as natural language processing (NLP) and computer vision are being deployed in increasingly innovative ways [5]. For example, real-time captioning, sign language interpretation, and intelligent navigation systems significantly enhance communication and mobility for individuals with hearing and visual impairments. However, the deployment of these tools remains inconsistent, often constrained by socioeconomic disparities and geographic inequalities [6]. This underscores the urgent need for standardized accessibility guidelines and user-driven design frameworks that authentically reflect the lived experiences of blind and visually impaired students [7].
Moreover, generative technologies—particularly Large Language Models (LLMs) such as ChatGPT, Gemini, LLaMA, Mixtral, and Claude—are fundamentally reshaping the landscape of digital accessibility [8]. Since the advent of transformer-based architecture, these models have demonstrated exceptional proficiency in generating text, images, audio, and video that are often indistinguishable from human-created content [9]. A comprehensive understanding of both their capabilities and limitations is essential for developing inclusive, ethical applications that effectively serve diverse user populations [10]. However, as AI evolves, it often outpaces regulatory frameworks, resulting in gaps in oversight and accountability [11]. Addressing these challenges requires cross-sector collaboration among developers, regulators, educators, and disability advocates to ensure that AI development remains both responsible and inclusive [6].
With the rise of assistive technologies—including APIs, AI platforms, and LLMs—investigators are revisiting history with a critical lens. Suppose Louis Braille (founder of the Braille system) [12], Valentin Haüy (founder of the first school for blind youth) [13], or Samuel Gridley Howe (founder of the Perkins School for the Blind) [14] had access to today’s AI tools—how might they have reimagined education and social inclusion for visually impaired learners? Similarly, what role would William B. Wait (creator of the New York Point system) [15] have envisioned for AI in advancing access, learning, and empowerment? These questions underscore the transformative potential of artificial intelligence, not merely as a functional tool, but as a catalyst for redefining accessibility in the digital age. The present study explores how LLM-driven technologies can serve as accessible, intelligent tools to support independent learning, academic performance, and well-being among students with visual impairments.

2. Literature Review

2.1. Large Language Models and Accessibility in Education

The advent of Large Language Models (LLMs) represents a transformative shift in education, particularly in enhancing accessibility for students with diverse learning needs—including those with disabilities, and more specifically, blind and visually impaired learners [16]. Models such as ChatGPT (OpenAI, San Francisco, CA, USA) and Gemini (Google DeepMind, London, UK), trained on vast linguistic datasets, exhibit advanced capabilities in text comprehension, generation, and contextual adaptation [17]. These strengths offer unprecedented opportunities for supporting learners with visual impairments, cognitive differences, and other accessibility challenges [18]. A key advantage of LLMs is their ability to provide real-time, adaptive learning support. They can simplify complex texts for students with dyslexia or cognitive processing difficulties, summarize dense academic content to enhance comprehension, and assist learners through plain-language paraphrasing or translation services [19]. For blind or visually impaired students, LLMs integrated with voice interfaces and screen readers enable natural, interactive engagement, thereby reducing reliance on static or traditional assistive technologies [20]. Their multimodal capabilities—including text-to-speech conversion, alt-text generation for images, and audio descriptions—further democratize access to educational resources [21].
Recent studies underscore the efficacy of LLMs in special education contexts. For instance, learners who use AI-powered tutoring systems show improvements in information retention, faster task completion, and enhanced self-confidence—outcomes attributed to personalised feedback, dynamic scaffolding, and reduced cognitive load, all of which are typically lacking in traditional, one-size-fits-all instructional materials [21,22,23,24]. Both Safdar, Samina, and Farrukh [25] and Robinson, Bose, and Cross [26] highlight that LLMs also support educators in designing accessible curricula. These models can generate Universal Design for Learning (UDL)-compliant lesson plans, adapt assessments for diverse learners, and automate the creation of alternative text descriptions or simplified content, thereby alleviating the workload on teachers—especially those without specialized training in accessibility.
Despite their promise, LLMs also present several challenges. These include biases in training data that can reinforce systemic inequities, hallucinations or factual inaccuracies that risk misleading vulnerable learners, and privacy concerns related to handling sensitive educational data [27]. Addressing these risks requires transparent model development, human-in-the-loop oversight, and the inclusive curation of training datasets to ensure fairness and reliability [28]. Ultimately, LLMs are redefining accessibility in education by dynamically adapting content, enabling interactivity, and accommodating a spectrum of learner needs—bridging critical gaps that traditional assistive technologies often cannot [29]. With responsible implementation and ethical oversight, these technologies hold the power to democratize learning and empower all students to reach their full potential [30].

2.2. Trust in AI and Assistive Technologies

Artificial intelligence (AI) has become an indispensable part of modern life, playing an increasingly vital role in daily activities, particularly in education for individuals with disabilities, such as blind or visually impaired students. AI has demonstrated significant advancements over traditional solutions, leading to a rapid rise in AI-based educational approaches [31]. However, the widespread adoption of these technologies heavily depends on user trust, as distrust can hinder their acceptance and effectiveness [32]. Trust in AI can be defined as the willingness of individuals to rely on AI systems, accept their suggestions, share tasks, and contribute information [33]. For students with disabilities, AI can only serve as a dependable educational tool if it aligns with their expectations and needs, making trust a critical factor [34]. Trust is established when users can anticipate the system’s behaviour and determine whether it aligns with their goals [35]. Without integrating trust into the development, deployment, and use of AI, its full potential for individuals, organizations, and society will remain unrealized. Thus, it is essential to understand the definition, scope, and role of trust in AI, along with its influencing factors and application-specific requirements [36].
Trust in AI is multidimensional, encompassing perceptions of reliability, transparency, ethical integrity, and responsiveness [37]. Afroogh et al. [38] and Küper and Krämer [39] identified key factors influencing trust in Large Language Models (LLMs), including accuracy, ethical alignment, competence, and transparency. They also highlighted that trust is not solely based on technical performance, but also on the perceived alignment between AI responses and user values, which is a crucial consideration for students with disabilities, who may already face barriers to inclusion and thus demand higher standards of accountability from technology. For visually impaired learners, trust in assistive AI tools directly impacts academic participation and self-efficacy [40]. Studies indicate that students are more likely to adopt tools like screen readers, text-to-speech converters, and AI-generated summaries when outputs are consistently accurate and contextually appropriate [41,42,43]. Conversely, errors, such as mistranslations, misleading paraphrases, or biased content, can erode trust, leading to reduced usage or abandonment of the tool [33,38].
Research in human–computer interaction further supports the dynamic nature of trust, which evolves through positive user experiences [32,37,39]. Geethanjali and Umashankar [44] emphasise that trust in automation can be strengthened through transparency, effective error handling, and the predictability of student needs. In educational contexts, this entails clearly communicating system limitations, allowing users to correct outputs, and ensuring culturally and linguistically sensitive responses [31]. When AI tools incorporate these principles, users are more likely to integrate them into their learning routines and daily lives.
Accordingly, the first research objective (RO1) of this study is to examine the direct effect of trust in LLMs on academic usage, quality of life, and academic success among visually impaired students. This objective informs the following hypotheses:
  • H1: Trust in LLMs positively predicts LLM usage.
  • H2: Trust in LLMs positively predicts quality of life.
  • H3: Trust in LLMs positively predicts academic success.

2.3. Academic Success and Quality of Life with LLM-Supported Learning

In the evolving landscape of inclusive education, Artificial Intelligence (AI)—particularly Large Language Models (LLMs) such as ChatGPT and Gemini—holds transformative potential for enhancing both academic achievement and quality of life for students with disabilities, especially those who are blind or visually impaired [14]. For these learners, educational success extends beyond traditional performance metrics to encompass perceived improvements in comprehension, classroom participation, and conceptual mastery [26]. This holistic perspective is reflected in the construct of academic success and quality of life, which prioritises learner experience and intellectual development alongside, or even above, conventional grade-based assessments [45]. LLM-supported learning tools provide adaptive, accessible, and personalised educational experiences, significantly reducing cognitive, behavioural, and skill-related barriers [46]. For instance, LLMs can convert visual content into coherent audio formats, summarize complex academic materials, and deliver real-time explanations, all of which are critical for blind or visually impaired students who often encounter unequal access to textual and graphical information [16]. By improving content accessibility, these technologies foster greater engagement in self-directed learning and promote equitable participation in educational settings, thereby contributing to narrowing achievement gaps between visually impaired and sighted learners [19]. Moreover, the integration of LLMs into academic routines has been shown to enhance academic self-efficacy [47]. Visually impaired students benefit from immediate, individualized feedback and tailored support, which helps build confidence in managing academic tasks [48]. This outcome is consistent with the Universal Design for Learning (UDL) principles, which advocate for flexible, inclusive educational environments that accommodate a diverse range of learner needs [49].
Beyond academic performance, LLM use has broader implications for psychosocial well-being. In addition to direct effects, academic usage of LLMs plays a mediating role by translating trust into improved quality of life and academic outcomes. Bach et al. [50] emphasise that when visually impaired students trust and actively engage with technological tools during learning, they gain better access to information and experience greater satisfaction, resulting in enhanced academic performance through empowering educational experiences.
Thus, the second research objective (RO2) is to investigate the mediating role of academic usage in the relationship between trust in LLMs and both quality of life and academic success. This leads to the following hypotheses:
  • H4: LLM usage positively predicts quality of life.
  • H5: LLM usage positively predicts academic success.
Michalos [51] defines quality of life as the subjective assessment of one’s overall well-being—an outcome positively associated with educational inclusion and access to supportive technologies. Recent studies indicate that assistive and inclusive technologies, particularly those that alleviate everyday frustrations, enhance autonomy, and promote meaningful engagement, can significantly improve the well-being of students with disabilities [52,53,54]. In this regard, LLMs contribute not only by supporting learning but also by reducing dependency on human intermediaries, facilitating smoother academic interactions, and enabling visually impaired students to participate more confidently in both educational and social domains [25,26]. These advantages translate into reduced anxiety, greater motivation, and a heightened sense of belonging within academic communities.
Quality of life also emerges as a crucial mediating factor linking both trust in LLMs and academic usage to academic success. When students experience improved psychological well-being due to active and meaningful engagement with LLMs, their academic performance tends to improve. This suggests that emotional well-being and life satisfaction can indirectly enhance educational outcomes through trusted engagement with LLM tools [47].
Therefore, the third research objective (RO3) is to assess the mediating role of quality of life in the relationship between trust in LLMs, academic usage, and academic success. The corresponding hypotheses are
  • H6: Quality of life positively predicts academic success.
  • H7: Trust in LLMs indirectly predicts academic success via quality of life.
  • H8: Trust in LLMs indirectly predicts quality of life via LLM usage.
Building on global and national initiatives to promote inclusive education—particularly within the context of Saudi Arabia—this study investigates the adoption and use of AI assistive technologies, specifically Large Language Models (LLMs) such as ChatGPT and Gemini, among visually impaired and blind students in Saudi Arabian higher education. While numerous studies have explored the applications of LLMs in the Saudi context, there remains a significant gap in research focusing on their role as catalysts for academic success and quality of life among visually impaired students. This study addresses that gap by examining how these technologies can serve as digital bridges to improve educational outcomes and overall well-being. To explore the underlying factors influencing behavioural intention and actual usage, the study adopts the Unified Theory of Acceptance and Use of Technology (UTAUT) [55], providing evidence-based insights for policymakers, educators, and institutional leaders seeking to implement AI-enabled support systems that promote equity, inclusion, and academic excellence. UTAUT has been extensively investigated in several studies on AI adoption, particularly in the context of assistive technology adoption. While the current paper acknowledges the importance of employing UTAUT as a theoretical background, it did not directly use the standard UTAUT constructs (i.e., performance expectancy, effort expectancy, social influence, and facilitating conditions). Instead, UTAUT is employed to shed light on the broader classical perspective, linking technology use and trust with student outcomes. Building on this argument, our conceptual framework focuses on two primary constructs—LLM usage and trust—as the most prominent elements that can influence the academic success and QoL of visually impaired students.
Trust in large language models (LLMs) significantly influences the academic engagement of visually impaired students [16]. High levels of trust in LLMs encourage frequent and effective academic use, enhancing access to learning resources and fostering more inclusive educational experiences [17]. This engagement, in turn, may lead to improved quality of life and academic success by promoting independence, confidence, and equitable learning opportunities [19].
Finally, trust in LLMs is expected to exert indirect effects on academic success through a sequential mediation pathway involving both academic usage and quality of life. Increased trust fosters more consistent and effective LLM usage, which enhances students’ quality of life and, in turn, contributes to improved academic outcomes. This process reflects a compounded mediation mechanism [37].
Thus, the fourth research objective (RO4) is to explore the indirect effects of trust in LLMs on academic success through academic usage and quality of life. The final hypotheses (as shown in Figure 1) are as follows:
  • H9: Trust in LLMs indirectly predicts academic success via quality of life.
  • H10: Trust in LLMs indirectly predicts academic success via LLM usage.
  • H11: Trust in LLMs indirectly predicts academic success via LLM usage and quality of life.

3. Methods

3.1. Development of the Study Scale

To empirically validate the proposed conceptual framework, this paper utilized a structured, self-administered questionnaire as the main data collection tool. The scale and measurement variables were derived from prior, validated scales in the academic literature, ensuring theoretical reliability and methodological accuracy. The questionnaire was organized into three main sections. The opening part provided respondents with a clear and brief explanation of the study’s main aim, along with a consent form developed to confirm their voluntary contribution and ensure compliance with ethical standards. The second part was designed to collect necessary demographic data, including type of student disability, student age group, academic year, and participant gender. The final part addressed the study’s core factors and latent variables. In particular, academic success was measured by a three-item scale adapted from the study of Owusu-Acheaw and Larson [56]. Although academic success can be treated as a multidimensional concept, concise scale items have been widely used in prior validated research (e.g., [57,58]) to measure students’ perceived academic performance and learning outcomes. In this paper, the scale items were operationalized to reflect the key role of LLMs in improving the educational performance of visually impaired university students. An example item is “The use of large language models (LLMs) has improved my overall learning experience”. Likewise, trust in LLMs was measured via eight items derived from De Duro et al. [59]. These items were carefully reviewed to fit the unique interactional context of LLMs, comprising comfort perceptions, reliability, and willingness. Sample items include “I feel at ease with LLMs, and I can freely share my ideas with them” and “I invest plenty of time developing and improving my prompts to interact with LLMs”. Furthermore, LLM usage was measured using three items as suggested by Venkatesh et al. [60].
A sample item is “I intend to use the knowledge and skills I acquired from the LLMs in my educational activities”. Finally, quality of life was measured using the five items of the “Satisfaction with Life Scale” (SWLS). The SWLS was first developed by Diener et al. [61] as a five-item scale reflecting individuals’ cognitive judgments concerning their overall life satisfaction. The SWLS is a broadly validated and robust scale designed to measure people’s cognitive assessment of life satisfaction. Its use in this paper enables a reliable evaluation of how visually impaired university students perceive their overall life satisfaction in relation to LLM use. Sample items include “I am satisfied with my life” and “The conditions of my life are excellent”. Participants were asked to indicate their level of agreement with several life-related questions using a five-point Likert scale from 1 “strongly disagree” to 5 “strongly agree”. As the employed scales have been extensively validated in prior work, their inclusion offered a strong basis for measurement validity. To establish the scale’s face validity, the full questionnaire was reviewed by eight academics who evaluated its clarity, appropriateness, and the relevance of the included items. Furthermore, a pilot test was conducted with ten students with visual impairments from King Faisal University (KFU). The responses from this preliminary phase showed that the items were clearly understood, and only minimal language modifications were required. These steps together confirmed that the scale demonstrated good content and face validity.

3.2. Population and Sample Size

According to the 2022 Kingdom of Saudi Arabia (KSA) General Population and Housing Census [62], disabilities among residents in KSA encompass various types, including hearing, mobility, cognitive, visual, communication, and self-care challenges. This report indicated that roughly 1.8% of the total KSA population, approximately 648,000 out of 36 million residents, are living with some type of disability. Of this group, university students represented a significant proportion, accounting for around 15.8% (n = 102,384). The report also showed that the majority of university students with disabilities are enrolled in five main public universities: King Abdulaziz University (1569 students with disabilities), King Saud University (663), Taibah University (523), Umm Al-Qura University (381), and King Faisal University (330). In this paper, these national figures are presented as background context to convey the scale of the issue. The actual sampling frame of our paper was independent of these estimates; it was instead based on university students with visual impairments recruited directly from the disability support centres at these universities.
For this paper, the main focus was on including only university students with visual impairments; other types of disabilities were excluded from the study sample. Respondents were selected using a convenience sampling technique. To identify an adequate sample size, a power analysis was performed in the G*Power program (version 3.1). The test used the F-test family, applying the “Linear multiple regression: Fixed model, R2 deviation from zero” option. Selecting a medium effect size (f2 = 0.15) with four predictors, a statistical power of 0.95, and a significance level (α) of 0.05, the analysis suggested a minimum of 74 responses. To support the data collection procedure, a team of 40 well-trained enumerators was recruited. These enumerators received instructions regarding ethical research principles, including how to collect informed consent, safeguard respondents’ confidentiality, and address sensitivity issues related to interactions with university students with disabilities. Orientation workshops were conducted to familiarize enumerators with the study’s purposes and to prepare them to respond to any issues or concerns raised by respondents. Out of the 850 forms distributed, 385 were fully answered and fulfilled all validity standards, a response rate of 45%. The final dataset was analysed using Partial Least Squares Structural Equation Modelling (PLS-SEM) to assess both the measurement model and the structural path model of the latent variables. The dataset displayed a reasonably balanced gender distribution, with females comprising 54% and males 46% of participants. Respondents ranged in age from 17 to 24 years, reflecting a diverse representation of academic levels.
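The a priori power computation can be reproduced outside G*Power. The sketch below is a hedged illustration rather than the exact G*Power routine: it uses SciPy's noncentral F distribution and the common convention λ = f²·N for the "R² deviation from zero" test, so the resulting minimum N depends on the exact options selected in G*Power.

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n, predictors=4, f2=0.15, alpha=0.05):
    """Power of the F-test that R^2 deviates from zero, with lambda = f2 * N."""
    df1 = predictors
    df2 = n - predictors - 1
    f_crit = f_dist.ppf(1 - alpha, df1, df2)   # critical F under H0
    return 1 - ncf.cdf(f_crit, df1, df2, f2 * n)

def min_sample_size(target_power=0.95, predictors=4, f2=0.15, alpha=0.05):
    """Smallest N whose a priori power reaches the target (power grows with N)."""
    n = predictors + 2  # smallest N with a positive error df
    while regression_power(n, predictors, f2, alpha) < target_power:
        n += 1
    return n
```

Because power increases monotonically with N here, a simple linear search over N suffices.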
This study relied exclusively on self-reported scale measures of LLM trust and usage, as well as academic success, without obtaining objective system logs or real-time prompts. This methodological choice was driven primarily by considerations of privacy, ethics, and accessibility for visually impaired respondents, many of whom rely on screen readers or adaptive AI technologies that can complicate log-based data collection. Our study recorded the frequency of AI adoption—with 40% of participants reporting occasional LLM usage (e.g., translation or reading help), 44% reporting moderate LLM usage (two–four times per week), and 20% reporting daily LLM usage—with various academic tasks performed (e.g., summarization, translation, grammar checks, information access, and drafting).

3.3. Testing Common Method Variance (CMV)

Common method variance (CMV) is likely to arise in social science research, mainly because the same respondents provide data for both the dependent and independent variables [63]. As suggested by Williams and Brown [64], this concern can threaten the model’s validity. Following Reio’s [65] recommendation, we applied both procedural and statistical techniques to alleviate the impact of CMV. First, the questionnaire was developed with several precautionary measures, as suggested by Podsakoff et al. [63], comprising balancing the variables across questionnaire sections to minimise ordering effects, structuring the questions to avoid obvious patterns, and maintaining a suitable questionnaire length. To statistically assess CMV, we ran Harman’s single-factor test. The results indicated that the single extracted dimension accounted for 44% of the variance, below the 50% threshold, implying that no single dimension explained the majority of the variance and providing evidence that CMV does not significantly impact our results.
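Harman's single-factor test can be illustrated in a few lines of linear algebra: extract the first unrotated factor from the item correlation matrix and compare its share of total variance against the 50% threshold. This is a generic sketch, not the exact routine used in the study; the factor-extraction step is approximated with an eigen-decomposition of the correlation matrix.

```python
import numpy as np

def harman_single_factor_share(responses):
    """Share of total variance captured by the first unrotated factor.

    `responses` is an (n_respondents, n_items) array of Likert scores;
    a share below 0.50 suggests common method variance is not dominant.
    """
    corr = np.corrcoef(responses, rowvar=False)
    eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending order
    return eigenvalues[0] / eigenvalues.sum()
```

The denominator equals the number of items (the trace of the correlation matrix), so the ratio is the familiar "percentage of variance explained" by the first factor.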

3.4. Ethical Approvals

Given the sensitive nature of our research, which involved university students with visual impairments, we prioritised ethical compliance. Prior to initiating data collection, we obtained formal institutional approval from the “Institutional Review Board at King Faisal University” (“Ethics Reference: KFU-2025-ETHICS3201, approved 6 October 2024”). This procedure certified that our methods were consistent with institutional criteria and the ethical principles outlined in the Declaration of Helsinki. We implemented several safeguards to protect participants’ rights: all participation was voluntary, with no pressure; written informed consent was obtained from each participant; respondents had the right to withdraw at any time without giving any reason; and all data received were anonymized to protect participant identities. Although the collected data contain no personally identifiable information, interested researchers may request access to the data via an official email to the principal investigator.

4. Data Analysis and Study Findings

PLS-SEM was used as the primary data analysis method. PLS-SEM is a variance-based technique predominantly suitable for exploratory and predictive research approaches [66]. Unlike covariance-based SEM (CB-SEM) [67], PLS-SEM has several advantages: it does not require a normally distributed dataset and performs well with small sample sizes. We conducted the analysis using the SmartPLS program v4, with a bootstrapping procedure of 5000 resamples in the reflective mode [67]. Following the recommendations of Sarstedt et al. [68], the analysis was performed in two subsequent stages. Stage one tested the measurement model’s psychometric properties (Table 1), and stage two tested the structural model for hypothesis confirmation or rejection.
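The resampling logic behind bootstrapped significance tests can be sketched briefly. This is an illustrative simplification (a single standardized path re-estimated on resampled cases), not SmartPLS's full algorithm; the function name and defaults are our own.

```python
import numpy as np

def bootstrap_path(x, y, n_boot=5000, seed=42):
    """Point estimate and 95% percentile CI for a standardized x -> y path."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)

    def beta(xs, ys):
        # Standardized simple-regression slope = Pearson correlation
        xs = (xs - xs.mean()) / xs.std()
        ys = (ys - ys.mean()) / ys.std()
        return float(xs @ ys) / len(xs)

    # Resample cases with replacement and re-estimate the path each time
    draws = [beta(x[i], y[i]) for i in (rng.integers(0, n, n) for _ in range(n_boot))]
    low, high = np.percentile(draws, [2.5, 97.5])
    return beta(x, y), (low, high)
```

A path is deemed significant at the 5% level when the 95% percentile interval excludes zero.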

4.1. Measurement Model Assessment

We thoroughly evaluated the measurement model by inspecting its main psychometric properties: all items showed high standardized factor loadings (>0.7); Cronbach’s alpha and composite reliability exceeded 0.7; and average variance extracted (AVE) scores were above the benchmark value of 0.5 [69]. Together, these results support the robustness of the study measures, exhibiting proper internal consistency, scale reliability, and convergent validity. To test discriminant validity, we applied two complementary methods. First, following Fornell and Larcker’s criterion [70], we verified that the square root of each factor’s AVE (presented in Table 2) surpassed all correlations between the factors in the model. This initial evaluation suggested proper discriminant validity. We also assessed the heterotrait–monotrait (HTMT) ratio of correlations [71]. Considered more rigorous than the Fornell–Larcker criterion, the HTMT method signals potential validity issues when values exceed 0.9. As seen in Table 3, all HTMT scores are below this threshold. Furthermore, the cross-loading values in Table 4 revealed that all items loaded most strongly on their intended construct, with no problematic cross-loadings. The consistent findings from these methods offer strong evidence for the discriminant validity of the measurement model.
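As an illustration of the HTMT computation, the ratio for a pair of constructs divides the mean absolute correlation between their items (heterotrait) by the geometric mean of the average within-construct item correlations (monotrait). A minimal sketch with synthetic item data, not the study dataset:

```python
import numpy as np

def htmt(X_i: np.ndarray, X_j: np.ndarray) -> float:
    """HTMT ratio for two constructs given their item matrices
    (rows = respondents, columns = items)."""
    R = np.abs(np.corrcoef(np.hstack([X_i, X_j]), rowvar=False))
    p, q = X_i.shape[1], X_j.shape[1]
    hetero = R[:p, p:].mean()                          # across-construct correlations
    mono_i = R[:p, :p][np.triu_indices(p, 1)].mean()   # within construct i
    mono_j = R[p:, p:][np.triu_indices(q, 1)].mean()   # within construct j
    return hetero / np.sqrt(mono_i * mono_j)

# Two distinct latent factors, four items each (synthetic, NOT the study data)
rng = np.random.default_rng(2)
f1, f2 = rng.normal(size=(2, 500))
A = 0.8 * f1[:, None] + rng.normal(scale=0.5, size=(500, 4))
B = 0.8 * f2[:, None] + rng.normal(scale=0.5, size=(500, 4))
ratio = htmt(A, B)
print(f"HTMT = {ratio:.3f}")   # distinct constructs stay below the 0.9 cutoff
```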
The PLS-SEM results yielded significant insights into the tested hypotheses. The bootstrapping findings showed that the three constructs (LLM trust, LLM usage, and quality of life) jointly explain 56.1% of the variance in academic success (Figure 2). This high explanatory power implies that the study model captures most of the main factors affecting academic success.

4.2. Structural Model Findings

Before testing the research hypotheses, several goodness-of-fit indices were inspected, including the standardized root mean square residual (SRMR), the coefficient of determination (R2), and predictive relevance (Q2). As per Hair et al. [67], satisfactory thresholds require SRMR to be less than 0.08, R2 values to be at least 0.10, and Q2 scores to exceed zero. The PLS-SEM report indicated that our model fulfilled these criteria, indicating adequate explanatory and predictive power. Specifically, the SRMR value was a satisfactory 0.07. The academic success endogenous variable showed strong predictive power, with R2 = 0.561 and Q2 = 0.521, followed by LLM usage (R2 = 0.376, Q2 = 0.368) and QoL (R2 = 0.237, Q2 = 0.167), each displaying adequate levels of explained variance and predictive power. Finally, to ensure that multicollinearity did not affect model validity, Variance Inflation Factor (VIF) values were inspected; these should remain below 5 [66]. As seen in Table 1, all values are below 5, indicating no multicollinearity issues in our model.
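The VIF check can be reproduced directly from its definition: each predictor is regressed on all the others, and VIF_k = 1/(1 − R²_k). A minimal sketch with synthetic predictors (values near 1 indicate no collinearity; values of 5 or more would be a concern):

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """VIF_k = 1 / (1 - R^2_k), where R^2_k comes from regressing
    predictor k on all remaining predictors (with intercept)."""
    n, p = X.shape
    out = np.empty(p)
    for k in range(p):
        A = np.column_stack([np.ones(n), np.delete(X, k, axis=1)])
        coef, *_ = np.linalg.lstsq(A, X[:, k], rcond=None)
        resid = X[:, k] - A @ coef
        r2 = 1.0 - resid.var() / X[:, k].var()
        out[k] = 1.0 / (1.0 - r2)
    return out

# Three weakly related predictors (synthetic, NOT the study data)
rng = np.random.default_rng(3)
Z = rng.normal(size=(385, 3))
vals = vif(Z)
print(np.round(vals, 2))   # near 1: no collinearity; >= 5 would be a concern
```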
After confirming the reliability and validity of both the measurement model and the structural model, hypothesis testing could proceed. As presented in Table 4, the bootstrapped output of the tested model revealed that trust in large language models (ChatGPT/Gemini) has a positive and significant impact on LLM usage (β = 0.613, t = 10.895, p < 0.001), QoL (β = 0.233, t = 3.745, p < 0.001), and academic success (β = 0.607, t = 14.001, p < 0.001). Accordingly, H1, H2, and H3 were supported. Furthermore, the PLS-SEM results provided evidence that LLM usage had a positive and significant impact on QoL (β = 0.307, t = 4.893, p < 0.001), supporting H4. Interestingly, however, LLM usage did not directly lead to academic success (β = 0.066, t = 1.652, p = 0.099), rejecting H5. Additionally, QoL was found to have a significant influence on academic success (β = 0.185, t = 5.533, p < 0.001), supporting H6.
The PLS-SEM output also revealed evidence regarding the specific indirect effects. Trust in LLMs had a small but significant indirect impact on academic success through QoL (β = 0.043, t = 2.713, p < 0.01), supporting H7. However, trust in LLMs did not significantly impact academic success through LLM usage (β = 0.040, t = 1.499, p = 0.134), rejecting H8. Additionally, the specific indirect effects revealed that LLM usage has a significant indirect impact on academic success through QoL (β = 0.057, t = 3.496, p < 0.001), supporting H9. Similarly, trust in LLMs was found to have a significant indirect impact on QoL through LLM usage (β = 0.188, t = 4.554, p < 0.001), supporting H10. Finally, trust in LLMs had a significant indirect impact on academic success through both LLM usage and QoL (β = 0.035, t = 3.392, p < 0.01), supporting H11, as seen in Table 5.
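A specific indirect effect is the product of the path coefficients along the mediating chain (e.g., trust → usage → QoL ≈ 0.613 × 0.307 ≈ 0.188, matching H10), and its significance is judged from the bootstrap distribution of that product. A minimal sketch of this logic, with simple OLS slopes standing in for the PLS path weights and synthetic data:

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=2000, seed=0):
    """Bootstrap the indirect effect a*b of x on y through mediator m.
    OLS slopes stand in for the PLS path weights."""
    rng = np.random.default_rng(seed)
    n = len(x)

    def ab(xi, mi, yi):
        a = np.corrcoef(xi, mi)[0, 1]                 # x -> m path
        D = np.column_stack([np.ones(len(xi)), xi, mi])
        b = np.linalg.lstsq(D, yi, rcond=None)[0][2]  # m -> y, controlling for x
        return a * b

    est = ab(x, m, y)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample cases with replacement
        boots[i] = ab(x[idx], m[idx], y[idx])
    lo, hi = np.percentile(boots, [2.5, 97.5])        # 95% percentile CI
    return est, lo, hi                                # significant if CI excludes 0

# Synthetic usage -> QoL -> success chain (hypothetical, NOT the study data)
rng = np.random.default_rng(4)
usage = rng.normal(size=400)
qol = 0.5 * usage + rng.normal(scale=0.9, size=400)
success = 0.4 * qol + rng.normal(scale=0.9, size=400)
est, lo, hi = bootstrap_indirect(usage, qol, success, n_boot=500)
print(f"indirect = {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```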

5. Discussion

This paper tested how trust in large language models (LLMs), specifically ChatGPT and Gemini, fosters academic success and affects QoL among university students with visual impairments in KSA. The results revealed that trust in LLMs among visually impaired students in KSA significantly predicts the usage of these AI technologies, as well as their QoL and academic success. These findings provide empirical evidence of the key role of trust as an underlying mechanism for the successful implementation of AI-driven assistive tools in an educational context. The significant influence of LLM trust on usage is consistent with previous research highlighting the key role of trust in user adoption of AI technologies, specifically for underserved or vulnerable populations [50,72]. For students with visual impairments, who regularly depend heavily on AI assistive technology for both academic activities and their personal lives, trust acts as a gatekeeper to engagement. If AI assistive technology is perceived as user-friendly, non-discriminatory, reliable, and easily accessible, people are more likely to integrate it into their daily routines [73].
Furthermore, the positive impact of trust in LLMs on QoL resonates with previous literature that has connected digital engagement with well-being among people with various types of disabilities. When AI assistive technologies such as ChatGPT or Gemini offer timely, understandable, and context-sensitive information, they can minimise barriers to independent living and external communication, thus enhancing social engagement and psychological well-being [74,75]. Moreover, the significant link between trust in LLMs and academic success suggests that university students who depend on and trust these AI tools may obtain a performance advantage. LLMs provide adaptive language support, personalised descriptions, and timely academic assistance for students who encounter accessibility limitations [76]. These results align with recent research showing that AI-based assistive technologies can foster cognitive access, learning independence, and self-efficacy among students with disabilities [77,78].
Furthermore, the results showed that while LLM usage had a positive and significant influence on students’ QoL, it did not have a direct significant impact on academic success. Surprisingly, the results also confirmed that QoL itself can significantly predict academic success, highlighting a potential indirect relationship. The positive relationship between LLM usage and QoL aligns with recent research indicating that AI-driven tools can play a meaningful role in enhancing the daily experiences of people with disabilities [73,75]. For visually impaired university students, LLMs serve not only as academic tools but also as life-improving technologies that promote broader social and psychological well-being [74]. Conversely, the absence of a direct link between LLM usage and academic success warrants deeper investigation. While previous research has suggested that AI technologies may foster learning efficiency and understanding [76], our results indicate that simply using LLMs is not sufficient to secure improved academic success. This finding may be explained by the way university students engage with these tools, perhaps using them more for daily information, internet browsing, or emotional support rather than for academic tasks [78]. Moreover, contextual factors, such as AI literacy, an understanding of prompt engineering, and the capability to critically assess LLM-generated output, may determine whether LLM usage translates into academic achievement. University students who lack the essential skills to structure effective enquiries, or who accept AI replies uncritically, might fail to leverage the full academic potential of these technologies. Future research could test moderating elements, such as digital literacy, type of LLM, different usage types, and contextual support, to identify the conditions under which LLM usage significantly impacts academic success.
Notably, the significant positive impact of QoL on academic success highlights the strong connection between well-being and educational success. As supported by previous research, students with higher levels of social connection, emotional support, and self-efficacy are better positioned to excel academically [77]. For visually impaired students, this suggests that AI technologies, by enhancing QoL through reducing dependencies, encouraging confidence, or fostering social interaction, may indirectly improve academic outcomes, even if their direct academic usefulness is limited.
Inspection of the specific indirect effects in the PLS-SEM report revealed a set of mediating relationships, highlighting that LLM trust alone may not directly improve academic success but can influence it indirectly through enhanced well-being and engagement with AI tools. First, the results demonstrated that trust in LLMs significantly impacted academic success via QoL. This suggests that when university students trust these AI technologies, their overall satisfaction with life may improve through reduced stress, increased independence, and greater confidence in completing academic tasks, which may consequently contribute to better academic performance. These results align with prior studies suggesting that well-being can play a key mediating role in academic success, particularly for students with visual disabilities [75]. Nevertheless, trust in LLMs failed to significantly impact academic success through LLM usage alone. This finding is both revealing and informative: it implies that simply employing LLMs, even when university students trust them, does not necessarily translate into academic advantages. This may be because of variation in how university students use these technologies; some might use them for routine inquiries or emotional support rather than to directly accomplish academic duties [78]. The model also revealed that LLM usage positively influences academic success indirectly through QoL. This highlights an influential relationship: when LLMs are employed in ways that improve students’ daily functioning and overall emotional well-being, the subsequent improvements in QoL may fuel a more attentive, motivated, and empowered learning process. These findings align with earlier evidence suggesting that digital accessibility technologies may contribute to greater psychological readiness and cognitive engagement, which are key drivers of academic success [73,74].
Additionally, substantial indirect effects were found from LLM trust to QoL via LLM usage. This emphasises the key role of trust as a precursor to successful technology engagement. University students who view LLMs as trustworthy are more likely to adopt them into their daily routines, fostering convenience and independence that together contribute to a higher QoL [50,77]. Finally, LLM trust was found to have an indirect impact on academic success via both LLM usage and QoL. These multi-pathway mediation effects reveal that trust plays a foundational role not by immediately delivering academic advantages but by reshaping students’ interaction with assistive AI technologies, enhancing QoL, and generating a more supportive learning context. These results support calls in the literature for a more holistic, student-focused approach to AI adoption in education [73,76].

6. Conclusions

This paper investigated the influence of large language models (LLMs), specifically ChatGPT and Gemini, on the academic success and QoL of students with visual impairments in Saudi Arabian higher education institutions. Based on PLS-SEM analysis, the results revealed that LLM trust significantly predicts LLM usage, which in turn contributed to improved QoL. While LLM usage did not directly generate higher academic performance, its positive impact on QoL indirectly encouraged academic success. These results paint a nuanced picture in which academic achievement can be understood as part of a broader ecosystem of digital trust, emotional well-being, and comprehensive technology usage. Crucially, the paper confirmed that LLMs can act as a meaningful technology for educational equity when their usage aligns with the requirements and lived experiences of university students with disabilities. Trust and accessibility remain central to their effectiveness, especially for visually impaired students who depend on such technologies not only for information but also for independent learning and social interaction.
The study offers several implications for university leaders and policymakers. Universities should adopt a more holistic approach to digital inclusion by not only offering access to LLM tools but also providing a supportive context that fosters user trust and enhances digital literacy. Training workshops focused on ethical usage, academic integrity, and task-specific LLM applications should be incorporated into disability support services. Furthermore, guidelines should be updated to explicitly recognise LLMs as assistive technologies within the university context for inclusive education. The study also provides crucial implications for scholars: it highlights the significance of investigating assistive AI technology not only from a performance point of view but also in terms of its impact on well-being. Future research should consider longitudinal designs to detect the evolving relationships between LLM usage and student outcomes over time. Moreover, extending this study to other disability types (beyond visual impairments) and to other cultural contexts and populations can provide a more inclusive understanding of the transformative potential of AI technologies. While several statistical and procedural approaches were implemented to address Common Method Bias (CMB), future research could employ more advanced techniques, such as temporal separation of measurement, an unmeasured latent method factor, or scale separation, to address possible CMB more thoroughly.
Furthermore, a key limitation of this paper is its reliance on self-reported measures of LLM trust, usage, and academic achievement, without objective measures such as system logs or real-time prompt records. Future research is therefore encouraged to use mixed-method approaches, combining subjective and objective measures to create a richer understanding of how visually impaired students engage with LLMs in academic settings. Another promising avenue for future work lies in demographic subgroup analyses, to better understand how demographic characteristics (e.g., gender, age, or educational discipline) may affect LLM trust and usage, academic success, and QoL. While this study collected demographic data, multigroup analysis and measurement invariance testing were beyond its main scope. Prior studies (e.g., Thakur et al. [79]; Fosch-Villaronga et al. [80]) have argued that demographic diversity can influence technology acceptance and user outcomes, highlighting the importance of incorporating these differences into future research. Finally, although this paper offers important implications, its results must be understood within the educational and cultural context of Saudi Arabia; these factors could affect both the level of LLM trust and the outcomes linked to its usage. As such, the generalizability of the results to other educational or cultural contexts should be approached with caution.

Author Contributions

Conceptualization, I.A.E.; Software, M.A.S.; Formal analysis, I.A.E.; Investigation, M.A.S.; Data curation, M.A.S.; Writing—original draft, I.A.E. and M.A.S.; Writing—review & editing, I.A.E., S.M.A. and M.A.S.; Visualization, S.M.A.; Supervision, I.A.E.; Funding acquisition, S.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2024-054.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Scientific Research Ethics Committee of King Faisal University (Ethics Reference: KFU-2025-ETHICS3201, approved 6 October 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy restrictions.

Acknowledgments

The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2024-054.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fitas, R.J. Inclusive Education with AI: Supporting Special Needs and Tackling Language Barriers. arXiv 2025, arXiv:2504.14120. [Google Scholar] [CrossRef]
  2. Paice, A.; Biallas, M.; Andrushevich, A. Assistive and Inclusive Technology Design for People with Disabilities (Special Needs). In Human-Technology Interaction: Interdisciplinary Approaches and Perspectives; Springer: Berlin/Heidelberg, Germany, 2025; pp. 329–347. [Google Scholar]
  3. Giansanti, D.; Pirrera, A. Integrating AI and Assistive Technologies in Healthcare: Insights from a Narrative Review of Reviews. Healthcare 2025, 13, 556. [Google Scholar] [CrossRef]
  4. Kumar, P.; Kumar, S. Digital Horizons in Special Education: Pioneering Transformational Technologies for Individuals with Visual Impairments. In Educating for Societal Transitions; Blue Rose Publishers: Noida, India, 2024. [Google Scholar]
  5. Uddin, M.K.S. A Review of Utilizing Natural Language Processing and AI For Advanced Data Visualization in Real-Time Analytics. Glob. Manag. J. 2024, 1. [Google Scholar] [CrossRef]
  6. Setiawan, A.J. Intelligent Systems For Disabilities Technology For A More Inclusive Life; Elsevier Inc.: Amsterdam, The Netherlands, 2024. [Google Scholar]
  7. McGrath, C.; Mohler, E.; Mcfarland, J.; Hand, C.; Rudman, D.L.; Fitzgeorge, B.; Stone, M. Research Practices to Foster Accessibility for, and the Inclusivity of, Older Adults With Vision Loss: Examples From a Critical Participatory Study. Int. J. Qual. Methods 2025, 24, 16094069251316214. [Google Scholar] [CrossRef]
  8. Choi, W.C.; Chang, C.I.; Choi, I.C.; Lam, L.C. Country Landscape of Large Language Models Development: A Review. Preprints 2025. [Google Scholar] [CrossRef]
  9. Lin, L.; Gupta, N.; Zhang, Y.; Ren, H.; Liu, C.H.; Ding, F.; Wang, X.; Li, X.; Verdoliva, L.; Hu, S. Detecting Multimedia Generated by Large AI Models: A Survey. arXiv 2024, arXiv:2402.00045. [Google Scholar] [CrossRef]
  10. Ray, P.P. ChatGPT: A Comprehensive Review on Background, Applications, Key Challenges, Bias, Ethics, Limitations and Future Scope. Internet Things Cyber-Phys. Syst. 2023, 3, 121–154. [Google Scholar] [CrossRef]
  11. Ghosh, A.; Saini, A.; Barad, H. Artificial Intelligence in Governance: Recent Trends, Risks, Challenges, Innovative Frameworks and Future Directions. In AI & SOCIETY; Springer: Berlin/Heidelberg, Germany, 2025; pp. 1–23. [Google Scholar]
  12. Weygand, Z. The Blind in French Society from the Middle Ages to the Century of Louis Braille; Stanford University Press: Redwood City, CA, USA, 2020. [Google Scholar]
  13. Henderson, B. The Blind Advantage: How Going Blind Made Me a Stronger Principal and How Including Children with Disabilities Made Our School Better for Everyone; Harvard Education Press: Cambridge, MA, USA, 2011. [Google Scholar]
  14. French, K. Perkins School for the Blind; Arcadia Publishing: Mount Pleasant, SC, USA, 2004. [Google Scholar]
  15. Trent, J.W. The Manliest Man: Samuel G. Howe and the Contours of Nineteenth-Century American Reform; University of Massachusetts Press: Amherst, MA, USA, 2012. [Google Scholar]
  16. Idris, M.D.; Feng, X.; Dyo, V. Revolutionising Higher Education: Unleashing the Potential of Large Language Models for Strategic Transformation. IEEE Access 2024, 12, 67738–67757. [Google Scholar] [CrossRef]
  17. Johnsen, M. Developing AI Applications With Large Language Models; Maria Johnsen: Trondheim, Norway, 2025. [Google Scholar]
  18. Kumar, A.; Nagar, D.K. AI-Based Language Translation and Interpretation Services: Improving Accessibility for Visually Impaired Students. In Transforming Learning: The Power of Educational Technology; BlueRose Publisher: Noida, India, 2024; p. 178. [Google Scholar]
  19. Nakhaie Ahooie, N. Enhancing Access to Medical Literature Through an LLM-Based Browser Extension 2024; University of Oulu: Oulu, Finland, 2024. [Google Scholar]
  20. Mitra, D.S. AI-Powered Adaptive Education for Disabled Learners. In Advances in Accessibility and Sustainability; Elsevier Inc.: Amsterdam, The Netherlands, 2024. [Google Scholar]
  21. Cho, J.; Puspitasari, F.D.; Zheng, S.; Zheng, J.; Lee, L.-H.; Kim, T.-H.; Hong, C.S.; Zhang, C. Sora as an AGI World Model? A Complete Survey on Text-to-Video Generation. arXiv 2024, arXiv:2403.05131. [Google Scholar]
  22. Ahmad, W.; Shokeen, R.; Raj, R. Artificial Intelligence: Solutions in Special Education. In Transforming Special Education Through Artificial Intelligence; IGI Global: Hershey, PA, USA, 2025; pp. 459–520. [Google Scholar]
  23. Lata, P. Towards Equitable Learning: Exploring Artificial Intelligence in Inclusive Education. Int’l JL Mgmt. Human. 2024, 7, 416. [Google Scholar] [CrossRef]
  24. Chopra, A.; Patel, H.; Rajput, D.S.; Bansal, N. Empowering Inclusive Education: Leveraging AI-ML and Innovative Tech Stacks to Support Students with Learning Disabilities in Higher Education. In Applied Assistive Technologies and Informatics for Students with Disabilities; Springer: Berlin/Heidelberg, Germany, 2024; pp. 255–275. [Google Scholar]
  25. Safdar, S.; Kamran, F.; Anis, F. Beyond Accommodation: Artificial Intelligence’s Role in Reimagining Inclusive Classrooms. Int. J. Soc. Sci. 2024, 2, 273–288. [Google Scholar]
  26. Robinson, R.; Bose, U.; Cross, J. BEYOND SIGHT: Transforming Visual Content into Accessible Learning Content for the Blind. In Proceedings of the EDULEARN24 Proceedings, Palma, Spain, 1–3 July 2024; IATED: Valencia, Spain, 2024. [Google Scholar]
  27. Rahman, M.A.; Alqahtani, L.; Albooq, A.; Ainousah, A. A Survey on Security and Privacy of Large Multimodal Deep Learning Models: Teaching and Learning Perspective. In Proceedings of the 2024 21st Learning and Technology Conference (L&T), Jeddah, Saudi Arabia, 15–16 January 2024; IEEE: New York City, NY, USA, 2024. [Google Scholar]
  28. Elsayed, H. The Impact of Hallucinated Information in Large Language Models on Student Learning Outcomes: A Critical Examination of Misinformation Risks in AI-Assisted Education. Nat. Rev. AI Res. Theor. Comput. Complex. 2024, 9, 11–23. [Google Scholar]
  29. Alam, A.; Mohanty, A. Educational Technology: Exploring the Convergence of Technology and Pedagogy through Mobility, Interactivity, AI, and Learning Tools. Comput. Educ. 2023, 10, 2283282. [Google Scholar] [CrossRef]
  30. Williams, R. Impact. AI: Democratizing AI Through K-12 Artificial Intelligence Education; Massachusetts Institute of Technology: Cambridge, MA, USA, 2024. [Google Scholar]
  31. Abubaker, N.M.; Kashani, S.M.; Alshalwy, A.M.; Garib, A. Reshaping Higher Education in MENA with Generative AI: A Systematic Review. In Emerging Technologies Transforming Higher Education: Instructional Design and Student Success; IGI Global: Hershey, PA, USA, 2025; pp. 231–256. [Google Scholar]
  32. Aladi, C.C. IT Higher Education Teachers and Trust in AI-Enabled Ed-Tech: Implications for Adoption of AI in Higher Education. In Proceedings of the 2024 Computers and People Research Conference, Murfreesboro, TN, USA, 29 May–1 June 2024. [Google Scholar]
  33. Chellappa, R.; Niiler, E. Can We Trust AI? JHU Press: Baltimore, MD, USA, 2022. [Google Scholar]
  34. Chalkiadakis, A.; Seremetaki, A.; Kanellou, A.; Kallishi, M.; Morfopoulou, A.; Moraitaki, M.; Mastrokoukou, S. Impact of Artificial Intelligence and Virtual Reality on Educational Inclusion: A Systematic Review of Technologies Supporting Students with Disabilities. J. Educ. 2024, 14, 1223. [Google Scholar] [CrossRef]
  35. Tripathi, V.; Bali, A.; Sharma, P.; Chadha, S.; Sharma, B. Empowering Education: The Role of Artificial Intelligence in Supporting Students with Disabilities. In Proceedings of the 2024 2nd International Conference on Recent Trends in Microelectronics, Automation, Computing and Communications Systems (ICMACC), Hyderabad, India, 19–21 December 2024; IEEE: New York City, NY, USA, 2024. [Google Scholar]
  36. Habib, H.; Jelani, S.A.K.; Najla, S. Revolutionizing Inclusion: AI in Adaptive Learning for Students with Disabilities. Mod. Stud. J. 2022, 1, 1–11. [Google Scholar]
  37. Paliszkiewicz, J.; Gołuchowski, J. Trust and Artificial Intelligence. Adv. Syst. 2024, 1, 344. [Google Scholar]
  38. Afroogh, S.; Akbari, A.; Malone, E.; Kargar, M.; Alambeigi, H. Trust in AI: Progress, Challenges, and Future Directions. J. AI Res. 2024, 11, 1–30. [Google Scholar] [CrossRef]
  39. Küper, A.; Krämer, N.J. Psychological Traits and Appropriate Reliance: Factors Shaping Trust in AI. Int. J. Hum.–Comput. Interact. 2025, 41, 4115–4131. [Google Scholar] [CrossRef]
  40. See, A.R.; Advincula, W.D. Creating Tactile Educational Materials for the Visually Impaired and Blind Students Using AI Cloud Computing. Appl. Sci. 2021, 11, 7552. [Google Scholar] [CrossRef]
  41. Darji, H. AI-Powered Digital Platform for Religious Literature. In Advances in Accessibility and Sustainability; Elsevier Inc.: Amsterdam, The Netherlands, 2025. [Google Scholar]
  42. Vayadande, K.; Bohri, M.; Chawala, M.; Kulkarni, A.M.; Mursal, A. The Rise of AI-Generated News Videos: A Detailed Review. In How Machine Learning is Innovating Today’s World: A Concise Technical Guide; Wiley-Scrivener: Austin, TX, USA, 2024; pp. 423–451. [Google Scholar]
  43. Julio, C. Evaluation of Speech Recognition, Text-to-Speech, and Generative Text Artificial Intelligence for English as Foreign Language Learning Speaking Practices. Ph.D. Thesis, Tokyo Denki University, Tokyo, Japan, 2024. [Google Scholar]
  44. Geethanjali, K.S.; Umashankar, N. Enhancing Educational Outcomes with Explainable AI: Bridging Transparency and Trust in Learning Systems. In Proceedings of the 2025 International Conference on Emerging Systems and Intelligent Computing (ESIC), Bhubaneswar, India, 8–9 February 2025; IEEE: New York City, NY, USA, 2025. [Google Scholar]
  45. Scott, R. Undergraduate Educational Experiences: The Academic Success of College Students with Blindness and Visual Impairments. Ph.D. Thesis, North Carolina State University, Raleigh, NC, USA, 2009. [Google Scholar]
  46. Kaliappan, S.; Anand, A.S.; Saha, K.; Karkar, R. Exploring the Role of LLMs for Supporting Older Adults: Opportunities and Concerns. arXiv 2024, arXiv:2411.08123. [Google Scholar] [CrossRef]
  47. Wang, Q.; Gao, Y.; Wang, X. Exploring Engagement, Self-Efficacy, and Anxiety in Large Language Model EFL Learning: A Latent Profile Analysis of Chinese University Students. Int. J. Hum.–Comput. Interact. 2024, 41, 7815–7824. [Google Scholar] [CrossRef]
  48. Vision Loss Expert Group of the Global Burden of Disease Study; Pesudovs, K.; Lansingh, V.C.; Kempen, J.H.; Tapply, I.; Fernandes, A.G.; Cicinelli, M.V.; Arrigo, A.; Leveziel, N.; Resnikoff, S.; et al. Global Estimates on the Number of People Blind or Visually Impaired by Cataract: A Meta-Analysis from 2000 to 2020. Eye 2024, 38, 2156–2172. [Google Scholar] [CrossRef] [PubMed]
  49. Rappolt-Schlichtmann, G.; Daley, S.G.; Rose, L.T. A Research Reader in Universal Design for Learning; ERIC: Cambridge, MA, USA, 2012. [Google Scholar]
  50. Bach, T.A.; Khan, A.; Hallock, H.; Beltrão, G.; Sousa, S. A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective. Int. J. Hum.–Comput. Interact. 2024, 40, 1251–1266. [Google Scholar] [CrossRef]
  51. Michalos, A.C. Connecting the Quality of Life Theory to Health, Well-Being and Education; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  52. Bright, D.J. An Integrative Review of the Potential of Wireless Assistive Technologies and Internet of Things (IoT) to Improve Accessibility to Education for Students with Disabilities. Assist. Technol. 2022, 34, 653–660. [Google Scholar] [CrossRef]
  53. Yenduri, G.; Kaluri, R.; Rajput, D.S.; Lakshmanna, K.; Gadekallu, T.R.; Mahmud, M.; Brown, D.J. From Assistive Technologies to Metaverse—Technologies in Inclusive Higher Education for Students with Specific Learning Difficulties: A Review. Educ. Inf. Technol. 2023, 11, 64907–64927. [Google Scholar] [CrossRef]
  54. Fernández-Batanero, J.M.; Montenegro-Rueda, M.; Fernández-Cerero, J.; García-Martínez, I. Assistive Technology for the Inclusion of Students with Disabilities: A Systematic Review. Educ. Rev. 2022, 70, 1911–1930. [Google Scholar] [CrossRef]
  55. Sarrab, M.; Al-Shihi, H.; Khan, A.I. An Empirical Analysis of Mobile Learning (m-Learning) Awareness and Acceptance in Higher Education. In Proceedings of the 2015 International Conference on Computing and Network Communications (CoCoNet), Anaheim, CA, USA, 16–19 February 2015; IEEE: New York City, NY, USA, 2015.
  56. Owusu-Acheaw, M.; Larson, A.G. Use of Social Media and Its Impact on Academic Performance of Tertiary Institution Students: A Study of Students of Koforidua Polytechnic, Ghana. J. Educ. Pract. 2015, 6, 94–101.
  57. Elshaer, I.A.; AlNajdi, S.M.; Salem, M.A. Sustainable AI Solutions for Empowering Visually Impaired Students: The Role of Assistive Technologies in Academic Success. Sustainability 2025, 17, 5609.
  58. Zayed, M.A.; Moustafa, M.A.; Elrayah, M.; Elshaer, I.A. Optimizing Quality of Life of Vulnerable Students: The Impact of Physical Fitness, Self-Esteem, and Academic Performance: A Case Study of Saudi Arabia Universities. Sustainability 2024, 16, 4646.
  59. De Duro, E.S.; Veltri, G.A.; Golino, H.; Stella, M. Measuring and Identifying Factors of Individuals’ Trust in Large Language Models. arXiv 2025, arXiv:2502.21028.
  60. Venkatesh, V.; Thong, J.Y.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q. 2012, 36, 157–178.
  61. Diener, E.D.; Emmons, R.A.; Larsen, R.J.; Griffin, S. The Satisfaction with Life Scale. J. Personal. Assess. 1985, 49, 71–75.
  62. KSA Census. Saudi Arabia’s Fifth Housing and Population Census Is Approved to Commence on May 9, 2022 (Shawwal 8, 1443). Available online: https://www.stats.gov.sa/w/%D8%A7%D9%84%D9%87%D9%8A%D8%A6%D8%A9-%D8%A7%D9%84%D8%B9%D8%A7%D9%85%D8%A9-%D9%84%D9%84%D8%A5%D8%AD%D8%B5%D8%A7%D8%A1-%D8%A8%D8%AF%D8%A1-%D9%85%D8%B1%D8%AD%D9%84%D8%A9-%D8%AA%D8%AD%D8%AF%D9%8A%D8%AB-%D8%A7%D9%84%D8%B9%D9%86%D8%A7%D9%88%D9%8A%D9%86-%D9%84 (accessed on 16 September 2025).
  63. Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.Y.; Podsakoff, N.P. Common Method Biases in Behavioral Research: A Critical Review of the Literature and Recommended Remedies. J. Appl. Psychol. 2003, 88, 879.
  64. Williams, L.J.; Brown, B.K. Method Variance in Organizational Behavior and Human Resources Research: Effects on Correlations, Path Coefficients, and Hypothesis Testing. Organ. Behav. Hum. Decis. Process. 1994, 57, 185–209.
  65. Reio, T.G., Jr. The Threat of Common Method Variance Bias to Theory Building. Hum. Resour. Dev. Rev. 2010, 9, 405–411.
  66. Hair, J.; Alamer, A. Partial Least Squares Structural Equation Modeling (PLS-SEM) in Second Language and Education Research: Guidelines Using an Applied Example. Res. Methods Appl. Linguist. 2022, 1, 100027.
  67. Hair, J.F.; Risher, J.J.; Sarstedt, M.; Ringle, C.M. When to Use and How to Report the Results of PLS-SEM. Eur. Bus. Rev. 2019, 31, 2–24.
  68. Sarstedt, M.; Ringle, C.M.; Hair, J.F. Partial Least Squares Structural Equation Modeling. In Handbook of Market Research; Springer International Publishing: Cham, Switzerland, 2021; pp. 587–632.
  69. Chin, W.W. The Partial Least Squares Approach to Structural Equation Modeling. In Modern Methods for Business Research; Psychology Press: Hove, UK, 1998; pp. 295–336.
  70. Fornell, C.; Larcker, D.F. Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. J. Mark. Res. 1981, 18, 39–50.
  71. Ringle, C.M.; Sarstedt, M.; Mitchell, R.; Gudergan, S.P. Partial Least Squares Structural Equation Modeling in HRM Research. Int. J. Hum. Resour. Manag. 2020, 31, 1617–1643.
  72. Binns, R.; Veale, M.; Van Kleek, M.; Shadbolt, N. “It’s Reducing a Human Being to a Percentage”: Perceptions of Justice in Algorithmic Decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montréal, QC, Canada, 21–26 April 2018; pp. 1–14.
  73. Ferdaus, M.M.; Abdelguerfi, M.; Ioup, E.; Niles, K.N.; Pathak, K.; Sloan, S. Towards Trustworthy AI: A Review of Ethical and Robust Large Language Models. arXiv 2024, arXiv:2407.13934.
  74. Bong, W.K.; Chen, W. Increasing Faculty’s Competence in Digital Accessibility for Inclusive Education: A Systematic Literature Review. Int. J. Incl. Educ. 2024, 28, 197–213.
  75. Senjam, S.S.; Manna, S.; Bascaran, C. Smartphones-Based Assistive Technology: Accessibility Features and Apps for People with Visual Impairment, and Its Usage, Challenges, and Usability Testing. Clin. Optom. 2021, 13, 311–322.
  76. Kasneci, E.; Seßler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh, G.; Günnemann, S.; Kasneci, G.; et al. ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education. Learn. Individ. Differ. 2023, 103, 102274.
  77. Frommel, J.; Sagl, V.; Depping, A.E.; Johanson, C.; Miller, M.K.; Mandryk, R.L. Recognizing Affiliation: Using Behavioural Traces to Predict the Quality of Social Interactions in Online Games. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–16.
  78. Sharma, S.; Mittal, P.; Kumar, M.; Bhardwaj, V. The Role of Large Language Models in Personalized Learning: A Systematic Review of Educational Impact. Discov. Sustain. 2025, 6, 243.
  79. Thakur, N.; Cui, S.; Khanna, K.; Knieling, V.; Duggal, Y.N.; Shao, M. Investigation of the Gender-Specific Discourse about Online Learning during COVID-19 on Twitter Using Sentiment Analysis, Subjectivity Analysis, and Toxicity Analysis. Computers 2023, 12, 221.
  80. Fosch-Villaronga, E.; Poulsen, A.; Søraa, R.A.; Custers, B.H.M. A Little Bird Told Me Your Gender: Gender Inferences in Social Media. Inf. Process. Manag. 2021, 58, 102541.
Figure 1. Research framework.
Figure 2. The research model.
Table 1. Factor loadings and scale psychometric properties.

Construct / Item                FL       α        C.R.     AVE      VIF
Large Language Model Trust               0.934    0.935    0.687
  Trust_1                       0.741                               2.069
  Trust_2                       0.813                               2.472
  Trust_3                       0.869                               2.163
  Trust_4                       0.864                               2.644
  Trust_5                       0.867                               3.094
  Trust_6                       0.870                               3.076
  Trust_7                       0.861                               3.128
  Trust_8                       0.732                               1.909
Large Language Model Usage               0.852    0.858    0.772
  Usage_1                       0.905                               2.350
  Usage_2                       0.876                               2.115
  Usage_3                       0.854                               1.936
Quality of Life                          0.859    0.879    0.640
  QoL_1                         0.861                               2.403
  QoL_2                         0.822                               2.236
  QoL_3                         0.847                               2.217
  QoL_4                         0.747                               2.036
  QoL_5                         0.710                               1.877
Academic Success                         0.863    0.868    0.784
  Acd_Prfmnc_1                  0.880                               2.034
  Acd_Prfmnc_2                  0.910                               2.783
  Acd_Prfmnc_3                  0.866                               2.209

Note. FL = Factor Loading; α = Cronbach’s Alpha; C.R. = Composite Reliability; AVE = Average Variance Extracted; VIF = Variance Inflation Factor. Recommended thresholds: FL > 0.70, α > 0.70, CR > 0.70, AVE > 0.50, VIF < 5.0.
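The AVE value in Table 1 can be recomputed directly from the reported item loadings. The following Python sketch (illustrative, not the authors' code) applies the standard formulas to the Trust construct: AVE is the mean of the squared standardized loadings, and composite reliability rho_c is (Σλ)² / ((Σλ)² + Σ(1 − λ²)). Note that PLS software typically reports several reliability coefficients (α, rho_A, rho_c), so the rho_c value computed here (≈0.946) need not coincide with the table's C.R. of 0.935.

```python
# Recompute AVE and composite reliability (rho_c) for the LLM Trust
# construct from the standardized loadings reported in Table 1.
loadings = [0.741, 0.813, 0.869, 0.864, 0.867, 0.870, 0.861, 0.732]

# AVE = mean of squared standardized loadings.
ave = sum(l ** 2 for l in loadings) / len(loadings)

# rho_c = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
# where each item's error variance is 1 - loading^2.
s = sum(loadings)
rho_c = s ** 2 / (s ** 2 + sum(1 - l ** 2 for l in loadings))

print(round(ave, 3))    # 0.687, matching the AVE reported in Table 1
print(round(rho_c, 3))  # ≈ 0.946 (the table's C.R. of 0.935 may be rho_A)
```

Both values clear the recommended thresholds noted under Table 1 (AVE > 0.50, reliability > 0.70).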
Table 2. Discriminant validity: Fornell–Larcker criterion.

                    Academic Success   LLM Trust   LLM Usage   Quality of Life
Academic Success    0.886
LLM Trust           0.725              0.829
LLM Usage           0.521              0.613       0.879
Quality of Life     0.470              0.422       0.450       0.800

Note. Diagonal values are the square roots of each construct’s AVE.
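The Fornell–Larcker criterion holds when each construct's square root of AVE (the diagonal) exceeds its correlations with every other construct. A short Python sketch (illustrative, not the authors' code) verifies this for the matrix in Table 2:

```python
import math

# Lower triangle of Table 2; diagonal entries are sqrt(AVE).
m = [
    [0.886],
    [0.725, 0.829],
    [0.521, 0.613, 0.879],
    [0.470, 0.422, 0.450, 0.800],
]

def fornell_larcker_ok(matrix):
    n = len(matrix)
    # Expand the lower triangle into a full symmetric matrix.
    full = [[matrix[max(i, j)][min(i, j)] for j in range(n)] for i in range(n)]
    # Every diagonal element must exceed all off-diagonal elements in its row.
    return all(full[i][i] > full[i][j]
               for i in range(n) for j in range(n) if i != j)

# Cross-check one diagonal against Table 1: sqrt(AVE of Trust) = sqrt(0.687).
assert round(math.sqrt(0.687), 3) == 0.829

print(fornell_larcker_ok(m))  # True: discriminant validity is supported
```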
Table 3. Heterotrait–monotrait ratio (HTMT) matrix.

                    Academic Success   LLM Trust   LLM Usage
LLM Trust           0.803
LLM Usage           0.602              0.683
Quality of Life     0.532              0.464       0.515
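Discriminant validity via HTMT is commonly judged against a conservative cutoff of 0.85 (0.90 in the more liberal variant). A minimal sketch (not the authors' code) confirms that every ratio in Table 3 stays below the stricter bound:

```python
# HTMT ratios from Table 3, keyed by construct pair.
htmt = {
    ("LLM Trust", "Academic Success"): 0.803,
    ("LLM Usage", "Academic Success"): 0.602,
    ("LLM Usage", "LLM Trust"): 0.683,
    ("Quality of Life", "Academic Success"): 0.532,
    ("Quality of Life", "LLM Trust"): 0.464,
    ("Quality of Life", "LLM Usage"): 0.515,
}

print(max(htmt.values()))                     # 0.803 (Trust vs. Academic Success)
print(all(v < 0.85 for v in htmt.values()))   # True: below the conservative cutoff
```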
Table 4. Factor cross-loadings.

                Academic Success   LLM Trust   LLM Usage   Quality of Life
Acd_Prfmnc_1    0.880              0.708       0.494       0.440
Acd_Prfmnc_2    0.910              0.618       0.426       0.397
Acd_Prfmnc_3    0.866              0.589       0.458       0.408
QoL_1           0.426              0.395       0.451       0.861
QoL_2           0.376              0.311       0.330       0.822
QoL_3           0.461              0.375       0.399       0.847
QoL_4           0.265              0.281       0.324       0.747
QoL_5           0.309              0.302       0.262       0.710
Trust_1         0.615              0.741       0.554       0.329
Trust_2         0.629              0.813       0.576       0.349
Trust_3         0.548              0.869       0.549       0.324
Trust_4         0.601              0.864       0.443       0.364
Trust_5         0.607              0.867       0.508       0.409
Trust_6         0.557              0.870       0.566       0.367
Trust_7         0.605              0.861       0.481       0.335
Trust_8         0.639              0.732       0.359       0.308
Usage_1         0.512              0.572       0.905       0.423
Usage_2         0.462              0.526       0.876       0.379
Usage_3         0.391              0.515       0.854       0.382
Table 5. Path coefficients and related t and p values.

Hypothesis                                                            β        T         p        Result
H1: LLM Trust -> LLM Usage                                            0.613    10.895    0.000    Supported
H2: LLM Trust -> Quality of Life                                      0.233    3.745     0.000    Supported
H3: LLM Trust -> Academic Success                                     0.607    14.001    0.000    Supported
H4: LLM Usage -> Quality of Life                                      0.307    4.893     0.000    Supported
H5: LLM Usage -> Academic Success                                     0.066    1.652     0.099    Rejected
H6: Quality of Life -> Academic Success                               0.185    5.535     0.000    Supported
Specific indirect effects
H7: LLM Trust -> Quality of Life -> Academic Success                  0.043    2.713     0.007    Supported
H8: LLM Trust -> LLM Usage -> Academic Success                        0.040    1.499     0.134    Rejected
H9: LLM Usage -> Quality of Life -> Academic Success                  0.057    3.496     0.000    Supported
H10: LLM Trust -> LLM Usage -> Quality of Life                        0.188    4.554     0.000    Supported
H11: LLM Trust -> LLM Usage -> Quality of Life -> Academic Success    0.035    3.392     0.001    Supported
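In PLS-SEM, a specific indirect effect equals the product of its constituent direct path coefficients (significance is then assessed by bootstrapping, which yields the T and p values in Table 5). The Python sketch below (illustrative, not the authors' code) recovers the reported indirect effects from the direct paths, using shorthand node names:

```python
# Direct path coefficients from Table 5 (AS = Academic Success).
direct = {
    ("Trust", "Usage"): 0.613,
    ("Trust", "QoL"):   0.233,
    ("Usage", "QoL"):   0.307,
    ("Usage", "AS"):    0.066,
    ("QoL",   "AS"):    0.185,
}

def indirect(*edges):
    """Specific indirect effect = product of the direct paths along the chain."""
    prod = 1.0
    for edge in edges:
        prod *= direct[edge]
    return round(prod, 3)

# H7: Trust -> QoL -> AS
print(indirect(("Trust", "QoL"), ("QoL", "AS")))       # 0.043
# H9: Usage -> QoL -> AS
print(indirect(("Usage", "QoL"), ("QoL", "AS")))       # 0.057
# H10: Trust -> Usage -> QoL
print(indirect(("Trust", "Usage"), ("Usage", "QoL")))  # 0.188
# H11: Trust -> Usage -> QoL -> AS
print(indirect(("Trust", "Usage"), ("Usage", "QoL"), ("QoL", "AS")))  # 0.035
```

Each product matches the corresponding indirect effect in Table 5 to three decimals, which is a useful internal-consistency check on the reported model.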
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Elshaer, I.A.; AlNajdi, S.M.; Salem, M.A. Measuring the Impact of Large Language Models on Academic Success and Quality of Life Among Students with Visual Disability: An Assistive Technology Perspective. Bioengineering 2025, 12, 1056. https://doi.org/10.3390/bioengineering12101056
