1. Introduction
China’s National Smart Education Platform (NSEP) was rolled out in 2020 and officially launched nationwide on 28 March 2022. The platform integrates learning, teaching, governance, and lifelong services. In 2024, its international version went online, serving users in more than 200 countries (MOE, 2024) [1]. The launch of the platform demonstrates China’s commitment to advancing Sustainable Development Goal 4 (quality education) and Sustainable Development Goal 10 (reduced inequalities). The NSEP aims to build global collaboration in digital education, promote inclusive access to learning opportunities, and support the development of a more equitable society. Drawing on a sample of 500 urban and rural users, this research aims to discover whether and how the NSEP is bridging the urban–rural education gap by analyzing perceived ease of use, perceived usefulness, satisfaction, engagement, learning outcomes, and system quality. Such a top-down platform intervention can succeed in the long term only if it shifts from a technology-centered deployment model to one that is deeply rooted in and responsive to regional socio-cultural, teaching, and administrative conditions, because the mere “availability” of a platform does not guarantee equal and effective use across different contexts [2,3].
Across recent global research, smart education platforms are recognized as a powerful tool for reducing educational inequalities (Liu, Cao, & Chen, 2024) [4]. For example, in high-income regions such as Europe, large-scale learning management systems, open educational resources, and AI-supported tutoring have been deployed to complement traditional schooling and to expand access for rural and marginalized groups, although persistent infrastructure and affordability gaps still constrain impact in some communities (Educause, 2024) [5]. In developing countries across Asia, Africa, and Latin America, digital platforms are increasingly promoted as a cost-effective strategy to address teacher shortages, textbook scarcity, and geographic isolation, helping rural learners access the standards-aligned curricula and remote instruction that were historically limited to urban schools [6]. UNESCO consistently highlights such online educational platforms as instruments for advancing Sustainable Development Goal 4 on inclusive, high-quality education and for narrowing urban–rural digital divides, while also warning that platform effectiveness depends on reliable connectivity, affordable devices, and sustained teacher professional development (UNESCO, 2025) [2]. Within this global landscape, China’s National Smart Education Platform (NSEP) exhibits four distinctive traits [1]: (1) It is state-led and centrally coordinated, integrating services for learners, teachers, school governance, and lifelong learning in a single national hub. (2) It pursues equitable resource synchronization, functioning as a bridge for geographic democratization: it is explicitly designed to allow students in remote and rural areas to “share the same class” as those in large cities through high-quality shared resources and synchronized lessons. (3) It operates as a globalized public utility, serving as an international digital public good; by launching a version for users in over 200 countries, the NSEP positions itself as a cross-border vehicle for exporting digital education resources and experience. (4) It aligns with sustainability, linking internal equity goals to external global commitments: the platform connects the reduction of urban–rural disparities with SDG 4 (quality education) and SDG 10 (reduced inequalities) through international cooperation and resource sharing. Nevertheless, much of the empirical evidence on platform-driven reform is still concentrated in affluent, urban, or pilot settings, which leaves limited clarity on whether national platforms reduce gaps in real-world use and outcomes across unequal regional infrastructures and user capacities [2,7,8].
The present study is anchored in a theoretical foundation that links technology acceptance, digital equity, and student engagement. Building on Davis’s (1989) technology acceptance model (TAM) [9], we conceptualize perceived ease of use and perceived usefulness as core cognitive beliefs that shape user satisfaction and intention to use the NSEP, thereby shifting the focus from simple access to second-level divides in capability and outcome (Van Deursen & Van Dijk, 2019) [3]. Research on blended learning, multimedia-supported instruction, and learning management systems has shown that perceived ease and usefulness are central to sustained adoption, yet much of this work remains technologically deterministic, paying limited attention to how platforms are appropriated within diverse educational cultures (AlAli & Wardat, 2024; Kosasih & Sulaiman, 2024) [10,11]. At the same time, student engagement theory, particularly the cognitive, behavioral, and affective dimensions outlined by Fredricks, Blumenfeld, and Paris [12], provides a lens for examining how the NSEP affects learners’ active participation in both urban and rural settings. Emerging “smart education” studies that incorporate AI and learning analytics suggest that national platforms can enhance engagement and outcomes (Yang & Wu, 2024) [13], but most evidence comes from affluent or experimental environments and does not fully capture regional and rural realities characterized by uneven connectivity, device access, and digital competence (Yin & Ying, 2025; Vičič Krabonja et al., 2024) [7,8]. To address the technical side, we also draw on the Information Systems Success Model (DeLone & McLean, 2003) [14], treating system quality as a key determinant of satisfaction and continued use. Together, these strands of the literature highlight a clear gap: while digital transformation is widely promoted in policy discourse, there is still limited empirical work that simultaneously considers technology acceptance, engagement, system quality, and learning outcomes within a single national platform, and does so in a way that is sensitive to urban–rural disparities. Consequently, although the macro-level aspirations of national platforms are widely articulated, there remains a void in meso-level evidence on how such top-down processes are enacted within local systems, where digital transformation claims may either become productive realities or disintegrate due to misalignment with regional conditions [3].
This research responds with one central research question: what is the impact of the NSEP on the main success determinants of regional education systems, i.e., perceived ease of use, perceived usefulness, user satisfaction, engagement, learning outcomes, and system quality, for urban and rural user groups? We use a stratified sample of 500 users, including students, teachers, and administrators from urban and rural areas, to quantitatively investigate how these constructs relate to each other in real platform experience. The research makes two main contributions. First, it serves as a theoretical link between the technology acceptance model (TAM), student engagement theory, and digital divide scholarship in the case of a large, state-led national smart education platform, thereby expanding digital equity debates by revealing how users in different regions experience second-level divides in ease of use, usefulness, engagement, and outcomes. Second, it brings fresh data on how system quality, user satisfaction, engagement levels, and perceived learning outcomes intersect within the NSEP, offering a micro-level perspective often missing from macro policy narratives. Importantly, this positioning clarifies that the value of including system quality and contextual variables is not as a routine TAM extension, but as a necessary way to evaluate whether a national platform can deliver equitable “effective use” and benefits across structurally unequal settings.
Practically, the findings aim to put forward actionable recommendations for both smart education designers and policymakers. Besides emphasizing the significance of user-centered design and system reliability, the findings point toward a necessary combination of infrastructure investment and continuous capacity-building so that platform access may be transformed into real, equitable educational benefits for urban and rural users. In short, the scientific novelty of this study is that, within a real national-scale platform context, it tests an integrated mechanism that jointly incorporates system quality within the technology acceptance model, while explicitly comparing urban–rural and role-based differences to determine whether platform adoption translates from access equity into capability and outcome equity.
2. Literature Review
There has been considerable academic focus on technology integration within educational pedagogy, particularly on the evolution from basic computer-assisted learning systems to sophisticated, ecosystem-based smart educational frameworks (Lin & Pang, 2024) [15]. Pioneering studies documented the advantages of technology integration, particularly in the form of blended learning frameworks in which formative in-person lessons complement online sessions (Li et al., 2024) [16]. Additionally, the principles underlying multimedia learning have long established the ability of well-engineered digital resources to facilitate deeper cognitive processing and knowledge retention by optimizing cognitive load management (Zhang et al., 2025) [17]. As artificial intelligence and adaptive technologies continue to advance rapidly, education systems are evolving beyond traditional digitalization to foster adaptive, learner-centered, and transformative educational experiences (Merino-Campos et al., 2025) [18]. These technologies can increase educational efficiency and efficacy by personalizing learning resources within adaptive learning pathways (Saini & Kharb, 2025) [19].
However, the implementation of such technologies is fraught with challenges that are often understated in optimistic policy discourses (Truong, 2024) [20]. A significant portion of the literature in human–technology research has prioritized analyses of technological features and innovation, often at the expense of examining how social, organizational, and contextual conditions shape the ways these technologies are actually used and integrated into human practices and systems (Jarecki et al., 2025) [21]. The resulting digital divide is not limited to unequal access to devices and connectivity (the first-level divide); it also encompasses disparities in digital skills (the second-level divide) and, ultimately, unequal real-world benefits gained from technology use (the third-level divide) (Ayhan, 2024; Khoso et al., 2025) [22,23].
According to Foong et al. (2024), using such platforms is critical to teaching “21st-century skills sought by today’s employers, such as critical thinking, global awareness and digital literacy, in a world awash in technology” [24]. Honcharuk et al. (2024) argue that active learning is also key, as “digital learning platforms” are not only places where students can acquire knowledge but also ones where they can interact [25]. Joseph et al. (2024) argue that this approach is best suited to 21st-century literacy even though serious challenges remain [26]; ensuring that all students have equitable access to technology and related infrastructure is more important than ever. Adopting digital platforms effectively also means adapting traditional curricula and lesson methodologies (Gu, 2024) [27].
The theoretical underpinnings of technology adoption are crucial for understanding these dynamics. Davis’s (1989) technology acceptance model (TAM) provides a robust framework, positing that the adoption of any technology is primarily influenced by its perceived usefulness and perceived ease of use [9]. The effectiveness of a platform in an educational setting depends on educators and students perceiving it as a helpful and controllable tool that simplifies, rather than complicates, the work of an educator (Adel, 2024) [28]. This has a direct impact on user satisfaction, which is a key determinant of long-term sustainability. Furthermore, helping students engage meaningfully with learning is the ultimate aim of any educational intervention and is closely interwoven with engagement. Fredricks, Blumenfeld, and Paris (2004) consider engagement to be a multi-faceted construct comprising behavioral (participation), emotional (interest and belonging), and cognitive (investment in learning) dimensions [12]. The gap in the extant literature lies at the intersection of these two theoretical constructs and the actual implementation of such platforms in regional contexts. On the one hand, the beneficial possibilities of smart education are often praised (Aljaradin et al., 2024; Adeshina, 2024; Qian et al., 2025) [29,30,31]; on the other, there is little systematic evidence on how collaborative learning platforms such as the NSEP shape TAM variables and multi-dimensional engagement within the specific constraints of regional teaching systems and the present global surroundings (Wu and Yang, 2024; Harahap and Mahardhani, 2025; Indrasari et al., 2024) [32,33,34]. This paper aims to bridge this gap by quantitatively exploring these relationships and offering much-needed evidence on whether the promise of digital transformation is being realized in regional communities.
3. Methodology
To empirically evaluate the adoption and perceived effectiveness of NSEP, this study uses an extended TAM-based analytical framework, with particular attention to urban–rural differences and role-based variations among students, teachers, and administrators.
Figure 1 presents the research design, which models how system quality, perceived ease of use, and perceived usefulness predict user satisfaction and behavioral intention, and how these in turn relate to engagement and learning outcomes. These constructs inform the subsequent analyses, including the model summaries, regression tests, and diagnostic checks used to assess robustness across user roles and regions.
3.1. Research Design and Participant Selection
A stratified random sampling approach was employed to reflect the diversity of the educational contexts served by the National Smart Education Platform and to ensure adequate representation of key subgroups for meaningful comparison. The target population was stratified across three dimensions: (1) geographical location (urban vs. rural); (2) user role (students, teachers, and administrative staff); and (3) education level (primary: grades 5–6; secondary: grades 7–9). This strategy captures a rich array of socio-economic backgrounds, levels of access to technology, and individual experiences of pedagogical systems, supporting a deep understanding of the platform’s efficacy and helping pinpoint possible differences in its impact.
The urban and rural data were collected through the first author’s personal contacts to ensure practical access to NSEP users while maintaining regional comparability. The urban sample was obtained from a private school located in the urban districts of Beijing, recruited with the support of the first author’s personal contacts. The rural sample was collected from one primary school and one secondary school in a county town in Hebei Province, the first author’s hometown. Because Beijing and Hebei are geographically proximate, this design helps capture a meaningful urban–rural contrast in educational conditions and digital access. The questionnaire was administered online via Wenjuanxing (www.wjx.com), and the survey link was distributed through WeChat. To ensure an adequate number of valid responses, 700 invitations were issued so that at least 500 valid responses could be retained after screening. The original questionnaire was written in Chinese to ensure that all participants could fully understand the items and respond accurately.
3.2. Data Collection Instruments and Measures
The instrument for data collection was a structured online questionnaire. It used 5-point Likert scales to quantify users’ experiences. The questionnaire primarily comprised scales adapted from established theoretical frameworks to ensure construct validity and reliability. These measures were then refined to fit the specific context and functional features of the NSEP.
All items were measured on a 5-point Likert scale, ranging from 1 (Strongly Disagree) to 5 (Strongly Agree).
Perceived ease of use (PEOU): This variable was assessed using a scale adapted from Davis’s (1989) technology acceptance model (TAM) [9]. It measured the degree to which users believe that using the NSEP would be free of effort, with items focusing on interface clarity, navigation simplicity, and ease of learning.
Perceived usefulness (PU): Also adapted from Davis (1989) [9], this scale measured the degree to which a user believes that the NSEP would enhance their job performance (for teachers/administrators) or learning outcomes (for students). It included items related to productivity, effectiveness, and overall advantage.
User satisfaction (US): This construct captured the overall affective response to using the platform. It synthesized elements from both TAM and the user experience literature, evaluating contentment with platform features, reliability, and whether the platform met user expectations.
Behavioral intention to use (BI): A critical dependent variable in the TAM framework, this scale measured the likelihood that users will continue to use the NSEP in the future. It serves as a key indicator of the platform’s long-term sustainability and acceptance.
Engagement levels (EL): Student engagement was measured using a multi-dimensional scale inspired by the framework of Fredricks, Blumenfeld, and Paris (2004) [12]. It included sub-scales for behavioral engagement (time-on-task, participation), emotional engagement (interest, enjoyment), and cognitive engagement (investment in learning, strategic use of the platform).
Learning outcomes (LO): This variable captured users’ self-reported assessment of the educational benefits derived from the platform. For students, this pertained to knowledge acquisition and skill development; for educators, it related to teaching effectiveness and professional development. Items were framed with reference to Bloom’s revised taxonomy to capture cognitive gains [35].
System quality (SQ): This construct assessed the technical performance of the platform itself, a factor often crucial for user acceptance. It measured perceptions of the platform’s reliability, responsiveness, and availability, drawing on the Information Systems Success Model (DeLone & McLean, 2003) [14].
Prior to full deployment, the survey instrument underwent a pilot testing phase with a small sample (n = 30) to assess internal consistency reliability using Cronbach’s Alpha, clarity of wording, and completion time [36].
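As an illustration of the pilot reliability check, the following sketch computes Cronbach’s alpha from a respondents-by-items matrix; it assumes the hypothetical df and constructs objects from the scoring sketch above.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Pilot check (n = 30): flag any scale falling below the 0.70 benchmark.
for name, cols in constructs.items():
    alpha = cronbach_alpha(df[cols].to_numpy())
    flag = "  <- review items" if alpha < 0.70 else ""
    print(f"{name}: alpha = {alpha:.2f}{flag}")
```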
3.3. Data Collection Procedure
Data collection was conducted in two stages.
Stage 1 was preparation. We finalized the online survey and set clear ethical procedures. All participants provided informed consent. Responses were anonymous, and participants could withdraw at any time without penalty.
Stage 2 was implementation. The survey was administered online from 1 October to 15 October 2025. This time frame gave participants enough time to respond without feeling rushed. A reminder email was sent at the end of the first week to improve the response rate. All data were collected anonymously and stored on a password-protected server for analysis.
3.4. Sample Size Determination
The target sample size was determined using an a priori power analysis in G*Power 3.1.9.7 to reduce Type II error and ensure adequate statistical power. For the main multiple regression, we assumed a medium effect size (f² = 0.15), power = 0.95, and α = 0.05, which indicated a minimum required sample of 119 participants.
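The G*Power result can be reproduced from the noncentral F distribution. The sketch below searches for the smallest N whose achieved power reaches the target; the number of predictors is an assumption (three predictors reproduce the reported minimum of 119), since the exact predictor count entered in G*Power is not stated.

```python
from scipy.stats import f as f_dist, ncf

def required_n(f2=0.15, alpha=0.05, power=0.95, n_predictors=3):
    """Smallest N for an a priori F test that R^2 deviates from zero.

    Noncentrality: lambda = f2 * N, with df1 = n_predictors and
    df2 = N - n_predictors - 1.
    """
    n = n_predictors + 2
    while True:
        df1, df2 = n_predictors, n - n_predictors - 1
        crit = f_dist.ppf(1 - alpha, df1, df2)       # critical F under H0
        achieved = 1 - ncf.cdf(crit, df1, df2, f2 * n)
        if achieved >= power:
            return n
        n += 1

print(required_n())  # 119 under the three-predictor assumption
```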
Because this study used stratified sampling and aimed to compare subgroups (e.g., urban vs. rural and students, teachers, administrative staff), we recruited a larger sample to improve representativeness and to reduce the impact of non-response or incomplete questionnaires. The final sample was n = 500, providing sufficient power for both the main regression models and the planned subgroup analyses (t-tests and ANOVA).
3.5. Data Analysis
The quantitative data were analyzed in SPSS (Version 28). The analysis followed several steps. First, the descriptive statistics were calculated for all variables, including means, standard deviations, skewness, kurtosis, and frequency distributions. Next, internal consistency reliability was tested using Cronbach’s alpha, with an α > 0.70 treated as acceptable. Finally, multiple regression analyses were conducted to test whether perceived ease of use (PEOU) and perceived usefulness (PU) predicted user satisfaction, behavioral intention, and perceived learning outcomes. All models controlled for user role and geographical location to reduce potential confounding effects.
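Although the analyses were run in SPSS, the hierarchical regression step can be sketched with open-source tools. The example below, assuming the hypothetical df with composite scores plus role and region columns, fits the control-only model and the full TAM model for behavioral intention and reports the R² change; it is illustrative, not the authors’ exact SPSS specification.

```python
import statsmodels.formula.api as smf

# Model 1: controls only (user role and region as categorical dummies).
m1 = smf.ols("BI ~ C(role) + C(region)", data=df).fit()

# Model 2: add system quality and the TAM beliefs, plus the PEOU x PU term.
m2 = smf.ols("BI ~ C(role) + C(region) + SQ + PEOU + PU + PEOU:PU",
             data=df).fit()

print(f"Delta R^2 = {m2.rsquared - m1.rsquared:.2f}")
print(m2.summary())  # note: standardized betas require z-scored predictors
```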
In addition, independent-samples t-tests were used to examine whether mean differences existed between two-group comparisons (e.g., urban vs. rural users). For comparisons involving more than two groups (e.g., students, teachers, and administrators), one-way ANOVA was conducted. When ANOVA results were significant, post hoc tests (Tukey HSD) were applied to identify which specific pairs differed. For all inferential analyses, statistical significance was set at p < 0.05, and effect sizes were reported to indicate the practical importance of the findings.
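A matching open-source sketch of the group comparisons is shown below (again assuming the hypothetical df): an independent-samples t-test for the urban–rural contrast, a one-way ANOVA across the three roles, and Tukey HSD follow-ups when the omnibus test is significant.

```python
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Two-group comparison: urban vs. rural perceived ease of use.
urban = df.loc[df["region"] == "urban", "PEOU"]
rural = df.loc[df["region"] == "rural", "PEOU"]
t_stat, p_val = stats.ttest_ind(urban, rural)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Three-group comparison: students vs. teachers vs. administrators.
groups = [g["PEOU"].to_numpy() for _, g in df.groupby("role")]
f_stat, p_anova = stats.f_oneway(*groups)
if p_anova < 0.05:
    print(pairwise_tukeyhsd(df["PEOU"], df["role"]))  # which pairs differ
```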
Common method variance (CMV) diagnostics. Because the data were collected using a single self-reported questionnaire, we conducted formal diagnostics to assess the potential risk of common method variance (CMV). First, Harman’s single-factor test was performed by entering all measurement items into an unrotated exploratory factor analysis. The results suggested that CMV was not a dominant issue, as multiple factors emerged and the first factor explained 36.2% of the total variance (i.e., below the commonly used 50% threshold). Second, we conducted a robustness check using a common latent factor (CLF) approach in AMOS (version 24) by adding a latent method factor that loaded on all observed indicators. The inclusion of the CLF did not materially change the standardized measurement loadings or the direction and significance of the key relationships, indicating that CMV is unlikely to materially bias the substantive conclusions. Detailed CMV outputs are reported in Appendix A.
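For transparency, the Harman check can be approximated outside of SPSS/AMOS with an unrotated principal component solution on the pooled item correlation matrix, as in the sketch below (assuming the hypothetical df and constructs from Section 3.2); a first component well under 50% of total variance mirrors the reported 36.2%.

```python
import numpy as np

# Pool all measurement items and inspect the unrotated first factor.
item_cols = [c for cols in constructs.values() for c in cols]
R = np.corrcoef(df[item_cols].to_numpy(float), rowvar=False)

eigvals = np.linalg.eigvalsh(R)[::-1]        # descending eigenvalues
first_share = eigvals[0] / eigvals.sum()     # variance share of factor 1
print(f"First factor explains {first_share:.1%} of total variance")
# Well below the 50% rule of thumb -> CMV unlikely to be dominant.
```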
4. Results
This section presents empirical findings on (1) the descriptive patterns and reliability/validity of the NSEP user measures; (2) the structural relationships among key constructs such as perceived ease of use, perceived usefulness, satisfaction, intention, engagement, and learning outcomes; and (3) group differences across urban–rural contexts and user roles.
4.1. Descriptive Statistics and Reliability of Measures
These preliminary analyses verify the quality of the collected data, confirm that the measurement instruments are appropriate in the specific setting of this study, and provide an overview of participants’ responses before the more sophisticated inferential analyses. The large reliability coefficients indicate that the items within each scale measure a common core concept and can therefore be employed in the subsequent analyses.
Table 1 reports generally positive perceptions across all constructs (M = 3.83–4.31 on a 5-point scale) with relatively low dispersion (SD = 0.51–0.72), suggesting that responses are fairly consistent; perceived usefulness is rated highest (M = 4.31, SD = 0.53), while engagement level is the lowest but still favorable (M = 3.83, SD = 0.72). Internal consistency is strong for every scale (Cronbach’s α = 0.85–0.92), exceeding the widely used ≥0.70 benchmark for acceptable reliability (Cronbach, 1951; Nunnally & Bernstein, 1994) [36,37].
Table 2 shows that all of the study variables are positively and significantly correlated (r = 0.32 to 0.75, p < 0.01), indicating that higher perceived ease of use, usefulness, satisfaction, engagement, learning outcomes, and system quality tend to co-occur with stronger behavioral intention. The strongest association is between user satisfaction and behavioral intention (r = 0.75), followed by satisfaction and usefulness (r = 0.71) and engagement level and perceived learning outcomes (r = 0.66), while the weakest relationship is between perceived learning outcomes and system quality (r = 0.32). Descriptively, mean scores are generally high (≈3.83–4.31) with modest variability (SD = 0.51–0.72), suggesting overall favorable evaluations across constructs.
4.2. Assessment of the Measurement Model (Confirmatory Factor Analysis—CFA)
Before testing the structural relationships, a confirmatory factor analysis (CFA) was conducted using AMOS to validate the hypothesized measurement model. The CFA assessed model fit and established convergent and discriminant validity, ensuring that each latent construct was measured accurately and remained distinct from the others. This step confirmed that the survey items were appropriate indicators of their intended theoretical concepts and provided a solid measurement foundation for the subsequent structural/path analysis.
Table 3 presents the results of the confirmatory factor analysis (CFA), which assessed the convergent validity, reliability, and unidimensionality of the measurement model. All standardized factor loadings exceeded the recommended threshold of 0.70 and were statistically significant (p < 0.001), indicating that the indicators strongly represented their respective latent constructs. Composite Reliability (CR) values for all constructs ranged from 0.86 to 0.93, exceeding the acceptable level of 0.70, which confirms the high internal consistency of the measures. The Average Variance Extracted (AVE) for each construct exceeded 0.50, ranging from 0.65 to 0.72, demonstrating that the constructs explain more than half of the variance in their indicators on average, which further supports convergent validity. The Maximum Shared Variance (MSV) values were all lower than the corresponding AVE values, providing preliminary evidence of discriminant validity.
Table 4 evaluates discriminant validity using the Fornell–Larcker criterion, which requires that the square root of the Average Variance Extracted (AVE) for each construct (diagonal values) should be greater than its highest correlation with any other construct (off-diagonal values in corresponding rows and columns). The results show that the square root of AVE for each construct (ranging from 0.81 to 0.85) is consistently higher than all inter-construct correlations, confirming that each latent variable shares more variance with its own indicators than with other constructs in the model. This establishes strong discriminant validity, meaning the constructs are empirically distinct and measure unique phenomena. For example, the strongest correlation is between behavioral intention and user satisfaction (r = 0.75), but the square root of AVE for both constructs (0.82) is higher, satisfying the criterion.
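The CR and AVE statistics in Tables 3 and 4 follow directly from the standardized loadings. The sketch below shows the computation with illustrative loadings (not the paper’s exact estimates) and the Fornell–Larcker comparison of sqrt(AVE) against inter-construct correlations.

```python
import numpy as np

def cr_ave(loadings):
    """Composite reliability and AVE from standardized CFA loadings."""
    lam = np.asarray(loadings, dtype=float)
    error_var = (1 - lam**2).sum()                 # summed item error variances
    cr = lam.sum()**2 / (lam.sum()**2 + error_var)
    ave = (lam**2).mean()
    return cr, ave

cr, ave = cr_ave([0.82, 0.85, 0.79, 0.88])         # illustrative loadings
print(f"CR = {cr:.2f}, AVE = {ave:.2f}, sqrt(AVE) = {ave**0.5:.2f}")

# Fornell-Larcker check: sqrt(AVE) of a construct must exceed its highest
# correlation with any other construct (e.g., 0.82 > r = 0.75 for BI/US).
```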
4.3. Testing the Structural Model: Regression Analyses
This section reports the hierarchical multiple regression analyses performed to test the hypothesized causal pathways in the research model. These tests are critical because they go beyond bivariate correlation to examine the unique and joint predictive value of the independent variables on the core endogenous constructs, providing a strict test of the theoretical framework and a nuanced understanding of what drives the platform’s acceptability and educational success.
Table 5 presents the results of hierarchical regression analyses testing the core propositions of the technology acceptance model (TAM) while controlling for demographic variables and incorporating an expanded theoretical framework. The results demonstrate that after accounting for control variables (user role and region), the addition of system quality, perceived ease of use (PEOU), and perceived usefulness (PU) in Model 2 explained a substantial proportion of variance in both user satisfaction (ΔR² = 0.58, p < 0.001) and behavioral intention (ΔR² = 0.61, p < 0.001). PU emerged as the strongest predictor of both satisfaction (β = 0.45, p < 0.001) and behavioral intention (β = 0.38, p < 0.001), supporting TAM’s central premise. The significant interaction effect (PEOU × PU → BI: β = 0.09, p < 0.05) in Model 2 for behavioral intention indicates that the relationship between ease of use and adoption intention is moderated by perceived usefulness, suggesting that PEOU becomes particularly important for driving usage intentions when users perceive the platform as highly useful. The negative coefficient for region in predicting BI (β = −0.09, p < 0.05) suggests that rural users reported slightly lower adoption intentions even after accounting for technology perceptions.
Table 6 presents the results of regression analyses examining both direct and indirect pathways through which technology acceptance variables influence educational outcomes. While perceived usefulness maintained significant direct effects on both engagement (β = 0.22, p < 0.01) and learning outcomes (β = 0.18, p < 0.05), the most important finding emerges from the mediation analysis. The bootstrap results for specific indirect effects reveal that both user satisfaction and behavioral intention serve as significant mediators in the relationship between technology perceptions (PEOU and PU) and educational outcomes, as indicated by 95% confidence intervals that do not contain zero. Particularly noteworthy is the finding that the total effects of PEOU on both engagement and learning outcomes are fully mediated through these pathways (direct effects β = 0.11 and 0.05, ns), while PU maintains both direct and mediated effects. This pattern suggests that while ease of use operates entirely through user attitudes and intentions, perceived usefulness exerts both direct influence on educational outcomes and additional indirect effects through satisfaction and adoption intentions. The stronger total effects for perceived usefulness pathways (e.g., PU → BI → LO: β = 0.13) compared to perceived ease of use pathways (e.g., PEOU → BI → LO: β = 0.06) highlight the primacy of utility perceptions in driving meaningful educational results.
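The bootstrapped indirect effects can be sketched as follows for a single mediator path (e.g., PEOU → US → LO); this is a simplified percentile bootstrap under the hypothetical df, not the full parallel-mediator model reported in Table 6.

```python
import numpy as np
import statsmodels.formula.api as smf

def boot_indirect(data, x="PEOU", m="US", y="LO", n_boot=2000, seed=42):
    """Percentile bootstrap 95% CI for the indirect effect a*b of x -> m -> y."""
    rng = np.random.default_rng(seed)
    effects = []
    for _ in range(n_boot):
        s = data.sample(len(data), replace=True,
                        random_state=int(rng.integers(2**32)))
        a = smf.ols(f"{m} ~ {x}", data=s).fit().params[x]        # x -> m
        b = smf.ols(f"{y} ~ {m} + {x}", data=s).fit().params[m]  # m -> y | x
        effects.append(a * b)
    lo, hi = np.percentile(effects, [2.5, 97.5])
    return lo, hi  # an interval excluding zero indicates mediation

print(boot_indirect(df))
```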
4.4. Group Differences: Urban vs. Rural Users
To directly address the research objective of examining the National Smart Education Platform’s role in empowering regional education, a series of independent samples t-tests were conducted to compare the experiences and perceptions of urban and rural users. This analysis is crucial for identifying potential digital divides and ensuring that the platform’s benefits are equitably distributed across different geographical contexts, thereby testing its effectiveness as a tool for mitigating educational disparities.
Table 7 reveals statistically significant disparities between urban and rural users across all seven measured constructs, with urban users consistently reporting more positive experiences (p < 0.05 for all comparisons). The largest differences, evidenced by medium-to-large effect sizes, were found in perceived ease of use (d = 0.78) and system quality (d = 0.64), indicating that rural users found the platform significantly more difficult to use and perceived its technical performance as less reliable. Medium effect sizes were observed for user satisfaction (d = 0.43) and behavioral intention to use (d = 0.41), suggesting that these usability and quality issues translate into lower overall satisfaction and a reduced willingness to continue using the platform among rural educators and students. While still statistically significant, the differences in perceived usefulness, engagement, and learning outcomes showed small effect sizes (d ranging from 0.23 to 0.33), implying that once the platform is used, its perceived educational value and impact are somewhat more comparable across regions, though still favoring urban areas.
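For reference, the reported Cohen’s d values follow the pooled-standard-deviation formulation, which can be computed as below (a generic helper, not tied to the paper’s SPSS output):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent groups, pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# e.g., cohens_d(urban_peou, rural_peou) -> approximately 0.78 (Table 7)
```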
4.5. Group Differences by User Role: Students, Teachers, and Administrators
To understand how the National Smart Education Platform serves the distinct needs and functions of its primary user groups, a one-way analysis of variance (ANOVA) was conducted to compare the perceptions of students, teachers, and administrators across all key variables. This analysis is vital for evaluating the platform’s differential impact and ensuring its design and functionality effectively support the unique workflows, goals, and requirements of each stakeholder group, thereby informing targeted improvements and professional development strategies.
Table 8 shows statistically significant differences (p < 0.01) across user roles for all seven variables, with small to medium effect sizes (η² = 0.02–0.08). The post hoc Tukey HSD tests identified that teachers had the least positive perceptions on most of the constructs. Teachers showed a much lower perceived ease of use than students and administrators (p < 0.01), indicating that they face greater usability barriers in their professional use of the platform. In addition, teachers and students both reported a much lower perceived usefulness than administrators (p < 0.01), suggesting that administrators, who tend to manage the implementation and reporting of the platform, find higher strategic value in it than those who learn or teach on it. Teachers also had significantly lower engagement than students (p < 0.01), which is in line with their lower satisfaction and intention to use.
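The η² effect sizes reported for the ANOVA follow the usual between-groups to total sum-of-squares ratio, sketched below for completeness:

```python
import numpy as np

def eta_squared(groups):
    """Eta-squared = SS_between / SS_total for a one-way ANOVA."""
    pooled = np.concatenate([np.asarray(g, float) for g in groups])
    grand_mean = pooled.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_total = ((pooled - grand_mean) ** 2).sum()
    return ss_between / ss_total

# e.g., eta_squared(groups) for the role comparison -> 0.02-0.08 (Table 8)
```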
4.6. Summary of the Contextualized TAM and Key Quantitative Findings
This study confirms the explanatory power of an extended technology acceptance model (TAM) for China’s National Smart Education Platform (NSEP). The measurement quality was robust (Cronbach’s α = 0.85–0.92), and overall user evaluations were positive (M = 3.83–4.31 on a 5-point scale), with perceived usefulness highest (PU: M = 4.31, SD = 0.53). Hierarchical regression showed that PU, perceived ease of use (PEOU), and system quality (SQ) jointly explained user satisfaction (ΔR² = 0.58, p < 0.001; PU β = 0.45, PEOU β = 0.28, SQ β = 0.19; all p < 0.001) and behavioral intention (ΔR² = 0.61, p < 0.001; PU β = 0.38, p < 0.001; PEOU β = 0.18, p < 0.01; SQ β = 0.12, p < 0.01), including a significant interaction (PEOU × PU → intention: β = 0.09, p < 0.05). However, sizable urban–rural disparities remained, especially for PEOU (urban 4.25 ± 0.52 vs. rural 3.79 ± 0.63; d = 0.78; p < 0.001) and SQ (4.12 ± 0.58 vs. 3.70 ± 0.69; d = 0.64; p < 0.001). Teachers also reported less favorable usability than students and administrators (PEOU: 3.80 ± 0.62 vs. 4.15 ± 0.55 and 4.22 ± 0.58; F = 22.47, p < 0.001), indicating that platform design is not yet fully aligned with pedagogical workflows (Figure 2).
5. Discussion
The findings indicate that the National Smart Education Platform can support more inclusive, technology-enabled learning while still reproducing uneven benefits across places and user groups, which directly matters for SDG 4’s mandate to “ensure inclusive and equitable quality education”.
Consistent with the technology acceptance model (TAM), perceived usefulness, perceived ease of use, and system quality underpin satisfaction and behavioral intention, and these downstream attitudinal factors then translate into higher engagement and learning outcomes. This is most clearly shown by the strong direct effects of perceived usefulness and behavioral intention and by the significant bootstrapped indirect paths through satisfaction and intention (Table 6), which aligns with previous studies (Alshammari & Babu, 2025) [38].
However, a critical perspective reveals that TAM, while predictive of individual intent, may be insufficient for explaining the persistence of structural inequalities in a national-scale digital education intervention. The model’s focus on individual psychological factors potentially de-contextualizes adoption by overlooking the systemic barriers embedded in the educational and infrastructural landscape (Porkodi, Khalil, & Tabash, 2024) [39]. Specifically, the persistent regional penalty (Region → BI: β = −0.09, p < 0.05) and the substantial urban–rural gaps, especially in perceived ease of use (d = 0.78) and system quality (d = 0.64), suggest that “access” alone is insufficient, aligning with evidence that digital inequality also emerges in usage quality and offline outcomes, i.e., a third-level digital divide (Porkodi, Khalil, & Tabash, 2024) [39].
This persistent disparity aligns strongly with evidence regarding the third-level digital divide, in which digital inequality emerges not merely in access (first level) or usage skills (second level) but in the quality of usage and the subsequent offline outcomes (Alqahtani et al., 2022) [40]. A highly relevant and recent Chinese-authored study reported that rural users, particularly frontline teachers, are disproportionately hindered by poor technical reliability and usability constraints (Wang, Qu & Huang, 2024) [41]. This is not just a matter of slower internet, but a reflection of the platform’s design and implementation not being robustly adaptable to diverse technological contexts.
The significant gap in perceived ease of use suggests that the cognitive load required to navigate and deploy the platform effectively is higher for users with less reliable infrastructure and potentially less specialized technical support. This difference in friction translates directly into the observed gaps in satisfaction and ultimately into learning outcomes.
From a sustainability perspective, these results imply that realizing SDG 4 (quality education) through national digital platforms requires parallel investments tied to SDG 9 (industry, innovation, and infrastructure) and SDG 10 (reduced inequalities). The findings reveal a direct interdependency: usability and technical reliability constraints, which are disproportionately borne by rural users and frontline teachers, weaken adoption intentions and limit the platform’s equalizing potential even when perceived educational value (usefulness) remains relatively high. At the policy level, following the official launch of China’s National Smart Education Platform in 2022, the Plan for the Overall Layout of Building a Digital China, released on 27 February 2023, explicitly called for building an “inclusive and convenient digital society” [42]. In the education domain, the plan emphasizes vigorously advancing the National Education Digitalization Strategy Action and improving the National Smart Education Platform, with “universal and inclusive digital public services” set as a key direction. In addition, a closely related supporting policy aimed at narrowing urban–rural gaps is the Digital Rural Development Action Plan (2022–2025), which highlights upgrading rural digital infrastructure and strengthening the digital capacity of public services to create enabling conditions for rural users’ “access and use” [43]. At the time of finalizing this study, China’s macro-level policy agenda had explicitly incorporated initiatives aimed at mitigating digital inequality; however, authoritative statistical evidence quantifying the extent to which these policies have achieved their intended outcomes remains limited.
From a normative and philosophical perspective, the NSEP reflects a typical logic of technological empowerment, namely the belief that large-scale digital infrastructure can expand educational opportunities in a more inclusive way. However, the findings of this study suggest that such empowerment does not automatically translate into substantive equity. The significant urban–rural gaps observed in system quality and perceived ease of use indicate that technology is far from value-neutral and that its outcomes are deeply shaped by the contexts in which it is embedded. In rural settings, limitations in network stability, device availability, and digital competence mean that platform access often results in what might be described as formal equality rather than outcome equality, a distinction examined across a range of social issues [3]. As noted earlier, at the level of national and international policy discourse, China has made deliberate efforts to improve both the hardware and software conditions of rural areas. Nevertheless, the issues raised by these policies point to the same conclusion reached in this study: the urban–rural gap has not yet been eliminated and continues to persist. The effectiveness of policy implementation and its translation into tangible educational outcomes still requires time and sustained observation to be properly evaluated. In this sense, the equity implications of the NSEP should not be evaluated solely in terms of whether users can access the platform, but rather in terms of whether the platform is capable of producing comparable educational returns across structurally unequal contexts. This points to the need for a notion of contextual justice, where platform design and policy implementation actively recognize and compensate for regional disparities instead of assuming that a uniform technological solution will yield uniform benefits.
At the level of practical implementation, the empirical results further show that while rural users report significantly lower perceived ease of use and system quality, the differences in perceived usefulness and learning outcomes are comparatively smaller. This suggests that once meaningful use is achieved, the educational potential of the platform can extend across regions. Accordingly, policy attention should shift from whether the platform has been deployed to how it is actually used in different local environments. For example, in rural areas, measures such as offline resource packages, low-bandwidth interface adaptations, and localized technical support could directly improve system performance and usability. At the same time, teacher-oriented, context-specific professional development is essential to align platform functions with everyday teaching practices [44]. Unlike urban optimization strategies that emphasize feature expansion, rural implementation may require simplification, sustained on-site support, and long-term capacity building. Only when the platform is embedded within local pedagogical routines, institutional constraints, and infrastructural realities can the National Smart Education Platform move beyond symbolic inclusion and function as a genuinely effective mechanism for reducing urban–rural educational inequality.
A critical policy implication is that platform-centric intervention risks becoming an elite technology unless significant resources are diverted from content development to context-specific infrastructure subsidies, on-site technical support, and tailored teacher professional development. Failure to address the root causes of the system quality and ease of use deficits in marginalized regions could inadvertently solidify existing educational hierarchies, transforming a tool for equity into a mechanism for reproducing digital poverty within the educational sphere. This calls for a shift in focus from mere deployment metrics to metrics that capture effective, equitable utilization.