Article

AccessiLearnAI: An Accessibility-First, AI-Powered E-Learning Platform for Inclusive Education

by
George Alex Stelea
*,
Dan Robu
and
Florin Sandu
Department of Electronics and Computers, Transilvania University of Brașov, 500036 Brașov, Romania
*
Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(9), 1125; https://doi.org/10.3390/educsci15091125
Submission received: 26 June 2025 / Revised: 26 August 2025 / Accepted: 28 August 2025 / Published: 29 August 2025

Abstract

Online education has become an important channel for broad, inclusive, and flexible learning experiences. However, significant gaps persist in providing truly accessible, personalized, and adaptable e-learning environments, especially for students with disabilities, varied language backgrounds, or limited bandwidth. This paper presents AccessiLearnAI, an AI-driven platform that converges accessibility-first design, multi-format content delivery, advanced personalization, and Progressive Web App (PWA) offline capabilities. Our solution complies with semantic HTML5 and ARIA standards and incorporates features such as automatic alt-text generation for images using Large Language Models (LLMs), as well as real-time summarization, translation, and text-to-speech. The platform, built on a modular MVC and microservices-based architecture, also integrates robust security, GDPR-aligned data protection, and a human-in-the-loop review process to ensure the accuracy and reliability of AI-generated outputs. Early evaluations indicate that AccessiLearnAI improves engagement and learning outcomes across a broad range of users, suggesting that responsible AI and universal design can coexist to advance equity in digital education.

1. Introduction

E-learning platforms are increasingly becoming a transformative tool to meet modern educational demands. Due to the rapid growth of digital technologies and Internet access, online education has become a practical solution to provide diverse resources to students and to serve different needs.

1.1. Context and Motivation

The importance of online education has become especially manifest in times of crisis such as public health emergencies, economic recessions, armed conflicts or natural disasters, all of which can disrupt the conventional education system. During crises, virtual learning environments contribute to the continuity of education, allowing students to access educational content without being constrained to be present in a specific location (Mukherjee & Hasan, 2022). In addition, the e-learning paradigm is particularly important in the process of continuous professional training, as it allows workers to acquire or update the knowledge and skills needed in the labor market (Arredondo-Trapero et al., 2024).
In the context of online platforms, education is not limited to specific geographical areas or time zones; therefore, students have the opportunity to choose from a range of practical subjects that would otherwise be unavailable. Also, asynchronous study opportunities and the availability of recorded courses make it possible for students to personalize their learning experiences in a way that is adaptable to their learning speed and availability.
In addition, online learning fosters inclusion by providing different learning approaches, such as audio material, transcripts, subtitles and adaptive tests, all of which address personalization of content and/or disability-related needs. Inclusive education is an approach that ensures all learners, regardless of ability, background, or circumstance, can fully participate and succeed in the same learning environment (Wray et al., 2022). It emphasizes transforming the education system to meet the diverse needs of students, rather than expecting students to adapt to a standard model. This includes addressing the needs of those with disabilities, language barriers, or socioeconomic disadvantages.
Central to inclusive education is the principle of equity and access. It promotes removing barriers to learning through flexible teaching methods, adaptable materials, and supportive technologies. Closely tied to Universal Design for Learning (UDL) (CAST, 2024), inclusive education encourages varied ways of presenting information and engaging students, ensuring content is accessible from the start.
The goal is not only academic success but also social inclusion (Khamzina et al., 2024). By fostering a sense of belonging and ensuring that all students are valued, inclusive education builds more equitable and effective learning environments where diversity is seen as a strength.
Despite their importance, conventional face-to-face teaching practices often struggle to accommodate diverse learning needs, curricula, physical abilities, and cognitive profiles. Traditional school-based approaches rely largely on standardized instruction, while e-learning environments can use adaptive learning technologies to personalize content based on the learner’s level of proficiency and preferences (Shafique et al., 2023). This is particularly helpful for students with disabilities, whose specific needs the traditional system cannot always meet. To address these challenges, assistive technologies, such as screen readers, speech recognition tools, and alternative input devices, can enhance accessibility and empower individuals to interact more effectively with educational content.
Similarly, working adults and parents often face obstacles if they want to follow a rigid study schedule. Online education is their opportunity to earn degrees and certifications and participate in various professional development courses, even with a busy work or personal life agenda. The freedom to learn at their own pace, view materials whenever they want, and take part in online discussions ensures that the educational process continues, regardless of the difficulties a person may face. As e-learning continues to expand, it is important to realize its potential and create more inclusive, accessible, and personalized learning experiences that can provide equal access and learning conditions for students from all demographic categories.
The urgency of embedding accessibility in digital learning has sharpened because the European Accessibility Act (EAA), Directive (EU) 2019/882 (European Union, 2019), became fully enforceable on 28 June 2025 (AccessibleEU, 2025). From that date, any new consumer-facing ICT product or service placed on the EU market, including e-learning platforms, learning-management systems (LMSs) and the digital course materials they deliver, must meet the Act’s functional accessibility requirements or face fines, withdrawal orders, or reputational damage (Recite Me, 2025).
Beyond mere compliance, the EAA positions accessibility as a design baseline rather than a charitable add-on, acknowledging that around 87 million EU residents have some form of disability and that an aging population will soon make accessible interfaces the norm (European Commission, 2025).

1.2. Proposed Solution and Contributions

This paper presents AccessiLearnAI, a novel AI-enhanced e-learning platform that was designed and developed to improve the accessibility, personalization, and adaptability of content. Our contributions address key gaps in existing e-learning systems by integrating artificial intelligence (AI) and progressive web application (PWA) technologies (Google Developers, 2025a), in compliance with web accessibility standards. The main contributions of our work are:
  • Accessibility-Driven Content Structuring: We propose an AI-powered content organization system that automatically incorporates HTML5 semantic elements and ARIA attributes (World Wide Web Consortium [W3C], 2025b) to improve accessibility. Unlike many current e-learning platforms that incorporate accessibility support into the system as an afterthought, our system ensures from the outset that content is organized and structured for screen readers and adaptive technologies. A “human-in-the-loop” validation system allows teachers to optimize accessibility improvements generated by AI, thus ensuring high-quality implementation and usability.
  • AI-Based Personalization and Adaptive Learning: Our framework uses AI-based techniques to generate multi-level summaries, automatic alternative text for images, real-time text-to-speech (TTS) (Reddy et al., 2023), and dynamic translation. This set of features enables personalized learning experiences that adapt to students’ cognitive preferences and accessibility needs. Unlike previous studies that focus on isolated applications of these technologies, we integrate them into a unified system that improves student engagement and comprehension.
  • Semantic Enhancement and Offline Accessibility: We adopted Progressive Web Application (PWA) technology to provide seamless, offline access to educational content while maintaining optimal interactivity. Our platform ensures that accessibility features, such as screen reader support and AI-enhanced content structuring, remain functional and available even in offline mode. This contribution is particularly relevant for students in low-connectivity environments, enabling uninterrupted learning experiences.
  • Ethical Data Handling and Privacy Compliance: Our platform incorporates strong privacy measures aligned with GDPR (European Parliament & Council, 2016) and other data protection regulations. We address concerns about student data security, transparency around AI decisions, and bias mitigation by ensuring ethical implementation of AI. The system provides explainable AI (XAI) (Dwivedi et al., 2023) feedback mechanisms that allow students and teachers to understand and control how AI-based adaptations are used.
Through these contributions, our research addresses current gaps in AI-based e-learning by providing an integrated, accessibility-first approach that improves student inclusion and engagement. Our unified approach to AI-based content adaptation, accessibility enforcement, and PWA-based offline support constitutes a novel contribution to inclusive digital education.

1.3. Research Questions

To make the aims of this study explicit, we set the following research questions (RQs):
  • RQ1 (Architecture & Feasibility). Can an accessibility-first, AI-augmented e-learning platform integrate semantic HTML5/ARIA scaffolding, AI-generated alternative text, multi-level summarization/translation, text-to-speech, and offline PWA capability into a cohesive, usable system that meets the demands of higher education?
  • RQ2 (Accessibility & Usability in Practice). To what extent does such an integrated approach improve practical accessibility and user experience for diverse learners—including blind/visually impaired users—relative to mainstream LMS baselines, as reflected in screen-reader compatibility, keyboard-only navigation, and perceived ease-of-use?
  • RQ3 (Pedagogical Utility of AI Outputs). How useful and reliable are AI-generated summaries and image descriptions when mediated by a human-in-the-loop workflow for teachers?
RQ1 is addressed through the system design and implementation analysis (Section 3); RQ2 through the formative accessibility/usability testing and the comparative benchmarking against established platforms (Section 5 and Section 6); and RQ3 through the two-step expert review of AI-generated outputs (Section 6).

1.4. Paper Organization

The following sections of this paper are organized as follows: Section 2 reviews related work on AI-based personalization, accessibility, and e-learning, and identifies the research gap that motivates this study. Section 3 presents the architecture and implementation of the AccessiLearnAI platform, including its accessibility-focused front end, modular back-end, integrated AI services, offline PWA capabilities, and privacy and security measures. Section 4 describes the workflows for different user roles, showing how teachers and students interact with the system and how AI features enhance accessibility and personalization. Section 5 provides a comparative analysis between AccessiLearnAI and existing e-learning platforms, highlighting its advantages in accessibility, adaptability, and AI-driven features. Section 6 discusses the broader pedagogical and design implications of combining accessibility-first software development with real-time AI personalization. Section 7 outlines current limitations, including the limited scale of evaluation, reliance on third-party AI services, and partial support for diverse disability profiles. Section 8 concludes the paper and proposes directions for future development, such as large-scale validation, integration of additional accessibility features, and domain-specific extensions.

2. Related Work and Research Gap

The evolution of e-learning systems has given rise to numerous innovations in personalization, accessibility, and AI integration. However, most current platforms address these aspects in isolation, without combining them into a unified and inclusive learning experience. With inclusive education emerging as a key objective in global digital learning environments, there is increasing pressure to go beyond mere technical compliance and develop platforms that intelligently adapt to the diverse needs, abilities, languages, and contexts of learners. The following sections review existing work related to personalization, accessibility, AI-powered tools, and scalable architectures, highlighting where current solutions fall short and where new approaches, such as the one proposed in this study, can contribute.

2.1. Personalized E-Learning and AI-Driven Accessibility

In e-learning platforms, personalization has been identified as a key research element. Sanchez-Gordon et al. (2021) introduced a new model for profiling users with disabilities in e-learning in order to adapt interfaces for e-learning systems, highlighting the importance of meeting these diverse needs. Digital learning systems such as DreamBox Learning (DreamBox, 2024) and Smart Sparrow (2025a) demonstrate the benefits of machine learning algorithms that adapt content and difficulty based on performance values (Airaj, 2024). Kazimzade et al. (2019) present the intersection of adaptive learning technologies and inclusion, highlighting the fact that these areas are often not combined. Extensive accessibility elements are rarely incorporated into online learning platforms or are only incorporated through static processes and integrated later in their development. The comprehensive learning frameworks described in articles (Chen et al., 2020; Sri Ram et al., 2024) highlight the need for e-learning platforms to seamlessly integrate advanced AI-based personalization with universal design principles (such as text simplification, TTS, or alternative text generation) to achieve a truly inclusive experience.
At the same time, accessibility and inclusion perspectives remain underexplored in most current e-learning platforms. Although compliance with standards such as WCAG 2.1 (World Wide Web Consortium [W3C], 2018) and Section 508 (U.S. General Services Administration, 2025) exists, they are often only partially implemented in learning management systems (LMS) and are often tested as checklists. Tiwary and Mahapatra (2022) present a study on generating alternative text for images in e-learning platforms using AI, presenting their solutions and challenges with a focus on visually impaired users. Tools such as Blackboard Ally (2025) or specialized plugins for the Moodle platform (Moodle, 2025a) have capabilities to automatically check alternative text or color contrast, but rarely integrate AI to optimize the context of the alternative text or to dynamically adapt to the reading levels of users (Murtaza et al., 2022). This scenario is not optimal and can be problematic for students who require more than minimal compliance, especially those who require TTS or real-time speech synthesis (Liu et al., 2023).
Recent work has begun to operationalize AI-driven descriptive enrichment directly inside user interfaces, demonstrating how automated generation of semantically rich, context-aware descriptions can lower cognitive and perceptual barriers for visually impaired users in complex data environments. Stelea et al. (2025a) showed that AI-generated descriptions can significantly improve accessibility in complex interfaces for visually impaired users. Their approach demonstrates how such features can be integrated from the start, rather than treated as optional add-ons. Building on this insight, our platform incorporates alt-text suggestion, validation, and caching directly into the authoring workflow, ensuring accessibility evolves alongside content personalization. This alignment of AI-driven adaptation and inclusive design sets the foundation for a more responsive and equitable e-learning experience.

2.2. AI Tools for Summaries, Translation, and TTS

Summarization: Recent advances in deep learning have made content summarization more context-aware (Crompton & Burke, 2023; Acosta-Vargas et al., 2024a), which is essential for supporting diverse reading styles. However, many e-learning platforms still rely on either user-generated summaries or rudimentary automatic ones.
Translation: Neural machine translation (NMT) can convert entire lessons, or individual sub-sections, in real time, although domain-specific technical language and low-resource languages still present challenges (Klimova et al., 2023). Pairing LLM-based translation with teacher supervision can address inaccuracies and domain jargon.
Text-to-Speech (TTS): TTS systems encourage inclusion by removing visual or reading barriers by offering multilingual support and adjustable pacing (Liu et al., 2023; Liew et al., 2023; Hillaire et al., 2019). These studies confirm the ability of TTS to improve engagement of mobile learners or those who prefer auditory learning.

2.3. PWA-Based E-Learning

Unlike native apps, PWAs rely on offline caching mechanisms, push notifications, and near-instant installability across multiple platforms, factors that e-learning systems can significantly benefit from (Nugraha et al., 2022). This type of approach is favorable for learners with low bandwidth or who are located in rural areas, allowing them to store AI-generated lessons and course sections (such as summaries or TTS files) offline. After the device reconnects to the Internet, the automatic synchronization mechanism manages updated content, such as new or revised alternative text or extended teacher notes. This type of design helps to achieve more inclusive, robust, and device-independent solutions for large-scale deployment (Huber et al., 2021).
In conclusion, while significant progress has been made in personalization, content transformation, and accessibility, these advances have rarely been combined into a cohesive whole. The following section outlines the specific gaps that persist when these elements remain decoupled.

2.4. Identified Gaps

Although e-learning platforms have greatly improved the ability to deliver education from anywhere, at any time, many obstacles still prevent a comprehensive and inclusive digital learning experience. Based on the analysis presented above, we have identified four main gaps: limited personalization, poor accessibility, language barriers, and unfulfilled AI potential. These need to be addressed in a system-wide, comprehensive way.
  • Limited Personalization. Many current e-learning platforms still offer static and uniform content to learners without taking into account their environment, level of prior knowledge, or different learning speeds (Murtaza et al., 2022; Gligorea et al., 2023). Therefore, beginning learners may be overwhelmed by the level of the material, while advanced learners may feel disengaged. Such a one-size-fits-all approach often fails to optimize learner engagement and can frequently lead to high dropout rates.
  • Deficient Accessibility. Acosta-Vargas et al. (2024b) highlight major accessibility deficiencies in generative AI tools, citing poor screen reader support, inadequate keyboard navigation, and low contrast as basic impediments to the inclusion of users with disabilities. They also highlight challenges such as the lack of transparency regarding AI and inclusive training data, calling for the need to develop proactive development strategies to ensure compliance with ethical and regulatory standards for digital inclusion. Despite the existence and promotion of W3C guidelines and legal mandates such as Section 508 or EN 301 549, e-learning platforms still have many gaps in inclusive design. Common deficiencies include the lack of alternative text for images, inadequate ARIA roles, limited text resizing, or insufficient keyboard navigation (Acosta-Vargas et al., 2024a). Because students with disabilities (visual, auditory, or motor impairments) require more than superficial compliance, these gaps often reduce participation and increase dropout rates among them. The lack of or limited implementation of additional features, such as sign language overlays or real-time TTS for any text, significantly reduces the level of inclusive education.
  • Language Barriers. The globalization of online education has increased the demand for multilingual courses, but many online learning platforms still offer, sometimes only partial, translation capabilities (Jónsdóttir et al., 2023). Students from non-English speaking backgrounds often face major obstacles in understanding domain-specific terminologies or examples that are embedded in different cultures. Although machine translation technologies are improving every day, they rarely adapt to the nuances of the local language in specialized domains—e.g., medical or engineering jargon—failing to generate optimal clarity and understanding of the content (for instance, instead of “electromagnetic field” machine translation could produce “electromagnetic plain”).
  • Unfulfilled AI Potential. Although research in the field of artificial intelligence in education (AIED) has produced advanced techniques and methods (Chen et al., 2020), such as automatic question generation, real-time text summarization, or adaptive reading level adjustments, commercial LMSs typically do not integrate them at scale. As a result, many students cannot benefit from AI-based dynamic personalization, multilingual content summarization, or advanced assistance tools.
When we consider personalization, AI-based transformation, multi-language support, TTS, semantic markup, and even offline use as factors that must be present jointly, we notice that typical e-learning systems do not implement them or implement them in a fragmented manner (Ingavelez-Guerra et al., 2023; Timbi-Sisalima et al., 2022). A unified, fully integrated approach capable of combining advanced personalization with accessible and robust design remains elusive. Although some platforms have incorporated either accessibility features or AI capabilities, few have sought to integrate both in order to deliver a truly inclusive educational experience for all learners.
To our knowledge, no existing system addresses all of these issues together, which defines the gap our work fills. The platform combines proven methods (knowledge tracing, recommendations, LLM-based tasks) with strong accessibility features and offline support through PWA technology. This ensures that tools such as summarization, text-to-speech, and alt-text generation remain available even in areas with limited Internet access, making the platform usable by all learners: beginners, advanced students, and those with disabilities.

3. Architecture and Implementation

AccessiLearnAI is an AI-enhanced e-learning platform designed to make online education more accessible, personalized, and adaptable. The system architecture is built prioritizing accessibility, ensuring that all students, including those with disabilities such as visual, hearing, or cognitive impairments, can use educational content in the most effective way.

3.1. System Overview

The platform enables teachers to create and adapt course material with AI assistance, while students benefit from on-the-fly content transformations such as summaries, audio files, and translations. Key features include automatic structuring of content for screen readers, AI-generated image descriptions, text summaries at different difficulty levels, real-time translation, and offline access through Progressive Web App (PWA) technology. By using AI to adapt both content creation and delivery to user needs, the platform improves accessibility and student engagement compared to traditional e-learning systems.
From a technical point of view, the AccessiLearnAI platform is developed using a layered architecture, which separates responsibilities into separate components (Stelea et al., 2025b): a responsive front-end user interface (with offline PWA support), a back-end application layer built on an MVC (Model–View–Controller) web framework, an AI integration layer (connecting to external AI services), and a data storage layer (relational database with caching), as shown in Figure 1.
The front-end interface supports accessibility-first rendering, integrating HTML5 landmarks, ARIA roles, and real-time toggles for text-to-speech, language translation, or contrast adjustments. Content authored or uploaded by teachers is parsed and automatically enhanced with structural and semantic annotations, reducing the burden on educators who may not be accessibility experts. This aligns with universal design principles, ensuring that content adapts not only to user preferences, but also to assistive technology requirements.
The AI integration layer plays a central role in providing dynamic personalization. For example, students can request summaries tailored to different reading levels or have complex paragraphs simplified. Image uploads are scanned and matched with context-aware, AI-generated alt text. In multilingual environments (Kirss et al., 2021), AI-based translation services allow students to consume materials in their preferred language. All AI interactions are designed to be transparent, with fallback mechanisms and manual override options to maintain pedagogical control.
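The fallback behavior described above can be illustrated with a short sketch (in JavaScript, with hypothetical names; the platform's actual API differs), in which the request degrades to a naive extractive summary when the injected AI summarizer fails:

```javascript
// Hypothetical sketch of graceful degradation for AI summaries.
// The AI summarizer is injected, so it can be stubbed in tests or
// bypassed when the external service is unreachable.
function naiveExtractiveSummary(text, maxSentences = 2) {
  // Fallback: keep the first few sentences of the source text.
  const sentences = text.match(/[^.!?]+[.!?]+/g) || [text];
  return sentences.slice(0, maxSentences).map((s) => s.trim()).join(' ');
}

async function summarizeWithFallback(text, aiSummarize) {
  try {
    return { summary: await aiSummarize(text), source: 'ai' };
  } catch (err) {
    // Transparent degradation: flag that the AI service was unavailable
    // so the UI can offer a manual override.
    return { summary: naiveExtractiveSummary(text), source: 'fallback' };
  }
}
```

Labeling the result with its `source` keeps the adaptation transparent to students and teachers, in line with the manual-override design above.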
The architecture is scalable, secure, and flexible, allowing institutions to adopt it fully or step by step. Unlike traditional LMSs that use static checklists or manual adjustments, AccessiLearnAI builds accessibility and personalization into the system itself. This reduces exclusion and promotes equity in digital learning.

3.2. Core Architectural Components

The platform architecture is organized into multiple layers and modules, each responsible for specific aspects of the system’s functionality. Using this modular design (illustrated in Figure 1), the main goal is to ensure clarity in the separation of responsibilities and that the system can be maintained, scaled, and improved in the future. Below we detail the core components of the architecture.

3.2.1. Front-End (User Interface & PWA Layer)

The front-end is the layer of the platform that users interact with, implemented as a web application that emphasizes responsive design and accessibility compliance. It was developed using the HTML5 and WAI-ARIA semantic standards (World Wide Web Consortium [W3C], 2025c) to structure content in a way that allows assistive technologies (such as screen readers) to navigate it appropriately. Content pages must use proper semantic tags (<header>, <nav>, <article>, etc.) and ARIA roles/labels to support visually impaired users. This markup helps screen readers convey structure and navigation clearly, following the WCAG 2.1 (World Wide Web Consortium [W3C], 2025a) accessibility guidelines.
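To make the scaffolding concrete, the deliberately simplified, string-based sketch below (illustrative only; the platform annotates a parsed DOM rather than raw strings) wraps lesson content in the landmark elements and ARIA attributes mentioned above:

```javascript
// Simplified, illustrative sketch: wrap lesson content in semantic
// landmarks so screen readers can announce and jump between regions.
// The real pipeline operates on a parsed DOM, not string concatenation.
function wrapInLandmarks(title, navHtml, bodyHtml) {
  return [
    '<header role="banner"><h1 id="lesson-title">' + title + '</h1></header>',
    '<nav role="navigation" aria-label="Lesson navigation">' + navHtml + '</nav>',
    '<main role="main">',
    '  <article aria-labelledby="lesson-title">' + bodyHtml + '</article>',
    '</main>',
  ].join('\n');
}
```

The explicit landmark roles and the `aria-labelledby` link between article and heading are what allow a screen reader to announce "main region, Lesson navigation" rather than an undifferentiated wall of markup.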
The front-end uses Tailwind CSS (Tailwind CSS, 2025) for responsive design, although other frameworks, such as Bootstrap (2025), could achieve the same result. Tailwind offers utility classes for spacing, typography, and colors, along with built-in responsive prefixes (sm:, md:, lg:). This ensures the platform looks clear and works well on both smartphones and laptops, with flexibility for custom elements. Figure 2 shows the student dashboard on mobile and desktop, highlighting its clean and responsive design.
JavaScript (ES6) and jQuery (2025) are the main tools used to achieve front-end interaction and Document Object Model (DOM) manipulation (MDN Web Docs, 2025a), while AJAX (Asynchronous JavaScript and XML) (MDN Web Docs, 2025b) is used to send and retrieve data asynchronously. While modern frameworks such as React or Vue.js could have been used, we chose the jQuery library for implementation due to its simplicity and rapid development capabilities. Interactivity using this library focuses on minimal updates to dynamic content (e.g., fetching a paragraph summary or triggering text-to-speech on a button click) without the overhead of a full single-page application framework.
The Front-end layer and the Back-end layer are connected using RESTful endpoints (Ehsan et al., 2022) (via AJAX), for example to upload content or to retrieve a custom summary. All requests are sent over secure HTTPS to ensure that data is protected during transfer (as detailed in Section 3.4). In addition, the front-end layer is enhanced as a progressive web application (PWA). This means that the platform can be installed as a mobile app and also offers offline functionality.
Through a PWA manifest file (manifest.json), we added metadata such as the app name, icons, and theme colors, and included a service worker script (Google Developers, 2025b). A service worker caches essential assets so users can access pinned lessons offline, with automatic synchronization once reconnected. This is particularly beneficial for learners in environments with low bandwidth or intermittent connectivity, aligning with research highlighting offline access as significant for e-learning especially in rural areas (Tate & Warschauer, 2022; Lakshmi et al., 2025). Overall, the front-end/PWA layer provides an accessible, device-independent user experience, acting as a bridge between users and the platform’s intelligent services.
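The caching behavior can be sketched as the routing decision a service worker's fetch handler might make (simplified and illustrative; a real service worker would wrap this logic in the Cache and Fetch APIs, and the path conventions here are assumptions):

```javascript
// Illustrative sketch of a service worker's routing decision:
// static lesson assets are served cache-first so pinned lessons work
// offline, while API calls go network-first so fresh data wins online.
function chooseStrategy(url) {
  const path = new URL(url).pathname;
  if (path.startsWith('/api/')) return 'network-first';
  if (/\.(html|css|js|png|svg|mp3)$/.test(path)) return 'cache-first';
  return 'network-first';
}
```

Under this split, a pinned lesson page and its TTS audio remain available offline, while summaries or grades requested through the API are refreshed whenever a connection exists.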

3.2.2. Back-End Application Logic (MVC Layer)

The AccessiLearnAI back-end is built on a robust web framework that follows the MVC architecture (we used Laravel (Laravel, 2024), but any modern MVC framework could serve). This layer acts as the central point of the platform, responsible for authentication, authorization, and business logic, and coordinates the user interface, database, and AI services. By adhering to MVC principles, we ensure a clear separation between data models, application logic, and presentation templates, which improves the maintainability and scalability of the code.
Key server-side components include:
  • Controllers: Act as intermediaries that manage requests (e.g., file upload, summary generation) by invoking services and enforcing role-based access policies. The MVC framework provides built-in security features like session management and input validation (e.g., preventing SQL injection (Nasereddin et al., 2021) or cross-site scripting (XSS) (Weamie, 2022) by design through prepared statements and output escaping).
  • Services/Managers: More complex processes are handled by dedicated service classes, which encapsulate tasks involving AI or external APIs. For example, an “AIContentService” class handles all tasks related to AI-generated content: it receives raw lesson text, processes it, and communicates with AI APIs to produce summaries, simplified versions, or accessibility suggestions. This structure keeps controllers thin and easy to test, since services can be mocked during unit tests.
  • View Templates: Dynamic HTML pages are built with Laravel’s Blade templates, which combine static structure with AI-generated content. This ensures that semantic tags and ARIA annotations are correctly applied in the final rendering. Templates are reusable (e.g., headers, footers) and help maintain consistent accessibility features across all pages.
The back-end logic also handles the application’s caching operations and other background tasks. For example, when a teacher requests AI enhancements for a larger portion of text, the work is dispatched to a background queue to avoid blocking the web request cycle, and the teacher receives a notification when the task is completed (e.g., via real-time alert or email). Similarly, if a large number of students request the same resource, the server can serve cached results (e.g., a previously generated summary), reducing repeated computation.
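The cache-or-enqueue behavior described above can be sketched in a few lines (a simplified Python illustration, not the platform’s Laravel code; the function names and in-memory stores are hypothetical stand-ins for Laravel’s cache and queue facilities):

```python
import hashlib
import queue

# In-memory stand-ins for the platform's cache and background job queue.
cache: dict = {}
jobs = queue.Queue()

def cache_key(lesson_text: str, task: str) -> str:
    """Key results by task type and a hash of the source text."""
    digest = hashlib.sha256(lesson_text.encode()).hexdigest()
    return f"{task}:{digest}"

def request_enhancement(lesson_text: str, task: str = "summary"):
    """Serve a cached result immediately; otherwise enqueue a background job."""
    key = cache_key(lesson_text, task)
    if key in cache:               # repeated request -> no recomputation
        return cache[key]
    jobs.put((key, lesson_text))   # a worker processes it; the teacher is
    return None                    # notified when the job completes

def worker_step(run_ai_task) -> None:
    """One iteration of a background worker draining the queue."""
    key, text = jobs.get()
    cache[key] = run_ai_task(text)  # store for subsequent requests
```

The first request returns nothing and enqueues the task; once a worker has run, every later identical request is served from the cache.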
At the back-end level, security is enforced and ensured by middleware, which authenticates each request and authorizes actions based on user roles (Yamani et al., 2022). In addition, some sensitive operations (such as resetting a password or deleting content) require specific privileges associated with them. Additional functions such as logging important events and AI interaction are also created at the back-end level. For example, when a teacher accepts or edits AI-generated content, the back-end records how it was generated. These logs are important for auditability and can also be used to further improve the AI model.

3.2.3. AI Integration Layer

The central novelty of the platform is the AI integration layer, which manages the connection between the main application and external AI services in order to enable content adaptation and personalization and to improve accessibility. In the application architecture, this layer can be considered a controller/service dedicated to managing calls to AI models and processing their outputs. It consists of two main AI subsystems: Natural Language Processing (NLP) (Egger & Gokce, 2022) (for text analysis, summarization, translation, etc.) and TTS (for audio generation). These subsystems are external services rather than components embedded in the platform; the AI integration layer handles the details of their communication with the rest of the application.
In order to use a large language model (LLM) (Yao et al., 2024) from the Generative Pre-Training Transformer (GPT) (Yenduri et al., 2024) series, we integrated the OpenAI API (OpenAI, 2025) for NLP tasks; compatibility with other models was also foreseen in the design. The LLM is used to generate summaries from text, simplify text to a specified level, or produce alternative text for images. For example, when a teacher uploads a document with a lesson or course, the AI integration layer takes the raw text and sends the LLM an instruction such as “Analyze the following lesson text and identify key sections, then suggest appropriate HTML5 structural tags and ARIA roles for each section.” The AI-generated response is returned in JavaScript Object Notation (JSON) format, defining the placement of headings, sections, and accessibility markup, which the platform embeds into the content for the teacher to approve. Similarly, for student requests, when a student clicks “Summarize”, the integration layer sends an instruction such as “Summarize the following content in a concise, student-friendly manner (approximately 200 words): [lesson text].” Figure 3 presents the structured algorithm used by the system to request and process AI-generated summaries, based on the input lesson content and the specified summary level. It outlines the construction of the prompt, the invocation of the AI service, response validation, post-processing steps, and optional caching for efficiency.
The EstimateMaxTokens() function configures the maximum token limit dynamically, based on the chosen summary length. AIService.CallAPI() represents the external interaction with a language model such as OpenAI’s GPT series. The temperature parameter controls the randomness of the generated output; lower values, such as 0.3, encourage more deterministic responses. Post-processing steps such as Sanitize(), EnforceReadingLevel(), and StripPII() ensure the summary is clean, readable, and privacy-safe. Caching via Cache.Store() is optional but recommended to avoid redundant API calls and improve performance. The integration layer manages AI prompts and processes results asynchronously, allowing users to continue working while tasks complete in the background.
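As an illustration of this flow, the sketch below mirrors the pseudocode of Figure 3 in Python, with the AI call injected as a parameter so it can be mocked. EnforceReadingLevel() is omitted for brevity, and all concrete values (token budgets, the regular expression used for PII stripping) are illustrative assumptions rather than the platform’s actual settings:

```python
import re

def estimate_max_tokens(level: str) -> int:
    """Map the requested summary length to a token budget (illustrative values)."""
    return {"short": 150, "medium": 300, "detailed": 600}[level]

def sanitize(text: str) -> str:
    """Minimal cleanup of the raw model output."""
    return text.strip()

def strip_pii(text: str) -> str:
    """Redact e-mail addresses as a minimal privacy-safety step."""
    return re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[redacted]", text)

def summarize(lesson_text: str, level: str, call_api, cache: dict) -> str:
    """Build the prompt, call the model, post-process, and cache the result."""
    key = (level, lesson_text)
    if key in cache:                          # optional caching step
        return cache[key]
    prompt = (f"Summarize the following content in a concise, "
              f"student-friendly manner ({level}): {lesson_text}")
    raw = call_api(prompt,
                   max_tokens=estimate_max_tokens(level),
                   temperature=0.3)           # low temperature -> deterministic
    summary = strip_pii(sanitize(raw))
    cache[key] = summary                      # Cache.Store() in the figure
    return summary
```

Because call_api is a parameter, the expensive external request can be replaced by a stub during unit tests, matching the testability argument made for the service classes above.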
The Text-to-Speech (TTS) module of the AI layer is handled in a similar fashion. When the user requests an audio version of a text (for example, a lesson section or summary), the integration layer calls an external TTS service. For our implementation, we chose Google Cloud Text-to-Speech (Google Cloud, 2025), but other services like Amazon Polly (Amazon Web Services, 2025a) could also be integrated. The text and the desired voice/language parameters are sent to the service, which returns an audio stream or file (e.g., an MP3). The integration layer then forwards the audio file to the front-end for playback. To optimize performance, the platform can cache the generated audio file on the server or on the user’s device (via PWA caching) so that later playback does not require calling the TTS API again (especially if many students request the same audio or one student replays it).
If AI results are delayed or unavailable, the system retries or shows a fallback message. Critical content (e.g., alt text) always undergoes human review to ensure quality. This human-in-the-loop (Mosqueira-Rey et al., 2023) checkpoint ensures that AI suggestions enhance content but do not degrade it with errors or biases.
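The retry-then-fallback behavior can be expressed as a small helper (a hedged sketch; the retry count and fallback message are illustrative, and the real platform would additionally log failures and distinguish error types):

```python
import time

def with_retry(ai_call, fallback_message: str, retries: int = 2, delay: float = 0.0):
    """Retry a flaky AI call a few times; return a fallback message on failure."""
    for attempt in range(retries + 1):
        try:
            return ai_call()
        except Exception:
            if attempt < retries:
                time.sleep(delay)   # back off briefly before retrying
    return fallback_message         # shown to the user instead of blocking
```

A transient failure is absorbed by the retries; a persistent outage degrades gracefully to the fallback message, so the lesson itself remains usable.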
Architecturally, the AI integration layer can be scaled independently if needed. In a high-load scenario, requests to the external AI service can be routed through a separate layer of microservices or serverless functions dedicated solely to AI tasks, allowing the application server to focus on standard web requests. This modular design also means that the platform can change AI providers or integrate new AI models in the future with minimal effort and few changes to other parts of the system, for example, using an on-premises open-source LLM for summarization (for privacy reasons) while continuing to use a cloud API for TTS.

3.2.4. Database and Storage Layer

All persistent data is managed in the database and storage layer. The platform uses MySQL, an open-source relational database management system (MySQL—Oracle Corporation, 2025), to maintain structured data about users, content, and interactions, although other database alternatives such as MariaDB (MariaDB Foundation, 2025) could be used. The schema includes tables such as: Users (with roles, profiles, and login credentials), Lessons (storing metadata and references to lesson content), LessonContent or Sections (the actual text of the lessons, possibly split into sections for easier retrieval), and AIGeneratedAssets (AI-generated content such as summaries, translations, and alt text, together with quality and feedback metadata). Managing the data in this way ensures that the platform knows which AI-generated content belongs to which lesson and which user preferences to apply. Uploaded files are converted to text for storage, with images saved separately and linked to AI-generated alt text. A service worker syncs the offline cache with the main database, ensuring consistency.
In the database, the system secures all sensitive data (such as passwords and personal information) by encrypting or hashing it. We take advantage of the security features already implemented in the framework for password hashing (we use Bcrypt or Argon2 in Laravel (Laravel, 2025a)), and any personal information that is not needed by the system is either not stored or is encrypted. The database layer is also designed to support multi-tenancy: if the platform is deployed for different schools or courses, data can be partitioned or labeled by institution or course, ensuring that the data of one entity is isolated from that of another.
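A minimal, illustrative subset of such a schema is shown below using Python’s built-in SQLite driver for portability (the production platform uses MySQL, the actual tables carry many more columns, and any names beyond those mentioned in the text are assumptions):

```python
import sqlite3

# In-memory SQLite stands in for MySQL purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Users (
    id INTEGER PRIMARY KEY,
    role TEXT NOT NULL CHECK (role IN ('student', 'teacher', 'admin')),
    password_hash TEXT NOT NULL,          -- never the plain-text password
    institution_id INTEGER                -- supports multi-tenant partitioning
);
CREATE TABLE Lessons (
    id INTEGER PRIMARY KEY,
    teacher_id INTEGER REFERENCES Users(id),
    title TEXT NOT NULL
);
CREATE TABLE AIGeneratedAssets (
    id INTEGER PRIMARY KEY,
    lesson_id INTEGER REFERENCES Lessons(id),
    kind TEXT NOT NULL,                   -- e.g., 'summary', 'alt_text'
    content TEXT NOT NULL,
    approved INTEGER NOT NULL DEFAULT 0   -- human-in-the-loop approval flag
);
""")
```

The foreign keys tie each AI-generated asset back to its lesson, and the institution_id column illustrates how multi-tenant partitioning can be expressed at the schema level.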

3.2.5. External Services and APIs

External services are third-party systems or APIs that the platform uses to provide capabilities beyond the functionality of the application core. The two main external integrations in AccessiLearnAI are the AI content service (LLM API) and the Text-to-Speech service, as presented in the previous AI Integration section. In addition, several complementary external services are integrated:
  • Authentication and Identity (optional): Although the platform has its own user database, it can also integrate an external Single Sign-On (SSO) provider (such as Google OAuth (Google Developers, 2025c) or a university login system) to allow users to authenticate with existing accounts. This choice is configured based on the deployment needs or technical requirements. For our prototype, we used local authentication, but the architecture allows configuring an external OAuth service.
  • Notification Services: Because the platform’s workflow requires sending notifications (for example, emailing a teacher when AI processing of their content is complete, or sending students reminders), specialized external email APIs (such as SendGrid (2025)) or push notification services are recommended. The PWA’s push notifications are integrated using the Firebase Cloud Messaging service to notify users and deliver updates to their devices when new content is available or AI-generated results are ready.
  • Analytics and Monitoring: To track and monitor the application’s performance, external analytics services and components can be integrated, such as Google Analytics (Google Developers, 2025d) for user behavior analysis (with appropriate privacy measures) or tools like New Relic (New Relic, 2025) for server performance monitoring. These tools are not part of the core learning capabilities, but they serve as auxiliary features that help operate and maintain the platform.
The most important external services are the AI and TTS APIs. The server handles the access to these services via secure web requests. The API keys or credentials are stored securely on the server (in environment variables) and are never exposed on the client side. The architecture of the platform anticipates the possibility that these external services could be a point of latency or failure, so the calls are isolated, ensuring that an error does not block the entire application. The platform is designed in such a way that in case of external service unavailability (e.g., the AI API), core functionality continues (e.g., students can still access the original content even if, for example, the summarization tasks are temporarily unavailable).
For the implementation of the platform we connect to two well-known cloud services, OpenAI (GPT) for text tasks and Google Cloud Text-to-Speech for audio. Because these services sit behind a small “adapter” micro-service, a school or university can swap them at any time for cheaper or locally hosted options (e.g., an open-source model or an on-premises TTS engine) without touching the rest of the platform. This design lets each deployment choose the mix that best meets its budget, privacy rules, speed requirements, or ethical guidelines.
Depending on outside vendors, however, carries risks: prices can rise, an API can go down, or a model might return biased or incorrect answers (“hallucinations”). To guard against this, the platform (1) shows the original, unsummarized content if an AI call fails, (2) stops non-essential requests once a monthly cost ceiling is hit, (3) lets teachers run simple checks on AI output before students see it, and (4) can automatically fall back to lightweight local models for critical tasks. These safeguards keep lessons flowing and costs predictable while still giving institutions the freedom to pick the AI tools that suit them best.
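The adapter idea and safeguards (2) and (4) can be sketched as follows (hypothetical class and function names; a real adapter micro-service would also track actual spend and report errors):

```python
class TTSProvider:
    """Adapter interface: any provider (cloud or on-premises) implements speak()."""
    def speak(self, text: str) -> bytes:
        raise NotImplementedError

class CloudTTS(TTSProvider):
    """Stand-in for a cloud service; here it always fails to simulate an outage."""
    def speak(self, text: str) -> bytes:
        raise RuntimeError("simulated API outage")

class LocalTTS(TTSProvider):
    """Stand-in for a lightweight local engine used as the fallback."""
    def speak(self, text: str) -> bytes:
        return text.encode()

def speak_with_fallback(text, primary, fallback, spent=0.0, ceiling=100.0):
    """Route to the primary provider unless the budget is hit or the call fails."""
    if spent < ceiling:
        try:
            return primary.speak(text)
        except Exception:
            pass                       # fall through to the local model
    return fallback.speak(text)
```

Because callers only see the TTSProvider interface, a deployment can swap the cloud engine for an on-premises one without touching the rest of the platform, which is exactly the flexibility argued for above.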
Figure 1 (the architecture diagram, presented above in the System Overview subchapter) illustrates how these external services are integrated in the overall system: the AI and TTS services are shown as separate entities that interface with the back-end’s AI Integration layer. Communication flows are indicated as arrows in the diagram and because external services have a distinct integration into the architecture, future changes can be made with minimal impact on the rest of the system.

3.3. AI and Personalization Features

One of the standout features of AccessiLearnAI is its rich set of AI-powered features that enable real-time personalization and adaptability of learning content. Together, these features address diverse types of student needs by adapting and transforming content into different difficulty levels and formats. Below, we present these key AI-powered features and how they enhance the learning experience:
  • Adaptive Content Summarization: The platform provides on-demand text summarization at different levels (short, intermediate, or detailed) using an integrated LLM API. Students or teachers can request summaries tailored to their needs, whether for quick review or in-depth exam preparation. Summaries are generated in real time, can be adjusted or regenerated for different focuses, and help adapt content to both advanced and beginner students. This dynamic approach addresses the lack of context-aware summarization in traditional e-learning. All summaries are clearly marked as AI-generated, while students are encouraged to verify them against the full content.
  • Reading Level Adjustment and Content Rewriting: Teachers can use AI to adapt content for different reading levels and learning needs by simplifying complex texts, adding details and examples, or rephrasing content for clarity. While students mainly use summarization, teachers can apply this adaptation feature when preparing lessons. The AI can suggest simpler versions of difficult sections, which teachers then refine (human oversight). This helps address diverse student needs and overcomes the limitations of one-size-fits-all content.
  • Automated Alternate Text for Images: An important feature of the platform is AI-generated alt text. When a teacher adds an image, the system creates a description using the LLM. For example, when a biology lesson contains an image of a cell, the AI might produce alt text like “Diagram of a cell showing labeled organelles including the nucleus, mitochondria, and cell membrane”. This suggestion is then displayed to the teacher for approval or editing. This practice addresses a common accessibility barrier in educational environments, where many resources lack descriptive alt text because of the extra effort required to write it manually. Our approach, inspired by works like Tiwary and Mahapatra (2022) who explored AI-generated image descriptions, integrates alt-text creation into the content workflow. Students with visual impairments or those using screen readers benefit from having these descriptions available for every image.
  • Real-Time Language Translation: The AccessiLearnAI platform aims to make lessons easier to understand for students in multilingual settings. A student can select any available language (for the prototype, major languages such as English, Spanish, and French) and access complete lessons in the language of their choice. Lessons are translated using an LLM or a neural machine translation service, with attention to context and technical terms. For example, an English computer science lesson can be rendered in Spanish while keeping accurate terminology. Students can learn in their native language, while teachers can review or refine translations for accuracy. Translations are generated in real time, cached for better performance, and can be improved through human review. This feature supports inclusive design, broadens course access, and helps address gaps in domain-specific or less-resourced languages.
  • Text-to-Speech (Audio Learning Mode): The platform’s Text-to-Speech feature lets students listen to any content—whether a sentence, summary, or full lesson—in a natural voice. This benefits visually impaired learners, students with reading difficulties, and those who prefer audio learning, such as during commutes. Adjustable playback speed and, where possible, voice selection are supported to better suit students’ preferences. Studies have shown that TTS functionality can improve engagement for mobile learners and those who favor auditory processing (Jafarian & Kramer, 2025). TTS is integrated seamlessly: when a student clicks ‘Listen to lesson,’ the audio is streamed or generated and then played. Thanks to caching in the PWA, once an audio file is created, it can be replayed offline. This audio mode improves accessibility, supports students who struggle with on-screen reading, and reinforces learning by allowing reading and listening together.
  • User Preference Personalization: The platform also provides a more personalized experience by allowing users to specify their preferences. For example, if a student always opts for “detailed” summaries, the interface can default to “detailed” mode for that student. If another student finds a specific font size or color contrast better suited to their visual needs, this information can be stored and applied at login (this is more an accessibility UI feature than AI, but it complements the AI features). Additionally, the platform’s adaptive features learn from feedback: if a student rates AI-generated content (a summary or translation) as not helpful, the system saves this feedback and can adjust, by altering the generated content or recommending that the student request a different level of detail. As data accumulates over time, the AI can better tune itself or select the best strategy for different users (e.g., very literal summaries might suit some users, while others might require more abstract ones).
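The preference-default and feedback-adjustment ideas in the last bullet can be sketched as follows (an illustrative heuristic only; the platform’s actual adaptation logic is not specified at this level of detail, and the preference keys are assumptions):

```python
# Platform-wide defaults applied when a user has not chosen otherwise.
DEFAULT_PREFS = {"summary_level": "medium", "font_size": "normal", "language": "en"}

def effective_prefs(stored: dict) -> dict:
    """Merge a user's saved preferences over the platform defaults at login."""
    return {**DEFAULT_PREFS, **stored}

def record_feedback(prefs: dict, feature: str, helpful: bool) -> dict:
    """Nudge the default summary level upward when a student rates summaries
    as not helpful (a simple illustrative heuristic)."""
    if feature == "summary" and not helpful:
        order = ["short", "medium", "detailed"]
        i = order.index(prefs.get("summary_level", "medium"))
        prefs["summary_level"] = order[min(i + 1, len(order) - 1)]
    return prefs
```

Stored preferences override defaults, and repeated negative feedback gradually shifts the student toward more detailed summaries.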
In summary, the integration of these AI and personalization features transforms static course content into a dynamic, interactive learning experience. All these adaptations are orchestrated by the underlying architecture described in Section 3.2, but from the user’s perspective, they simply have a richer set of controls (buttons like “Summarize”, “Translate”, “Listen”, etc., or recommended links to click). This optimized and unified mix of AI-driven content adaptation and user-centric controls is one of the novel aspects of AccessiLearnAI and it brings together capabilities that are often seen in isolation in other systems (e.g., some platforms might offer translation but not AI summarization, others offer TTS but not user preference personalization). By offering these features within a unified platform and ensuring their seamless integration, we significantly enhance personalization and accessibility, directly supporting the contributions outlined in our introduction.

3.4. Security and Privacy Considerations

Developing an educational platform that handles user data and AI processing requires a strong focus on security and privacy. The layered architecture of AccessiLearnAI was implemented according to privacy-by-design and security-by-design principles, ensuring that user data is protected at all levels and that the use of AI does not compromise ethical standards or regulatory compliance. This section describes how the layered architecture and the implementation of the platform ensure data security, privacy (including GDPR compliance), and ethical AI usage (Corrêa et al., 2023).
Data Encryption and Secure Communication: All communication is secured with HTTPS, while sensitive data such as passwords are hashed with strong algorithms (Laravel’s default Bcrypt/Argon2 with salts (Adams, 2025)) and never stored in plain text. API keys (the OpenAI API key and TTS service credentials) are stored securely in server-side configuration (environment variables) and are never revealed to the client side. The front end never connects directly to external AI services; all requests go through the back end for secure control of credentials. User input is monitored, with only essential data stored, while logs are minimized, sanitized, and free of personal identifiers. By applying web security best practices (e.g., input validation, output escaping, prepared statements), the system is protected against common threats like XSS and SQL injection. Optional two-factor authentication (2FA) (Tirfe & Anand, 2022) can add an extra layer of account security for logins.
User Data Privacy and GDPR Compliance: AccessiLearnAI complies with the EU General Data Protection Regulation (GDPR) and other privacy regulations, thus effectively protecting users’ data rights. Consequently, we only collect the data we need to serve the platform (the data minimization principle). For example, we may record a user’s language preference to enable automatic translation selection, but we never store personal details that are not relevant to learning. At registration (or on first use of AI features), a consent form explicitly informs users about the data that may be sent to external AI services (e.g., “Content you input may be processed by our AI engine to generate summaries or translations”). By ensuring this transparency, we build user trust and make sure users are fully aware of how the platform works. Users are also empowered to delete their data: the platform offers a “Delete Account” feature that erases the user’s personal data, their uploaded content, and any associated AI-generated assets from the system (with the exception of some anonymized logs that may be kept for statistical or system-integrity purposes, in line with GDPR). Data retention policies are in effect: usage logs and analytic data are periodically reviewed and purged when no longer needed, decreasing the likelihood of long-term privacy breaches. For example, detailed logs of AI operations used to improve the system are anonymized or deleted after analysis.
Ethical AI Usage and Bias Mitigation: Using AI in education requires strong ethical standards, with fairness, transparency, and human oversight as key principles. Sensitive personal data is never used, ensuring that AI-generated content and recommendations are free from bias. All AI outputs are shown alongside the original text and can be edited or removed by teachers or students, keeping humans in control. Teachers act as “human-in-the-loop”, approving or correcting AI-generated content before it reaches students. Students can also compare summaries or outputs with the original material and flag issues. To support transparency, the system records AI prompts and responses, allowing teachers to trace decisions and identify potential bias, consistent with Explainable AI (XAI) practices.
We also take concrete actions to minimize AI bias in content. Input data for large AI models is prone to bias, so we provide guidelines and usage instructions to counteract it. For example, in the summary generation option, we instruct the AI to use inclusive and diverse contexts (avoiding a single cultural perspective). Over time, user feedback on AI outputs (collected through feedback mechanisms in the student and teacher interfaces) provides important information for improving the system. Even if the system itself cannot automatically recognize problematic patterns, these comments, which are retained for later use, allow modification requests and post-processing techniques to refine AI-generated content over time. An AI audit log is maintained internally, recording instances of AI output that were corrected or replaced by users. This helps developers adapt and improve the platform, giving teachers confidence that the AI is being monitored and improved.
Human-in-the-loop content review process: To ensure that AI-generated content such as summaries or image descriptions is accurate, appropriate, and free from bias, AccessiLearnAI uses a structured review system that combines automated checks with human oversight. First, the platform automatically screens all content for possible issues by running accessibility validation (WCAG 2.1/HTML5/ARIA checks for heading order, contrast, required alt text, and ARIA roles) and data-loss-prevention detection (e-mail addresses, phone numbers, student IDs). If any concerns are detected, the content is sent to two human reviewers (typically the course instructor and either another teacher or an instructional designer), who review it in a simple interface where they can approve, edit, or request a new version. This triggered review happens before the content is ever shown to students. In addition, each week an accessibility specialist and a member of the development team review a random sample of approved content to look for bias patterns or recurring accuracy problems. If any content is found to be inaccurate or biased at any stage, it is removed, revised, and used to improve the system in the future. This layered approach helps maintain high quality standards while making the platform suitable for use in real educational settings, even at larger scales.
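The automated screening step can be approximated with simple pattern checks (illustrative regular expressions only; the student-ID format is a hypothetical example, and a production DLP scanner would be considerably more robust):

```python
import re

# Categories of personal data the screening step looks for.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b(?:\+?\d[\d\s-]{7,}\d)\b"),
    "student_id": re.compile(r"\bSTU-\d{6}\b"),  # hypothetical ID format
}

def dlp_scan(text: str) -> list:
    """Return the categories of personal data detected in AI-generated content."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def needs_human_review(text: str) -> bool:
    """Any DLP hit routes the content to the two-reviewer queue."""
    return bool(dlp_scan(text))
```

Content with no hits can flow straight to the normal approval path, while flagged content is held back for the triggered review described above.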
Role-Based Access and Content Security: AccessiLearnAI uses a policy-based access control system in which roles such as student, teacher, or administrator are linked to specific permissions defined by the platform’s security and content policies. In a multi-user educational platform, students should only have access to the information assigned to them, and the system enforces access according to each user’s role. For example, students cannot view lessons in draft mode or edit any content, and teachers cannot access other teachers’ course materials unless these are directly shared. Likewise, students’ private information is accessible only to teachers who have the right to access it, not to unauthorized teachers or other students. Files uploaded by teachers are scanned for viruses and malware (using an antivirus integration on the server) to ensure that no harmful or malicious content is introduced.
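A deny-by-default, policy-based check of this kind can be sketched as follows (the roles and actions come from the text; the policy table itself is an illustrative simplification of the platform’s access rules):

```python
# (role, action) -> predicate deciding whether the user may act on the lesson.
POLICIES = {
    ("teacher", "edit_lesson"): lambda user, lesson: lesson["owner"] == user["id"],
    ("student", "view_lesson"): lambda user, lesson: lesson["published"],
    ("admin", "edit_lesson"):   lambda user, lesson: True,
}

def is_allowed(user: dict, action: str, lesson: dict) -> bool:
    """Deny by default; allow only when an explicit policy grants the action."""
    rule = POLICIES.get((user["role"], action))
    return bool(rule and rule(user, lesson))
```

Because unknown (role, action) pairs fall through to a denial, adding a new role or action never accidentally widens access.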
In addition to standard technical safeguards such as encryption, role-based access, and GDPR-aligned data storage, ethical AI (Huang et al., 2023) systems in education should incorporate robust governance frameworks that ensure transparency, accountability, and user trust. Best practices include appointing a data protection and ethics lead responsible for maintaining a live register of personal data flows, conducting Data Protection Impact Assessments (DPIAs) (Kasirzadeh & Clifford, 2021) prior to deploying new features, and implementing clear, user-friendly consent mechanisms with ongoing opportunities for learners to manage their data rights. Regular training exercises and scenario planning can reinforce privacy-by-design as an active and evolving organizational commitment.
To mitigate algorithmic bias and support equitable learning experiences, platforms should integrate social and procedural safeguards alongside technical review processes. This may involve conducting Algorithmic Impact Assessments (AIAs) (Metcalf et al., 2021) that evaluate model performance across diverse user groups, publishing accessible “model cards” with findings, and maintaining a risk register reviewed by an independent ethics board. Feedback mechanisms should allow users to flag biased or inaccurate outputs in real-time, with trends used to guide prompt refinement or dataset diversification. Finally, periodic third-party audits can help verify that interventions are effective and that systems are continually improving in fairness, inclusivity, and accountability.
By ensuring secure data handling, privacy compliance, and ethical AI, our platform aims to be not only powerful, but also a trusted one. In a context where minors may be involved and personal data could be sensitive, this assurance of security and privacy is of significant importance. By aligning our approach with the recommendations for responsible data governance in educational technologies, we demonstrate that advanced AI functionality can be delivered without compromising user privacy and security.

3.5. Scalability and Performance Optimization

For AccessiLearnAI to support a growing user base and maintain low-latency AI-driven features, the platform is designed for scalability and performance. Even though the presented implementation serves as a proof of concept, it has been developed with large-scale deployment in mind.
Horizontal Scaling and Load Distribution: The stateless nature of the back end allows multiple application server instances to run behind a load balancer, so traffic can be distributed optimally among servers. Sessions are saved either in cookies or in a shared store such as Redis (2025), rather than in server memory, guaranteeing that any request can be handled by any server. The database can be scaled both vertically (with more powerful hardware) and horizontally (with read replicas) to balance intensive tasks with normal workloads, ensuring smooth operation as more users join.
Caching Strategies: The platform uses caching extensively to reduce repetitive tasks and unnecessary load on the database. When AI generates content (e.g., lesson summaries or TTS audio files), it is cached so that repeated requests can be served instantly. Cached entries are invalidated when content changes: for example, when a teacher updates a lesson, any cached summaries or translations of that lesson are removed, as they may be outdated. In the case of TTS, once an audio file has been generated for a text fragment, it is saved and reused. This not only optimizes system performance, but also reduces cost when the external AI service charges per request. In addition, the PWA cache makes the user interface faster by keeping lesson data locally, and static assets (CSS, JS, and images) are also cached for faster access and offline use.
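The invalidation rule, i.e. dropping every derived asset when its source lesson changes, can be sketched as follows (an in-memory stand-in for the platform’s cache store; the key layout is an assumption):

```python
# (lesson_id, asset_kind) -> derived content such as a summary or translation.
cache: dict = {}

def store(lesson_id: int, kind: str, content: str) -> None:
    cache[(lesson_id, kind)] = content

def get_cached(lesson_id: int, kind: str):
    return cache.get((lesson_id, kind))

def invalidate_lesson(lesson_id: int) -> None:
    """Drop every derived asset (summaries, translations, audio) for a lesson."""
    for key in [k for k in cache if k[0] == lesson_id]:
        del cache[key]
```

Keying by lesson lets a single teacher edit sweep away all stale derivatives of that lesson while leaving other lessons’ caches untouched.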
Asynchronous and Batch Processing: Complex AI requests, such as a teacher uploading a large document and asking the AI to process it, are handled in the background; once the process is complete, the user is notified and can view the result. Another example is the case where several students request the same summary: the system processes it once, saves the result, and serves it to the other students. A future improvement could include predictive computing, in which AI summaries and translations are generated in advance during periods of low traffic. For example, overnight, when few students are online, the AI server could pre-compute summaries and translations so that they are ready in the morning.
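The process-once pattern for identical concurrent requests is commonly implemented as “single-flight” deduplication; a minimal Python sketch (not the platform’s actual queue implementation) is:

```python
import threading

_results: dict = {}
_inflight: dict = {}
_lock = threading.Lock()

def get_summary(key: str, compute) -> str:
    """Compute each summary once; concurrent requests wait for the first result."""
    with _lock:
        if key in _results:            # already computed -> serve from cache
            return _results[key]
        if key in _inflight:           # someone else is computing it
            event, leader = _inflight[key], False
        else:                          # we are the first requester
            event = _inflight[key] = threading.Event()
            leader = True
    if leader:
        _results[key] = compute()      # the single expensive AI call
        event.set()                    # wake any waiting requesters
        return _results[key]
    event.wait()
    return _results[key]
```

However many students ask for the same summary, the expensive AI call runs exactly once; everyone else either waits for the first result or is served from the cache.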
Content Delivery Network (CDN): Additionally, a CDN can be added to optimize performance by serving static content such as videos, images, and TTS audio files. In this way, the latency issue can be solved by having the nodes serving the content closer to the users’ location, offloading bandwidth from the main servers. The modular architecture allows for seamless integration with Cloud storage solutions such as AWS S3 + CloudFront (Amazon Web Services, 2025b).
Optimizing AI Requests: API calls to the AI services are structured to minimize redundant data transmission. Session-based AI requests reduce overhead by avoiding resending the full context for repeated queries, and when multiple AI tasks are required for the same content, the requests are grouped together where possible. While AI processing is currently server-based for consistency and security, future implementations could experiment with on-device AI for lightweight tasks, such as text readability analysis.
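Grouping several AI tasks for the same content into one payload might look like the following sketch; the task names and payload schema are invented for illustration and do not reflect a specific provider's API:

```python
def build_batched_request(lesson_id, lesson_text, tasks):
    """Group several AI tasks on the same content into one payload,
    so the lesson text is transmitted once rather than once per task.
    Hypothetical schema for illustration only."""
    allowed = {"summary", "alt_text", "simplify", "structure"}
    unknown = set(tasks) - allowed
    if unknown:
        raise ValueError(f"unsupported tasks: {sorted(unknown)}")
    return {
        "lesson_id": lesson_id,
        "content": lesson_text,        # sent once, shared by all tasks
        "tasks": sorted(set(tasks)),   # deduplicated task list
    }
```

The server would dispatch one such request per lesson instead of one per enhancement, cutting both transfer volume and per-call fees.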
Laravel’s wide adoption in large-scale academic information systems gives us confidence that the framework can keep pace when thousands of learners access the platform at once. A comparative load-and-stress study found that, although its raw execution times were higher than CodeIgniter’s in light workloads, Laravel’s response-time stability actually improved as the test ramped up to 2000 simulated users—evidence that the framework copes well with increasing concurrency once proper caching and queue workers are enabled (Niarman et al., 2023). Newer versions of Laravel make the platform even more efficient. Features like the Concurrency Facade in Laravel 12 (Laravel, 2025b) allow the system to handle many tasks at once, especially long-running ones such as calls to external services. Tools like Laravel Octane (Laravel, 2025d) and Horizon (Laravel, 2025c) help the platform run continuously in the background without slowing down, which means it can handle periods of high user concurrency or intensive system activity, like the start of a school day or a large online course, without changing how the main system works. To keep the platform fast even when many classes log in at once, AccessiLearnAI is broken into small, self-contained micro-services. All micro-services run in lightweight containers and share a MySQL database tuned for speed: extra read-only replicas handle reporting, while one write node saves new data (Sasmoko et al., 2024). When a new school term starts, administrators simply deploy additional container instances; there is no need to change code or schedule downtime. This keeps response times steady and costs predictable, whether the platform serves one classroom or thousands of learners.
When external Large Language Models (LLMs) are utilized, the system performance is generally adequate for small- to medium-scale deployments. Observed response times typically fall within the range of 0.5 to 1 s, either in terms of first-token latency or per-token generation latency (Dilmegani, 2025). This level of responsiveness is considered sufficient for the present use case, particularly given that data is not regenerated upon every request due to the Progressive Web Application’s (PWA) caching mechanisms. For larger-scale deployments, such as those within a university setting, an on-premises implementation may be a more suitable alternative (Sagi, 2025). This approach offers potential long-term cost benefits and enhanced data privacy, as all information remains within the institutional infrastructure rather than being processed and stored on external cloud platforms.
Through these combined and unified approaches, AccessiLearnAI is built to scale from a small classroom pilot to large institutional deployments. Initial benchmarks suggest that the architecture efficiently supports multiple simultaneous AI requests, with caching significantly reducing response times. Future work includes load testing with thousands of concurrent users and profiling potential bottlenecks, ensuring that the platform remains scalable, responsive, and adaptable as it grows.

4. Workflow for Different User Roles and Interaction Model

To illustrate how the platform works in practice, we describe the typical workflow for the two main user roles: Teachers and Students. Each role interacts with the platform through steps that draw on the architectural and AI features described earlier. Teachers follow a workflow with human-in-the-loop oversight, while student interactions are simpler and guided by disclaimers that highlight the AI-generated nature of content.

4.1. Teacher Workflow

Step 1—Login and Authentication: The teacher logs in with their secure credentials. Once authentication is validated, the system recognizes the user’s role as a teacher and grants access to content creation and analysis features that students do not have. If two-factor authentication is enabled for the account, the teacher completes that step here as well, adding a security code to the login process, as shown in Figure 4.
The teacher is presented with a dashboard displaying their courses or lessons after logging in and is given the option to create new content or manage existing content, as shown in Figure 5.
Step 2—Uploading Course Content: In the authoring interface, teachers can type content or upload files (text, PDF, Word). The system extracts the text, saves it in draft form, and detects images as placeholders. At this point, the source document has been uploaded to the platform but not yet augmented by AI.
Figure 6 presents the interface where the teacher uploads a lesson, showing a screenshot of the content editor before AI suggestions are requested and applied.
Step 3—AI-Assisted Content Enhancement: The teacher can then invoke the AI tools to enhance the lesson’s accessibility and pedagogical quality. Typically, the interface provides buttons or menu options such as “Generate Alt Text for Images”, “Simplify Language”, or “Suggest Summary”. Depending on the teacher’s request, some or all of the following actions are performed:
Alt Text Suggestions: For every image lacking a description, the system prompts the AI to generate an alternative text suggestion. These suggestions are shown beside the image in editable text fields, allowing teachers to review and refine the AI-generated alt text. Figure 7 shows an example where the AI suggested “Alt: A donut-style pie chart displaying the estimated usage share of web browsers in 2025. Chrome leads with 62%, followed by Safari at 19%, Edge at 7%, Firefox at 5%, Samsung Internet at 4% (highlighted in pink), and Others at 3%. Each section is color-coded, with labels and percentage values displayed on the chart.”
Content Structuring and Markup: The platform can analyze the text to recommend inserting semantic breaks or ARIA landmarks. For example, if the uploaded text was a continuous block, the AI could detect natural sections or headings (“Introduction”, “Conclusion”, etc.) and suggest splitting the content and adding appropriate headings. It might also propose an outline if the content lacks one. In practice, the system could highlight segments of the text and indicate “Suggest <section> here with aria-label=‘Introduction’” or similar. The teacher can accept these to automatically apply proper HTML5 tags around those segments in the stored content. Additionally, the AI receives the blueprint of the HTML page where the lesson will be presented inside the designated container. This allows the semantic markup recommendations to be extended beyond the raw lesson text to the entire HTML content within the lesson template. Figure 8 shows an example of how the teacher’s content editor integrates AI-powered semantic markup recommendations.
Language Simplification or Expansion: Teachers can request simplified or expanded versions of text. The original remains visible alongside AI suggestions, ensuring transparency and teacher control. Figure 9 shows a screenshot of the teacher’s content editor displaying AI-generated simplification suggestions.
Quiz/Summary Generation (optional): Teachers may optionally generate summaries or quiz questions, which can then be edited before use. This feature is experimental but shows potential for future teaching support.
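As a rough illustration of the structuring step described under “Content Structuring and Markup”, the following sketch emits semantic-markup suggestions from a deliberately naive heading match. The real platform delegates section detection to an LLM; the function name, heading list, and output format here are hypothetical:

```python
def suggest_sections(paragraphs, heading_words=("Introduction", "Conclusion")):
    """Naive illustration of the structuring step: treat paragraphs that
    exactly match known heading words as section boundaries and emit
    HTML5 <section> suggestions for the teacher to accept or reject.
    (The platform itself uses an LLM for this detection.)"""
    suggestions = []
    for i, p in enumerate(paragraphs):
        text = p.strip()
        if text in heading_words:
            suggestions.append({
                "position": i,  # paragraph index where the section starts
                "markup": f'<section aria-label="{text}"><h2>{text}</h2>',
            })
    return suggestions
```

Each suggestion corresponds to the “Suggest &lt;section&gt; here with aria-label=…” prompt shown to the teacher, who decides whether the tags are applied to the stored content.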
As the AI processes the data, the teacher is kept informed of the suggested improvements being made. Thanks to asynchronous handling, the user interface remains interactive: while waiting for the AI output, the teacher can edit other parts of the lesson. As soon as the AI produces its recommendations, they appear in a separate area of the screen, side by side with the original text. This allows for easy revision and maintains transparency.
Step 4—Review and Finalize Content: After using the AI tools, the teacher takes a final look at the lesson, making sure that all AI contributions (alt text, structural markup, rephrased sentences) are appropriate, fit the context, and serve the intended teaching purpose. This review stage is crucial because it applies the principle of human oversight to maintain the quality and accuracy of the content. The teacher can make any manual changes they want during this final step. They can also add metadata such as the lesson title, tags, or difficulty level, which supports student-facing recommendations and search.
A teacher does not have to be an accessibility expert to evaluate AI-generated suggestions for improving accessibility. To make this process easier, we have created a preview lesson feature that allows teachers to use, test, and evaluate the accessibility improvements. Using browser plugins such as the WAVE Web Accessibility Evaluation Tool (WebAIM, 2025), which analyzes the lesson’s structure and highlights accessibility issues like missing alt text, contrast problems, and ARIA attributes, or Chrome Screen Reader (ChromeVox—Google, 2025), which simulates how visually impaired students experience the content by reading it aloud, teachers can easily check the effectiveness of the accessibility enhancements before accepting or modifying them. Once satisfied, the teacher publishes the lesson. Figure 10 shows a screenshot of the preview lesson feature being tested for accessibility using the WAVE Web Accessibility Evaluation Tool plugin in Google Chrome.
Step 5—Post-Publish Monitoring and Updates: Once published, lessons appear on the dashboard with analytics on access and feedback. Teachers can monitor student progress, refine AI-generated adaptations, and create personalized learning paths while retaining full pedagogical control.
In summary, the teacher workflow captures how teachers can efficiently create high-quality, accessible teaching materials with the help of artificial intelligence, while maintaining control over the educational content. The platform’s design eliminates the complexity of implementing accessibility and personalization, allowing teachers to focus on pedagogy. The result is a well-structured lesson that is accessible to all learners and easily adaptable to meet their individual needs.

4.2. Student Workflow

Step 1—Login and Personalized Dashboard: A student logs in to the platform with their credentials (optionally using single sign-on, if configured). After logging in, the system identifies the user role as a student and displays the student dashboard. This dashboard is personalized: it displays the courses the student is enrolled in and/or the lessons recently added by their teachers. The dashboard also highlights any content available offline (if the student has previously pinned some lessons, an icon indicates that these can be opened without Internet). If the student has unstable connectivity, the application still loads using cached data and indicates that features requiring the Internet (such as new AI requests) may be limited until the student is back online. Figure 11 presents a screenshot of the student dashboard after login, displaying the courses the student is enrolled in and the personalized content.
Step 2—Accessing a Lesson and Navigating Content: The student selects a lesson to study. The request is sent to the server, which retrieves the lesson content (including all structured HTML code, images with alternative text, etc.) from the database. Thanks to the teacher’s earlier AI-assisted preparation, the student sees a well-organized, accessible content page. The lesson is displayed with clear titles and sections, and the student’s browser or assistive technology can navigate it easily (for example, a screen reader will announce the sections and allow switching between them thanks to the markup). If the student is online, the content is loaded from the server’s database in real time; if the student is offline and the lesson was previously cached (pinned), the service worker serves it from the cache.
Step 3—On-Demand Summaries: If the student is struggling to understand a long passage, they can click the Summarize button for that section (or for the whole lesson). A prompt allows them to choose the length and detail of the summary (or default to a medium-length summary), as shown in Figure 12.
The request is sent to the back-end (via an AJAX call) with the lesson/section identifier and desired detail level. The AI Integration layer handles it, generating the summary unless a cached one is readily available. After a brief moment, the summary text appears below the original text, in a separate container that makes it easy to distinguish. A disclaimer is clearly displayed alongside the summary, ensuring that students understand the need for verification and critical review of the generated content, as shown in Figure 13: “This summary was generated by AI to assist with comprehension. While we strive for accuracy, please review the content and verify key information as needed. If any details seem unclear, refer to the original text or consult your teacher.”
Students can adjust or regenerate summaries as needed, always alongside the original text for comparison. This promotes active reading and critical engagement.
Step 4—Language Translation: Students can switch lessons into another language using the translation feature. The system provides either a stored translation or generates one with AI, keeping technical terms in English if no accurate equivalent exists. This way, learners can study in the language they prefer.
Figure 14 presents a screenshot of the translation feature, where the student selects a target language, while Figure 15 shows the lesson content that was dynamically updated.
If offline, translations require going online unless they were cached previously. The design could also pre-fetch common languages for important material if a significant share of users needs them.
Step 5—Audio Playback (Text-to-Speech): At any time, the student can choose to listen to the lesson content. TTS audio can be streamed live or played instantly from cache if pre-generated, and students may also download audio for offline listening. Students hear the text narration and can pause, seek, or adjust the playback speed using a media player interface. Figure 16 illustrates the three-step audio generation process.
This is helpful not just for visually impaired students, but also for those who want to rest their eyes or learn on the go (one could imagine a student listening to a lesson on their smartphone while commuting). With the PWA, a student could even download the audio file of a lesson for later offline listening.
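The save-and-reuse behavior for generated audio described above can be sketched as content-addressed caching, where the same text and voice always map to the same stored file and are synthesized only once. The path layout and parameter names below are purely illustrative:

```python
import hashlib

def tts_cache_path(text, voice="default", fmt="mp3"):
    """Derive a stable, content-addressed filename for a TTS fragment:
    identical (voice, text) pairs map to the same file, so audio is
    generated once and reused thereafter. Layout is hypothetical."""
    digest = hashlib.sha256(f"{voice}:{text}".encode("utf-8")).hexdigest()[:16]
    return f"tts/{voice}/{digest}.{fmt}"
```

Before calling the external TTS service, the server would check whether the file at this path already exists; if so, it streams the stored audio and skips the paid synthesis call entirely.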
Step 6—Interactive Engagement and Notes: (This step extends a bit beyond our initial scope but is a logical part of a student’s workflow on the platform.) As students go through content, they might take notes or highlight text. The platform could allow annotations that the student can save. These personal notes could be stored locally or in the Cloud for later review. Although not previously presented, an improvement to the system could be an AI-based assistant that answers questions, for example, the student selects a phrase and asks, “What does this mean?” and the AI provides clarifications. While our current implementation focuses on summary/translation/TTS, such an interactive Q&A feature could be integrated with the same AI integration backend to further personalize learning.
Step 7—Feedback and Personalization Loop: After engaging with the content, the students have the opportunity to provide feedback. They might rate the lesson itself (how helpful it was), and specifically rate the AI-generated aids (e.g., “Was the summary useful? [1–5 stars]”). They can also flag any issues (like “the translation was confusing here” or “the summary missed this important point”). This feedback is sent to the database and serves multiple purposes: it informs teachers about how their content is received, and it helps the system adjust its personalization. The platform may also implicitly learn from student behavior (e.g., if a student always immediately translates to Spanish, it could ask “Would you like to see content in Spanish by default?” and then remember that preference). The feedback loop ensures that students are not just passive recipients; they actively shape the AI’s contribution to their learning experience.
Finally, the student logs out or simply closes the app. Thanks to the PWA, even if they close it, part of the content remains cached for quick access next time. This continuity improves learning outcomes by encouraging students to stay engaged.
This complete student workflow demonstrates how AI can be seamlessly integrated into everyday learning tasks: reading, comprehension, and review. By integrating summarization, translation, and TTS into the content consumption process, students benefit from a more interactive and supportive learning environment. All of these work on top of the content prepared by their teacher, closing the loop between the teacher’s AI-assisted content creation and the student’s AI-assisted content utilization.

4.3. Data Flow and Interaction Model

Figure 17 presents a sequence diagram showing the relationships between the described components and specifying the data flow between the users (teacher and student), the platform’s front-end and back-end, the database, and the artificial intelligence services.
This sequence models the process by which a teacher uploads the content of a lesson and a student views and interacts with that content. The model presents the life cycle of the content across the corresponding components, highlighting how each request and response is handled by the application.
In the teacher’s section of the flow: the teacher’s browser first sends the content (which may include text and images) to the server. The server uses a file parser to extract text and store it, and records references for images. When the teacher triggers AI enhancement, the server acts as an orchestrator: for each enhancement type, it formulates an API request (e.g., sending the image description prompt to OpenAI for alt text, or the lesson text to obtain a list of suggested sections). These requests may happen in parallel if multiple services are called. The AI service(s) processes the input and returns results that the server receives. The server then updates the draft lesson content (in memory or in a temporary store) with these suggestions and sends them back to the teacher’s front-end for review. Once the teacher confirms everything and clicks publish, the server commits all finalized content to the database in a structured format. The diagram shows the database being written with the lesson text, the approved alt texts, any metadata like structure or difficulty level, etc. At publish time, the system might also enqueue tasks like “generate default summary and TTS for this lesson” to prime the cache (this is an optional optimization not shown in the basic flow but logically possible).
On the student side: the student’s request to view a lesson prompts the server to retrieve the content. The sequence diagram indicates the server querying the database (which returns the lesson content, images, alt texts, etc.) and then the server responding to the student’s browser with that data (often as an HTML page or JSON data that the front-end renders). When the student clicks something like “Summarize”, the diagram shows the front-end sending an AJAX request to the server (including parameters like lesson ID and summary length). The server then calls the AI service (e.g., OpenAI) with the relevant prompt. After the AI returns the summary, the server delivers this summary text back to the front-end, and simultaneously (or subsequently) stores the summary in the database or an in-memory cache under a key like “lesson123_summary_brief” for quick retrieval next time. The student’s browser receives the summary and displays it. Similar sub-flows occur for translation or TTS: for translation, the AI service returns the translated text, which then goes back to the browser and possibly into the cache; for TTS, the service returns an audio file/URL that the server passes back to the front-end for playback (and the file might be stored on the server for reuse).
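The cache-first summary flow on the server side, using a key of the form “lesson123_summary_brief” as in the diagram, can be sketched as follows; the helper names are hypothetical and the AI call is stubbed out by the caller:

```python
def handle_summary_request(lesson_id, detail, cache, fetch_lesson, call_ai):
    """Illustrative server-side flow for an on-demand summary:
    serve from cache if present, otherwise read the lesson from the
    database, call the AI service once, and memoize the result under
    a key like 'lesson123_summary_brief' for the next student."""
    key = f"{lesson_id}_summary_{detail}"
    if key in cache:
        return cache[key]                       # cache hit: no AI call
    text = fetch_lesson(lesson_id)              # DB read (stubbed here)
    summary = call_ai(f"Summarize ({detail}): {text}")  # external AI call
    cache[key] = summary                        # memoize for reuse
    return summary
```

In the real platform `cache` would be Redis or an application-level cache and `call_ai` the AI Integration layer; the sketch only conveys the check-generate-store ordering shown in the sequence diagram.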
The caching mechanism associated with the PWA is also present in the sequence diagram: the first time a student accesses the course’s materials, the service worker will be involved in the process by storing a copy of the lesson content in the browser’s cache storage. If the same lesson is requested by the student in the future, the browser can quickly load it from the cache (if the lesson is still valid). The same is true for a student who, for example, receives a summary or an audio file, where service workers can cache these results. This caching process is not explicitly described in the sequence diagram (to avoid additional complexity), but it is an important interaction that connects the front-end to the browser’s offline storage.
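The service worker’s cache-first lookup can be abstracted as follows. This Python sketch only mirrors the decision logic; the actual implementation lives in the browser’s Service Worker and Cache Storage APIs, and the function signature is hypothetical:

```python
def load_lesson(lesson_id, browser_cache, fetch_from_network, online):
    """Abstract model of the service worker's cache-first strategy:
    serve the cached copy when available, fall back to the network
    when online, and refresh the cache on every successful fetch.
    Returns (content, source) so callers can see where content came from."""
    if lesson_id in browser_cache:
        return browser_cache[lesson_id], "cache"
    if not online:
        raise ConnectionError("lesson not cached and device is offline")
    content = fetch_from_network(lesson_id)   # normal HTTP request
    browser_cache[lesson_id] = content        # store for offline reuse
    return content, "network"
```

The same logic applies to cached summaries and audio files: once fetched while online, they remain available from the browser’s storage on subsequent offline visits.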
As depicted in the diagram, the database (DB) ensures the persistence and integrity of content and user information, while the AI service acts as an external stateless component outside the main database; every AI request is therefore independent. The platform’s backend layer is the main coordinator that maintains context, for example, which lesson and which part is summarized and who requested it. The student and the teacher are represented as two separate entities but ultimately converge on the same system components.
Although not represented in the diagram, the student might send feedback—after the successful content retrieval—which would be another request to the server, resulting in a DB write (storing the feedback). If the teacher later queries analytics, the server would read that from DB.
The system design meets its objectives by allowing teachers to add AI-enhanced content, students to make their own AI requests, and the database to ensure persistence and consistency. AI services are loosely integrated, so the platform retains control over educational logic and data, delegating only specialized tasks to AI. This modular design also enables independent updates, for example, replacing the AI model without changing the database or front end.
Overall, AccessiLearnAI provides a comprehensive AI-enabled framework that balances personalization, accessibility, and security. The workflows and data flow diagram ground these technical concepts in real user activities.

5. Comparative Analysis with Established E-Learning Platforms

Digital-learning research typically presents novel solutions in isolation; however, situating a platform alongside proven alternatives sharpens the view of its genuine added value. The following comparative analysis positions AccessiLearnAI next to two well-known systems, Moodle (2025b) and Smart Sparrow (2025a), representing, respectively, a widely adopted open-source learning management system and a targeted adaptive learning environment.
To provide a clear perspective on AccessiLearnAI’s relative strengths, the platforms are compared across eight critical dimensions: accessibility-driven content structuring, AI-based personalization and adaptivity, semantic enhancement and offline support, ethical data handling, inclusive learning capabilities, summarization and content transformation, user experience, and technical scalability. These dimensions reflect the core principles of inclusive, AI-enhanced education, and they are summarized in Table 1 to provide a structured comparison of AccessiLearnAI with established platforms.
The comparison above highlights how AccessiLearnAI distinguishes itself by unifying advanced AI capabilities with an accessibility-first philosophy, whereas Moodle and Smart Sparrow excel in more traditional domains of e-learning. Moodle, a widely used open-source LMS, provides a stable, scalable foundation with a strong community and broad feature set, but it relies on educators and add-ons to achieve personalization and some accessibility outcomes. Smart Sparrow, on the other hand, pioneered adaptive learning, allowing content to adjust to learners in real time, yet it did not incorporate the newer AI-driven content transformations or a comprehensive accessibility framework.
From an academic standpoint, this comparison underscores a trend in e-learning: the convergence of accessibility, AI, and adaptivity into a single framework (embodied by AccessiLearnAI) versus earlier generations that tackled pieces of the puzzle separately. AccessiLearnAI’s advantage lies in its unified vision: it does not treat accessibility and personalization as add-ons, but as core principles driving the technology. This is particularly beneficial for learners with disabilities or unique learning needs, who have often been an afterthought in platform design. This benchmarking provides an empirical bridge to the subsequent Discussion (Section 6), allowing us to interpret technical contributions through the lens of practical deployment and pedagogical impact.

6. Discussion

Our project addresses a key gap in e-learning: the absence of a unified platform that combines accessibility with real-time AI-driven personalization. Previous systems introduced partial accessibility features or AI enhancements, but none integrated these elements from the ground up. By using web accessibility standards (HTML5, ARIA, WCAG 2.1), adaptive learning technologies combined with artificial intelligence, and multi-format functions such as summarization, alternative text generation and text-to-speech, AccessiLearnAI can provide an inclusive educational environment that is accessible to a wide range of learners, especially those with visual, cognitive and linguistic limitations.
In relation to our research questions, the prototype demonstrates the feasibility of a unified accessibility-first architecture (RQ1), shows promising practical accessibility/usability indicators compared with mainstream LMS baselines (RQ2), and provides high-quality, educator-validated AI outputs for summaries and image descriptions (RQ3).
The platform architecture supports the philosophy of inclusive design that coexists with advanced artificial intelligence features without compromising usability or technical performance. PWA technology ensures digital equity by enabling offline access, which is crucial for learners with limited connectivity.
Each page template embeds semantic HTML5 landmarks and AI-generated alt-text, followed by automated WAVE audits and manual ChromeVox checks. Teachers review all AI suggestions before publication, ensuring technical compliance and pedagogical accuracy. The novelty of this approach lies in combining automated AI assistance with pedagogical oversight to ensure both technical compliance and meaningful accessibility. This workflow turns AccessiLearnAI into a repeatable blueprint for AI-assisted accessibility, rather than a one-off patch.
This work contributes not only to the creation of a functional prototype, but also to the creation of a design model that can help the future development of learning management systems (LMS) that aim at both inclusion and improved education. Workflows between teachers and students show how teachers can enrich lesson content using artificial intelligence tools, while maintaining control over their pedagogical decisions, and how students can personalize their experience through accessible interfaces and flexible ways of accessing and consuming content.
While AccessiLearnAI is designed to be accessible and intuitive, we recognize that not all users—particularly older educators or students with limited digital experience—may feel immediately comfortable using advanced online tools. To address this, the platform incorporates a minimal learning curve through a clean, distraction-free interface, guided walkthroughs for first-time users, and built-in tooltips that explain features in plain language. Additionally, educators are supported with a simplified authoring environment and optional onboarding videos that explain AI functionalities step by step. These efforts ensure that users of varying digital skill levels can gradually build confidence and benefit from the platform without feeling overwhelmed, aligning with inclusive design principles that prioritize usability for all.
The AI tools built into AccessiLearnAI serve clear teaching goals. On-demand summaries and reading-level rewrites let beginners grasp the “big picture” first (Remember, Understand in Bloom’s taxonomy terms) while still giving advanced learners the option to dive deeper and compare the summary with the full text (Analyse, Evaluate) (Momen et al., 2023). Likewise, instant translation removes language barriers at those same early levels, so students can comprehend ideas in a familiar tongue before practising them in the target language. Finally, text-to-speech (TTS) offers an audio channel that reinforces memory and supports learners who prefer listening or have reading difficulties, again anchoring the lower rungs of Bloom’s ladder (Masapanta-Carrión & Velázquez-Iturbide, 2018) while freeing attention for higher-order tasks. A novice might receive a short, plain-language summary plus native-language captions, whereas an experienced student receives a detailed digest and is prompted to critique it. The same logic adjusts TTS speed or chooses when to suggest a translation. In this way, each AI feature acts as a scaffold that meets students where they are, helps them climb Bloom’s hierarchy step by step, and does so without extra work for the teacher.
From a pedagogical perspective, the proposed platform aligns with the principles of Universal Design for Learning (UDL) (CAST, 2024), offering multiple means of representation (text, audio, translated content), engagement (adaptive summaries, personalization), and expression (student feedback, note-taking) (Bray et al., 2024). This design method promotes and suggests that inclusive practices supported by artificial intelligence can improve both access and engagement among diverse types of students.
AccessiLearnAI applies the Universal Design for Learning (UDL) framework not just by offering content in different formats, but by helping students become more independent and reflective learners. Features like simplified summaries, adjustable reading levels, and instant translations allow learners to choose a deeper engagement with content based on their needs. These choices help students better understand what works for them, supporting broader learning goals through skills like planning, goal-setting, and self-monitoring, key parts of developing metacognition (Fleur et al., 2021) and long-term learning strategies.
The platform also supports deeper learning through reflection prompts that appear after certain actions (e.g., “Did the summary help you understand the topic better?”) and reminders that encourage students to review material over time. Teachers can view student activity and progress to give targeted feedback or suggest different learning paths. These tools align with UDL principles by not only improving access, but also encouraging motivation, engagement, and flexible ways of learning and showing what students know.
Following the functional and integration tests, we proceeded to user testing. The Ethics Commission of our university, after obtaining the assent of its panel of experts in Socio-Human Ethics regarding compliance with research-ethics rules for our extended testing, issued provisional approval for the investigations (no. 9728/18 July 2025). The procedure and document flow complied with the Romanian confidentiality regulations of Law No. 677/2001 on the protection of individuals with regard to the processing of personal data, in conjunction with EU Regulation No. 2016/679.
The sample comprised one fifth of the target group—undergraduate students in the final year of the Telecommunications program—and of their teachers. As the authors teach in the “Telecommunications Systems and Technologies” (TST) program, and because prior oral informed consent was required, the students invited to the user test were drawn from TST. They were approached during the summer specialized practical training dedicated to final-year students. This period was deemed appropriate for concentrated user testing, given that earlier periods—the groundwork for the summer exams and the examination session itself—focus on other intensive work. Final-year TST students were also chosen because they had experienced e-learning during the pandemic, using the university’s Moodle platform at their own pace and in a personalized environment. The courses in the repository prepared for user testing are general-purpose rather than telecom-specific, but the cohort’s familiarity with ICT (including networks, cloud, and AI) was regarded not as a bias but as a favorable circumstance for consistent test reports. Since the final-year TST cohort comprises one and a half groups (three “semi-groups”), 11 students were invited. Demographically, all 11 were Romanian; in terms of gender balance, 7 were male and 4 were female.
The 1/5 sampling was also applied to the teaching staff allocated to the final year of TST. The gender balance (4 male, 1 female) is representative of this teaching staff, and their titles span the academic ranks: 1 professor, 1 associate professor, 1 senior lecturer, and 2 assistants. The 5 invited teachers are all Romanian. They cover courses and applications in subjects such as Operating Systems; Signals and Systems (Part II—Systems and Control Theory); Interfaces, Protocols and Signaling; Software Defined Radio; and Mobile Communications.
After giving prior oral informed consent, all participants were enrolled on the university’s familiar Moodle platform (mentioned above) to provide anonymous feedback in the form of randomly labeled reports containing assessment answers (on a 1-to-5 scale) and open answers (to collect improvement suggestions). Access credentials were provided in an AccessiLearnAI Users’ Guide, together with test-report templates and an explanatory manifesto, “Accessibility-First Design of the AI API—Focus on Student Experience”. As noted above, the fact that the invited student testers were in their final year at TST facilitated a better understanding of, and faster familiarization with, AccessiLearnAI.
Special attention was given to a valuable collaboration—for user testing and pre-launch improvement—with a visually impaired alumna holding a master’s degree in ICT for mobile application development (a teaching program in German, run jointly by Transilvania University of Brașov, RO, and Technische Universität Ilmenau, DE). Not only tech-savvy but also equipped with a comprehensive suite of specialized software tools for visually impaired users, she conducted thorough user testing, first on desktop computers and then on portable and mobile devices (tablets, notebooks, and smartphones).
Participants at the user-test phase particularly valued the platform’s content-simplification tools and its AI-assisted support for teachers. During structured usability sessions, volunteers navigated multiple lesson pages and followed AI-personalized learning flows within a representative course.
The research methodology focused on evaluating the web platform’s accessibility, personalization, and usability in an educational context from the students’ perspective, while also examining the impact of integrated AI-based tools—such as summarization, translation and semantic structuring—on the end-user learning experience. The evaluation explored several key questions: how easy the platform is to use for first-time participants; which AI functionalities are perceived as most useful by users; the extent to which the platform supports personalized and accessible learning experiences; and what potential improvements can be identified based on user feedback.
Student user testing: A qualitative, page-by-page evaluation was conducted with descriptive notes on navigation flow, content semantics, and the quality of AI-generated adaptations (summaries and automatic translations). Across reports, navigation and structure were consistently described as intuitive, with a coherent heading hierarchy, logical tab order, and screen-reader-friendly markup. Where task tables were present, outcomes were successful; numerical grids (1–5) showed mostly 4–5 ratings per page or overall for ease of navigation, screen-reader performance, cognitive load, and confidence in reuse. Most participants reported minimal visual fatigue and high confidence in continued use. Identified issues were focused and actionable: under NVDA, switching between already-loaded versions could skip the main content, and some language variants lacked diacritics. Open-ended feedback asked for broader language support, clearer differentiation of advanced-level summaries, and the ability to configure technical language styles. Additional suggestions included preserving earlier AI-generated versions before regenerating, adding a question-answer interface and a flashcard generator, and addressing minor UX items (e.g., button display and character-limit definitions). These findings confirm robust accessibility alignment while highlighting small refinements to prioritize before wider deployment. The minor concerns reported—such as refining button layout and clarifying input character limits—were straightforward to resolve and were promptly corrected on the platform, while more complex suggestions were scheduled for future updates.
Teacher user testing (focus on lesson creation): A separate testing phase focused specifically on teachers and their use of the platform to create lessons. The tools tested included features for organizing content using clear structure (such as headings and labeled sections), simplifying complex language, and generating short summaries. These AI-generated elements were always reviewed by the teacher before being published, keeping the teacher in control of the final content. Teachers used a preview mode, with support from accessibility tools like screen readers and the WAVE checker, to ensure their materials were easy to navigate for all learners, including those with disabilities. Most participants found the process clear and helpful in refining AI output. One important detail: spelling errors were not corrected automatically. This was intentional, allowing teachers to keep subject-specific terminology or introduce new words when needed. However, until a custom dictionary feature is added, teachers should carefully review the content before final approval. Teachers also appreciated being able to save earlier versions when generating new AI suggestions, and requested more options to adjust the style or complexity of technical language.
To ensure the reliability and educational value of AI-generated summaries and alternative text, we implemented a two-step quality-control process combining automated checks with human evaluation. First, automatic checks compared AI-generated summaries with teacher-created examples and assessed whether image descriptions included key visual and contextual elements. If the system flagged content as potentially inaccurate or incomplete, it was reviewed by human experts. In this second stage, two educators and one accessibility specialist evaluated the outputs against quality criteria such as clarity, accuracy, usefulness for learners, and potential bias. When necessary, the content was revised before being published to students. During testing, the majority of AI-generated outputs met our quality criteria, and human reviewers gave them consistently high ratings. This approach combines the efficiency of automation with the judgment of educators, helping to ensure that the content students receive is both trustworthy and pedagogically sound.
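The two-stage gate described above can be sketched in a few lines of Python. The similarity metric, the 0.5 threshold, and the routing labels below are illustrative assumptions for exposition, not the platform’s actual implementation:

```python
import string

def tokens(text: str) -> set:
    """Lower-cased word set with surrounding punctuation stripped."""
    return {w.strip(string.punctuation) for w in text.lower().split()} - {""}

def overlap_score(candidate: str, reference: str) -> float:
    """Share of the reference vocabulary covered by the candidate:
    a crude ROUGE-1-recall-like proxy for the automatic check."""
    ref = tokens(reference)
    return len(tokens(candidate) & ref) / len(ref) if ref else 0.0

def triage(ai_output: str, teacher_reference: str, threshold: float = 0.5):
    """Stage 1: automatic check. Below-threshold items are routed to the
    Stage 2 human-review queue instead of being published directly."""
    score = overlap_score(ai_output, teacher_reference)
    return ("publish" if score >= threshold else "human_review"), score

decision, score = triage(
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Photosynthesis is the process by which plants convert light energy "
    "into chemical energy.",
)
```

For the example pair the overlap is 0.5, so the summary just passes the illustrative threshold; any lower score would enqueue it for educator and accessibility-specialist review, mirroring the flagging step described above.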
All core tasks were successfully completed, with the platform demonstrating strong screen-reader compatibility, smooth navigation, and stable cross-device performance. Participants evaluated the system positively in terms of navigational ease, content clarity, and the perceived usefulness of AI-driven features such as summarization and text-to-speech. Although these formative findings indicate promising levels of usability and accessibility, further empirical evaluation is required to provide quantitative evidence regarding learning outcomes, sustained usability, and long-term accessibility impact.
To address this, future testing will include structured evaluations involving both learners and educators. Planned activities involve usability testing, accessibility audits, and controlled comparisons to assess engagement, learning gains, and overall user satisfaction. These studies will employ established methodologies and metrics to provide robust, evidence-based insights into the platform’s effectiveness. Findings from these evaluations will directly inform the next stages of development and refinement.
Future iterations of the platform improvement can build on these findings by incorporating additional and advanced accessibility features (e.g., support for sign language overlays or interactive captions) and by expanding support for adapting domain-specific jargon. The principles and the approach of the AccessiLearnAI platform can thus set a potential precedent for next-generation educational systems, emphasizing the role of artificial intelligence not as a replacement for human pedagogy, but as an aid to improving equitable, accessible, and flexible learning experiences.
Despite these strengths, several limitations remain, ranging from incomplete support for all disability categories to dependence on external LLMs, which Section 7 details and which guide our next research steps.

7. Limitations

Although the results of the unified integration of the broad feature set are promising, this study has several limitations.
The evaluation of the AccessiLearnAI platform remains preliminary. Although qualitative feedback and accessibility tests demonstrate the effectiveness of the system, a large-scale quantitative study is still needed to rigorously test the improvement in student engagement and learning outcomes. Such a study should ideally measure student performance over time, across disciplines, and with diverse user groups, including people with various types of disabilities and different linguistic backgrounds.
Recognizing that true accessibility goes beyond technical compliance to include real-world usability, we aim to prepare a dedicated user-testing phase involving blind and visually impaired learners. This study will include comparative evaluations with and without AI-generated enhancements, focusing on key metrics such as screen-reader task completion time, user satisfaction scores, and error rates during keyboard-only navigation. The results will offer valuable empirical evidence for assessing the platform’s practical accessibility and will directly inform future improvements to our AI-assisted, accessibility-first design.
Also, AI-generated adaptations (e.g., summaries, alternative text, translations) rely on external Large Language Models (LLMs), which may not consistently generate high-quality results across all domains or languages. While the system’s “human-in-the-loop” design mitigates these limitations, the effectiveness of the platform still depends in part on the reliability and accuracy of these AI models. Domain-specific content or low-resource languages may still require manual correction.
Another limitation is that, although AccessiLearnAI addresses key accessibility needs related to vision, reading comprehension, and language access, it is not yet optimized to support all categories of disabilities. For example, deaf or hard-of-hearing users would require the integration of features such as automatic video captioning or sign language overlays. Similarly, people with motor impairments require access through assistive switches or speech-based navigation; the platform currently offers only limited support for these users, and further development is planned in this regard.
Finally, the platform, as presented in this paper, is a working prototype. Although the architecture has been designed to be scalable and cyber-secure, real-world implementation in institutional environments will require additional integration, stress testing, and evaluation of long-term user behavior. While our pilot captured usability and perceived satisfaction, we did not, by design, collect direct evidence of pedagogical impact (e.g., comprehension gains, time-on-task reductions, or comparative performance against an incumbent LMS). This omission, deliberate for the current phase of our rollout, limits the strength of the claims we can make about learning outcomes and should be read as a constraint on the interpretation of the present results. We therefore explicitly acknowledge the absence of outcome-level metrics as a limitation of this study.
Recognizing these limitations is essential for guiding future improvements and for framing current work as an initial contribution towards building truly inclusive learning systems based on artificial intelligence.

8. Conclusions and Future Work

This paper presented AccessiLearnAI, an e-learning platform that unifies accessibility-focused design principles with AI-driven personalization. Using Large Language Models and text-to-speech services, our system enables learners with disabilities or bandwidth constraints to access educational content on equal terms. The multi-tiered architecture supports robust security and offline PWA functionality, ensuring continuous and inclusive education opportunities. Key features include, but are not limited to, automated alternative text, content adaptation to reading level, on-the-fly text summarization, and language translation. Through these features, AccessiLearnAI closes critical usability gaps that many traditional and online learning solutions still overlook.
This approach shows that integrating AI with strict accessibility standards can enhance digital learning experiences. By combining teacher-led content development with AI augmentation, the platform achieves personalization for diverse learner needs. Central to this approach is the ‘human-in-the-loop’ mechanism, ensuring that educators review and validate AI outputs before publication. This oversight reduces risks of inaccuracy or bias while maintaining pedagogical control. By uniting automation with human expertise, AccessiLearnAI promotes trust, reliability, and adaptability in educational contexts.
AccessiLearnAI is designed with accessibility at its core, offering a transformative experience for blind and visually impaired users. Unlike traditional platforms that treat accessibility as an add-on, AccessiLearnAI integrates features such as AI-generated semantic HTML5 structuring directly into the content-creation process. These enhancements ensure compatibility with screen readers, allowing blind students to navigate and understand lessons more effectively. The platform also includes a human-in-the-loop mechanism, enabling teachers to review and refine AI-generated content to maintain accuracy and clarity. The system is likewise built around data privacy, ethical AI usage, and user-controlled personalization, ensuring that blind learners can study independently in a secure and inclusive digital environment.
Looking ahead, future development will focus on comprehensive user testing in extended educational settings, enhancing domain-specific language processing for disciplines such as medicine and engineering, and refining content adaptations for under-resourced languages. We are also developing an adaptive quiz feature to tailor assessments based on a learner’s prior performance.
In our future work, we will conduct a controlled A/B evaluation against an existing institutional LMS across multiple courses and modalities. The study will pair pre/post comprehension assessments aligned to course outcomes with log-derived behavioral metrics (task-completion time, navigation steps, help requests, and error corrections), and course-level indicators (assignment on-time rates and pass/withdrawal ratios). We will stratify analyses by accessibility-relevant subgroups (e.g., screen-reader users, language background) to test whether benefits are equitably distributed, and report effect sizes with confidence intervals rather than relying solely on null-hypothesis tests. To ensure rigor, we will pre-register hypotheses, conduct an a priori power analysis targeting detection of small–moderate effects, and follow ethical and data-protection safeguards. This plan will allow us to quantify whether AccessiLearnAI delivers measurable learning gains and efficiency improvements beyond positive perceptions alone.
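As an illustration of the planned reporting style (effect sizes with confidence intervals rather than bare p-values), the sketch below computes Cohen’s d between two groups with an approximate 95% CI via the usual normal approximation to the standard error of d. The group statistics are invented placeholder numbers, not study data:

```python
import math

def cohens_d_with_ci(mean1, sd1, n1, mean2, sd2, n2, z=1.96):
    """Cohen's d with an approximate 95% CI (normal approximation).
    Illustrative of the planned reporting, not actual study code."""
    # Pooled standard deviation across the two groups
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled
    # Approximate standard error of d (Hedges & Olkin style)
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical post-test scores: treatment (mean 78, SD 10, n = 40)
# versus control (mean 72, SD 10, n = 40)
d, (lo, hi) = cohens_d_with_ci(78.0, 10.0, 40, 72.0, 10.0, 40)
```

With these placeholder inputs, d = 0.6 and the interval excludes zero, which is the kind of small-to-moderate effect the planned a priori power analysis would target; reporting the interval alongside d makes the precision of the estimate explicit for each accessibility-relevant subgroup.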
Finally, large-scale user testing and longitudinal studies are planned to better evaluate educational outcomes, student engagement, and platform effectiveness across diverse demographics and learning needs. By continuously expanding both its accessibility features and its pedagogical intelligence, AccessiLearnAI aims to set a new benchmark for inclusive, responsible, and AI-enhanced online learning.

Author Contributions

Conceptualization, G.A.S. and F.S.; methodology, D.R. and F.S.; software, G.A.S.; validation, G.A.S., D.R. and F.S.; formal analysis, G.A.S., D.R. and F.S.; investigation, G.A.S., D.R. and F.S.; resources, G.A.S., D.R. and F.S.; data curation, G.A.S., D.R. and F.S.; writing—original draft preparation, G.A.S. and D.R.; writing—review and editing, G.A.S., D.R. and F.S.; visualization, G.A.S. and D.R.; supervision, D.R. and F.S.; project administration, D.R. and F.S.; funding acquisition, D.R. and F.S. All authors have read and agreed to the published version of the manuscript.

Funding

The study was carried out with the support of Transilvania University of Brașov through its institutional research resources; no dedicated grant or project number applies.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. AccessibleEU. (2025). A new era of inclusion begins: The European accessibility act enters into force. Available online: https://accessible-eu-centre.ec.europa.eu/content-corner/news/new-era-inclusion-begins-eaa-enters-force-2025-06-27_en (accessed on 21 July 2025).
  2. Acosta-Vargas, P., Acosta-Vargas, G., Salvador-Acosta, B., & Jadán-Guerrero, J. (2024a, June 24–26). Addressing web accessibility challenges with generative artificial intelligence tools for inclusive education. 10th International Conference on eDemocracy & eGovernment (ICEDEG) (pp. 1–7), Lucerne, Switzerland. [Google Scholar] [CrossRef]
  3. Acosta-Vargas, P., Salvador-Acosta, B., Novillo-Villegas, S., Sarantis, D., & Salvador-Ullauri, L. (2024b). Generative artificial intelligence and web accessibility: Towards an inclusive and sustainable future. Emerging Science Journal, 8(4), 1602–1621. [Google Scholar] [CrossRef]
  4. Adams, C. (2025). Salt. In S. Jajodia, P. Samarati, & M. Yung (Eds.), Encyclopedia of cryptography, security and privacy (pp. 2157–2158). Springer. [Google Scholar] [CrossRef]
  5. Airaj, M. (2024). Ethical artificial intelligence for teaching-learning in higher education. Education and Information Technologies, 29, 17145–17167. [Google Scholar] [CrossRef]
  6. Al-Fraihat, D., Alshahrani, A. M., Alzaidi, M., Shaikh, A. A., Al-Obeidallah, M., & Al-Okaily, M. (2025). Exploring students’ perceptions of the design and use of the Moodle learning management system. Computers in Human Behavior Reports, 18, 100685. [Google Scholar] [CrossRef]
  7. Ally. (2025). Ally accessibility tool. Available online: https://ally.ac/ (accessed on 12 April 2025).
  8. Amazon Web Services. (2025a). Amazon cloudfront. Available online: https://aws.amazon.com/cloudfront/ (accessed on 9 April 2025).
  9. Amazon Web Services. (2025b). Amazon polly. Available online: https://aws.amazon.com/polly/ (accessed on 11 December 2024).
  10. Arredondo-Trapero, F. G., Guerra-Leal, E. M., Kim, J., & Vázquez-Parra, J. C. (2024). Competitiveness, quality education and universities: The shift to the post-pandemic world. Journal of Applied Research in Higher Education, 16(5), 2140–2154. [Google Scholar] [CrossRef]
  11. Bootstrap. (2025). Bootstrap framework. Available online: https://getbootstrap.com/ (accessed on 18 April 2025).
  12. Bray, A., Devitt, A., Banks, J., Sanchez Fuentes, S., Sandoval, M., Riviou, K., Byrne, D., Flood, M., Reale, J., & Terrenzio, S. (2024). What next for universal design for learning? A systematic literature review of technology in UDL implementations at second level. British Journal of Educational Technology, 55, 113–138. [Google Scholar] [CrossRef]
  13. CAST. (2024). Universal design for learning guidelines. Available online: https://udlguidelines.cast.org/ (accessed on 12 November 2024).
  14. Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264–75278. [Google Scholar] [CrossRef]
  15. Corrêa, N. K., Galvão, C., Santos, J. W., Del Pino, C., Pinto, E. P., Barbosa, C., Massmann, D., Mambrini, R., Galvão, L., Terem, E., & de Oliveira, N. (2023). Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns, 4(10), 100857. [Google Scholar] [CrossRef] [PubMed]
  16. Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20, 22. [Google Scholar] [CrossRef]
  17. Dilmegani, C. (2025). LLM latency benchmark by use cases in 2025. Research AImultiple.com. Available online: https://research.aimultiple.com/llm-latency-benchmark/ (accessed on 10 July 2025).
  18. DreamBox. (2024). Online math & reading programs for students—DreamBox by discovery education. Available online: https://www.dreambox.com/ (accessed on 23 July 2024).
  19. Dwivedi, R., Dave, D., Naik, H., Singhal, S., Omer, R., Patel, P., Qian, B., Wen, Z., Shah, T., Morgan, G., & Ranjan, R. (2023). Explainable AI (XAI): Core ideas, techniques, and solutions. ACM Computing Surveys, 55(9), 194. [Google Scholar] [CrossRef]
  20. Egger, R., & Gokce, E. (2022). Natural language processing (NLP): An introduction. In R. Egger (Ed.), Applied data science in tourism (pp. 307–334). Springer. [Google Scholar] [CrossRef]
  21. Ehsan, A., Abuhaliqa, M. A. M. E., Catal, C., & Mishra, D. (2022). RESTful API testing methodologies: Rationale, challenges, and solution directions. Applied Sciences, 12, 4369. [Google Scholar] [CrossRef]
  22. European Commission. (2025). Persons with disabilities—Strategy and policy. Available online: https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/disability/persons-disabilities_en (accessed on 21 July 2025).
  23. European Parliament & Council. (2016). Regulation (EU) 2016/679 (general data protection regulation). Available online: https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng (accessed on 4 November 2024).
  24. European Union. (2019). Directive (EU) 2019/882 of the European Parliament and of the Council of 17 April 2019 on the accessibility requirements for products and services (European Accessibility Act). Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32019L0882 (accessed on 21 July 2025).
  25. Fleur, D. S., Bredeweg, B., & van den Bos, W. (2021). Metacognition: Ideas and insights from neuro- and educational sciences. npj Science of Learning, 6, 13. [Google Scholar] [CrossRef]
  26. Gligorea, I., Cioca, M., Oancea, R., Gorski, A.-T., Gorski, H., & Tudorache, P. (2023). Adaptive learning using artificial intelligence in e-learning: A literature review. Education Sciences, 13, 1216. [Google Scholar] [CrossRef]
  27. Google. (2025). ChromeVox screen reader extension. Available online: https://chromewebstore.google.com/detail/screen-reader/kgejglhpjiefppelpmljglcjbhoiplfn (accessed on 15 January 2025).
  28. Google Cloud. (2025). Text-to-speech documentation. Available online: https://cloud.google.com/text-to-speech (accessed on 12 April 2025).
  29. Google Developers. (2025a). Explore progressive web apps—Web.dev. Available online: https://web.dev/explore/progressive-web-apps (accessed on 11 January 2025).
  30. Google Developers. (2025b). Google analytics developer documentation. Available online: https://developers.google.com/analytics (accessed on 18 April 2025).
  31. Google Developers. (2025c). Learn PWA: Service workers. Available online: https://web.dev/learn/pwa/service-workers (accessed on 16 April 2025).
  32. Google Developers. (2025d). OAuth 2.0 for web server applications. Available online: https://developers.google.com/identity/protocols/oauth2 (accessed on 19 March 2025).
  33. Hillaire, G., Iniesto, F., & Rienties, B. (2019). Humanising text-to-speech through emotional expression in online courses. Journal of Interactive Media in Education, 2019(1), 12. [Google Scholar] [CrossRef]
  34. Huang, C., Zhang, Z., Mao, B., & Yao, X. (2023). An overview of artificial intelligence ethics. IEEE Transactions on Artificial Intelligence, 4(4), 799–819. [Google Scholar] [CrossRef]
  35. Huber, S., Demetz, L., & Felderer, M. (2021). PWA vs the others: A comparative study on the UI energy-efficiency of progressive web apps. In Web engineering (ICWE 2021) (Lecture Notes in Computer Science 12706). Springer. [Google Scholar] [CrossRef]
  36. Ingavelez-Guerra, P., Oton-Tortosa, S., Hilera-González, J., & Sánchez-Gordón, M. (2023). The use of accessibility metadata in e-learning environments: A systematic literature review. Universal Access in the Information Society, 22, 445–461. [Google Scholar] [CrossRef]
  37. Jafarian, N. R., & Kramer, A.-W. (2025). AI-assisted audio-learning improves academic achievement through motivation and reading engagement. Computers and Education: Artificial Intelligence, 8, 100357. [Google Scholar] [CrossRef]
  38. Jónsdóttir, A. A., Kang, Z., Sun, T., Mandal, S., & Kim, J.-E. (2023). The effects of language barriers and time constraints on online learning performance: An eye-tracking study. Human Factors, 65(5), 779–791. [Google Scholar] [CrossRef]
  39. jQuery. (2025). The write less, do more, JavaScript library. Available online: https://jquery.com/ (accessed on 16 April 2025).
  40. Kasirzadeh, A., & Clifford, D. (2021, May 19–21). Fairness and data protection impact assessments. 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES’21) (pp. 146–153), Virtual Event, USA. [Google Scholar] [CrossRef]
  41. Kazimzade, G., Patzer, Y., & Pinkwart, N. (2019). Artificial intelligence in education meets inclusive educational technology—The technical state-of-the-art and possible directions. In J. Knox, Y. Wang, & M. Gallagher (Eds.), Artificial intelligence and inclusive education (pp. 61–73). Springer. [Google Scholar] [CrossRef]
  42. Khamzina, K., Stanczak, A., Brasselet, C., Desombre, C., Legrain, C., Rossi, S., & Guirimand, N. (2024). Designing effective pre-service teacher training in inclusive education: A narrative review of the effects of duration and content delivery mode on teachers’ attitudes toward inclusive education. Educational Psychology Review, 36, 13. [Google Scholar] [CrossRef]
  43. Kirss, L., Säälik, Ü., Leijen, Ä., & Pedaste, M. (2021). School effectiveness in multilingual education: A review of success factors. Education Sciences, 11, 193. [Google Scholar] [CrossRef]
  44. Klimova, B., Pikhart, M., Benites, A. D., Lehr, C., & Sanchez-Stockhammer, C. (2023). Neural machine translation in foreign language teaching and learning: A systematic review. Education and Information Technologies, 28, 663–682. [Google Scholar] [CrossRef]
  45. Lakshmi, G. J., Surekha, T. L., Harathi, R., & Bhargavi, B. (2025). Developing an online/offline educational application to the students in rural areas. In Data science & exploration in artificial intelligence (CODE-AI 2024). CRC Press.
  46. Laravel. (2024). Laravel PHP framework. Available online: https://laravel.com/ (accessed on 18 December 2024).
  47. Laravel. (2025a). Concurrency—Laravel 12.x—The PHP framework for web artisans. Available online: https://laravel.com/docs/12.x/concurrency (accessed on 15 July 2025).
  48. Laravel. (2025b). Laravel hashing documentation. Available online: https://laravel.com/docs/12.x/hashing (accessed on 17 November 2024).
  49. Laravel. (2025c). Laravel Horizon—Laravel 12.x—The PHP framework for web artisans. Available online: https://laravel.com/docs/12.x/horizon (accessed on 15 July 2025).
  50. Laravel. (2025d). Laravel Octane—Laravel 12.x—The PHP framework for web artisans. Available online: https://laravel.com/docs/12.x/octane (accessed on 15 July 2025).
  51. Liew, T. W., Tan, S. M., Pang, W. M., Khan, M. T. I., & Kew, S. N. (2023). I am Alexa, your virtual tutor!: The effects of Amazon Alexa’s text-to-speech voice enthusiasm in a multimedia learning environment. Education and Information Technologies, 28, 1455–1489.
  52. Liu, J., Li, S., Ren, C., Lyu, Y., Xu, T., Wang, Z., & Chen, W. (2023). AI enhancements for linguistic e-learning systems. Applied Sciences, 13, 10758.
  53. MariaDB Foundation. (2025). MariaDB database. Available online: https://mariadb.org/ (accessed on 24 March 2025).
  54. Masapanta-Carrión, S., & Velázquez-Iturbide, J. A. (2018, February 21–24). A systematic review of the use of Bloom’s taxonomy in computer science education. 49th ACM Technical Symposium on Computer Science Education (SIGCSE’18), Baltimore, MD, USA.
  55. MDN Web Docs. (2025a). AJAX—MDN glossary. Available online: https://developer.mozilla.org/en-US/docs/Glossary/AJAX (accessed on 14 April 2025).
  56. MDN Web Docs. (2025b). Document Object Model (DOM)—MDN. Available online: https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model (accessed on 18 April 2025).
  57. Metcalf, J., Moss, E., Watkins, E. A., Singh, R., & Elish, M. C. (2021, March 3–10). Algorithmic impact assessments and accountability: The co-construction of impacts. 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT’21) (pp. 735–746), Virtual Event, Canada.
  58. Momen, A., Ebrahimi, M., & Hassan, A. M. (2023). Importance and implications of theory of Bloom’s taxonomy in different fields of education. In M. A. Al-Sharafi, M. Al-Emran, M. N. Al-Kabi, & K. Shaalan (Eds.), Proceedings of the 2nd international conference on emerging technologies and intelligent systems (ICETIS 2022) (Lecture Notes in Networks and Systems). Springer.
  59. Moodle. (2025a). Accessibility plugins for Moodle. Available online: https://moodle.org/plugins/?q=accessibility (accessed on 15 April 2025).
  60. Moodle. (2025b). Accessibility with Moodle. Available online: https://moodle.com/accessibility/ (accessed on 15 July 2025).
  61. Moodle. (2025c). GDPR—MoodleDocs. Available online: https://docs.moodle.org/500/en/GDPR/ (accessed on 15 July 2025).
  62. Moodle. (2025d). Moodle app | Moodle downloads. Available online: https://download.moodle.org/mobile/ (accessed on 14 July 2025).
  63. Moodle. (2025e). Moodle is a learning platform or learning management system (LMS). Available online: https://moodle.org/ (accessed on 12 July 2025).
  64. Moodle. (2025f). Moodle plugins directory. Available online: https://moodle.org/plugins/ (accessed on 16 July 2025).
  65. Mosqueira-Rey, E., Hernández-Pereira, E., Alonso-Ríos, D., Bobes-Bascarán, J., & Fernández-Leal, Á. (2023). Human-in-the-loop machine learning: A state of the art. Artificial Intelligence Review, 56, 3005–3054.
  66. Mukherjee, D., & Hasan, K. K. (2022). Learning continuity in the realm of education 4.0: Higher education sector in the post-pandemic of COVID-19. In Future of work and business in COVID-19 era (Springer Proceedings in Business and Economics). Springer.
  67. Murtaza, M., Ahmed, Y., Shamsi, J. A., Sherwani, F., & Usman, M. (2022). AI-based personalized e-learning systems: Issues, challenges, and solutions. IEEE Access, 10, 81323–81342.
  68. Nasereddin, M., ALKhamaiseh, A., Qasaimeh, M., & Al-Qassas, R. (2021). A systematic review of detection and prevention techniques of SQL injection attacks. Information Security Journal: A Global Perspective, 32(4), 252–265.
  69. New Relic. (2025). New Relic application performance monitoring. Available online: https://newrelic.com/ (accessed on 8 March 2025).
  70. Niarman, A., Iswandi, & Candri, A. K. (2023). Comparative analysis of PHP frameworks for development of academic information system using load and stress testing. International Journal Software Engineering and Computer Science (IJSECS), 3, 424–436.
  71. Nugraha, D., Anjara, F., & Faizah, S. (2022). Comparison of web-based and PWA in online learning. In 5th FIRST T1-T2 2021 International Conference (FIRST-T1-T2 2021) (pp. 201–205). Atlantis Press.
  72. OpenAI. (2025). OpenAI platform documentation—Overview. Available online: https://platform.openai.com/docs/overview (accessed on 14 April 2025).
  73. Oracle Corporation. (2025). MySQL database. Available online: https://www.mysql.com/ (accessed on 16 April 2025).
  74. Recite Me. (2025). European accessibility act: What it means and what the fines are. Available online: https://reciteme.com/news/european-accessibility-act-fines/ (accessed on 21 July 2025).
  75. Reddy, V. M., Vaishnavi, T., & Kumar, K. P. (2023, July 19–21). Speech-to-text and text-to-speech recognition using deep learning. 2nd International Conference on Edge Computing and Applications (ICECAA) (pp. 657–666), Namakkal, India.
  76. Redis. (2025). Redis in-memory data store. Available online: https://redis.io/ (accessed on 11 April 2025).
  77. Sagi, S. (2025). Optimizing LLM inference: Metrics that matter for real time applications. Journal of Artificial Intelligence & Cloud Computing, 2025(4), 2–4.
  78. Sanchez-Gordon, S., Aguilar-Mayanquer, C., & Calle-Jimenez, T. (2021). Model for profiling users with disabilities on e-learning platforms. IEEE Access, 9, 74258–74274.
  79. Sasmoko, Indrianti, Y., Manalu, S. R., & Danaristo, J. (2024). Analyzing database optimization strategies in Laravel for an enhanced learning management. Procedia Computer Science, 245, 799–804.
  80. SendGrid. (2025). SendGrid email delivery service. Available online: https://sendgrid.com/en-us (accessed on 15 April 2025).
  81. Shafique, R., Aljedaani, W., Rustam, F., Lee, E., Mehmood, A., & Choi, G. S. (2023). Role of artificial intelligence in online education: A systematic mapping study. IEEE Access, 11, 52570–52584.
  82. Smart Sparrow. (2025a). Higher education | Smart Sparrow. Available online: https://www.smartsparrow.com/solutions/highered/ (accessed on 14 July 2025).
  83. Smart Sparrow. (2025b). How to make your lessons accessible. Available online: https://www.smartsparrow.com/2018/11/09/how-to-make-your-lessons-accessible/ (accessed on 14 July 2025).
  84. Smart Sparrow. (2025c). Smart Sparrow adaptive eLearning platform. Available online: https://www.smartsparrow.com/ (accessed on 14 February 2025).
  85. Smart Sparrow. (2025d). What is adaptive learning? Available online: https://www.smartsparrow.com/what-is-adaptive-learning/ (accessed on 14 July 2025).
  86. Sri Ram, M. S., Joy, E., & J, L. S. (2024, April 26–27). WebSight: An AI-based approach to enhance web accessibility for the visually impaired. International Conference on Science Technology Engineering and Management (ICSTEM) (pp. 1–7), Coimbatore, India.
  87. Stelea, G. A., Sangeorzan, L., & Enache-David, N. (2025a). Accessible IoT dashboard design with AI-enhanced descriptions for visually impaired users. Future Internet, 17, 274.
  88. Stelea, G. A., Sangeorzan, L., & Enache-David, N. (2025b). When cybersecurity meets accessibility: A holistic development architecture for inclusive cyber-secure web applications and websites. Future Internet, 17, 67.
  89. Tailwind CSS. (2025). Rapidly build modern websites without ever leaving your HTML. Available online: https://tailwindcss.com/ (accessed on 10 April 2025).
  90. Tate, T., & Warschauer, M. (2022). Equity in online learning. Educational Psychologist, 57(3), 192–206.
  91. Timbi-Sisalima, C., Sánchez-Gordón, M., Hilera-González, J. R., & Otón-Tortosa, S. (2022). Quality assurance in e-learning: A proposal from accessibility to sustainability. Sustainability, 14, 3052.
  92. Tirfe, D., & Anand, V. K. (2022). A survey on trends of two-factor authentication. In H. K. D. Sarma, V. E. Balas, B. Bhuyan, & N. Dutta (Eds.), Contemporary issues in communication, cloud and big data analytics (Lecture Notes in Networks and Systems 281). Springer.
  93. Tiwary, T., & Mahapatra, R. P. (2022, November 11–12). Web accessibility challenges for disabled and generation of alt text for images in websites using artificial intelligence. 3rd International Conference on Issues and Challenges in Intelligent Computing Techniques (ICICT) (pp. 1–5), Ghaziabad, India.
  94. U.S. General Services Administration. (2025). Section 508 standards. Available online: https://www.section508.gov/ (accessed on 6 December 2024).
  95. Weamie, S. (2022). Cross-site scripting attacks and defensive techniques: A comprehensive survey. International Journal of Communications, Network and System Sciences, 15, 126–148.
  96. WebAIM. (2025). WAVE web accessibility evaluation tool. Available online: https://wave.webaim.org/ (accessed on 20 February 2025).
  97. World Wide Web Consortium (W3C). (2018). Web Content Accessibility Guidelines (WCAG) 2.1. Available online: https://www.w3.org/TR/WCAG21/ (accessed on 5 November 2024).
  98. World Wide Web Consortium (W3C). (2025a). ARIA in HTML. Available online: https://www.w3.org/TR/html-aria/ (accessed on 15 March 2025).
  99. World Wide Web Consortium (W3C). (2025b). WAI-ARIA overview. Available online: https://www.w3.org/WAI/standards-guidelines/aria/ (accessed on 26 March 2025).
  100. World Wide Web Consortium (W3C). (2025c). WCAG 2 overview | Web Accessibility Initiative (WAI). Available online: https://www.w3.org/WAI/standards-guidelines/wcag/ (accessed on 21 April 2025).
  101. Wray, E., Sharma, U., & Subban, P. (2022). Factors influencing teacher self-efficacy for inclusive education: A systematic literature review. Teaching and Teacher Education, 117, 103800.
  102. Yamani, A., Bajbaa, K., & Aljunaid, R. (2022, December 4–6). Web application security threats and mitigation strategies when using cloud computing as backend. 14th International Conference on Computational Intelligence and Communication Networks (CICN 2022) (pp. 811–818), Al-Khobar, Saudi Arabia.
  103. Yao, Y., Duan, J., Xu, K., Cai, Y., Sun, Z., & Zhang, Y. (2024). A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly. High-Confidence Computing, 4(2), 100211.
  104. Yenduri, G., Ramalingam, M., Selvi, G. C., Supriya, Y., Srivastava, G., & Maddikunta, P. K. R. (2024). GPT (generative pre-trained transformer)—A comprehensive review on enabling technologies, potential applications, emerging challenges, and future directions. IEEE Access, 12, 54608–54649.
Figure 1. High-level illustration of AccessiLearnAI’s layered architecture.
Figure 2. Student dashboard interface optimized for both mobile (right) and desktop screen sizes (left).
Figure 3. Summary generation algorithm using AI: prompt creation, API request, result sanitization, and caching.
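The pipeline shown in Figure 3 (prompt creation, API request, result sanitization, and caching) can be sketched in a few lines. The code below is an illustrative Python sketch, not the platform's actual Laravel/PHP implementation: the `call_llm` parameter, the in-memory cache standing in for Redis, and all function names are assumptions made for the example.

```python
import hashlib
import re

CACHE: dict[str, str] = {}  # stands in for the Redis cache in this sketch

def build_prompt(lesson_text: str, detail: str = "concise") -> str:
    """Step 1: wrap the lesson text in an instruction for the LLM."""
    return f"Summarize the following lesson at a {detail} level of detail:\n\n{lesson_text}"

def sanitize(raw: str) -> str:
    """Step 3: strip any markup the model may have emitted before display."""
    return re.sub(r"<[^>]+>", "", raw).strip()

def summarize(lesson_text: str, detail: str, call_llm) -> str:
    """Steps 2 and 4: request a summary, sanitize it, and cache it by content hash."""
    key = hashlib.sha256(f"{detail}:{lesson_text}".encode()).hexdigest()
    if key in CACHE:  # cache hit: skip the API round-trip entirely
        return CACHE[key]
    raw = call_llm(build_prompt(lesson_text, detail))  # step 2: API request
    summary = sanitize(raw)
    CACHE[key] = summary  # step 4: store for subsequent requests
    return summary
```

Keying the cache on a hash of the lesson text plus the requested detail level means repeated requests for the same summary never hit the LLM twice, which is the main latency and cost benefit of the caching step.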
Figure 4. Login screen using two-factor authentication: step 1 (left) and step 2 (right).
Figure 5. Teacher’s dashboard main page after login.
Figure 6. Teacher’s content editor interface before AI suggestions are requested and applied.
Figure 7. Teacher’s content editor displaying an AI-generated alt-text suggestion.
Figure 8. Teacher’s content editor with AI-powered semantic markup recommendations.
Figure 9. Teacher’s content editor displaying AI-generated simplification suggestions.
Figure 10. Preview lesson feature tested for accessibility using the WAVE tool in Chrome.
Figure 11. Student dashboard after login on a desktop device (left) and a smartphone (right).
Figure 12. A screenshot of the interface where the student chooses the detail level of the summary.
Figure 13. Summary generated by AI displayed below the original text.
Figure 14. Screenshot of the translation feature where the student selects a target language.
Figure 15. Lesson content dynamically updated after translation.
Figure 16. Screenshots of the interface showing the “Listen to Lesson” button (left), the animated loading icon (center), and the generated audio playback (right).
Figure 17. Sequence diagram illustrating the step-by-step interactions between users, the platform, AI services, and the database.
Table 1. Comparison between AccessiLearnAI, Moodle and Smart Sparrow.
Feature: Accessibility-Driven Content Structuring
- AccessiLearnAI: Designed accessibility-first. It automatically structures content with semantic HTML5 elements and ARIA roles from the outset, ensuring screen readers can navigate course materials easily. AI tools embed alt-text and proper headings by default, with a human-in-the-loop for quality control, so accessibility is built into content creation rather than added as an afterthought.
- Moodle: Emphasizes compliance. Moodle’s interface and authoring tools are built to meet WCAG 2.1 standards (Moodle, 2025a). Core activities are accessible and perceivable, but detailed content structuring is left to instructors. Accessibility plugins can check for issues such as missing alt text, but the platform does not automatically optimize semantic structure or add ARIA labels; authors must implement those best practices themselves.
- Smart Sparrow: Supports accessible design through guidance. The platform allows lessons to be created to meet WCAG 2.0 and Section 508 standards (Smart Sparrow, 2025b) and provides checklists and guidelines, but it does not auto-generate accessibility markup. Achieving structured, assistive-technology-friendly content depends on the course creator applying the provided guidelines during development (Smart Sparrow, 2025c).

Feature: AI-Based Personalization & Adaptive Learning
- AccessiLearnAI: Unified AI adaptivity. It integrates multiple AI-driven features (multi-level text summarization, automatic image alt-text, real-time TTS, and on-demand translation) to tailor content to each learner. The system dynamically adjusts material complexity and presentation based on learners’ needs and feedback, providing a highly personalized experience within one platform, guided by AI while remaining under teacher oversight.
- Moodle: Rule-based personalization, emerging AI. Moodle allows some personalization through conditional activities and learning analytics, but mostly presents the same content to all learners unless teachers manually branch lessons. It lacks built-in AI adaptation; for example, there is no automatic adjustment of reading level or difficulty based on performance. Adaptivity in Moodle comes from plugins or instructor-defined rules (Moodle, 2025a).
- Smart Sparrow: Adaptive by design. Smart Sparrow’s core focus is adaptive learning. It employs algorithms and instructor-set rules to continuously adjust content sequencing and difficulty based on student performance, and it can provide just-in-time feedback, alternate pathways, or more challenging tasks depending on how a learner responds to questions (Smart Sparrow, 2025d).

Feature: Semantic Enhancement & Offline Accessibility
- AccessiLearnAI: PWA-enabled offline learning. AccessiLearnAI leverages Progressive Web App technology to ensure content (including AI-generated summaries, translations, and audio) is available offline. Even without internet access, cached lessons retain their semantic structure and accessibility features, so a learner can use screen readers and navigate content offline. Any updates (e.g., new alt-text or teacher notes) sync automatically when the connection is restored, providing continuous learning for low-bandwidth users.
- Moodle: Mobile app for offline use. Moodle itself is a web LMS (not inherently a PWA), but it offers an official mobile app through which students can access course content offline. This ensures learners with limited connectivity are not left behind, though some interactive or external content may still require internet access. In the web interface, Moodle requires an active connection; offline use is primarily via the app rather than the browser (Moodle, 2025c).
- Smart Sparrow: Primarily online. Smart Sparrow does not provide a dedicated offline mode; it is intended for use with an active internet connection to deliver interactive, adaptive content. Lessons are accessed via the web, and the platform’s real-time analytics and feedback loop presume connectivity, so students and instructors need to be online to fully utilize the adaptive learning experiences (Smart Sparrow, 2025c).

Feature: Ethical Data Handling & Privacy Compliance
- AccessiLearnAI: Privacy-by-design. The platform incorporates strong data protection aligned with GDPR and other privacy regulations. Student data is kept secure and minimal; personal information that is not needed is either not stored or is encrypted, leveraging Laravel’s security features. AccessiLearnAI also emphasizes transparency and ethics in AI: it uses explainable AI feedback and human review of AI outputs to prevent bias, and it clearly informs users about AI-generated content. This ethical framework means adaptations are not only effective but also accountable and privacy-respecting.
- Moodle: Institution-controlled privacy. Moodle lets institutions self-host and control data, and the project includes built-in privacy features (e.g., a GDPR compliance toolkit) to assist in handling user consent and data requests (Moodle, 2025d). Moodle’s approach to AI is human-centered and transparent (guided by its AI ethics principles), but most data handling practices (retention, consent, etc.) are configured by the administering institution, reflecting Moodle’s role as a platform provider (Moodle, 2025e).
- Smart Sparrow: Standard compliance, less emphasis on AI ethics. As a commercial e-learning service, Smart Sparrow enables institutions to integrate via an LMS and follow standard data protection protocols, though it does not publicly highlight specialized privacy features. Unlike AccessiLearnAI’s explicit human-in-the-loop AI checks, Smart Sparrow did not provide insight into its algorithmic decisions, operating as a “black box” adaptive engine from the user perspective (Smart Sparrow, 2025d).

Feature: Inclusive Education Support
- AccessiLearnAI: Universal design for learning. AccessiLearnAI was created to maximize inclusion by combining accessibility and personalization. It supports multimodal content delivery (text, audio, and visuals with captions/alt-text) and on-the-fly adjustments (such as simplifying complex text or providing translations) to accommodate different needs. By bridging language barriers and disability accommodations in one system, it promotes equitable access.
- Moodle: Widely used, but no automatic personalization. Instructors can upload materials in multiple formats and enable features to support students with different needs. However, Moodle does not automatically personalize content for individual learners; the level of inclusivity depends on how educators use its features. Support comes through community-contributed plugins and good course design rather than inherent adaptivity (Moodle, 2025f).
- Smart Sparrow: Instructor-enabled adaptivity, not automatic. Smart Sparrow’s adaptivity can make learning more inclusive by addressing individual learner needs. The platform can thus support inclusive education by empowering instructors to create differentiated experiences, though it does not inherently provide features such as automatic text simplification or multilingual translation (the instructor must anticipate and build those variants).

Feature: Summarization & Content Transformation
- AccessiLearnAI: Built-in AI content transformation. AccessiLearnAI can instantly transform content into more accessible forms. Users can request an on-demand summary of any lesson or section, with the AI generating concise or detailed versions as needed. Similarly, the platform can translate content segments into other languages and produce text-to-speech audio, broadening access for non-native readers and those who prefer listening. Images without descriptions are handled by AI-generated alt-text, and even the reading level can be adjusted, all within the platform. These transformations occur seamlessly, allowing teachers and students to toggle between original and AI-enhanced content without leaving the learning environment.
- Moodle: Manual and plugin-based transformation. Moodle does not natively offer AI summarization or automatic content simplification; any summaries or alternative formats must be created by the teacher or added via external tools. For instance, an instructor might provide a summary of a reading, upload caption files for videos, or use a third-party plugin for text-to-speech, but Moodle itself treats these as additional content, not something it generates. While Moodle supports a variety of resources (PDFs, ebooks, etc.), it relies on human effort or plugins to transform content into different forms rather than doing so automatically (Moodle, 2025b).
- Smart Sparrow: No automatic summarization features. Smart Sparrow focused on adaptivity and interactivity and did not include tools to algorithmically summarize or translate content. The platform expects course content to be crafted in advance; any needed simplifications or alternative explanations would be built into the lesson by the designer. It does not, for example, generate a summary of a text or convert a lesson into another language on the fly; the system concentrated on guiding learners through authored content variants and giving feedback rather than rewriting or reformatting learning materials automatically (Smart Sparrow, 2025d).

Feature: User Experience & Ease of Use
- AccessiLearnAI: Intuitive and assistive UI. The interface is designed to be clean and straightforward, with an emphasis on accessibility and simplicity. Key AI functions are exposed via one-click buttons (“Summarize”, “Translate”, “Listen”) directly in the lesson UI, so users at all levels of technical proficiency can access them easily. Because the platform is a responsive PWA, the experience is consistent and fast on both desktop and mobile, including for those using assistive technology or low-end devices. By unifying features in a single system, it avoids the patchwork feel of plugins; navigation and controls are cohesive.
- Moodle: Feature-rich, improved over time. Moodle is known for its comprehensive feature set (Moodle, 2025b) and is considered reasonably user-friendly for most educators and learners, but new users can find it overwhelming at first, given its vast array of options and settings. Some users note that the interface can feel complex without customization (though many themes exist to improve it). In practice, with minimal training, most find it easy to navigate courses, and the extensive community documentation helps mitigate usability issues (Al-Fraihat et al., 2025).
- Smart Sparrow: Empowering, but requires design effort. Smart Sparrow provides a robust authoring environment that is graphical and meant to be relatively easy for instructors to use without programming. It offers editable templates and a library of content components that facilitate creating interactive, visually rich lessons (Smart Sparrow, 2025d). Smart Sparrow aimed for a balance in which instructors are empowered to create the user experience for students: the platform provides the means, but ease of use ultimately depends on the instructor’s skill and effort.

Feature: Scalability & Technical Framework
- AccessiLearnAI: Scalable modern architecture. AccessiLearnAI is built on the Laravel PHP framework (MVC pattern) with a microservices-oriented backend. This modular design means different services (AI processing, content management, etc.) can scale independently and be updated without monolithic disruption. The platform supports multi-tenancy (multiple institutions or courses on the same system) and uses caching and job queues to handle high loads efficiently.
- Moodle: Proven at large scale. Moodle uses a monolithic PHP architecture with a modular plugin system. While not microservices-based, it can run in clustered environments and is highly configurable for performance. The trade-off is that adopting very new technology (such as certain AI features) can be slower due to legacy support, but numerous APIs and plugin points exist to extend Moodle’s functionality within its stable core framework (Moodle, 2025d).
- Smart Sparrow: Cloud service with enterprise adoption. Smart Sparrow’s platform is delivered via cloud infrastructure; exact technical details are proprietary, but it is built to integrate with other systems for wide deployment (Smart Sparrow, 2025b). However, Smart Sparrow often complemented an existing LMS rather than replacing it entirely, meaning it typically scaled within specific courses or modules rather than as a single system running an entire university’s operations.
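The offline behavior compared in Table 1 (serving cached lessons without connectivity and syncing deferred updates on reconnect) follows a common PWA pattern. In the real platform this logic would live in a JavaScript service worker; the minimal Python model below only illustrates the cache-fallback and deferred-sync idea, and every class and method name in it is hypothetical.

```python
class OfflineLessonStore:
    """Toy model of a PWA cache: serve cached lessons offline, queue writes."""

    def __init__(self):
        self.cache: dict[str, str] = {}            # lesson_id -> cached HTML
        self.pending: list[tuple[str, str]] = []   # writes queued while offline
        self.online = True

    def fetch(self, lesson_id: str, network_fetch) -> str:
        """Network-first with cache fallback, as a service worker might do."""
        if self.online:
            body = network_fetch(lesson_id)
            self.cache[lesson_id] = body   # refresh the cache on every success
            return body
        return self.cache[lesson_id]       # offline: serve the cached copy

    def save_note(self, lesson_id: str, note: str, network_push) -> None:
        """Push immediately when online; otherwise queue for later sync."""
        if self.online:
            network_push(lesson_id, note)
        else:
            self.pending.append((lesson_id, note))

    def reconnect(self, network_push) -> None:
        """Flush queued writes in order once connectivity is restored."""
        self.online = True
        while self.pending:
            network_push(*self.pending.pop(0))
```

Flushing the queue on reconnect mirrors the "updates sync automatically when the connection is restored" behavior described in the table, while the cache fallback keeps semantically structured lessons readable by assistive technology even with no network at all.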
