Search Results (62)

Search Parameters:
Keywords = web front-end

24 pages, 607 KB  
Article
Lunor: A Domain-Specific Language with Language Server Protocol Support for Rapid Prototyping of Front-End Web Applications
by Tomaž Kosar, Mateja Žvegler, Frédéric Loulergue and Marjan Mernik
Mathematics 2026, 14(7), 1163; https://doi.org/10.3390/math14071163 - 31 Mar 2026
Viewed by 254
Abstract
Modern web application development using frameworks such as React often requires writing a significant amount of initial code before reaching the stage where development becomes engaging. To address this, we developed Lunor, a domain-specific language that can be used in the early phases of front-end development by allowing developers to describe web interfaces in a clear, human-readable syntax that incorporates Markdown for defining the content of a web application. The proposed solution integrates three key components: the Lunor language definition, a template-based code generation, and a Visual Studio Code (VS Code) extension built on the Language Server Protocol, forming a comprehensive environment for efficient web development. Lunor enables rapid prototyping and the creation of simple, yet fully functional web applications, while the generated code remains compatible with standard web technologies for further expansion. Lunor demonstrates that domain-specific languages can simplify front-end web development effectively and integrate seamlessly into the modern web development process. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
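The template-based code generation this abstract describes can be sketched in miniature. Note that the spec dictionary and template below are hypothetical stand-ins for illustration only, not Lunor's actual syntax or generator, which the paper itself defines:

```python
# Illustrative only: the spec dict and the template are hypothetical
# stand-ins, not Lunor's actual DSL syntax or code generator.
from string import Template

COMPONENT_TEMPLATE = Template(
    "export function $name() {\n"
    "  return <section><h1>$title</h1><p>$body</p></section>;\n"
    "}\n"
)

def generate_component(spec: dict) -> str:
    """Expand a declarative page description into React-style source code."""
    return COMPONENT_TEMPLATE.substitute(
        name=spec["name"], title=spec["title"], body=spec["body"]
    )

code = generate_component(
    {"name": "About", "title": "About us", "body": "Generated from a DSL spec."}
)
print(code)
```

The point of such a generator is that the emitted code stays plain React-compatible source, so developers can keep editing it by hand after the prototype stage.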

24 pages, 304 KB  
Article
Security Risks in Responsive Web Design Frameworks
by Fernando Almeida and Carlos Sousa
Digital 2026, 6(1), 26; https://doi.org/10.3390/digital6010026 - 21 Mar 2026
Viewed by 400
Abstract
This study addresses a gap in the literature by explicitly linking responsive web design frameworks to concrete cybersecurity vulnerabilities, moving beyond traditional discussions of usability and device compatibility to incorporate security-by-design principles in contemporary frontend development. The research adopts a qualitative comparative approach and considers five widely used responsive design frameworks: Bootstrap, Tailwind CSS, Foundation, Pure CSS, and Skeleton. These frameworks were selected based on criteria such as maturity, adoption, and architectural diversity. Three research questions guide the analysis: the identification of cybersecurity risks associated with responsive design frameworks, the extent to which these risks vary across frameworks, and the mitigation strategies required to address them. The findings confirm that most critical vulnerabilities originate outside the frontend layer, reinforcing the separation between presentation and backend logic. However, the results demonstrate that frameworks significantly influence the security risk profile, particularly regarding cross-site scripting, dependency management, and configuration practices. Modern utility-first frameworks shift security concerns toward the build pipeline and toolchain, while minimalistic and abandoned frameworks introduce risks related to obsolescence and unpatched “forever-day” vulnerabilities. The study concludes that frontend security depends less on framework choice alone and more on governance, continuous maintenance, and the systematic adoption of secure development and DevSecOps practices. Full article
24 pages, 2591 KB  
Article
AI-Driven IFC Processing for Automated IBS Scoring
by Annamária Behúnová, Matúš Pohorenec, Lucia Ševčíková and Marcel Behún
Algorithms 2026, 19(3), 178; https://doi.org/10.3390/a19030178 - 27 Feb 2026
Viewed by 531
Abstract
The assessment of Industrialized Building System (IBS) adoption in construction projects—a critical metric for evaluating prefabrication levels and construction modernization—remains largely manual, time-intensive, and prone to inconsistencies, with practitioners typically requiring 4–8 h to evaluate a single building using spreadsheet-based frameworks and visual documentation review. This paper presents a novel AI-enhanced workflow architecture that automates IBS scoring through systematic processing of Industry Foundation Classes (IFC) building information models—the first documented integration of web-based IFC processing, visual workflow automation (n8n), and large language model (LLM) reasoning specifically for construction industrialization assessment. The proposed system integrates a web-based frontend for IFC file upload and configuration, an n8n workflow automation backend orchestrating data transformation pipelines, and an Azure OpenAI-powered scoring engine (GPT-4o-mini and GPT-5-0-mini) that applies Construction Industry Standard (CIS) 18:2023 rules to extracted building data. Experimental validation across 136 diverse IFC building models (ranging from 0.01 MB to 136.26 MB) achieved a 100% processing success rate with a median processing duration of 61.62 s per model, representing approximately 99% time reduction compared to conventional manual assessment requiring 4–8 h of expert practitioner effort. The system demonstrated consistent scoring performance with IBS scores ranging from 31.24 to 100.00 points (mean 37.14, SD 8.84), while GPT-5-0-mini exhibited 71% faster inference (mean 23.4 s) compared to GPT-4o-mini (mean 80.2 s) with no significant scoring divergence, validating prompt engineering robustness across model generations. Processing efficiency scales approximately linearly with file size (0.67 s per megabyte), enabling real-time design feedback and portfolio-scale batch processing previously infeasible with manual methods. 
Unlike prior rule-based compliance checking systems requiring extensive manual programming, this approach leverages LLM semantic reasoning to interpret ambiguous construction classifications while maintaining deterministic scoring through structured prompt engineering. The system addresses key interoperability challenges in IFC data heterogeneity while maintaining traceability and compliance with established scoring methodologies. This research establishes a replicable architectural pattern for BIM-AI integration in construction analytics and positions LLM-enhanced IFC processing as a practical, accessible approach for industrialization evaluation that democratizes advanced assessment capabilities through open-source workflow automation technologies. Full article
(This article belongs to the Special Issue AI Applications and Modern Industry)

27 pages, 3371 KB  
Article
An Airflow-Orchestrated AI Pipeline for Podcast Transcription, Topic Modeling, and Recommendation System
by Ioannis Kazlaris, Georgios Papadopoulos, Konstantinos Diamantaras, Marina Delianidi, Eftychia Touliou and Anagnostis Yenitzes
Multimedia 2026, 2(1), 1; https://doi.org/10.3390/multimedia2010001 - 9 Jan 2026
Viewed by 1357
Abstract
This study presents a production-ready AI pipeline for audio content processing, implemented within the Youth Radio platform, which serves as an extension of the European School Radio initiative. The system uses a multi-server architecture: an AI Server that runs batch/offline jobs, orchestrated by Apache Airflow, and two Web Servers that deliver all the Backend as well as the Frontend applications, configured with load balancing and redundancy to ensure high availability and fault tolerance. The implemented AI Pipeline includes tasks such as preprocessing, transcription, audio classification and topic modeling. Processed Podcasts are indexed in a Qdrant vector database to facilitate both dense and sparse retrieval while a recommendation system enriches the user’s experience. We summarize design choices and report system-level metrics and task-level indicators (ASR quality after correction, retrieval effectiveness) to guide similar deployments. Full article

28 pages, 4228 KB  
Article
Optimizing Access to Interoperability Resources in Mobility Through Context-Aware Large Language Models (LLMs)
by Sudarsana Varma Mandapati, Vishal C. Kummetha, Sisinnio Concas and Lisa Staes
Electronics 2026, 15(1), 152; https://doi.org/10.3390/electronics15010152 - 29 Dec 2025
Viewed by 904
Abstract
This study presents the development and implementation of a functional system that utilizes large language models (LLMs) to improve the identification, organization, and retrieval of mobility interoperability resources. The established framework assists novice and experienced implementers of mobility services such as planning organizations and multimodal transportation agencies to efficiently access interoperability resources, such as standards and case studies, which are often dispersed and difficult to navigate. The web-based system includes a backend that generates abstracts and tags and a frontend that supports manual or chatbot-based search. A prompt-refinement mechanism suggests improved queries within the context of mobility interoperability when no matches are found. To validate the quality of LLM-generated abstracts and tags, subject matter experts reviewed outputs from multiple prompt iterations to assess accuracy and clarity. Of the 82 resources evaluated, 72% of abstracts met expert expectations for relevance, while 91% of the tags were considered appropriate. A comprehensive case study of 330 representative user queries was also conducted to evaluate the chatbot’s output. Overall, the presented framework aims to reduce cataloging effort, improve classification consistency, and improve accessibility to relevant information. With minimal setup costs, the system offers a scalable and cost-effective solution for managing large, uncatalogued repositories. Full article

28 pages, 1383 KB  
Article
Dynamic Frontend Architecture for Runtime Component Versioning and Feature Flag Resolution in Regulated Applications
by Roman Fedytskyi
Software 2025, 4(4), 32; https://doi.org/10.3390/software4040032 - 8 Dec 2025
Viewed by 1470
Abstract
Regulated web systems require traceable, rollback-safe UI delivery, yet conventional static deployments and Boolean flagging struggle to provide per-user versioning, deterministic fallbacks, and audit-grade observability. The objective of this research is to develop and validate a runtime frontend architecture that enables per-session component versioning with deterministic fallbacks and audit-grade traceability for regulated systems. We present a dynamic frontend architecture that integrates typed GraphQL flag schemas, runtime module federation, and structured observability to enable per-session and per-route component versioning with deterministic fallbacks. We formalize a version-resolution function v = f(u, r, t) and implement a production system that achieved a 96% reduction in MTTR, a P90 fallback rate below 0.7%, and over 280 k session-level logs across 45 days. Compared to static delivery and standard flag evaluators, our approach adds schema-driven targeting, component-level isolation, and audit-ready render traces suitable for compliance. Limitations include cold-start overhead and governance complexity; we provide mitigation strategies and discuss portability beyond fintech. Full article
(This article belongs to the Topic Software Engineering and Applications)
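The version-resolution function v = f(u, r, t) in the abstract above admits a compact sketch: resolve a component version from the user's cohort (u), the route (r), and the request time (t), with a deterministic fallback when no rule matches. The rule shape and field names below are assumptions for illustration, not the paper's actual schema:

```python
# Hypothetical sketch of v = f(u, r, t). Rule fields and names are
# illustrative assumptions, not the paper's typed GraphQL flag schema.
from dataclasses import dataclass

STABLE_VERSION = "1.0.0"  # deterministic, audit-friendly fallback

@dataclass(frozen=True)
class FlagRule:
    cohort: str       # user segment the rule targets (u)
    route: str        # route the rule applies to (r)
    active_from: int  # epoch seconds at which the rule takes effect (t)
    version: str      # component version to serve (v)

def resolve_version(u: str, r: str, t: int, rules: list[FlagRule]) -> str:
    """First matching rule wins; otherwise fall back deterministically."""
    for rule in rules:
        if rule.cohort == u and rule.route == r and t >= rule.active_from:
            return rule.version
    return STABLE_VERSION

rules = [FlagRule(cohort="beta", route="/checkout", active_from=100, version="2.1.0")]
```

Because the fallback is a pure function of the inputs, every resolution (including a fallback) can be logged with its inputs, which is what makes the render traces audit-ready.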

35 pages, 2077 KB  
Article
Symmetry-Aware Causal-Inference-Driven Web Performance Modeling: A Structure-Aware Framework for Predictive Analysis and Actionable Optimization
by Han Lin and Wenhe Liu
Symmetry 2025, 17(12), 2058; https://doi.org/10.3390/sym17122058 - 2 Dec 2025
Cited by 1 | Viewed by 1081
Abstract
Understanding and improving web performance is essential for enhancing user experience, yet existing approaches remain largely correlation-based and lack causal interpretability. To address this limitation, we propose a causal-inference-driven framework for diagnosing and optimizing user-centric Web Vitals such as Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS). Our contributions are threefold. (1) We construct a comprehensive feature representation that captures Document Object Model (DOM) structure, resource loading behaviors, rendering characteristics, and JavaScript execution, integrating browser-level domain knowledge into the modeling pipeline. (2) We introduce a hybrid causal discovery method that combines constraint-based reasoning with differentiable score-based learning to estimate high-dimensional causal structures reflecting real rendering processes. (3) We develop a causal-effect-based intervention optimization module that leverages counterfactual reasoning to identify actionable modifications for performance improvement. Our framework further leverages structural symmetries inherent in rendering processes, using repeated layout patterns and invariant dependency flows to reduce redundancy and strengthen the stability and identifiability of causal discovery. Extensive experiments on HTTP Archive, Chrome UX Report (CrUX), and a synthetic ground truth dataset demonstrate that our framework achieves higher causal accuracy, more stable predictive performance, more effective intervention recommendations, and improved interpretability compared with existing rule-based, statistical, and machine learning baselines. These results highlight the potential of causality-aware analysis for practical web performance optimization. Full article
(This article belongs to the Section Mathematics)

29 pages, 1978 KB  
Review
Large Language Models in Mechanical Engineering: A Scoping Review of Applications, Challenges, and Future Directions
by Christopher Baker, Karen Rafferty and Mark Price
Big Data Cogn. Comput. 2025, 9(12), 305; https://doi.org/10.3390/bdcc9120305 - 30 Nov 2025
Viewed by 2841
Abstract
Following PRISMA-ScR guidelines, this scoping review systematically maps the landscape of Large Language Models (LLMs) in mechanical engineering. A search of four major databases (Scopus, IEEE Xplore, ACM Digital Library, Web of Science) and a rigorous screening process yielded 66 studies for final analysis. The findings reveal a nascent, rapidly accelerating field, with over 68% of publications from 2024 (representing a year-on-year growth of 150% from 2023 to 2024), and applications concentrated on front-end design processes like conceptual design and Computer-Aided Design (CAD) generation. The technological landscape is dominated by OpenAI’s GPT-4 variants. A persistent challenge identified is weak spatial and geometric reasoning, shifting the primary research bottleneck from traditional data scarcity to inherent model limitations. This, alongside reliability concerns, forms the main barrier to deeper integration into engineering workflows. A consensus on future directions points to the need for specialized datasets, multimodal inputs to ground models in engineering realities, and robust, engineering-specific benchmarks. This review concludes that LLMs are currently best positioned as powerful ‘co-pilots’ for engineers rather than autonomous designers, providing an evidence-based roadmap for researchers, practitioners, and educators. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))

22 pages, 4967 KB  
Article
TreeHelper: A Wood Transport Authorization and Monitoring System
by Alexandru-Mihai Zvîncă, Sebastian-Ioan Petruc, Razvan Bogdan, Marius Marcu and Mircea Popa
Sensors 2025, 25(21), 6713; https://doi.org/10.3390/s25216713 - 3 Nov 2025
Viewed by 839
Abstract
This paper proposes TreeHelper, an IoT solution that aims to improve authorization and monitoring practices, in order to help authorities act faster and save essential elements of the environment. It is composed of two important parts: a web platform and an edge AI device placed on the routes of tree-logging trucks. The web platform is built using Spring Boot for the backend, React for the frontend and PostgreSQL as the database. It allows transporters to request wood transport authorizations in a straightforward manner, while giving authorities the ability to review and decide on these requests. The smart monitoring device consists of a Raspberry Pi for processing, a camera for capturing live video, a Coral USB Accelerator to accelerate model inference and a SIM7600 4G HAT for communication and GPS data acquisition. The model used is YOLOv11n and it is trained on a custom dataset of tree-logging truck images. Model inference is run on the frames of the live camera feed and, if a truck is detected, the frame is sent to a cloud ALPR service to extract the license plate number. Then, using the 4G connection, the license plate number is sent to the backend and a check for an associated authorization is performed. If nothing is found, the authorities are alerted through an SMS message containing the license plate number and the GPS coordinates, so they can act accordingly. Edge TPU acceleration approximately doubles TreeHelper’s throughput (from around 5 FPS average to above 10 FPS) and halves its mean inference latency (from around 200 ms average to under 100 ms) compared with CPU-only execution. It also improves p95 latency and lowers CPU temperature. The YOLOv11n model, trained on 1752 images, delivers high validation performance (precision = 0.948; recall = 0.944; strong mAP: mAP50 = 0.967; mAP50-95 = 0.668), allowing for real-time monitoring. Full article
(This article belongs to the Section Internet of Things)

26 pages, 1958 KB  
Article
Real-Time Heartbeat Classification on Distributed Edge Devices: A Performance and Resource Utilization Study
by Eko Sakti Pramukantoro, Kasyful Amron, Putri Annisa Kamila and Viera Wardhani
Sensors 2025, 25(19), 6116; https://doi.org/10.3390/s25196116 - 3 Oct 2025
Viewed by 1192
Abstract
Early detection is crucial for preventing heart disease. Advances in health technology, particularly wearable devices for automated heartbeat detection and machine learning, can enhance early diagnosis efforts. However, previous studies on heartbeat classification inference systems have primarily relied on batch processing, which introduces delays. To address this limitation, a real-time system utilizing stream processing with a distributed computing architecture is needed for continuous, immediate, and scalable data analysis. Real-time ECG inference is particularly crucial for immediate heartbeat classification, as human heartbeats occur with durations between 0.6 and 1 s, requiring inference times significantly below this threshold for effective real-time processing. This study implements a real-time heartbeat classification inference system using distributed stream processing with LSTM-512, LSTM-256, and FCN models, incorporating RR-interval, morphology, and wavelet features. The system is developed as a distributed web-based application using the Flask framework with distributed backend processing, integrating Polar H10 sensors via Bluetooth and Web Bluetooth API in JavaScript. The implementation consists of a frontend interface, distributed backend services, and coordinated inference processing. The frontend handles sensor pairing and manages real-time streaming for continuous ECG data transmission. The backend processes incoming ECG streams, performing preprocessing and model inference. Performance evaluations demonstrate that LSTM-based heartbeat classification can achieve real-time performance on distributed edge devices by carefully selecting features and models. Wavelet-based features with an LSTM-Sequential architecture deliver optimal results, achieving 99% accuracy with balanced precision-recall metrics and an inference time of 0.12 s—well below the 0.6–1 s heartbeat duration requirement. 
Resource analysis on Jetson Orin devices reveals that Wavelet-FCN models offer exceptional efficiency with 24.75% CPU usage, minimal GPU utilization (0.34%), and 293 MB memory consumption. The distributed architecture’s dynamic load balancing ensures resilience under varying workloads, enabling effective horizontal scaling. Full article
(This article belongs to the Special Issue Advanced Sensors for Human Health Management)

29 pages, 2319 KB  
Article
Research on the Development of a Building Model Management System Integrating MQTT Sensing
by Ziang Wang, Han Xiao, Changsheng Guan, Liming Zhou and Daiguang Fu
Sensors 2025, 25(19), 6069; https://doi.org/10.3390/s25196069 - 2 Oct 2025
Cited by 1 | Viewed by 2459
Abstract
Existing building management systems face critical limitations in real-time data integration, primarily relying on static models that lack dynamic updates from IoT sensors. To address this gap, this study proposes a novel system integrating MQTT over WebSocket with Three.js visualization, enabling real-time sensor-data binding to Building Information Models (BIM). The architecture leverages MQTT’s lightweight publish-subscribe protocol for efficient communication and employs a TCP-based retransmission mechanism to ensure 99.5% data reliability in unstable networks. A dynamic topic-matching algorithm is introduced to automate sensor-BIM associations, reducing manual configuration time by 60%. The system’s frontend, powered by Three.js, achieves browser-based 3D visualization with sub-second updates (280–550 ms latency), while the backend utilizes SpringBoot for scalable service orchestration. Experimental evaluations across diverse environments—including high-rise offices, industrial plants, and residential complexes—demonstrate the system’s robustness: real-time monitoring (fire alarms triggered within 2.1 s, 22% faster than legacy systems); network resilience (98.2% availability under 30% packet loss); and user efficiency (a 4.6/5 satisfaction score from facility managers). This work advances intelligent building management by bridging IoT data with interactive 3D models, offering a scalable solution for emergency response, energy optimization, and predictive maintenance in smart cities. Full article
(This article belongs to the Section Intelligent Sensors)
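The dynamic topic matching this abstract mentions rests on MQTT's standard wildcard rules ('+' matches one topic level, '#' matches the remainder). A minimal sketch of that core, with a hypothetical sensor-to-BIM binding table; the paper's actual association algorithm is richer than this:

```python
def mqtt_topic_matches(pattern: str, topic: str) -> bool:
    """Standard MQTT wildcards: '+' matches one level, '#' the remainder."""
    p_parts, t_parts = pattern.split("/"), topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":
            return True
        if i >= len(t_parts) or p not in ("+", t_parts[i]):
            return False
    return len(p_parts) == len(t_parts)

# Hypothetical binding table mapping topic patterns to BIM element IDs.
BINDINGS = {"building/+/temperature": "bim:temp-zone"}

def bind_sensor(topic: str):
    """Return the BIM element bound to the first matching topic pattern."""
    return next(
        (elem for pat, elem in BINDINGS.items() if mqtt_topic_matches(pat, topic)),
        None,
    )
```

With pattern-based bindings, a sensor added on a new floor publishes to, say, `building/floor7/temperature` and is associated automatically, which is where the reported configuration-time savings come from.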

20 pages, 798 KB  
Article
Evaluating Generative AI for HTML Development
by Ahmad Salah Alahmad and Hasan Kahtan
Technologies 2025, 13(10), 445; https://doi.org/10.3390/technologies13100445 - 1 Oct 2025
Viewed by 2997
Abstract
The adoption of generative Artificial Intelligence (AI) tools in web development implementation tasks is increasing exponentially. This paper evaluates the performance of five leading Generative AI models: ChatGPT-4.0, DeepSeek-V3, Gemini-1.5, Copilot (March 2025 release), and Claude-3, in building HTML components. This study presents a structured evaluation of AI-generated HTML code produced by leading Generative AI models. We have designed a set of prompts for popular tasks to generate five standardized HTML components: a contact form, a navigation menu, a blog post layout, a product listing page, and a dashboard interface. The responses were evaluated across five dimensions: semantic structure, accessibility, efficiency, readability, and search engine optimization (SEO). Results show that while AI-generated HTML can achieve high validation scores, deficiencies remain in semantic structuring and accessibility, with measurable differences between models. The results show variation in the quality and structure of the generated HTML. These results provide practical insights into the limitations and strengths of the current use of AI tools in HTML development. Full article
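One of the five evaluation dimensions above, semantic structure, can be approximated with a crude standard-library metric: the share of elements that use semantic HTML tags rather than generic `div`/`span` wrappers. The tag set and ratio below are a toy assumption for illustration, not the paper's actual rubric:

```python
# Toy metric for one evaluation dimension (semantic structure); the tag
# set and scoring are illustrative assumptions, not the paper's rubric.
from html.parser import HTMLParser

SEMANTIC_TAGS = {"header", "nav", "main", "section", "article",
                 "footer", "form", "label"}

class SemanticCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.semantic = 0
        self.total = 0

    def handle_starttag(self, tag, attrs):
        self.total += 1
        if tag in SEMANTIC_TAGS:
            self.semantic += 1

def semantic_ratio(html: str) -> float:
    """Fraction of elements that are semantic rather than generic wrappers."""
    parser = SemanticCounter()
    parser.feed(html)
    return parser.semantic / parser.total if parser.total else 0.0
```

A navigation menu built from `nav` and `form` elements scores high on this metric, while the same layout built from nested `div`s scores zero, which mirrors the semantic-structuring deficiencies the study reports.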

24 pages, 817 KB  
Article
Leveraging Large Language Models for Sustainable and Inclusive Web Accessibility
by Manuel Andruccioli, Barry Bassi, Giovanni Delnevo and Paola Salomoni
Big Data Cogn. Comput. 2025, 9(10), 247; https://doi.org/10.3390/bdcc9100247 - 26 Sep 2025
Viewed by 2070
Abstract
The increasing complexity of modern web applications, which are composed of dynamic and asynchronous components, poses a significant challenge for digital inclusion. Traditional automated tools typically analyze only the static HTML markup generated by frontend and backend frameworks. Recent advances in Large Language Models (LLMs) offer a novel approach to enhance the validation process by directly analyzing the source code. In this paper, we investigate the capacity of LLMs to interpret and reason dynamically generated content, providing real-time feedback on web accessibility. Our findings show that LLMs can correctly anticipate the presence of accessibility violations in the generated HTML code, going beyond the capabilities of traditional validators, also evaluating possible issues due to the asynchronous execution of the web application. However, together with legitimate issues, LLMs also produced a relevant number of hallucinated or redundant violations. This study contributes to the broader effort of employing AI with the aim of improving the inclusivity and equity of the web. Full article
(This article belongs to the Special Issue Generative AI and Large Language Models)
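For contrast with the LLM-based analysis above, the kind of static check traditional validators perform fits in a few lines; here, flagging `img` elements without an `alt` attribute using only the standard library (a minimal sketch, not any particular validator's implementation):

```python
# Minimal static accessibility check: flag <img> tags missing alt text.
# Illustrative of what static validators do; the LLM approach in the
# paper additionally reasons about dynamically generated markup.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []  # (line, column) of each offending <img>

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())

    def handle_startendtag(self, tag, attrs):
        # Treat self-closing <img ... /> the same as <img ...>.
        self.handle_starttag(tag, attrs)

checker = MissingAltChecker()
checker.feed('<p><img src="a.png"><img src="b.png" alt="logo"></p>')
```

Such a checker only sees markup that exists at parse time; content inserted asynchronously by a frontend framework is invisible to it, which is the gap the paper's source-code-level LLM analysis targets.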

13 pages, 874 KB  
Data Descriptor
The Tabular Accessibility Dataset: A Benchmark for LLM-Based Web Accessibility Auditing
by Manuel Andruccioli, Barry Bassi, Giovanni Delnevo and Paola Salomoni
Data 2025, 10(9), 149; https://doi.org/10.3390/data10090149 - 19 Sep 2025
Cited by 1 | Viewed by 2989
Abstract
This dataset was developed to support research at the intersection of web accessibility and Artificial Intelligence, with a focus on evaluating how Large Language Models (LLMs) can detect and remediate accessibility issues in source code. It consists of code examples written in PHP, Angular, React, and Vue.js, organized into accessible and non-accessible versions of tabular components. A substantial portion of the dataset was collected from student-developed Vue components, implemented using both the Options and Composition APIs. The dataset is structured to enable both a static analysis of source code and a dynamic analysis of rendered outputs, supporting a range of accessibility research tasks. All files are in plain text and adhere to the FAIR principles, with open licensing (CC BY 4.0) and long-term hosting via Zenodo. This resource is intended for researchers and practitioners working on LLM-based accessibility validation, inclusive software engineering, and AI-assisted frontend development. Full article
(This article belongs to the Section Information Systems and Data Management)

31 pages, 2118 KB  
Article
Leveraging Multimodal Information for Web Front-End Development Instruction: Analyzing Effects on Cognitive Behavior, Interaction, and Persistent Learning
by Ming Lu and Zhongyi Hu
Information 2025, 16(9), 734; https://doi.org/10.3390/info16090734 - 26 Aug 2025
Cited by 2 | Viewed by 1981
Abstract
This study focuses on the mechanisms of behavior and cognition, providing a comprehensive analysis of the innovative path of multimodal learning theory in the teaching practice of the “Web Front-end Development” course. This study integrates different sensory modes, such as vision, hearing, and haptic feedback, with the core objective of exploring the specific impact of this multi-sensory integration form on students’ cognitive engagement status, classroom interaction styles, and long-term learning behavior. We employed a mixed-methods approach in this study. On the one hand, we conducted a quasi-experiment involving 120 undergraduate students. On the other hand, research methods such as behavioral coding, in-depth interviews, and longitudinal tracking were also employed. Results show that multimodal teaching significantly reduces cognitive load (a 34.9% reduction measured by NASA-TLX), increases the frequency of collaborative interactions (2.3 times per class), and extends voluntary practice time (8.5 h per week). Mechanistically, these effects are mediated by enhanced embodied cognition (strengthening motor-sensory memory), optimized cognitive load distribution (reducing extraneous mental effort), and the fulfillment of intrinsic motivational needs (autonomy, competence, relatedness) as framed by self-determination theory. This study bridges a gap between educational technology and behavioral science. We have developed a comprehensive framework that provides practical guidance for designing technology-enhanced learning environments. With such a framework, learners can not only master technical skills more smoothly but also sustain their enthusiasm for learning and remain engaged over the long term. Full article
(This article belongs to the Special Issue Digital Systems in Higher Education)
