Article

A Generative AI-Enhanced Robotic Desktop Automation Framework for Multi-System Nephrology Data Entry in Government Healthcare Platforms

1 College of Arts, Media and Technology, Chiang Mai University, Chiang Mai 50200, Thailand
2 Department of Library and Information Science, Faculty of Humanities, Chiang Mai University, Chiang Mai 50200, Thailand
3 Faculty of Public Health, Chiang Mai University, Chiang Mai 50200, Thailand
* Authors to whom correspondence should be addressed.
Technologies 2025, 13(12), 558; https://doi.org/10.3390/technologies13120558
Submission received: 17 October 2025 / Revised: 12 November 2025 / Accepted: 26 November 2025 / Published: 29 November 2025
(This article belongs to the Special Issue AI-Enabled Smart Healthcare Systems)

Abstract

This study introduces a Generative AI-Enhanced Robotic Data Automation (AI-ERDA) framework designed to improve accuracy, efficiency, and adaptability in healthcare data workflows. Conducted over a two-month, real-world experiment across three government health platforms—one web-based (NHSO) and two PC-based systems (CHi and TRT)—the study compared the performance of AI-ERDA against a conventional RDA system in terms of usability, automation accuracy, and resilience to user interface (UI) changes. Results demonstrated notable improvements in both usability and reliability. The AI-ERDA achieved a mean System Usability Scale (SUS) score of 80, compared with 68 for the traditional RDA, while Field Exact Match Accuracy increased by 1.8 percent in the web system and by 0.2 to 0.3 percent in the PC systems. During actual UI modifications, the AI-ERDA maintained near-perfect accuracy, with rapid self-correction within one day, whereas the baseline RDA required several days of manual reconfiguration and assistance from the development team to resolve issues. These findings indicate that generative and adaptive automation can effectively reduce manual workload, minimize downtime, and maintain high data integrity across heterogeneous systems. By integrating adaptive learning, semantic validation, and human-in-the-loop oversight, the AI-ERDA framework advances sustainable digital transformation and reinforces transparency, trust, and accountability in healthcare data management.

1. Introduction

Chronic kidney disease (CKD) represents a growing public health challenge worldwide, with a substantial increase in end-stage renal disease (ESRD) cases requiring dialysis care. In Thailand alone, there are more than 80,000 patients receiving hemodialysis, distributed across over 2500 dialysis centers nationwide [1]. These centers are responsible for collecting and reporting large volumes of clinical and administrative data to government agencies. However, due to strict cybersecurity and privacy policies, there are no interoperable application programming interfaces (APIs) to enable automated data exchange between dialysis centers and national health information systems. Consequently, nurses and administrative staff must manually enter the same data into multiple government platforms, including one web-based portal and two legacy desktop applications. This repetitive work leads to inefficiencies, delays, and a heightened risk of human errors, diverting clinical staff from direct patient care and contributing to burnout [2,3].
Robotic Process Automation (RPA) has emerged as a promising solution for automating rule-based workflows in healthcare, such as billing, claims management, and patient scheduling. However, conventional RPA solutions are often brittle and fail when user interfaces (UIs) are modified, particularly in heterogeneous environments where web and desktop systems must be integrated [4,5,6]. Recent studies emphasize the need for integrating RPA with advanced artificial intelligence to create intelligent automation frameworks that can adapt dynamically while keeping governance and sustainability considerations in view [7,8,9]. This integration is particularly relevant in nephrology, where high-frequency dialysis sessions generate repetitive documentation tasks requiring rapid, accurate data transfer between systems [10].
Generative artificial intelligence (GenAI), particularly large language models (LLMs), introduces new capabilities that extend beyond deterministic automation. In healthcare contexts, LLMs have been applied to tasks such as clinical note generation, anomaly detection, and semantic validation, demonstrating the potential to enhance the quality and efficiency of information workflows [11]. When embedded within RPA systems, GenAI can provide context-aware reasoning, automatic text generation for free-text fields, and self-healing UI adaptation, allowing bots to continue functioning even when interfaces evolve [12]. Furthermore, the inclusion of human-in-the-loop (HITL) mechanisms ensures that automation enhances safety and accountability, especially in high-stakes environments such as dialysis care [13,14,15].
Traditional RDA systems, while effective for automating repetitive desktop tasks, exhibit several critical limitations in complex healthcare environments. First, they depend on static, locator-based scripts and fixed UI element identifiers, which makes them brittle when layouts or element IDs change after routine software updates—especially in web-based applications that are updated frequently [4,6]. Second, because traditional RDA operates without semantic understanding or contextual validation, bots may mimic user actions yet fail to detect cross-system inconsistencies or invalid entries, thereby propagating input errors [9]. Third, conventional RDA deployments are often tied to single workstations and tightly coupled desktop environments, limiting scalability and interoperability in hospitals that combine on-premise applications with secure cloud portals; these constraints are well-recognized in governance and internal-control discussions of RPA [8]. Collectively, these technical and operational weaknesses constrain their long-term sustainability and hinder widespread adoption in mission-critical settings such as dialysis care.
This study addresses the remaining bottleneck of manual multi-system data entry. We propose a Generative AI-enhanced Robotic Desktop Automation framework designed to autonomously navigate one web-based and two desktop-based government systems, perform accurate and efficient data entry, and adapt dynamically to UI changes. This research project, supported by research funding from the National Research Council of Thailand, seeks to develop and evaluate a prototype for using RDA to streamline data entry processes in nephrology centers across approximately 2500 clinics in Thailand. The objectives of this study are threefold: (1) to design and implement an improved RDA framework for multi-system nephrology data workflows; (2) to evaluate its performance in terms of automation accuracy, time efficiency, error prevention, usability, and resilience to interface changes; and (3) to assess its potential to reduce clerical workload and improve user satisfaction in real-world dialysis center operations. By combining deterministic automation with adaptive generative intelligence and HITL oversight, this framework aims to advance Thailand’s digital health transformation while respecting existing governance and security constraints.

2. Related Work

2.1. Robotic Process Automation and Robotic Desktop Automation in Healthcare

RPA is a rule-based technology that enables software robots to mimic human interactions with digital systems through the user interface, without requiring modifications to underlying system architectures. It is particularly suitable for repetitive, structured tasks such as form filling, data migration, and cross-system information retrieval. In healthcare, RPA has been applied to improve administrative processes, including billing, claims management, appointment scheduling, and patient record updates, thereby reducing clerical workload and minimizing manual data-entry errors [4,16]. Robotic Desktop Automation (RDA), a subtype of RPA, operates at the desktop level, making it well-suited for organizations that rely on a mixture of legacy desktop applications and web portals. This characteristic is especially relevant in public healthcare systems, where heterogeneous platforms must be integrated without direct API access. However, traditional RPA and RDA implementations are inherently fragile because they rely on fixed UI element locators; even minor interface changes such as layout adjustments or field renaming can cause workflows to fail, leading to maintenance challenges and reduced scalability [4]. Recent empirical studies have demonstrated that RPA adoption in hospitals and insurance claim processes significantly decreases process cycle times and operational inefficiencies while improving service quality [5,6,17].
The current trend in digital health automation focuses on integrating artificial intelligence (AI) and generative AI (GenAI) with RPA/RDA to create intelligent automation systems capable of semantic understanding, anomaly detection, and adaptive decision-making. This next-generation automation paradigm introduces self-healing capabilities, where bots dynamically recover from UI changes by interpreting layout shifts and contextual cues, thus reducing maintenance overhead and increasing operational resilience [7]. Moreover, governance and sustainability considerations have emerged as critical factors in the large-scale deployment of automation in healthcare, requiring strict compliance with data privacy and security regulations [13]. In contexts such as national health information systems, where multiple government web portals and legacy desktop applications must be used simultaneously under stringent cybersecurity constraints, RDA enhanced with GenAI represents a promising approach to reliably automate complex workflows. This study builds upon these concepts by developing a framework that leverages deterministic RDA for precise data handling while employing GenAI for adaptive, context-aware navigation and human-in-the-loop oversight, ensuring safety and accountability in dialysis center data management.

2.2. Generative AI in Healthcare Automation

GenAI, particularly LLMs, has emerged as a transformative force in healthcare by enabling machines to generate human-like text, synthesize structured and unstructured data, and reason across complex clinical contexts [11]. Unlike traditional rule-based or supervised machine learning systems, GenAI offers advanced capabilities for creating patient-centric narratives, summarizing electronic health records (EHRs), and facilitating more natural human–computer interaction. Recent breakthroughs have demonstrated that LLMs encode deep clinical knowledge and can support various healthcare tasks such as clinical documentation, question answering, and predictive modeling [18]. For instance, LLMs can generate structured reports, provide semantic validation of clinical data, and detect anomalies in laboratory results, thereby enhancing both data quality and clinical decision-making. These functions reduce clerical burden and have the potential to mitigate clinician burnout, a growing issue in modern healthcare systems [19]. However, implementing GenAI in clinical environments requires careful consideration of ethical, privacy, and safety concerns, particularly when handling sensitive patient data [13].
Recent research has explored integrating GenAI with RPA and Robotic Desktop Automation (RDA) to create intelligent automation systems capable of adaptive, context-aware workflow management. Traditional automation approaches are inherently brittle, often failing when user interface (UI) layouts or workflows change unexpectedly. By contrast, GenAI enables self-healing automation, dynamically interpreting UI changes, mapping form fields, and generating adaptive strategies for process execution without extensive reprogramming [12]. This integration is especially valuable in complex, heterogeneous environments such as government health information systems, where multiple platforms—both web-based and legacy desktop applications—must be operated simultaneously under strict cybersecurity constraints. Furthermore, GenAI-enhanced bots can automatically generate free-text content for reporting, detect inconsistencies across multiple systems before data submission, and provide recommendations to human reviewers. Incorporating human-in-the-loop (HITL) oversight ensures accountability, explainability, and compliance with governance standards, balancing automation efficiency with patient safety [13]. These capabilities highlight the potential of GenAI to significantly advance healthcare automation, setting the foundation for this study’s proposed framework.

2.3. Data Entry Challenges in Nephrology and Dialysis Care

Dialysis care produces high-frequency, high-volume clinical documentation. Most hemodialysis patients undergo treatment three times weekly, and each session requires timely recording of treatment parameters, prescriptions, and administrative reporting [20]. In Thailand, the national Thailand Renal Replacement Therapy (TRT) Registry has documented rapid growth in the country’s dialysis infrastructure. The 2023 report highlights nationwide expansion in dialysis centers and equipment and underscores the system’s reliance on consistent, center-level reporting for quality improvement and resource allocation [1,21]. More than 1100 dialysis facilities participated in national reporting in 2023, representing approximately 98% of all centers, which illustrates the scale and burden of routine data flows that must be accurately and promptly entered across heterogeneous information systems [1]. In practice, many government health platforms still require user interface–level manual data entry, resulting in multi-system, multi-portal workflows for clinical staff and increasing the risk of data transposition, omission, and timing errors.
Manual, repetitive data entry is also a well-recognized source of data-quality problems and clinician burden. Foundational health informatics studies have demonstrated substantial error rates in manually curated clinical databases and shown how data-entry method directly influences downstream statistical validity and research outcomes [22,23]. Contemporary nursing informatics research has linked documentation burden to clinician burnout and dissatisfaction with electronic health record usability [24], while hospital-based studies in Thailand have associated nurse burnout with lower reported quality of care and increased adverse outcomes [25]. These findings underscore the urgent need for automation solutions that can reduce redundant, cross-platform data entry; prevent content-level inconsistencies before submission; and preserve accountability through human-in-the-loop oversight. This need is especially critical for dialysis centers, where the cadence of thrice-weekly treatments magnifies even small inefficiencies into substantial operational load and potential risks for patient safety.

2.4. Human-in-the-Loop, Governance, and Sustainability Considerations

Automation in clinical settings must balance efficiency with patient safety, accountability, and transparency. Human-in-the-Loop (HITL) mechanisms are widely regarded as essential safeguards that keep clinicians responsible for critical decisions while leveraging algorithmic assistance [13,26]. In practice, HITL combines pre-submission verification, selective overrides, and audit trails to mitigate automation surprises and distribution shifts in real-world data. Explainability and documentation further support HITL by enabling users to interrogate system recommendations and provenance—reducing over-reliance on opaque models and helping maintain trust during deployment [13,27]. For high-cadence workflows such as dialysis data entry, HITL can be scoped to exception-based review (e.g., out-of-range values, cross-system conflicts) to preserve throughput without sacrificing safety.
Beyond local controls, governance frameworks and reporting standards provide shared norms for trustworthy AI/automation in healthcare. The EU High-Level Expert Group’s Guidelines for Trustworthy AI emphasize three foundational principles: lawfulness, ethics, and robustness, forming the normative basis for responsible AI governance [28]. In clinical research, Fairhurst et al. [29] introduced the CONSORT-AI and SPIRIT-AI extensions to promote transparency and reproducibility in reporting AI-driven interventions and trial protocols, while Rivera et al. [30] and Collins et al. [31] proposed the TRIPOD-AI guidelines to standardize model reporting and evaluation for machine learning in healthcare. Together, these frameworks ensure that AI systems are documented, auditable, and safe for deployment in high-stakes domains. In the context of government health platforms, where multiple user interfaces and legacy systems must interact, sustainable automation additionally requires operational governance mechanisms. Amann et al. [13] emphasize human-in-the-loop (HITL) checkpoints for safety and accountability; Rajkomar, Dean, and Kohane [26] highlight privacy-by-design and minimum-access principles; and Morley et al. [27] discuss post-deployment monitoring to maintain usability, safety, and public trust. Collectively, these sources establish the governance framework underpinning our proposed AI-ERDA architecture.

2.5. Research Gap and Motivation

Despite the growing adoption of automation and artificial intelligence (AI) in healthcare, most existing solutions focus on isolated, rule-based tasks or single-platform workflows. Traditional RPA systems are inherently fragile, often failing when user interface (UI) layouts change or when multiple heterogeneous platforms must be integrated simultaneously [4,16]. In the context of healthcare, especially in low- and middle-income countries, the lack of interoperable application programming interfaces (APIs) forces healthcare workers to rely on repetitive, manual data entry across several government health systems, leading to inefficiencies, transcription errors, and clinician burnout [2,3]. While previous research has demonstrated the potential of AI-enhanced tools such as Optical Character Recognition (OCR) for digitizing paper records [10], these solutions do not address the end-to-end problem of cross-platform automation or provide mechanisms to adapt dynamically to changing system interfaces. Moreover, most implementations lack human-in-the-loop (HITL) controls and governance frameworks, which are essential for ensuring accountability and trust in high-stakes clinical environments [13].
Recent advances in Generative AI and LLMs offer new opportunities to overcome these limitations. GenAI has demonstrated capabilities in semantic reasoning, natural language understanding, and anomaly detection, enabling context-aware automation that can adapt to evolving workflows and generate accurate free-text documentation [11,18]. When integrated with RPA or Robotic Desktop Automation (RDA), GenAI enables self-healing automation, where bots can dynamically interpret UI changes, align cross-system fields, and maintain operational continuity with minimal human intervention [12]. However, there remains a clear research gap: to date, no published study has presented a comprehensive framework that combines GenAI with RDA for multi-system nephrology data entry in government health platforms, particularly under the stringent cybersecurity and privacy constraints present in countries like Thailand. Addressing this gap is critical, as dialysis centers generate high-frequency, high-volume data from approximately 2500 clinics nationwide, requiring accurate and timely reporting to support patient safety and national health policy [1,21].
This study focuses on reducing clerical burden, improving data quality, and ensuring governance in dialysis data workflows. By developing a Generative AI–enhanced RDA framework with HITL oversight, this research aims to (i) design an automation solution capable of navigating heterogeneous systems without relying on APIs, (ii) evaluate its performance in terms of accuracy, efficiency, and usability, and (iii) enhance the RDA framework through the integration of Generative AI. This approach directly addresses the identified research gap and contributes to advancing Thailand’s digital health transformation agenda.

3. System Implementation

3.1. System Architecture and Framework Design

The proposed system architecture, illustrated in Figure 1, was designed to address the persistent challenge of manual, repetitive, and error-prone data entry across multiple government nephrology platforms. The framework is conceptualized as a layered model that integrates deterministic automation with adaptive generative intelligence and human oversight, thereby ensuring both efficiency and resilience. At its foundation, the Input Data Layer enables secure connectivity with nephrology information sources, including hospital information systems and third-party dialysis management platforms. This layer is responsible not only for the transfer of patient records but also for their preprocessing and structuring into standardized formats suitable for automation. The subsequent RDA Execution Layer builds upon this foundation by deploying robotic desktop automation (RDA) tools to systematically input patient data into both web-based and PC-based government systems. By automating the majority of repetitive tasks, this layer significantly reduces clerical workload and minimizes the risk of human-induced errors. To address the common challenge of system updates and interface variability, the Generative AI Layer operates as an adaptive middleware that applies advanced natural language processing (NLP) and computer vision models to interpret user interfaces, dynamically remap workflow sequences, and validate data consistency. Together, these three layers form the technical backbone of the proposed automation engine.
Compared with prior RPA and AI-RDA architectures, the proposed framework emphasizes adaptability, semantic validation, and governance readiness across heterogeneous government platforms. Traditional RPA studies in healthcare typically target rule-based, task-level automation within a single web or desktop environment and do not provide robust self-healing against UI drift (e.g., layout or locator changes) [6]. More recent “intelligent automation” approaches add machine-learning components to RPA, yet still require periodic retraining or manual reconfiguration when interfaces evolve, limiting resilience and scalability in practice [9,15]. In contrast, our AI-ERDA design integrates a generative reasoning layer with human-in-the-loop oversight to dynamically reinterpret changing UIs, perform cross-system semantic checks, and maintain auditable operation in complex nephrology data flows. This positions the framework as a step beyond rule-augmented RPA toward sustainable, trustworthy automation under real-world update cadence.
While automation provides efficiency, healthcare data workflows also demand mechanisms for safety, transparency, and accountability. These aspects are embodied in the final two layers of the architecture, which focus on governance and oversight. The Human-in-the-Loop (HITL) Layer plays a critical role by introducing a human verification stage into the process, where anomalies, incomplete records, or context-dependent cases are flagged by the AI and subsequently reviewed by nephrology staff or data clerks. This hybrid approach ensures that decision-making for sensitive patient data is never fully delegated to automated systems, thereby safeguarding data integrity and regulatory compliance. The Monitoring and Logging Layer complements this oversight function by maintaining comprehensive records of all automated and human-involved actions, enabling traceability and auditability in line with healthcare governance frameworks. In practice, this layered structure integrates the strengths of deterministic RDA, adaptive generative AI, and human expertise, producing a scalable and sustainable architecture that not only reduces administrative burdens but also aligns with Thailand’s broader digital health transformation objectives.
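Concretely, the five layers described above can be read as stages of a single pass over incoming records. The sketch below is illustrative only: the stage bodies are stubs standing in for the real connectors, bots, and models, and all names are assumptions rather than the actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Result:
    status: str                      # "ok" or "needs_adaptation"
    flagged: bool = False            # True when HITL review is required
    notes: list = field(default_factory=list)

AUDIT_LOG = []       # Monitoring and Logging Layer store (illustrative)
REVIEW_QUEUE = []    # HITL Layer queue (illustrative)

def preprocess(raw):
    # Input Data Layer: normalize records into a standardized format.
    return [{k: str(v).strip() for k, v in r.items()} for r in raw]

def execute_entry(rec):
    # RDA Execution Layer: deterministic entry; a missing key field
    # stands in for any condition requiring adaptive handling.
    if not rec.get("patient_id"):
        return Result("needs_adaptation")
    return Result("ok")

def adapt_and_retry(rec):
    # Generative AI Layer: adaptive recovery, flagged for human review.
    return Result("ok", flagged=True, notes=["recovered after adaptation"])

def run_pipeline(raw_records):
    for rec in preprocess(raw_records):
        result = execute_entry(rec)
        if result.status == "needs_adaptation":
            result = adapt_and_retry(rec)
        if result.flagged:
            REVIEW_QUEUE.append(rec)                       # HITL Layer
        AUDIT_LOG.append((rec.get("patient_id"), result.status))  # Logging
    return AUDIT_LOG
```

The essential design point is that every record passes through logging regardless of path, while only AI-adapted or anomalous records incur the cost of human review.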

3.2. RDA Bot Implementation and Generative AI Modules

The implementation of the proposed system integrates rule-based automation with adaptive generative intelligence to execute data entry across heterogeneous nephrology platforms (Figure 2). The workflow begins with the Data Integration module, which securely collects and preprocesses patient records from hospital and dialysis management systems. Within the Automation Engine, deterministic robotic desktop automation (RDA) agents are deployed for both web-based and PC-based government systems. Web automation is handled through frameworks such as Playwright, enabling scripted navigation, form completion, and submission. Legacy PC systems are supported through desktop automation (e.g., pywinauto v0.6.8), simulating human interactions such as keystrokes and mouse events. To interpret diverse user interfaces, the system applies OCR (Tesseract v5.3.0) and computer vision (OpenCV v4.8.0) for field recognition, while administrative rules (Pydantic 2.6.0 schemas) validate field formats and enforce compliance with nephrology data standards. All actions, successful or failed, are logged for traceability within the monitoring layer, ensuring both transparency and security.
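To make the field-format validation step concrete, the kind of rule the Pydantic schemas enforce can be sketched as follows. The field names, value ranges, and helper function are illustrative assumptions, written here in plain Python rather than the actual Pydantic 2.6.0 models used by the system.

```python
# Illustrative stand-in for the system's Pydantic schemas; the field
# names and ranges are assumptions, not the actual nephrology standard.
RULES = {
    "patient_id": lambda v: isinstance(v, str) and len(v) > 0,
    "sessions_per_week": lambda v: isinstance(v, int) and 1 <= v <= 7,
    "weight_kg": lambda v: isinstance(v, (int, float)) and 0 < v < 500,
}

def validate_record(record):
    """Return the names of fields that are missing or violate their rule."""
    return [f for f, rule in RULES.items()
            if f not in record or not rule(record[f])]
```

A record that passes returns an empty list; any violations are returned by field name so they can be surfaced to the monitoring layer or queued for human review.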
Beyond deterministic execution, the Generative AI modules enhance resilience and reduce human workload through three core functions summarized in Table 1. First, Self-Healing Automation enables the bot to adapt automatically to minor UI modifications, such as renamed buttons or shifted field locations, without manual reprogramming. Second, Semantic Validation and Anomaly Detection ensure the integrity of clinical data by cross-checking entries, flagging abnormal dialysis frequencies, and detecting inconsistencies across multiple systems. Third, Decision Support for Human-in-the-Loop (HITL) allows the AI to prioritize flagged records and provide concise summaries of anomalies, enabling nephrology staff to review and approve submissions more efficiently. Together, these AI-enhanced functions transform a brittle automation pipeline into a robust and adaptive system, ensuring accuracy, accountability, and compliance across Thailand’s nephrology data ecosystem.
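The cross-system consistency check at the heart of Semantic Validation can be illustrated with a minimal sketch: given the same patient's record as entered in each system, any field whose values disagree is flagged for HITL review. The system names and fields below are illustrative; the deployed module additionally applies clinical range rules (e.g., abnormal dialysis frequencies).

```python
def cross_system_check(entries):
    """entries: {system_name: record_dict} for the same patient.
    Returns human-readable flags for fields whose values disagree
    across systems, to be reviewed before submission."""
    all_fields = sorted(set().union(*(rec.keys() for rec in entries.values())))
    flags = []
    for f in all_fields:
        values = {str(rec.get(f)) for rec in entries.values()}
        if len(values) > 1:  # more than one distinct value -> inconsistency
            flags.append(f"'{f}' differs across systems: {sorted(values)}")
    return flags
```

Consistent records produce no flags, so the common case adds no reviewer workload; only genuine disagreements reach the HITL queue.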

3.3. Human-in-the-Loop Oversight

Although robotic desktop automation and generative AI greatly enhance efficiency, healthcare data workflows demand safeguards that automation alone cannot guarantee. The proposed framework therefore integrates a HITL layer to ensure that ambiguous, anomalous, or high-risk records are verified by nephrology staff before being transmitted to government systems. Human oversight is essential because patient data entry often involves contextual judgment, such as interpreting unusual dialysis frequencies, validating incomplete laboratory results, or confirming compliance with evolving reporting policies, tasks where deterministic automation and AI reasoning may still be prone to error or bias. In practice, the AI modules assist by ranking flagged records according to severity and generating concise explanations, thereby reducing cognitive load and enabling staff to focus on the most critical cases. Each human decision is logged for traceability, producing a transparent audit trail that supports governance and accountability. This design aligns with emerging consensus in responsible AI for healthcare, which emphasizes that human oversight is indispensable for preserving trust, mitigating algorithmic risks, and ensuring that clinical and administrative decisions remain safe and ethically grounded [13,32].

3.4. System Implementation, Monitoring, and Security Compliance

The AI-ERDA framework was implemented as an integrated automation system capable of operating across both web-based and PC-based healthcare platforms. The system interface (Figure 3) includes a real-time monitoring dashboard that tracks progress, success rates, and error logs during automated data entry, ensuring transparency and facilitating rapid debugging. To enhance resilience, a built-in UI Change Detection module (Figure 4) automatically identifies modifications in user interface elements such as field labels or layout positions, prompting user confirmation before adaptive remapping is applied. This self-healing mechanism enables the system to maintain accuracy and operational continuity even when unexpected UI changes occur, while all adjustments are logged for audit and model retraining.
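One simple way the label-remapping step of such a UI Change Detection module can work is fuzzy string matching: renamed labels that remain close to the expected text are remapped automatically, while labels below a confidence cutoff are queued for user confirmation. This sketch uses the standard library's `difflib`; the deployed module may combine OCR and computer-vision cues, and the cutoff value is an assumption.

```python
from difflib import get_close_matches

def remap_labels(expected_labels, observed_labels, cutoff=0.8):
    """Map each expected field label to the closest observed label.
    Labels without a confident match are returned for user confirmation
    before any adaptive remapping is applied."""
    mapping, needs_confirmation = {}, []
    for label in expected_labels:
        hit = get_close_matches(label, observed_labels, n=1, cutoff=cutoff)
        if hit:
            mapping[label] = hit[0]
        else:
            needs_confirmation.append(label)
    return mapping, needs_confirmation
```

Minor renames ("Patient ID" to "Patient-ID") resolve automatically, matching the self-healing behavior described above, while an entirely new or removed label falls back to the human-confirmation path.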
To ensure data security and compliance, the AI-ERDA framework was developed under a privacy-by-design principle. All patient records were pseudonymized prior to automation, and all communication channels between modules utilized end-to-end HTTPS encryption. Access to sensitive data fields required authenticated session tokens verified through the hospital’s single sign-on (SSO) infrastructure. During task execution, HITL verification prevented automated overwriting or modification of protected data without explicit user approval. Furthermore, all automation actions, including field remapping events, were time-stamped and stored in encrypted audit logs to ensure traceability, accountability, and adherence to national e-health data protection standards. These mechanisms collectively safeguard patient information while preserving the transparency and reliability of the automation process.
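Two of the mechanisms above, pseudonymization and tamper-evident audit logging, can be sketched briefly. The keyed-hash pseudonym and the hash-chained log entries below are illustrative design patterns consistent with the description, not the framework's actual cryptographic implementation (which stores logs encrypted).

```python
import hashlib
import hmac
import json

def pseudonymize(patient_id: str, secret: bytes) -> str:
    """Keyed one-way hash: the same patient maps to a stable pseudonym,
    but the identifier cannot be recovered without the secret."""
    return hmac.new(secret, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

def append_audit(log: list, action: str, detail: str, ts: str) -> dict:
    """Append a time-stamped entry chained to the previous entry's hash,
    so any later tampering with earlier entries is detectable."""
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": ts, "action": action, "detail": detail, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry
```

Because each entry commits to its predecessor's hash, the log as a whole supports the traceability and accountability requirements noted above: altering one remapping event invalidates every subsequent hash.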

4. Evaluation Methodology

4.1. Study Design

This study adopts an experimental design to evaluate the effectiveness of the proposed Generative AI–ERDA framework compared with the baseline RDA system. Two groups were established: RDA System (control), which utilized the existing RDA system without AI augmentation, and AI-ERDA System (intervention), which employed the RDA framework integrated with generative AI modules for self-healing automation, semantic validation, and decision support to enable adaptive field mapping, contextual reasoning, and automated error correction during task execution. Both groups were assigned equivalent data entry tasks involving one web-based government platform and two PC-based legacy systems. The overall experimental workflow is illustrated in Figure 5.
The experiment was conducted over a two-month period to reflect realistic operating conditions. Specifically, data inspection and pilot testing were performed between May and June 2025, using anonymized nephrology workflow datasets extracted from the national dialysis information platform. The dataset covered patient record transactions recorded from January to April 2025, encompassing registration, treatment, and discharge reporting activities across three regional hospitals. A longer duration was intentionally selected because government health platforms frequently undergo minor UI adjustments or text modifications, which can disrupt conventional automation workflows. By extending the observation period to two months, the study increased the likelihood of encountering such variations, thereby providing a more rigorous assessment of the system’s resilience, adaptability, and real-world usability.

4.2. Participants and Setting

Participants were data entry clerks employed in nephrology centers across Thailand, all of whom had practical experience in handling government health information systems. The study involved two nephrology clinics per group, with each clinic contributing four clerical staff, resulting in a total of n = 8 per group and n = 16 overall. Clinics were purposefully selected to represent diverse operational environments while maintaining comparability in workload and patient record volumes. Clerical staff were responsible for daily entry of patient records into nephrology registries and national reporting platforms.

4.3. Evaluation Metrics

4.3.1. Usability and User Experience

Usability was evaluated using the System Usability Scale (SUS), a standardized and validated instrument widely applied in assessing interactive systems [33]. The SUS consists of ten items rated on a five-point Likert scale ranging from “strongly disagree” to “strongly agree.” Responses are combined to produce a single composite score between 0 and 100, reflecting overall system usability [34,35]. Higher scores indicate greater user satisfaction and perceived ease of use, with a benchmark of 70 commonly recognized as representing acceptable usability. The questionnaire captures user perceptions of effectiveness, efficiency, and satisfaction after interacting with the system. Results were analyzed using descriptive statistics, including the mean, standard deviation, and confidence intervals, to summarize user experience.
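The standard SUS scoring procedure can be sketched as follows. This is a minimal illustration of the published scoring rule (odd-numbered items contribute response − 1, even-numbered items contribute 5 − response, and the 0–40 sum is scaled by 2.5), not code from the study itself:

```python
def sus_score(responses):
    """Compute the SUS composite score (0-100) from ten item responses.

    responses: list of ten integers in 1..5, ordered item 1..10.
    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions (0-40) are scaled by 2.5 to yield 0-100.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("expected ten Likert responses in the range 1-5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Example: "agree" (4) on all positive items, "disagree" (2) on all negative
# items gives 5 * (4 - 1) + 5 * (5 - 2) = 30, i.e. a score of 75.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Per-participant scores computed this way are then summarized with the mean, standard deviation, and confidence intervals, as described above.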

4.3.2. Automation Accuracy

Automation accuracy was measured by comparing system outputs against a gold standard dataset. Two complementary indicators were used. The first was Field Exact-Match Accuracy, defined as the proportion of fields that were identical to the gold standard. This metric follows the principle of exact-match evaluation widely used in automated data validation and natural language processing benchmarks, where predicted outputs must perfectly coincide with reference values [26,36]. It was calculated as
Field Exact-Match Accuracy (%) = (Number of fields correctly matched / Total number of fields tested) × 100
This measure was applied to categorical or structured variables such as patient identifiers, dates of treatment, and insurance codes. The second indicator was Tolerance Accuracy (numeric), which captured whether numeric values fell within predefined clinically acceptable limits, consistent with tolerance-based evaluation used in medical data validation [22,37]. It was calculated as
Tolerance Accuracy (%) = (Number of numeric fields within acceptable range / Total number of numeric fields tested) × 100.
Examples included dialysis start time within ±5 min, laboratory values within ±2%, and body weight within ±0.2 kg.
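The two indicators can be computed as in the following sketch. The function names, the example record, and the encoding of the relative ±2% laboratory tolerance as an absolute limit are illustrative assumptions; the tolerance values mirror the examples given above:

```python
def field_exact_match_accuracy(predicted, gold):
    """Percentage of fields identical to the gold standard."""
    if not gold or len(predicted) != len(gold):
        raise ValueError("predicted and gold must be equal-length, non-empty")
    matched = sum(p == g for p, g in zip(predicted, gold))
    return 100.0 * matched / len(gold)

def tolerance_accuracy(predicted, gold, tolerances):
    """Percentage of numeric fields within clinically acceptable limits.

    tolerances: absolute limit per field, e.g. 5 (minutes) for dialysis
    start time or 0.2 (kg) for body weight; a relative limit such as
    +/-2% for a laboratory value is converted to an absolute value first.
    """
    if not gold or not (len(predicted) == len(gold) == len(tolerances)):
        raise ValueError("all three lists must be equal-length and non-empty")
    within = sum(abs(p - g) <= t
                 for p, g, t in zip(predicted, gold, tolerances))
    return 100.0 * within / len(gold)

# Hypothetical record: body weight (kg), dialysis start offset (min), lab value
pred = [62.15, 3.0, 101.0]
gold = [62.00, 0.0, 100.0]
tol = [0.2, 5.0, 100.0 * 0.02]  # +/-0.2 kg, +/-5 min, +/-2% of the gold value
print(tolerance_accuracy(pred, gold, tol))  # 100.0 (all three within limits)
```

Exact matching is applied to categorical and structured fields, while the tolerance check is reserved for numeric values where small, clinically acceptable deviations should not count as errors.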

4.3.3. Resilience to UI Change

Resilience to UI change was evaluated to measure how effectively the automation frameworks could sustain accurate performance when user interface components or textual labels were modified. The assessment focused on monitoring Field Exact-Match Accuracy continuously over a 60-day period that included multiple UI modification events. Each event typically lasted around three days and involved adjustments to the structure or labeling of the web interface. Changes in accuracy during these events were analyzed to indicate the framework’s ability to adapt to altered interface layouts. The evaluation was visualized as a time-series plot of daily accuracy, allowing for a direct comparison of stability and recovery behavior after each UI change.
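One simple way to summarize the recovery behavior from such a daily accuracy series is to count, for each UI change event, the days until accuracy first returns to a pre-event baseline. The following sketch is illustrative only; the event day, baseline threshold, and example values are hypothetical, not data from the study:

```python
def days_to_recover(daily_accuracy, event_day, baseline=99.0):
    """Days after a UI change until daily accuracy first returns to baseline.

    daily_accuracy: list of daily Field Exact-Match Accuracy values (%).
    event_day: index of the day the UI change occurred.
    Returns the number of days elapsed, or None if no recovery is observed
    within the monitored window.
    """
    for day in range(event_day, len(daily_accuracy)):
        if daily_accuracy[day] >= baseline:
            return day - event_day
    return None

# Hypothetical series: accuracy dips on day 3 (UI change) and recovers on day 5
series = [99.5, 99.4, 99.5, 94.1, 96.8, 99.2, 99.6]
print(days_to_recover(series, event_day=3))  # 2
```

Applied to both frameworks' daily logs, this metric makes the stability and recovery comparison in the time-series plots directly quantifiable.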

4.4. Data Collection Procedures

The data collection process was designed to evaluate the performance and resilience of both RDA and AI-ERDA automation frameworks across three major healthcare information systems. Each system represented a distinct operational environment and interface type, ensuring comprehensive testing of automation reliability in real-world conditions. The three platforms included: (1) the National Health Security Office (NHSO) system (Figure 6), a web-based application used for processing healthcare benefit claims; (2) the Thai Nephrology Society (TRT) desktop application (Figure 6), which manages patient registration and renal care records; and (3) the Health Service Information Bureau (CHi) platform (Figure 7), a PC-based information system responsible for healthcare service reporting and data integration across hospital networks.
For each platform, a series of standardized automation tasks was developed to simulate routine data entry and verification workflows performed by healthcare officers. The testing period spanned 60 consecutive days, during which both the RDA and AI-ERDA systems operated under identical task sequences and datasets. Daily Field Exact-Match Accuracy and Tolerance Accuracy metrics were recorded automatically, along with system logs documenting response time, field errors, and recovery behavior. In the case of the web-based NHSO system, periodic UI changes were intentionally introduced to assess the resilience of both frameworks under dynamic interface conditions. For the PC-based TRT and CHi platforms, the evaluation focused on execution stability and handling of intermittent system latency.

5. Results

5.1. Improvement in Usability and User Experience

The comparative analysis of the SUS revealed substantial improvements in overall usability and user experience when employing the AI-ERDA system over the conventional RDA framework. As summarized in Table 2, the AI-ERDA system consistently received higher mean scores across nearly all questionnaire items. Participants reported that the AI-ERDA interface was easier to use, better integrated, and less complex, suggesting that the inclusion of AI-assisted mechanisms effectively minimized manual operations and reduced cognitive load. The largest improvements were observed in the areas of system integration (item 5), consistency (item 6), and user confidence (item 9), indicating that users experienced smoother task flows and felt more assured of the system’s reliability. Conversely, the traditional RDA system exhibited lower scores in items related to technical dependency and task complexity (items 2, 4, and 6), which implies that users still encountered interruptions requiring external support and manual configuration. This difference reflects how automation enhanced by adaptive intelligence can fundamentally improve interaction flow, stability, and ease of use.
The overall SUS scores provide quantitative evidence of this improvement. As illustrated in Figure 8, the RDA system achieved a mean SUS score of 68, which falls within the high marginal usability range, while the AI-ERDA system reached a score of 80, classified as acceptable. This 12-point increase represents a meaningful enhancement in user satisfaction and perceived quality. According to standard SUS interpretation, scores above 70 are generally considered acceptable and reflect systems that are intuitive, efficient, and easy to use. Therefore, the transition from RDA to AI-ERDA demonstrates a clear progression toward more user-centered design and improved operational efficiency. The integration of AI provides automated handling of complex rule-based procedures and also enhances user trust, confidence, and perceived control during real-world data processing tasks. Taken together, the quantitative findings confirm that the AI-ERDA framework delivers a significantly better usability profile, aligning automation precision with human-centric design principles for a more seamless and satisfying user experience.

5.2. Performance of Automation Accuracy

The automation accuracy results demonstrated that the AI-ERDA framework consistently outperformed the conventional RDA across all testing platforms, particularly in scenarios involving dynamic user interface environments. As summarized in Table 3, the Web System (NHSO) exhibited the largest improvement, with Field Exact-Match Accuracy increasing from 97.2 ± 1.4% under RDA to 99.0 ± 0.5% under AI-ERDA, reflecting a gain of +1.8%. Similarly, Tolerance Accuracy improved by +0.8%, indicating that AI-enhanced automation maintained more reliable performance even under minor data irregularities. This improvement is primarily attributed to the AI-ERDA’s adaptive learning mechanism, which allowed it to automatically adjust to UI variations and minimize field-level mismatches that typically affect rule-based automation systems.
On the PC-based platforms, both systems exhibited near-perfect accuracy, although AI-ERDA still achieved marginal gains. In PC System 1 (CHi), the Field Exact-Match Accuracy rose slightly from 99.4 ± 0.3% to 99.7 ± 0.2%, while the Tolerance Accuracy improved from 99.6 ± 0.2% to 99.8 ± 0.1%. A similar pattern was observed in PC System 2 (TRT), where the differences were minimal yet consistent, reinforcing the stability of AI-assisted automation in more static interface environments. Overall, while both frameworks performed robustly in PC systems, the superior adaptability of AI-ERDA became most evident in the web-based environment, where layout and labeling changes occur more frequently. Based on these results, the Web System showed a slightly greater improvement of 1.8% under AI assistance compared with the RDA system, whereas both PC systems exhibited only marginal differences of +0.3% and +0.2%, respectively.

5.3. Evaluation of System Resilience to UI Change

The evaluation of system resilience aimed to determine the capability of the automation frameworks to maintain accuracy and operational stability under actual UI changes. As illustrated in Figure 9, Figure 10 and Figure 11, Field Exact-Match Accuracy was continuously monitored over 60 days for both RDA and AI-ERDA across the three platforms. During the experiment, three genuine UI change events occurred naturally within the two-month testing period, involving layout rearrangements, field label modifications, and form structure adjustments that directly affected data-entry automation. The Web System (Figure 9) exhibited the most evident divergence between the two frameworks: the conventional RDA system experienced severe accuracy degradation, dropping to approximately 94% during UI change periods, whereas the AI-ERDA system maintained substantially higher stability, with only minor declines of 1–2% and rapid recovery to near-perfect accuracy. This temporary reduction occurred only on the first day of each UI change event, when user confirmation was required for the AI to adapt to the new interface. Following each real UI update, the development team of the conventional RDA system required around three days to manually reconfigure and restore normal operation, while the AI-ERDA system adapted to the new interface with only brief user confirmation and no developer intervention.
In contrast, both PC System 1 (Figure 10) and PC System 2 (Figure 11) exhibited minimal variations throughout the testing period, maintaining accuracy above 99.5% for both frameworks. The minor fluctuations observed were primarily linked to transient network latency rather than UI modifications, suggesting that PC-based environments, with their static interface designs, are inherently less prone to disruption. Overall, the findings clearly indicate that the AI-ERDA framework provides slightly greater robustness and adaptability compared with the traditional RDA, particularly in healthcare systems where data must be handled with great care. Its AI-assisted field recognition allows the system to sustain high performance even in dynamic web environments, thereby reducing downtime and minimizing human intervention.

6. Discussion and Findings

6.1. Interpretation of Usability Improvements

The overall SUS results revealed a substantial enhancement in perceived usability when the AI-ERDA framework was employed in place of the conventional RDA system. Qualitative feedback collected from participants during the two-month experiment provided strong contextual support for this quantitative gain. Many users reported that whenever a UI change occurred in the RDA system, they had to manually re-map input fields, verify the corresponding data labels, and perform repeated test runs before resuming normal operation; these tasks were cognitively demanding, time-consuming, and often required waiting for the development team to implement fixes. This interruption not only delayed routine data entry but also reduced user confidence in the reliability of the automation. In contrast, users operating the AI-ERDA system described a smoother workflow, as the AI modules could recognize modified interface layouts and automatically adjust field mappings with minimal human confirmation on the first day of a change. Participants noted that this capability significantly reduced frustration and perceived workload, allowing them to focus on data accuracy rather than troubleshooting the tool itself. The 12-point increase in SUS (from 68 to 80) therefore reflects not merely improved interface design but a reduction in manual overhead and increased task-flow continuity under realistic healthcare system conditions.
When compared with prior research, the usability improvement observed in this study demonstrates a clearer and more stable enhancement than those reported in earlier RPA deployments. Huang et al. [5] described that conventional RPA systems improved task efficiency but often left users burdened with post-update adjustments, while Park et al. [6] found that even with monitoring capabilities, manual retraining remained a major source of user frustration. In contrast, the adaptive mechanisms of the AI-ERDA framework reduced these interruptions and promoted sustained confidence in automation, aligning with Thirunavukarasu et al. [19], who emphasized that AI-driven systems can dynamically support users and lower cognitive demands. Similar patterns have also been reported in explainable-AI and healthcare-automation studies, which highlight that machine-learning-based interfaces enable smoother decision-making compared with static, rule-driven approaches [38]. Moreover, the present findings extend prior adaptive-interface design research [39] by demonstrating that self-healing and semantic-mapping capabilities can maintain usability continuity even under frequent UI variations.

6.2. Analysis of Automation Accuracy

Quantitatively, the AI-ERDA framework achieved superior automation accuracy across all platforms, with the largest gains observed on the web system, where layout and labeling are most volatile. In this study, Field Exact-Match Accuracy increased from 97.2 ± 1.4 percent under RDA to 99.0 ± 0.5 percent under AI-ERDA, a gain of 1.8 percent. Likewise, Tolerance Accuracy improved by 0.8 percent, indicating that AI-assisted field recognition mitigated small formatting deviations and intermittent load delays. Participant feedback supported these metrics: the AI agent could “wait” for slow-rendering elements and re-associate fields after a UI update with minimal confirmation, whereas the conventional rule-based scripts required manual remapping before data entry could continue. This dynamic adaptability allowed AI-ERDA to sustain automation continuity with minimal interruptions. On PC-based systems (CHi and TRT), both frameworks performed nearly perfectly, maintaining accuracy above 99.5 percent; however, AI-ERDA still demonstrated slightly tighter variance under transient latency, consistent with findings that learning-based interfaces sustain performance in the presence of runtime perturbations [39].
Compared with earlier RPA and intelligent automation studies, these findings demonstrate a clearer resilience advantage under dynamic interface conditions. Schwamm et al. [39] reported that conventional RPA enhanced process stability but often failed to recover from locator drift without manual rule reconfiguration, while Nimkar et al. [40] showed that hybrid RPA–ML systems reduced error propagation but required periodic retraining. In contrast, the generative AI modules in AI-ERDA achieved comparable precision without retraining, maintaining continuity through contextual recognition of UI patterns. This result extends the conclusion of Figueroa et al. [41], who found that adaptive learning improved recognition accuracy via user feedback loops, by demonstrating that generative reasoning can achieve similar adaptability autonomously. Moreover, our findings align with Fairhurst et al. [29], who highlighted that AI-driven context reasoning enhances data precision and reduces human correction needs in healthcare workflows, and with Baqar et al. [42], who emphasized that self-healing automation minimizes failure rates after interface updates. Together, these comparisons indicate that AI-ERDA not only aligns with but advances the current evidence on accuracy stability and recovery efficiency in intelligent RPA systems.

6.3. Evaluation of System Resilience and Adaptability

The resilience evaluation revealed clear distinctions between the AI-ERDA and traditional RDA frameworks when subjected to real-world UI changes. Across a 60-day observation period, three genuine UI modification events occurred within the web-based NHSO system, while both PC systems (CHi and TRT) remained largely stable. During these web interface changes, the RDA framework experienced significant performance degradation, with Field Exact-Match Accuracy dropping to approximately 94 percent and requiring manual reconfiguration that lasted up to three days for full recovery. In contrast, the AI-ERDA system demonstrated rapid self-correction capabilities: accuracy declined only slightly (by about 1–2 percent) on the first day following each UI change before recovering to near-perfect levels once the AI model had re-learned new field associations. The ability to sustain automation accuracy without developer intervention highlights the framework’s capacity for autonomous adaptation, an essential property for large-scale healthcare systems, where even brief interruptions can compromise data integrity and delay service delivery. Moreover, the benefits of AI-ERDA were particularly evident in the web-based system, which undergoes frequent updates and layout modifications, while the PC-based systems showed minimal differences due to their static and stable interface structures.
These findings align strongly with emerging research in adaptive AI systems and self-healing automation. Studies have shown that machine learning–driven RPA frameworks outperform traditional scripts when exposed to UI drift or evolving software environments, thanks to their capacity to detect anomalies and autonomously rebuild field relationships [39,42]. Similarly, resilience engineering in healthcare informatics emphasizes maintaining system functionality during unforeseen interface or data structure changes [43]. In particular, semantic AI models can dynamically adapt to new input schemas and recover from unexpected disruptions by leveraging contextual embeddings and pre-trained decision models [39]. Compared with these prior approaches, the AI-ERDA framework demonstrates a more immediate recovery cycle and does not rely on manual retraining or explicit anomaly labeling. While Schwamm et al. [39] and Abhichandani et al. [42] documented post-failure recovery times ranging from several hours to days in intelligent RPA systems, the generative reasoning component of AI-ERDA enables near-continuous operation under similar UI drift conditions. This observation complements the theoretical perspectives of Tolk et al. [43] on resilience engineering by providing empirical evidence that contextual AI integration can sustain operational robustness in a live government healthcare setting. Furthermore, the outcomes extend the work of Thirunavukarasu et al. [19] and Fairhurst et al. [29] by demonstrating that self-adaptive automation not only reduces maintenance burden but also reinforces user trust through uninterrupted workflow continuity. Therefore, the superior resilience of AI-ERDA confirms its readiness for deployment in complex, continuously evolving healthcare ecosystems, where interface stability cannot always be guaranteed and reliability is critical to patient data management.

6.4. Implications for Healthcare Data Automation

The integration of AI-ERDA within healthcare data workflows carries substantial implications for the efficiency, reliability, and long-term sustainability of digital health infrastructures. The experimental findings demonstrate that generative and adaptive automation can reduce manual workload, mitigate downtime, and enhance data quality in government health systems. In practice, such improvements translate directly into faster case processing, fewer human errors, and greater data completeness across interconnected agencies. In systems like NHSO, where personnel routinely manage thousands of records per day, even a one to two percent accuracy improvement represents a meaningful gain in operational throughput and public service reliability. Moreover, by automatically adapting to UI modifications without developer intervention, AI-ERDA reduces system maintenance costs and shortens response times during software updates. These outcomes align with broader policy goals in digital transformation and “AI for Public Health,” where automation serves as both an efficiency multiplier and a reliability assurance mechanism. Importantly, user feedback also revealed enhanced trust and satisfaction when AI systems demonstrated transparent reasoning and consistent performance—two crucial dimensions of human–AI collaboration in healthcare contexts [19]. Recent reviews also emphasize that AI adoption in healthcare must balance efficiency gains with ethical governance and safety oversight [44,45].
Beyond immediate operational gains, the broader implications of AI-assisted automation extend toward building resilient, learning-driven infrastructures for healthcare data ecosystems. As healthcare systems evolve toward interoperability and continuous data integration, the ability of automation frameworks to adapt across variable contexts becomes essential [29]. Adaptive AI agents embedded within RDA pipelines can serve as intelligent intermediaries that ensure accurate data exchange among diverse hospital information systems, laboratory databases, and national health registries. This capability reduces the cognitive and administrative load on healthcare personnel, allowing them to focus on higher-value analytical and clinical tasks. Furthermore, self-healing automation not only supports technical efficiency but also enhances governance and auditability by maintaining verifiable logs of every automated correction [39,42]. Nevertheless, despite these promising advantages, human oversight remains indispensable in healthcare data automation. Given the high-stakes nature of clinical information, final-stage verification by human experts is essential to ensure patient safety, ethical compliance, and contextual accuracy. Thus, AI-ERDA represents a pathway toward sustainable digital transformation, where automation acts not as a replacement for human intelligence but as a collaborative partner that strengthens resilience, transparency, and accountability in healthcare data management.

6.5. Limitations and Recommendations for Future Work

While this study provides compelling evidence of the AI-ERDA framework’s superiority in usability, automation accuracy, and resilience, several limitations merit attention. The experiments were conducted on three healthcare platforms over a 60-day period, which, although sufficient to capture genuine UI changes, may not fully reflect the diversity and long-term evolution of complex national health systems. The framework’s adaptability was primarily reactive, responding to UI alterations after they occurred rather than anticipating them. Moreover, while quantitative metrics such as Field Exact-Match Accuracy and Tolerance Accuracy effectively measure performance, they do not fully capture aspects of explainability, ethical transparency, or user trust, which are essential for responsible AI adoption. Future research should therefore pursue proactive adaptation through reinforcement or self-predictive learning agents capable of forecasting UI evolution, as well as cross-platform semantic alignment to strengthen interoperability. Incorporating explainable AI mechanisms and human–AI co-adaptation studies will also be vital to ensure user confidence, traceability, and accountability. Addressing these directions will enable AI-ERDA to evolve from adaptive automation into a fully autonomous, interpretable, and ethically aligned framework for sustainable healthcare data ecosystems.

7. Conclusions

This study proposed and empirically validated the AI-ERDA framework as a novel solution for improving healthcare data entry across multiple government platforms. By integrating generative and adaptive AI components such as self-healing automation, semantic validation, and intelligent waiting, the framework achieved measurable improvements in usability, automation accuracy, and resilience under real-world operating conditions. Compared with the conventional RDA system, AI-ERDA demonstrated a 12-point increase in usability (SUS 68 to 80), an average accuracy gain of up to 1.8 percent in web environments, and near-perfect performance stability despite multiple genuine UI change events during a 60-day evaluation period. The framework’s generative reasoning and human-in-the-loop oversight reduced manual reconfiguration, maintained workflow continuity, and strengthened user confidence, confirming its suitability for mission-critical healthcare automation. These results collectively indicate that AI-driven adaptability not only minimizes manual intervention but also ensures data reliability and operational sustainability in dynamic healthcare contexts.
Positioned within broader research trends, this work contributes to the ongoing evolution of Robotic Process Automation (RPA) toward intelligent, generative, and governance-driven automation paradigms. Recent studies have highlighted the transition from static, rule-based RPA to context-aware frameworks that integrate machine learning and human oversight to enhance transparency, resilience, and ethical governance [9,15]. The proposed AI-ERDA framework advances this trajectory by operationalizing these principles within regulated public-health infrastructures, bridging theoretical notions of trustworthy and explainable AI [13] with empirical validation in large-scale government systems. This alignment resonates with the growing digital-health literature emphasizing sustainable, human-supervised AI ecosystems that safeguard data integrity and compliance [6,28,29]. Future research should therefore extend this work toward cross-domain deployment, governance assessment, and longitudinal safety auditing, reinforcing AI-ERDA’s role in shaping the next generation of adaptive, human-centered digital transformation in healthcare.

Author Contributions

Conceptualization, S.S. and K.I.; methodology, S.S. and K.I.; software, S.S.; validation, P.W. and K.I.; formal analysis, P.J. and K.I.; investigation, P.W.; resources, K.I.; data curation, K.I.; writing—original draft preparation, S.S. and K.I.; writing—review and editing, K.P.; visualization, P.W. and P.J.; supervision, K.I.; project administration, K.P.; funding acquisition, K.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by Chiang Mai University and the National Research Council of Thailand (NRCT).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Committee of Research Ethics (Institutional Review Board), Faculty of Public Health, Chiang Mai University (ET031/2024) on 30 August 2024.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author due to restrictions. The data are not publicly available.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI	Artificial Intelligence
API	Application Programming Interface
GenAI	Generative Artificial Intelligence
HITL	Human-in-the-Loop
LLMs	Large Language Models
RDA	Robotic Desktop Automation
RPA	Robotic Process Automation
SUS	System Usability Scale
UI	User Interface

References

1. Satirapoj, B.; Tantiyavarong, P.; Thimachai, P.; Chuasuwan, A.; Lumpaopong, A.; Kanjanabuch, T.; Ophascharoensuk, V. Thailand Renal Replacement Therapy Registry 2023: Epidemiological Insights into Dialysis Trends and Challenges. Ther. Apher. Dial. 2025, 29, 721–729.
2. Budd, J. Burnout Related to Electronic Health Record Use in Primary Care. J. Prim. Care Community Health 2023, 14, 21501319231166921.
3. Spanakis, E.G.; Sfakianakis, S.; Bonomi, S.; Ciccotelli, C.; Magalini, S.; Sakkalis, V. Emerging and Established Trends to Support Secure Health Information Exchange. Front. Digit. Health 2021, 3, 636082.
4. Syed, R.; Suriadi, S.; Adams, M.; Bandara, W.; Leemans, S.J.J.; Ouyang, C.; Ter Hofstede, A.H.M.; Van De Weerd, I.; Wynn, M.T.; Reijers, H.A. Robotic Process Automation: Contemporary Themes and Challenges. Comput. Ind. 2019, 115, 103162.
5. Huang, W.-L.; Liao, S.-L.; Huang, H.-L.; Su, Y.-X.; Jerng, J.-S.; Lu, C.-Y.; Ho, W.-S.; Xu, J.-R. A Case Study of Lean Digital Transformation through Robotic Process Automation in Healthcare. Sci. Rep. 2024, 14, 14626.
6. Park, A.; Jung, S.Y.; Yune, I.; Lee, H.-Y. Applying Robotic Process Automation to Monitor Business Processes in Hospital Information Systems: Mixed Method Approach. JMIR Med. Inform. 2025, 13, e59801.
7. Patrício, L.; Varela, L.; Silveira, Z. Integration of Artificial Intelligence and Robotic Process Automation: Literature Review and Proposal for a Sustainable Model. Appl. Sci. 2024, 14, 9648.
8. Eulerich, M.; Waddoups, N.; Wagener, M.; Wood, D.A. Development of a Framework of Key Internal Control and Governance Principles for Robotic Process Automation (RPA). SSRN Electron. J. 2022.
9. Fernandez, D.; Dastane, O.; Zaki, H.O.; Aman, A. Robotic Process Automation: Bibliometric Reflection and Future Opportunities. Eur. J. Innov. Manag. 2023, 27, 692–712.
10. Nitayavardhana, P.; Liu, K.; Fukaguchi, K.; Fujisawa, M.; Koike, I.; Tominaga, A.; Iwamoto, Y.; Goto, T.; Suen, J.Y.; Fraser, J.F.; et al. Streamlining Data Recording through Optical Character Recognition: A Prospective Multi-Center Study in Intensive Care Units. Crit. Care 2025, 29, 117.
11. Reddy, S. Generative AI in Healthcare: An Implementation Science Informed Translational Path on Application, Integration and Governance. Implement. Sci. 2024, 19, 27.
12. Gao, S.; Fang, A.; Huang, Y.; Giunchiglia, V.; Noori, A.; Schwarz, J.R.; Ektefaie, Y.; Kondic, J.; Zitnik, M. Empowering Biomedical Discovery with AI Agents. Cell 2024, 187, 6125–6151.
13. Amann, J.; Blasimme, A.; Vayena, E.; Frey, D.; Madai, V.I. Explainability for Artificial Intelligence in Healthcare: A Multidisciplinary Perspective. BMC Med. Inform. Decis. Mak. 2020, 20, 310.
14. Nertinger, S.; Kirschner, R.J.; Naceri, A.; Haddadin, S. Acceptance of Remote Assistive Robots with and without Human-in-the-Loop for Healthcare Applications. Int. J. Soc. Robot. 2022, 16, 1131–1150.
15. Afrin, S.; Roksana, S.; Akram, R. AI-Enhanced Robotic Process Automation: A Review of Intelligent Automation Innovations. IEEE Access 2024, 13, 173–197.
16. Van Der Aalst, W.M.P.; Bichler, M.; Heinzl, A. Robotic Process Automation. Bus. Inf. Syst. Eng. 2018, 60, 269–272.
17. Ren, Y.; Xu, H.; Amanatidis, S.; Mao, L.; Shaw, M.; Simone, L.; Wen, L.M. Association of Demographic Characteristics of COVID-19 Patients with RPA Virtual Hospital Service Utilization in 2020-22. Health Policy Technol. 2025, 14, 101117.
18. Singhal, K.; Azizi, S.; Tu, T.; Mahdavi, S.S.; Wei, J.; Chung, H.W.; Scales, N.; Tanwani, A.; Cole-Lewis, H.; Pfohl, S.; et al. Large Language Models Encode Clinical Knowledge. Nature 2023, 620, 172–180.
19. Thirunavukarasu, A.J.; Ting, D.S.J.; Elangovan, K.; Gutierrez, L.; Tan, T.F.; Ting, D.S.W. Large Language Models in Medicine. Nat. Med. 2023, 29, 1930–1940.
20. Mathew, A.; Obi, Y.; Rhee, C.M.; Chen, J.L.T.; Shah, G.; Lau, W.-L.; Kovesdy, C.P.; Mehrotra, R.; Kalantar-Zadeh, K. Treatment Frequency and Mortality among Incident Hemodialysis Patients in the United States Comparing Incremental with Standard and More Frequent Dialysis. Kidney Int. 2016, 90, 1071–1079.
21. Kanjanabuch, T.; Takkavatakarn, K. Global Dialysis Perspective: Thailand. Kidney360 2020, 1, 671–675.
22. Goldberg, S.I.; Niemierko, A.; Turchin, A. Analysis of Data Errors in Clinical Research Databases. AMIA Annu. Symp. Proc. 2008, 242–246.
23. Barchard, K.A.; Pace, L.A. Preventing Human Error: The Impact of Data Entry Methods on Data Accuracy and Statistical Results. Comput. Hum. Behav. 2011, 27, 1834–1839.
24. Gesner, E.; Dykes, P.C.; Zhang, L.; Gazarian, P. Documentation Burden in Nursing and Its Role in Clinician Burnout Syndrome. Appl. Clin. Inform. 2022, 13, 983–990.
  25. Nantsupawat, A.; Nantsupawat, R.; Kunaviktikul, W.; Turale, S.; Poghosyan, L. Nurse Burnout, Nurse-Reported Quality of Care, and Patient Outcomes in Thai Hospitals. J. Nurs. Scholarsh. 2015, 48, 83–90. [Google Scholar] [CrossRef]
  26. Rajkomar, A.; Dean, J.; Kohane, I. Machine Learning in Medicine. New Engl. J. Med. 2019, 380, 1347–1358. [Google Scholar] [CrossRef] [PubMed]
  27. Morley, J.; Machado, C.C.V.; Burr, C.; Cowls, J.; Joshi, I.; Taddeo, M.; Floridi, L. The Ethics of AI in Health Care: A Mapping Review. Soc. Sci. Med. 2020, 260, 113172. [Google Scholar] [CrossRef]
  28. European Commission Ethics Guidelines for Trustworthy AI. Available online: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (accessed on 25 November 2025).
  29. Fairhurst, V.; Marcum, C.S.; Haun, C.; Mensah, P.B.; Arogundade, F.Q.; Kaul, R.P.; Narayanan, S.; Shah, S. Peer Review of “Artificial Intelligence in Healthcare: 2023 Year in Review (Preprint)”. JMIRx Med. 2024, 5, e65151. [Google Scholar] [CrossRef]
  30. Rivera, F.; Villareal, L.; Prádanos, P.; Hernández, A.; Palacio, L.; Muñoz, R. Enhancement of Swine Manure Anaerobic Digestion Using Membrane-Based NH3 Extraction. Bioresour. Technol. 2022, 362, 127829. [Google Scholar] [CrossRef]
  31. Collins, S.M.; Hedd, A.; Montevecchi, W.A.; Burt, T.V.; Wilson, D.R.; Fifield, D.A. Small Tube-Nosed Seabirds Fledge on the Full Moon and throughout the Lunar Cycle. Biol. Lett. 2023, 19, 20230290. [Google Scholar] [CrossRef]
  32. Yuan, H.; Kang, L.; Li, Y.; Fan, Z. Human-in-the-loop Machine Learning for Healthcare: Current Progress and Future Opportunities in Electronic Health Records. Med. Adv. 2024, 2, 318–322. [Google Scholar] [CrossRef]
  33. Brooke, J. SUS: A “Quick and Dirty” Usability Scale. In Usability Evaluation in Industry; 1986; pp. 189–194. [Google Scholar]
  34. Bangor, A.; Kortum, P.; Miller, J. Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale. J. Usability Stud. Arch. 2009, 4, 114–123. [Google Scholar]
  35. Lewis, J.R.; Sauro, J. Revisiting the Factor Structure of the System Usability Scale. J. Usability Stud. Arch. 2017, 12, 183–192. [Google Scholar] [CrossRef]
  36. Lightning AI Exact Match — PyTorch-Metrics 1.8.2 Documentation. Available online: https://lightning.ai/docs/torchmetrics/stable/classification/exact_match.html (accessed on 25 November 2025).
  37. Clinical Data Management Team The Importance of Quality Tolerance Limits (QTLs) in Clinical Trials. Available online: https://www.quanticate.com/blog/quality-tolerance-limits-in-clinical-trials (accessed on 25 November 2025).
  38. Donoso-Guzmán, I.; Kacafírková, K.S.; Szymanski, M.; Jacobs, A.; Parra, D.; Verbert, K. A Systematic Review of User-Centred Evaluation of Explainable AI in Healthcare. arXiv 2025. [Google Scholar] [CrossRef]
  39. Schwamm, L.H.; Pletcher, S.; Erskine, A. AI and Technology Enabled Clinical Workflow Redesign. Telemed. Rep. 2024, 5, 415–420. [Google Scholar] [CrossRef]
  40. Nimkar, P.; Kanyal, D.; Sabale, S.R. Increasing Trends of Artificial Intelligence with Robotic Process Automation in Health Care: A Narrative Review. Cureus 2024, 16, e69680. [Google Scholar] [CrossRef]
  41. Figueroa, C.A.; Aguilera, A.; Chakraborty, B.; Modiri, A.; Aggarwal, J.; Deliu, N.; Sarkar, U.; Williams, J.J.; Lyles, C.R. Adaptive Learning Algorithms to Optimize Mobile Applications for Behavioral Health: Guidelines for Design Decisions. J. Am. Med. Inform. Assoc. 2021, 28, 1225–1234. [Google Scholar] [CrossRef] [PubMed]
  42. Abhichandani, S.; Vadrevu, N.R.T.; Bagmar, V. AI-Driven Self-Healing in Test Automation: A Review of Autonomous Quality Assurance; IEEE: New York, NY, USA, 2025; pp. 1601–1608. [Google Scholar]
  43. Tolk, A.; Rainey, L.B. Modeling and Simulation Support for System of Systems Engineering Applications; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  44. Zhu, L.; Lu, Q.; Ming, D.; Lee, S.U.; Wang, C. Designing Meaningful Human Oversight in AI. SSRN Electron. J. 2025. [Google Scholar] [CrossRef]
  45. Haque, A. Responsible Artificial Intelligence (AI) in Healthcare: A Paradigm Shift in Leadership and Strategic Management. Leadersh. Health Serv. 2025, 38, 644–656. [Google Scholar] [CrossRef]
Figure 1. System architecture with five layers integrating RDA, Generative AI, and human oversight for nephrology data workflows.
Figure 2. RDA bot with Generative AI modules for automation, validation, and HITL oversight across government systems.
Figure 3. Real-time monitoring interface of the RDA/AI-ERDA system showing task-level progress during data entry operations.
Figure 4. Automatic UI change detection in the AI-ERDA system, showing real-time identification and remapping of modified interface elements for adaptive automation.
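The automatic detection and remapping of modified interface elements described in Figure 4 can be illustrated with a minimal locator-fallback sketch. This is an assumption-laden illustration, not the framework's actual implementation: the control names, handles, and similarity cutoff are hypothetical, and the real system uses generative models rather than string similarity.

```python
import difflib

# Illustrative self-healing locator lookup: if a control's stored identifier no
# longer exists (e.g., a renamed "Save" button), fall back to the closest label
# among currently visible controls. Names and the cutoff are hypothetical.
def resolve_control(stored_name, visible_controls, cutoff=0.6):
    """visible_controls maps current control labels to UI handles."""
    if stored_name in visible_controls:
        return visible_controls[stored_name]   # exact match: no healing needed
    matches = difflib.get_close_matches(stored_name, visible_controls, n=1, cutoff=cutoff)
    if matches:
        return visible_controls[matches[0]]    # remap to the renamed control
    return None                                # escalate to human-in-the-loop
```

A `None` result corresponds to the human-in-the-loop path: the bot pauses rather than guessing at an unrecognized interface.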
Figure 5. Experimental workflow of the AI-ERDA evaluation, illustrating data preprocessing, control (RDA, n = 8) and intervention (AI-ERDA, n = 8) setups across three healthcare systems during a two-month period.
Figure 6. (Left) The NHSO system, a web-based application; and (Right) the TRT system, a PC-based information system.
Figure 7. The Health Service Information Bureau platform, a PC-based information system.
Figure 8. Comparison of System Usability Scale (SUS) ratings for RDA and AI-ERDA systems. AI-ERDA scored within the “Excellent” range, indicating higher perceived usability than the conventional RDA system.
Figure 9. Daily Field Exact-Match Accuracy of the Web System (NHSO) comparing RDA and AI-ERDA. Accuracy drops mark UI-change events where AI-ERDA recovered rapidly, confirming its adaptive resilience.
Figure 10. Daily Field Exact-Match Accuracy of PC System 1 (CHi) for RDA and AI-ERDA. Both systems sustained high accuracy, with AI-ERDA showing slightly steadier performance under transient latency.
Figure 11. Daily Field Exact-Match Accuracy of PC System 2 (TRT), comparing RDA and AI-ERDA. Both frameworks maintained near-perfect accuracy, with AI-ERDA exhibiting smoother stability under transient fluctuations.
Table 1. Roles of Generative AI in the RDA Framework.
| AI Function | Description | Example of Application |
| --- | --- | --- |
| Self-Healing Automation | Automatically adapts to minor UI changes without requiring manual reprogramming of the bot. | A "Save" button is moved to a different position or renamed; the bot continues operation seamlessly. |
| Semantic Validation & Anomaly Detection | Validates the semantic correctness of entered data and detects anomalies across systems. | A dialysis frequency entry exceeds the normal limit, or a lab value falls outside the expected range, triggering an alert. |
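The Semantic Validation & Anomaly Detection role in Table 1 amounts to range and consistency checks applied before a record is committed. A minimal rule-based sketch follows; the field names and clinical limits are illustrative assumptions, not the study's validated thresholds:

```python
# Illustrative range-based semantic validation for nephrology data fields.
# Field names and limits below are hypothetical examples, not the paper's rules.
EXPECTED_RANGES = {
    "dialysis_sessions_per_week": (1, 6),    # sessions per week
    "serum_creatinine_mg_dl": (0.2, 25.0),   # mg/dL
}

def validate_record(record):
    """Return a list of (field, value, reason) alerts for missing or out-of-range values."""
    alerts = []
    for field, (lo, hi) in EXPECTED_RANGES.items():
        if field not in record:
            alerts.append((field, None, "missing"))
            continue
        value = record[field]
        if not (lo <= value <= hi):
            alerts.append((field, value, f"outside expected range [{lo}, {hi}]"))
    return alerts
```

In the framework's workflow, any non-empty alert list would be routed to a human reviewer rather than written silently to the target system.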
Table 2. Item-wise comparison of SUS responses between the RDA and AI-ERDA systems.
| No | Question | RDA Mean (n = 8) | RDA SD | AI-ERDA Mean (n = 8) | AI-ERDA SD |
| --- | --- | --- | --- | --- | --- |
| 1 | I think that I would like to use this system frequently. | 3.58 | 0.53 | 3.96 | 0.47 |
| 2 | I found this system unnecessarily complex. | 2.73 | 0.61 | 2.38 | 0.55 |
| 3 | I thought this system was easy to use. | 3.84 | 0.57 | 4.12 | 0.41 |
| 4 | I think I would need support from a technical person to use this system. | 3.06 | 0.64 | 2.68 | 0.50 |
| 5 | I found the various functions in this system to be well integrated. | 3.31 | 0.58 | 4.25 | 0.39 |
| 6 | I thought there was too much inconsistency in this system. | 3.18 | 0.60 | 2.42 | 0.49 |
| 7 | I imagine most users would learn to use this system very quickly. | 3.91 | 0.49 | 4.18 | 0.45 |
| 8 | I found this system cumbersome to use. | 2.87 | 0.59 | 2.45 | 0.52 |
| 9 | I felt confident using this system. | 3.46 | 0.56 | 4.33 | 0.37 |
| 10 | I needed to learn a lot of things before I could get going with this system. | 3.02 | 0.63 | 2.73 | 0.54 |
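Table 2 reports item-level means of the raw 1 to 5 responses; aggregate SUS scores are computed per respondent with Brooke's standard scoring rule, in which odd-numbered (positively worded) items contribute (response − 1), even-numbered (negatively worded) items contribute (5 − response), and the summed contributions are scaled by 2.5 onto a 0 to 100 range. A minimal sketch of that rule:

```python
def sus_score(responses):
    """Compute a SUS score from ten 1-5 Likert responses (Brooke's scoring rule).

    Odd-numbered items are positively worded (contribute response - 1);
    even-numbered items are negatively worded (contribute 5 - response).
    The summed contributions are scaled by 2.5 onto a 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)   # i = 0 is item 1 (odd-numbered)
        for i, r in enumerate(responses)
    )
    return total * 2.5

# A respondent answering 4 on every positive item and 2 on every negative item:
print(sus_score([4, 2] * 5))  # → 75.0
```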
Table 3. Comparison of Field Exact-Match Accuracy and Tolerance Accuracy across platforms.
| Platform | Metric | RDA (Mean ± SD) | AI-ERDA (Mean ± SD) | Improvement (%) |
| --- | --- | --- | --- | --- |
| Web System (NHSO) | Field Exact-Match Accuracy (%) | 97.2 ± 1.4 | 99.0 ± 0.5 | +1.8 |
| Web System (NHSO) | Tolerance Accuracy (%) | 96.4 ± 1.3 | 98.9 ± 0.7 | +0.8 |
| PC System 1 (CHi) | Field Exact-Match Accuracy (%) | 99.4 ± 0.3 | 99.7 ± 0.2 | +0.3 |
| PC System 1 (CHi) | Tolerance Accuracy (%) | 99.6 ± 0.2 | 99.8 ± 0.1 | +0.2 |
| PC System 2 (TRT) | Field Exact-Match Accuracy (%) | 99.6 ± 0.2 | 99.8 ± 0.1 | +0.2 |
| PC System 2 (TRT) | Tolerance Accuracy (%) | 99.7 ± 0.1 | 99.9 ± 0.1 | +0.2 |
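Field Exact-Match Accuracy in Table 3 is the percentage of entered fields that equal the reference value exactly, while Tolerance Accuracy also counts numeric values falling within a small deviation of the reference. A sketch of both metrics; the 1% relative tolerance used here is an assumed illustration, not the study's quality-tolerance limit:

```python
def field_exact_match_accuracy(entered, reference):
    """Percentage of fields whose entered value equals the reference exactly."""
    hits = sum(e == r for e, r in zip(entered, reference))
    return 100.0 * hits / len(entered)

def tolerance_accuracy(entered, reference, rel_tol=0.01):
    """Percentage of numeric fields within a relative tolerance of the reference.

    The default 1% relative tolerance is an illustrative assumption.
    """
    hits = sum(abs(e - r) <= rel_tol * abs(r) for e, r in zip(entered, reference))
    return 100.0 * hits / len(entered)
```

Exact match is the stricter of the two, which is why its percentages in Table 3 never exceed the corresponding tolerance figures on a given platform.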

Share and Cite

Sangamuang, S.; Worragin, P.; Puritat, K.; Julrode, P.; Intawong, K. A Generative AI-Enhanced Robotic Desktop Automation Framework for Multi-System Nephrology Data Entry in Government Healthcare Platforms. Technologies 2025, 13, 558. https://doi.org/10.3390/technologies13120558