1. Introduction
Early intervention is essential for acute medical conditions such as stroke because clinical testing, diagnostic decisions, and treatments need to be started as early as possible, ideally within the first hour [1]. Remote Patient Monitoring (RPM) has matured across multiple clinical contexts, with recent systematic reviews showing benefits for patient safety and adherence and a downward trend in utilization-related costs (e.g., lower readmissions and fewer outpatient visits), albeit with heterogeneous endpoints [2]. However, most RPM systems remain disconnected from electronic health records (EHRs) and routine clinical workflows. This disconnection creates latency bottlenecks, and the rigid, productized designs of many systems make it difficult to incorporate new sensor technologies and evolving clinical models [3,4]. Meaningful adoption of RPM in clinical practice requires scalable, low-latency systems with audit trails that also make it easy to incorporate new medical devices and diagnostic algorithms.
The limitations can be addressed through a combination of real-time vital-sign transmission using Message Queue Telemetry Transport (MQTT), immutable logging via blockchain with zero-knowledge (ZK) Rollups (a scalable proof-based batching layer for blockchains), and pattern-oriented software architecture. MQTT is a lightweight publish–subscribe transport protocol well suited to low-latency, high-throughput data transport in time-sensitive clinical flows [5]. Blockchain, in turn, can record tamper-evident activity relevant to healthcare audits, including those referenced by HIPAA (U.S.), PIPEDA (Canada), and OPC guidance in Canada [6,7,8].
MQTT has been widely adopted in IoT and e-health for low-latency publish–subscribe messaging [9,10,11]. However, prior RPM systems often treat MQTT as a transport add-on rather than an end-to-end design principle. As a result, they rarely report sustained sub-second sensor-to-alert latency under high concurrency. Separately, blockchain has been explored for clinical auditability [12,13,14,15,16], yet Layer-1 approaches frequently incur prohibitive costs and add write-path latency that undermines real-time use. Advances in ZK-Rollups amortize Layer-1 data availability, significantly reducing per-record cost while preserving tamper evidence.
From a software engineering perspective, pattern-oriented designs (e.g., Factory/Strategy, Observer, Adapter) improve maintainability and integration velocity [17,18,19,20]. Still, existing healthcare systems seldom combine these patterns with a streaming architecture that allows hot-plug model/sensor onboarding without downtime. Wearable biosensors (e.g., Samsung Galaxy Watch 5, Apple Watch Series 8) now provide physiological sensor data with clinically relevant accuracy characteristics [21,22]. Nevertheless, prior pipelines typically require proprietary ecosystems or manual code changes to integrate new devices/models, and rarely pair real-time alerts with immutable and low-cost audit trails.
Preliminary clinical applicability was demonstrated by integrating a Food and Drug Administration (FDA)-cleared smartwatch [21,23] and a stroke risk stratification model in a non-clinical environment, which achieved a median end-to-end latency of 480 ms under load, ensuring sub-second alert delivery for this time-sensitive decision-making. For tamper-evident auditing, healthcare reviews conclude that blockchain’s strongest near-term utility is provenance and integrity assurance (rather than storing PHI on-chain), with most prototypes focusing on event logging and access control [13,24,25]. Reviews from the perspective of healthcare professionals identify barriers such as increased workload from data influx, unclear reimbursement, security/privacy concerns, clinician training needs, and variability in patient digital literacy [26,27].
Even with advances in monitoring technology, many existing RPM products cannot offer sub-second clinical alerts, add or update devices on the fly with zero downtime, or provide economically viable, immutable audit trails and logs at high volumes. To address the identified gaps, this paper presents a comprehensive Remote Health Monitoring System (RHMS) that utilizes the real-time messaging capabilities of MQTT, blockchain-based immutable logging through ZK-Rollups, and a software architecture based on established software engineering design patterns. We evaluated RHMS through three research questions (RQs) that frame the main contributions of our study. RQ1: Can a previously unseen wearable be integrated into the RHMS in under ten minutes with zero downtime or disruption to ongoing clinical operations? RQ2: Can the framework sustain a median end-to-end latency below one second while handling at least 500 concurrent real-time sensor streams? RQ3: Can every alert event (i.e., a salted hash of the alert payload) be immutably recorded on a blockchain ledger at a modeled per-record cost below $0.001 (USD)?
RHMS provided (i) publish–subscribe streaming with configurable alert policies to reduce notification “flapping,” (ii) an adapter interface that isolates clinical models from transport/UI concerns to simplify onboarding, and (iii) event-only immutable audit trails to support accountability while minimizing PHI exposure. Our application of the ZK-Rollup blockchain allowed us to deliver immutable ledger entries at a fraction of the cost of a conventional cloud-based database [16]. An operational web dashboard implementing the RHMS pipeline was deployed and is publicly viewable.
2. Methodology
2.1. System Overview
The RHMS demonstrates a modular architecture that incorporates MQTT messaging, blockchain-based audit logging, wearable sensors, machine learning models, and alerts (Figure 1). The RHMS includes an MQTT layer for low-latency, real-time data streaming with synthetic data; a blockchain for compliant, immutable audit logging; wearable sensors for continuous physiological data over time; and machine learning (ML) models that generate predictions and interpretations to provide clinicians with actionable information. The design takes into consideration low-latency performance, scalability, and auditable compliance, and provides an architectural approach that is adaptable to different clinical workflows.
At a high level, RHMS takes signals from everyday wearables and routes them through one unified processing “engine” that (i) receives the data, (ii) interprets it with explainable ML models, (iii) stores it for short- and long-term use, (iv) records each event in an immutable, append-only audit log (blockchain-backed), and (v) shares clinical results with existing hospital systems. The outputs are simple and immediate: alerting when action is needed, providing a live view for caregivers, and recording updates in clinical systems. Because the engine is modular, new sensors or decision tools can be added with minimal effort, keeping the framework adaptable as needs evolve.
2.2. System Architecture and Workflows (Steps)
Figure 2 is structured to show the component layout and the data flow. Numbers (1–4) mark the major subsystems, while letters (a–l) trace the step-by-step message path. Black arrows show the real-time path to the user interface; gold arrows show persistence/standards/audit paths (database writes, FHIR/EHR exchange, and blockchain logging).
Major subsystems (1–4): (1) Wearable input: A smartwatch streams live vitals via MQTT and serves as the system’s entry point. These messages are time-stamped and authenticated, then proceed through steps (a–c) into the secure ingress (2). (2) Secure ingress: MQTT traffic passes the firewall and API gateway with TLS/SSL, which validates credentials and normalizes payloads before routing. (3) Application/UI (MVVM): The ViewModel subscribes to the Repository; a lightweight cache smooths UI updates under network jitter. (4) Data services: Firebase RTDB supports sub-second dashboards; the blockchain ledger provides a verifiable audit record; FHIR formatting enables HL7/EHR interoperability.
Flow (a–l). A new reading or app action begins at (a), passes (b) the firewall and (c) the API gateway, then reaches the Repository (e), which fans out the same message to multiple sinks. Along the black path, the Repository notifies the ViewModel (d) and cache (5) so the View (i) and web dashboard (l) show current vitals and any risk status immediately. In parallel, the ML pipeline (j) computes a stroke risk score and emits a risk alert (k). Along the gold path, the message is written to the real-time DB (f), recorded on blockchain ledger (g) for a verifiable audit record, and converted to FHIR (h) for downstream HL7/EHR exchange. Separating places (numbers) from steps (letters) makes the diagram easy to scan: readers first find the subsystem box, then follow the lettered arrows to see exactly how one vital reading becomes a live display, an alert, an EHR record, and an auditable log.
The RHMS web dashboard is implemented using Django v4.2.23 with server-rendered templates and lightweight vanilla JavaScript (ES6). Live UI updates are delivered through the Firebase Realtime Database (Web SDK v10.8.0), while durable/history views are served via authenticated Django REST Framework v3.16.0 endpoints backed by PostgreSQL v15. The MQTT ingest layer uses Eclipse Mosquitto v2.0.18 with TLS, and client devices publish via the paho-mqtt v2.1.0 library. All components were developed and tested across macOS 14 (Sonoma) and Windows 11 (23H2). Python-based services were executed using Python 3.11, and deployments were containerized where appropriate.
2.3. Software and Tools
All experiments were conducted using the following software stack:
Operating systems: macOS 14 (Sonoma), Windows 11 (23H2).
Backend: Python 3.11, Django 4.2.23, Django REST Framework 3.16.0, django-environ 0.12.0, sqlparse 0.5.3.
Database: PostgreSQL 15.
MQTT Ingest: Eclipse Mosquitto 2.0.18 (TLS), paho-mqtt 2.1.0.
Realtime communication: Firebase Realtime Database Web SDK 10.8.0, Firebase Admin SDK (Python) 6.9.0.
Frontend: Chart.js 4.4.2, HTML5/ES6 JavaScript.
Machine learning adapter: CatBoost 1.2.5 (placeholder inference), numpy 2.3.0, matplotlib 3.10.3.
FHIR/HL7 interoperability: fhir.resources 8.0.0, fhir-core 1.0.1.
Blockchain layer: Solidity compiler 0.8.19, web3.py 7.12.0, Ganache 7.9.1, py-solc-x 2.0.4.
Load testing: Locust 2.25.0.
Implementation specifics (MQTT & Ledger): Eclipse Mosquitto serves as the MQTT broker. QoS 1 is used for vitals (at-least-once), QoS 0 for non-critical UI pings. Topic schema: rhms/patientId/vitals/signal. Broker parameters: keep-alive 30 s, session-expiry 60 s, no retained vitals. Transport security: TLS 1.3 with mutual TLS at the broker and token validation at the API gateway. For the immutable audit, alert payloads are hashed (SHA-256 + per-site salt) and the hash only is emitted as a Layer-2 (rollup) event; no PHI is written on-chain. The contract functions as an append-only audit log.
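To make the broker policy concrete, the following minimal publisher sketch applies the topic schema, QoS levels, keep-alive, and mutual-TLS settings listed above; the hostname, certificate paths, patient identifier, and payload fields are illustrative placeholders rather than RHMS production values.

    import json
    import ssl
    import time

    import paho.mqtt.client as mqtt

    BROKER_HOST = "broker.example.org"   # placeholder hostname
    PATIENT_ID = "patient-0001"          # placeholder patient identifier

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    # Mutual TLS: the device presents its own certificate and verifies the broker's.
    client.tls_set(ca_certs="ca.crt", certfile="device.crt", keyfile="device.key",
                   tls_version=ssl.PROTOCOL_TLS_CLIENT)
    client.connect(BROKER_HOST, 8883, keepalive=30)   # keep-alive 30 s per the policy
    client.loop_start()

    payload = json.dumps({"ts": time.time(), "value": 72, "unit": "beats/min"})
    # QoS 1 (at-least-once) for vitals; retain=False because no vitals are retained.
    client.publish(f"rhms/{PATIENT_ID}/vitals/heart_rate", payload, qos=1, retain=False)
    client.loop_stop()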
2.4. Software Design Patterns
RHMS used a pattern-driven architecture to keep integrations fast, changes safe, and performance predictable. Patterns were applied where they deliver concrete leverage: hot-plug onboarding of sensors and ML models, real-time UI updates without polling, tight access control around diagnosis and ledger writes, uniform domain objects for maintainability, incremental extension of vital metrics, and shared connections to avoid resource waste (Table 1). Implementation details for the Factory and Adapter patterns used in the model and sensor onboarding workflow are provided in Appendix A.1. This design directly supported our objectives: rapid, downtime-free onboarding (RQ1), sub-second end-to-end latency at scale (RQ2), and a governed path for compliant recording of decisions (RQ3).
Briefly: Factory + Strategy generated interchangeable adapters for models and devices, enabling <10 min setup and zero-downtime swaps. Observer powered live dashboards by letting Firebase RTDB changes push UI refreshes. Protection Proxy enforced strict RBAC at the diagnosis/ledger boundary to guard sensitive actions. Composite unified PatientProfile and VitalReading so services share a single, coherent data model. Decorator added new vital fields without touching existing classes. Singleton centralized Firebase Admin and Web3 connections to cut overhead. Finally, the Adapter pattern normalized heterogeneous device and model interfaces for plug-and-play operation.
Table 1 shows the software design patterns in RHMS with component mapping, concrete implementations, and technical benefits. Note that RBAC stands for Role-Based Access Control, CLI refers to Command Line Interface, UI refers to user interface, and SDK refers to Software Development Kit. Web3, often called Web 3.0, is a proposed next version of the World Wide Web built around decentralization, blockchain technology, and token-based economies.
For implementation details, see Appendix A, Code of Listing A1 (Factory and Adapter Pattern Implementation, Python 3.11.12).
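As a complement to Listing A1, the condensed sketch below illustrates how the Factory and Adapter patterns combine for hot-plug onboarding; the class names, registry, and placeholder inference call are illustrative simplifications rather than the exact identifiers of the appendix implementation.

    from abc import ABC, abstractmethod


    class ModelAdapter(ABC):
        """Uniform interface that isolates clinical models from transport/UI."""

        @abstractmethod
        def predict(self, window: dict) -> dict:
            """Map one window of normalized vitals to a standardized risk output."""


    class CatBoostStrokeAdapter(ModelAdapter):
        def __init__(self, config: dict):
            self.threshold = config.get("thresholds", {}).get("risk", 0.35)

        def predict(self, window: dict) -> dict:
            score = 0.0  # placeholder for the real CatBoost inference call
            return {"risk_score": score, "alert": score >= self.threshold}


    class AdapterFactory:
        """Factory keyed by the model_name field of each YAML configuration."""

        _registry = {"catboost_stroke": CatBoostStrokeAdapter}

        @classmethod
        def create(cls, config: dict) -> ModelAdapter:
            return cls._registry[config["model_name"]](config)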
2.5. Dual-Database Rationale (Realtime UI vs. Durable History)
We used Firebase Realtime Database (RTDB) for sub-second UI state and PostgreSQL for durable history/analytics. This choice reflected the present TRL 3–4 scope: RTDB provides push-based listeners (no polling), offline-first sync on web/mobile clients, and a managed auth+rules model that fits our single-page dashboard without introducing a separate cache/broker tier. In contrast, a Redis-centric design (e.g., Pub/Sub or Streams) would require operating and securing a high-availability cluster (persistence policy, failover, backups, ACLs), implementing client-side subscription semantics and offline behavior, and persisting events to PostgreSQL for longitudinal queries. For this feasibility study, minimizing DevOps surface area while retaining sub-second UI updates was the dominant constraint; PostgreSQL remained the system of record for audits/analytics. The adapter layer isolated storage so that future upgrades (e.g., migrating hot paths from RTDB to Redis Streams or Kafka) would not change model interfaces or transport.
2.6. Blockchain-Based Immutable Ledger
To create an immutable audit log without high fees, RHMS recorded a hashed alert event on ZKsync Rollups (Layer-2) [15]. A rollup batches many writes together and pays once for L1 data availability, which lowers the cost per record.
Costs were estimated as follows: we (i) ran the logging call on a local simulator to capture the gas used for the Layer-2 execution, (ii) added the L1 data-availability portion defined by the ZKsync fee model and amortized by the batch size [25], and (iii) converted the result to USD using the ETH price at the time. We repeated this with several gas price samples to reflect normal volatility.
Empirical Testnet Validation Protocol (zkSync Era Sepolia)
To complement modeled costs without expanding scope beyond TRL 3–4, we performed a minimal on-chain validation on zkSync Era Sepolia (Chain ID 300). We deployed a single-purpose event-emitter contract (AlertLog) that exposes log(bytes32) and batchLog(bytes32[]) and emits an AlertLogged(h, ts) event per record (no on-chain storage writes; no PHI). We executed two transactions corresponding to batching regimes B = 1 and B = 10 with a fixed pseudo-payload hash, then retrieved the Layer-2 transaction fee from public block-explorer receipts. Per-record cost was computed as fee/B. Contract address and transaction hashes are provided in Appendix C for verifiability and reproduction.
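For reference, the fee/B computation can be reproduced from public receipts with a few lines of web3.py; the RPC endpoint shown is the public zkSync Era Sepolia URL (an assumption for this sketch), and the transaction hash argument is a placeholder for the hashes listed in Appendix C.

    from web3 import Web3

    w3 = Web3(Web3.HTTPProvider("https://sepolia.era.zksync.dev"))

    def per_record_cost_eth(tx_hash: str, batch_size: int) -> float:
        """Layer-2 fee (gasUsed x effectiveGasPrice) amortized over B records."""
        receipt = w3.eth.get_transaction_receipt(tx_hash)
        fee_wei = receipt["gasUsed"] * receipt["effectiveGasPrice"]
        return float(w3.from_wei(fee_wei, "ether")) / batch_size

    # Usage: per_record_cost_eth("0x...", 10) for the B = 10 batchLog transaction.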
2.7. Interoperability Layer
The RHMS employed a robust interoperability layer to facilitate seamless integration with healthcare IT systems using standardized data formats such as HL7 (Health Level Seven) and FHIR (Fast Healthcare Interoperability Resources).
2.7.1. FHIR-HL7 Data Translation
The Django backend converted incoming wearable vitals (MQTT JSON) into FHIR Observation resources and then into HL7 v2 messages for EHR ingestion. This automated path ensured compatibility, reduced integration effort, and preserved end-to-end security.
Figure 3 shows the primary flow from left to right.
MQTT JSON → Interoperability Engine. A wearable (or gateway) publishes a compact JSON vital to the patient/id/vitals topic with QoS 1 (at-least-once). The transport is protected with TLS 1.3.
JSON → FHIR R4 Observation (≈5 ms). The engine validates the payload and creates a FHIR Observation tagged as a vital sign, attaching standard codes (e.g., LOINC 8867-4 for heart rate and UCUM for units).
FHIR → HL7 v2 message (≈10 ms) via the HL7 Mapper (Python/HL7apy). HL7 v2 is the long-standing messaging format used by hospital systems (typically an ORU^R01 “observation result” message with MSH/PID/OBR/OBX segments).
The resulting HL7 v2 message was sent over the HL7 v2.x channel to the EHR Database, where the reading was stored (an ACK is returned by the EHR in normal operation). Solid arrows in the figure show the primary data path (MQTT JSON → FHIR JSON → HL7 v2 → EHR). Dashed lines show the Error Queue path: if validation, mapping, or delivery fails, the message is queued and retried; failures beyond the retry policy are dead-lettered for manual review. In short, the diagram shows how a small MQTT JSON vital is standardized to FHIR, converted to HL7 v2, and reliably delivered to the EHR with secure transport and safe retries.
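As one concrete illustration of the JSON → FHIR step, the sketch below builds a heart-rate Observation resource as a plain dictionary; the incoming field names (patient_id, ts, value) are assumptions for this sketch, and the complete schema used by RHMS appears in Appendix A.2.

    from datetime import datetime, timezone

    def vitals_to_fhir_observation(msg: dict) -> dict:
        """Map a compact MQTT vital to a FHIR R4 Observation (heart-rate example)."""
        return {
            "resourceType": "Observation",
            "status": "final",
            "category": [{"coding": [{
                "system": "http://terminology.hl7.org/CodeSystem/observation-category",
                "code": "vital-signs",
            }]}],
            "code": {"coding": [{            # LOINC 8867-4 = heart rate
                "system": "http://loinc.org", "code": "8867-4", "display": "Heart rate",
            }]},
            "subject": {"reference": f"Patient/{msg['patient_id']}"},
            "effectiveDateTime": datetime.fromtimestamp(
                msg["ts"], tz=timezone.utc).isoformat(),
            "valueQuantity": {               # UCUM unit for beats per minute
                "value": msg["value"], "unit": "beats/min",
                "system": "http://unitsofmeasure.org", "code": "/min",
            },
        }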
2.7.2. JSON Schema Mapping for Clinical Observations
The translation rules were defined in a deployable mapping file (YAML/JSON) loaded by the Interoperability Engine. For each vital type, the file specifies input field names, the LOINC code, the UCUM unit, and the FHIR ↔ HL7 mapping directives (e.g., value → OBX-5, unit → OBX-6, timestamp → OBX-14). Adding a new device or metric therefore requires updating the mapping file only; no core service code changes are needed. See Appendix A, Code of Listing A2 for the FHIR Observation schema example. A complete example of the FHIR Observation JSON schema used in our pipeline is provided in Appendix A.2.
Lastly, this JSON schema was dynamically populated using the adapter pattern, which minimized the development burden of adding new devices or clinical algorithms.
2.8. Experimental Setup
This section details the technical validation used to characterize RHMS’s latency, throughput, and integration workflow under controlled synthetic load with a limited 30 min wearable trace. Clinical diagnostic accuracy is out of scope for this TRL 3–4 feasibility study.
2.8.1. Stroke Risk Prediction Model Integration
To evaluate system performance with a stroke risk model and adaptability, we integrated a previously published stroke risk model built with CatBoost (a gradient-boosted decision-tree library well suited to tabular physiological data) [28]. The model uses wearable-derived physiological sensor data, including heart-rate variability, ECG-derived features, and accelerometer statistics, and outputs a per-window risk score.
Integration uses our adapter layer, which maps the live MQTT stream to the model’s expected inputs and standardizes the output schema for downstream alerting. Configuration is provided via a single YAML file (a human-readable data serialization format commonly used for configuration files), specifying keys such as model_name, artifact_path, input_signals, output_schema, thresholds, and performance_target_ms. The adapter imposes an inference timeout of no more than 150 ms per window; if this limit is exceeded, alerting is bypassed for that window. Alerts are triggered when the risk score for a window meets or surpasses a threshold defined in the YAML configuration, provided this condition persists for N consecutive windows (see Appendix A, Code of Listing A3). The full YAML configuration used for the stroke-risk adapter is shown in Appendix A.3.
The model onboarding follows a three-step approach with no downtime (a minimal loader sketch follows the list below):
Place the serialized model artifact in the models directory.
Add the YAML configuration with input signal names and output mapping.
The adapter loader hot-reloads the configuration, making the model immediately available to the pipeline.
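A minimal sketch of the hot-reload loader behind step 3 is shown below, assuming YAML configurations live alongside the model artifacts; the polling approach, placeholder factory call, and function names are illustrative rather than the exact RHMS implementation.

    import threading
    import time
    from pathlib import Path

    import yaml

    MODELS_DIR = Path("models")          # directory holding artifacts + YAML configs
    _adapters: dict[str, object] = {}    # model_name -> live adapter instance
    _mtimes: dict[Path, float] = {}      # last-seen modification times

    def _build_adapter(config: dict) -> object:
        # Placeholder for the Factory call (e.g., AdapterFactory.create(config)).
        return config

    def _reload_changed_configs() -> None:
        for cfg_path in MODELS_DIR.glob("*.yaml"):
            mtime = cfg_path.stat().st_mtime
            if _mtimes.get(cfg_path) != mtime:            # new or updated config
                config = yaml.safe_load(cfg_path.read_text())
                _adapters[config["model_name"]] = _build_adapter(config)
                _mtimes[cfg_path] = mtime                 # swap is atomic per key

    def watch(poll_seconds: float = 2.0) -> None:
        """Poll for configuration changes without interrupting the live pipeline."""
        while True:
            _reload_changed_configs()
            time.sleep(poll_seconds)

    threading.Thread(target=watch, daemon=True).start()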
This minimal configuration ensured immediate deployment without service interruption. The inference latency for the CatBoost adapter met our target (≤150 ms), and end-to-end sensor-to-alert latency remained sub-second.
Reproducibility note: Adapter timing (window/timeout), thresholding, debounce, and cooldown parameters are surfaced as configuration and reproduced in Appendix B.
2.8.2. Workflow Timeline
The RHMS validation primarily focused on the acute stroke scenario, in which timely identification significantly impacts patient outcomes. Automated alerts were triggered when measurements crossed predefined thresholds to support timely intervention.
Figure 4 illustrates the timeline and workflow from sensor data acquisition to health risk alert generation within RHMS.
Our experiments simulated realistic acute-stroke scenarios (n = 120 episodes) using wearable sensor data streams. RHMS produced algorithmic risk alerts within a sub-second engineering target (median 480 ms). This is reported strictly as a system performance metric.
2.8.3. Experimental Setup for Latency and Throughput Testing
Latency and throughput experiments involved controlled synthetic load testing combined with real-world wearable data traces. Wearable device simulators generated synthetic MQTT payloads at clinically relevant rates (1 Hz per patient stream), corresponding to the update frequency of leading FDA-cleared wearable sensors. Network conditions were simulated using Linux tc-netem to emulate realistic WAN (Wide Area Network) conditions (median latency: 100 ms, jitter: ±10 ms, packet loss: 0.1%). The exact tc-netem WAN profile used for latency, jitter, and packet-loss emulation is listed in Appendix A.5.
We performed a 30 min continuous recording on a Samsung Galaxy Watch 5 (Samsung Electronics, Suwon, Republic of Korea), acquiring ECG, heart-rate, and accelerometer data to validate synthetic benchmarks against real-world sensor outputs. Performance was benchmarked using the Locust load-testing framework, configured for a sustained load of 500 concurrent sensor streams over 5 min intervals, measuring end-to-end latency and throughput stability. The Locust launcher and task skeleton used to generate the 500 parallel 1 Hz publishers are shown in Appendix A.6. The 500 concurrent sensor streams are not intended to represent a clinical scenario; rather, they constitute a stress test probing the system’s scalability and performance boundaries.
2.8.4. Adapter Policy (Inference and Alerting)
Our approach limited the duration of inference for each window to a maximum of 150 ms, achieved through asynchronous execution that can be canceled upon reaching the timeout. An alert was triggered if the risk score exceeded a configurable threshold, with a debounce interval of 10 s and a cooldown period of 60 s; the default threshold is set at 0.35. These parameters were all configurable, and the full adapter policy is provided in Appendix B.
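The following sketch shows one way to express this policy in code; the consecutive-window count N and the exact debounce/cooldown semantics are assumptions for illustration (the authoritative parameters are the Appendix B configuration).

    import time
    from dataclasses import dataclass, field

    @dataclass
    class AlertPolicy:
        threshold: float = 0.35       # default risk threshold from the text
        n_consecutive: int = 3        # N consecutive windows (illustrative default)
        debounce_s: float = 10.0      # risk must persist this long before alerting
        cooldown_s: float = 60.0      # quiet period after each emitted alert
        _streak: int = field(default=0, init=False)
        _streak_start: float = field(default=0.0, init=False)
        _last_alert: float = field(default=float("-inf"), init=False)

        def observe(self, risk_score: float, now: float | None = None) -> bool:
            """Return True if this window's score should emit an alert."""
            now = time.monotonic() if now is None else now
            if risk_score < self.threshold:
                self._streak = 0                  # condition broken: reset streak
                return False
            if self._streak == 0:
                self._streak_start = now          # streak begins on this window
            self._streak += 1
            if (self._streak >= self.n_consecutive
                    and now - self._streak_start >= self.debounce_s
                    and now - self._last_alert >= self.cooldown_s):
                self._last_alert = now
                return True
            return False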
2.8.5. Security and Threat Model Assessment
We performed a preliminary STRIDE-style assessment and security scanning with OWASP ZAP and SonarQube. Findings were triaged and remediated before deployment. All external traffic terminates at an API gateway that enforces TLS 1.3, validates Firebase ID tokens, and applies request-level authorization. Data in transit was protected end-to-end.
Table 2 summarizes key risks and mitigations: spoofing is mitigated by mutual TLS at the MQTT broker; tampering by strict RBAC and input validation at the API layer; information disclosure by log sanitization; denial of service by QoS limits and client throttling; and elevation of privilege by hardening admin endpoints with MFA. These controls support HIPAA/GDPR objectives for authentication/authorization, integrity, confidentiality, availability, and accountability.
3. Results
RHMS’s empirical performance metrics were evaluated against key technical performance requirements identified in the research questions. The results reflected system-level technical performance and should not be interpreted as clinical validation or efficacy.
3.1. Latency Performance
Latency was evaluated under sustained clinical load scenarios using 500 concurrent sensor streams (1 Hz each). Measurements reflected median end-to-end latency (sensor-to-dashboard alert), as shown in Table 3.
Under sustained load, RHMS consistently delivered sub-second end-to-end latency. These results do not constitute clinical efficacy [1].
Figure 5 shows the empirical CDF of end-to-end alert latency measured at 1 Hz across 500 concurrent streams over 5 min (≈150,000 samples). The vertical lines indicate that the median latency is 480 ms, and the 95th percentile is 590 ms.
3.2. Throughput Performance
The system throughput was evaluated by incrementally increasing the number of concurrent MQTT sensor streams until performance degradation (defined as >1 s latency) occurred. Using Locust v2.25.0, an open-source load-testing tool, the system sustained 545 concurrent 1 Hz streams (≈545 messages/s) for 5 min under steady load (Table 4).
Comparative context: Prior MQTT-based RPM evaluations typically report packet-level timing on single streams rather than end-to-end alert latency at scale. For example, Yew et al. report average jitter of 5.7 ms (LAN) and 50.1 ms (WAN) with 0% packet loss across 5000 ECG packets [10]. By contrast, under a sustained 545-stream (1 Hz) stress test, RHMS achieved a median end-to-end alert latency of 480 ms and a P95 of 590 ms (Figure 5). Alshammari [11] emphasizes reducing latency in an MQTT-based healthcare design but does not provide quantified end-to-end timings; we therefore present our 545-stream figure as scalability headroom rather than a clinical census assumption.
RHMS exhibited consistent and reliable data management at the system level, effectively supporting throughput capacities that are applicable to standard ward-scale environments.
3.3. Blockchain Transaction Cost Analysis
To evaluate the cost of immutably recording alert events, we modeled fees using an instrumented local execution to estimate L2 gas usage (g_L2), together with historical fee-oracle snapshots for the L2 and L1 gas prices (F and F_L1; see Section 3.5). Costs use snapshots from August–September 2025 and contemporaneous ETH–USD rates. All costs are reported as modeled values derived from snapshots rather than live on-chain transactions (Table 5).
At the median modeled rate of $0.00016 (USD), approximately 6000 events can be logged per US dollar. This is 6–7× lower than typical pay-as-you-go immutable storage options, subject to fee conditions and batching policy.
3.4. Empirical Validation on Testnet
To complement the modeled cost analysis, we conducted a lightweight empirical validation using a minimal event-emitter smart contract (AlertLog.sol) deployed on the zkSync Era Sepolia testnet (Chain ID 300). The contract emitted alert-hash logs under two batching regimes (B = 1 and B = 10) without storing data or exposing PHI. The total Layer-2 fees were 0.000002704825 ETH and 0.000004863575 ETH, corresponding to per-record costs of 2.704825 × 10⁻⁶ ETH and 4.863575 × 10⁻⁷ ETH, respectively. The results empirically validated the batching cost model, confirming an ≈82% reduction in per-record cost when using 10-record batches.
This testnet-based approach was aligned with TRL-3/4 scope and ensured verifiable, reproducible evidence without requiring mainnet spending.
3.5. Modeled Cost Calculator and Sensitivity
To remain within a TRL 3–4 feasibility scope, all ledger costs were modeled from fee-oracle snapshots without any on-chain spend. We parameterize the per-record cost (Table 6) by batch size B, L2 gas price F (wei), L1 gas price F_L1 (wei), L2 execution gas g_L2 (units), L1 data-availability gas g_DA (units), and the ETH–USD rate R. The modeled per-record USD cost is

C_USD = (g_L2 · F + (g_DA · F_L1) / B) · w · R,

where w = 10⁻¹⁸ converts wei to ETH. All figures in this subsection therefore represent modeled costs. A short routine for recomputing C_USD from fee snapshots is provided in Appendix A, Code of Listing A4 (Appendix A.4).
Interpretation: Increasing B amortizes the L1 data-availability term, while off-peak operation reduces both F and F_L1. In our modeled snapshot, moving from peak to off-peak conditions reduces cost by ∼19×. Because the audit log is asynchronous, these controls do not affect real-time alert latency; operational guardrails keep C_USD within a site-defined band.
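For readers who want to reproduce the sensitivity analysis, the formula above reduces to a few lines of Python, consistent with the fuller routine in Appendix A, Listing A4; the gas and price inputs in the usage comment are arbitrary illustrative values, not the Table 6 snapshots.

    WEI_PER_ETH = 10**18  # w in the formula above converts wei to ETH

    def per_record_usd(g_l2: int, f_l2: int, g_da: int, f_l1: int,
                       batch_size: int, eth_usd: float) -> float:
        """C_USD = (g_L2 * F + (g_DA * F_L1) / B) * w * R."""
        fee_wei = g_l2 * f_l2 + (g_da * f_l1) / batch_size
        return fee_wei / WEI_PER_ETH * eth_usd

    # Illustrative usage (arbitrary inputs): batching from B = 1 to B = 10
    # amortizes only the L1 data-availability term of the per-record cost.
    print(per_record_usd(80_000, 25_000_000, 1_000, 10**9, 1, 2500.0))
    print(per_record_usd(80_000, 25_000_000, 1_000, 10**9, 10, 2500.0))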
Empirical Testnet Check (zkSync Era Sepolia)
To complement our modeled cost analysis, we deployed a minimal event-emitter contract (AlertLog) on zkSync Era Sepolia (Chain ID 300) and executed alert-hash logs under two batching regimes, B = 1 and B = 10 (Table 7). The measured Layer-2 fees were 0.000002704825 ETH and 0.000004863575 ETH, yielding per-record costs of 2.704825 × 10⁻⁶ ETH (B = 1) and 4.863575 × 10⁻⁷ ETH (B = 10; per-record cost = fee/10). These measurements validate the batching effect (≈82% reduction) while keeping the study within TRL 3–4 scope.
3.6. Integration Efficiency and Adaptability
The modular, adapter-based architecture reduced the integration time for new ML models and wearable sensors significantly. In empirical trials, a new model was integrated in under ten minutes with zero service downtime across five tests, which demonstrates a clear operational advantage over existing RPM solutions.
4. Discussion
This study was designed to develop and rigorously validate a unified RHMS capable of delivering sub-second alerts, cost-efficient immutable logging, and seamless zero-downtime integration of new devices and machine learning models. Compared to prior MQTT-based RPM frameworks [9,10,11], RHMS demonstrated low-latency potential with integrated blockchain-backed auditability and dynamic device onboarding.
Under 500 concurrent 1 Hz streams, RHMS achieved a median end-to-end latency of 480 ms and a 95th percentile of 590 ms (Figure 5), meeting our sub-second engineering target. This latency performance surpassed previously reported MQTT-based health monitoring systems [10,11] that typically operated in the sub-second to multi-second range under similar loads. However, the system performance metrics do not constitute clinical efficacy claims. The adapter-based approach reduced model onboarding to under ten minutes without downtime, simplifying integration compared with more rigid, pattern-based clinical architectures and typical deployment timelines reported in prior work [17,18,19]. Furthermore, ZK-Rollup blockchain logging yielded a modeled per-record cost of $0.00016 (USD), approximately six to seven times cheaper than traditional immutable cloud storage solutions such as AWS DynamoDB or managed blockchain services [12,13,14,15,16].
Operational Guardrails for Fee Volatility: To bound ledger-cost variance at TRL 3–4, we combined (i) an off-peak scheduler that periodically queries a fee oracle and defers non-urgent audit writes whenever the instantaneous fee exceeds a site-chosen percentile (e.g., 75th), with a hard dwell cap (e.g., 15 min) to preserve liveness; (ii) smart batching that aggregates alert hashes to a target batch size B (e.g., 10–50) while enforcing an upper dwell bound (e.g., 5 min) to avoid excessive delay under sparse traffic; and (iii) a daily Layer-1 anchor that publishes a single commitment (e.g., a Merkle root over the day’s Layer-2 logs) to cap long-horizon data-availability risk while keeping L1 exposure predictable. Empirical validation on zkSync Era Sepolia corroborated our modeled cost analysis and showed that batching materially lowered per-record fees (≈82% reduction from B = 1 to B = 10 in our check). In this TRL 3–4 study, ledger writes run on an asynchronous path and therefore do not affect real-time alert latency, and testnet measurements are used solely to verify the batching effect and reproducibility rather than to assert production-environment economics.
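A sketch of guardrail (ii) is shown below; the fee-oracle callable, submit function, and default bounds are illustrative assumptions that mirror the example values in the text (target batch size with a 5 min dwell cap), not the deployed implementation.

    import time

    class BatchingGuardrail:
        def __init__(self, submit_batch, fee_percentile_ok,
                     target_batch: int = 10, max_dwell_s: float = 300.0):
            self.submit_batch = submit_batch            # e.g., wraps batchLog(bytes32[])
            self.fee_percentile_ok = fee_percentile_ok  # True if fee <= site percentile
            self.target_batch = target_batch
            self.max_dwell_s = max_dwell_s
            self.pending: list[bytes] = []
            self.oldest_ts: float | None = None

        def add(self, alert_hash: bytes) -> None:
            self.pending.append(alert_hash)
            self.oldest_ts = self.oldest_ts or time.monotonic()
            self.maybe_flush()

        def maybe_flush(self) -> None:
            if not self.pending:
                return
            dwell = time.monotonic() - self.oldest_ts
            full = len(self.pending) >= self.target_batch
            # Flush when the batch is full and fees are acceptable, or when the
            # dwell cap expires (liveness takes precedence over fee optimization).
            if (full and self.fee_percentile_ok()) or dwell >= self.max_dwell_s:
                self.submit_batch(self.pending)
                self.pending, self.oldest_ts = [], None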
Integration of an FDA-cleared smartwatch-based stroke risk model demonstrated the feasibility of RHMS integration with existing medical-grade algorithms. The CatBoost model achieved an Area Under the Curve (AUC) of 0.98 as reported in [28] while maintaining sub-150 ms inference times, and the adapter pattern enabled immediate compatibility with RHMS data flows. This ensured that real-time predictions were consistently delivered within the sub-second end-to-end latency target, aligning with and improving upon previous wearable-based stroke detection studies that reported higher latency or required proprietary device ecosystems [21,22].
The dual-database strategy, using Firebase RTDB for real-time responsiveness and PostgreSQL plus blockchain for historical integrity, ensured that high-speed alerting was not compromised by immutable logging overhead. This contrasts with previous healthcare blockchain prototypes [12,13] in which on-chain latency penalties limited clinical feasibility. Our evaluation focused on technical performance and feasibility rather than clinical outcomes. Experiments used synthetic load (Locust-generated MQTT streams at 1 Hz) and limited real-world traces (Samsung Galaxy Watch 5, 30 min) to stress the end-to-end pipeline. Reported blockchain costs were derived from ZK-Rollup simulations cross-checked with public gas fee data; they reflect typical conditions and may vary with network state. A read-only RHMS demo is hosted online for inspection of the user interface and alert workflow; the instance displays only de-identified, synthetic data and exposes no PHI (Protected Health Information) [24]. Our use of event-only, hash-based logging aligns with published recommendations [24,25].
Regulatory Considerations with HIPAA, PIPEDA, and OPC: ZK-Rollup integration preserved compliance with HIPAA, PIPEDA, and OPC guidelines for immutable clinical record-keeping [6,7,8]. These frameworks set standards for the collection, storage, and sharing of patient data, effectively mandating the strongest feasible protections for Protected Health Information (PHI). Since RHMS includes the processing of continuous physiological sensor data and device identifiers, following these frameworks requires all forms of communication, at all levels, to include strong encryption, access control, and audit trail mechanisms. The ZK-Rollup blockchain and dual-database architecture already provide transparent and immutable data logging that complies with regulatory auditability principles, while future incorporation of federated learning could further enhance privacy by preventing centralized data aggregation. Clinical implementation will require approval through institutional ethics boards, certifications on data sovereignty, and validated cybersecurity assurance for credible, auditable interoperability and integration into hospital information systems with clinical and legal compliance.
Limitations: While RHMS showed strong performance and feasibility for practical use in care centers, some limitations should be acknowledged when interpreting these results. The current validation focused mainly on stroke-related care through integration of the CatBoost model, and was tested using synthetic data streams and limited real-world traces from a single smartwatch (Samsung Galaxy Watch 5). The blockchain cost analysis relied on simulation and a small-scale testbed deployment rather than production environments, and scalability was measured and validated only up to 500 concurrent streams. Additionally, the security assessment was preliminary and did not include comprehensive penetration testing. These constraints highlight areas for improvement and warrant caution when generalizing results to broader clinical deployments.
Future work will focus on scaling the MQTT broker for deployments beyond the current 500-stream test load and expanding the adapter library to cover a wider range of devices and predictive models, a critical step towards enabling large-scale clinical deployments. Next, we will add other physiological sensor data to provide richer clinical context, and we will explore federated learning for privacy-preserving model updates. Finally, larger, multi-condition clinical trials will help establish scalability, generalizability, and sustained compliance under real-world operational conditions, specifically addressing this study’s reliance on synthetic data and a single device. Before any clinical deployment, we plan third-party penetration testing of the broker and gateway, formal verification of the smart contract, broker clustering with HA/failover runbooks, and disaster-recovery drills that cover PostgreSQL and RTDB.