Article

A Novel Modular Framework for Secure and Scalable Remote Health Monitoring: RHMS

Faculty of Science, Thompson Rivers University, Kamloops, BC V2C 0C8, Canada
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(23), 12623; https://doi.org/10.3390/app152312623
Submission received: 8 September 2025 / Revised: 1 November 2025 / Accepted: 24 November 2025 / Published: 28 November 2025
(This article belongs to the Special Issue Robotics, IoT and AI Technologies in Bioengineering, 2nd Edition)

Abstract

Background: Remote health monitoring for time-critical conditions (e.g., acute stroke) demands rapid, reliable data delivery and immediate clinical interpretation. However, existing Remote Patient Monitoring (RPM) frameworks often exhibit fragmented designs, latency bottlenecks, and integration challenges when onboarding new sensors or clinical algorithms. Methods: To address these gaps, we introduce a unified Remote Health Monitoring System (RHMS) that combines MQTT-driven sensor transport, a pattern-oriented software architecture, and blockchain-based immutable audit logging. Results: In a TRL 3–4 technical feasibility evaluation using synthetic load and a 30 min smartwatch trace, RHMS achieved a median end-to-end latency of 480 ms (IQR 110 ms; P95 < 600 ms) under 500 concurrent 1 Hz streams and a peak throughput of 545 streams/s in controlled environments. The system emits algorithmic risk alerts from an integrated model; no adjudicated clinical diagnoses were performed. A modeled rollup-backed audit log estimates a per-record cost of $0.00016 (USD). Conclusion: RHMS demonstrates technical feasibility and interoperability that aligns with clinical recommendations. Clinical validation is out of scope for this study and will require prospective trials.

1. Introduction

Early intervention is essential for acute medical conditions such as stroke because clinical testing, diagnostic decisions, and treatments need to be started as early as possible, ideally within the first hour [1]. Remote Patient Monitoring (RPM) has matured across multiple clinical contexts, with recent systematic reviews showing benefits for patient safety and adherence and a downward trend in utilization-related costs (e.g., lower readmissions and fewer outpatient visits), albeit with heterogeneous endpoints [2]. However, most RPM systems remain disconnected from electronic health records (EHRs) and routine clinical workflows. This disconnection creates latency bottlenecks, and most systems also rely on rigid, productized designs that make it difficult to incorporate new sensor technologies and evolving clinical models [3,4]. Meaningful adoption of RPM in clinical practice therefore requires scalable, low-latency systems that provide an audit trail while making it easy to incorporate new medical devices and diagnostic algorithms.
These limitations can be addressed by combining real-time vital-sign transmission over Message Queuing Telemetry Transport (MQTT), immutable data logging via blockchain with zero-knowledge (ZK) Rollups (a scalable, proof-based batching layer for blockchain), and a pattern-oriented software architecture. MQTT is a lightweight transport protocol well suited to time-sensitive clinical flows that require low-latency, high-throughput data transport [5]. Blockchain, in turn, can record tamper-evident activity relevant to healthcare audits, including those referenced by HIPAA (U.S.), PIPEDA (Canada), and OPC guidance in Canada [6,7,8].
MQTT has been widely adopted in IoT and e-health for low-latency publish–subscribe messaging [9,10,11]. However, prior RPM systems often treat MQTT as a transport add-on rather than an end-to-end design principle. As a result, they rarely report sustained sub-second sensor-to-alert latency under high concurrency. Separately, blockchain has been explored for clinical auditability [12,13,14,15,16], yet Layer-1 approaches frequently incur prohibitive costs and add write-path latency that undermines real-time use. Advances in ZK-Rollups amortize Layer-1 data availability, significantly reducing per-record cost while preserving tamper evidence.
From a software engineering perspective, pattern-oriented designs (e.g., Factory/Strategy, Observer, Adapter) improve maintainability and integration velocity [17,18,19,20]. Still, existing healthcare systems seldom combine these patterns with a streaming architecture that allows hot-plug model/sensor onboarding without downtime. Wearable biosensors (e.g., Samsung Galaxy Watch 5, Apple Watch Series 8) now provide physiological sensor data with clinically relevant accuracy characteristics [21,22]. Nevertheless, prior pipelines typically require proprietary ecosystems or manual code changes to integrate new devices/models, and rarely pair real-time alerts with immutable and low-cost audit trails.
Preliminary clinical applicability was demonstrated by integrating a Food and Drug Administration (FDA)-cleared smartwatch [21,23] and a stroke risk stratification model in a non-clinical environment; the pipeline achieved a median end-to-end latency of 480 ms under load, ensuring sub-second alert delivery for this time-sensitive decision-making. For tamper-evident auditing, healthcare reviews conclude that blockchain’s strongest near-term utility is provenance and integrity assurance (rather than storing PHI on-chain), with most prototypes focusing on event logging and access control [13,24,25]. Reviews from the perspective of healthcare professionals identify barriers such as increased workload from data influx, unclear reimbursement, security/privacy concerns, clinician training needs, and variability in patient digital literacy [26,27].
Even with advances in monitoring technology, many established RPM products cannot deliver sub-second clinical alerts, add or update devices on the fly with zero downtime, or provide economically viable, immutable audit trails at high volume. To address these gaps, this paper presents a comprehensive Remote Health Monitoring System (RHMS) that utilizes the real-time messaging capabilities of MQTT, blockchain-based immutable logging through ZK-Rollups, and a software architecture based on established software engineering design patterns. We evaluated RHMS through three research questions (RQs) as the main contributions of our study. RQ1: Can a previously unseen wearable be integrated into the RHMS in under ten minutes with zero downtime or disruption to ongoing clinical operations? RQ2: Can the framework sustain a median end-to-end latency below one second while handling at least 500 concurrent real-time sensor streams? RQ3: Can every alert event (i.e., a salted hash of the alert payload) be immutably recorded on a blockchain ledger at a modeled per-record cost below $0.001 (USD)?
RHMS provided (i) publish–subscribe streaming with configurable alert policies to reduce notification “flapping,” (ii) an adapter interface that isolates clinical models from transport/UI concerns to simplify onboarding, and (iii) event-only immutable audit trails to support accountability while minimizing PHI exposure. Our application of the ZK-Rollup blockchain allowed us to deliver immutable ledger entries at a fraction of the cost of using a conventional cloud-based database [16]. An operational web dashboard implementing the RHMS pipeline was deployed and publicly viewable.

2. Methodology

2.1. System Overview

The RHMS demonstrates a modular architecture that incorporates MQTT messaging, blockchain-based audit logging, wearable sensors, machine learning models, and alerts (Figure 1). It comprises an MQTT layer for low-latency, real-time data streaming (driven here by synthetic data), a blockchain for compliant, immutable audit logging, wearable sensors for continuous physiological measurements over time, and machine learning (ML) models that turn predictions into actionable information for clinicians. The design balances low-latency performance, scalability, and auditable compliance, providing an architectural approach that is adaptable to different clinical workflows.
At a high level, RHMS takes signals from everyday wearables and routes them through one unified processing “engine” that (i) receives the data, (ii) interprets it with explainable ML models, (iii) stores it for short- and long-term use, (iv) records each event in an immutable, append-only audit log (blockchain-backed), and (v) shares clinical results with existing hospital systems. The outputs are simple and immediate: alerting when action is needed, providing a live view for caregivers, and recording updates in clinical systems. Because the engine is modular, new sensors or decision tools can be added with minimal effort, keeping the framework adaptable as needs evolve.

2.2. System Architecture and Workflows (Steps)

Figure 2 is structured to show the component layout and the data flow. Numbers (1–4) mark the major subsystems, while letters (a–l) trace the step-by-step message path. Black arrows show the real-time path to the user interface; gold arrows show persistence/standards/audit paths (database writes, FHIR/EHR exchange, and blockchain logging).
Major subsystems (1–4): (1) Wearable input: A smartwatch streams live vitals via MQTT and serves as the system’s entry point. These messages are time-stamped and authenticated, then proceed through steps (a–c) into the secure ingress (2). (2) Secure ingress: MQTT traffic passes the firewall and API gateway with TLS/SSL, which validates credentials and normalizes payloads before routing. (3) Application/UI (MVVM): The ViewModel subscribes to the Repository; a lightweight cache smooths UI updates under network jitter. (4) Data services: Firebase RTDB supports sub-second dashboards; the blockchain ledger provides a verifiable audit record; FHIR formatting enables HL7/EHR interoperability.
Flow (a–l). A new reading or app action begins at (a), passes (b) the firewall and (c) the API gateway, then reaches the Repository (e), which fans out the same message to multiple sinks. Along the black path, the Repository notifies the ViewModel (d) and cache (5) so the View (i) and web dashboard (l) show current vitals and any risk status immediately. In parallel, the ML pipeline (j) computes a stroke risk score and emits a risk alert (k). Along the gold path, the message is written to the real-time DB (f), recorded on blockchain ledger (g) for a verifiable audit record, and converted to FHIR (h) for downstream HL7/EHR exchange. Separating places (numbers) from steps (letters) makes the diagram easy to scan: readers first find the subsystem box, then follow the lettered arrows to see exactly how one vital reading becomes a live display, an alert, an EHR record, and an auditable log.
The RHMS web dashboard is implemented using Django v4.2.23 with server-rendered templates and lightweight Vanilla JavaScript (ES6). Live UI updates are delivered through the Firebase Realtime Database (Web SDK v10.8.0), while durable/history views are served via authenticated Django REST Framework v3.16.0 endpoints backed by PostgreSQL v15. The MQTT ingest layer uses Eclipse Mosquitto v2.0.18 with TLS, and client devices publish via the paho-mqtt v2.1.0 library. All components were developed and tested on macOS 14 (Sonoma) and Windows 11 (23H2). Python-based services were executed using Python 3.11, and deployments were containerized where appropriate.

2.3. Software and Tools

All experiments were conducted using the following software stack:
  • Operating systems: macOS 14 (Sonoma), Windows 11 (23H2).
  • Backend: Python 3.11, Django 4.2.23, Django REST Framework 3.16.0, django-environ 0.12.0, sqlparse 0.5.3.
  • Database: PostgreSQL 15.
  • MQTT Ingest: Eclipse Mosquitto 2.0.18 (TLS), paho-mqtt 2.1.0.
  • Realtime communication: Firebase Realtime Database Web SDK 10.8.0, Firebase Admin SDK (Python) 6.9.0.
  • Frontend: Chart.js 4.4.2, HTML5/ES6 JavaScript.
  • Machine learning adapter: CatBoost 1.2.5 (placeholder inference), numpy 2.3.0, matplotlib 3.10.3.
  • FHIR/HL7 interoperability: fhir.resources 8.0.0, fhir-core 1.0.1.
  • Blockchain layer: Solidity compiler 0.8.19, web3.py 7.12.0, Ganache 7.9.1, py-solc-x 2.0.4.
  • Load testing: Locust 2.25.0.
Implementation specifics (MQTT & Ledger): Eclipse Mosquitto serves as the MQTT broker. QoS 1 is used for vitals (at-least-once), QoS 0 for non-critical UI pings. Topic schema: rhms/patientId/vitals/signal. Broker parameters: keep-alive 30 s, session-expiry 60 s, no retained vitals. Transport security: TLS 1.3 with mutual TLS at the broker and token validation at the API gateway. For the immutable audit, alert payloads are hashed (SHA-256 + per-site salt) and the hash only is emitted as a Layer-2 (rollup) event; no PHI is written on-chain. The contract functions as an append-only audit log.
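As a concrete illustration of the hash-only audit path described above, the following sketch shows how a salted SHA-256 digest of a canonicalized alert payload could be produced. The function name and payload fields are ours, not from the RHMS codebase; only the general mechanism (SHA-256 over payload plus per-site salt, hash-only emission) is from the text.

```python
import hashlib
import json

def alert_audit_hash(alert_payload: dict, site_salt: bytes) -> str:
    """Return the SHA-256 digest of a canonicalized alert payload plus a
    per-site salt. Only this hex digest would be emitted as a Layer-2
    event; the payload itself (and any PHI) never leaves the site."""
    # Canonical JSON (sorted keys, no whitespace) so logically identical
    # alerts always hash to the same value.
    canonical = json.dumps(alert_payload, sort_keys=True,
                           separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(site_salt + canonical).hexdigest()

# Two logically identical alerts (different key order) hash identically.
alert = {"patientId": "p-001", "signal": "hr", "risk": 0.41, "ts": 1693526400}
h1 = alert_audit_hash(alert, b"site-A-salt")
h2 = alert_audit_hash(dict(reversed(list(alert.items()))), b"site-A-salt")
```

Canonicalization matters here: without sorted keys, the same alert serialized by two services could produce two different digests and break audit verification.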

2.4. Software Design Patterns

RHMS used a pattern-driven architecture to keep integrations fast, changes safe, and performance predictable. Patterns were applied where they deliver concrete leverage: hot-plug onboarding of sensors and ML models, real-time UI updates without polling, tight access control around diagnosis and ledger writes, uniform domain objects for maintainability, incremental extension of vital metrics, and shared connections to avoid resource waste (Table 1). Implementation details for the Factory and Adapter pattern used in the model and sensor onboarding workflow are provided in Appendix A.1. This design directly supported our objectives: rapid, downtime-free onboarding (RQ1), sub-second end-to-end latency at scale (RQ2), and a governed path for compliant recording of decisions (RQ3).
Briefly: Factory + Strategy generated interchangeable adapters for models and devices, enabling <10 min setup and zero-downtime swaps. Observer powered the live dashboards by letting Firebase RTDB changes push UI refreshes. Protection Proxy enforced strict RBAC at the diagnosis/ledger boundary to guard sensitive actions. Composite unified PatientProfile and VitalReading so services share a single, coherent data model. Decorator added new vital fields without touching existing classes. Singleton centralized Firebase Admin and Web3 connections to cut overhead. Finally, the Adapter pattern normalized heterogeneous device and model interfaces for plug-and-play operation.
Table 1 shows the software design patterns in RHMS with component mapping, concrete implementations, and technical benefits. Note that RBAC stands for Role-Based Access Control, CLI refers to Command Line Interface, UI refers to user interface, and SDK refers to Software Development Kit. Web3, often called Web 3.0, is a proposed next version of the World Wide Web built around decentralization, blockchain technology, and token-based economies.
For implementation details, see Appendix A, Code of Listing A1 (Factory and Adapter Pattern Implementation, Python 3.11.12).
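To make the Factory/Adapter combination concrete, here is a minimal sketch under our own naming; ModelAdapter, AdapterFactory, and the placeholder scoring rule are illustrative, not the implementation in Appendix A, Listing A1.

```python
from abc import ABC, abstractmethod

class ModelAdapter(ABC):
    """Uniform interface every clinical model adapter must implement."""
    @abstractmethod
    def predict(self, window: dict) -> float:
        """Return a risk score in [0, 1] for one window of vitals."""

class StrokeRiskAdapter(ModelAdapter):
    def __init__(self, threshold: float = 0.35):
        self.threshold = threshold
    def predict(self, window: dict) -> float:
        # Placeholder scoring; a real adapter would invoke the CatBoost model.
        return min(1.0, window.get("hr", 0) / 200.0)

class AdapterFactory:
    """Factory keyed by model_name; new adapters register themselves, so
    onboarding a model requires no changes to existing pipeline code."""
    _registry: dict = {}
    @classmethod
    def register(cls, name: str, adapter_cls) -> None:
        cls._registry[name] = adapter_cls
    @classmethod
    def create(cls, name: str, **kwargs) -> ModelAdapter:
        return cls._registry[name](**kwargs)

AdapterFactory.register("stroke_risk", StrokeRiskAdapter)
adapter = AdapterFactory.create("stroke_risk")
score = adapter.predict({"hr": 120})
```

The pipeline only ever sees the `ModelAdapter` interface, which is what allows a new model to be hot-registered without touching transport or UI code.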

2.5. Dual-Database Rationale (Realtime UI vs. Durable History)

We used Firebase Realtime Database (RTDB) for sub-second UI state and PostgreSQL for durable history/analytics. This choice reflected the present TRL 3–4 scope: RTDB provides push-based listeners (no polling), offline-first sync on web/mobile clients, and a managed auth+rules model that fits our single-page dashboard without introducing a separate cache/broker tier. In contrast, a Redis-centric design (e.g., Pub/Sub or Streams) would require operating and securing a high-availability cluster (persistence policy, failover, backups, ACLs), implementing client-side subscription semantics and offline behavior, and persisting events to PostgreSQL for longitudinal queries. For this feasibility study, minimizing DevOps surface area while retaining sub-second UI updates was the dominant constraint; PostgreSQL remained the system of record for audits/analytics. The adapter layer isolated storage so that future upgrades (e.g., migrating hot paths from RTDB to Redis Streams or Kafka) would not change model interfaces or transport.

2.6. Blockchain-Based Immutable Ledger

To create an immutable audit log without high fees, RHMS recorded a hashed alert event on ZKsync Rollups (Layer-2) [15]. A rollup batches many writes together and pays once for L1 data availability, which lowers the cost per record.
Costs were estimated as follows: we (i) ran the logging call on a local simulator to capture the gas used for Layer-2 execution, (ii) added the L1 data-availability portion defined by the ZKsync fee model, amortized by the batch size [25], and (iii) converted the result to USD using the ETH price at the time. We repeated this with several gas-price samples to reflect normal volatility.

Empirical Testnet Validation Protocol (zkSync Era Sepolia)

To complement modeled costs without expanding scope beyond TRL 3–4, we performed a minimal on-chain validation on zkSync Era Sepolia (Chain ID 300). We deployed a single-purpose event-emitter contract (AlertLog) that exposes log(bytes32) and batchLog(bytes32[]) and emits an AlertLogged(h,ts) event per record (no on-chain storage writes; no PHI). We executed two transactions corresponding to batching regimes B = 1 and B = 10 with a fixed pseudo-payload hash, then retrieved the Layer-2 transaction fee from public block-explorer receipts. Per-record cost was computed as fee/B. Contract address and transaction hashes are provided in Appendix C for verifiability and reproduction.

2.7. Interoperability Layer

The RHMS employed a robust interoperability layer to integrate seamlessly with healthcare IT systems using standardized data formats such as HL7 (Health Level Seven) and FHIR (Fast Healthcare Interoperability Resources).

2.7.1. FHIR-HL7 Data Translation

The Django backend converted incoming wearable vitals (MQTT JSON) into FHIR Observation resources and then into HL7 v2 messages for EHR ingestion. This automated path ensured compatibility, reduced integration effort, and preserved end-to-end security. Figure 3 shows the primary flow from left to right.
  • MQTT JSON → Interoperability Engine. A wearable (or gateway) publishes a compact JSON vital to the rhms/patientId/vitals/signal topic with QoS 1 (at-least-once). The transport is protected with TLS 1.3.
  • JSON → FHIR R4 Observation (≈5 ms). The engine validates the payload and creates a FHIR Observation tagged as a vital sign, attaching standard codes (e.g., LOINC 8867-4 for heart rate and UCUM for units).
  • FHIR → HL7 v2 message (≈10 ms) via the HL7 Mapper (Python/HL7apy). HL7 v2 is the long-standing messaging format used by hospital systems (typically an ORU^R01 “observation result” message with MSH/PID/OBR/OBX segments).
The resulting HL7 v2 message was sent over the HL7 v2.x channel to the EHR Database, where the reading was stored (an ACK is returned by the EHR in normal operation). Solid arrows in the figure show the primary data path (MQTT JSON → FHIR JSON → HL7 v2 → EHR). Dashed lines show the Error Queue path: if validation, mapping, or delivery fails, the message is queued and retried; failures beyond the retry policy are dead-lettered for manual review. In short, the diagram shows how a small MQTT JSON vital is standardized to FHIR, converted to HL7 v2, and reliably delivered to the EHR with secure transport and safe retries.
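The JSON → FHIR step above can be sketched as follows. The message fields and function name are illustrative, while the LOINC code (8867-4) and UCUM unit match those cited in the text.

```python
def vitals_to_fhir_observation(msg: dict) -> dict:
    """Map a compact MQTT vitals message to a FHIR R4 Observation dict
    for a heart-rate reading (LOINC 8867-4, UCUM /min)."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs"}]}],
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": "8867-4",
                             "display": "Heart rate"}]},
        "subject": {"reference": f"Patient/{msg['patientId']}"},
        "effectiveDateTime": msg["timestamp"],
        "valueQuantity": {"value": msg["hr"],
                          "unit": "beats/minute",
                          "system": "http://unitsofmeasure.org",
                          "code": "/min"},
    }

obs = vitals_to_fhir_observation(
    {"patientId": "p-001", "hr": 72, "timestamp": "2025-09-08T10:00:00Z"})
```

A downstream mapper can then walk this dict to populate the OBX segments of the ORU^R01 message.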

2.7.2. JSON Schema Mapping for Clinical Observations

The translation rules were defined in a deployable mapping file (YAML/JSON) loaded by the Interoperability Engine. For each vital type, the file specifies: input field names, LOINC code, UCUM unit, and the FHIR ↔ HL7 mapping directives (e.g., value → OBX-5, unit → OBX-6, timestamp → OBX-14). Adding a new device or metric, therefore, requires updating the mapping file only; no core service code changes are needed. See Appendix A, Code of Listing A2 for the FHIR Observation schema example. A complete example of the FHIR Observation JSON schema used in our pipeline is provided in Appendix A.2.
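A minimal sketch of this mapping-driven translation follows, with a hypothetical mapping entry mirroring the directives described above (value → OBX-5, unit → OBX-6, timestamp → OBX-14); the entry and function are ours, not the deployed mapping file.

```python
# Hypothetical mapping entry, mirroring the YAML/JSON mapping file format.
HEART_RATE_MAP = {
    "input_field": "hr",
    "loinc": "8867-4",
    "ucum": "/min",
    "hl7": {"value": "OBX-5", "unit": "OBX-6", "timestamp": "OBX-14"},
}

def build_obx_fields(msg: dict, mapping: dict) -> dict:
    """Translate one vital into its HL7 v2 OBX field assignments using
    only the mapping directives, so adding a metric needs no new code."""
    hl7 = mapping["hl7"]
    return {
        "OBX-3": mapping["loinc"],                    # observation identifier
        hl7["value"]: msg[mapping["input_field"]],    # measured value
        hl7["unit"]: mapping["ucum"],                 # UCUM unit
        hl7["timestamp"]: msg["timestamp"],           # observation time
    }

fields = build_obx_fields({"hr": 72, "timestamp": "20250908100000"},
                          HEART_RATE_MAP)
```

Because the code reads the target segment names from the mapping rather than hard-coding them, a new device or metric really does reduce to a new mapping entry.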
Lastly, this JSON schema was dynamically populated using the adapter pattern, which minimized the development burden of adding new devices or clinical algorithms.

2.8. Experimental Setup

This section details the technical validation used to characterize RHMS’s latency, throughput, and integration workflow under controlled synthetic load with a limited 30 min wearable trace. Clinical diagnostic accuracy is out of scope for this TRL 3–4 feasibility study.

2.8.1. Stroke Risk Prediction Model Integration

To evaluate system performance with a stroke risk model and adaptability, we integrated a previously published stroke risk model built with CatBoost (a gradient-boosted decision-tree library well-suited to tabular physiological data) [28]. The model used wearable-derived physiological sensor data, including heart-rate variability, ECG-derived features, and accelerometer statistics, and outputs a per-window risk score.
Integration uses our adapter layer, which maps the live MQTT stream to the model’s expected inputs and standardizes the output schema for downstream alerting. Configuration is provided via a single YAML file (a human-readable data serialization format commonly used for configuration files), specifying keys such as model_name, artifact_path, input_signals, output_schema, thresholds, and performance_target_ms. The adapter imposes an inference timeout of at most 150 ms per window; if this limit is exceeded, alerting is skipped for that window. Alerts are triggered when the risk score for a window meets or surpasses a predefined threshold in the YAML configuration, provided this condition persists for N consecutive windows (see Appendix A, Code of Listing A3). The full YAML configuration used for the stroke-risk adapter is shown in Appendix A.3.
The model onboarding follows a three-step approach (no downtime):
  • Place the serialized model artifact in the models directory.
  • Add the YAML configuration with input signal names and output mapping.
  • The adapter loader hot-reloads the configuration, making the model immediately available to the pipeline.
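The hot-reload step above can be sketched as follows; scan_and_register is our illustrative name, and JSON stands in for YAML to keep the sketch dependency-free (the deployed loader reads YAML with the configuration keys listed earlier).

```python
import json
import pathlib
import tempfile

def scan_and_register(models_dir: str, registry: dict) -> dict:
    """Scan the models directory for configuration files and (re)register
    each model by name. Invoking this on a timer or filesystem event gives
    hot-reload behaviour: dropping in a new artifact plus config makes the
    model available without restarting the pipeline."""
    for cfg_path in pathlib.Path(models_dir).glob("*.json"):
        cfg = json.loads(cfg_path.read_text())
        registry[cfg["model_name"]] = cfg  # replace-on-reload, no downtime
    return registry

# Simulate dropping a new model config into the models directory.
with tempfile.TemporaryDirectory() as d:
    cfg = {"model_name": "stroke_risk",
           "artifact_path": "models/stroke.cbm",
           "input_signals": ["hr", "hrv"],
           "performance_target_ms": 150}
    (pathlib.Path(d) / "stroke_risk.json").write_text(json.dumps(cfg))
    registry = scan_and_register(d, {})
```

Because registration replaces the entry in place, an updated model version simply supersedes the old one on the next scan.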
This minimal configuration ensured immediate deployment without service interruption. The inference latency for the CatBoost adapter met our target (≤150 ms), keeping end-to-end sensor-to-alert latency sub-second.
Reproducibility note: Adapter timing (window/timeout), thresholding, debounce, and cooldown parameters are surfaced as configuration and reproduced in Appendix B.

2.8.2. Workflow Timeline

The RHMS validation primarily focused on the acute stroke scenario, in which timely identification significantly impacts patient outcomes. Automated alerts were triggered when measurements crossed predefined thresholds to support timely intervention. Figure 4 illustrates the timeline and workflow from sensor data acquisition to health risk alert generation within RHMS.
Our experiments simulated realistic acute-stroke scenarios (n = 120 episodes) using wearable sensor data streams. RHMS produced algorithmic risk alerts within a sub-second engineering target (median 480 ms). This is reported strictly as a system performance metric.

2.8.3. Experimental Setup for Latency and Throughput Testing

Latency and throughput experiments involved controlled synthetic load testing combined with real-world wearable data traces. Wearable device simulators generated synthetic MQTT payloads at clinically relevant rates (1 Hz per patient stream), corresponding to the update frequency of leading FDA-cleared wearable sensors. Network conditions were simulated using Linux tc-netem to emulate realistic WAN (Wide Area Network) conditions (median latency: 100 ms, jitter: ±10 ms, packet loss: 0.1%). The exact tc-netem WAN profile used for latency, jitter, and packet-loss emulation is listed in Appendix A.5.
We performed a 30 min continuous recording on a Samsung Galaxy Watch 5 (Samsung Electronics, Suwon, Republic of Korea) by acquiring ECG, heart-rate, and accelerometer data to compare and validate synthetic benchmarks against real-world sensor outputs. Performance was benchmarked using the Locust load-testing framework, configured for a sustained load of 500 concurrent sensor streams over 5 min intervals, measuring end-to-end latency and throughput stability. The Locust launcher and task skeleton used to generate the 500 parallel 1 Hz publishers are shown in Appendix A.6. The 500 concurrent sensor streams are not intended to represent a clinical scenario; rather, they constitute a stress test of the system’s scalability limits and performance boundaries.

2.8.4. Adapter Policy (Inference and Alerting)

Our approach limited the duration of inference for each window to a maximum of 150 ms, achieved through asynchronous execution that can be canceled upon reaching a timeout. An alert was triggered if the risk level exceeded a threshold τ, with a debounce interval of 10 s and a cooldown period of 60 s; the default threshold τ is set at 0.35. These parameters were all configurable, and the full adapter policy is provided in Appendix B.
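The policy above (threshold τ = 0.35, 10 s debounce, 60 s cooldown) can be sketched as a small state machine. AlertPolicy and its exact debounce semantics (score must stay at or above τ for the full debounce interval before firing) are our interpretation for illustration, not the code in Appendix B.

```python
class AlertPolicy:
    """Fire when the risk score stays at or above tau for the debounce
    interval, then suppress further alerts for the cooldown period.
    Timestamps are in seconds."""
    def __init__(self, tau: float = 0.35,
                 debounce_s: float = 10.0, cooldown_s: float = 60.0):
        self.tau, self.debounce_s, self.cooldown_s = tau, debounce_s, cooldown_s
        self._above_since = None  # when the score first crossed tau
        self._last_alert = None   # when we last fired

    def update(self, ts: float, score: float) -> bool:
        if score < self.tau:
            self._above_since = None  # reset debounce on any dip below tau
            return False
        if self._above_since is None:
            self._above_since = ts
        in_cooldown = (self._last_alert is not None
                       and ts - self._last_alert < self.cooldown_s)
        if not in_cooldown and ts - self._above_since >= self.debounce_s:
            self._last_alert = ts
            return True
        return False

policy = AlertPolicy()
# 1 Hz scores: quiet for 5 s, then sustained elevation above tau.
fired = [policy.update(t, 0.5 if t >= 5 else 0.1) for t in range(30)]
```

In this trace the score crosses τ at t = 5, the alert fires once at t = 15 after the 10 s debounce, and the cooldown suppresses repeats, which is the anti-flapping behaviour described in Section 1.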

2.8.5. Security and Threat Model Assessment

We performed a preliminary STRIDE-style assessment and security scanning with OWASP ZAP and SonarQube. Findings were triaged and remediated before deployment. All external traffic terminates at an API gateway that enforces TLS 1.3, validates Firebase ID tokens, and applies request-level authorization. Data in transit was protected end-to-end. Table 2 summarizes key risks and mitigations: spoofing was mitigated by mutual TLS at the MQTT broker; tampering by strict RBAC and input validation at the API layer; information disclosure by log sanitization; denial of service by QoS limits and client throttling; and elevation of privilege by hardening admin endpoints with MFA. These controls support HIPAA/GDPR objectives for authentication/authorization, integrity, confidentiality, availability, and accountability.

3. Results

RHMS’s empirical performance metrics were evaluated against key technical performance requirements identified in the research questions. The results reflected system-level technical performance and should not be interpreted as clinical validation or efficacy.

3.1. Latency Performance

Latency was evaluated under sustained clinical load scenarios using 500 concurrent sensor streams (1 Hz each). Measurements reflected median end-to-end latency (sensor-to-dashboard alert) as shown in Table 3.
Under test conditions, RHMS consistently delivered sub-second end-to-end latency under load. These results do not constitute clinical efficacy [1].
Figure 5 shows the empirical CDF of end-to-end alert latency measured at 1 Hz across 500 concurrent streams over 5 min (≈150,000 samples). The vertical lines indicate that the median latency is 480 ms, and the 95th percentile is 590 ms.

3.2. Throughput Performance

System throughput was evaluated by incrementally increasing the number of concurrent MQTT sensor streams until performance degradation (defined as >1 s latency) occurred. Using the open-source load-testing tool Locust v2.25.0, the system sustained 545 streams/second for 5 min under steady load (Table 4).
Comparative context: Prior MQTT-based RPM evaluations typically report packet-level timing on single streams rather than end-to-end alert latency at scale. For example, Yew et al. report avg. jitter = 5.7 ms (LAN) and 50.1 ms (WAN) with 0% packet loss across 5000 ECG packets [10]. By contrast, under a sustained 545-stream (1 Hz) stress test, RHMS achieved median end-to-end alert latency = 480 ms and P95 = 590 ms (Figure 5). Alshammari [11] emphasizes reducing latency in an MQTT-based healthcare design but does not provide quantified end-to-end timings; we therefore present our 545-stream figure as scalability headroom rather than a clinical census assumption.
RHMS exhibited consistent and reliable data management at the system level, effectively supporting throughput capacities that are applicable to standard ward-scale environments.

3.3. Blockchain Transaction Cost Analysis

To evaluate the cost of immutably recording alert events, we modeled fees using an instrumented local execution to estimate L2 gas usage (g2), together with historical fee-oracle snapshots for the L2 and L1 gas prices (F and G1). Costs use snapshots from August–September 2025 and contemporaneous ETH–USD rates. All costs are reported as modeled values derived from snapshots rather than live on-chain transactions (Table 5).
At the median modeled rate of $0.00016 (USD), approximately 6000 events can be logged per US dollar. This is 6–7× lower than typical pay-as-you-go immutable storage options, subject to fee conditions and batching policy.

3.4. Empirical Validation on Testnet

To complement the modeled cost analysis, we conducted a lightweight empirical validation using a minimal event-emitter smart contract (AlertLog.sol) deployed on the zkSync Era Sepolia testnet (Chain ID 300). The contract emitted alert-hash logs under two batching regimes (B = 1 and B = 10) without storing data or exposing PHI. The total Layer-2 fees were 0.000002704825 ETH and 0.000004863575 ETH, corresponding to per-record costs of 2.70 × 10⁻⁶ ETH and 4.86 × 10⁻⁷ ETH, respectively. The results empirically validated the batching cost model, confirming an ≈82% reduction in per-record cost when using 10-record batches.
This testnet-based approach was aligned with TRL-3/4 scope and ensured verifiable, reproducible evidence without requiring mainnet spending.

3.5. Modeled Cost Calculator and Sensitivity

To remain within a TRL 3–4 feasibility scope, all ledger costs were modeled from fee-oracle snapshots without any on-chain spend. We parameterize the per-record cost (Table 6) by batch size B, L2 gas price F (wei), L1 gas price G1 (wei), L2 execution gas g2 (units), L1 data-availability gas gDA (units), and the ETH–USD rate R. The modeled per-record USD cost is
C_USD = ((g2 · F + gDA · G1) / B) · R / 10^18,
where 10^18 converts wei to ETH. All figures in this subsection therefore represent modeled costs. A short routine for recomputing C_USD from fee snapshots is provided in Appendix A, Code of Listing A4 (Appendix A.4).
Interpretation: Increasing B amortizes the L1 data-availability term, while off-peak operation reduces both F and G1. In our modeled snapshot, moving from B = 1 (peak) to B = 50 (off-peak) reduces cost by ∼19×. Because the audit log is asynchronous, these controls do not affect real-time alert latency; operational guardrails for keeping C_USD within a site-defined band are discussed in Section 4.
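The modeled cost can be recomputed directly from the equation above. The parameter values in this sketch are illustrative placeholders, not the paper's fee-oracle snapshots; only the formula itself is from the text.

```python
def modeled_cost_usd(g2: int, F: int, g_da: int, G1: int,
                     B: int, R: float) -> float:
    """Per-record modeled USD cost from the paper's equation:
    C_USD = ((g2*F + g_da*G1) / B) * R / 1e18  (wei -> ETH -> USD).
    g2/F: L2 execution gas and gas price; g_da/G1: L1 data-availability
    gas and gas price; B: batch size; R: ETH-USD rate."""
    return ((g2 * F + g_da * G1) / B) * R / 1e18

# Illustrative snapshot (not the paper's values): batching divides the
# whole per-batch wei cost across B records.
params = dict(g2=300_000, F=25_000_000, g_da=2_000,
              G1=20_000_000_000, R=2_500.0)
c_b1 = modeled_cost_usd(B=1, **params)
c_b50 = modeled_cost_usd(B=50, **params)
```

With any fixed snapshot, the per-record cost scales as 1/B under this formula, which is why the paper's B = 1 to B = 50 comparison yields a large multiple once the off-peak reduction in F and G1 is also applied.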

Empirical Testnet Check (zkSync Era Sepolia)

To complement our modeled cost analysis, we deployed a minimal event-emitter (AlertLog) on zkSync Era Sepolia (Chain ID 300) and executed alert-hash logs under two batching regimes: B = 1 and B = 10 (Table 7). The measured Layer-2 fees were 0.000002704825 ETH and 0.000004863575 ETH, yielding per-record costs of 2.704825 × 10⁻⁶ ETH (B = 1) and 4.863575 × 10⁻⁷ ETH (B = 10; fee/10). These measurements validate the batching effect (≈82% reduction) while keeping the study within TRL 3–4 scope.
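The per-record figures above follow directly from fee/B; a quick arithmetic check using the reported transaction fees:

```python
# Reported zkSync Era Sepolia Layer-2 fees for the two validation
# transactions (from the text above).
fee_b1 = 0.000002704825   # ETH, batch size B = 1
fee_b10 = 0.000004863575  # ETH, batch size B = 10

per_record_b1 = fee_b1 / 1
per_record_b10 = fee_b10 / 10
reduction = 1 - per_record_b10 / per_record_b1  # ~0.82, i.e. ~82%
```

Note that the B = 10 transaction costs more in total than the B = 1 transaction, but roughly 5.6× less per record, which is the amortization effect the batching model predicts.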

3.6. Integration Efficiency and Adaptability

The modular, adapter-based architecture significantly reduced the integration time for new ML models and wearable sensors. In empirical trials, a new model was integrated in under ten minutes with zero service downtime across five tests, demonstrating a clear operational advantage over existing RPM solutions.

4. Discussion

This study was designed to develop and technically validate a unified RHMS capable of delivering sub-second alerts, cost-efficient immutable logging, and seamless, zero-downtime integration of new devices and machine learning models. Compared to prior MQTT-based RPM frameworks [9,10,11], RHMS demonstrated low-latency potential with integrated blockchain-backed auditability and dynamic device onboarding.
Under 500 concurrent 1 Hz streams, RHMS achieved a median end-to-end latency of 480 ms and a 95th percentile of 590 ms (Figure 5), meeting our sub-second engineering target. This latency performance surpassed previously reported MQTT-based health monitoring systems [10,11] that typically operated in the sub-second to multi-second range under similar loads. However, the system performance metrics do not constitute clinical efficacy claims. The adapter-based approach reduced model onboarding to under ten minutes without downtime, simplifying integration compared with more rigid, pattern-based clinical architectures and typical deployment timelines reported in prior work [17,18,19]. Furthermore, ZK-Rollup blockchain logging yielded a modeled per-record cost of $0.00016 (USD), which was approximately six–seven times cheaper than traditional immutable cloud storage solutions such as AWS DynamoDB or managed blockchain services [12,13,14,15,16].
Operational Guardrails for Fee Volatility: To bound ledger-cost variance at TRL 3–4, we combined (i) an off-peak scheduler that periodically queries a fee oracle and defers non-urgent audit writes whenever the instantaneous fee exceeds a site-chosen percentile (e.g., 75th), with a hard dwell cap (e.g., 15 min) to preserve liveness; (ii) smart batching that aggregates alert hashes to a target batch size B (e.g., 10–50) while enforcing an upper dwell bound (e.g., 5 min) to avoid excessive delay under sparse traffic; and (iii) a daily Layer-1 anchor that publishes a single commitment (e.g., a Merkle root over the day’s Layer-2 logs) to cap long-horizon data-availability risk while keeping L1 exposure predictable. Empirical validation on zkSync Era Sepolia corroborated our modeled cost analysis and showed that batching materially lowered per-record fees (≈82% reduction from B = 1 to B = 10 in our check). In this TRL 3–4 study, ledger writes run on an asynchronous path and therefore do not affect real-time alert latency, and the testnet measurements are used solely to verify the batching effect and reproducibility rather than to assert production-environment economics.
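Guardrail (ii) can be sketched as a small batching component: flush when the batch is full and fees are acceptable, or unconditionally once the oldest pending hash exceeds the dwell bound. The class name, flush conditions, and fee-percentile stub below are illustrative assumptions, not the RHMS implementation.

```python
import time
from collections import deque

class AuditBatcher:
    """Illustrative smart-batching guardrail for asynchronous audit writes."""

    def __init__(self, batch_size=10, max_dwell_s=300, fee_percentile_cap=0.75):
        self.batch_size = batch_size                   # target batch size B
        self.max_dwell_s = max_dwell_s                 # upper dwell bound (s)
        self.fee_percentile_cap = fee_percentile_cap   # site-chosen fee cap
        self.pending = deque()                         # (enqueue_time, alert_hash)

    def submit(self, alert_hash, now=None):
        now = time.time() if now is None else now
        self.pending.append((now, alert_hash))
        return self.maybe_flush(now)

    def maybe_flush(self, now, current_fee_percentile=0.5):
        if not self.pending:
            return None
        full = len(self.pending) >= self.batch_size
        stale = now - self.pending[0][0] >= self.max_dwell_s
        cheap = current_fee_percentile <= self.fee_percentile_cap
        if stale or (full and cheap):   # the dwell cap overrides fee deferral
            batch = [h for _, h in self.pending]
            self.pending.clear()
            return batch                # batch would be written to the L2 ledger
        return None

b = AuditBatcher(batch_size=3)
b.submit("h1", now=0); b.submit("h2", now=1)
print(b.submit("h3", now=2))  # -> ['h1', 'h2', 'h3']
```

Because flushing happens off the alert path, deferring writes during fee spikes delays only the audit record, never the clinical alert.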
Integration of an FDA-cleared smartwatch-based stroke risk model demonstrated the feasibility of RHMS integration with existing medical-grade algorithms. The CatBoost model achieved an Area Under the Curve (AUC) of 0.98 as reported in [28] while maintaining sub-150 ms inference times, and the adapter pattern enabled immediate compatibility with RHMS data flows. This ensured that real-time predictions were consistently delivered within the sub-second latency end-to-end target, aligning with and improving upon previous wearable-based stroke detection studies that reported higher latency or required proprietary device ecosystems [21,22].
The dual-database strategy, using Firebase RTDB for real-time responsiveness and PostgreSQL plus blockchain for historical integrity, ensured that high-speed alerting was not compromised by immutable logging overhead. This contrasts with previous healthcare blockchain prototypes [12,13] in which on-chain latency penalties limited clinical feasibility. Our evaluation focused on technical performance and feasibility rather than clinical outcomes. Experiments used synthetic load (Locust-generated MQTT streams at 1 Hz) and limited real-world traces (Samsung Galaxy Watch 5, ~30 min) to stress the end-to-end pipeline. Reported blockchain costs were derived from ZK-Rollup simulations cross-checked with public gas-fee data; they reflect typical conditions and may vary with network state. A read-only RHMS demo is hosted online for inspection of the user interface and alert workflow; the instance displays only de-identified, synthetic data and exposes no PHI (Protected Health Information) [24]. Our event-only, hash-based logging design also aligns with published recommendations for keeping sensitive data off-chain [24,25].
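Event-only, hash-based logging can be sketched as follows: a canonical digest of the de-identified alert record is computed, and only that digest would be written to the ledger while the payload stays in PostgreSQL. The record fields and function name here are hypothetical illustrations, not the RHMS schema.

```python
import hashlib
import json

def alert_hash(alert: dict) -> str:
    """Deterministic SHA-256 commitment over a canonical JSON serialization.
    Only this digest goes on-chain; the (de-identified) payload stays off-chain."""
    canonical = json.dumps(alert, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical de-identified alert record (no PHI).
record = {"alert_id": "a-001", "risk_score": 0.41, "ts": "2025-06-01T13:10:00Z"}
digest = alert_hash(record)
print(digest)  # 64 hex characters; recomputable later for audit verification
```

An auditor holding the stored record can recompute the digest and compare it against the on-chain event, proving the record was not altered after logging.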
Regulatory Considerations with HIPAA, PIPEDA, and OPC: ZK-Rollup integration preserved compliance with HIPAA, PIPEDA, and OPC guidelines for immutable clinical record-keeping [6,7,8]. These frameworks set standards for the collection, storage, and sharing of patient data, effectively mandating the strongest feasible protections for Protected Health Information (PHI). Since RHMS processes continuous physiological sensor data and device identifiers, following these frameworks requires strong encryption, access control, and audit-trail mechanisms at every communication layer. The ZK-Rollup blockchain and dual-database architecture already provide transparent and immutable data logging that complies with regulatory auditability principles, while future incorporation of federated learning could further enhance privacy by preventing centralized data aggregation. Clinical implementation will require approval by institutional ethics boards, data-sovereignty certifications, cybersecurity assurance, and validated, auditable integration with hospital information systems under clinical and legal compliance.
Limitations: While RHMS showed strong performance and feasibility for practical use in care centers, some limitations should be acknowledged when interpreting these results. The current validation with an ML model focused mainly on stroke-related care with integration of the CatBoost model, and was tested using synthetic data streams and limited real-world traces from a single smart watch (Samsung Galaxy Watch 5). Blockchain cost analysis relied on simulation and small-scale testbed deployment rather than production environments, and scalability was measured and validated only up to 500 concurrent streams. Additionally, the security assessment was preliminary and did not include comprehensive penetration testing. These constraints highlight areas for improvement and caution when generalizing results to broader clinical deployments.
Future work will focus on scaling the MQTT broker beyond the 500-stream concurrent test loads and expanding the adapter library to cover a wider range of devices and predictive models, a critical step towards enabling large-scale clinical deployments. Next, we will add other physiological sensor data to provide a richer clinical context, and we will explore federated learning for privacy-preserving model updates. Finally, larger, multi-condition clinical trials will assess scalability, generalizability, and sustained compliance under real-world operational conditions; this step specifically addresses the limitations of this study’s reliance on synthetic data and a single device. Before any clinical deployment, we plan third-party penetration testing of the broker and gateway, formal verification of the smart contract, broker clustering with HA/failover runbooks, and disaster-recovery drills covering PostgreSQL and RTDB.

5. Conclusions

This work presented a unified RHMS designed to address key limitations of existing solutions, such as high latency, rigid integration pipelines, and costly or incomplete audit logging. By combining low-latency MQTT-based communication, a modular adapter-oriented architecture, and ZK-Rollup audit logging, RHMS demonstrates technical feasibility and standards-aligned interoperability at TRL 3–4. Our system makes the following contributions: (i) zero-downtime onboarding of unseen wearables and ML models via a single-file adapter; (ii) sub-second median latency (480 ms) under 500 concurrent streams; and (iii) a modeled per-record ledger cost of $0.00016 (USD), 6–7× lower than conventional immutable storage, while maintaining compliance objectives. At $0.00016 per record, roughly 6 M records/year can be logged on the blockchain ledger for about $1000 (USD). These results position RHMS as a TRL 3–4 proof-of-principle and a feasible pathway to future clinical evaluation in acute-care monitoring applications.

Author Contributions

Conceptualization, Y.M.; Methodology, S.K., E.M., I.K. and Y.M.; Software, S.K. and E.M.; Validation, S.K.; Investigation, S.K. and Y.M.; Resources, E.M. and I.K.; Writing—original draft, S.K., E.M., I.K. and Y.M.; Writing—review & editing, E.M. and I.K.; Visualization, S.K.; Supervision, Y.M.; Project administration, Y.M.; Funding acquisition, Y.M. All authors have read and agreed to the published version of the manuscript.

Funding

This study is supported by the Internal Research Fund (IRF) from Thompson Rivers University, Canada, 2024.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

A read-only demo of the RHMS dashboard (synthetic data) is available at https://www.mamatjanlab.com/rhms/, accessed on 6 September 2025.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Code Listings

Appendix A.1. Code of Listing A1

Listing A1. Factory and Adapter Pattern Implementation (Python).
class AbstractMLAdapter:
    def predict(self, data):
        raise NotImplementedError


class CatBoostStrokeAdapter(AbstractMLAdapter):
    # self.model is assumed to be loaded during adapter initialization
    def predict(self, data):
        # CatBoost model inference, <120 ms latency
        return self.model.predict(data)


class AdapterFactory:
    adapters = {
        "stroke_model": CatBoostStrokeAdapter,
        # Additional adapters can be registered here
    }

    @staticmethod
    def get_adapter(adapter_type):
        adapter_cls = AdapterFactory.adapters.get(adapter_type)
        if adapter_cls:
            return adapter_cls()
        raise ValueError(f"Unknown adapter type: {adapter_type}")


# Adapter instantiated dynamically from config
adapter = AdapterFactory.get_adapter("stroke_model")
prediction = adapter.predict(sensor_data)

Appendix A.2. Code of Listing A2

Listing A2. JSON Schema for FHIR Observation (Simplified Example).
{
  "resourceType": "Observation",
  "id": "obs12345",
  "status": "final",
  "category": [{
    "coding": [{
      "system": "http://terminology.hl7.org/CodeSystem/observation-category",
      "code": "vital-signs"
    }]
  }],
  "code": {
    "coding": [{
      "system": "http://loinc.org",
      "code": "8867-4",
      "display": "Heart rate"
    }]
  },
  "subject": {
    "reference": "Patient/patient123"
  },
  "effectiveDateTime": "2024-06-01T13:10:00Z",
  "valueQuantity": {
    "value": 75,
    "unit": "beats/minute",
    "system": "http://unitsofmeasure.org",
    "code": "/min"
  }
}

Appendix A.3. Code of Listing A3

Listing A3. YAML-Based Adapter Configuration for Stroke-Risk Model Integration.
name: StrokeRiskCatBoost
window_ms: 1000
timeout_ms: 150      # hard cap per inference window

alerts:
  threshold: 0.35    # risk score in [0, 1] to trigger alert
  debounce_seconds: 10
  cooldown_seconds: 60

telemetry:
  log_timeouts: true
  log_skipped_windows: true

Appendix A.4. Code of Listing A4

Listing A4. (Pseudocode): ZK-Rollup per-Record Fee Estimation.
Inputs :
 TX_HASH ( optional )  # if available , fetch real receipt ; else manual inputs
 RPC_URL ( optional )  # L2 or local simulator endpoint
 l2_gas_used       # integer ; from receipt.gAsused or manual measure
 l2_gas_price       # wei ; from receipt.effectiveGasPrice or assumed
 l1_pubdata_gas     # integer ; pubdata gas for this payload ( from L2 fee )
 l1_gas_price      # wei ; sampled mainnet L1 gas price
 batch_size       # integer ; rollup batch size used for amortization
 eth_usd         # decimal ; ETH→USD rate at measurement time
 
Procedure :
  if TX_HASH and RPC_URL provided :
     receipt ← get_transaction_receipt(RPC_URL , TX_HASH )
     l2_gas_used   ← receipt.gAsused
     l2_gas_price ← receipt.effectiveGasPrice ( fallback : tx. gasPrice )
  # else : use provided manual values for l2_gas_used and l2_gas_price
 
  # L2 execution fee ( wei )
  l2_fee_wei  ← l2_gas_used × l2_gas_price
 
  # Amortized L1 data-availability fee ( wei )
  if batch_size > 0:
     l1_fee_wei ← ( l1_pubdata_gas × l1_gas_price ) / batch_size
  else :
     l1_fee_wei ← 0
 
  total_fee_wei ← l2_fee_wei + l1_fee_wei
 
  # Convert to ETH and USD
  total_fee_eth ← total_fee_wei / 1e18
  total_fee_usd ← total_fee_eth × eth_usd
 
Outputs :
  total_fee_eth , total_fee_usd ,
  and ( optionally ) the split : { L2_execution , L1_pubdata }
 
Experiment runner ( for robustness ):
  For each gas-price snapshot s in S ( covering typical ± volatility ):
         Compute total_fee_usd_s using the above procedure
  Report median ( and IQR ) across S runs in §3.C

Appendix A.5. tc netem WAN Profile Used for Latency/Jitter/Loss Emulation

# Add WAN profile: 100 ms latency ± 10 ms jitter, 0.1% loss
sudo tc qdisc add dev eth0 root netem delay 100ms 10ms distribution normal loss 0.1%
# Verify
tc qdisc show dev eth0
# Remove profile after tests
sudo tc qdisc del dev eth0 root

Appendix A.6. Locust Launcher for 500 Parallel 1 Hz Publishers

locust -f locustfile.py --headless -u 500 -r 25 -t 25m --csv=run1
Listing A5. Locust task skeleton: 1 Hz publish per user (Paho MQTT).
from locust import User, task, between
import paho.mqtt.client as mqtt
import time, os, json

BROKER = os.getenv("MQTT_HOST", "127.0.0.1")
PORT   = int(os.getenv("MQTT_PORT", "8883"))
TOPIC  = "rhms/{patient}/vitals/hr"

class MqttUser(User):
    wait_time = between(0.9, 1.1)  # ~1 Hz

    def on_start(self):
        self.client = mqtt.Client()
        # Configure TLS/mTLS here per Methods Section 2.2
        # (certs, verify mode, etc.)
        self.client.connect(BROKER, PORT, keepalive=30)

    @task
    def publish_heart_rate(self):
        payload = {"ts": time.time(), "hr": 72}
        patient_id = self.environment.runner.user_count
        self.client.publish(TOPIC.format(patient=patient_id),
                            json.dumps(payload), qos=1)

Appendix B. Adapter Policy (Inference & Alerting)

Appendix B.1. Policy Summary

  • Windowing: Fixed windows (default: 1000 ms).
  • Inference budget: Hard cap ≤ 150 ms per window; overruns are cancelled asynchronously and the window is skipped (no alert emitted).
  • Thresholding: An alert is emitted when the per-window risk score exceeds a configurable threshold τ.
  • Debounce & cooldown: A debounce interval requires sustained exceedance before the first alert; a cooldown suppresses immediate re-alerts.
  • Failure handling: If a window is invalid (timeout, missing features), the pipeline logs the incident and continues without blocking the stream.
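The policy above can be sketched as a small state machine. Parameter defaults mirror Appendix B.2 (threshold 0.35, debounce 10 s, cooldown 60 s), but the class itself is an illustrative assumption rather than the RHMS code.

```python
class AlertPolicy:
    """Windowed threshold/debounce/cooldown alerting (illustrative sketch)."""

    def __init__(self, threshold=0.35, debounce_s=10, cooldown_s=60):
        self.threshold = threshold
        self.debounce_s = debounce_s     # required sustained exceedance
        self.cooldown_s = cooldown_s     # suppression after an alert
        self.exceed_since = None         # start of current exceedance run
        self.last_alert = None           # time of last emitted alert

    def on_window(self, score, now):
        """Return True when this window should emit an alert (times in s)."""
        if score is None or score < self.threshold:  # invalid or quiet window
            self.exceed_since = None                 # reset the debounce run
            return False
        if self.exceed_since is None:
            self.exceed_since = now
        sustained = now - self.exceed_since >= self.debounce_s
        cooled = self.last_alert is None or now - self.last_alert >= self.cooldown_s
        if sustained and cooled:
            self.last_alert = now
            return True
        return False

p = AlertPolicy()
fired = [t for t in range(0, 120, 5) if p.on_window(0.5, t)]
print(fired)  # -> [10, 70]: first alert after debounce, next after cooldown
```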

Appendix B.2. Default Parameters Used in Reported Experiments

Parameter | Value
Window size | 1000 ms
Inference timeout (budget) | 150 ms
Alert threshold (τ) | 0.35
Debounce interval | 10 s
Cooldown interval | 60 s

Appendix C. Empirical Validation (zkSync Era Sepolia)

To provide verifiable, on-chain evidence for the modeled audit-logging cost, we deployed a minimal event-emitter contract on the zkSync Era Sepolia testnet (Chain ID 300) and executed two transactions emitting alert-hash events with batch sizes B = 1 and B = 10 (no storage writes; no PHI). The block-explorer receipts reported the following fees and proofs:
Contract address: 0x6f4E760d1F070DE206e2f63d36aEa0193aA437BD
B = 1 transaction: 0x0413897a5eb7c9a5fec35c5ad5bc29bb7bfd98efc15cb484fa3f6414ccfba (fee: 0.000002704825 ETH)
B = 10 transaction: 0xd0dafe5423dc645f3580c5146ff7b59c1acb0da7824080a7ec713f58275ba0 (fee: 0.000004863575 ETH)
  • Per-record costs derived from these receipts are 2.704825 × 10−6 ETH (B = 1) and 4.863575 × 10−7 ETH (B = 10; total fee divided by 10 records).
 
Minimal event-emitter (AlertLog.sol).
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

contract AlertLog {
    event AlertLogged(bytes32 indexed h, uint64 ts);

    function log(bytes32 h) external {
        emit AlertLogged(h, uint64(block.timestamp));
    }

    function batchLog(bytes32[] calldata hs) external {
        uint64 ts = uint64(block.timestamp);
        for (uint i = 0; i < hs.length; i++) emit AlertLogged(hs[i], ts);
    }
}

References

  1. Saver, J.L. Time is brain—Quantified. Stroke 2006, 37, 263–266. [Google Scholar] [CrossRef] [PubMed]
  2. Tan, S.Y.; Sumner, J.; Wang, Y.; Wenjun Yip, A. A systematic review of the impacts of remote patient monitoring (RPM) interventions on safety, adherence, quality-of-life and cost-related outcomes. npj Digit. Med. 2024, 7, 192. [Google Scholar] [CrossRef] [PubMed]
  3. Baig, M.; Afifi, S.; GholamHosseini, H.; Mirza, F. A Systematic Review of Wearable Sensors and IoT-Based Monitoring Applications for Older Adults—A Focus on Ageing Population and Independent Living. J. Med. Syst. 2019, 43, 233. [Google Scholar] [CrossRef] [PubMed]
  4. Haghi, M.; Thurow, K.; Stoll, R. Wearable devices in medical Internet of Things: Scientific research and commercially available devices. Healthc. Inform. Res. 2017, 23, 4–15. [Google Scholar] [CrossRef] [PubMed]
  5. MQTT Version 5.0 Specification. OASIS Standard. Available online: https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html (accessed on 6 September 2025).
  6. U.S. Department of Health & Human Services. Health Information Privacy. Available online: https://www.hhs.gov/hipaa/index.html (accessed on 6 September 2025).
  7. Office of the Privacy Commissioner of Canada. The Personal Information Protection and Electronic Documents Act (PIPEDA); Office of the Privacy Commissioner of Canada: Gatineau, QC, Canada, 2021; pp. 1–8.
  8. Office of the Privacy Commissioner of Canada. Health, Genetic and Other Body Information; OPC: Gatineau, QC, Canada, 2019.
  9. MQTT Technical Specification. Available online: https://mqtt.org/mqtt-specification (accessed on 6 September 2025).
  10. Yew, H.T.; Ng, M.F.; Ping, S.Z.; Chung, S.K.; Chekima, A.; Dargham, J.A. IoT based real-time remote patient monitoring system. In Proceedings of the 2020 IEEE 16th International Colloquium on Signal Processing & Its Applications (CSPA), Langkawi, Malaysia, 28–29 February 2020; pp. 176–179. [Google Scholar] [CrossRef]
  11. Alshammari, H.H. The internet of things healthcare monitoring system based on MQTT protocol. Alex. Eng. J. 2023, 69, 275–287. [Google Scholar] [CrossRef]
  12. Saeed, H.; Malik, H.; Bashir, U.; Ahmad, A.; Riaz, S.; Ilyas, M.; Bukhari, W.A.; Khan, M.I.A. Blockchain technology in healthcare: A systematic review. PLoS ONE 2022, 17, e0266462. [Google Scholar] [CrossRef] [PubMed]
  13. Kuo, T.-T.; Kim, H.-E.; Ohno-Machado, L. Blockchain distributed ledger technologies for biomedical and healthcare applications. J. Am. Med. Inform. Assoc. 2017, 24, 1211–1220. [Google Scholar] [CrossRef] [PubMed]
  14. Ethereum Network. Gas Fee Analysis. Available online: https://ethereum.org/en/developers/docs/gas (accessed on 6 September 2025).
  15. ZKsync. ZKsync Docs. 2025. Available online: https://docs.zksync.io/ (accessed on 31 August 2025).
  16. Ma, S.; Zhang, X. Integrating blockchain and ZK-ROLLUP for efficient healthcare data privacy protection system via IPFS. Sci. Rep. 2024, 14, 11746. [Google Scholar] [CrossRef] [PubMed]
  17. Gamma, E.; Helm, R.; Johnson, R.; Vlissides, J. Design Patterns: Elements of Reusable Object-Oriented Software; Addison-Wesley: Reading, MA, USA, 1994. [Google Scholar]
  18. Buschmann, F.; Meunier, R.; Rohnert, H.; Sommerlad, P.; Stal, M. Pattern-Oriented Software Architecture, Volume 1: A System of Patterns; Wiley: Chichester, UK, 1996. [Google Scholar]
  19. Fowler, M. Patterns of Enterprise Application Architecture; Addison-Wesley: Boston, MA, USA, 2002. [Google Scholar]
  20. Riehle, D.; Züllighoven, H. Understanding and using patterns in software development. Theory Pract. Object Syst. 1996, 2, 3–13. [Google Scholar] [CrossRef]
  21. U.S. Food and Drug Administration (FDA). FDA-Cleared Wearable Device Validation: Samsung Galaxy Watch ECG Approval. 2022. Available online: https://www.accessdata.fda.gov/cdrh_docs/pdf24/K240909.pdf (accessed on 14 August 2025).
  22. Chon, K.H.; McManus, D.D. Detection of atrial fibrillation using a smartwatch. Nat. Rev. Cardiol. 2018, 15, 657–658. [Google Scholar] [CrossRef] [PubMed]
  23. U.S. Food and Drug Administration. FDA Clearance Database for Wearable Medical Devices. 2025. Available online: https://www.fda.gov/medical-devices (accessed on 14 August 2025).
  24. Rifi, N.; Rachkidi, E.; Agoulmine, N.; Taher, N.C. Towards using blockchain technology for IoT data access protection. In Proceedings of the 2017 IEEE 17th International Conference on Ubiquitous Wireless Broadband (ICUWB), Salamanca, Spain, 12–15 September 2017; pp. 1–5. [Google Scholar] [CrossRef]
  25. Esposito, C.; De Santis, A.; Tortora, G.; Chang, H.; Choo, K.-K.R. Blockchain: A Panacea for Healthcare Cloud-Based Data Security and Privacy? IEEE Cloud Comput. 2018, 5, 31–37. [Google Scholar] [CrossRef]
  26. Oudbier, S.J.; Souget-Ruff, S.P.; Chen, B.S.J.; Ziesemer, K.A.; Meij, H.J.; Smets, E.M.A. Implementation barriers and facilitators of remote monitoring, remote consultation and digital care platforms through the eyes of healthcare professionals: A review of reviews. BMJ Open 2024, 14, e075833. [Google Scholar] [CrossRef] [PubMed]
  27. Serrano, L.P.; Maita, K.C.; Avila, F.R.; Torres-Guzman, R.A.; Garcia, J.P.; Eldaly, A.S.; Haider, C.R.; Felton, C.L.; Paulson, M.R.; Maniaci, M.J.; et al. Benefits and Challenges of Remote Patient Monitoring as Perceived by Health Care Practitioners: A Systematic Review. Perm. J. 2023, 27, 100–111. [Google Scholar] [CrossRef] [PubMed]
  28. Khan, S.; Dekhil, N.; Mamatjan, E.; Hassan, S.; Mamatjan, Y. An automated online recommender system for stroke risk assessment. CMBES Proc. 2023, 45, 1051. [Google Scholar]
Figure 1. Generic RHMS overview: inputs from wearables and user context flow into the RHMS framework, which was built on MQTT, blockchain, design patterns, and interoperability to produce patient alerts, live monitoring, and an adaptable platform for new devices.
Figure 2. RHMS system architecture illustrating the end-to-end workflow and communication paths between wearables, MQTT ingest, backend services, databases, machine-learning adapters, the clinician dashboard, and the blockchain audit ledger. System components are numbered in the diagram as follows: (1) wearable sensors and gateway, (2) secure ingress layer (firewall and API gateway), (3) application/UI layer (web dashboard), (4) data services layer, (5) client-side cache for responsive UI updates, (6) interoperability layer for FHIR/HL7 integration with EHR systems, and (7) blockchain-backed immutable diagnosis ledger. Lettered arrows (a–l) denote the sequential message flow—from initial wearable data ingestion through MQTT, backend validation, ML risk scoring, persistent storage, and final alert or diagnosis recording.
Figure 3. HL7 ↔ FHIR interoperability flow. Solid arrows show the primary data path (MQTT JSON → FHIR Observation JSON → HL7 v2 → EHR). Dashed arrows show error flows to the queue. The diagram illustrates how a small MQTT vital is standardized to FHIR, converted to HL7 v2, securely transported, acknowledged by the EHR, and safely retried on failure.
Figure 4. Sub-second, workflow timeline (sensor-to-alert within 600 ms).
Figure 5. Latency cumulative distribution function (CDF) for RHMS end-to-end alerts.
Table 1. Software design patterns in RHMS with component mapping, implementations, and benefits.
Pattern | RHMS Component | Concrete Implementation | Technical Benefit
Factory and Strategy | ML Adapter and Sensor Integration | CLI-generated adapters; interchangeable ML models | ≤10 min new-adapter setup; zero downtime updates
Observer | Patient Dashboard Real-time Updates | Firebase RTDB listeners trigger UI refresh | Real-time visualization without polling; automatic refresh on data arrival
Protection Proxy | Diagnosis and Blockchain Access Layer | Strict RBAC enforcement; permission validation | HIPAA/PIPEDA-aligned access control; secure data handling
Composite | PatientProfile and VitalReading Objects | Unified patient data structures | Simplified retrieval; better maintainability
Decorator | Extensible Vital Data Fields | Add vital metrics without modifying existing classes | Incremental enhancements; low coupling
Singleton | Firebase Admin & Web3 Connections | Singleton instances for Firebase SDK and Web3 | Resource optimization; avoids redundant connections
Adapter | Sensor and ML Integration | Plug-and-play integration of heterogeneous sensors and clinical ML models | Rapid onboarding of new devices/models without downtime
Abbreviations: RBAC = Role-Based Access Control; RTDB = Realtime Database; SDK = Software Development Kit.
Table 2. Preliminary security evaluation and mitigations (STRIDE categories).
Risk Category | Severity | Identified Vulnerability | Mitigation Strategy
Spoofing | High | MQTT broker authentication weakness | Implemented TLS mutual authentication
Tampering | High | Unauthorized data manipulation at the API level | Implemented strict RBAC and input validation
Information Disclosure | Medium | Sensitive data exposure in logs | Implemented log sanitization
Denial of Service | Medium | Potential overload of the MQTT broker | QoS limits & client throttling policies
Elevation of Privilege | Medium | Privilege escalation via admin interfaces | Hardened admin endpoints with MFA
Abbreviations: RBAC = role-based access control; MFA = multi-factor authentication; QoS = quality of service.
Table 3. RHMS latency metrics (500 concurrent streams).
Metric | Value (ms) | Measurement Method
Median latency | 480 | Client-side Performance API
Latency IQR (25–75%) | 110 | Client-side Performance API
95th percentile latency | 590 | Locust load testing
IQR = interquartile range.
Table 4. Throughput benchmarking.
Metric | Value | Measurement Method
Maximum throughput | 545 streams/s | Locust (5 min sustained load)
CPU utilization at peak | ∼70% | Server resource monitoring
CPU = Central Processing Unit.
Table 5. Modeled ZKsync cost metrics (per recorded alert).
Metric | Value (USD)
Median modeled cost | $0.00016
Modeled monthly variance | ±40%
Table 6. Modeled per-record cost sensitivity for representative fee snapshots.
Batch Size B | Fee Regime | Modeled Cost/Record (USD)
1 | peak | 0.0030
10 | average | 0.00090
50 | off-peak | 0.00016
Table 7. Empirical L2 fee per record on zkSync Era Sepolia using an event–only audit log (no storage, no PHI).
Batch B | Tx Fee (ETH) | Records | Per-Record (ETH)
1 | 0.000002704825 | 1 | 0.000002704825
10 | 0.000004863575 | 10 | 0.0000004863575
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

