Sensors
  • Article
  • Open Access

15 November 2025

IoTMindCare: An Integrative Reference Architecture for Safe and Personalized IoT-Based Depression Management

1 Department of Computer Science and Software Engineering, Auckland University of Technology, Auckland 1010, New Zealand
2 School of Information Technology, Deakin University, Burwood, VIC 3125, Australia
3 Department of Data Science and Artificial Intelligence, Auckland University of Technology, Auckland 1010, New Zealand
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue IoT Sensor Systems: Design, Interfaces, Signals, Processing, and Applications

Abstract

Depression affects millions of people worldwide. Traditional management relies heavily on periodic clinical assessments and self-reports, which lack real-time responsiveness and personalization. Despite numerous research prototypes exploring Internet of Things (IoT)-based mental health support, almost none have translated into practical, mainstream solutions. This gap stems from several interrelated challenges, including the absence of robust, flexible, and safe architectural frameworks; the diversity of IoT device ownership; the need for further research across many aspects of technology-based depression support; highly individualized user needs; and ongoing concerns regarding safety and personalization. We aim to develop a reference architecture for IoT-based safe and personalized depression management. We introduce IoTMindCare, integrating current best practices while maintaining the flexibility required to incorporate future research and technology innovations. A structured review of contemporary IoT-based solutions for depression management is used to establish their strengths, limitations, and gaps. Then, following the Attribute-Driven Design (ADD) method, we design IoTMindCare. The Architecture Trade-off Analysis Method (ATAM) is used to evaluate the proposed reference architecture. The proposed reference architecture features a modular, layered logical view design with cross-layer interactions, incorporating expert input to define system components, data flows, and user requirements. Personalization features, including continuous, context-aware feedback and safety-related mechanisms, were designed based on the needs of stakeholders, primarily users and caregivers, throughout the system architecture. ATAM evaluation shows that IoTMindCare supports safety and personalization significantly better than current designs. 
This work provides a flexible, safe, and extensible architectural foundation for IoT-based depression management systems, enabling the construction of optimal solutions that integrate the most effective current research and technology while remaining adaptable to future advancements. IoTMindCare provides a unifying, aggregation-style reference architecture that consolidates design principles and operational lessons from multiple prior IoT mental-health solutions, enabling these systems to be instantiated, compared, and extended rather than directly competing with any single implementation.

1. Introduction

Depression is a prevalent and growing mental health challenge worldwide, affecting over 280 million people [1]. With increasing modernization, rapid urbanization, and the widespread adoption of digital lifestyles, individuals are experiencing heightened levels of stress, loneliness, and social disconnection, factors that contribute to the onset and exacerbation of depressive disorders [2]. Predictive research studies estimate an increase in the number of people affected by depression by 2030 [3]. This trend highlights the urgent need for innovative, accessible, and scalable approaches to support mental health management, especially within environments where individuals lack regular medical oversight.
Traditional depression management methods rely on episodic assessments and manual reporting, which are often prone to recall bias and delays [4]. In contrast, IoT enables continuous and context-aware monitoring via interconnected devices capable of real-time sensing, transmission, and processing. Wearables, ambient sensors, smartphones, and voice assistants can collectively capture multimodal indicators such as heart rate variability, sleep cycles, movement patterns, speech features, and environmental cues, providing richer, more nuanced monitoring of mental health. This automatic, continuous data stream reduces user burden and has the potential to improve accuracy, enabling a shift from reactive management to proactive and preventive care through immediate anomaly detection and personalized feedback.
Architecturally, most IoT mental health prototypes follow an edge/cloud paradigm [5,6]: sensors transmit data to a local gateway for initial processing, relaying summaries to cloud platforms for deeper analytics and long-term storage. Fog-computing layers are sometimes inserted to reduce latency during emergencies, and standard protocols such as Message Queuing Telemetry Transport (MQTT) or Constrained Application Protocol (CoAP) provide secure, lightweight connectivity between heterogeneous devices. Despite these advantages, no fully commissioned IoT solution for depression management has been deployed at scale. Our structured literature review, presented in Section 2, reveals several persistent barriers, including integration and interoperability issues, privacy and ethical concerns, regulatory and compliance challenges, technological limitations, algorithmic bias, and lack of reference standards.
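The lightweight-payload idea behind MQTT/CoAP transport can be sketched in a few lines. The topic layout and field names below are our own illustrative choices, not part of either protocol:

```python
import json

def build_sensor_payload(device_id: str, metric: str, value: float, ts: int) -> bytes:
    """Encode a single reading as a compact JSON payload for MQTT/CoAP transport.

    Field names are illustrative, not a standard; short keys keep the
    payload small for constrained links.
    """
    msg = {"dev": device_id, "m": metric, "v": round(value, 3), "ts": ts}
    return json.dumps(msg, separators=(",", ":")).encode("utf-8")

def topic_for(device_id: str, metric: str) -> str:
    # One topic per device/metric keeps subscriptions fine-grained.
    return f"iotmindcare/{device_id}/{metric}"

payload = build_sensor_payload("wrist-01", "hrv_rmssd", 42.1873, 1700000000)
print(topic_for("wrist-01", "hrv_rmssd"))  # iotmindcare/wrist-01/hrv_rmssd
print(json.loads(payload))
```

In a real deployment, the payload would be handed to an MQTT client's publish call with an appropriate QoS level; the encoding shown here is transport-agnostic.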
We hypothesize that a reference architecture that integrates current best practices and prioritizes safety, personalization, and extensibility can significantly improve the real-world deployment of current and future IoT-based depression management solutions. To test this hypothesis, we follow ADD [7] to develop a reference architecture that integrates the findings of our literature review of current best practice, including emerging standards and domain requirements, and prioritizes safety and personalization. We then use ATAM [8] to evaluate the resulting architecture. Further validation is performed using Unified Modeling Language (UML) models and comparison with benchmark architectures.
The proposed IoT-based reference architecture (IoTMindCare) assimilates best practices, emerging standards, and domain requirements into a unified framework shaped around safety and personalization. It features layering in its logical view to support cross-layer coordination, real-time sensing, behavior analysis, and adaptive interventions through user and caregiver feedback. Unlike existing works, our reference architecture is generalizable and standards-informed. It accommodates heterogeneous sensors, applies to diverse usage contexts, and embeds ethical safeguards from the outset. While clinical trials remain future work, we establish a solid foundation for next-generation mental health platforms that are intelligent, adaptive, and ready for real-world deployment. We present IoTMindCare as a collection of established architectural patterns and operational insights that make it easier to adapt existing solutions, compare alternatives, and extend them for future needs. Instead of emphasizing one specific anomaly detector, the framework provides a flexible architecture for evaluating and selecting components based on application requirements.
Evaluation and comparison show that IoTMindCare is designed to support reductions in the False Positive Rate (FPR) and False Negative Rate (FNR) through multi-stage verification, feedback-driven adaptation, and edge-cloud tradeoffs. Quantitative evaluation of these reductions depends on concrete instantiations and is left to future prototype and pilot studies.
The main contributions of this research are as follows:
  • Identification of core challenges in IoT-based depression management via a structured literature review (Section 2);
  • A modular, extensible reference architecture (Section 3) that supports heterogeneous sensors, edge/cloud analytics, safety protocols, and personalization;
  • A demonstration of the architecture’s applicability by instantiating a concrete IoT depression care system that delivers proactive, ambient, and user-aware support, with related evaluations (Section 4).
An overview of the proposed IoTMindCare reference architecture, illustrating its multi-layered structure, is shown in Figure 1. The diagram highlights the system’s key qualities, including safety, personalization, interoperability, and adaptability.
Figure 1. Graphical abstract of the main contributions of IoTMindCare.

3. IoTMindCare: Proposed Reference Architecture

IoTMindCare is built using ADD: we first identify key architectural drivers (Section 3.1) and then develop the architectural views (Section 3.2).

3.1. Architectural Drivers

3.1.1. Functional Requirements

Our literature review points to five core functional requirements:
F1
Continuous Monitoring or sensing of physiological, behavioral, and environmental signals.
F2
Context-Aware Decision-Making over multimodal data within spatio-temporal and behavioral contexts.
F3
Alert Generation and Escalation to caregivers or clinicians in response to detected risk conditions.
F4
User and Clinician Interfaces for stakeholders.
F5
Personalized Feedback and intervention mechanisms tailored to individual users.

3.1.2. Quality Attribute Scenarios

Safety and personalization are the key quality attributes in our architecture design process, and are captured using the quality-attribute scenario template [27].
  • QAS1: Safety: The system must detect critical events, such as depressive crises or self-harm, and respond with timely alerts using a probabilistic, context-aware anomaly detection model (Table 2). Detection accuracy, timeliness, and escalation logic are critical. This includes handling uncertainty through a confidence scoring model and minimizing the False Positive Rate (FPR) and False Negative Rate (FNR) as primary operational objectives. Events are cross-referenced with historical and contextual data to improve detection reliability.
    Table 2. Scenario for requirement safety.
  • QAS2: Personalization: The system must adapt dynamically to users’ physiological and behavioral baselines, with support for partial or heterogeneous device configurations (Table 3). When sensors are missing, alternative data (manual input, AI-inferred metrics) must substitute for automated sensing. Personalization includes dynamic switching between modes, user-specific thresholds, and real-time updates to decision logic.
    Table 3. Scenario for requirement personalization.
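As a minimal sketch of the probabilistic, context-aware scoring described in QAS1 (combining a new reading with the user's historical baseline and a context weight), the following is illustrative only; the function name, the logistic squash, and the context-weight mechanism are our assumptions, not a clinical model:

```python
import math
import statistics

def anomaly_confidence(value: float, history: list[float], context_weight: float = 1.0) -> float:
    """Score how anomalous a new reading is against the user's historical baseline.

    Returns a confidence in [0, 1): a logistic squash of the z-score,
    optionally damped by a context weight (e.g. lower at the gym, where
    an elevated heart rate is expected).
    """
    mu = statistics.fmean(history)
    sigma = statistics.stdev(history) or 1e-6  # guard against a constant history
    z = abs(value - mu) / sigma
    # Logistic squash: z = 0 maps to 0.0, large z approaches 1.0.
    raw = 2.0 / (1.0 + math.exp(-z)) - 1.0
    return min(raw * context_weight, 0.999)

baseline = [62, 65, 61, 64, 63, 66, 62]              # resting heart rate history
print(anomaly_confidence(64, baseline))               # low: within baseline
print(anomaly_confidence(110, baseline) > 0.9)        # clearly anomalous
```

Cross-referencing with contextual data, as QAS1 requires, corresponds here to choosing `context_weight` from the user's current situation before the score is compared with an escalation threshold.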
Recent IoT-health and applied risk-detection studies explicitly report false positive and false negative outcomes when evaluating anomaly and crisis detection models. For example, Gupta et al. [6] show how hierarchical/federated architectures reduce false positives via multi-layer model aggregation and context validation, motivating multi-stage verification in edge/cloud pipelines. Similarly, applied suicide-risk prediction studies report sensitivity/specificity and precision metrics to quantify clinical detection trade-offs and operational risk [28]. These studies motivate our choice to evaluate safety and personalization using FPR and FNR alongside latency and robustness measures in our reference architecture.
To support reproducible validation of the QASs, we specify measurement protocols and metrics that should be applied when an implementation or simulation of IoTMindCare is constructed. For QAS1 (Safety), we recommend detection metrics such as True Positive Rate/Sensitivity, FPR, FNR, Specificity, and Precision, together with operational metrics, including detection latency (time from event onset to alert) and escalation latency (time from alert to caregiver/clinician notification). For QAS2 (Personalization), we recommend quality measures such as adaptability, installability, customization, co-existence adequacy, and reliability [29], together with measures of personalization gain, such as absolute and relative reductions in FNR and FPR after per-user adaptation compared to a population model, model convergence speed (time or number of examples to reach stable per-user performance), and robustness under sensor loss (percent performance drop when a subset of sensors is removed). Validation should use temporally aware splits (training on earlier windows, testing on later windows) to reflect longitudinal deployment conditions and to expose concept drift. Finally, calibration metrics (such as Brier score or reliability diagrams) should be used where probabilistic confidence scores drive escalation thresholds.
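The QAS1 metrics above can be computed directly from labeled evaluation windows. A minimal sketch in plain Python, using the standard confusion-matrix and Brier-score definitions:

```python
def detection_metrics(y_true: list[int], y_pred: list[int], p_pred: list[float]) -> dict:
    """Compute QAS1 detection metrics from binary labels and probabilistic scores.

    y_true/y_pred: 1 = crisis event, 0 = normal window.
    p_pred: the model's confidence for each window, used for the Brier score.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),   # a.k.a. recall / True Positive Rate
        "specificity": tn / (tn + fp),
        "fpr": fp / (fp + tn),
        "fnr": fn / (fn + tp),
        "brier": sum((p - t) ** 2 for t, p in zip(y_true, p_pred)) / len(y_true),
    }

m = detection_metrics(
    y_true=[1, 1, 0, 0, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 1],
    p_pred=[0.9, 0.4, 0.2, 0.7, 0.1, 0.8],
)
print(m)
```

Detection and escalation latencies would be measured separately from event and alert timestamps; they are not derivable from the confusion matrix.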

3.1.3. Technical Constraints

The key technical constraints highlighted by our review [5,6,30] are as follows:
TC1
Limited Edge Resources necessitate lightweight and distributed processing.
TC2
Privacy Constraints require local processing and adherence to privacy legislation.
TC3
Device Heterogeneity points to the need to tolerate and adapt to variable device configurations.

3.2. Architecture Views

The architecture is presented using the 4+1 view model [31]. Each view reports a different but equally critical aspect of the architecture and is linked to a different combination of architectural drivers.

3.2.1. Logical View

Logically, the architecture has a layered structure due to this pattern’s robustness in managing modularity and ensuring logical separation between sensing, processing, and interface components (Table 4). Layering is widely adopted in existing solutions [32,33]. Other architectural styles, such as microkernel and service-oriented architecture, are less suitable due to weaker support for reactive and real-time event handling required by QAS1, and unacceptable latency for context-sensitive, time-critical operations, respectively.
Table 4. Logical view—layering diagram showing the five architectural layers of the architecture and their mapping to safety and personalization.
This view enhances the structure from [34] by embedding user-specific personalization across all layers and enabling dynamic thresholding, offering resilience (TC3) and deeper alignment with QAS2. This enhancement improves system adaptability and responsiveness beyond prior work.
Each layer supports specific architectural drivers. The Sensing layer provides real-time signal capture support (F1, QAS1). The Network and Communication layer ensures reliable data routing (F2, TC3). Data Processing and Storage enables edge/cloud processing (F2, QAS1, QAS2). The Service layer implements core decision logic (F2, F3, QAS1). The Interface layer provides personalized feedback and real-time alerting (F3–F5, QAS2).
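One way the dynamic thresholding enabled by this view could be realized is a per-user exponentially weighted moving average (EWMA) baseline with a k-sigma band. This is a sketch under our own assumptions (warmup length, update rule, small variance floor), not a prescribed component of the architecture:

```python
class AdaptiveThreshold:
    """Per-user dynamic threshold: an EWMA baseline plus k standard deviations.

    Illustrative sketch of dynamic thresholding in the Data Processing and
    Storage layer; parameter choices are placeholders.
    """

    def __init__(self, alpha: float = 0.1, k: float = 3.0, warmup: int = 5):
        self.alpha, self.k, self.warmup = alpha, k, warmup
        self.n, self.mean, self.var = 0, 0.0, 0.0

    def update(self, x: float) -> bool:
        """Fold in a new observation; return True if it breaches the band."""
        self.n += 1
        if self.n == 1:                      # bootstrap baseline from first sample
            self.mean = x
            return False
        d = x - self.mean
        breach = self.n > self.warmup and abs(d) > self.k * (self.var ** 0.5 + 1e-6)
        # Update running moments only with non-breaching samples so that
        # anomalies do not drag the baseline toward themselves.
        if not breach:
            self.mean += self.alpha * d
            self.var = (1 - self.alpha) * (self.var + self.alpha * d * d)
        return breach

t = AdaptiveThreshold()
flags = [t.update(x) for x in [60, 61, 62, 61, 60, 120]]
print(flags)  # only the final spike is flagged, once a baseline exists
```

The warmup period prevents spurious flags while the per-user baseline is still forming, which is exactly the cold-start situation the Personalization Engine addresses with population priors.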

3.2.2. Development View

The development view (Figure 2) defines the internal structure and interactions between software modules distributed across smart home environments, cloud infrastructure, and external actors such as users and caregivers.
Figure 2. Development view—UML Component diagram showing the major software modules for smart home, cloud, and user environments.
The development view adopts a component-based architecture with RESTful and message-passing interfaces because of their support for loose coupling, deployment flexibility, and fault isolation [35,36,37]. This pattern enables modular, distributed services with real-time responsiveness and adaptability.
Unlike fixed-rule systems like CarePredict [38] using predefined activity models, IoTMindCare dynamically adapts to each user’s data through the cloud-based Personalization Engine. Feedback loops are integrated via the Feedback Integrator, improving the system over time. This enhancement is central to achieving QAS2 under real-world device constraints (TC3), highlighting adaptability and scalability.
The Smart Home edge interface performs continuous monitoring (F1) and forwards data to the Edge Gateway, which executes lightweight inference and handles fallback logic during cloud disconnection (QAS1, TC1, TC2). The Safety Engine performs anomaly detection using probabilistic models and generates alerts based on severity thresholds (F3, QAS1). The cloud-based Personalization Engine manages user-specific models and dynamically adjusts service logic based on behavioral patterns (F5, QAS2, TC3). Moreover, the Feedback Integrator incorporates feedback from users and caregivers to refine system behavior (F5, QAS2), while the Central Database stores longitudinal data and personalization parameters.
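The Edge Gateway's fallback behaviour during cloud disconnection can be sketched as a bounded buffer plus lightweight local inference. The class and method names here are hypothetical, and the threshold check stands in for whatever local model a concrete instantiation deploys:

```python
from collections import deque

class EdgeGateway:
    """Sketch of Edge Gateway fallback: run lightweight local inference and
    buffer readings while the cloud is unreachable (QAS1, TC1, TC2)."""

    def __init__(self, local_threshold: float, buffer_size: int = 1000):
        self.local_threshold = local_threshold
        self.buffer = deque(maxlen=buffer_size)  # oldest readings dropped first
        self.cloud_online = True
        self.local_alerts = []

    def ingest(self, reading: dict) -> None:
        # Lightweight edge inference: a simple threshold check.
        if reading["value"] > self.local_threshold:
            self.local_alerts.append(reading)
        if self.cloud_online:
            self.forward([reading])
        else:
            self.buffer.append(reading)          # fallback during disconnection

    def reconnect(self) -> int:
        """Flush buffered readings once connectivity returns."""
        flushed = len(self.buffer)
        self.forward(list(self.buffer))
        self.buffer.clear()
        self.cloud_online = True
        return flushed

    def forward(self, readings: list) -> None:
        pass  # placeholder for the real cloud uplink

gw = EdgeGateway(local_threshold=100)
gw.cloud_online = False
for v in (80, 120, 90):
    gw.ingest({"value": v})
print(len(gw.buffer), len(gw.local_alerts))  # 3 buffered, 1 local alert
print(gw.reconnect())                        # 3 flushed
```

The bounded deque reflects TC1: when edge storage is exhausted, the oldest readings are sacrificed so that local alerting stays responsive.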
For user interaction, the Mobile Application enables real-time feedback, notifications, and manual data entry for user reflection, alert acknowledgment, and configuration (F4, F5). The Clinician/Caregiver Dashboard provides real-time and retrospective views of user states, anomaly logs, and behavioral trends for escalation actions and threshold customization (F3, F4, QAS1).

3.2.3. Process View

The process view (Figure 3) describes the system’s dynamic behavior, capturing the interaction patterns among core components during real-time operation. It adopts an event-driven architectural style, well-suited for reactive, asynchronous IoT-based health monitoring systems due to its responsiveness and scalability [39,40]. It also uses a client–server pattern, where cloud services hold stateful logic, while edge and user devices act as clients, emitting or receiving events.
Figure 3. Process view—UML Sequence Diagram showing end-to-end system behavior categorized into four stages and featuring dynamic interactions among smart home, cloud, and user interfaces.
Unlike rule-based detection chains in [34], this view incorporates closed-loop adaptation, probabilistic scoring, and scalable escalation, offering resilience and evolving personalization aligned with real-world deployment challenges.
We propose four key processes reflecting the system’s end-to-end lifecycle: System Initialization and Personalization, Continuous Monitoring, Candidate Anomaly Detection, and Feedback Integration. System Initialization and Personalization begins during setup or after reconfiguration. The General Practitioner provides user-specific information such as baseline activity and risk thresholds, and the cloud-based Personalization Engine generates contextual thresholds and fallback logic for sensor gaps (F5, QAS2, TC3).
In Continuous Monitoring, the Edge Gateway gathers and preprocesses multimodal data. Obvious anomalies are flagged locally, while filtered data is sent to the cloud for deeper analysis. Real-time and historical data are stored centrally (F1, F2, TC1, TC2). In Candidate Anomaly Detection and Alert Escalation, the Safety Engine evaluates deviations using probabilistic models. If an anomaly is detected, multi-level alerts are triggered, from users up to general and mental health practitioners, depending on severity (F2, F3, QAS1). Finally, in Feedback Integration and Adaptation, users and clinicians provide feedback through dashboards. The Feedback Integrator refines the personalization logic and detection thresholds, enabling continuous learning (F5, QAS2, TC3).
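The multi-level alerting in Candidate Anomaly Detection and Alert Escalation can be sketched as a mapping from severity and confidence to recipients. The tier boundaries and role names below are illustrative placeholders, not clinical guidance:

```python
def escalation_tier(confidence: float, severity: str) -> list[str]:
    """Map an anomaly's confidence and severity to notification recipients.

    Sketch of the multi-level alerting in the process view; thresholds
    are placeholders to be calibrated per deployment.
    """
    recipients = ["user"]  # every alert reaches the user first
    if severity == "high" or confidence >= 0.9:
        recipients += ["general_practitioner", "mental_health_practitioner"]
    elif severity == "medium" or confidence >= 0.6:
        recipients.append("general_practitioner")
    return recipients

print(escalation_tier(0.95, "low"))     # high confidence escalates fully
print(escalation_tier(0.7, "medium"))   # mid tier: user + GP
print(escalation_tier(0.3, "low"))      # user only
```

In the architecture, these thresholds would live in the Safety Engine and be recalibrated by the Feedback Integrator as user and clinician feedback accumulates.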

3.2.4. Deployment View

The deployment view illustrates how software components are physically distributed across edge devices, cloud infrastructure, and stakeholder interfaces (Figure 4). This view relates to latency, fault tolerance, and scalability requirements. A hybrid edge–cloud deployment strategy is adopted, a common best practice in current works [41,42].
Figure 4. Deployment view—UML Deployment diagram showing the mapping of the development view components to physical or infrastructure resources.
The deployment view enables scalability and resilience by decoupling edge and cloud operations. Cloud components can be updated dynamically without disrupting critical edge functions. Communication protocols are optimized for IoT reliability and low-latency demands, aligning with system constraints (TC1–TC3) and quality attributes (QAS1, QAS2).
Multiple Sensing Devices capture multimodal behavioral and physiological data in real time. These feed into a local Smart Home Gateway, which hosts the Edge Gateway and Safety Engine. This setup supports on-site anomaly detection, buffering, and fallback logic during cloud disconnections (QAS1, TC1).
The Cloud Infrastructure manages the Personalization Engine, Feedback Integrator, and long-term Database. It handles high-complexity tasks such as behavioral analytics, adaptive model updates, and secure storage. Encrypted communication between the cloud and smart home ensures privacy and integrity (QAS2).
Users and caregivers access the system through Mobile Applications and Clinician Dashboards, which interface primarily with cloud services for alerts, feedback, and system configuration. These apps can also connect locally for real-time updates when needed.

4. Architecture Evaluation

We evaluate the proposed IoTMindCare using ATAM [8,43] for its support for the primary QAs, safety and personalization, as well as the complementary 4+1 UML views (see Section 3). The QA scenarios from Section 3.1.2 were analyzed to ensure that they are holistically addressed by the architecture across all relevant views. Table 5 details how each QA is supported by IoTMindCare and indicates a robust architecture delivering safe and personalized IoT-based depression management.
Table 5. Mapping Quality attribute scenarios to architectural evidence.
  • Tradeoff Analysis
The architectural choices must also balance the tradeoffs between safety and personalization, particularly in minimizing the FPR and FNR in behavioral anomaly detection. We identified the following critical tradeoffs:
  • Safety vs. Personalization: A central tradeoff exists between safety and personalization. Personalization improves detection sensitivity (reducing FNR) by adapting to individual baselines; however, overly narrow per-user thresholds risk increasing FPR if not rigorously validated. IoTMindCare mitigates this via a tiered verification pipeline (edge trigger → cloud contextual validation → caregiver confirmation) and feedback-driven recalibration: final anomaly verification is delegated to a centralized cloud component (Figure 2), and feedback from caregivers (Figure 3) is integrated to calibrate and improve models iteratively.
  • Feedback Loops vs. Timeliness: Incorporating caregiver and user feedback into anomaly detection loops (Figure 3) helps reduce FPR/FNR rates over time. However, this introduces delays in model retraining and potential inconsistencies during transition phases. The architecture mitigates this by decoupling feedback ingestion from real-time alerting, maintaining responsiveness while still enabling long-term model refinement.
  • Local Autonomy vs. Central Oversight: To ensure availability and responsiveness, the system gateway performs initial anomaly evaluation at the edge (Figure 4). This supports low-latency decision-making and offers resilience during cloud disconnection, contributing to safety (FNR reduction). However, relying solely on edge intelligence limits access to broader contextual information needed to reduce FPRs. As a result, the architecture introduces a tiered decision-making pipeline: edge components perform preliminary assessments, while cloud services provide contextual validation to reduce FPRs. This design reflects a deliberate trade-off between local autonomy and central oversight.
IoTMindCare intentionally aggregates capabilities from the surveyed literature so that many existing architectures can be expressed as subsets or specific instantiations of the reference model. This framing avoids claiming direct superiority; instead, the contribution is a unifying architecture that (a) makes feature/metric tradeoffs explicit, (b) clarifies where variability points should be placed to compare detectors (e.g., choice of anomaly model, thresholding strategy, retraining cadence), and (c) provides concrete recipes for extending prior designs toward more robust, future-proof deployments.
Overall, IoTMindCare carefully balances quality attributes, with safety and personalization as primary drivers. Modularity, layered processing, and feedback integration strategies explicitly addressed tradeoffs, as evidenced in the UML diagrams and quality scenario mappings.

4.1. Comparison with Existing Architectures

Table 6 provides a high-level comparison between IoTMindCare and selected contemporary IoT-health frameworks across data sources, personalization, edge/cloud strategy, explainability, and multi-stage anomaly handling.
Table 6. Comparison with recent IoT-health architectures.
This comparison focuses on architectural patterns, operational principles, and design choices rather than empirical performance metrics, as IoTMindCare is a reference architecture.
Our architecture combines explainability and personalization within a hierarchical edge-cloud structure while adding further safeguards for safety and personalization. It extends these frameworks through the following:
  • Incorporating home environmental sensors.
  • Employing multi-stage anomaly verification to manage FPR/FNR tradeoffs.
  • Supporting feedback-driven model adaptation with user/caregiver input.
  • Enabling edge-first inference policies for low latency and privacy.
While Zhang et al.’s model [16] presents strong anomaly detection within a controlled dataset, its wearable-only scope and centralized deployment limit generalization and real-world responsiveness. In contrast, our system is designed for continuous deployment, with added robustness mechanisms (such as model updates from feedback) and explainable alerts through transparent processing stages. Compared to federated learning systems, our architecture relies on explicit feedback loops rather than fully distributed updates, offering a more straightforward path to per-user tuning and safety in mental health contexts.
These comparisons suggest that our architecture is a more holistic solution for real-world, mental health IoT deployment, better suited to privacy, safety, personalization, and real-time operation.

4.2. Scenario-Driven Instantiation and Configuration

To demonstrate the applicability of IoTMindCare in realistic deployments while preserving its role as a high-level reference architecture, we provide a set of concrete IoT-driven scenarios drawn from the literature, along with a mapping of these scenarios to the architecture. For each scenario, we indicate relevant literature sources, the architecture views most involved, the primary QAS affected, and brief configuration guidance. These examples are meant as practical guides for building, testing, or running pilot versions of systems.

Representative Scenarios

  • Early deterioration detection (gradual mood decline): Long-term ADL/sleep/activity drift that signals worsening depression (motivated by SMART BEAR [9]).
  • Acute crisis detection (self-harm/suicidal risk): Rapid onset of high-risk states requiring low-latency detection and escalation (inspired by AAL pilots and safety studies [14,15]).
  • Sleep disturbance and insomnia monitoring: Night-time patterns detected via bed/pressure and respiration (contactless modalities), indicating sleep disruption associated with mood change (motivated by SPHERE, contactless sensing work [10,17]).
  • Social isolation/activity withdrawal: Reduced outings, fewer device interactions, reduced social communications extracted from ADL/HAR traces (CASAS [44] and SPHERE [10]).
  • Medication non-adherence/routine deviation: Missed medication or routine changes inferred from smart-plug, cabinet-door, and activity sensors (SMART BEAR [9] and AAL studies [14,15]).
  • Acute stress episode (physiological spike): Short-term HRV and activity spikes detected by wearables requiring brief interventions or prompting (wearable explainability studies [16]).
  • Contactless monitoring for privacy-sensitive users: Use radar/Wi-Fi channel state information and on-device inference where wearables are infeasible (contactless systems [17]).
  • Teletherapy-triggered sensing: Trigger richer sensing or clinician contact when conversational agents or self-report apps indicate increased risk (integration with digital therapeutics).
For a given deployment, practitioners can use the recipes above to configure IoTMindCare as follows:
  • Select sensors and signals relevant to the scenario.
  • Choose model placement: edge-first for low-latency safety scenarios; cloud-enabled personalization for long-term baselining and retraining.
  • Define escalation policy: map detection confidence levels to tiered actions (edge prompt, cloud validation, caregiver/clinician alert) within the Safety Engine.
  • Personalize and test: initialize thresholds from population priors, then adapt per-user via the Feedback Integrator. Use concrete quantitative validation metrics, such as reductions in FNR, reductions in FPR, improvements in sensitivity/specificity, and changes in detection and escalation latency, to evaluate the personalization gain.
  • Privacy and failover: apply on-device preprocessing, encrypted model updates, or federated learning, and contactless fallbacks where wearables are refused.
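The configuration steps above can be captured as a declarative recipe. The keys, sensor names, and thresholds below are hypothetical, shown for the acute crisis detection scenario:

```python
# Hypothetical configuration for the "acute crisis detection" scenario,
# following the four configuration steps above. All keys are illustrative.
acute_crisis_config = {
    "sensors": ["wearable_hrv", "motion", "bed_pressure"],
    "model_placement": "edge_first",   # low-latency safety scenario
    "escalation_policy": {             # confidence threshold -> tiered action
        0.6: "edge_prompt",
        0.8: "cloud_validation",
        0.95: "caregiver_alert",
    },
    "personalization": {
        "init": "population_priors",
        "adaptation": "feedback_integrator",
        "metrics": ["fnr_reduction", "fpr_reduction", "escalation_latency"],
    },
    "privacy": {"on_device_preprocessing": True, "encrypted_updates": True},
}

def action_for(confidence, policy):
    """Return the highest-tier action whose threshold the confidence clears."""
    action = None
    for threshold in sorted(policy):
        if confidence >= threshold:
            action = policy[threshold]
    return action

print(action_for(0.85, acute_crisis_config["escalation_policy"]))  # cloud_validation
```

A concrete instantiation would validate such a recipe against the Safety Engine's schema at setup time, then let the Feedback Integrator adjust the confidence thresholds per user.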
These scenario instantiations are explicitly intended to guide future prototype or simulation studies. For each scenario, we recommend a pilot that measures the QA metrics listed above (FPR/FNR, detection latency, and personalization gain), with clearly defined A/B comparisons or pre- and post-personalization analyses. These evaluation plans are consistent with the ATAM-driven analysis and the QA metric framework introduced in Section 3.1.2.
Empirical evaluation plan (guidelines). Because IoTMindCare is a reference architecture, empirical claims about reductions in FPR/FNR require the instantiation of prototypes and controlled evaluation. For future prototypes we recommend the following evaluation regimen: (1) choose scenario(s) from Table 7; (2) run pre/post personalization experiments where the baseline uses population thresholds and the experimental arm uses IoTMindCare personalization and feedback loops; (3) report FPR, FNR, sensitivity, specificity, detection latency, escalation latency, and calibration (Brier score); (4) include robustness tests: sensor dropout (remove sensor modalities), concept-drift injection (temporal distribution shifts), and realistic network outages; (5) present results as absolute and relative changes in FPR/FNR, with bootstrap estimates to quantify uncertainty.
Table 7. Scenario to architecture mapping: signals, literature sources, relevant views, primary QAS and practical configuration notes for prototype instantiation.
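Step 5 of the evaluation regimen, quantifying uncertainty in FPR/FNR changes, can be sketched with a paired bootstrap. The synthetic data, function name, and percentile-interval method are our own illustrative choices:

```python
import random

def bootstrap_fpr_delta(baseline_fp, personalized_fp, n_boot=2000, seed=0):
    """Paired bootstrap for the change in false-positive rate between a
    population-threshold baseline and a personalized arm.

    Each list holds per-window 0/1 false-positive flags over the same
    negative windows. Returns a 95% percentile interval for the delta
    (personalized minus baseline); negative values indicate a reduction.
    """
    rng = random.Random(seed)
    n = len(baseline_fp)
    deltas = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample windows with replacement
        b = sum(baseline_fp[i] for i in idx) / n
        p = sum(personalized_fp[i] for i in idx) / n
        deltas.append(p - b)
    deltas.sort()
    return deltas[int(0.025 * n_boot)], deltas[int(0.975 * n_boot)]

# Synthetic example: personalization removes some false positives.
base = [1] * 30 + [0] * 70
pers = [1] * 18 + [0] * 82
lo, hi = bootstrap_fpr_delta(base, pers)
print(hi < 0)  # True: the whole interval lies below zero, indicating a reduction
```

Resampling windows (rather than individual samples) preserves the pairing between arms, which matters when both arms are evaluated on the same held-out period.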

4.3. Limitations and Threats to Validity

We summarize key limitations of IoTMindCare as follows, adapted from standard research design frameworks such as the SPHERE deployment framework [10]. These include threats to validity in system evaluation, deployment feasibility, and ethical compliance. While our work lays a conceptual and architectural foundation, future empirical studies are required to address these threats.
  • Construct Validity: Behavioral and physiological signals may not fully capture the complex psychological aspects of depression and have not been clinically validated against gold-standard diagnostic tools.
  • External Validity: The system has not yet been deployed across diverse demographics, socioeconomic groups, or cultural contexts, limiting generalizability.
  • Ethical and Legal Constraints: The system has not undergone formal ethical review or compliance assessment with regulations such as GDPR or HIPAA, especially regarding passive data collection.
  • Technical and Ecological Validity: Assumes consistent sensor reliability, user compliance, and stable network conditions; real-world variability and behavioral shifts due to monitoring awareness may affect performance.
These validity concerns are both methodological, notably the lack of clinical validation and empirical evaluation, and practical, concerning real-world deployment and ethical compliance. Specific challenges include construct and internal validity gaps, potential algorithmic bias, and uncertainty about scalability under resource constraints. Although the architecture presents a promising direction for intelligent depression care, future work must empirically validate its components, implement ethical safeguards, and ensure robustness across user populations.

5. Conclusions

This research proposes a modular, layered IoT-based reference architecture (IoTMindCare) for continuous and personalized depression care in non-clinical settings. By integrating multi-modal sensing, edge-cloud analytics, adaptive feedback loops, and safety-driven design principles, the architecture addresses key challenges identified in the existing literature, particularly safety, personalization, real-time response, and architectural extensibility. Through comparative analysis and design modelling, we demonstrated how IoTMindCare offers a comprehensive framework to guide the development of intelligent mental health monitoring systems.
While IoTMindCare offers a strong conceptual and technical foundation, several opportunities remain for future advancement. Real-world deployment and clinical trials would be instrumental in validating the system’s effectiveness across diverse populations. Adaptive learning components could be incorporated to support long-term personalization and to reduce false positive and false negative detections. Integration with healthcare infrastructure such as electronic health records, together with alignment with privacy and regulatory standards, could support broader adoption. Moreover, future prototypes can be strengthened by simulating realistic deployment environments. These directions represent promising avenues for translating IoT-based mental health architectures from prototype to practice.

Author Contributions

Conceptualization, S.Z., R.S. and S.M.; Methodology, S.Z. and R.S.; Validation, R.S.; Formal analysis, S.Z.; Investigation, S.Z.; Resources, S.Z.; Writing—original draft, S.Z.; Writing—review & editing, R.S. and S.M.; Visualization, S.Z.; Supervision, R.S., S.M. and M.N.; Project administration, R.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IoT: Internet of Things
ADD: Attribute-Driven Design
ATAM: Architecture Trade-off Analysis Method
MQTT: Message Queuing Telemetry Transport
CoAP: Constrained Application Protocol
UML: Unified Modeling Language
ADL: Activities of Daily Living
HAR: Human Activity Recognition
AAL: Ambient Assisted Living
QAS: Quality Attribute Scenario
FPR: False Positive Rate
FNR: False Negative Rate

References

  1. World Health Organization. Depression. Available online: https://www.who.int/news-room/fact-sheets/detail/depression (accessed on 28 May 2025).
  2. Ochnik, D.; Buława, B.; Nagel, P.; Gachowski, M.; Budziński, M. Urbanization, loneliness and mental health model—A cross-sectional network analysis with a representative sample. Sci. Rep. 2024, 14, 24974. [Google Scholar] [CrossRef]
  3. Zhang, Y.; Jia, X.; Yang, Y.; Sun, N.; Shi, S.; Wang, W. Change in the global burden of depression from 1990–2019 and its prediction for 2030. J. Psychiatr. Res. 2024, 178, 16–22. [Google Scholar] [CrossRef]
  4. Simon, G.E.; Moise, N.; Mohr, D.C. Management of depression in adults: A review. JAMA 2024, 332, 141–152. [Google Scholar] [CrossRef]
  5. Wu, Q.; Chen, X.; Zhou, Z.; Zhang, J. Fedhome: Cloud-edge based personalized federated learning for in-home health monitoring. IEEE Trans. Mob. Comput. 2020, 21, 2818–2832. [Google Scholar] [CrossRef]
  6. Gupta, D.; Kayode, O.; Bhatt, S.; Gupta, M.; Tosun, A.S. Hierarchical federated learning based anomaly detection using digital twins for smart healthcare. In Proceedings of the 2021 IEEE 7th International Conference on Collaboration and Internet Computing (CIC), Virtual Event, 13–15 December 2021; pp. 16–25. [Google Scholar]
  7. Cervantes, H.; Kazman, R. Designing Software Architectures: A Practical Approach; Addison-Wesley Professional: Boston, MA, USA, 2024. [Google Scholar]
  8. Angelov, S.A.; Trienekens, J.J.M.; Grefen, P.W.P.J. Extending and Adapting the Architecture Tradeoff Analysis Method for the Evaluation of Software Reference Architectures; Technische Universiteit Eindhoven: Eindhoven, The Netherlands, 2014. [Google Scholar]
  9. SMART BEAR Consortium. Smart Big Data Platform to Offer Evidence-Based Personalised Support for Healthy and Independent Living at Home. Horizon 2020 Project. 2020. Available online: https://smart-bear.eu/ (accessed on 20 August 2025).
  10. Tonkin, E.L.; Holmes, M.; Song, H.; Twomey, N.; Diethe, T.; Kull, M.; Craddock, I. A multi-sensor dataset with annotated activities of daily living recorded in a residential setting. Sci. Data 2023, 10, 162. [Google Scholar] [CrossRef] [PubMed]
  11. Patricia, A.C.P.; Rosberg, P.C.; Butt-Aziz, S.; Alberto, P.M.M.; Roberto-Cesar, M.O.; Miguel, U.T.; Naz, S. Semi-supervised ensemble learning for human activity recognition in casas Kyoto dataset. Heliyon 2024, 10, e29398. [Google Scholar] [CrossRef]
  12. Rodrigues, V.F.; da Rosa Righi, R.; da Costa, C.A.; Zeiser, F.A.; Eskofier, B.; Maier, A.; Kim, D. Digital health in smart cities: Rethinking the remote health monitoring architecture on combining edge, fog, and cloud. Health Technol. 2023, 13, 449–472. [Google Scholar]
  13. Benrimoh, D.; Fratila, R.; Israel, S.; Perlman, K. Deep Learning: A New Horizon for Personalized Treatment of Depression? McGill J. Med. 2018, 16, 1–6. [Google Scholar] [CrossRef]
  14. Aalam, J.; Shah, S.N.A.; Parveen, R. Personalized Healthcare Services for Assisted Living in Healthcare 5.0. Ambient Assist. Living 2025, 203–222. [Google Scholar]
  15. Jovanovic, M.; Mitrov, G.; Zdravevski, E.; Lameski, P.; Colantonio, S.; Kampel, M.; Florez-Revuelta, F. Ambient assisted living: Scoping review of artificial intelligence models, domains, technology, and concerns. J. Med. Internet Res. 2022, 24, e36553. [Google Scholar]
  16. Zhang, Y.; Folarin, A.A.; Stewart, C.; Sankesara, H.; Ranjan, Y.; Conde, P.; Choudhury, A.R.; Sun, S.; Rashid, Z.; Dobson, R.J.B. An Explainable Anomaly Detection Framework for Monitoring Depression and Anxiety Using Consumer Wearable Devices. arXiv 2025, arXiv:2505.03039. [Google Scholar] [CrossRef]
  17. Li, A.; Bodanese, E.; Poslad, S.; Chen, P.; Wang, J.; Fan, Y.; Hou, T. A contactless health monitoring system for vital signs monitoring, human activity recognition, and tracking. IEEE Internet Things J. 2023, 11, 29275–29286. [Google Scholar] [CrossRef]
  18. Health Level Seven International (HL7). HL7 FHIR (Fast Healthcare Interoperability Resources); Technical Report; Release 5.0.0 (R5–STU); HL7 International: Ann Arbor, MI, USA, 2023; Available online: https://hl7.org/fhir/ (accessed on 29 June 2025).
  19. openEHR International. openEHR—The Future of Digital Health Is Open. Available online: https://openehr.org/ (accessed on 29 June 2025).
  20. FIWARE Foundation. FIWARE: Open APIs for Open Minds—Open-Source Smart Applications Platform. Available online: https://www.fiware.org/ (accessed on 29 June 2025).
  21. IEEE Std 11073-10701–2022/ISO/IEEE 11073-10701:2024; Health Informatics—Device Interoperability—Part 10701: Point-of-Care Medical Device Communication—Metric Provisioning by Participants in a Service-Oriented Device Connectivity (SDC) System. IEEE: New York, NY, USA, 2022.
  22. Prochaska, J.J.; Vogel, E.A.; Chieng, A.; Kendra, M.; Baiocchi, M.; Pajarito, S.; Robinson, A. A therapeutic relational agent for reducing problematic substance use (Woebot): Development and usability study. J. Med. Internet Res. 2021, 23, e24850. [Google Scholar] [CrossRef] [PubMed]
  23. Wysa: Your AI-Powered Mental Wellness Companion. Touchkin eServices Pvt Ltd. Available online: https://www.wysa.com/ (accessed on 28 May 2025).
  24. BetterHelp Editorial Team. How Effective Is Online Counseling for Depression? Available online: https://www.betterhelp.com/ (accessed on 29 April 2025).
  25. Roble Ridge Software LLC. Moodfit: Tools & Insights for Your Mental Health. Available online: https://www.getmoodfit.com/ (accessed on 29 April 2025).
  26. Healthify He Puna Waiora, NZ. T2 Mood Tracker App. Available online: https://healthify.nz/apps/t/t2-mood-tracker-app (accessed on 29 April 2025).
  27. Software Engineering Design Research Group. Quality Attribute Scenario Template—Design Practice Repository. Available online: https://socadk.github.io/design-practice-repository/artifact-templates (accessed on 18 May 2025).
  28. Akhtar, K.; Yaseen, M.U.; Imran, M.; Khattak, S.B.A.; Nasralla, M.M. Predicting inmate suicidal behavior with an interpretable ensemble machine learning approach in smart prisons. PeerJ Comput. Sci. 2024, 10, e2051. [Google Scholar] [CrossRef]
  29. Karnouskos, S.; Sinha, R.; Leitão, P.; Ribeiro, L.; Strasser, T.I. The applicability of ISO/IEC 25023 measures to the integration of agents and automation systems. In Proceedings of the IECON 2018—44th Annual Conference of the IEEE Industrial Electronics Society, Washington, DC, USA, 21–23 October 2018; pp. 2927–2934. [Google Scholar]
  30. Cheikhrouhou, O.; Mershad, K.; Jamil, F.; Mahmud, R.; Koubaa, A.; Moosavi, S.R. A lightweight blockchain and fog-enabled secure remote patient monitoring system. Internet Things 2023, 22, 100691. [Google Scholar] [CrossRef]
  31. Kruchten, P.B. The 4+1 view model of architecture. IEEE Softw. 2002, 12, 42–50. [Google Scholar]
  32. Nasiri, S.; Sadoughi, F.; Dehnad, A.; Tadayon, M.H.; Ahmadi, H. Layered Architecture for Internet of Things-based Healthcare System. Informatica 2021, 45, 543–562. [Google Scholar] [CrossRef]
  33. Lamonaca, F.; Scuro, C.; Grimaldi, D.; Olivito, R.S.; Sciammarella, P.F.; Carnì, D.L. A layered IoT-based architecture for a distributed structural health monitoring system. Acta Imeko 2019, 8, 45–52. [Google Scholar] [CrossRef]
  34. Choi, J.; Lee, S.; Kim, S.; Kim, D.; Kim, H. Depressed mood prediction of elderly people with a wearable band. Sensors 2022, 22, 4174. [Google Scholar] [CrossRef] [PubMed]
  35. Richardson, L.; Ruby, S. RESTful Web Services; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2008. [Google Scholar]
  36. Aazam, M.; Zeadally, S.; Harras, K.A. Deploying fog computing in industrial internet of things and industry 4.0. IEEE Trans. Ind. Inform. 2018, 14, 4674–4682. [Google Scholar] [CrossRef]
  37. Razzaque, M.A.; Milojevic-Jevric, M.; Palade, A.; Clarke, S. Middleware for internet of things: A survey. IEEE Internet Things J. 2015, 3, 70–95. [Google Scholar] [CrossRef]
  38. CarePredict. CarePredict: AI-Powered Senior Care Platform. Available online: https://www.carepredict.com/ (accessed on 6 June 2025).
  39. Yousefpour, A.; Fung, C.; Nguyen, T.; Kadiyala, K.; Jalali, F.; Niakanlahiji, A.; Kong, J.; Jue, J.P. All one needs to know about fog computing and related edge computing paradigms: A complete survey. J. Syst. Archit. 2019, 98, 289–330. [Google Scholar] [CrossRef]
  40. Perera, C.; Qin, Y.; Estrella, J.C.; Reiff-Marganiec, S.; Vasilakos, A.V. Fog computing for sustainable smart cities: A survey. ACM Comput. Surv. (CSUR) 2017, 50, 1–43. [Google Scholar] [CrossRef]
  41. Dastjerdi, A.V.; Gupta, H.; Calheiros, R.N.; Ghosh, S.K.; Buyya, R. Fog computing: Principles, architectures, and applications. In Internet of Things; Elsevier: Amsterdam, The Netherlands, 2016; pp. 61–75. [Google Scholar]
  42. Idrees, Z.; Zou, Z.; Zheng, L. Edge computing based IoT architecture for low cost air pollution monitoring systems: A comprehensive system analysis, design considerations & development. Sensors 2018, 18, 3021. [Google Scholar] [CrossRef] [PubMed]
  43. Kazman, R.; Klein, M.; Clements, P. ATAM: Method for Architecture Evaluation; Carnegie Mellon University, Software Engineering Institute: Pittsburgh, PA, USA, 2000. [Google Scholar]
  44. Cook, D.; Schmitter-Edgecombe, M.; Crandall, A.; Sanders, C.; Thomas, B. Collecting and disseminating smart home sensor data in the CASAS project. In Proceedings of the CHI Workshop on Developing Shared Home Behavior Datasets to Advance HCI and Ubiquitous Computing Research, Boston, MA, USA, 4–9 April 2009; pp. 1–7. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
