Article

PathCare: Integrating Clinical Pathway Information to Enable Healthcare Prediction at the Neuron Level

1 Peking University Third Hospital, Beijing 100191, China
2 National Engineering Research Center for Software Engineering, Peking University, Beijing 100871, China
3 Key Laboratory of High Confidence Software Technologies, Ministry of Education, Beijing 100871, China
4 Affiliated Xuzhou Municipal Hospital of Xuzhou Medical University, Xuzhou 221002, China
* Authors to whom correspondence should be addressed.
Bioengineering 2025, 12(6), 578; https://doi.org/10.3390/bioengineering12060578
Submission received: 20 April 2025 / Revised: 15 May 2025 / Accepted: 22 May 2025 / Published: 28 May 2025
(This article belongs to the Special Issue Artificial Intelligence for Better Healthcare and Precision Medicine)

Abstract

Electronic Health Records (EHRs) offer valuable insights for healthcare prediction. Existing methods approach EHR analysis through direct imputation techniques in data space or representation learning in feature space. However, these approaches face the following two critical limitations: first, they struggle to model long-term clinical pathways due to their focus on isolated time points rather than continuous health trajectories; second, they lack mechanisms to effectively distinguish between clinically relevant and redundant features when observations are irregular. To address these challenges, we introduce PathCare, a neural framework that integrates clinical pathway information into prediction tasks at the neuron level. PathCare employs an auxiliary sub-network that models future visit patterns to capture temporal health progression, coupled with a neuron-level filtering gate that adaptively selects relevant features while filtering out redundant information. We evaluate PathCare on the following three real-world EHR datasets: CDSL, MIMIC-III, and MIMIC-IV, demonstrating consistent performance improvements in mortality and readmission prediction tasks. Our approach offers a practical solution for enhancing healthcare predictions in real-world clinical settings with varying data completeness.

1. Introduction

Electronic Health Records (EHRs) have become indispensable in modern clinical healthcare, offering a rich source of data that chronicles a patient’s medical history. In recent years, deep learning-based prediction models have attracted significant attention for their ability to leverage EHR data, which, represented as temporal sequences of clinical visits, can substantially inform and enhance healthcare decision-making [1]. Such applications range from disease diagnosis and mortality prediction to patient sub-typing and personalized treatment planning [2,3,4].
Working with EHR data presents challenges due to its inherent limitations in clinical practice. Factors such as rare disease occurrences, expensive examinations, and safety considerations result in data scarcity [5]; for instance, missing or inconsistent follow-up visits disrupt the continuity of patient health trajectories, creating substantial gaps in longitudinal records [2]. A patient with diabetes may have regular check-ups for three months and then miss several appointments, resulting in incomplete monitoring of critical indicators like HbA1c levels [6]. These discontinuities create fractured representations of patient status, making it difficult to model disease progression accurately. Consequently, task-specific models trained on such incomplete datasets often suffer from overfitting to observed patterns while failing to capture true health trajectories, compromising both performance and robustness in real-world applications.
Most existing works tackling EHR data sparsity can be categorized into direct data-space and indirect representation-space methods, and they generally follow four key approaches. Direct methods include attention-based models such as RETAIN [2], which identify influential visits through neural attention but focus only on available data points without addressing missing temporal contexts, and interpretability-focused approaches [3,7], which enhance transparency through multichannel feature extraction and importance recalibration yet struggle with irregularly sampled features. Indirect methods include data-scarcity solutions such as multi-task learning [5] and SAFARI [6], which apply a correlational sparsity prior but require extensive expert annotation, as well as advanced representation learning techniques [8], which employ self-supervised learning but often capture patterns unrelated to the target prediction task.
Despite recent advancements, contemporary methods face the following two major limitations: (1) insufficient modeling of long-term health trajectories, particularly in contexts where follow-up visits occur at irregular intervals; and (2) inadequate ability to selectively identify features that are most relevant to specific prediction tasks. For example, consider a diabetes patient who misses several follow-up appointments; current models may not only struggle to relate the patient’s initially stable status to later complications due to gaps in the temporal sequence, but they may also fail to effectively filter out irrelevant features (such as unrelated laboratory values or demographic attributes) while emphasizing clinically significant variables that contribute to the risk of disease progression. As a result, critical trajectory patterns and key predictors may be overlooked, impairing model performance for individualized risk assessment.
Intuitively, the way clinicians approach patient care offers valuable inspiration for modeling clinical pathways. When faced with a patient who has missed follow-ups, physicians naturally try to fill in the gaps by considering the patient’s historical trends and likely progression based on similar cases they have encountered. They focus on the most predictive indicators while filtering out irrelevant information. Similarly, an effective predictive modeling approach should leverage the predictive power of historical visit patterns to anticipate likely future states, enabling a more continuous representation of the patient’s health trajectory despite missing observations. This perspective aligns with how language models predict the next word in a sentence based on context, suggesting that viewing clinical pathways as “health sentences” could yield more coherent patient representations [9].
Given these insights, we are confronted with the following pressing challenge: How can we effectively model the long-term clinical pathway to capture future-predictive health representations, despite missing or inconsistent follow-ups, while filtering out redundant information that might compromise target prediction?
To address this, we introduce PathCare with two key innovations that directly tackle the identified limitations. First, we develop a longitudinal-aware auxiliary sub-network that predicts future clinical visits and encodes health status from a temporal perspective, enabling the effective modeling of long-term clinical pathways even with missing observations. Second, we design a neuron-level filtering mechanism that adaptively selects target-predictive features while filtering out redundant information, ensuring focus on clinically meaningful signals.
Central to our approach is the view of a patient’s clinical pathway as analogous to a sentence in natural language processing, with each visit considered a word containing lab tests and events [9]. Unlike traditional methods requiring additional labeling or complex feature engineering [5], our auxiliary sub-network evaluates the reliability of each feature by analyzing historical visit patterns and temporal dependencies among clinical variables. By examining how well certain features predict future health states, our model learns which aspects of the patient record are most informative for clinical outcomes, even when observations are irregular or missing.
Our neuron-level filtering mechanism, based on layer decorrelation, encourages diversity among hidden units while adaptively selecting target-predictive features. High-ranking neurons that capture essential health progression patterns are preserved, while low-ranking neurons that model noise or redundant information are filtered out. This approach mimics how physicians focus on key indicators while disregarding less relevant measurements [6,10].
We argue that jointly learning future-predictive health representations offers valuable insights for prognosis, beyond independently predicting target labels. For example, when predicting diabetes risk, incorporating blood glucose level predictions as an auxiliary task enhances the primary task by capturing the underlying progression pattern. Unlike previous multi-task approaches with parameter hard-sharing [5], PathCare provides these as separate hidden features, avoiding interference with the primary task while benefiting from the shared information.
Our primary contributions are outlined as follows:
  • Methodologically, we propose PathCare, a framework for learning future-predictive health representations, designed to model long-term clinical pathways despite missing follow-ups. We design a longitudinal-aware auxiliary sub-network that predicts future clinical visits and encodes health status from a temporal perspective. We also introduce a neuron-level filtering mechanism that adaptively selects target-predictive features, encouraging diversity among hidden units while preserving critical information. PathCare provides refined longitudinal modeling and feature selection, with disrupted health trajectories elaborately modeled, resulting in more robust patient representations across irregular visit patterns.
  • Experimentally, evaluations on CDSL, MIMIC-III, and MIMIC-IV datasets demonstrate PathCare’s superior performance in mortality and readmission prediction tasks, achieving relative AUPRC improvements of up to 2.43% and consistently higher min(+P, Se) values across all datasets. Our model maintains robust performance even under extreme data sparsity conditions and shows particular effectiveness for patients with regular missing visits. Ablation studies confirm that both the pathway prediction component and adaptive filtering mechanism contribute significantly to performance gains, validating our approach to clinical trajectory modeling in real-world settings with varying data completeness.

2. Related Work

In the field of EHR data analysis, data sparsity caused by irregular sampling poses significant challenges for building efficient prediction models [11,12]. Previous approaches primarily fall into the following two categories: methods that directly address sparsity in the data space and methods that indirectly resolve sparsity in the representation space.

2.1. Direct Data Space Methods

Direct data space methods aim to operate directly on raw EHR data through feature extraction or data imputation to address missing value problems. Traditional approaches rely on imputation techniques [13] or statistical methods [14], which assume patient visits are independent and features are missing randomly, ignoring the fact that missingness in EHR data often contains important clinical information. More sophisticated architectures like RETAIN [2] use two-level attention to identify important visits and variables but neglect the fine-grained importance of feature changes. While RETAIN [2] identifies influential visits and variables, its primary focus on available data points makes it challenging to infer progression along an implicit clinical pathway, especially when significant temporal gaps arise from irregular follow-ups. The attention mechanism may not fully capture the underlying clinical logic connecting temporally distant but pathologically related events if the intervening pathway segments are unobserved or poorly represented due to sampling irregularities. Recent models [3,15,16] enhance representation through time-aware distributions, multi-scale feature extraction, and adaptive feature importance recalibration, but they still face fundamental challenges with highly sparse data—extracting sufficient information from limited observations to construct complete representations.
The core limitation of direct methods is their reliance solely on limited data from individual patients, ignoring valuable information in cross-patient patterns, making it difficult to construct complete representations in highly sparse environments. Furthermore, even state-of-the-art feature extraction techniques struggle with high proportions of missing values, especially when the missing patterns themselves contain clinical information [17].

2.2. Indirect Representation Space Methods

Indirect representation space methods address sparsity problems in the representation space by utilizing auxiliary information or similar patient data, rather than directly filling in missing values in the original data [5].
One direction leverages patient similarities. GRASP [4] finds similar patients to enhance representation learning, while other approaches [10] compensate for missing modality information using auxiliary data from similar patients. However, these methods underestimate the impact of missing features when measuring patient similarity, leading to inaccurate similarity assessments. More importantly, they focus on how to extract information from similar patients, rather than how to effectively integrate this information.
Some methods enhance representations through multi-task learning [5,17]. Current approaches often employ parameter hard-sharing strategies for multi-task models, failing to explicitly consider what to share and how much to share it, which can lead to negative transfer that interferes with target prediction optimization.
Some recent works [6,8] learn self-supervised representations for irregular time series through time-sensitive contrastive learning and data reconstruction, but they fail to fully utilize the implicit clinical pathway information in EHR data, which is crucial for capturing dynamic changes in patients’ health status. For instance, advanced sequential models like Mamba [9], while effective at capturing long-range dependencies, are not inherently designed to decode the implicit clinical pathway logic from sequences of medical codes and measurements, particularly when these sequences are characterized by irregular and unpredictable time intervals. Their state-space mechanisms might summarize long histories but may not explicitly model the “expected next phase” of a clinical pathway or differentiate signals critical to pathway progression from general temporal patterns, especially when faced with high data irregularity.
Overall, a significant gap persists, outlined as follows: existing methods, whether direct or indirect, often lack dedicated mechanisms to holistically interpret implicit clinical pathways from irregularly sampled EHR time series. They may either focus too narrowly on observed points, fail to adequately model the inherent pathway logic, or lack adaptive filtering robust enough for severe irregularity and missingness. This underscores the need for novel frameworks that can explicitly learn from pathway progression cues and selectively utilize features, even amidst such data imperfections.
In this paper, we propose PathCare, a general healthcare prediction model designed to address these specific challenges by integrating clinical visit pathway information. Its core innovation lies in designing a visit pathway representation learning mechanism that captures temporal dynamics of patient visit sequences and learns from similar patients through an adaptive information sharing strategy. Unlike direct methods, PathCare does not rely on the complete imputation of original data; compared to indirect methods, PathCare explicitly models what to share and how much to share from auxiliary representations, effectively mitigating negative transfer risk. In this way, PathCare provides more accurate predictions, even under highly sparse EHR data conditions.

3. Preliminary

In this section, we begin with a motivating example, then describe the structure of EHR data, and formulate the problem of clinical healthcare prediction.

3.1. A Motivating Example

We consider the health status prediction of patients with chronic or critical conditions as the motivating example. Many individuals worldwide suffer from such conditions, facing significant health risks that require ongoing treatment and periodic hospital visits for various tests (e.g., blood tests and vital sign monitoring). The accurate prediction of patient health risks based on medical records collected during these visits is crucial in supporting recovery and preventing adverse outcomes.

3.2. Problem Formulation

EHR data are routinely collected from patient observations in hospitals through clinical visits, encompassing discrete time-series data (e.g., medications and diagnoses) and continuous multivariate data (e.g., vital signs and laboratory measurements). We assume a patient visits the clinic $T$ times, generating time-ordered EHR records denoted as $r_t \in \mathbb{R}^{N_r}$ $(t = 1, 2, \ldots, T)$. Each EHR record contains $N_r$ features, such as lab test results or clinical observations, as illustrated in Figure 1. The prediction problem in this paper is formulated as follows: given the $t$ historical EHR records of a patient, i.e., $(r_1, \ldots, r_t)$, predict the patient’s healthcare status $y$, which represents the probability of encountering a specific risk (e.g., in-hospital mortality or readmission). The next section details the proposed model. Table 1 lists the notations used in our model.
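As a toy illustration of this formulation (our own sketch, not part of PathCare; all dimensions and names here are assumptions), a sequence model maps the time-ordered records to a risk probability:

```python
import torch
import torch.nn as nn

# Illustrative shapes: a batch of patients, T visits each, N_r features per visit.
batch_size, T, N_r, N_h = 32, 48, 99, 64         # e.g., 99 features, as in CDSL
records = torch.randn(batch_size, T, N_r)        # time-ordered records (r_1, ..., r_T)

class RiskPredictor(nn.Module):
    """Toy baseline mapping a visit sequence to a risk probability y in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(N_r, N_h, batch_first=True)
        self.head = nn.Linear(N_h, 1)

    def forward(self, r: torch.Tensor) -> torch.Tensor:
        h, _ = self.gru(r)                          # hidden state for every visit
        return torch.sigmoid(self.head(h[:, -1]))  # risk estimated from the last visit

y_hat = RiskPredictor()(records)                  # shape: (batch_size, 1)
```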

4. Method

For the healthcare prediction task, the model is trained to utilize the recorded clinical visits ($r_t \in \mathbb{R}^{N_r}$) to learn a representation of health status and predict the probability $\hat{y}$ of suffering from the specific target outcome in the future (e.g., the risk of in-hospital mortality or readmission). In this paper, we propose a general model, PathCare (illustrated in Figure 2), which adds useful auxiliary embeddings to healthcare prediction tasks by predicting future visits.
  • We explicitly model the clinical status pathway by training a Gated Recurrent Unit (GRU)-based auxiliary sub-network (i.e., the left GRU) to predict the lab tests and clinical events recorded in a future visit ($\hat{r}_{t+1}$). The hidden representation of the sequence ($h_t^f$) is encoded to be a good predictor of future status, and it is provided as extra clinical features for the supervised clinical prediction task. This helps the model to depict the health status from a long-term perspective.
  • A task-specific GRU is applied to extract the other part of the health status representation. The model merges the task-specific representation and the auxiliary representation to perform the target prediction. We encourage diversity among hidden units based on layer decorrelation to help the useful units stand out (denoted as red circles). A neuron-level gate is designed to filter out the units that are useless to the target prediction (denoted as blue squares on the left side and blue triangles on the right side) and reduce the redundancy of the model.
Figure 2. Framework of PathCare. The embedding extracted to predict the future visit, $h_t^f$, is provided as additional advanced clinical features and combined with the task-specific embedding $h_t$. A neuron-level filtering gate is designed to adaptively order the neurons by modeling the demand degree of clinical pathway prospects for patients in diverse conditions. High-ranking neurons (denoted as red circles) store target-predictive information, which is expected to be kept, while low-ranking neurons store target-irrelevant information that should be filtered out (denoted as blue squares and triangles).

4.1. Auxiliary Task for Clinical Pathway Modeling

For the sake of causality, the model is not allowed to consult the lab tests and clinical events recorded in future visits and take them as input (i.e., clinical features), although, in reality, they are closely related to the future target outcome. It is beneficial to jointly model the future clinical pathway instead of independently predicting the target label. However, directly performing multi-task learning in the same model, making it predict both the future visit and the target outcome, may introduce redundancy and disturb the optimization of the original network. Inspired by [18] in natural language processing, which adds the contextual meaning of words to a sequence tagging model by pre-training an extra language model, we construct a separate auxiliary network to predict lab tests and clinical events in the future. The sub-network takes the patient’s EHR records $r_t \in \mathbb{R}^{N_r}$ $(t = 1, 2, \ldots, T)$ as input and embeds the time sequence with a GRU [19],
$$h_1^f, \ldots, h_T^f = \mathrm{GRU}_{\mathrm{future}}(r_1, \ldots, r_T),$$
where $h_t^f \in \mathbb{R}^{N_h}$ $(t = 1, 2, \ldots, T)$ represents the learned representation for each clinical visit, and $N_h$ is the hidden dimension. These representations are called future-specific representations, and they contain information on the future status of the patients since they are used to predict future events in the model. Then, these representations are fed into a feedforward layer to predict the lab test values of the next clinical visit,
$$\hat{r}_{t+1} = \mathrm{FNN}(h_t^f).$$
We do not need extra labels but use the time sequence itself to supervise the training. We calculate the mean-squared loss for every predicted visit except the last predicted one, whose ground-truth value is not recorded in the data.
$$\mathcal{L}_{\mathrm{future}} = \frac{1}{T} \sum_{t=1}^{T-1} \mathrm{MSE}(\hat{r}_{t+1}, r_{t+1}).$$
The extracted hidden embeddings for the sequence will be used as additional clinical features in the supervised target prediction model. In particular, we concatenate the embedding ($h_t^f$) with the output from the GRU layers in the target prediction network ($h_t$). However, $h_t^f$ is trained to predict the future clinical visit and thus contains a lot of target-irrelevant information, which may introduce undesired bias that can degrade the performance of target prediction. To exploit the information that is beneficial for target prediction, we force neurons to represent different information and filter out the target-irrelevant ones with the neuron-level filtering gate described in the next subsection.
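To make the auxiliary task concrete, the following PyTorch sketch (our own simplified rendering under assumed layer sizes, not the authors’ released implementation) encodes the visit sequence with a GRU, predicts the next visit with a feedforward head, and supervises it with the mean-squared loss above:

```python
import torch
import torch.nn as nn

class FuturePathwayNet(nn.Module):
    """Auxiliary sub-network: encodes visits r_1..r_T and predicts the next visit."""
    def __init__(self, n_features: int, n_hidden: int):
        super().__init__()
        self.gru = nn.GRU(n_features, n_hidden, batch_first=True)   # GRU_future
        self.head = nn.Linear(n_hidden, n_features)                 # FNN predicting r_{t+1}

    def forward(self, records: torch.Tensor):
        # records: (batch, T, N_r) -> h_f: (batch, T, N_h), the future-specific representations
        h_f, _ = self.gru(records)
        r_next_hat = self.head(h_f)                                  # \hat{r}_{t+1} for every t
        return h_f, r_next_hat

def future_loss(r_next_hat: torch.Tensor, records: torch.Tensor) -> torch.Tensor:
    # Supervise \hat{r}_{t+1} with the observed r_{t+1}; the last prediction is dropped
    # because its ground truth is never recorded (cf. L_future above).
    return nn.functional.mse_loss(r_next_hat[:, :-1, :], records[:, 1:, :])
```

The returned future-specific embeddings would then be projected and gated together with the task-specific embeddings, as described in the next subsection.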

4.2. Neuron-Level Filtering Gate

The task-specific module takes the patient’s raw EHR records $r_t \in \mathbb{R}^{N_r}$ $(t = 1, 2, \ldots, T)$ as input and embeds the time sequence with a task-specific GRU,
$$h_1, \ldots, h_T = \mathrm{GRU}_{\mathrm{task}}(r_1, \ldots, r_T),$$
where $h_t \in \mathbb{R}^{N_h}$ $(t = 1, 2, \ldots, T)$ is the task-specific representation for each clinical visit, and $N_h$ is the hidden dimension; these representations contain task-specific information about the patients. The future-specific and task-specific representations are then projected into a new latent space,
$$g_{\mathrm{future}} = W_f h_t^f, \qquad g_{\mathrm{task}} = W_t h_t,$$
where $W_f \in \mathbb{R}^{N_g \times N_h}$ and $W_t \in \mathbb{R}^{N_g \times N_h}$ are the projection matrices for the two representations. We construct a combined system, using $g_{\mathrm{future}}$ as additional advanced clinical features. A simple method is to concatenate $g_{\mathrm{future}}$ with $g_{\mathrm{task}}$. To effectively exploit the information in the auxiliary embeddings that is beneficial for the target prediction and to reduce redundancy, we instead use a neuron-level filtering gate, which adaptively models the demand degree of clinical pathway information for patients in different conditions and filters out useless neurons.
Firstly, we promote diversity among hidden neurons (i.e., differentiation of the information stored inside each neuron) to prepare for neuron-level filtering. This makes it easy to identify the target-irrelevant part. Based on [20], which uses a regularizer to reduce overfitting and increase generalization, we explicitly encourage non-redundant representations by reducing the correlation between activations within $g_{\mathrm{future}}$ and within $g_{\mathrm{task}}$, respectively. Here, for simplicity, we use $g$ to denote either of them as a general case. The covariances between all pairs of activations $i$ and $j$ of $g$ form a matrix $\mathbf{C}$,
$$C_{i,j} = \frac{1}{B} \sum_{b=1}^{B} (g_i^b - \mu_i)(g_j^b - \mu_j),$$
where $B$ is the batch size, $g_i^b$ is the $i$-th activation of $g$ for the $b$-th case in the batch, and $\mu_i = \frac{1}{B}\sum_{b=1}^{B} g_i^b$ is the sample mean of activation $i$ over the batch. The squared norm of the diagonal of $\mathbf{C}$ is then subtracted from its squared Frobenius norm to build the decorrelation loss term as follows:
$$\mathcal{L}_{\mathrm{decorrelation}} = \frac{1}{2}\left(\|\mathbf{C}\|_F^2 - \|\mathrm{diag}(\mathbf{C})\|_2^2\right),$$
where $\|\cdot\|_F$ is the Frobenius norm, and the $\mathrm{diag}(\cdot)$ operator extracts the main diagonal of a matrix into a vector. The decorrelation loss cooperates with the target prediction loss to decompose the target-relevant part from the irrelevant part, which helps to ease the difficulty of mining useful information.
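Seen concretely, this penalty is half the sum of squared off-diagonal entries of the batch covariance matrix of the activations. A minimal sketch of the computation (our own illustrative code, with assumed tensor shapes) is:

```python
import torch

def decorrelation_loss(g: torch.Tensor) -> torch.Tensor:
    """L_decorrelation for activations g of shape (B, N_g): half of
    ||C||_F^2 - ||diag(C)||_2^2, i.e., the sum of squared off-diagonal
    covariances, which penalizes correlated (redundant) neurons."""
    g_centered = g - g.mean(dim=0, keepdim=True)       # subtract per-neuron batch mean mu_i
    cov = g_centered.t() @ g_centered / g.shape[0]     # C_{i,j}, averaged over the batch
    return 0.5 * (cov.pow(2).sum() - torch.diagonal(cov).pow(2).sum())
```

In PathCare, this term would be applied to $g_{\mathrm{future}}$ and $g_{\mathrm{task}}$ separately, as described above.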
Secondly, we propose a filtering gate to model the demand degree of future visit prediction for patients in diverse conditions. If the gate believes that jointly evaluating the future visit is beneficial to the target prediction for a patient, more hidden units in auxiliary embeddings will be included in the final health status representation. Otherwise, the gate will tend to select the hidden units in task-specific embeddings. Only the neurons selected by the gate are used to perform the target prediction. Concretely, the filtering gate is learned based on the latest record of the patient,
$$\mathrm{gate} = W_{\mathrm{gate}} \, r_t,$$
where $W_{\mathrm{gate}} \in \mathbb{R}^{N_g \times N_r}$ is the projection matrix for the record at the last time step. Inspired by [21], which proposes a novel inductive bias for recurrent neural networks that separately allocates hidden state neurons to long- and short-term information, we use a similar cumax function to operate on the learned gate and obtain the valves for $g_{\mathrm{future}}$ and $g_{\mathrm{task}}$,
$$\mathrm{cumax}(\cdot) = \mathrm{cumsum}(\mathrm{softmax}(\cdot)), \qquad \mathrm{gate}_{\mathrm{future}} = \mathrm{cumax}(\mathrm{gate}), \qquad \mathrm{gate}_{\mathrm{task}} = 1 - \mathrm{cumax}(\mathrm{gate}),$$
where cumsum denotes the cumulative sum. The output of the cumax function can be seen as the expectation of a binary gate $(0, \ldots, 0, 1, \ldots, 1)$ [21]. This binary gate splits the cell state into two segments as follows: the 0-segment and the 1-segment. Thus, $\mathrm{gate}_{\mathrm{future}}$ and $\mathrm{gate}_{\mathrm{task}}$ act as complementary gate vectors, where $\mathrm{gate}_{\mathrm{task}} = 1 - \mathrm{gate}_{\mathrm{future}}$ element-wise, ensuring they operate on distinct neural pathways. We obtain a new state representation by using the learned gate to adaptively extract information from the future-/task-specific embeddings,
$$s = \mathrm{gate}_{\mathrm{future}} \odot g_{\mathrm{future}} + \mathrm{gate}_{\mathrm{task}} \odot g_{\mathrm{task}},$$
where $\odot$ denotes element-wise multiplication. The patient’s state representation $s$ is fed into a feedforward layer to produce the final prediction,
$$\hat{y} = \mathrm{FNN}(s).$$
The corresponding target prediction loss and the final optimization loss are, respectively, defined as
$$\mathcal{L}_{\mathrm{task}} = -\frac{1}{N} \sum_{i=1}^{N} \left( y_i \log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i) \right),$$
$$\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{task}} + \alpha \mathcal{L}_{\mathrm{decorrelation}},$$
where $\alpha$ is a hyper-parameter. The synergistic operation of these components plays a crucial role in mitigating the risk of negative transfer: the initial learning of future-specific and task-specific representations in separate GRUs, the promotion of internal diversity within the projected representations ($g_{\mathrm{future}}$, $g_{\mathrm{task}}$) via the $\mathcal{L}_{\mathrm{decorrelation}}$ loss, and, critically, the adaptive neuron-level filtering gate. By dynamically selecting and weighting neuronal information from both the future-predictive and task-specific embeddings based on the current input instance ($r_t$), PathCare can effectively filter out potentially irrelevant or conflicting signals that might originate from the auxiliary future-prediction task. This adaptive control ensures that predominantly beneficial information contributes to the final health status representation $s$ and the primary prediction $\hat{y}$.
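To make the gating step concrete, here is a minimal sketch (our own code under assumed dimensions, not the released implementation) of how the projected embeddings are mixed by the cumax gate learned from the latest record and then fed to the prediction head:

```python
import torch
import torch.nn as nn

class NeuronFilterGate(nn.Module):
    """Neuron-level filtering gate: mixes future-specific and task-specific embeddings
    with complementary cumax gates computed from the latest record r_t."""
    def __init__(self, n_features: int, n_hidden: int, n_gate: int):
        super().__init__()
        self.W_f = nn.Linear(n_hidden, n_gate, bias=False)       # h_t^f -> g_future
        self.W_t = nn.Linear(n_hidden, n_gate, bias=False)       # h_t   -> g_task
        self.W_gate = nn.Linear(n_features, n_gate, bias=False)  # gate from latest record r_t
        self.out = nn.Linear(n_gate, 1)                          # final FNN

    def forward(self, h_future: torch.Tensor, h_task: torch.Tensor, r_last: torch.Tensor):
        g_future, g_task = self.W_f(h_future), self.W_t(h_task)
        # cumax(x) = cumsum(softmax(x)): expectation of a (0,...,0,1,...,1) binary gate
        gate_future = torch.cumsum(torch.softmax(self.W_gate(r_last), dim=-1), dim=-1)
        gate_task = 1.0 - gate_future
        s = gate_future * g_future + gate_task * g_task          # element-wise mixing
        return torch.sigmoid(self.out(s)).squeeze(-1)            # \hat{y}
```

During training, the binary cross-entropy on $\hat{y}$ would be combined with $\alpha$ times the decorrelation penalty on $g_{\mathrm{future}}$ and $g_{\mathrm{task}}$, as in $\mathcal{L}_{\mathrm{total}}$ above.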

5. Experimental Setups

5.1. Datasets

We utilize the CDSL (COVID Data Save Lives), MIMIC-III, and MIMIC-IV datasets for benchmarking, with detailed statistics presented in Table 2 and Table 3.
CDSL dataset [22]. This dataset contains anonymized records of 4255 COVID-19 patients from Spain’s HM Hospitales [22]. The CDSL dataset is only used for mortality prediction as readmission information is unavailable.
MIMIC-III dataset (version 1.4) [11]. MIMIC-III contains de-identified health data from over 40,000 ICU patients (2001–2012), including demographics, vital signs, laboratory tests, and medications.
MIMIC-IV dataset (version 2.2) [12]. This updated version collected from 2008 to 2019 includes 24,610 samples after preprocessing, with approximately 29.5% positive samples for both mortality and readmission.
For all datasets, we apply the same preprocessing, outlined as follows: (1) forward filling missing values with patients’ most recent records; (2) standardizing features to zero mean and unit variance; (3) imputing missing features with dataset-level averages when all records are missing. We strictly maintain causality throughout preprocessing to ensure test data integrity.
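For a single patient’s visit table (rows are visits, columns are features), these three steps could look like the following sketch (our own illustrative pandas code; the helper name and the use of training-split statistics are assumptions consistent with the stated causality constraint):

```python
import pandas as pd

def preprocess_patient(visits: pd.DataFrame, train_mean: pd.Series, train_std: pd.Series) -> pd.DataFrame:
    """Apply the preprocessing steps above to one patient's visit table.
    Normalization statistics are computed on the training split only."""
    visits = visits.ffill()                      # (1) forward-fill with the most recent records
    visits = (visits - train_mean) / train_std   # (2) standardize to zero mean, unit variance
    visits = visits.fillna(0.0)                  # (3) still-missing features -> dataset-level mean (0 after standardization)
    return visits
```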
For the MIMIC-III dataset, the split of 80% training, 15% validation, and 5% testing was chosen to strictly align with the cohort selection and data splitting approach used in the benchmark study by Harutyunyan et al. [5]. This adherence ensures that our performance evaluation on MIMIC-III is directly comparable to established prior work. The specific data splits for CDSL (70% training, 10% validation, and 20% testing) and MIMIC-IV (70% training, 10% validation, and 20% testing) represent a standard and robust partitioning for machine learning model development and evaluation. For mortality prediction, we use the first 48 h of ICU data; for readmission prediction, we use all data before discharge.

5.2. Evaluation Metrics

We assess model performance using AUPRC, AUROC, and min(+P, Se). For our imbalanced datasets, AUPRC provides more informative assessment than AUROC, as it emphasizes positive class performance [23]. The min(+P, Se) metric, the minimum of precision (+P) and sensitivity (Se, or recall), ensures there is a balance between accurate positive predictions and capturing true positives. This is vital in healthcare, where both minimizing false alarms and detecting true cases matter.
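For illustration, one common way to compute these metrics with scikit-learn is sketched below (our own code; in particular, taking the largest threshold-wise minimum of precision and recall is an assumption about how min(+P, Se) is computed here):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc, roc_auc_score

def evaluate(y_true: np.ndarray, y_score: np.ndarray) -> dict:
    """Compute AUROC, AUPRC, and min(+P, Se) for binary predictions."""
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    return {
        "AUROC": roc_auc_score(y_true, y_score),
        "AUPRC": auc(recall, precision),
        "min(+P, Se)": float(np.max(np.minimum(precision, recall))),
    }
```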

5.3. Prediction Tasks

We conduct experiments on the following two clinically relevant prediction tasks.
Mortality prediction. This task predicts whether a patient will die during hospitalization. For the MIMIC datasets, we use the first 48 h of ICU data to predict in-hospital mortality (12% positive cases). For the CDSL dataset, we predict COVID-19 mortality (12.69% positive cases), which is particularly challenging due to the disease’s novel nature and varied impact across populations.
Readmission prediction. The 30-day readmission prediction task aims to predict hospital readmission within 30 days after discharge. Available only for MIMIC datasets, this task uses all pre-discharge data, with approximately 16% positive samples in MIMIC-III and 15.5% in MIMIC-IV.
Both tasks have significant clinical value for identifying high-risk patients, optimizing resource allocation, and enabling early interventions.

5.4. Baseline Models

We compare PathCare with several state-of-the-art approaches as follows:

5.4.1. EHR-Specific Models

  • RETAIN [2] utilizes a two-level neural attention mechanism to detect influential visits and significant clinical variables within those visits. It processes EHR data in reverse time order, mimicking physician practice by giving higher attention to recent clinical visits.
  • SAFARI [6] learns compact patient health representations by imposing a correlational sparsity prior on the correlations of medical feature pairs. It solves a bi-level optimization problem involving high-level inter-group correlations and lower-level intra-group correlations, using Laplacian kernels and graph neural networks.
  • AdaCare [16] captures the long- and short-term variations of biomarkers as clinical features to represent health status across multiple time scales. It models correlations between clinical features to enhance those that strongly indicate health status, maintaining high prediction accuracy while providing interpretability.
  • GRASP [4] enhances patient representation learning by leveraging knowledge from similar patients. It defines similarities between patients for different clinical tasks, finds similar patients with useful information, and learns cohort representation to extract valuable knowledge.

5.4.2. General Deep Learning Models

  • RNN is a standard recurrent neural network model applied to sequential medical data, serving as a fundamental baseline.
  • GRUα is a basic GRU model with an addition-based attention mechanism, serving as a strong baseline for healthcare prediction tasks.
  • Mamba [9] is a linear-time sequence modeling architecture based on selective state spaces. It allows the model to selectively propagate or forget information along the sequence length dimension, depending on the current token, making it suitable for processing long sequences of medical data.

5.4.3. Ablation Models

To evaluate our model components, we include the following two PathCare variants:
  • PathCare-Context removes the future clinical pathway context module, focusing solely on current information for prediction.
  • PathCare-Gate directly concatenates auxiliary and task-specific embeddings without the neuron-level filtering gate, maintaining potential redundancy between embeddings.

5.5. Implementation Details

Hardware and software. All models are trained on a single NVIDIA RTX 3090 GPU with CUDA 11.8 and 64 GB system memory. We implement our method using Python 3.11.4, PyTorch 2.0.1 [24], PyTorch Lightning 2.0.5 [25], and pyehr [26,27].
Training and hyperparameters. We use the AdamW optimizer [28] with a batch size of 256 patients for all models. Training runs for a maximum of 100 epochs, with early stopping after 10 epochs without AUPRC improvement on the validation set. We employ a learning rate of 0.001 with linear warmup (first 5 epochs) followed by cosine decay.
To ensure reproducibility, we fix the random seed to 42 for all experiments and employ bootstrapping on the test sets for robust performance evaluation.
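As a sketch of this optimization setup (our own code; the exact scheduler implementation used in the paper is an assumption), the optimizer and learning-rate schedule could be constructed as follows:

```python
import math
import torch

def build_optimizer_and_scheduler(model: torch.nn.Module, max_epochs: int = 100,
                                  warmup_epochs: int = 5, lr: float = 1e-3):
    """AdamW with linear warmup for the first epochs, then cosine decay to zero."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)

    def lr_lambda(epoch: int) -> float:            # called once per epoch via scheduler.step()
        if epoch < warmup_epochs:
            return (epoch + 1) / warmup_epochs     # linear warmup
        progress = (epoch - warmup_epochs) / max(1, max_epochs - warmup_epochs)
        return 0.5 * (1.0 + math.cos(math.pi * progress))   # cosine decay

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```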

6. Experimental Results and Analysis

We evaluate PathCare on in-hospital mortality prediction tasks across CDSL, MIMIC-III, and MIMIC-IV datasets, as well as 30-day readmission prediction tasks on MIMIC-III and MIMIC-IV datasets.

6.1. Experimental Results

Table 4 and Table 5 present performance comparisons between PathCare and the existing methods across multiple datasets and prediction tasks.
PathCare consistently outperforms the baseline methods across all datasets and tasks. For mortality prediction (Table 4), PathCare surpasses the strongest baselines by 0.5–1.3% in AUPRC and 0.5–1.6% in AUROC across datasets. More substantial improvements are observed in the min(+P, Se) metric, with gains of 4–22% over the best-performing baselines, demonstrating PathCare’s superior ability to balance precision and sensitivity.
For readmission prediction (Table 5), performance advantages become more pronounced. PathCare improves AUPRC by 0.5–0.8% and AUROC by 0.2–0.3% over the best baselines. The most significant gains occur in min(+P, Se), where PathCare outperforms existing methods, highlighting its particular strength in maintaining clinically valuable prediction balance.
Comparative analysis reveals that PathCare addresses some of the fundamental limitations in the existing approaches. Direct data space methods (RETAIN, AdaCare) rely solely on patient-specific observations without leveraging cross-patient information patterns. Indirect representation space methods (GRASP) lack explicit mechanisms for determining optimal information sharing strategies. Even advanced sequential modeling approaches like Mamba underperform compared to PathCare, underscoring the importance of both temporal modeling and contextual information integration for effective healthcare prediction.

6.2. Ablation Study

To understand component contributions, we compare PathCare with the following two variants: PathCare-Context (without future clinical pathway context) and PathCare-Gate (without the neuron-level filtering gate). The results in Table 4 and Table 5 show that both variants achieve competitive performance against the baselines but are outperformed by the complete PathCare.
PathCare-Context performs worse than the full model, with notable AUPRC decreases in mortality prediction (1.55% for CDSL and 6.01% for MIMIC-III) and readmission prediction (3.88% for MIMIC-III and 4.98% for MIMIC-IV). This confirms that modeling future clinical pathways provides valuable information for health status assessment and prediction.
Similarly, PathCare-Gate shows performance drops across datasets, with AUPRC decreases of 2.00%, 3.37%, and 2.33% for mortality prediction on CDSL, MIMIC-III, and MIMIC-IV, respectively, and 3.95% for MIMIC-III readmission. These results demonstrate the efficacy of the neuron-level filtering gate in reducing redundancy and selecting relevant information from auxiliary embeddings.
These findings confirm that both future clinical pathway modeling and neuron-level filtering mechanisms are essential components of PathCare, contributing to its superior performance in healthcare prediction tasks.

6.3. Observations and Analysis

To further understand how PathCare adaptively selects information from future-predictive embeddings versus task-specific embeddings, we examine the learned gate values across different patient groups. Figure 3 illustrates the distribution of gate values for survival and non-survival groups across the CDSL and MIMIC-III datasets.
Interestingly, patients with adverse outcomes (non-survival) consistently exhibit higher average gate values compared to those with favorable outcomes (survival) across both datasets (CDSL: 0.54 vs. 0.51; MIMIC-III: 0.41 vs. 0.38). This indicates that PathCare adaptively allocates more representation capacity to future-predictive features for high-risk patients, while relying more on task-specific features for low-risk patients.
For patients at higher risk, the model appears to emphasize capturing recent deterioration patterns and near-term trajectory shifts that signal potential complications. Conversely, for patients with better prognoses, the model incorporates more auxiliary embeddings to enhance representation robustness, potentially accounting for the greater stability and predictability in their clinical trajectories. This adaptive mechanism aligns with clinical practice, where physicians pay closer attention to immediate risk signals in critically ill patients while considering broader health indicators for stable patients.
This adaptive reliance on future-predictive features, controlled by the gate mechanism, as evidenced by the distinct distributions in Figure 3, highlights how PathCare can strategically leverage auxiliary information when it is deemed beneficial (e.g., for high-risk patients), while simultaneously down-weighting its influence when it might be less relevant or potentially introduce noise for other patient groups. Such dynamic information flow control is a key aspect of the model’s strategy to mitigate negative transfer from the auxiliary task, ensuring robust performance across diverse patient conditions.

7. Conclusions

In this paper, PathCare is proposed to perform healthcare prediction by integrating clinical pathway information at the neuron level. Specifically, an auxiliary sub-network is built to predict future clinical visits and provide additional context-sensitive information for the primary prediction task. We encourage diversity among hidden units in the health status embedding based on layer decorrelation to help the target-relevant units stand out, and we design a neuron-level filtering gate to order the neurons and adaptively select the target-predictive units for the primary prediction for patients in diverse conditions. Experimental results on the MIMIC-III, MIMIC-IV, and CDSL datasets demonstrate that PathCare consistently outperforms state-of-the-art baseline approaches across various healthcare prediction tasks. Our work provides a new perspective on integrating clinical pathway information for improved healthcare prediction, which could assist healthcare providers in making more informed decisions for patient care. Online resources are listed in Appendix A.

Author Contributions

Conceptualization, D.S.; Methodology, D.S. and X.L.; Software, D.S. and L.G.; Validation, C.Z.; Investigation, D.S.; Resources, L.M., L.W. and W.T.; Data curation, D.S. and K.Y.; Writing—original draft, D.S.; Writing—review & editing, D.S.; Visualization, D.S.; Supervision, L.M. and L.W.; Project administration, L.M. and W.T.; Funding acquisition, L.M. and W.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 82470774 and 62402017; Xuzhou Scientific Technological Projects, grant number KC23143; the Beijing Natural Science Foundation, grant number L244063; the Peking University Medicine plus X Pilot Program-Key Technologies R&D Project, grant number 2024YXXLHGG007; and the China Postdoctoral Science Foundation, grant number 2024M750122. The APC was funded by Peking University Third Hospital.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Online Resources

References

  1. Chen, J.; Zhang, A. HGMF: Heterogeneous graph-based fusion for multimodal data with incompleteness. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual Event, 6–10 July 2020; pp. 1295–1305.
  2. Choi, E.; Bahadori, M.T.; Sun, J.; Kulas, J.; Schuetz, A.; Stewart, W. RETAIN: An interpretable predictive model for healthcare using reverse time attention mechanism. In Proceedings of the Advances in Neural Information Processing Systems 29 (NIPS 2016), Barcelona, Spain, 5–10 December 2016; Volume 29.
  3. Ma, L.; Zhang, C.; Gao, J.; Jiao, X.; Yu, Z.; Zhu, Y.; Wang, T.; Ma, X.; Wang, Y.; Tang, W.; et al. Mortality prediction with adaptive feature importance recalibration for peritoneal dialysis patients. Patterns 2023, 4, 100892.
  4. Zhang, C.; Gao, X.; Ma, L.; Wang, Y.; Wang, J.; Tang, W. GRASP: Generic framework for health status representation learning based on incorporating knowledge from similar patients. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 715–723.
  5. Harutyunyan, H.; Khachatrian, H.; Kale, D.C.; Galstyan, A. Multitask learning and benchmarking with clinical time series data. arXiv 2017, arXiv:1703.07771.
  6. Ma, X.; Wang, Y.; Chu, X.; Ma, L.; Tang, W.; Zhao, J.; Yuan, Y.; Wang, G. Patient health representation learning via correlational sparse prior of medical features. IEEE Trans. Knowl. Data Eng. 2022, 35, 11769–11783.
  7. Zhang, Z.; Zhang, Q.; Jiao, Y.; Lu, L.; Ma, L.; Liu, A.; Liu, X.; Zhao, J.; Xue, Y.; Wei, B.; et al. Methodology and real-world applications of dynamic uncertain causality graph for clinical diagnosis with explainability and invariance. Artif. Intell. Rev. 2024, 57, 151.
  8. Chowdhury, R.R.; Li, J.; Zhang, X.; Hong, D.; Gupta, R.K.; Shang, J. PrimeNet: Pre-training for irregular multivariate time series. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 7184–7192.
  9. Gu, A.; Dao, T. Mamba: Linear-time sequence modeling with selective state spaces. arXiv 2023, arXiv:2312.00752.
  10. Zhang, C.; Chu, X.; Ma, L.; Zhu, Y.; Wang, Y.; Wang, J.; Zhao, J. M3Care: Learning with missing modalities in multimodal healthcare data. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 14–18 August 2022; pp. 2418–2428.
  11. Johnson, A.E.; Pollard, T.J.; Shen, L.; Li-wei, H.L.; Feng, M.; Ghassemi, M.; Moody, B.; Szolovits, P.; Celi, L.A.; Mark, R.G. MIMIC-III, a freely accessible critical care database. Sci. Data 2016, 3, 160035.
  12. Johnson, A.E.; Bulgarelli, L.; Shen, L.; Gayles, A.; Shammout, A.; Horng, S.; Pollard, T.J.; Hao, S.; Moody, B.; Gow, B.; et al. MIMIC-IV, a freely accessible electronic health record dataset. Sci. Data 2023, 10, 1.
  13. Van Buuren, S.; Groothuis-Oudshoorn, K. mice: Multivariate imputation by chained equations in R. J. Stat. Softw. 2011, 45, 1–67.
  14. Choi, E.; Xiao, C.; Stewart, W.; Sun, J. MiME: Multilevel medical embedding of electronic health records for predictive healthcare. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; pp. 4547–4557.
  15. Ma, L.; Zhang, C.; Wang, Y.; Ruan, W.; Wang, J.; Tang, W.; Ma, X.; Gao, X.; Gao, J. ConCare: Personalized clinical feature embedding via capturing the healthcare context. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 833–840.
  16. Ma, L.; Gao, J.; Wang, Y.; Zhang, C.; Wang, J.; Ruan, W.; Tang, W.; Gao, X.; Ma, X. AdaCare: Explainable clinical health status representation learning via scale-adaptive feature extraction and recalibration. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 825–832.
  17. Che, Z.; Purushotham, S.; Cho, K.; Sontag, D.; Liu, Y. Recurrent neural networks for multivariate time series with missing values. Sci. Rep. 2018, 8, 6085.
  18. Peters, M.E.; Ammar, W.; Bhagavatula, C.; Power, R. Semi-supervised sequence tagging with bidirectional language models. arXiv 2017, arXiv:1705.00108.
  19. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv 2014, arXiv:1412.3555.
  20. Cogswell, M.; Ahmed, F.; Girshick, R.; Zitnick, L.; Batra, D. Reducing overfitting in deep networks by decorrelating representations. arXiv 2015, arXiv:1511.06068.
  21. Shen, Y.; Tan, S.; Sordoni, A.; Courville, A. Ordered neurons: Integrating tree structures into recurrent neural networks. arXiv 2018, arXiv:1810.09536.
  22. HM Hospitales. COVID Data Save Lives. 2020. Available online: https://www.hmhospitales.com/prensa/notas-de-prensa/comunicado-covid-data-save-lives (accessed on 5 June 2024).
  23. Keilwagen, J.; Grosse, I.; Grau, J. Area under precision-recall curves for weighted and unweighted data. PLoS ONE 2014, 9, e92209.
  24. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An imperative style, high-performance deep learning library. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; Volume 32.
  25. Falcon, W.A. PyTorch Lightning; GitHub: San Francisco, CA, USA, 2019.
  26. Gao, J.; Zhu, Y.; Wang, W.; Wang, Z.; Dong, G.; Tang, W.; Wang, H.; Wang, Y.; Harrison, E.M.; Ma, L. A comprehensive benchmark for COVID-19 predictive modeling using electronic health records in intensive care. Patterns 2024, 5, 100951.
  27. Zhu, Y.; Wang, W.; Gao, J.; Ma, L. PyEHR: A Predictive Modeling Toolkit for Electronic Health Records. 2023. Available online: https://github.com/yhzhu99/pyehr (accessed on 22 November 2024).
  28. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
Figure 1. Time-series data of EHR. Patients may visit the hospital multiple times. The health status is depicted by various clinical features, including numerical variables (e.g., lab tests) and categorical variables (e.g., diagnosis codes).
Figure 3. Distribution of gate values for survival vs. non-survival groups across datasets. Higher values indicate greater reliance on future-predictive features.
Table 1. Notations for PathCare.
Notation | Definition
$y_t$ | Ground truth of the prediction target at the t-th visit
$\hat{y}_t$ | Prediction result at the t-th visit
$r_t \in \mathbb{R}^{N_r}$ | Multivariate visit record at the t-th visit
$\hat{r}_{t+1}$ | Prediction result for the status of the next visit
$h_t^f \in \mathbb{R}^{N_h}$ | Health status embedding learned to predict the future visit
$h_t \in \mathbb{R}^{N_h}$ | Health status embedding learned to predict the primary target
$g_{\mathrm{future}} \in \mathbb{R}^{N_g}$ | Projection of the future-specific embedding
$g_{\mathrm{task}} \in \mathbb{R}^{N_g}$ | Projection of the task-specific embedding
$\mathrm{gate} \in \mathbb{R}^{N_g}$ | Neuron-level gate modeling the demand degree of a future visit
$\mathrm{gate}_{\mathrm{future}} \in \mathbb{R}^{N_g}$ | Mask for selecting units from $g_{\mathrm{future}}$
$\mathrm{gate}_{\mathrm{task}} \in \mathbb{R}^{N_g}$ | Mask for selecting units from $g_{\mathrm{task}}$
$s \in \mathbb{R}^{N_g}$ | Combined health status representation for the target prediction
$B$ | Batch size
$C_{i,j}$ | Covariance between activations $i$ and $j$ in the layer
$g_i^b$ | The $i$-th activation of $g$ for the $b$-th case in the batch
$\mu_i$ | Sample mean of activation $i$ over the batch
$W_f$ | Projection matrix for future-specific representations
$W_t$ | Projection matrix for task-specific representations
$W_{\mathrm{gate}}$ | Projection matrix for the filtering gate
$\alpha$ | Hyperparameter for the decorrelation loss term
$\mathcal{L}_{\mathrm{future}}$ | Loss for predicting the next visit
$\mathcal{L}_{\mathrm{decorrelation}}$ | Decorrelation loss term
$\mathcal{L}_{\mathrm{task}}$ | Loss for the primary prediction task
Table 2. Statistics of the experimented datasets after preprocessing. The # Samples column shows the number of samples in each split (train, val, and test) and their percentage of the entire dataset. The # Label_Out = 1 and # Label_Re = 1 columns provide the count and percentage of patients with adverse outcomes within each split. “Out.” denotes “mortality outcome”, and “Re.” denotes “readmission”.
Dataset | Split | # Samples | # Label_Out = 1 | # Label_Re = 1
CDSL | Train | 2978 (69.99%) | 378 (12.69%) | -
CDSL | Val | 426 (10.01%) | 54 (12.68%) | -
CDSL | Test | 851 (20.00%) | 108 (12.69%) | -
MIMIC-III | Train | 16,094 (80.00%) | 4996 (31.04%) | 4996 (31.04%)
MIMIC-III | Val | 3018 (15.00%) | 934 (30.95%) | 934 (30.95%)
MIMIC-III | Test | 1006 (5.00%) | 312 (31.01%) | 312 (31.01%)
MIMIC-IV | Train | 17,227 (70.00%) | 5095 (29.58%) | 5095 (29.58%)
MIMIC-IV | Val | 2461 (10.00%) | 724 (29.42%) | 724 (29.42%)
MIMIC-IV | Test | 4922 (20.00%) | 1450 (29.46%) | 1450 (29.46%)
Table 3. Detailed statistics of the CDSL dataset.
Statistic | Total | Survived | Deceased
Number of patients | 4255 | 3715 (87.31%) | 540 (12.69%)
Number of records | 123,044 | 108,142 (87.89%) | 14,902 (12.11%)
Records per patient | 24.0 [15, 39] | 25.0 [15, 39] | 22.5 [11, 37]
Age | 67.2 [56.0, 80.0] | 65.1 [54.0, 77.0] | 81.6 [75.0, 89.0]
Age > mean (67 years) | 2228 (52.36%) | 1748 (47.05%) | 480 (88.89%)
Age ≤ mean (67 years) | 2027 (47.64%) | 1967 (52.95%) | 60 (11.11%)
Male | 2515 (59.11%) | 2173 (58.49%) | 342 (63.33%)
Female | 1740 (40.89%) | 1542 (41.51%) | 198 (36.67%)
Number of features | 99 | - | -
Length of stay (days) | 6.4 [4.0, 11.0] | 6.1 [4.0, 11.0] | 6.0 [3.0, 10.0]
Table 4. Performance comparison of different methods on mortality prediction tasks across the CDSL, MIMIC-III, and MIMIC-IV datasets.
Methods | CDSL Mortality (AUPRC / AUROC / min(+P, Se), ↑) | MIMIC-III Mortality (AUPRC / AUROC / min(+P, Se), ↑) | MIMIC-IV Mortality (AUPRC / AUROC / min(+P, Se), ↑)
RETAIN | 77.23 ± 4.13 / 93.67 ± 1.56 / 68.96 ± 4.14 | 45.60 ± 4.48 / 83.89 ± 2.12 / 28.74 ± 4.27 | 51.53 ± 1.40 / 86.61 ± 0.55 / 33.09 ± 1.77
RNN | 83.03 ± 2.97 / 95.55 ± 0.82 / 69.02 ± 5.06 | 47.63 ± 5.86 / 84.03 ± 1.93 / 28.73 ± 4.97 | 51.92 ± 1.67 / 85.09 ± 0.71 / 30.92 ± 1.42
SAFARI | 76.70 ± 4.02 / 94.42 ± 1.27 / 63.96 ± 3.99 | 48.32 ± 3.08 / 84.57 ± 0.94 / 25.31 ± 1.94 | 49.25 ± 1.88 / 85.21 ± 0.83 / 32.22 ± 1.73
AdaCare | 82.10 ± 4.02 / 94.78 ± 1.19 / 72.57 ± 3.60 | 51.19 ± 2.90 / 83.72 ± 0.86 / 25.76 ± 2.44 | 51.18 ± 1.53 / 83.79 ± 0.68 / 34.33 ± 1.60
GRASP | 83.60 ± 3.10 / 95.05 ± 0.96 / 71.12 ± 4.50 | 45.64 ± 6.29 / 83.24 ± 1.90 / 30.22 ± 5.81 | 52.63 ± 1.38 / 86.23 ± 0.61 / 30.69 ± 1.33
GRUα | 80.57 ± 3.83 / 95.31 ± 1.13 / 66.38 ± 3.51 | 52.24 ± 2.62 / 85.49 ± 0.72 / 24.69 ± 2.48 | 54.61 ± 1.37 / 86.16 ± 0.73 / 42.92 ± 1.68
Mamba | 79.21 ± 3.73 / 92.28 ± 2.06 / 66.69 ± 4.73 | 51.33 ± 3.12 / 85.33 ± 0.89 / 26.05 ± 2.08 | 51.66 ± 1.32 / 84.29 ± 0.72 / 32.35 ± 1.95
PathCare-Context | 82.56 ± 3.03 / 95.74 ± 0.93 / 72.48 ± 3.48 | 47.50 ± 4.83 / 83.99 ± 1.62 / 46.40 ± 0.42 | 51.67 ± 4.54 / 85.19 ± 1.70 / 52.47 ± 3.67
PathCare-Gate | 82.11 ± 3.73 / 95.24 ± 1.13 / 74.72 ± 3.69 | 50.14 ± 4.68 / 85.36 ± 1.65 / 50.80 ± 0.38 | 51.86 ± 4.09 / 84.13 ± 1.73 / 51.76 ± 3.39
PathCare | 84.11 ± 3.18 / 96.08 ± 0.97 / 76.55 ± 3.68 | 53.51 ± 4.40 / 85.63 ± 1.62 / 52.62 ± 0.16 | 54.19 ± 1.86 / 85.91 ± 0.65 / 52.62 ± 1.56
Note: Bold values indicate the best performance in each column.
Table 5. Performance comparison of different methods on 30-day readmission prediction tasks across the MIMIC-III and MIMIC-IV datasets.
Methods | MIMIC-III Readmission (AUPRC / AUROC / min(+P, Se), ↑) | MIMIC-IV Readmission (AUPRC / AUROC / min(+P, Se), ↑)
RETAIN | 48.98 ± 1.94 / 77.50 ± 1.14 / 25.02 ± 1.40 | 46.71 ± 1.77 / 77.53 ± 0.97 / 35.15 ± 1.76
RNN | 45.77 ± 2.13 / 74.34 ± 0.87 / 28.68 ± 1.89 | 48.72 ± 1.35 / 76.05 ± 0.81 / 27.12 ± 1.53
SAFARI | 46.65 ± 2.47 / 77.11 ± 1.30 / 25.25 ± 1.84 | 45.49 ± 1.82 / 76.70 ± 0.95 / 30.69 ± 1.64
AdaCare | 47.19 ± 2.40 / 76.97 ± 1.10 / 24.36 ± 1.70 | 46.87 ± 1.28 / 76.07 ± 0.84 / 26.40 ± 1.63
GRASP | 48.36 ± 2.09 / 76.70 ± 0.93 / 18.29 ± 1.50 | 50.23 ± 1.50 / 78.47 ± 0.88 / 29.19 ± 1.37
GRUα | 50.24 ± 2.08 / 78.36 ± 1.16 / 25.12 ± 1.43 | 50.97 ± 1.31 / 78.46 ± 0.86 / 33.80 ± 1.77
Mamba | 45.98 ± 2.20 / 76.38 ± 1.06 / 24.50 ± 1.52 | 48.04 ± 1.42 / 76.87 ± 0.86 / 27.45 ± 1.63
PathCare-Context | 47.13 ± 2.05 / 76.76 ± 0.92 / 47.17 ± 1.72 | 46.54 ± 1.61 / 76.98 ± 0.85 / 47.39 ± 1.41
PathCare-Gate | 47.06 ± 2.11 / 75.43 ± 0.99 / 46.42 ± 1.77 | 50.42 ± 1.61 / 76.85 ± 0.88 / 48.61 ± 1.50
PathCare | 51.01 ± 1.90 / 78.64 ± 0.88 / 50.44 ± 1.72 | 51.52 ± 1.61 / 78.41 ± 0.92 / 50.30 ± 1.40
Note: Bold values indicate the best performance in each column.