Article

Differential Diagnosis of Infectious Versus Autoimmune Encephalitis Using Artificial Intelligence-Based Modeling

by David Petrosian 1,*, Nataša Giedraitienė 2, Vera Taluntienė 2, Dagnė Apynytė 2, Haroldas Bikelis 2, Gytis Makarevičius 2, Mantas Jokubaitis 2 and Mantas Vaišvilas 2

1 Faculty of Medicine, Vilnius University, Vilnius 03101, Lithuania
2 Faculty of Medicine, Institute of Clinical Medicine, Vilnius University, Vilnius 03101, Lithuania
* Author to whom correspondence should be addressed.
J. Clin. Med. 2025, 14(22), 8222; https://doi.org/10.3390/jcm14228222
Submission received: 24 October 2025 / Revised: 14 November 2025 / Accepted: 19 November 2025 / Published: 20 November 2025
(This article belongs to the Section Clinical Neurology)

Abstract

Background: Encephalitis is a severe and potentially life-threatening inflammatory disorder of the central nervous system. Without prompt diagnosis and appropriate treatment, it often results in poor clinical outcomes. The study aimed to develop an artificial intelligence-based model that distinguishes autoimmune encephalitis from infectious encephalitis, encompassing a broad spectrum of autoimmune encephalitis phenotypes, serostatuses, and neuroimmunological entities. Methods: We conducted a retrospective analysis of patients diagnosed with autoimmune encephalitis, including paraneoplastic neurological syndromes and/or infectious encephalitis, at Vilnius University Hospital Santaros Klinikos from 2016 to 2024. Supervised machine learning techniques were used to train the models, and Shapley Additive Explanations analysis was applied to improve their interpretability. Results: A total of 233 patients were included in the study. The Random Forest model demonstrated the best performance in differentiating the etiology of encephalitis, achieving an AUROC of 0.966. Further analysis revealed that laboratory, electroencephalography, and clinical data were the most influential predictors, whereas imaging data contributed less to classification accuracy. Conclusions: We developed a machine learning model capable of distinguishing infectious encephalitis from both seropositive and seronegative autoimmune encephalitis. Since autoimmune cases may be misdiagnosed as infectious in the absence of detectable antibodies, our model has the potential to support clinical decision-making and reduce diagnostic uncertainty.

1. Introduction

Encephalitis is a life-threatening inflammatory condition of the nervous system that carries poor overall outcomes without timely diagnosis and appropriate management [1,2]. The two most common forms of encephalitis are infectious (IE) and antibody-positive autoimmune encephalitis (AE), with a comparable incidence of approximately 1 per 100,000 person-years [3].
Although encephalitis is rare, prompt etiological diagnosis is essential, as treatment delays frequently result in persistent neurological sequelae [4]. Unfortunately, establishing an etiological diagnosis remains challenging. Even in tertiary academic centers equipped with advanced diagnostic capabilities, including brain biopsies, the etiology of encephalitis remains undetermined in 30% to 60% of cases [5,6,7]. Furthermore, in routine clinical settings, the diagnostic yield is likely even lower due to the limited availability of advanced investigative tools. Commercially available polymerase chain reaction (PCR) assays for IE detect only a limited number of pathogens and may produce false-negative results due to various factors [8,9,10,11]. Similarly, commercial autoantibody testing for AE is limited in availability, costly, and frequently associated with diagnostic inaccuracies regardless of diagnostic modality [12,13,14,15].
To address these limitations, multiple scoring systems have been proposed to either diagnose AE or differentiate between AE and IE, many of which have demonstrated high performance and undergone external validation [16,17]. In contrast, artificial intelligence (AI)-based techniques have rarely been explored. A few published studies have reported initial results suggesting that AI may differentiate the etiology of encephalitis with accuracy comparable to that of experienced neurologists [18,19]. However, these studies involved relatively small AE samples, with a primary focus on antibody-positive limbic encephalitis. In addition, they did not address extra-limbic central nervous system manifestations, seronegative AE, peripheral nervous system involvement, or paraneoplastic neurological syndromes. Moreover, most previous models have relied primarily on MRI or laboratory data.
To build on this research, we aimed to develop a model that distinguishes between AE and IE, incorporating a wider spectrum of AE phenotypes, serostatuses, and neuroimmunological entities. Importantly, our approach integrates both clinical and laboratory data to enhance its applicability in real-world clinical practice.

2. Materials and Methods

2.1. Data Collection

We retrospectively collected data on patients diagnosed with AE, including paraneoplastic neurological syndromes (PNS), and/or IE at Vilnius University Hospital Santaros Klinikos between 2016 and 2024. For AE and PNS and their respective clinical syndromes, diagnoses were based on relevant published guidelines [20,21,22,23]. For other immune-mediated neuroinflammatory conditions, diagnoses were made using established criteria, with histological verification when available [24]. Antibody testing was performed with commercially available indirect immunofluorescence cell-based assays (CBA; Euroimmun, Lübeck, Germany) for the detection of neuronal surface antibodies (anti-N-methyl-D-aspartate receptor (NMDAR), anti-leucine-rich glioma-inactivated protein 1 (LGI-1), anti-α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR), anti-contactin-associated protein 2 (CASPR2), and anti-gamma-aminobutyric acid B receptor (GABAbR)), used in accordance with the manufacturer’s instructions. For commercial CBAs, both serum and cerebrospinal fluid (CSF) were tested when available. Lineblots for antibodies against intracellular antigens (anti-Hu, anti-Yo, anti-Ma1/2, anti-amphiphysin, anti-DNER, anti-CV2, anti-titin, anti-recoverin, anti-GAD65; Euroimmun, Lübeck, Germany) were likewise used in accordance with the manufacturer’s instructions. Paired CSF and serum testing was performed when feasible. Additionally, samples were screened using in-house rat brain immunohistochemistry, as previously described [25], to detect autoantibodies beyond the scope of commercial assays.
For IE, cases with a confirmed pathogen were diagnosed based on either bacterial cultures (blood and/or CSF), CSF polymerase chain reaction (PCR), or both. In a minority of cases, pathogen-specific antibodies were identified using ELISA or Western blotting, with the demonstration of intrathecal antibody production in all cases when appropriate. A small fraction of patients received the diagnosis of IE only after brain biopsy.
The dataset included patient demographics, presenting symptoms, serum and CSF parameters, electroencephalographic (EEG) findings (including diffuse slowing/non-epileptic or epileptiform abnormalities), and MRI results (with or without contrast use). Presenting symptoms were defined as the patient’s major complaints at the time of admission and/or objective neurological findings after neurological or psychiatric evaluation. To increase statistical power, we incorporated an external dataset comprising 20 cases of LGI1-antibody encephalitis and 21 cases of herpes simplex virus encephalitis from Müller-Jensen et al. [26]. Prior to integration, we performed data harmonization to ensure consistency between the internal and external datasets. All variables were reviewed for alignment in definitions, measurement units, and coding practices. Categorical variables—such as clinical symptoms, EEG findings, and MRI abnormalities—were compared across datasets and assigned to shared categories based on equivalent clinical meaning. Continuous variables, including serum C-reactive protein (CRP), white blood cell (WBC) count, and CSF parameters, were checked for unit consistency and converted when required. Only variables that could be consistently aligned across both datasets were included in the combined analysis. The external dataset contained complete information for all required variables, including age, sex, clinical symptoms, serum CRP and WBC counts, CSF profiles, EEG results, and MRI findings.
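To make the harmonization step concrete, the following is a minimal pandas sketch of the kind of alignment described above; the file names, column names, and unit conversion are illustrative assumptions rather than the study’s actual schemas.

```python
# Minimal harmonization sketch (file and column names are illustrative, not the actual schemas).
import pandas as pd

internal = pd.read_csv("santaros_cohort.csv")            # hypothetical internal export
external = pd.read_csv("mueller_jensen_cases.csv")       # hypothetical export of the external dataset [26]

# Align naming, coding, and units with the internal dataset before concatenation.
external = external.rename(columns={"csf_leukocytes": "csf_cell_count",
                                    "csf_protein_mg_dl": "csf_protein"})
external["csf_protein"] = external["csf_protein"] / 100.0          # mg/dL -> g/L
external["diagnosis"] = external["diagnosis"].map({"LGI1": "AE", "HSV": "IE"})

# Keep only variables that could be consistently aligned across both datasets.
shared = internal.columns.intersection(external.columns)
combined = pd.concat([internal[shared], external[shared]], ignore_index=True)
```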

2.2. Data Pre-Processing

Categorical variables were transformed into binary format using one-hot encoding. Missing values were handled via univariate imputation: continuous variables were imputed with their median values, while categorical variables were imputed using the mode of each class. The dataset was split into training (70%) and testing (30%) sets using stratified sampling to preserve the proportion of AE and IE cases. To address class imbalance, higher weights were assigned to the minority class during model training, thereby improving the sensitivity and overall performance.
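As an illustration of this pre-processing pipeline, the sketch below uses pandas and scikit-learn; the column names are assumed for demonstration, and the input continues the harmonization sketch above.

```python
# Pre-processing sketch: one-hot encoding, univariate imputation, stratified 70/30 split.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split

df = combined.copy()                                   # combined cohort from the harmonization sketch
y = (df.pop("diagnosis") == "AE").astype(int)          # 1 = autoimmune, 0 = infectious

# One-hot encode categorical variables into binary indicator columns.
categorical = df.select_dtypes(include="object").columns
df = pd.get_dummies(df, columns=list(categorical))

# Median imputation for continuous variables, mode imputation for the remaining (binary) columns.
continuous = ["age", "serum_wbc", "serum_crp", "csf_cell_count", "csf_protein", "csf_glucose"]
df[continuous] = SimpleImputer(strategy="median").fit_transform(df[continuous])
binary_cols = [c for c in df.columns if c not in continuous]
df[binary_cols] = SimpleImputer(strategy="most_frequent").fit_transform(df[binary_cols])

# Stratified split preserving the AE/IE proportion in both sets.
X_train, X_test, y_train, y_test = train_test_split(
    df, y, test_size=0.30, stratify=y, random_state=42
)
```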
Prior to model development, Recursive Feature Elimination with Cross-Validation (RFECV) was applied to identify the most informative predictors. RFECV was implemented using a stratified 5-fold cross-validation strategy. Feature elimination was guided by the AUROC metric, and the process continued until performance no longer improved or a minimum of 25 features remained. An XGBoost classifier was used as the underlying estimator due to its ability to handle nonlinear relationships and mixed data types.
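A minimal sketch of this feature-selection step, assuming the training split from the previous sketch; the XGBoost estimator settings are illustrative.

```python
# Recursive feature elimination with stratified 5-fold CV, scored by AUROC,
# stopping once performance no longer improves or 25 features remain.
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBClassifier

selector = RFECV(
    estimator=XGBClassifier(eval_metric="logloss", random_state=42),
    step=1,
    min_features_to_select=25,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
    scoring="roc_auc",
)
selector.fit(X_train, y_train)
selected_features = X_train.columns[selector.support_]
```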

2.3. Model Development and Validation

Supervised machine learning classifiers were employed to develop predictive models capable of distinguishing AE from IE. The models included XGBoost, Random Forest, LightGBM, Logistic Regression, K-Nearest Neighbors, and Gaussian Naïve Bayes.
Hyperparameter tuning for all models was performed using RandomizedSearchCV with stratified 5-fold cross-validation, which preserves the proportion of AE and IE cases in each fold. The AUROC metric was used to evaluate model performance during cross-validation, and the best hyperparameters were selected based on this score (see Table S1 for the full hyperparameter search spaces).
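For the final Random Forest classifier, the tuning procedure can be sketched as follows; the search space shown here is illustrative (the actual grids are listed in Table S1) and the variables continue the earlier sketches.

```python
# Randomized hyperparameter search for the Random Forest, scored by AUROC over stratified folds.
from scipy.stats import randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold

param_distributions = {                       # illustrative ranges; see Table S1 for the actual grids
    "n_estimators": randint(100, 1000),
    "max_depth": randint(3, 20),
    "min_samples_leaf": randint(1, 10),
}
search = RandomizedSearchCV(
    RandomForestClassifier(class_weight="balanced", random_state=42),   # class weighting for imbalance
    param_distributions=param_distributions,
    n_iter=50,
    scoring="roc_auc",
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
    random_state=42,
)
search.fit(X_train[selected_features], y_train)
best_model = search.best_estimator_
```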
Models were evaluated on the independent test set that was retained during the dataset-splitting process. Performance was assessed using accuracy, sensitivity, specificity, F1-score, precision, and AUROC. SHAP (SHapley Additive exPlanations) was used to interpret the contribution of each variable to model predictions and identify the most important features driving differential diagnosis.
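A sketch of the held-out evaluation, computing the reported metrics from the tuned model of the previous sketch (the SHAP interpretation is illustrated separately in Section 3.2):

```python
# Held-out evaluation on the 30% test set retained during splitting.
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_prob = best_model.predict_proba(X_test[selected_features])[:, 1]
y_pred = (y_prob >= 0.5).astype(int)

metrics = {
    "accuracy":    accuracy_score(y_test, y_pred),
    "precision":   precision_score(y_test, y_pred),
    "sensitivity": recall_score(y_test, y_pred),               # recall on the positive (AE) class
    "specificity": recall_score(y_test, y_pred, pos_label=0),  # recall on the negative (IE) class
    "f1":          f1_score(y_test, y_pred),
    "auroc":       roc_auc_score(y_test, y_prob),
}
```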

2.4. Statistical Analysis

Statistical analysis was conducted using R 4.4.3 (R Foundation for Statistical Computing, Vienna, Austria) and Python 3.11.4 (Python Software Foundation, Wilmington, DE, USA). The Shapiro–Wilk test was applied to check for data normality. Continuous data were presented as medians with interquartile ranges, and categorical data as frequencies along with percentages. Association between categorical variables was assessed using the chi-square test or Fisher’s exact test. Comparisons of two independent groups were conducted using the Mann–Whitney U test or independent samples t-test. Data visualization comprised pie charts to demonstrate the distribution of categorical variables, and chord diagrams created with the R ‘circlize’ package to illustrate relationships between variables. Locally weighted scatterplot smoothing (LOWESS) and zero-crossing analysis were applied to determine model-derived thresholds for laboratory features. Statistical significance was set at p < 0.05.
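For illustration, the univariate comparisons described above can be reproduced with SciPy; the sketch below assumes the cohort frame from the pre-processing sketch and uses the seizure counts reported in Table 2 for the categorical example.

```python
# Univariate group comparisons with SciPy.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact, mannwhitneyu, shapiro

ae_cells = df.loc[y == 1, "csf_cell_count"]
ie_cells = df.loc[y == 0, "csf_cell_count"]

# Normality check, then a non-parametric comparison of the two independent groups.
_, p_normal = shapiro(ae_cells)
_, p_cells = mannwhitneyu(ae_cells, ie_cells)

# Chi-square test (Fisher's exact test for sparse tables) for a categorical symptom,
# here the seizure counts reported in Table 2 (AE 42/83 vs. IE 31/150).
table = np.array([[42, 41], [31, 119]])
_, p_seizures, _, expected = chi2_contingency(table)
if (expected < 5).any():
    _, p_seizures = fisher_exact(table)
```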

3. Results

3.1. Clinical and Paraclinical Features of the Cohort

Out of 368 total cases, we excluded 18 patients with hematologic malignancies (12 leukemia, 5 lymphoma, 1 myeloma), 10 with brain abscess, 6 with other oncologic conditions, and 101 with miscellaneous diagnoses not related to encephalitis, leaving a final cohort of 233 patients (83 (35.6%) AE and 150 (64.4%) IE). Most cases in the autoimmune group consisted of antibody-positive limbic encephalitis, while the infectious group included both viral (n = 84, 56.0%) and bacterial (n = 66, 44.0%) agents (Table 1).
Clinical, paraclinical, and demographic data are displayed in Table 2. The most common presenting symptoms in IE were fever (n = 110, 73.3%), headache (n = 69, 46.0%), and ataxia (n = 62, 41.3%). In contrast, seizures (n = 42, 50.6%), memory impairment (n = 37, 44.6%), behavioral changes (n = 28, 33.7%), and emotional changes (n = 27, 32.5%) were more frequent in AE. Laboratory findings showed that patients with IE had significantly higher CSF cell counts and protein levels, lower CSF glucose (hypoglycorrhachia), and elevated serum CRP levels compared to AE.
Among paraclinical findings, an encephalopathic EEG pattern was significantly more frequent in IE than in AE, as were parenchymal/meningeal contrast enhancement, mass effect, and restricted diffusion on MRI (Table 2).
The most frequently observed clinical syndrome in AE was focal limbic encephalopathy, whereas generalized encephalopathy predominated in IE. Presenting symptoms, their relations, and established syndromes are illustrated in Figure 1 and Figure 2.

3.2. AI Modeling

After performing recursive feature elimination to select the most important variables, we identified a subset of features for model development. The features selected for AI modeling are presented in Table 3.
Using these variables, we employed a set of widely used classifiers (XGBoost, Random Forest, LightGBM, Logistic Regression, Naïve Bayes, and K-Nearest Neighbors), chosen for their established performance in classification tasks. Random Forest was selected as the final model because it was the most robust classifier, achieving the highest predictive performance based on AUROC values. The DeLong test was performed to compare the AUROC values of the developed models, showing that Random Forest, Logistic Regression, and LightGBM achieved significantly higher AUROC values than K-Nearest Neighbors; no other statistically significant differences were found. Detailed performance metrics are presented in Table 4. AUROC values for the different classifiers are shown in Figure 3.
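Because common Python machine learning libraries do not ship a DeLong test, the sketch below shows a direct (O(m·n)) implementation of the two-model comparison following DeLong’s covariance formulation; it is an illustrative re-implementation, not the study’s code.

```python
# Direct DeLong test for two correlated AUROCs (illustrative re-implementation).
import numpy as np
from scipy.stats import norm

def delong_test(y_true, probs_a, probs_b):
    """Two-sided DeLong test comparing the AUROCs of two models on the same test set."""
    y_true = np.asarray(y_true)
    scores = np.vstack([probs_a, probs_b])                 # shape (2, n_samples)
    X, Y = scores[:, y_true == 1], scores[:, y_true == 0]  # positives and negatives per model
    m, n = X.shape[1], Y.shape[1]

    # psi(x, y) = 1 if x > y, 0.5 if tied, 0 otherwise, for every positive/negative pair.
    psi = (X[:, :, None] > Y[:, None, :]).astype(float) + 0.5 * (X[:, :, None] == Y[:, None, :])
    auc = psi.mean(axis=(1, 2))                            # AUROC of each model
    v10, v01 = psi.mean(axis=2), psi.mean(axis=1)          # structural components

    s10, s01 = np.cov(v10), np.cov(v01)
    var = (s10[0, 0] + s10[1, 1] - 2 * s10[0, 1]) / m + (s01[0, 0] + s01[1, 1] - 2 * s01[0, 1]) / n
    z = (auc[0] - auc[1]) / np.sqrt(var)
    return auc, 2 * norm.sf(abs(z))

# Example usage (probability vectors from two fitted classifiers on the same test set):
# aucs, p_value = delong_test(y_test, rf_probs, knn_probs)
```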
The SHAP framework was employed to evaluate the impact of different variables on the differential diagnosis in the developed machine learning model. Persistent fever (defined as body temperature ≥ 38 °C lasting ≥ 3 consecutive days), headache, and diffuse slowing/non-epileptic abnormalities on EEG were strongly associated with IE (Figure 4), whereas memory impairment, nystagmus, emotional changes, and seizures were predictive of AE. Among the laboratory features, elevated CSF cell counts and protein levels exhibited the highest predictive value for IE. In comparison, imaging features consistently demonstrated lower predictive importance, as quantified by their SHAP values relative to laboratory and clinical variables.
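A minimal sketch of the SHAP analysis for the final Random Forest model, reusing variables from the modeling sketches above; the handling of the per-class output shape depends on the installed shap version.

```python
# SHAP analysis of the final Random Forest model.
import numpy as np
import shap

explainer = shap.TreeExplainer(best_model)
shap_values = explainer.shap_values(X_test[selected_features])

# For a binary random forest, shap returns per-class values (a list or a 3-D array,
# depending on the shap version); keep the values for the positive (AE) class.
if isinstance(shap_values, list):
    shap_ae = shap_values[1]
elif np.ndim(shap_values) == 3:
    shap_ae = shap_values[:, :, 1]
else:
    shap_ae = shap_values

shap.summary_plot(shap_ae, X_test[selected_features])                    # beeswarm plot (cf. Figure 4a)
shap.summary_plot(shap_ae, X_test[selected_features], plot_type="bar")   # mean |SHAP| ranking (cf. Figure 4b)
```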
LOWESS smoothing of SHAP values was applied to identify zero-crossing points, which serve as model-derived thresholds beyond which the likelihood of autoimmune encephalitis begins to increase for key laboratory features: CSF cell count 14.32 cells/µL, CSF protein 0.67 g/L, serum CRP 6.85 mg/L, and CSF glucose 3.31 mmol/L (Figure 5).
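The threshold derivation can be sketched as follows, assuming the SHAP values from the previous sketch; the feature name is a hypothetical column label, and linear interpolation is used to locate the zero-crossing of the smoothed curve.

```python
# LOWESS smoothing of SHAP values and zero-crossing threshold for one laboratory feature.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

feature = "csf_cell_count"                                  # hypothetical column label
col = list(selected_features).index(feature)
x = X_test[selected_features][feature].to_numpy()
s = shap_ae[:, col]

# Smooth SHAP values along the feature axis, then locate the first sign change:
# the model-derived threshold separating IE-leaning from AE-leaning values.
smoothed = lowess(s, x, frac=0.5, return_sorted=True)
xs, ys = smoothed[:, 0], smoothed[:, 1]
crossings = np.where(np.diff(np.sign(ys)) != 0)[0]
if crossings.size:
    i = crossings[0]
    threshold = xs[i] - ys[i] * (xs[i + 1] - xs[i]) / (ys[i + 1] - ys[i])  # linear interpolation
```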
Additionally, Figure S1 presents four force plots, each corresponding to a patient, including two seropositive and two seronegative cases, to demonstrate how individual features contributed to the diagnosis of AE in each case.

3.3. Comparison with Human Controls

To compare the performance of the developed AI model against human controls, we constructed an independent database comprising 70 IE and AE cases that had not been used in model training. Model performance was then compared with that of clinicians (Table 5). The features made available to clinicians are detailed in Table 3. Further analysis demonstrated that clinicians primarily relied on laboratory data when differentiating encephalitis etiology. The features considered most impactful for the decision-making process are illustrated in Figure 6.

4. Discussion

In this study, we developed and evaluated an AI-based model to differentiate AE from IE using demographic, clinical, and paraclinical variables. The model incorporated both antibody-positive and antibody-negative cases as well as a substantial proportion of extra-limbic manifestations including peripheral nervous system disorders, thereby reflecting real-life clinical scenarios. Our model demonstrated good performance metrics with accuracy comparable to trained neurologists.
Previous studies have emphasized the time of symptom onset as a key discriminator between IE and AE [27,28]. Indeed, common subtypes of AE (e.g., LGI-1, CASPR2, and GAD65-mediated syndromes) typically follow an indolent course [29,30,31]. This complicates diagnosis, as current clinical criteria only account for cases with sub-acute symptom onset, consequently delaying recognition and immunotherapy initiation. Our findings suggest that differential diagnosis may be achievable based on initial presenting symptoms, independent of disease course, with the potential for earlier prospective identification of AE.
Serological testing remains the cornerstone of AE diagnosis, and the incorporation of seropositive cases into our model facilitated robust disease modeling despite known caveats of commercial assays. However, seronegative AE continues to pose substantial diagnostic challenges, often leading to misdiagnosis, delays in treatment, and worse clinical outcomes [32,33,34]. Although advanced techniques like phage immunoprecipitation sequencing may serve as an additional diagnostic modality in selected AE cases, they remain inaccessible to most clinical centers [35]. Since up to 50% of AE cases may be seronegative, there is a critical need for diagnostic tools that do not rely on antibody detection. Our results indicate that an AI-based approach may help to address this gap [19,36]. Future collaborative studies should prioritize the inclusion of seronegative cohorts as such an approach would be cost-effective and broadly applicable to real-world clinical settings.
When testing our model against neurologists and neurologists in training, we found that clinicians relied on laboratory features, particularly CSF cell count and protein levels. Despite basing their diagnoses on similar features, none of the human controls outperformed the model. Nevertheless, the AI-based model and the human diagnosticians misclassified different cases, suggesting that the etiological decisions of the AI and the human controls rest on different data. For example, CSF cell count thresholds differentiating AE and IE differed greatly in our study from previous reports [27], likely due to differences in population characteristics and sample size, underscoring the limited generalizability of rigid cut-offs. In most cases, clinicians misclassified patients who did not present with typical features of a specific encephalitis etiology. Borderline laboratory values often contributed to these diagnostic errors; for example, some patients exhibited CSF pleocytosis and increased protein levels, which could be misinterpreted as indicative of an infectious etiology. Differentiation remains challenging when laboratory findings do not clearly align with the presenting symptoms. In contrast, the SHAP framework provided insights into model decision-making, suggesting that AI can move beyond subjective thresholds such as cell count or symptom onset to refine etiological classification.
Likewise, MRI interpretation using AI-based techniques has shown potential to aid in the diagnosis of AE [37]. Radiomics techniques can extract quantitative features—such as texture, shape, and intensity—from medical images that may not be easily detected by the human eye [38]. While incorporating radiomics features could potentially improve model performance, we did not include them in the current study due to the relatively modest sample size and the high dimensionality of radiomics data. Including hundreds of imaging features with a limited number of AE cases could increase the risk of overfitting and reduce model generalizability. Future studies with larger, multi-center cohorts and external validation are warranted to investigate the added value of radiomics in enhancing the AI-based differentiation of AE and IE.
The major strength of this study is the integration of both seropositive and seronegative AE cases. Such an approach enhanced the model’s relevance for real-world practice, where autoimmune etiologies may be overlooked in the absence of antibody detection. Furthermore, the infectious etiology subgroup consisted of both bacterial and viral agents, broadening the applicability and generalizability of the developed model. A deliberate methodological choice was made to exclude time of symptom onset, a parameter often used in clinical reasoning but limited by the indolent course of common AE subtypes (e.g., LGI1, CASPR2). By focusing on initial presenting features, our model is better positioned to facilitate early diagnosis.
The study also has limitations. First, our dataset sample size was modest, which may limit the robustness of the model, particularly for rare AE subtypes. A small sample size can reduce the generalizability of model-derived thresholds and increase the risk of overfitting to patterns present in the internal cohort. Second, we faced class imbalance, which was addressed using class weighting; however, this approach may accentuate overfitting to more prevalent classes, potentially affecting predictions in underrepresented patient groups. Third, external validation was not feasible in this study, and future investigations across independent populations are required to confirm the model’s performance and generalizability. Fewer seronegative cases were included than seropositive cases in the current dataset, which may limit predictive performance for this subgroup; nevertheless, their inclusion enhances the overall applicability of the model, and future studies with larger seronegative cohorts could further improve the predictions. Finally, the internal dataset originated from a single tertiary center (Vilnius University Hospital Santaros Klinikos), which may differ from other clinical settings in terms of patient demographics, comorbidities, and diagnostic practices. Consequently, the model’s applicability to community hospitals or resource-limited settings—where EEG or MRI may not be available—may be limited.

5. Conclusions

AI-based techniques can effectively distinguish between autoimmune encephalitis and infectious encephalitis in a manner comparable to human assessments based on presenting symptoms, without relying on the timing of symptom onset. The developed model encompasses both limbic and extralimbic cases, addressing scenarios of both antibody-positive and antibody-negative patients to better reflect real-world clinical situations. To account for the diagnostic gap of seronegative autoimmune encephalitis, future efforts in AI-based modeling should prioritize seronegative cases to address this critical clinical need and enhance diagnostic accuracy across the entire spectrum of autoimmune encephalitis.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/jcm14228222/s1, Table S1. Hyperparameter search spaces for the machine learning classifiers, Figure S1. SHAP force plots illustrating the impact of clinical features on model predictions for autoimmune encephalitis cases.

Author Contributions

D.P.: Conceptualization; Methodology; Formal analysis and investigation; Writing—original draft preparation; Writing—review and editing; Visualization; Software. N.G.: Conceptualization; Formal analysis and investigation; Writing—review and editing; Supervision; Project administration. V.T.: Formal analysis and investigation. D.A.: Formal analysis and investigation. H.B.: Formal analysis and investigation. G.M.: Formal analysis and investigation; Writing—review and editing. M.J.: Formal analysis and investigation; Writing—review and editing. M.V.: Conceptualization; Methodology; Formal analysis and investigation; Writing—original draft preparation; Writing—review and editing; Supervision; Project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Lithuanian Bioethics Committee (approval number L-23-02/2, amendment 2, no. 6B-24-4, dated 23 January 2024).

Informed Consent Statement

Patient consent was waived due to the retrospective nature of this study; the Lithuanian Bioethics Committee granted an exemption from obtaining informed consent on the basis of the retrospective design.

Data Availability Statement

The datasets generated and/or analyzed in the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dalmau, J.; Graus, F. Antibody-Mediated Encephalitis. N. Engl. J. Med. 2018, 378, 840–851. [Google Scholar] [CrossRef]
  2. Wang, H.; Zhao, S.; Wang, S.; Zheng, Y.; Wang, S.; Chen, H.; Pang, J.; Ma, J.; Yang, X.; Chen, Y. Global magnitude of encephalitis burden and its evolving pattern over the past 30 years. J. Infect. 2022, 84, 777–787. [Google Scholar] [CrossRef]
  3. Dubey, D.; Pittock, S.J.; Kelly, C.R.; McKeon, A.; Lopez-Chiriboga, A.S.; Lennon, V.A.; Gadoth, A.; Smith, C.Y.; Bryant, S.C.; Klein, C.J.; et al. Autoimmune encephalitis epidemiology and a comparison to infectious encephalitis. Ann. Neurol. 2018, 83, 166–177. [Google Scholar] [CrossRef] [PubMed]
  4. Hansen, M.A.; Samannodi, M.S.; Castelblanco, R.L.; Hasbun, R. Clinical Epidemiology, Risk Factors, and Outcomes of Encephalitis in Older Adults. Clin. Infect. Dis. 2020, 70, 2377–2385. [Google Scholar] [CrossRef]
  5. Gelfand, J.M.; Genrich, G.; Green, A.J.; Tihan, T.; Cree, B.A.C. Encephalitis of Unclear Origin Diagnosed by Brain Biopsy: A Diagnostic Challenge. JAMA Neurol. 2015, 72, 66–72. [Google Scholar] [CrossRef]
  6. Glaser, C.A.; Gilliam, S.; Schnurr, D.; Forghani, B.; Honarmand, S.; Khetsuriani, N.; Fischer, M.; Cossen, C.K.; Anderson, L.J. In Search of Encephalitis Etiologies: Diagnostic Challenges in the California Encephalitis Project, 1998–2000. Clin. Infect. Dis. 2003, 36, 731–742. [Google Scholar] [CrossRef]
  7. Granerod, J.; Ambrose, H.E.; Davies, N.W.S.; Clewley, J.P.; Walsh, A.L.; Morgan, D.; Cunningham, R.; Zuckerman, M.; Mutton, K.J.; Solomon, T.; et al. Causes of encephalitis and differences in their clinical presentations in England: A multicentre, population-based prospective study. Lancet Infect. Dis. 2010, 10, 835–844. [Google Scholar] [CrossRef] [PubMed]
  8. Dang-Orita, N.; Chan-Golston, A.M.; Mitchell, M.; Sivasubramanian, G. 815. False Negative BioFire FilmArray Meningitis/Encephalitis Multiplex PCR Assay in Cryptococcal meningitis: A Single Center Analysis. Open Forum Infect. Dis. 2023, 10 (Suppl. S2), ofad500.860. [Google Scholar] [CrossRef]
  9. De Tiège, X.; Héron, B.; Lebon, P.; Ponsot, G.; Rozenberg, F. Limits of Early Diagnosis of Herpes Simplex Encephalitis in Children: A Retrospective Study of 38 Cases. Clin. Infect. Dis. 2003, 36, 1335–1339. [Google Scholar] [CrossRef]
  10. Shin, Y.W.; Sunwoo, J.-S.; Lee, H.-S.; Lee, W.-J.; Ahn, S.-J.; Lee, S.K.; Chu, K. Clinical significance of Epstein-Barr virus polymerase chain reaction in cerebrospinal fluid. Encephalitis 2021, 2, 1–8. [Google Scholar] [CrossRef]
  11. Weil, A.A.; Glaser, C.A.; Amad, Z.; Forghani, B. Patients with Suspected Herpes Simplex Encephalitis: Rethinking an Initial Negative Polymerase Chain Reaction Result. Clin. Infect. Dis. 2002, 34, 1154–1157. [Google Scholar] [CrossRef]
  12. Déchelotte, B.; Muñiz-Castrillo, S.; Joubert, B.; Vogrig, A.; Picard, G.; Rogemond, V.; Pinto, A.-L.; Lombard, C.; Desestret, V.; Fabien, N.; et al. Diagnostic yield of commercial immunodots to diagnose paraneoplastic neurologic syndromes. Neurol. Neuroimmunol. Neuroinflamm. 2020, 7, e701. [Google Scholar] [CrossRef] [PubMed]
  13. Ruiz-García, R.; Muñoz-Sánchez, G.; Naranjo, L.; Guasp, M.; Sabater, L.; Saiz, A.; Dalmau, J.; Graus, F.; Martinez-Hernandez, E. Limitations of a Commercial Assay as Diagnostic Test of Autoimmune Encephalitis. Front. Immunol. 2021, 12, 691536. [Google Scholar] [CrossRef]
  14. Milano, C.; Businaro, P.; Papi, C.; Arlettaz, L.; Marmolejo, L.; Naranjo, L.; Gastaldi, M.; Iorio, R.; Saiz, A.; Planagumà, J.; et al. Assessing Commercial Tissue-Based Assays for Autoimmune Neurologic Disorders (I). Neurol. Neuroimmunol. Neuroinflamm. 2025, 12, e200410. [Google Scholar] [CrossRef] [PubMed]
  15. Papi, C.; Milano, C.; Arlettaz, L.; Businaro, P.; Marmolejo, L.; Naranjo, L.; Planagumà, J.; Martinez-Hernandez, E.; Armangué, T.; Guasp, M.; et al. Assessing Commercial Tissue-Based Assays for Autoimmune Neurologic Disorders (II). Neurol. Neuroimmunol. Neuroinflamm. 2025, 12, e200406. [Google Scholar] [CrossRef]
  16. Fjordside, L.; Nissen, M.S.; Florescu, A.M.; Storgaard, M.; Larsen, L.; Wiese, L.; von Lüttichau, H.R.; Jepsen, M.P.G.; Hansen, B.R.; Andersen, C.Ø.; et al. Validation of a risk score to differentiate autoimmune and viral encephalitis: A Nationwide Cohort Study in Denmark. J. Neurol. 2024, 271, 4972–4981. [Google Scholar] [CrossRef]
  17. Vogrig, A.; Péricart, S.; Pinto, A.-L.; Rogemond, V.; Muñiz-Castrillo, S.; Picard, G.; Selton, M.; Mittelbronn, M.; Lanoiselée, H.-M.; Michenet, P.; et al. Immunopathogenesis and proposed clinical score for identifying Kelch-like protein-11 encephalitis. Brain Commun. 2021, 3, fcab185. [Google Scholar] [CrossRef]
  18. Demuth, S.; Paris, J.; Faddeenkov, I.; De Sèze, J.; Gourraud, P.A. Clinical applications of deep learning in neuroinflammatory diseases: A scoping review. Rev. Neurol. 2025, 181, 135–155. [Google Scholar] [CrossRef]
  19. Choi, B.K.; Choi, Y.J.; Sung, M.; Ha, W.; Chu, M.K.; Kim, W.-J.; Heo, K.; Kim, K.M.; Park, Y.R. Development and validation of an artificial intelligence model for the early classification of the aetiology of meningitis and encephalitis: A retrospective observational study. eClinicalMedicine 2023, 61, 102051. [Google Scholar] [CrossRef] [PubMed]
  20. Graus, F.; Titulaer, M.J.; Balu, R.; Benseler, S.; Bien, C.G.; Cellucci, T.; Cortese, I.; Dale, R.C.; Gelfand, J.M.; Geschwind, M.; et al. A clinical approach to diagnosis of autoimmune encephalitis. Lancet Neurol. 2016, 15, 391–404. [Google Scholar] [CrossRef] [PubMed]
  21. Graus, F.; Vogrig, A.; Muñiz-Castrillo, S.; Antoine, J.-C.G.; Desestret, V.; Dubey, D.; Giometto, B.; Irani, S.R.; Joubert, B.; Leypoldt, F.; et al. Updated Diagnostic Criteria for Paraneoplastic Neurologic Syndromes. Neurol. Neuroimmunol. Neuroinflamm. 2021, 8, e1014. [Google Scholar] [CrossRef] [PubMed]
  22. Vaišvilas, M.; Ciano-Petersen, N.L.; Macarena Villagrán-García, M.D.; Muñiz-Castrillo, S.; Vogrig, A.; Honnorat, J. Paraneoplastic encephalitis: Clinically based approach on diagnosis and management. Postgrad. Med. J. 2023, 99, 669–678. [Google Scholar] [CrossRef]
  23. Orozco, E.; Valencia-Sanchez, C.; Britton, J.; Dubey, D.; Flanagan, E.P.; Lopez-Chiriboga, A.S.; Zalewski, N.; Zekeridou, A.; Pittock, S.J.; McKeon, A. Autoimmune Encephalitis Criteria in Clinical Practice. Neurol. Clin. Pract. 2023, 13, e200151. [Google Scholar] [CrossRef] [PubMed]
  24. Taieb, G.; Mulero, P.; Psimaras, D.; van Oosten, B.W.; Seebach, J.D.; Marignier, R.; Pico, F.; Rigau, V.; Ueno, Y.; Duflos, C.; et al. CLIPPERS and its mimics: Evaluation of new criteria for the diagnosis of CLIPPERS. J. Neurol. Neurosurg. Psychiatry 2019, 90, 1027–1038. [Google Scholar] [CrossRef]
  25. Vaisvilas, M.; Petrosian, D.; Bagdonaite, L.; Taluntiene, V.; Kralikiene, V.; Daugelaviciene, N.; Neniskyte, U.; Kaubrys, G.; Giedraitiene, N. Seroprevalence of neuronal antibodies in diseases mimicking autoimmune encephalitis. Sci. Rep. 2024, 14, 5352. [Google Scholar] [CrossRef]
  26. Müller-Jensen, L.; Zierold, S.; Versluis, J.M.; Boehmerle, W.; Huehnchen, P.; Endres, M.; Mohr, R.; Compter, A.; Blank, C.U.; Hagenacker, T.; et al. Dataset of a retrospective multicenter cohort study on characteristics of immune checkpoint inhibitor-induced encephalitis and comparison with HSV-1 and anti-LGI1 encephalitis. Data Brief. 2022, 45, 108649. [Google Scholar] [CrossRef] [PubMed]
  27. Granillo, A.; Le Maréchal, M.; Diaz-Arias, L.; Probasco, J.; Venkatesan, A.; Hasbun, R. Development and Validation of a Risk Score to Differentiate Viral and Autoimmune Encephalitis in Adults. Clin. Infect. Dis. 2023, 76, e1294–e1301. [Google Scholar] [CrossRef]
  28. Dubey, D.; Kothapalli, N.; McKeon, A.; Flanagan, E.P.; Lennon, V.A.; Klein, C.J.; Britton, J.W.; So, E.; Boeve, B.F.; Tillema, J.-M.; et al. Predictors of neural-specific autoantibodies and immunotherapy response in patients with cognitive dysfunction. J. Neuroimmunol. 2018, 323, 62–72. [Google Scholar] [CrossRef]
  29. Benoit, J.; Muñiz-Castrillo, S.; Vogrig, A.; Farina, A.; Pinto, A.-L.; Picard, G.; Rogemond, V.; Guery, D.; Alentorn, A.; Psimaras, D.; et al. Early-Stage Contactin-Associated Protein-like 2 Limbic Encephalitis. Neurol. Neuroimmunol. Neuroinflamm. 2023, 10, e200041. [Google Scholar] [CrossRef]
  30. van Sonderen, A.; Thijs, R.D.; Coenders, E.C.; Jiskoot, L.C.; Sanchez, E.; de Bruijn, M.A.; van Coevorden-Hameete, M.H.; Wirtz, P.W.; Schreurs, M.W.; Smitt, P.A.S.; et al. Anti-LGI1 encephalitis. Neurology 2016, 87, 1449–1456. [Google Scholar] [CrossRef]
  31. Budhram, A.; Sechi, E.; Flanagan, E.P.; Dubey, D.; Zekeridou, A.; Shah, S.S.; Gadoth, A.; Naddaf, E.; McKeon, A.; Pittock, S.J.; et al. Clinical spectrum of high-titre GAD65 antibodies. J. Neurol. Neurosurg. Psychiatry 2021, 92, 645–654. [Google Scholar] [CrossRef]
  32. Lee, W.-J.; Lee, H.-S.; Kim, D.-Y.; Lee, H.-S.; Moon, J.; Park, K.-I.; Lee, S.K.; Chu, K.; Lee, S.-T. Seronegative autoimmune encephalitis: Clinical characteristics and factors associated with outcomes. Brain 2022, 145, 3509–3521. [Google Scholar] [CrossRef]
  33. Dalmau, J.; Graus, F. Diagnostic criteria for autoimmune encephalitis: Utility and pitfalls for antibody-negative disease. Lancet Neurol. 2023, 22, 529–540. [Google Scholar] [CrossRef] [PubMed]
  34. Mojžišová, H.; Krýsl, D.; Hanzalová, J.; Dargvainiene, J.; Wandinger, K.-P.; Leypoldt, F.; Elišák, M.; Marusič, P. Antibody-Negative Autoimmune Encephalitis. Neurol. Neuroimmunol. Neuroinflamm. 2023, 10, e200170. [Google Scholar] [CrossRef] [PubMed]
  35. Kherbek, H.; Paramasivan, N.K.; Dasari, S.; Karsten, C.; Thakolwiboon, S.; Gilligan, M.; Knight, A.M.; LaFrance-Corey, R.G.; Losada, V.; McKeon, A.; et al. Exploring autoantigens in autoimmune limbic encephalitis using phage immunoprecipitation sequencing. J. Neurol. 2025, 272, 292. [Google Scholar] [CrossRef]
  36. Xiang, Y.; Zeng, C.; Liu, B.; Tan, W.; Wu, J.; Hu, X.; Han, Y.; Luo, Q.; Gong, J.; Liu, J.; et al. Deep Learning-Enabled Identification of Autoimmune Encephalitis on 3D Multi-Sequence MRI. J. Magn. Reson. Imaging 2022, 55, 1082–1092. [Google Scholar] [CrossRef]
  37. Musigmann, M.; Spiekers, C.; Stake, J.; Akkurt, B.H.; Mora, N.G.N.; Sartoretti, T.; Heindel, W.; Mannil, M. Detection of antibodies in suspected autoimmune encephalitis diseases using machine learning. Sci. Rep. 2025, 15, 10998. [Google Scholar] [CrossRef] [PubMed]
  38. Parekh, V.; Jacobs, M.A. Radiomics: A new application from established techniques. Expert. Rev. Precis. Med. Drug Dev. 2016, 1, 207–226. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Chord diagrams illustrating symptom interconnections in (a) limbic encephalopathy, (b) generalized encephalopathy, and (c) cerebellar-brainstem syndrome.
Figure 2. Pie charts depicting established clinical syndromes in (a) autoimmune, (b) infectious, and (c) viral encephalitis. The ‘Other’ category includes cases of encephalomyelitis, cerebral cortical encephalitis, CLIPPERS, diencephalitis, and encephalomyeloradiculoneuritis.
Figure 3. Receiver Operating Characteristic (ROC) curves comparing the performance of multiple machine learning models.
Figure 4. (a) SHAP beeswarm plot displaying the distribution and impact of individual feature values on the model output. Each dot represents a single instance’s SHAP value for a feature, colored by the feature value. (b) SHAP feature importance plot.
Figure 5. SHAP value–feature plots with LOWESS smoothing for CSF and serum biomarkers. Each panel displays SHAP values from the predictive model plotted against their respective feature values: (a) CSF cell count, (b) CSF protein, (c) serum CRP, and (d) CSF glucose. The blue line shows the LOWESS-smoothed trend. The vertical red line indicates the threshold where feature values begin to drive predictions toward autoimmune encephalitis diagnosis.
Figure 6. Variables most frequently identified as important by neurologists across all cases. CSF: cerebrospinal fluid, MRI: magnetic resonance imaging, DWI: diffusion-weighted imaging, EEG: electroencephalography.
Table 1. Distribution of autoimmune and infectious encephalitides cases.

| Associated Antibody (Autoimmune, n = 83) | n (%) | Associated Agent (Infectious, n = 150) | n (%) |
|---|---|---|---|
| Anti-LGI1 | 29 (34.9%) | Viral | 84 (56.0%) |
| Anti-NMDA | 9 (10.8%) | HSV-1/HSV-2 | 34 (22.7%) |
| Anti-AQP4 | 9 (10.8%) | Unidentified | 25 (16.7%) |
| Seronegative | 9 (10.8%) | VZV | 12 (8.0%) |
| Anti-Yo | 7 (8.4%) | TBEV | 10 (6.7%) |
| Anti-GAD65 | 4 (4.8%) | CMV | 1 (0.7%) |
| Anti-CASPR2 | 3 (3.6%) | EBV | 1 (0.7%) |
| Anti-Hu | 3 (3.6%) | Parvovirus B19 | 1 (0.7%) |
| Anti-AMPAR | 2 (2.4%) | Bacterial | 66 (44.0%) |
| Atypical | 2 (2.4%) | Unidentified | 31 (20.7%) |
| Anti-GABAB | 1 (1.2%) | Streptococcus spp. | 8 (5.3%) |
| Anti-KLHL11 | 1 (1.2%) | L. monocytogenes | 8 (5.3%) |
| Anti-GFAP | 1 (1.2%) | B. burgdorferi | 7 (4.7%) |
| Anti-Ri | 1 (1.2%) | Staphylococcus spp. | 5 (3.3%) |
| Anti-MOG | 1 (1.2%) | N. meningitidis | 3 (2.0%) |
| ANA | 1 (1.2%) | M. tuberculosis | 2 (1.3%) |
| | | H. influenzae | 1 (0.7%) |
| | | T. pallidum | 1 (0.7%) |
NMDA: N-methyl-D-aspartate receptor; LGI1: leucine-rich glioma inactivated 1; CASPR2: contactin-associated protein-like 2; AMPAR: α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor; AQP4: aquaporin-4; MOG: myelin oligodendrocyte glycoprotein; GAD65: glutamic acid decarboxylase 65; KLHL11: Kelch-like protein 11; GFAP: glial fibrillary acidic protein; GABAB: gamma-aminobutyric acid type B receptor; ANA: anti-nuclear antibodies; VZV: varicella zoster virus; HSV: herpes simplex virus; TBEV: tick-borne encephalitis virus; CMV: cytomegalovirus; EBV: Epstein–Barr virus. One anti-Hu case overlapped with anti-CV2 antibodies.
Table 2. Comparison of clinical features, laboratory data, EEG findings, and MRI abnormalities among patients with autoimmune, infectious, and viral encephalitis.

| Variable | Autoimmune (n = 83) | Infectious (n = 150) | Viral (n = 84) | p-Value * | p-Value ** |
|---|---|---|---|---|---|
| Age (years), median (IQR) | 59 (41–68.5) | 46.5 (28–63) | 54 (34.25–66) | 0.0108 | 0.2032 |
| Sex (male), n (%) | 38 (45.8%) | 88 (58.7%) | 49 (58.3%) | 0.0588 | 0.1045 |
| Presenting symptoms | | | | | |
| Headache | 5 (6.0%) | 69 (46.0%) | 42 (50.0%) | <0.0001 | <0.0001 |
| Disorientation | 26 (31.3%) | 45 (30.0%) | 28 (33.3%) | 0.8333 | 0.7815 |
| Gait disturbance | 17 (20.5%) | 11 (7.3%) | 7 (8.3%) | 0.0031 | 0.0252 |
| Sleep impairment | 7 (8.4%) | 3 (2.0%) | 2 (2.4%) | 0.0370 | 0.0989 |
| Behavioral changes | 28 (33.7%) | 13 (8.7%) | 12 (14.3%) | <0.0001 | 0.0032 |
| Balance disorder | 17 (20.5%) | 10 (6.7%) | 7 (8.3%) | 0.0016 | 0.0252 |
| Fever | 10 (12.0%) | 110 (73.3%) | 62 (73.8%) | <0.0001 | <0.0001 |
| Impaired consciousness | 22 (26.5%) | 35 (23.3%) | 23 (27.4%) | 0.5895 | 0.8986 |
| Seizures | 42 (50.6%) | 31 (20.7%) | 20 (23.8%) | <0.0001 | 0.0003 |
| Paresthesia | 9 (10.8%) | 12 (8.0%) | 6 (7.1%) | 0.4680 | 0.4030 |
| GI symptoms | 4 (4.8%) | 39 (26.0%) | 22 (26.2%) | <0.0001 | 0.0001 |
| Dizziness | 22 (26.5%) | 14 (9.3%) | 8 (9.5%) | 0.0005 | 0.0043 |
| Ataxia | 27 (32.5%) | 62 (41.3%) | 37 (44.0%) | 0.1854 | 0.1258 |
| Nystagmus | 21 (25.3%) | 13 (8.7%) | 6 (7.1%) | 0.0005 | 0.0014 |
| Vision impairment | 15 (18.1%) | 9 (6.0%) | 4 (4.8%) | 0.0037 | 0.0068 |
| Hearing impairment | 3 (3.6%) | 7 (4.7%) | 2 (2.4%) | 0.7043 | 0.6818 |
| Somnolence | 11 (13.3%) | 20 (13.3%) | 10 (11.9%) | 0.9862 | 0.7928 |
| Tremor | 7 (8.4%) | 22 (14.7%) | 12 (14.3%) | 0.1675 | 0.2337 |
| Speech disturbance | 15 (18.1%) | 37 (24.7%) | 24 (28.6%) | 0.2470 | 0.1088 |
| Memory impairment | 37 (44.6%) | 19 (12.7%) | 16 (19.0%) | <0.0001 | 0.0004 |
| Attention disorder | 14 (16.9%) | 2 (1.3%) | 2 (2.4%) | <0.0001 | 0.0015 |
| Paresis/plegia | 22 (26.5%) | 47 (31.3%) | 29 (34.5%) | 0.4396 | 0.2607 |
| Hallucinations | 9 (10.8%) | 2 (1.3%) | 2 (2.4%) | 0.0019 | 0.0275 |
| Emotional changes | 27 (32.5%) | 5 (3.3%) | 5 (6.0%) | <0.0001 | <0.0001 |
| Rash | 1 (1.2%) | 14 (9.3%) | 8 (9.5%) | 0.0155 | 0.0173 |
| Pelvic organ dysfunction | 10 (12.0%) | 5 (3.3%) | 4 (4.8%) | 0.0094 | 0.0894 |
| Laboratory data | | | | | |
| WBC (×10⁹/L, serum) | 7.88 (5.84–10.30) | 8.98 (6.50–12.85) | 8.38 (6.39–10.22) | 0.0451 | 0.5000 |
| CRP (mg/L, serum) | 2.12 (0.60–5.75) | 11.40 (2.00–85.00) | 4.15 (1.06–21.31) | <0.0001 | 0.0124 |
| Cell count (cells/μL, CSF) | 5 (2–18.75) | 121 (43.75–410.75) | 61 (29.75–126.75) | <0.0001 | <0.0001 |
| Protein (g/L, CSF) | 0.46 (0.32–0.71) | 1.02 (0.60–2.07) | 0.73 (0.49–1.15) | <0.0001 | <0.0001 |
| Glucose (mmol/L, CSF) | 3.51 (3.31–4.03) | 3.00 (2.45–3.71) | 3.24 (2.82–3.94) | 0.0001 | 0.0300 |
| Oligoclonal bands (CSF) | 15/44 (34.1%) | 14/34 (41.2%) | 10/26 (38.5%) | 0.5208 | 0.7123 |
| EEG data | | | | | |
| Diffuse slowing/non-epileptic abnormalities | 31/65 (47.7%) | 49/60 (81.7%) | 33/41 (80.5%) | <0.0001 | 0.0008 |
| Epileptic abnormalities | 21/65 (32.3%) | 15/60 (25.0%) | 12/41 (29.3%) | 0.3674 | 0.7421 |
| MRI abnormalities | | | | | |
| White matter lesions | 4 (4.8%) | 22 (14.7%) | 11 (13.1%) | 0.0294 | 0.0762 |
| Basal ganglia | 6 (4.8%) | 13 (8.7%) | 3 (3.6%) | 0.7799 | 0.3175 |
| Corpus callosum | 1 (1.2%) | 7 (4.7%) | 2 (2.4%) | 0.2684 | 0.5966 |
| Pontine | 2 (2.4%) | 4 (2.7%) | 3 (3.6%) | 0.9515 | 0.7004 |
| Midbrain | 2 (2.4%) | 4 (2.7%) | 3 (3.6%) | 0.9515 | 0.7004 |
| Thalamus | 5 (6.0%) | 10 (6.7%) | 6 (7.1%) | 0.9218 | 0.8360 |
| Corona radiata | 1 (1.2%) | 8 (5.3%) | 3 (3.6%) | 0.1690 | 0.6211 |
| Cortical edema | 4 (4.8%) | 3 (2.0%) | 3 (3.6%) | 0.2375 | 0.7134 |
| Cerebellum | 2 (2.4%) | 6 (4.0%) | 3 (3.6%) | 0.7178 | 0.7004 |
| Limbic system | 29 (34.9%) | 42 (28.0%) | 30 (35.7%) | 0.1756 | 0.8949 |
| Contrast enhancement | 10/74 (13.5%) | 53/113 (46.9%) | 24/71 (33.8%) | <0.0001 | 0.0039 |
| Edema | 9 (10.8%) | 37 (24.7%) | 25 (29.8%) | 0.0172 | 0.0039 |
| Restriction on DWI | 9 (10.8%) | 35 (23.3%) | 14 (16.7%) | 0.0292 | 0.3337 |
IQR: interquartile range; GI: gastrointestinal; WBC: white blood cells; CRP: c-reactive protein; CSF: cerebrospinal fluid; EEG: electroencephalography; MRI: magnetic resonance imaging; DWI: diffusion-weighted imaging. The infectious group includes both bacterial and viral cases; * p-values represent comparisons between the autoimmune and infectious groups; ** p-values represent comparisons between the autoimmune and viral groups.
Table 3. Features used by humans for diagnostic decisions and features selected for AI modeling after RFECV.

| Features | Used by Humans | Selected for AI Model (After RFECV) |
|---|---|---|
| Demographics and clinical | X | |
| Age | X | X |
| Sex_male | X | X |
| Rash | X | |
| Headache | X | X |
| Fatigue | X | |
| Sleep impairment | X | |
| Gait disturbance | X | |
| Behavioral changes | X | |
| Shivering | X | |
| Balance disorder | X | X |
| Catatonia | X | |
| Fever | X | X |
| Consciousness disturbance | X | X |
| Joint/muscle pain | X | |
| Dyspnea | X | |
| Seizures | X | X |
| Drooling | X | |
| Cough | X | |
| Myoclonus | X | |
| Sore throat | X | |
| Disorientation | X | |
| Paresthesia | X | |
| Fainting | X | |
| GI symptoms | X | |
| Back pain | X | |
| Chills | X | |
| Dizziness | X | |
| Ataxia | X | X |
| Nystagmus | X | X |
| Visual impairment | X | X |
| Hearing impairment | X | |
| Lethargy | X | X |
| Somnolence | X | |
| Tremor | X | |
| Delirium | X | |
| Dysphagia | X | |
| Speech disorder | X | |
| Memory impairment | X | X |
| Attention disorder | X | X |
| Paresis | X | X |
| Hallucinations | X | |
| Emotional changes | X | X |
| Olfactory disturbance | X | |
| Pelvic organ dysfunction | X | X |
| Laboratory features | X | |
| WBC_serum | X | X |
| CRP_serum | X | X |
| Cell count_CSF | X | X |
| Protein_CSF | X | X |
| Glucose_CSF | X | X |
| Oligoclonal bands_CSF | X | |
| EEG | | |
| Diffuse slowing/non-epileptic abnormalities | X | X |
| Focal epileptic abnormalities | X | |
| MRI features | | |
| Leukoencephalopathy | X | X |
| Basal ganglia | X | X |
| Cerebellar peduncles | X | |
| Corpus callosum | X | |
| Pontine | X | |
| Midbrain | X | |
| Thalamus | X | |
| Cortical edema | X | |
| Corona radiata | X | |
| Cerebellum | X | |
| Limbic system | X | |
| Enhancement_MRI | X | |
| Enhancement_leptomeningeal | X | |
| Enhancement_pachymeningeal | X | |
| Enhancement_linear | X | |
| Restricted diffusion_DWI | X | X |
| Edema_MRI | X | X |
RFECV: recursive feature elimination with cross-validation; EEG: electroencephalography; MRI: magnetic resonance imaging; DWI: diffusion-weighted imaging; WBC: white blood cell; CRP: C-reactive protein; CSF: cerebrospinal fluid; X: feature considered for inclusion.
Table 4. Performance metrics of machine learning models for the diagnosis of encephalitis.

| Model | Accuracy | Precision | Sensitivity | Specificity | F1-Score | AUROC |
|---|---|---|---|---|---|---|
| Random Forest | 0.971 | 1.000 | 0.920 | 1.000 | 0.958 | 0.966 |
| XGBoost | 0.943 | 0.957 | 0.880 | 0.978 | 0.917 | 0.940 |
| LightGBM | 0.943 | 0.957 | 0.880 | 0.978 | 0.917 | 0.949 |
| Logistic Regression | 0.943 | 0.920 | 0.920 | 0.956 | 0.920 | 0.964 |
| Naïve Bayes | 0.886 | 0.840 | 0.840 | 0.911 | 0.840 | 0.880 |
| K-Nearest Neighbors | 0.871 | 0.833 | 0.800 | 0.911 | 0.816 | 0.865 |
Table 5. Performance comparison of the AI model and human evaluators in classifying encephalitis cases.

| Evaluator | Accuracy | Precision | Sensitivity | Specificity | F1-Score | AUROC |
|---|---|---|---|---|---|---|
| AI model | 0.971 | 1.000 | 0.920 | 1.000 | 0.958 | 0.966 |
| Neurologist in training 1 | 0.900 | 0.846 | 0.880 | 0.911 | 0.863 | 0.896 |
| Neurologist in training 2 | 0.800 | 0.677 | 0.840 | 0.778 | 0.750 | 0.809 |
| Neurologist in training 3 | 0.757 | 0.618 | 0.840 | 0.711 | 0.712 | 0.776 |
| Attending physician 1 | 0.843 | 0.733 | 0.880 | 0.822 | 0.800 | 0.851 |
| Attending physician 2 | 0.829 | 0.933 | 0.560 | 0.978 | 0.700 | 0.769 |
| Attending physician 3 | 0.871 | 0.864 | 0.760 | 0.933 | 0.809 | 0.847 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
