
Artificial Intelligence and Suicide Prevention: A Systematic Review of Machine Learning Investigations

Stanford Suicide Prevention Research Laboratory, Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94304, USA
Department of Psychology, National University of Ireland, Galway, Ireland
Department of Medicine, Center for Biomedical Informatics Research, Stanford University School of Medicine, Stanford, CA 94304, USA
Stanford Center for Clinical and Translational Research and Education (Spectrum), Stanford University, Stanford, CA 94304, USA
Facebook, Menlo Park, CA 94025, USA
Yale University School of Medicine, New Haven, CT 06510, USA
Author to whom correspondence should be addressed.
Indicates Co-Senior Authorship.
Int. J. Environ. Res. Public Health 2020, 17(16), 5929;
Received: 20 July 2020 / Accepted: 28 July 2020 / Published: 15 August 2020
(This article belongs to the Special Issue Suicidal Behavior as a Complex Dynamical System)


Suicide is a leading cause of death that defies prediction and challenges prevention efforts worldwide. Artificial intelligence (AI) and machine learning (ML) have emerged as a means of investigating large datasets to enhance risk detection. A systematic review of ML investigations evaluating suicidal behaviors was conducted using PubMed/MEDLINE, PsycINFO, Web of Science, and EMBASE, employing search strings and MeSH terms relevant to suicide and AI. Database searches were supplemented by hand-search techniques and Google Scholar. Inclusion criteria were: (1) journal article, available in English; (2) original investigation; (3) employment of AI/ML; (4) evaluation of a suicide risk outcome. N = 594 records were identified based on abstract search, along with 25 hand-searched reports. N = 461 reports remained after duplicates were removed, and n = 316 were excluded after abstract screening. Of n = 149 full-text articles assessed for eligibility, n = 87 were included for quantitative synthesis, grouped according to suicide behavior outcome. Reports varied widely in methodology and outcomes. Results suggest high levels of risk classification accuracy (>90%) and area under the curve (AUC) in the prediction of suicidal behaviors. We report key findings and central limitations in the use of AI/ML frameworks to guide additional research, which holds the potential to impact suicide prevention on a broad scale.

1. Introduction

Suicide is a complex but preventable public health problem that challenges prediction due to its transdiagnostic, yet rare, occurrence at the population level. Beyond the inestimable costs at the individual, family, and community level, suicide deaths currently outnumber deaths by homicide and motor vehicle collision [1,2], representing a public health emergency and resulting in an estimated cost of $93.5 billion to the U.S. economy [3]. Despite unprecedented strategies to advance awareness and treatment [4,5,6], suicide rates have remained intractable over time and have recently increased in some cases, rising by approximately 24% in the U.S. (10.5 to 13/100,000) from 1999 to 2014 [7]. Alarmingly, the majority of suicide decedents consult with their physician in the days and weeks prior to death [8,9], suggesting missed detection of risk despite opportunity for intervention. Moreover, known risk factors currently show relatively poor sensitivity and clinical utility in predicting suicide occurrence [10,11]. Recent research suggests that youth may disclose risk factors for suicide on Facebook or Twitter that they fail to disclose to physicians, and that such disclosure may vary according to age [12]. As a transdiagnostic outcome of medical illness, suicide rates are impacted by a unique interplay of risk factors that may change substantially over time. This includes differences in suicide risk according to age, gender, and ethnicity, which may furthermore vary according to region, suicide method, and access to health care [13,14]. This underscores the need for rigorously designed, population-based investigations of suicide risk, which may face significant cost, clinical, and infrastructural barriers.
As a form of artificial intelligence (AI), machine learning (ML) methods enable computers to learn advanced classifiers that may improve the accuracy of prediction using large-scale datasets. Given challenges inherent in traditional research methods, including cost and clinical barriers, risk of bias, and restricted generalizability, researchers have begun leveraging large datasets using advanced predictive modeling techniques [15,16,17,18]. This includes the application of ML to electronic medical records (EMR) within modern clinical informatics to advance risk prediction [19]. Risk models learned from such data have been developed to predict preterm infant morbidity [20], early warning signs of sepsis [21], and risk for rare outcomes (e.g., first-onset cancers, congestive heart failure, and schizophrenia) with a high degree (i.e., >90%) of accuracy [22]. Such studies indicate that ML approaches can derive appropriate variable weights to produce models that outperform corresponding expert-derived scoring systems. On an organizational level, such models have also been used to predict demand for emergency department (ED) beds and elective surgery case volume to inform hospital staffing decisions [23,24,25] and enhance clinical care.
As such, the use of artificial intelligence and machine learning offers new possibilities to guide risk prediction and advance suicide prevention frameworks. Though recent studies yield promising findings [15,26,27,28,29], ML investigations for suicide prevention span diverse medical and computer science fields, challenging ease of review, dissemination, and impact. We therefore conducted a systematic review of empirical reports in this area, with a primary focus on the use of AI in suicide prevention. Our aim was to identify and summarize original reports employing an AI/ML framework to predict suicidal behaviors as a risk outcome.

2. Methods

2.1. Search Strategy

A web-based systematic literature search was performed for articles published from inception through 30 November 2018 on PubMed/MEDLINE, EMBASE, PsycINFO, and Web of Science, using search strings pertaining to suicide and ML. Database searches were supplemented by hand-search techniques.
Key words and designated filters were applied for each search engine according to PRISMA guidelines:
PubMed: ("Artificial Intelligence"[Mesh] OR machine learning[tw] OR natural language processing[tw] OR artificial intelligence[tw]) AND ("Suicide"[Mesh] OR suicid*[tw])
EMBASE: ('artificial intelligence':ti,ab,kw OR 'machine learning':ti,ab,kw OR 'natural language processing':ti,ab,kw) AND ('mood disorder':ti,ab,kw OR 'depression':ti,ab,kw OR 'bipolar disorder':ti,ab,kw OR 'suicidal behavior':ti,ab,kw OR 'suicide':ti,ab,kw OR suicid*)
Web of Science: TS = ("artificial intelligence" OR "machine learning" OR "natural language processing") AND TS = ("mood disorder" OR depress* OR bipolar OR suicid*)
PsycINFO: (("artificial intelligence" or "machine learning" or "natural language processing") and ("mood disorder" or bipolar or depress* or suicid*)).ab,hw,id,ot,ti.

2.2. Study Selection

This review was performed according to the EQUATOR/PRISMA guidelines (Enhancing the Quality and Transparency of Health Research/Preferred Reporting Items for Systematic Reviews and Meta-Analyses), which serve as an evidence-based protocol for study selection and reporting in systematic reviews and meta-analyses [30]. Given that suicidal behaviors exist across all ages and diverse medical conditions, diagnostic and age-related variables were not a basis for exclusion.

2.3. Data Collection Process

Reviewers (R.A.B., A.M.H.) independently reviewed abstracts, followed by full-text articles. A third reviewer (R.M.) made a final decision if there was a lack of consensus. Source documents were assessed according to the following inclusion criteria: (1) journal article (available in English); (2) original investigation (non-review/commentary); (3) employment of AI/ML methodology; and (4) evaluation of a suicide risk outcome (i.e., defined using CDC-derived guidelines [31] for suicidal self-directed violence; non-suicidal self-injury was excluded), grouped and labeled by suicidal behavior type (e.g., suicide ideation, suicide attempts, suicide death, or other). Studies identified by the above search strategy were managed using EndNote X8. Reports failing to meet inclusion criteria were systematically excluded, with reasons. A PRISMA flow chart [30] was created to graphically depict inclusion/exclusion of studies by level of (1) identification, (2) screening, (3) eligibility, and (4) inclusion, and reports were coded with established quality ratings [32]. Reports were further grouped according to suicidal behavior type, sample characteristics, and AI/ML methodology. The latter included use of supervised learning, which aims to predict outcomes based on a set of input values used to train a classifier; in unsupervised learning, by contrast, no labels are provided, and the aim is instead to describe data patterns, often by way of clustering, based on input measures. Studies were also coded for use of natural language processing (NLP), in which a computer automatically or semi-automatically processes human-generated language, and summarized by other study characteristics, including evaluation of biological markers of suicide risk.
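The supervised/unsupervised distinction used for coding above can be made concrete with a minimal sketch. The data, the nearest-centroid classifier, and the two-cluster k-means step below are hypothetical stand-ins for illustration only, not the methods of any reviewed study.

```python
# Minimal sketch of supervised vs. unsupervised learning (hypothetical data).

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Supervised: labels are provided, and a classifier is trained to predict them.
def train_nearest_centroid(X, y):
    classes = sorted(set(y))
    return {c: centroid([x for x, lab in zip(X, y) if lab == c]) for c in classes}

def predict(model, x):
    return min(model, key=lambda c: dist2(model[c], x))

# Unsupervised: no labels; the aim is to describe structure (here, 2 clusters).
def kmeans_two(X, iters=10):
    c0, c1 = X[0], X[-1]
    for _ in range(iters):
        g0 = [x for x in X if dist2(x, c0) <= dist2(x, c1)]
        g1 = [x for x in X if dist2(x, c0) > dist2(x, c1)]
        c0, c1 = centroid(g0), centroid(g1)
    return c0, c1

X = [(0.1, 0.2), (0.2, 0.1), (0.9, 1.0), (1.0, 0.8)]
y = [0, 0, 1, 1]
model = train_nearest_centroid(X, y)
print(predict(model, (0.15, 0.15)))  # -> 0
```

The same points are handled both ways: the classifier learns from the provided labels `y`, whereas the clustering step recovers two groups from the inputs alone.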

2.4. Data Analysis

Descriptive analyses were employed to analyze study findings by key design characteristics according to suicide risk outcome, where ML parameters (e.g., area under the curve (AUC), accuracy, sensitivity, specificity) were reported. The collated data were not amenable to statistical synthesis, and meta-analysis was therefore not possible.

3. Results

3.1. Data Extraction

A total of n = 594 records were identified according to the above search methods. An additional 25 records were identified through hand-search techniques and Google Scholar. Of n = 461 unique articles, a further n = 316 were excluded at abstract screening. Full-text review was performed for n = 149 articles according to study inclusion criteria. Forty-nine reports failed to meet stated inclusion criteria, due to failure to: represent an original report (i.e., vs. a review/commentary), employ AI/ML methodology, or evaluate a suicide risk outcome (i.e., according to CDC-defined suicidal behaviors). N = 87 studies were included in the quantitative synthesis; a subsample of reports (n = 13) met criteria as a subset of this review (see Figure 1). These evaluated emotional content among suicide decedent notes using natural language processing (NLP) and will be discussed separately. For reports meeting primary review inclusion, this represented an aggregate total of n = 5,986,238 patients. Sample size was unreported or unavailable in n = 7 studies.

3.2. Broad Outcome Groupings

For broad outcome groupings in the quantitative synthesis, a total of n = 42 reports assessed suicide attempt (n = 28) or suicide death (n = 14) as a primary outcome. A total of n = 45 studies evaluated suicidal ideation (i.e., history and current symptoms) (n = 9), multiple risk outcomes (n = 18), and other-social media (n = 10) or other-undifferentiated (n = 8) risk outcomes.

3.3. ML Techniques and Learning Methods

ML methodology varied widely across reports and included both supervised and unsupervised learning algorithms. The majority of studies employed supervised learning techniques, including ensemble learning methods (especially random forests), naïve Bayes classification, decision trees, logistic/least-squares regression, and support vector machines (SVM). In comparison, n = 7 studies used unsupervised learning techniques, including clustering algorithms, neural networks, self-organizing maps (SOM), principal component analysis (PCA), and decision trees. Only three studies used both supervised and unsupervised learning methods. Cross-validation techniques, or methods for splitting the data into training and test sets for model performance testing, were variably reported, with few investigations using distinct datasets, separated in time, for training and test models. (See Table 1 for all studies (n = 87) and Table 2 for the subset of reports (n = 13) in this review.)
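The cross-validation idea described above can be sketched briefly: the data are split into folds, a model is fit on the training folds, and performance is measured on each held-out fold. The labels and the majority-class baseline "model" below are hypothetical; the imbalanced labels also show why raw accuracy can look deceptively high for rare outcomes such as suicidal behavior.

```python
# Sketch of k-fold cross-validation with a majority-class baseline
# (hypothetical data, not any reviewed study's pipeline).
from collections import Counter

def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k contiguous folds."""
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        train = [j for j in range(n) if j not in test]
        yield train, test

def majority_class(labels):
    return Counter(labels).most_common(1)[0][0]

y = [0, 0, 1, 0, 0, 1, 0, 0, 0, 1]  # imbalanced, as rare outcomes are
accs = []
for train, test in k_fold_indices(len(y), 5):
    pred = majority_class([y[j] for j in train])  # always predicts 0 here
    accs.append(sum(pred == y[j] for j in test) / len(test))
print(sum(accs) / len(accs))  # -> 0.7, despite never predicting a positive
```

A baseline that never flags a positive case still attains 70% cross-validated accuracy on these labels, which is why AUC, sensitivity, and specificity matter alongside accuracy.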

3.4. Design Characteristics

Broadly grouped, two general approaches were visible in study design methodology: (1) investigations designed to explore the accuracy of diagnostic classification (i.e., using ML techniques and a large dataset or number of variables) to identify those at risk by classifying a binary suicide risk outcome (n = 65) (i.e., classification studies); or (2) investigations evaluating conceptual models of suicide risk, which ranged from PCA to other clustering algorithm methods (e.g., hidden layers, pattern discovery). Prospective designs were used in a small number of studies (n = 21), whereas the majority of investigations used a cross-sectional study design. Several reports (n = 12) utilized a population-based or epidemiologic design, and over half included multi-site investigations. In general, according to the Oxford Centre for Evidence-Based Medicine protocol [32], articles ranged between ratings of 2–4, with most represented by a 3 rating. A 2 rating describes well-designed, controlled trials without randomization or prospective comparative cohort trials; a 3 rating refers to studies that employ case controls or retrospective cohort investigations; and a 4 rating represents case series studies, with or without intervention, or use of a cross-sectional design. No randomized controlled, adequately powered trials (i.e., a 1 rating) were identified in this review.

3.5. Sample Size and High Dimensional Data

Across all investigations, samples ranged in size from 55 to 975,057 participants (M = 74,815, SD = 217,839, Mdn = 761). While the majority of reports harnessed big data, several studies (n = 6) investigated high-dimensional datasets with small sample sizes, which may increase the risk of overfitting. These studies evaluated a large volume of variables (i.e., >400), using smaller samples (i.e., ranging from n = 34–135 participants) to classify and detect differences in risk outcome.

3.6. Sample Characteristics

Samples varied significantly in ages studied, with the majority evaluating adults, and a smaller proportion investigating pediatric (n = 26), geriatric (n = 15), or all-age (n = 8) samples. A total of n = 16 studies evaluated risk among military personnel or veterans, and across all reports, the use of a clinical sample was observed in the majority of cases. These included participants recruited from high-risk or triage settings, such as the emergency department (ED) (n = 15). Reports demonstrated a primarily transdiagnostic focus, with few focusing on risk among specific psychiatric conditions, such as mood disorders and schizophrenia (n = 13). Other studies (n = 10) included social media investigations without diagnostic specifiers or assessed an undifferentiated outcome of suicide risk (i.e., suicide risk stratification and clinical decision-making prediction; suicide gene marker detection; human vs. machine learning classification testing, etc.). Finally, the number of studies utilizing electronic medical records (EMR) or administrative chart data was high, particularly in comparison with those using epidemiologic surveys (n = 12) or social media user data (n = 10). Use of a convenience sample or re-evaluation of archival datasets using ML techniques was common in comparison with a priori-designed studies.

3.7. Natural Language Processing and Biological Markers of Risk

Twenty-nine investigations employed natural language processing (NLP) in association with suicidal behaviors. These included investigations evaluating acoustic features of speech to identify risk within emergency department settings (n = 2), text-based applications (n = 1), or social media user data or posts (n = 15). A few such studies generated a word map to note word frequency in association with risk within EMR, medical discharge notes, or social media posts. A small number of ML investigations (n = 12) evaluated a biological marker of risk, such as plasma and blood metabolites (n = 2), genes (n = 8), and neuroimaging (n = 1), to predict risk for suicidal behaviors and hospitalization.
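A toy version of the word-frequency "word map" approach noted above can be sketched with basic NLP preprocessing. The notes below are invented examples, not real patient data or any study's corpus.

```python
# Word-frequency counting over clinical-style text (invented example notes).
import re
from collections import Counter

notes = [
    "Patient reports feeling hopeless and alone.",
    "Denies suicidal ideation; reports improved sleep.",
    "Reports hopeless mood and poor sleep.",
]

def tokenize(text):
    # lowercase and keep alphabetic tokens only
    return re.findall(r"[a-z']+", text.lower())

counts = Counter(w for note in notes for w in tokenize(note))
print(counts.most_common(3))  # most frequent terms across notes
```

In practice, studies generating word maps would apply the same counting idea at scale, typically after stop-word removal and with frequencies compared between risk and non-risk documents.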

3.8. Timeframe of Assessment and Predictive Modeling

Timeframe of risk detection was variable, ranging from the next 24 h to lifetime assessments of suicide outcomes. Where reported, the majority of studies (n = 15) investigated suicide risk prediction over a monthly timeframe, ranging from 1 to 24 months (M = 9.31). Two reports investigated risk over an acute timeframe (24–72 h), and n = 12 studies evaluated lifetime risk. Four investigations reported multiple timeframes of risk, whereas n = 21 failed to report or specify precise observation or time-at-risk periods.

3.9. Accuracy, Area Under the Curve, Positive Predictive Value, Sensitivity/Specificity

Of N = 65 classification studies, a total of n = 41 investigations reported area under the curve (AUC), or provided sufficient information for this value to be derived (n = 4 cases) (M = 0.814, SD = 0.110, Mdn = 0.820, range = 0.61–0.99). In comparison, n = 29 investigations reported accuracy (M = 0.813, SD = 0.123, Mdn = 0.840, range = 0.47–0.99). Other metrics, such as positive predictive value (PPV), were infrequently reported; PPV was reported in n = 6 cases (SD = 0.30, Mdn = 0.88, range = 0.18–1.00). In total, n = 31 studies reported sensitivity (M = 0.749, SD = 0.195, Mdn = 0.790, range = 0.22–1.00), and n = 32 reported specificity (M = 0.870, SD = 0.156, Mdn = 0.870, range = 0.57–1.00). According to an exploratory one-way analysis of variance (ANOVA) evaluating non-weighted accuracy and AUC values across reports, significant mean differences were not detected according to type of suicide-related outcome for highest accuracy (F(4, 15) = 1.98, p = 0.149) or AUC (F(5, 19) = 1.52, p = 0.231). See Figure 2 and Figure 3.
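The relationship among the metrics summarized above can be sketched directly: sensitivity and specificity are computed at a single decision threshold, whereas AUC is threshold-free and equals the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one. The scores and labels below are invented for illustration.

```python
# Threshold metrics vs. AUC on invented risk scores.

def sens_spec(scores, labels, threshold):
    tp = sum(s >= threshold and l == 1 for s, l in zip(scores, labels))
    fn = sum(s < threshold and l == 1 for s, l in zip(scores, labels))
    tn = sum(s < threshold and l == 0 for s, l in zip(scores, labels))
    fp = sum(s >= threshold and l == 0 for s, l in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    # rank-based AUC: fraction of positive/negative pairs ranked correctly
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(sens_spec(scores, labels, 0.5))  # sensitivity, specificity at cutoff 0.5
print(auc(scores, labels))             # 8/9, about 0.89
```

Note that PPV, unlike the quantities above, also depends on outcome prevalence, which is one reason it is especially informative (and especially important to report) for rare outcomes.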

4. Discussion

Eighty-seven reports were identified in this systematic review, which included a subset of investigations evaluating emotional sentiment among suicide notes using ML methods. Across reports meeting primary inclusion criteria, the majority of studies examined risk for suicide attempts, followed by death by suicide, suicidal ideation, and multiple risk outcomes. A small proportion of studies predicted risk of an outcome falling between these groupings, including those examining an undifferentiated outcome (e.g., unspecified suicidal behavior, or "suicidality") or harnessing social media data (e.g., suicide-related risk by Twitter or internet post content) [85,86,87,88,89,90,91,92]. Based on this review, the use of AI/ML methods for suicide risk prediction is a burgeoning area of inquiry, reflected by the diversity of fields represented and the pace of publications. Though 1999 marked the earliest publication, nearly half of reports were published in the past three years. This suggests an area of rapid growth at a nascent stage of investigation, presenting opportunities to critically guide the field forward and address key gaps in the extant literature.
Machine learning methods varied substantially across studies and ranged significantly in rigor and model testing. Supervised learning was most commonly used compared to unsupervised learning techniques, and few studies used both methods. In general, exploratory investigations were overrepresented, and replication or application of a predictive model within a new setting or sample was rare. Several reports tested replication in a new cohort within the same setting, or used a multiple-wave sampling approach [15,26,34,97]. Methodologically, these represent critical areas of importance for future studies and warrant replication. Classification studies were most commonly observed in this review, and excellent accuracy and area under the curve (AUC) values were observed, despite considerable differences in design, methodology, sample, and learning methods. AUC was the model performance metric most frequently reported, whereas accuracy, sensitivity, and specificity were reported in less than a third of reports. Across broad outcome groupings, underreporting and low cell counts challenge interpretation and adequately powered comparisons.
Regarding generalizability, reports reflected a transdiagnostic focus and primarily assessed adult participants or patient records. A smaller number of reports examined high-risk, pediatric, or geriatric samples, as well as military veterans [15,26,33,34,53,72,97,112]. These highlight areas of elevated need and align with prioritized strategies and nationally-directed initiatives for technology innovation in suicide prevention [5,6,130]. Investigations predominantly evaluated clinical samples or emergency settings, consistent with increased risk post-hospitalization [15,26,34,43,68,131]. Regarding constraints, archival datasets were common, with fewer studies employing prospective data elements [15,34,55,106]. Though convenience samples present inherent limitations, this has likewise been emphasized as a relative strength, insofar as ML may be applied to large-scale datasets that, as yet, remain unstudied [34]. This highlights an opportunity for re-analysis of existing datasets to advance early detection and prevention methods, where prospective samples warrant prioritization. Next, several reports used epidemiologic surveys within nationally-representative samples [60,101,106]. Such surveys, however, frequently used a single-item assessment of suicidal behaviors, which may misclassify risk [132,133]. In general, suicide outcomes were variably defined, and validated symptom instruments varied significantly across reports [26,56,57,59,72,78,86,95,102,103]. This aligns with calls for increased uniformity in the assessment of suicidal behaviors to enhance research comparisons and improve surveillance [31]. We recommend that such calls be applied to the study of suicidal behaviors across ML investigations to enhance uniformity, comparison, and opportunity to improve risk prediction frameworks.
In several cases, the development and testing of clinical prediction models were evaluated against traditional statistics [6,55], showing superiority of ML in the classification of risk. Reports likewise compared ML-guided decision tree models to clinician-based predictions (i.e., of hospitalization following a suicide attempt (SA) or predicting likelihood of a suicide risk outcome) to guide triage [6,55,59,76,78,96]. Importantly, ML-guided risk stratification models outperformed those relying on clinician-based prediction methods alone. This included model testing within acute time frames of risk (i.e., 3–6 months) [55,96]—in one case, with performance enhanced three-fold using ML risk stratification [96]. Such findings suggest that advanced data analytic methods, combined with computer-guided screening, may augment clinical decision-making. Replication is warranted, including how such models may guide triage to optimize patient care with minimal time burden to providers.
Given that the majority of suicide decedents consult with their physician prior to death [8,9], such methods hold promise to enhance early detection and opportunity for rapid intervention. This may be particularly relevant to emergency settings, where medical records have been compared with manual coding of suicide attempt encounters using machine learning, with promising results [43]. This aligns with research suggesting that brief, low-risk suicide prevention strategies targeting emergency settings are both efficacious and cost-effective [134,135,136]. The way suicidal behaviors are coded within EMR may likewise pose challenges to risk detection. Anderson and colleagues [102] used ML to evaluate correspondence between patient notes and ICD/E-Codes (International Classification of Disease/ICD External Cause of Injury Code) for suicidal behaviors, based on text-mining of clinical discharge notes in a sample of n = 15,761 patient records. They observed a low level of correspondence, with only 3% of encounters coded for suicidal ideation and 19% coded for suicide attempts. This suggests need for considerable caution when interpreting suicide risk using ICD/E-Codes from EMR data alone, in comparison with discharge notes.
A subset of studies investigated NLP as a novel area of inquiry in select settings or populations. Pestian et al. [26] investigated NLP (i.e., key words and vocal characteristics) in structured and free-text speech responses to accurately distinguish (96.6% accuracy) n = 60 youth presenting to an ED for suicide risk (i.e., versus those presenting for other reasons). Text-mining methods also predicted accurate classification of those at risk for later suicidal behaviors [109,110,112,113], in some cases, generating word maps that may aid future research. Other novel approaches included social media investigations of microblog users and Twitter posts to detect suicide risk among users, online communities, or posts following a natural disaster to index public emotion [66,85,86,87,88,89,105]. Despite a large number of neuroimaging and neuroanatomical reports within suicide prevention, a smaller number of studies examined a biological variable in this review. Baca-Garcia and colleagues [56] showed that an algorithm based on three CNS (Central Nervous System) single nucleotide polymorphisms (SNPs) correctly classified those with and without a suicide attempt history, whereas other investigations evaluated candidate biomarkers to predict future risk for suicide [64,69,103]. Only one study used neuroimaging—comparing youth with suicidal ideation (n = 17) to matched controls (n = 17) on fMRI variables [74]. Based on neural representations in response to suicide and death-related scan stimuli, this generated a high (91%) classification accuracy [74]. This signals a promising approach to biomarker discovery, underscoring integration of biological, behavioral, and clinical variables to inform etiology and intervention in an area with few selective treatments [137,138].

Critical Challenges and Future Directions

A number of limitations should be noted. Methods varied widely across reports, both with respect to ML methods and study quality. Despite considerable diagnostic and methodological heterogeneity, high levels of model performance were observed. Incomplete reporting of test statistics (e.g., accuracy, AUC, sensitivity, specificity), together with differing methods for assessing and defining risk across diverse ML approaches, highlights the need for improved reporting standards and a priori-designed studies. Key parameters, such as PPV, area under the precision-recall curve (AUPRC), and lead time of the prediction, which allows for decision-making about when to potentially act and intervene, were also underreported. Challenges inherent in retrospectively analyzing health data collected for administrative and clinical purposes should also be noted, given the high number of studies using EMR. Hersh et al. [139] raised concerns regarding biases due to EMR data being collected only at hospital visits, incomplete records or missing data, and other considerations relevant to accurate coding, emphasizing that advanced statistical methods be used for correction. Others report similar concerns of omission in EMR, calling for longitudinal measurements [19]. Additionally, given the way in which differences in the splitting of training data may alter the performance of predictive modeling [140], use of multiple methods to separate samples (i.e., for training versus testing of algorithms) is recommended. Critically, the majority of studies were cross-sectional in nature, underscoring the need for prospectively-designed ML investigations to advance suicide risk prediction.
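The caution about data splitting can be illustrated concretely: with the same data and the same simple model, test performance varies depending on how the train/test split happens to fall, which motivates repeating the split. The dataset (a one-feature threshold problem with two deliberately mislabeled points) and the model are hypothetical.

```python
# How random train/test splits alter measured performance (hypothetical data).
import random

# feature x in [0, 1); label roughly follows x >= 0.5, with two noisy labels
data = [(i / 20, int(i >= 10)) for i in range(20)]
data[8] = (8 / 20, 1)    # mislabeled low-score case
data[11] = (11 / 20, 0)  # mislabeled high-score case

def fit_threshold(train):
    # choose the cutoff (from training features) minimizing training errors
    return min(((sum((x >= t) != y for x, y in train), t)
                for t in [x for x, _ in train]))[1]

accuracies = []
for seed in range(10):
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    train, test = shuffled[:14], shuffled[14:]
    t = fit_threshold(train)
    accuracies.append(sum((x >= t) == y for x, y in test) / len(test))

print(min(accuracies), max(accuracies))  # spread across 10 random splits
```

Reporting the distribution of performance across repeated splits (or across temporally separated train/test sets) gives a more honest estimate than a single, possibly lucky, partition.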
A lack of application to new settings or populations also highlights need for replication, particularly according to longitudinal, well-defined outcomes of risk. Though translation of one model to a new site poses inherent challenges, a model can be trained with data from any local site and tested using data from the site itself [23]. Regarding future application, challenges in constructing and deploying a statistical model within a clinical setting include access to data, availability of skilled personnel, and need to identify ways of integrating the model into healthcare workflows [23]. Others have emphasized associations between model complexity and predictive accuracy [141], in addition to key limitations [142]. For example, Siddaway and colleagues [142] suggest that ML may be best harnessed when led by clinical need, becoming machine-assisted learning similar to other statistical techniques, cautioning against over-reliance on ML models. We recommend incorporation of these considerations into the design of new investigations utilizing machine learning in the detection and prediction of risk for suicidal behaviors.

5. Conclusions

In conclusion, findings of this review highlight risk factors that align with past non-ML findings (e.g., mood/substance disorders, male gender, family history, previous hospitalization, unemployment, comorbidity, and delinquency), whereas newly-identified risk variables or approaches point to sleep, circadian, and neural substrates, and NLP-derived indices of speech or user data. These findings reflect a burgeoning literature that warrants future study in an area of prevention prioritized worldwide. Though a leading cause of death, suicide defies prediction given its rare occurrence at the population level, which poses important challenges to prevention. AI and ML applications hold unique promise to enable precision medicine in the prevention of suicide, particularly given their ability to handle large and complex datasets. We propose that such methods may crucially inform the early detection of suicide risk, triage, and treatment development, with important methodological and statistical cautions. The application of NLP to social media in particular, and the integration of AI with real-time suicide risk assessments, hold unique promise to impact the prevention of suicide on a broad scale.

Author Contributions

R.A.B. had full access to all the data in this review and takes responsibility for the integrity of the data and the accuracy of data analysis. Study concept and design: R.A.B. Acquisition, analysis, or interpretation of data: R.A.B., A.M.H., and R.M. Drafting of manuscript: R.A.B., A.M.H., R.M., J.P.K., F.A., and N.H.S. Critical revision of the manuscript for important intellectual content: R.A.B., A.M.H., R.M., and N.H.S. Supervision of study: R.A.B. and N.H.S. All authors have read and agreed to the published version of the manuscript.


Funding

This study was supported in part by grant K23MH093490 from the National Institutes of Health (Bernert).

Conflicts of Interest

The authors report no potential conflicts of interest for the present report. Dr. Bernert has received financial support for consulting services (Facebook, Inc.; and The California Mental Health Services Oversight and Accountability Commission); no financial support was received for the present manuscript.

References
  1. National Center for Injury Prevention and Control, CDC. Web-Based Injury Statistics Query and Reporting System (WISQARS), Fatal Injury and Violence Data. Available online: (accessed on 30 November 2018).
  2. Rockett, I.R.; Regier, M.D.; Kapusta, N.D.; Coben, J.H.; Miller, T.R.; Hanzlick, R.L.; Todd, K.H.; Sattin, R.W.; Kennedy, L.W.; Kleinig, J.; et al. Leading causes of unintentional and intentional injury mortality: United States, 2000–2009. Am. J. Public Health 2012, 102, e84–e92. [Google Scholar] [CrossRef]
  3. Shepard, D.S.; Gurewich, D.; Lwin, A.K.; Reed, G.A., Jr.; Silverman, M.M. Suicide and suicidal attempts in the United States: Costs and policy implications. Suicide Life Threat. Behav. 2016, 46, 352–362. [Google Scholar] [CrossRef]
  4. Goldsmith, S.; Pellmar, T.; Kleinman, A.; Bunney, W. Reducing suicide: A national imperative (Committee on Pathophysiology and Prevention of Adolescent and Adult Suicide, Board on Neuroscience and Behavioral Health, Institute of Medicine of the National Academies). Proc. Natl. Acad. Sci. USA 2002. [Google Scholar] [CrossRef]
  5. World Health Organization. Preventing Suicide: A Global Imperative; World Health Organization: Geneva, Switzerland, 2014. [Google Scholar]
  6. U.S. Department of Health and Human Services (HHS) Office of the Surgeon General and National Action Alliance for Suicide Prevention. 2012 National Strategy for Suicide Prevention: Goals and Objectives for Action; HHS: Washington, DC, USA, 2012.
  7. Curtin, S.C.; Warner, M.; Hedegaard, H. Increase in suicide in the United States, 1999–2014. NCHS Data Brief 2016, 241, 1–8. [Google Scholar]
  8. Juurlink, D.N.; Herrmann, N.; Szalai, J.P.; Kopp, A.; Redelmeier, D.A. Medical illness and the risk of suicide in the elderly. Arch. Intern. Med. 2004, 164, 1179–1184. [Google Scholar] [CrossRef]
  9. Smith, E.G.; Kim, H.M.; Ganoczy, D.; Stano, C.; Pfeiffer, P.N.; Valenstein, M. Suicide risk assessment received prior to suicide death by Veterans Health Administration patients with a history of depression. J. Clin. Psychiatry 2013, 74, 226–232. [Google Scholar] [CrossRef]
  10. Van Heeringen, K.; Mann, J.J. The neurobiology of suicide. Lancet Psychiatry 2014, 1, 63–72. [Google Scholar] [CrossRef]
  11. Franklin, J.C.; Ribeiro, J.D.; Fox, K.R.; Bentley, K.H.; Kleiman, E.M.; Huang, X.; Musacchio, K.M.; Jaroszewski, A.C.; Chang, B.P.; Nock, M.K. Risk factors for suicidal thoughts and behaviors: A meta-analysis of 50 years of research. Psychol. Bull. 2017, 143, 187–232. [Google Scholar] [CrossRef]
  12. Pourmand, A.; Roberson, J.; Caggiula, A.; Monsalve, N.; Rahimi, M.; Torres-Llenza, V. Social media and suicide: A review of technology-based epidemiology and risk assessment. Telemed. e-Health 2019, 25, 880–888. [Google Scholar]
  13. Ivey-Stephenson, A.Z. Suicide trends among and within urbanization levels by sex, race/ethnicity, age group, and mechanism of death—United States, 2001–2015. MMWR Surveill. Summ. 2017, 66, 1–16. [Google Scholar] [CrossRef][Green Version]
  14. Miller, M.; Azrael, D.; Barber, C. Suicide mortality in the United States: The importance of attending to method in understanding population-level disparities in the burden of suicide. Annu. Rev. Public Health 2012, 33, 393–408. [Google Scholar] [CrossRef]
  15. Kessler, R.C.; Warner, C.H.; Ivany, C.; Petukhova, M.V.; Rose, S.; Bromet, E.J.; Brown, M.; Cai, T.; Colpe, L.J.; Cox, K.L.; et al. Predicting suicides after psychiatric hospitalization in US Army soldiers: The Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). JAMA Psychiatry 2015, 72, 49–57. [Google Scholar] [CrossRef]
  16. Delgado-Rodríguez, M.; Llorca, J. Bias. J. Epidemiol. Community Health 2004, 58, 635–641. [Google Scholar] [CrossRef][Green Version]
  17. Weng, C.; Li, Y.; Ryan, P.; Zhang, Y.; Liu, F.; Gao, J.; Bigger, J.T.; Hripcsak, G. A distribution-based method for assessing the differences between clinical trial target populations and patient populations in electronic health records. Appl. Clin. Inform. 2014, 5, 463–479. [Google Scholar]
  18. Casey, J.A.; Schwartz, B.S.; Stewart, W.F.; Adler, N. Using electronic health records for population health Research: A review of methods and applications. Annu. Rev. Public Health 2016, 37, 61–81. [Google Scholar] [CrossRef][Green Version]
  19. Goldstein, B.A.; Navar, A.M.; Pencina, M.J.; Ioannidis, J.P.A. Opportunities and challenges in developing risk prediction models with electronic health records data: A systematic review. J. Am. Med. Inform. Assoc. 2016, 24, 198–208. [Google Scholar] [CrossRef]
  20. Saria, S.; Rajani, A.K.; Gould, J.; Koller, D.; Penn, A.A. Integration of early physiological responses predicts later illness severity in preterm infants. Sci. Transl. Med. 2010, 2, 48ra65. [Google Scholar] [CrossRef][Green Version]
  21. Henry, K.E.; Hager, D.N.; Pronovost, P.J.; Saria, S. A targeted real-time early warning score (TREWScore) for septic shock. Sci. Transl. Med. 2015, 7, 299ra122. [Google Scholar] [CrossRef][Green Version]
  22. Huang, S.H.; LePendu, P.; Iyer, S.V.; Tai-Seale, M.; Carrell, D.; Shah, N.H. Toward personalizing treatment for depression: Predicting diagnosis and severity. J. Am. Med. Inform. Assoc. 2014, 21, 1069–1075. [Google Scholar] [CrossRef][Green Version]
  23. Peck, J.S.; Benneyan, J.C.; Nightingale, D.J.; Gaehde, S.A. Predicting emergency department inpatient admissions to improve same-day patient flow. Acad. Emerg. Med. 2012, 19, E1045–E1054. [Google Scholar] [CrossRef]
  24. Tiwari, V.; Furman, W.R.; Sandberg, W.S. Predicting case volume from the accumulating elective operating room schedule facilitates staffing improvements. Anesthesiology 2014, 121, 171–183. [Google Scholar] [CrossRef][Green Version]
  25. Callahan, A.; Shah, N.H. Machine learning in healthcare. In Key Advances in Clinical Informatics, 1st Edition: Transforming Health Care Through Health Information Technology; Sheikh, A., Bates, D., Wright, A., Cresswell, K.K., Eds.; Academic Press: Cambridge, MA, USA, 2017; pp. 279–291. [Google Scholar]
  26. Pestian, J.P.; Grupp-Phelan, J.; Bretonnel Cohen, K.; Meyers, G.; Richey, L.A.; Matykiewicz, P. A controlled trial using natural language processing to examine the language of suicidal adolescents in the emergency department. Suicide Life Threat. Behav. 2016, 46, 154–159. [Google Scholar] [CrossRef]
  27. McCoy, T.H., Jr.; Castro, V.M.; Roberson, A.M.; Snapper, L.A.; Perlis, R.H. Improving prediction of suicide and accidental death after discharge from general hospitals with natural language processing. JAMA Psychiatry 2016, 73, 1064–1071. [Google Scholar] [CrossRef][Green Version]
  28. Sanderson, M.; Bulloch, A.G.; Wang, J.; Williams, K.G.; Williamson, T.; Patten, S.B. Predicting death by suicide following an emergency department visit for parasuicide with administrative health care system data and machine learning. EClinicalMedicine 2020, 20, 100281. [Google Scholar] [CrossRef]
  29. Roy, A.; Nikolitch, K.; McGinn, R.; Jinah, S.; Klement, W.; Kaminsky, Z.A. A machine learning approach predicts future risk to suicidal ideation from social media data. NPJ Digit. Med. 2020, 3, 1–12. [Google Scholar] [CrossRef]
  30. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269. [Google Scholar] [CrossRef][Green Version]
  31. Crosby, A.; Ortega, L.; Melanson, C. Self-directed Violence Surveillance: Uniform Definitions and Recommended Data Elements; version 1.0; National Center for Injury Prevention and Control (CDC): Atlanta, GA, USA, 2011.
  32. Howick, J.; Chalmers, I.; Glasziou, P.; Greenhalgh, T.; Heneghan, C.; Liberati, A.; Moschetti, I.; Phillips, B.; Thornton, H. Explanation of the 2011 Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence (Background Document). Oxford Centre for Evidence-Based Medicine. Available online: (accessed on 31 July 2020).
  33. Kessler, R.C.; Hwang, I.; Hoffmire, C.A.; McCarthy, J.F.; Petukhova, M.V.; Rosellini, A.J.; Sampson, N.A.; Schneider, A.L.; Bradley, P.A.; Katz, I.R.; et al. Developing a practical suicide risk prediction model for targeting high-risk patients in the Veterans Health Administration. Int. J. Methods Psychiatr. Res. 2017, 26. [Google Scholar] [CrossRef]
  34. Kessler, R.C.; Stein, M.B.; Petukhova, M.V.; Bliese, P.; Bossarte, R.M.; Bromet, E.J.; Fullerton, C.S.; Gilman, S.E.; Ivany, C.; Lewandowski-Romps, L.; et al. Predicting suicides after outpatient mental health visits in the Army study to assess risk and resilience in servicemembers (Army STARRS). Mol. Psychiatry 2017, 22, 544–551. [Google Scholar] [CrossRef][Green Version]
  35. Poulin, C.; Shiner, B.; Thompson, P.; Vepstas, L.; Young-Xu, Y.; Goertzel, B.; Watts, B.; Flashman, L.; McAllister, T. Predicting the risk of suicide by analyzing the text of clinical notes. PLoS ONE 2014, 9, e85733. [Google Scholar] [CrossRef]
  36. Pestian, J.P.; Matykiewicz, P.; Grupp-Phelan, J.; Lavanier, S.A.; Combs, J.; Kowatch, R. Using natural language processing to classify suicide notes. AMIA Annu. Symp. Proc. 2008, 1091. [Google Scholar]
  37. Pamer, C.; Serpi, T.; Finkelstein, J. Analysis of Maryland poisoning deaths using classification and regression tree (CART) analysis. AMIA Annu. Symp. Proc. 2008, 550–554. [Google Scholar]
  38. Haerian, K.; Salmasian, H.; Friedman, C. Methods for identifying suicide or suicidal ideation in EHRs. AMIA Annu. Symp. Proc. 2012, 2012, 1244–1253. [Google Scholar] [PubMed]
  39. Ilgen, M.A.; Downing, K.; Zivin, K. Exploratory data mining analysis identifying subgroups of patients with depression who are at high risk for suicide. J. Clin. Psychiatry 2009, 70, 1495–1500. [Google Scholar] [CrossRef] [PubMed][Green Version]
  40. Adamou, M.; Antoniou, G.; Greasidou, E.; Lagani, V.; Charonyktakis, P.; Tsamardinos, I.; Doyle, M. Toward automatic risk assessment to support suicide prevention. Crisis 2019, 40, 249–256. [Google Scholar] [CrossRef][Green Version]
  41. Rosellini, A.J.; Stein, M.B.; Benedek, D.M.; Bliese, P.D.; Chiu, W.T.; Hwang, I.; Monahan, J.; Nock, M.K.; Sampson, N.A.; Street, A.E.; et al. Predeployment predictors of psychiatric disorder-symptoms and interpersonal violence during combat deployment. Depress Anxiety 2018, 35, 1073–1080. [Google Scholar] [CrossRef]
  42. De Ávila Berni, G.; Rabelo-da-Ponte, F.D.; Librenza-Garcia, D.V.; Boeira, M.; Kauer-Sant’Anna, M. Potential use of text classification tools as signatures of suicidal behavior: A proof-of-concept study using Virginia Woolf’s personal writings. PLoS ONE 2018, 13, e0207963. [Google Scholar]
  43. Metzger, M.H.; Tvardik, N.; Gicquel, Q.; Bouvry, C.; Poulet, E.; Potinet-Pagliaroli, V. Use of emergency department electronic medical records for automated epidemiological surveillance of suicide attempts: A French pilot study. Int. J. Methods Psychiatr. Res. 2017, 26, e1522. [Google Scholar] [CrossRef]
  44. Passos, I.C.; Mwangi, B.; Cao, B.; Hamilton, J.E.; Wu, M.J.; Zhang, X.Y.; Zunta-Soares, G.B.; Quevedo, J.; Kauer-Sant’Anna, M.; Kapczinski, F.; et al. Identifying a clinical signature of suicidality among patients with mood disorders: A pilot study using a machine learning approach. J. Affect Disord. 2016, 193, 109–116. [Google Scholar] [CrossRef][Green Version]
  45. Kessler, R.C.; Van Loo, H.M.; Wardenaar, K.J.; Bossarte, R.M.; Brenner, L.A.; Cai, T.; Ebert, D.D.; Hwang, I.; Li, J.; De Jonge, P.; et al. Testing a machine-learning algorithm to predict the persistence and severity of major depressive disorder from baseline self-reports. Mol. Psychiatry 2016, 21, 1366–1371. [Google Scholar] [CrossRef]
  46. Modai, I.; Kuperman, J.; Goldberg, I.; Goldish, M.; Mendel, S. Fuzzy logic detection of medically serious suicide attempt records in major psychiatric disorders. J. Nerv. Ment. Dis. 2004, 192, 708–710. [Google Scholar] [CrossRef]
  47. Modai, I.; Kurs, R.; Ritsner, M.; Oklander, S.; Silver, H.; Segal, A.; Goldberg, I.; Mendel, S. Neural network identification of high-risk suicide patients. Med. Inform. Internet Med. 2002, 27, 39–47. [Google Scholar] [CrossRef]
  48. Modai, I.; Valevski, A.; Solomish, A.; Kurs, R.; Hines, I.L.; Ritsner, M.; Mendel, S. Neural network detection of files of suicidal patients and suicidal profiles. Med. Inform. Internet Med. 1999, 24, 249–256. [Google Scholar]
  49. Modai, I.; Hirschmann, S.; Hadjez, J.; Bernat, C.; Gelber, D.; Ratner, Y.; Rivkin, O.; Kurs, R.; Ponizovsky, A.; Ritsner, M. Clinical evaluation of prior suicide attempts and suicide risk in psychiatric inpatients. Crisis 2002, 23, 47–54. [Google Scholar] [CrossRef]
  50. Modai, S.; Greenstain, A.; Weizman, A.; Mendel, S. Backpropagation and adaptive resonance theory in predicting suicidal risk. Med. Inform. 1998, 23, 325–330. [Google Scholar] [CrossRef]
  51. Hettige, N.C.; Nguyen, T.B.; Yuan, C.; Rajakulendran, T.; Baddour, J.; Bhagwat, N.; Bani-Fatemi, A.; Voineskos, A.N.; Chakravarty, M.M.; De Luca, V. Classification of suicide attempters in schizophrenia using sociocultural and clinical features: A machine learning approach. Gen. Hosp. Psychiatry 2017, 47, 20–28. [Google Scholar] [CrossRef]
  52. Walsh, C.G.; Ribeiro, J.D.; Franklin, J.C. Predicting risk of suicide attempts over time through machine learning. Clin. Psychol. Sci. 2017, 5, 457–469. [Google Scholar] [CrossRef]
  53. Venek, V.; Scherer, S.; Morency, L.P.; Rizzo, A.; Pestian, J. Adolescent suicidal risk assessment in clinician-patient interaction. IEEE Trans. Affect. Comput. 2017, 8, 204–215. [Google Scholar] [CrossRef]
  54. Baca-Garcia, E.; Perez-Rodriguez, M.M.; Saiz-Gonzalez, D.; Basurte-Villamor, I.; Saiz-Ruiz, J.; Leiva-Murillo, J.M.; De Prado-Cumplido, M.; Santiago-Mozos, R.; Artés-Rodríguez, A.; De Leon, J. Variables associated with familial suicide attempts in a sample of suicide attempters. Prog. Neuropsychopharmacol. Biol. Psychiatry 2007, 31, 1312–1316. [Google Scholar] [CrossRef]
  55. Tiet, Q.Q.; Ilgen, M.A.; Byrnes, H.F.; Moos, R.H. Suicide attempts among substance use disorder patients: An initial step toward a decision tree for suicide management. Alcohol Clin. Exp. Res. 2006, 30, 998–1005. [Google Scholar] [CrossRef]
  56. Baca-Garcia, E.; Vaquero-Lorenzo, C.; Perez-Rodriguez, M.M.; Gratacòs, M.; Bayés, M.; Santiago-Mozos, R.; Leiva-Murillo, J.M.; De Prado-Cumplido, M.; Artes-Rodriguez, A.; Ceverino, M.M.; et al. Nucleotide variation in central nervous system genes among male suicide attempters. Am. J. Med. Genet. B. Neuropsychiatr. Genet. 2010, 153, 208–213. [Google Scholar] [CrossRef]
  57. Lopez-Castroman, J.; De las Mercedes Perez-Rodriguez, M.; Jaussent, I.; Alegria, A.A.; Artes-Rodriguez, A.; Freed, P.; Guillaume, S.; Jollant, F.; Leiva-Murillo, J.M.; Malafosse, A.; et al. Distinguishing the relevant features of frequent suicide attempters. J. Psychiatry Res. 2010, 45, 619–625. [Google Scholar] [CrossRef] [PubMed][Green Version]
  58. Modai, I.; Kuperman, J.; Goldberg, I.; Goldish, M.; Mendel, S. Suicide risk factors and suicide vulnerability in various major psychiatric disorders. Med. Inform. Internet Med. 2004, 29, 65–74. [Google Scholar] [CrossRef] [PubMed]
  59. Mann, J.J.; Ellis, S.P.; Waternaux, C.M.; Liu, X.; Oquendo, M.A.; Malone, K.M.; Brodsky, B.S.; Haas, G.L.; Currier, D. Classification trees distinguish suicide attempters in major psychiatric disorders: A model of clinical decision making. J. Clin. Psychiatry 2008, 69, 23. [Google Scholar] [CrossRef] [PubMed]
  60. Bae, S.M.; Lee, S.A.; Lee, S.H. Prediction by data mining, of suicide attempts in Korean adolescents: A national study. Neuropsychiatr. Dis. Treat. 2015, 11, 2367–2375. [Google Scholar] [CrossRef][Green Version]
  61. Choo, C.; Diederich, J.; Song, I.; Ho, R. Cluster analysis reveals risk factors for repeated suicide attempts in a multi-ethnic Asian population. Asian J. Psychiatr. 2014, 8, 38–42. [Google Scholar] [CrossRef]
  62. Oh, J.; Yun, K.; Hwang, J.H.; Chae, J.H. Classification of suicide attempts through a machine learning algorithm based on multiple systemic psychiatric scales. Front Psychiatry 2017, 8, 192. [Google Scholar] [CrossRef][Green Version]
  63. Benton, A.; Mitchell, M.; Hovy, D. Multi-task learning for mental health using social media text. arXiv 2017, arXiv:1712.03538. [Google Scholar]
  64. Ruderfer, D.M.; Walsh, C.G.; Aguirre, M.W.; Tanigawa, Y.; Ribeiro, J.D.; Franklin, J.C.; Rivas, M.A. Significant shared heritability underlies suicide attempt and clinically predicted probability of attempting suicide. Mol. Psychiatry 2019. [Google Scholar] [CrossRef][Green Version]
  65. Lyu, J.; Zhang, J. BP neural network prediction model for suicide attempt among Chinese rural residents. J. Affect. Disord. 2019, 246, 465–473. [Google Scholar] [CrossRef]
  66. Coppersmith, G.; Leary, R.; Crutchley, P.; Fine, A. NLP of social media as screening for suicide risk. Biomed. Inform. Insights 2018, 10, 1–11. [Google Scholar] [CrossRef]
  67. Dargél, A.A.; Roussel, F.; Volant, S.; Etain, B.; Grant, R.; Azorin, J.M.; M’Bailara, K.; Bellivier, F.; Bougerol, T.; Kahn, J.P.; et al. Emotional hyper-reactivity and cardiometabolic risk in remitted bipolar patients: A machine learning approach. Acta Psychiatr. Scand. 2018, 138, 348–359. [Google Scholar] [CrossRef] [PubMed]
  68. Jordan, J.T.; McNeil, D.G. Characteristics of a suicide attempt predict who makes another attempt after hospital discharge: A decision-tree investigation. Psychiatry Res. 2018, 268, 317–322. [Google Scholar] [CrossRef]
  69. Setoyama, D.; Kato, T.A.; Hashimoto, R.; Kunugi, H.; Hattori, K.; Hayakawa, K.; Sato-Kasai, M.; Shimokawa, N.; Kaneko, S.; Yoshida, S.; et al. Plasma metabolites predict severity of depression and suicidal ideation in psychiatric patients - a multicenter pilot analysis. PLoS ONE 2016, 11, e0165267. [Google Scholar] [CrossRef] [PubMed]
  70. Pestian, J.P.; Sorter, M.; Connolly, B.; Bretonnel Cohen, K.; McCullumsmith, C.; Gee, J.T.; Morency, L.P.; Scherer, S.; Rohlfs, L.; STM Research Group. A machine learning approach to identifying the thought markers of suicidal subjects: A prospective multicenter trial. Suicide Life Threat. Behav. 2017, 47, 112–121. [Google Scholar] [CrossRef] [PubMed]
  71. Cook, B.L.; Progovac, A.M.; Chen, P.; Mullin, B.; Hou, S.; Baca-Garcia, E. Novel use of natural language processing (NLP) to predict suicidal ideation and psychiatric symptoms in a text-based mental health intervention in Madrid. Comput. Math Methods Med. 2016, 2016, 8708434. [Google Scholar] [CrossRef] [PubMed]
  72. Gradus, J.L.; King, M.W.; Galatzer-Levy, I.; Street, A.E. Gender differences in machine learning models of trauma and suicidal ideation in veterans of the Iraq and Afghanistan Wars. J. Trauma Stress. 2017, 30, 362–371. [Google Scholar] [CrossRef][Green Version]
  73. Birjali, M.; Beni-Hssane, A.; Erritali, M. Machine learning and semantic sentiment analysis based algorithms for suicide sentiment prediction in social networks. Proc. Comput. Sci. 2017, 113, 65–72. [Google Scholar] [CrossRef]
  74. Just, M.A.; Pan, L.; Cherkassky, V.L. Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth. Nat. Hum. Behav. 2017, 1, 911–919. [Google Scholar] [CrossRef][Green Version]
  75. Ryu, S.; Lee, H.; Lee, D.K.; Park, K. Use of a machine learning algorithm to predict individuals with suicidal ideation in the general population. Psychiatry Investig. 2018, 15, 1030–1036. [Google Scholar] [CrossRef]
  76. Desjardins, I.; Cats-Baril, W.; Maruti, S.; Freeman, K.; Althoff, R. Suicide risk assessment in hospitals: An expert system-based triage tool. J. Clin. Psychiatry 2016, 77, e874–e882. [Google Scholar] [CrossRef][Green Version]
  77. Tzeng, H.M. Forecasting: Adopting the methodology of support vector machines to nursing research. Worldviews Evid. Based Nurs. 2006, 3, 124–128. [Google Scholar] [CrossRef] [PubMed]
  78. Baca-García, E.; Perez-Rodriguez, M.M.; Basurte-Villamor, I.; Saiz-Ruiz, J.; Leiva-Murillo, J.M.; De Prado-Cumplido, M.; Santiago-Mozos, R.; Artés-Rodríguez, A.; De Leon, J. Using data mining to explore complex clinical decisions: A study of hospitalization after a suicide attempt. J. Clin. Psychiatry 2006, 67, 1124–1132. [Google Scholar] [CrossRef] [PubMed]
  79. Quan, C.; Wang, M.; Ren, F. An unsupervised text mining method for relation extraction from biomedical literature. PLoS ONE 2014, 9, e102039. [Google Scholar] [CrossRef] [PubMed][Green Version]
  80. Litvinova, T.; Seredin, P.V.; Litvinov, O.A.; Romanchenko, O.V. Identification of suicidal tendencies of individuals based on the quantitative analysis of their internet texts. Computación y Sistemas. 2017, 21, 243–252. [Google Scholar] [CrossRef][Green Version]
  81. Zhang, Y.; Zhang, O.; Li, R.; Flores, A.; Selek, S.; Zhang, X.Y.; Xu, H. Psychiatric stressor recognition from clinical notes to reveal an association with suicide. Health Informatics J. 2019, 25, 1846–1862. [Google Scholar] [CrossRef][Green Version]
  82. McKernan, L.C.; Lenert, M.C.; Crofford, L.J.; Walsh, C.G. Outpatient engagement and predicted risk of suicide attempts in fibromyalgia. Arthritis Care Res. 2019, 71, 1255–1263. [Google Scholar] [CrossRef]
  83. DelPozo-Banos, M.; John, A.; Petkov, N.; Berridge, D.M.; Southern, K.; Lloyd, K.; Jones, C.; Spencer, S.; Travieso, C.M. Using neural networks with routine health records to identify suicide risk: Feasibility study. JMIR Ment. Health 2018, 5, e10144. [Google Scholar] [CrossRef][Green Version]
  84. Burke, T.A.; Jacobucci, R.; Ammerman, B.A.; Piccirillo, M.; McCloskey, M.S.; Heimberg, R.G.; Alloy, L.B. Identifying the relative importance of non-suicidal self-injury features in classifying suicidal ideation plans and behaviour using exploratory data mining. Psychiatry Res. 2018, 262, 175–183. [Google Scholar] [CrossRef]
  85. Cheng, Q.; Li, T.M.; Kwok, C.L.; Zhu, T.; Yip, P.S. Assessing suicide risk and emotional distress in Chinese social media: A text mining and machine learning study. J. Med. Internet Res. 2017, 19, e243. [Google Scholar] [CrossRef]
  86. Braithwaite, S.R.; Giraud-Carrier, C.; West, J.; Barnes, M.D.; Hanson, C.L. Validating machine learning algorithms for Twitter data against established measures of suicidality. JMIR Ment. Health 2016, 3, e21. [Google Scholar] [CrossRef]
  87. Guan, L.; Hao, B.; Cheng, Q.; Yip, P.S.; Zhu, T. Identifying Chinese microblog users with high suicide probability using Internet-based profile and linguistic features: Classification model. JMIR Ment. Health 2015, 2, e17. [Google Scholar] [CrossRef] [PubMed]
  88. Woo, H.; Cho, Y.; Shim, E.; Lee, K.; Song, G. Public trauma after the Sewol Ferry Disaster: The role of social media in understanding the public mood. Int. J Environ. Res. Public Health 2015, 12, 10974–10983. [Google Scholar] [CrossRef] [PubMed][Green Version]
  89. O’Dea, B.; Wan, S.; Batterham, P.J.; Calear, A.L.; Paris, C.; Christensen, H. Detecting suicidality on Twitter. Internet Interv. 2015, 2, 183–188. [Google Scholar] [CrossRef][Green Version]
  90. Nguyen, T.; O’Dea, B.; Larsen, M.; Phung, D.; Venkatesh, S.; Christensen, H. Using linguistic and topic analysis to classify sub-groups of online depression communities. Multimed. Tools Appl. 2017, 76, 10653–10676. [Google Scholar] [CrossRef]
  91. Burnap, P.; Colombo, G.; Amery, R.; Hodorog, A.; Scourfield, J. Multi-class machine classification of suicide-related communication on Twitter. Online Soc. Netw. Media 2017, 2, 32–44. [Google Scholar] [CrossRef]
  92. Vioulès, M.J.; Moulahi, B.; Azé, J.; Bringay, S. Detection of suicide-related posts in Twitter data streams. IBM J. Res. Dev. 2018, 62, 7–12. [Google Scholar]
  93. Zalar, B.; Plesnicar, B.K.; Zalar, I.; Mertik, M. Suicide and suicide attempt descriptions by multimethod approach. Psychiatr. Danub. 2018, 30, 317–322. [Google Scholar] [CrossRef]
  94. Tran, T.; Nguyen, T.D.; Phung, D.; Venkatesh, S. Learning vector representation of medical objects via EMR-driven nonnegative restricted Boltzmann machines (eNRBM). J. Biomed. Inform. 2015, 54, 96–105. [Google Scholar] [CrossRef][Green Version]
  95. Leiva-Murillo, J.M.; Lopez-Castroman, J.; Baca-Garcia, E. EURECA Consortium. Characterization of suicidal behaviour with self-organizing maps. Comput. Math. Methods Med. 2013, 2013, 136743. [Google Scholar] [CrossRef]
  96. Tran, T.; Luo, W.; Phung, D.; Harvey, R.; Berk, M.; Kennedy, R.L.; Venkatesh, S. Risk stratification using data from electronic medical records better predicts suicide risks than clinician assessments. BMC Psychiatry. 2014, 14, 76. [Google Scholar] [CrossRef][Green Version]
  97. Bernecker, S.L.; Zuromski, K.L.; Gutierrez, P.M.; Joiner, T.E.; King, A.J.; Liu, H.; Nock, M.K.; Sampson, N.A.; Zaslavsky, A.M.; Stein, M.B.; et al. Predicting suicide attempts among soldiers who deny suicidal ideation in the Army Study to Assess Risk and Resilience in Service Members (Army STARRS). Behav. Res. Ther. 2019, 120, 103350. [Google Scholar] [CrossRef] [PubMed]
  98. Zhong, Q.Y.; Mittal, L.P.; Nathan, M.D.; Brown, K.M.; González, D.K.; Cai, T.; Finan, S.; Gelaye, B.; Avillach, P.; Smoller, J.W.; et al. Use of natural language processing in electronic medical records to identify pregnant women with suicidal behavior: Towards a solution to the complex classification problem. Eur. J. Epidemiol. 2018, 34, 153–162. [Google Scholar] [CrossRef] [PubMed]
  99. Morales, S.; Barros, J.; Echávarri, O.; García, F.; Osses, A.; Moya, C.; Maino, M.P.; Fischman, R.; Núñez, C.; Szmulewicz, T.; et al. Acute mental discomfort associated with suicide behavior in a clinical sample of patients with affective disorders: Ascertaining critical variables using artificial intelligence tools. Front. Psychiatry 2017, 8, 7. [Google Scholar] [CrossRef][Green Version]
  100. Barros, J.; Morales, S.; Echávarri, O.; García, A.; Ortega, J.; Asahi, T.; Moya, C.; Fischman, R.; Maino, M.P.; Núñez, C. Suicide detection in Chile: Proposing a predictive model for suicide risk in a clinical sample of patients with mood disorders. Braz. J. Psychiatry 2017, 39, 1–11. [Google Scholar] [CrossRef][Green Version]
  101. Kuroki, Y. Risk factors for suicidal behaviors among Filipino Americans: A data mining approach. Am. J. Orthopsychiatry 2015, 85, 34. [Google Scholar] [CrossRef] [PubMed]
  102. Anderson, H.D.; Pace, W.D.; Brandt, E.; Nielsen, R.D.; Allen, R.R.; Libby, A.M.; West, D.R.; Valuck, R.J. Monitoring suicidal patients in primary care using electronic health records. J. Am. Board Fam. Med. 2015, 28, 65–71. [Google Scholar] [CrossRef] [PubMed]
  103. Levey, D.F.; Niculescu, E.M.; Le-Niculescu, H.; Dainton, H.L.; Phalen, P.L.; Ladd, T.B.; Weber, H.; Belanger, E.; Graham, D.L.; Khan, F.N.; et al. Towards understanding and predicting suicidality in women: Biomarkers and clinical risk assessment. Mol. Psychiatry 2016, 21, 768–785. [Google Scholar] [CrossRef]
  104. Colic, S.; Richardson, D.J.; Reilly, J.P.; Hasey, G. Using machine learning algorithms to enhance the management of suicide ideation. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2018, 4936–4939. [Google Scholar]
  105. Aladağ, A.E.; Muderrisoglu, S.; Akbas, N.B.; Zahmacioglu, O.; Bingol, H.O. Detecting suicidal ideation on forums: Proof-of-concept study. J. Med. Internet Res. 2018, 20, e215. [Google Scholar] [CrossRef]
  106. Choi, S.B.; Lee, W.; Yoon, J.H.; Won, J.U.; Kim, D.W. Ten-year prediction of suicide death using Cox regression and machine learning in a nationwide retrospective cohort study in South Korea. J. Affect. Disord. 2018, 231, 8–14. [Google Scholar] [CrossRef]
  107. Downs, J.; Velupillai, S.; George, G.; Holden, R.; Kikoler, M.; Dean, H.; Fernandes, A.; Dutta, R. Detection of suicidality in adolescents with autism spectrum disorders: Developing a natural language processing approach for use in electronic health records. AMIA Annu. Symp. Proc. 2018, 641–649. [Google Scholar]
  108. Fahey, R.A.; Matsubayashi, T.; Ueda, M. Tracking the Werther effect on social media: Emotional responses to prominent suicide deaths on Twitter and subsequent increases in suicide. Soc. Sci. Med. 2018, 219, 19–29. [Google Scholar] [CrossRef] [PubMed]
  109. Zhong, Q.Y.; Karlson, E.W.; Gelaye, B.; Finan, S.; Avillach, P.; Smoller, J.W.; Cai, T.; Williams, M.A. Screening pregnant women for suicidal behavior in electronic medical records: Diagnostic codes vs. clinical notes processed by natural language processing. BMC Med. Inform. Decis. Mak. 2018, 18, 30. [Google Scholar] [CrossRef] [PubMed]
  110. Fernandes, A.C.; Dutta, R.; Velupillai, S.; Sanyal, J.; Stewart, R.; Chandran, D. Identifying suicide ideation and suicidal attempts in a psychiatric clinical research database using natural language processing. Sci. Rep. 2018, 8, 7426. [Google Scholar] [CrossRef][Green Version]
  111. Jordan, P.; Shedden-Mora, M.; Lowe, B. Predicting suicidal ideation in primary care: An approach to identify easily accessible key variables. Gen. Hosp. Psychiatry 2018, 51, 106–111. [Google Scholar] [CrossRef]
  112. Carson, N.J.; Mullin, B.; Sanchez, M.J.; Lu, F.; Yang, K.; Menezes, M. Identification of suicidal behavior among psychiatrically hospitalized adolescents using natural language processing and machine learning of electronic health records. PLoS ONE 2019, 14, e0211116. [Google Scholar] [CrossRef]
  113. McCoy, T.H.; Pellegrini, A.M.; Perlis, R.H. Research Domain Criteria scores estimated through natural language processing are associated with risk for suicide and accidental death. Depress. Anxiety 2019, 36, 392–399. [Google Scholar] [CrossRef]
  114. Connolly, B.; Cohen, K.B.; Bayram, U.; Pestian, J. A nonparametric Bayesian method of translating machine learning scores to probabilities in clinical decision support. BMC Bioinform. 2017, 18, 361. [Google Scholar] [CrossRef][Green Version]
  115. Modai, I.; Ritsner, M.; Kurs, R.; Shalom, M.; Ponizovsky, A. Validation of the Computerized Suicide Risk Scale: A backpropagation neural network instrument (CSRS-BP). Eur. Psychiatry 2002, 17, 75–81. [Google Scholar] [CrossRef]
  116. Rosellini, A.J.; Stein, M.B.; Benedek, D.M.; Bliese, P.D.; Chiu, W.T.; Hwang, I.; Monahan, J.; Nock, M.K.; Petukhova, M.V.; Sampson, N.A.; et al. Using self-report surveys at the beginning of service to develop multi-outcome risk models for new soldiers in the U.S. Army. Psychol. Med. 2017, 47, 2275–2287. [Google Scholar] [CrossRef][Green Version]
  117. Liakata, M.; Kim, J.H.; Saha, S.; Hastings, J.; Rebholz-Schuhmann, D. Three hybrid classifiers for the detection of emotions in suicide notes. Biomed. Inform. Insights 2012, 5, BII-S8967. [Google Scholar] [CrossRef]
  118. Nikfarjam, A.; Emadzadeh, E.; Gonzalez, G. A hybrid system for emotion extraction from suicide notes. Biomed. Inform. Insights 2012, 5, BII-S8981. [Google Scholar] [CrossRef] [PubMed]
  119. Yeh, E.; Jarrold, W.; Jordan, J. Leveraging psycholinguistic resources and emotional sequence models for suicide note emotion annotation. Biomed. Inform. Insights 2012, 5, BII-S8979. [Google Scholar] [CrossRef] [PubMed][Green Version]
  120. Cherry, C.; Mohammad, S.M.; De Bruijn, B. Binary classifiers and latent sequence models for emotion detection in suicide notes. Biomed. Inform. Insights 2012, 5, BII-S8933. [Google Scholar] [CrossRef] [PubMed][Green Version]
  121. Wang, W.; Chen, L.; Tan, M.; Wang, S.; Sheth, A.P. Discovering fine-grained sentiment in suicide notes. Biomed. Inform. Insights 2012, 5, BII-S8963. [Google Scholar] [CrossRef] [PubMed][Green Version]
  122. Desmet, B.; Hoste, V. Combining lexico-semantic features for emotion classification in suicide notes. Biomed. Inform. Insights 2012, 5, BII-S8960. [Google Scholar] [CrossRef]
  123. Kovačević, A.; Dehghan, A.; Keane, J.A.; Nenadic, G. Topic categorisation of statements in suicide notes with integrated rules and machine learning. Biomed. Inform. Insights 2012, 5, BII-S8978. [Google Scholar]
  124. Pak, A.; Bernhard, D.; Paroubek, P.; Grouin, C. A combined approach to emotion detection in suicide notes. Biomed. Inform. Insights 2012, 5, BII-S8969. [Google Scholar] [CrossRef]
  125. Spasić, I.; Burnap, P.; Greenwood, M.; Arribas-Ayllon, M. A naïve bayes approach to classifying topics in suicide notes. Biomed. Inform. Insights 2012, 5, BII-S8945. [Google Scholar]
  126. McCarthy, J.A.; Finch, D.K.; Jarman, J. Using ensemble models to classify the sentiment expressed in suicide notes. Biomed. Inform. Insights 2012, 5, BII-S8931. [Google Scholar] [CrossRef]
  127. Wicentowski, R.; Sydes, M.R. emotion Detection in suicide notes using Maximum Entropy Classification. Biomed. Inform. Insights 2012, 5, BII-S8972. [Google Scholar] [CrossRef] [PubMed]
  128. Sohn, S.; Torii, M.; Li, D.; Wagholikar, K.; Wu, S.; Liu, H. A hybrid approach to sentiment sentence classification in suicide notes. Biomed. Inform. Insights 2012, 5, BII-S8961. [Google Scholar] [CrossRef] [PubMed][Green Version]
  129. Yang, H.; Willis, A.; De Roeck, A.; Nuseibeh, B. A hybrid model for automatic emotion recognition in suicide notes. Biomed. Inform. Insights 2012, 5, BII-S8948. [Google Scholar] [CrossRef] [PubMed]
  130. The White House Office of Science and Technology Policy, The National Insitutes of Health, The U.S. Department of Veterans Affairs. Open Data and Innovation for Suicide Prevention: #MentalHealthHackathon; The White House Office of Science and Technology Policy, The National Insitutes of Health, The U.S. Department of Veterans Affairs: Boston, MA, USA; Chicago, IL, USA; San Francisco, CA, USA; New York, NY, USA; Washington, DC, USA, 12 December 2015.
  131. Chung, D.T.; Ryan, C.J.; Hadzi-Pavlovic, D.; Singh, S.P.; Stanton, C.; Large, M.M. Suicide rates after discharge from psychiatric facilities: A systematic review and meta-analysis. JAMA Psychiatry 2017, 74, 694–702. [Google Scholar] [CrossRef] [PubMed]
  132. Hom, M.A.; Joiner, T.E.; Bernert, R.A. Limitations of a single-item assessment of suicide attempt history: Implications for standardized suicide risk assessment. Psychol. Assess. 2016, 28, 1026–1030. [Google Scholar] [CrossRef] [PubMed]
  133. Millner, A.J.; Lee, M.D.; Nock, M.K. Single-item measurement of suicidal behaviors: Validity and consequences of misclassification. PLoS ONE 2015, 10, e0141606. [Google Scholar] [CrossRef][Green Version]
  134. Carter, G.L.; Clover, K.; Whyte, I.M.; Dawson, A.H.; D’Este, C. Postcards from the EDge: 5-year outcomes of a randomised controlled trial for hospital-treated self-poisoning. Br. J. Psychiatry 2013, 202, 372–380. [Google Scholar] [CrossRef]
  135. Miller, I.W.; Camargo, C.A., Jr.; Arias, S.A.; Sullivan, A.F.; Allen, M.H.; Goldstein, A.B.; Manton, A.P.; Espinola, J.A.; Jones, R.; Hasegawa, K.; et al. Suicide prevention in an emergency department population: The ED-SAFE study. JAMA Psychiatry 2017, 74, 563–570. [Google Scholar] [CrossRef][Green Version]
  136. Denchev, P.; Pearson, J.L.; Allen, M.H.; Claassen, C.A.; Currier, G.W.; Zatzick, D.F.; Schoenbaum, M.; Claassen, C. Modeling the cost-effectiveness of interventions to reduce suicide risk among hospital emergency department patients. Psychiatr. Serv. 2018, 69, 23–31. [Google Scholar] [CrossRef]
  137. Brent, D.A.; McMakin, D.L.; Kennard, B.D.; Goldstein, T.R.; Mayes, T.L.; Douaihy, A.B. Protecting adolescents from self-harm: A critical review of intervention studies. J. Am. Acad. Child Adolesc. Psychiatry 2013, 52, 1260–1271. [Google Scholar] [CrossRef][Green Version]
  138. Mann, J.J.; Apter, A.; Bertolote, J.; Beautrais, A.; Currier, D.; Haas, A.; Hegerl, U.; Lönnqvist, J.; Malone, K.M.; Marusic, A.; et al. Suicide prevention strategies: A systematic review. JAMA 2005, 294, 2064–2074. [Google Scholar] [CrossRef]
  139. Hersh, W.R.; Weiner, M.G.; Embi, P.J.; Logan, J.R.; Payne, P.R.; Bernstam, E.V.; Lehmann, H.P.; Hripcsak, G.; Hartzog, T.H.; Cimino, J.J.; et al. Caveats for the use of operational electronic health record data in comparative effectiveness research. Med. Care 2013, 51, S30–S37. [Google Scholar] [CrossRef][Green Version]
  140. Iezzoni, L.I. Statistically derived predictive models Caveat emptor. J. Gen. Intern. Med. 1999, 14, 388–389. [Google Scholar] [CrossRef] [PubMed][Green Version]
  141. Fox, K.R.; Huang, X.; Linthicum, K.P.; Wang, S.B.; Franklin, J.C.; Ribeiro, J.D. Model complexity improves the prediction of nonsuicidal self-injury. J. Consult. Clin. Psychology 2019, 87, 684–692. [Google Scholar] [CrossRef] [PubMed]
  142. Siddaway, A.P.; Quinlivan, L.; Kapur, N.; O’Connor, R.C.; De Beurs, D. Cautions concerns and future directions for using machine learning in relation to mental health problems and clinical and forensic risks: A brief comment on “Model complexity improves the prediction of nonsuicidal self-injury” (Fox et al., 2019). J. Consult. Clin. Psychology 2020, 88, 384–387. [Google Scholar] [CrossRef] [PubMed][Green Version]
Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram.
Figure 2. Boxplot of classification accuracy by suicide outcome groupings. Notes: Outcomes assessed: suicide death (M = 0.69, SD = 0.04; 95% CI: 0.58, 0.80), suicide attempt (M = 0.82, SD = 0.12; 95% CI: 0.71, 0.92), suicide ideation (M = 0.92, SD = 0.04; 95% CI: 0.84, 0.99), other-undifferentiated (M = 0.85, SD = 0.19; 95% CI: −0.86, 2.57), other-social media (M = 0.84, SD = 0.04; 95% CI: 0.74, 0.94). CI = confidence interval by outcome.
Figure 3. Boxplot of classification area under the curve (AUC) by suicide outcome groupings. Notes: Outcomes assessed: suicide death (M = 0.79, SD = 0.13; 95% CI: −0.41, 2.00), suicide attempt (M = 0.81, SD = 0.09; 95% CI: 0.76, 0.87), suicide ideation (M = 0.78, SD = 0.15; 95% CI: 0.52, 1.03), multiple outcomes (M = 0.87, SD = 0.08; 95% CI: 0.67, 1.06). CI = confidence interval by outcome.
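The unusually wide interval reported for the other-undifferentiated accuracy group in Figure 2 (95% CI: −0.86, 2.57, despite M = 0.85) is what a two-sided t-based confidence interval looks like when pooled over very few studies. A minimal sketch, assuming a t CI of the mean and n = 2 studies in that group (the figures do not report per-group study counts, so n = 2 is an inference from the interval width):

```python
from math import sqrt

def t_ci(mean, sd, n, t_crit):
    """Two-sided CI for a mean, given the t critical value for df = n - 1."""
    half_width = t_crit * sd / sqrt(n)
    return (mean - half_width, mean + half_width)

# Other-undifferentiated accuracy group: M = 0.85, SD = 0.19.
# With n = 2 studies, df = 1 and t(0.975, df = 1) ~ 12.706, which
# reproduces the very wide interval (differences are rounding):
lo, hi = t_ci(0.85, 0.19, 2, 12.706)
print(round(lo, 2), round(hi, 2))  # -0.86 2.56
```

This illustrates why a near-degenerate interval (spanning values impossible for an accuracy metric) is not an error so much as a symptom of pooling over one degree of freedom.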
Table 1. Investigations by broad outcome groupings and ML parameters.
Author | Year | Journal | Quality Rating | Clinical Sample | Outcome | Biomarker | NLP a | Classification | Specificity | Sensitivity | Accuracy | AUC b | N
Kessler et al. [33]2017Int. J. Methods Psychiatr. Res.2x1 x 0.28 6360
McCoy et al. [27]2016JAMA Psychiatry3x1 xx 458,053
Kessler et al. [34]2017Mol. Psychiatry2x1 x0.950.460.700.70975,057
Kessler et al. [15] 2015JAMA Psychiatry3x1 0.8953,769
Poulin et al. [35]2014PLoS One4x1xxx 0.65 210
Pestian et al. [36]2008AMIA Annu. Symp. Proc.3 1 xx 0.74 66
Pamer et al. [37] 2008AMIA Annu. Symp. Proc.3 1 x 1204
Haerian et al. [38] 2012AMIA Annu. Symp. Proc.3x1 xx 280
Ilgen et al. [39]2009J Clin. Psychiatry3x1 887,859
Adamou et al. [40]2019Crisis3x1 xx 0.70130
Rossellini [41]2018Depress. Anxiety3x1 x 0.869488
De Avila Berni [42]2018PLOS One5 1 x0.910.69 0.80
Metzger et al. [43]2017Int. J. Methods Psychiatr. Res.3 2 xx 444
Passos et al. [44] 2016J. Affect. Disord.4x2 0.71 0.720.77144
Kessler et al. [45] 2016Mol. Psychiatry2 2 x 0.70 0.761056
Modai et al. [46] 2004J. Nerv. Ment. Dis.2x2 0.850.94 987
Modai et al. [47] 2002JMIR Med. Inform.3x2 0.851.00 0.82197
Modai et al. [48] 1999Med. Inform. Internet Med.3 2 0.940.94 0.94198
Modai et al. [49] 2002Crisis2x2 250
Modai et al. [50] 1998Med. Inform.4x2x 0.970.83 161
Hettige et al. [51] 2017Gen. Hosp. Psychiatry3x2x x0.80.650.670.71345
Walsh et al. [52] 2017Clin. Psychol. Sci.3x2 x 0.96 0.845167
Venek et al. [53] 2017IEEE Trans. Affect. Comput.3x2 x 0.90 60
Baca-Garcia et al. [54]2007Prog. Neuropsychopharmacol. Biol. Psychiatry4x2x x0.990.990.970.99539
Tiet et al. [55] 2006Alcohol Clin. Exp. Res.3x2xxx0.870.89 0.8834,251
Baca-Garcia et al. [56] 2010Am. J. Med. Genet. B Genet.3x2x x0.820.500.670.66277
Lopez-Castroman et al. [57] 2011J. Psychiatry Res.4x2x x0.97 0.760.711349
Modai et al. [58]2004Med. Inform. Internet Med.3x2 0.700.83 0.77612
Mann et al. [59] 2008J. Clin. Psychiatry3x2 x0.920.89 0.80408
Bae et al. [60] 2015Neuropsychiatr. Dis. Treat.3 2 x 2754
Choo et al. [61] 2014Asian J. Psychiatr.3x2 0.90 418
Oh et al. [62]2017Front. Psychol.3x2 x0.990.780.97 573
Benton et al. [63]2017Proc. 15th Conf. EACL3x2 x 9611
Ruderfer et al. [64]2019Mol. Psychiatry3 2x 0.820.92 0.94512,639
Lyu and Zhang [65]2019J. Affect. Disord.3x2 0.940.68 0.851318
Coppersmith et al. [66]2018Biomed. Inform. Insights3 2,6 xx 0.94418
Dargel et al. [67]2018Acta Psychiatr. Scand.3x2x 0.84 635
Jordan et al. [68]2018Psychiatry Res.3x2 x0.690.79 0.72218
Setoyama et al. [69] 2016PLoS One2x3x x 0.7090
Pestian et al. [70] 2017Suicide Life Threat. Behav.2x3 xx 0.930.85379
Cook et al. [71] 2016Comput. Math Meth. Med.2x3 xx0.570.560.850.611453
Pestian et al. [26] 2016Suicide Life Threat. Behav.2x3 xx 0.960.9760
Gradus et al. [72]2017J. Trauma Stress4x3 x 0.922240
Birjali et al. [73]2017Procedia Comput. Sci.3 3 xx
Just et al. [74] 2017Nat. Hum. Behav.3x3x x 0.94 79
Ryu et al. [75]2018Psychiatric Invest.3 3 x0.810.84 0.8011,628
Desjardins et al. [76] 2016J. Clin. Psychiatry4x4 x 0.93879
Tzeng [77]2006Worldview Evid. Base. Nurs.2x4 x 63
Baca-García et al. [78] 2006J. Clin. Psychiatry3x4 x1.00 0.99 509
Quan et al. [79] 2014PLoS One2 4x
Litvinova et al. [80]2017Comput. y Sistemas3x4 xx 0.72 1000
Zhang et al. [81] 2019Health Inform. J.3x4 xx 409
McKernan et al. [82]2019Arthritis Care Res.3x2,3 x1.001.00 0.828879
DelPozo-Banos et al. [83]2018JMIR Ment. Health3x1 x0.850.65 0.802604
Burke et al. [84]2018Psychiatry Res.3 4 x 0.89359
Cheng et al. [85] 2017J. Med. Internet Res.4 5 xx 0.61974
Braithwaite et al. [86] 2016JMIR Ment. Health4 5 x0.970.530.91 135
Guan et al. [87] 2015JMIR Ment. Health4 5 x 909
Woo et al. [88] 2015Int. J. Environ. Res. Public Health4 5 x
O’Dea et al. [89] 2015Internet Interv.3 5 x 0.76 14,701
Nguyen et al. [90] 2017Multimed. Tools Appl.3x5 x 0.88
Burnap et al. [91]2017Online Soc. Netw. Media3 5 xx 0.85
Vioules et al. [92]2018IBM J. Res. Dev.3 5 xx 120
Zalar et al. [93]2018Psychiatr. Danub.3 1, 2 x 0.91 78,625
Tran et al. [94] 2015J. Biomed. Inform.3x1,2 x 7578
Leiva-Murillo et al. [95] 2013Comput. Math. Methods Med.3x1,2x 8699
Tran et al. [96] 2014BMC Psychiatry3x1,2 0.97 0.797399
Bernecker et al. [97] 2019Behav. Res. Ther.3x2, 3 x 0.8327,501
Zhong et al. [98]2019Euro. J. Epidemiol.3 2, 3 xx0.960.34 0.83275,843
Morales et al. [99] 2017Front. Psychiatry4x2,3 x0.79 0.71 707
Barros et al. [100] 2017Braz. J. Psychiatry4x2,3 x0.790.770.78 707
Kuroki [101] 2015Am. J. Orthopsychiatry3 2,3 xx 624
Anderson et al. [102] 2015J. Am. Board Fam. Med.3x2,3,4 xx0.960.94 0.9515,761
Levey et al. [103] 2016Mol. Psychiatry3x1,3,4x 0.94114
Colic et al. [104]2018Conf. Proc. IEEE Eng. Med. Biol. Soc.3x3 0.84 738
Aladag et al. [105]2018J. Med. Internet Res.3 5 xx 0.92 785
Choi et al. [106]2018S. Korean J. Affect. Disord.3 1 x 0.72819,951
Downs et al. [107]2018AMIA Annu. Symp. Proc.3x4 xx 1906
Fahey [108]2018Soc. Sci. Med.3 5 xx 0.80 974,891
Zhong et al. [109]2018BMC Med. Inform. Decis. Mak.3x2-4 xx 275,843
Fernandes et al. [110]2018Sci. Rep.3 2,3 xx 0.87
Jordan et al. [111]2018Gen. Hosp. Psychiatry3x3 x 0.83 0.876805
Carson et al. [112]2019PLOS one3x2 xx0.220.830.470.6873
McCoy et al. [113]2018Depress. Anxiety3 1 x 444,317
Connolly et al. [114]2017BMC Bioinform.3x3 x 314
Modai et al. [115] 2002Euro. Psychiatry4x2 0.730.65 250
Rossellini et al. [116]2017Psychol. Med.3 2 x 0.7421,832
Investigations by broad outcome groupings and ML parameters. Outcomes: 1 = suicide death, 2 = suicide attempt/medically serious suicide attempt, 3 = suicidal ideation/state, 4 = other-undifferentiated, 5 = other-social media risk outcomes. Notes: a NLP = natural language processing; b AUC = area under the curve; empty cells indicate missing/unreported data in articles; ML = machine learning.
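As a reminder of what the AUC column in Table 1 measures: AUC equals the probability that a randomly chosen positive case (e.g., an attempter) receives a higher risk score than a randomly chosen negative case, i.e., a rescaled Mann–Whitney U statistic. A short illustration with hypothetical scores (not data from any reviewed study):

```python
def auc(pos_scores, neg_scores):
    """AUC as the probability that a positive case outranks a negative one;
    tied scores count as half a win (rank-statistic definition)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model risk scores for attempters vs. non-attempters:
print(auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3]))  # 8/9, about 0.89
```

On this reading, the AUC values of 0.70-0.99 in Table 1 describe ranking quality, not classification accuracy at any particular decision threshold.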
Table 2. Investigations by study subset and ML parameters.
Citation | Journal | Outcome | Precision | Quality Rating
Liakata et al. 2012 [117] | Biomed. Inform. Insights | Death/NLP of Suicide Notes | 0.60 | 3
Nikfarjam et al. 2012 [118] | Biomed. Inform. Insights | Death/NLP of Suicide Notes | 0.60 | 3
Yeh et al. 2012 [119] | Biomed. Inform. Insights | Death/NLP of Suicide Notes | 0.77 | 3
Cherry et al. 2012 [120] | Biomed. Inform. Insights | Death/NLP of Suicide Notes | 1.00 | 3
Wang et al. 2012 [121] | Biomed. Inform. Insights | Death/NLP of Suicide Notes | 0.67 | 3
Desmet et al. 2012 [122] | Biomed. Inform. Insights | Death/NLP of Suicide Notes | NR | 3
Kovacevic et al. 2012 [123] | Biomed. Inform. Insights | Death/NLP of Suicide Notes | 0.67 | 3
Pak et al. 2012 [124] | Biomed. Inform. Insights | Death/NLP of Suicide Notes | 0.62 | 3
Spasic, 2012 [125] | Biomed. Inform. Insights | Death/NLP of Suicide Notes | 0.55 | 3
McCarthy et al. 2012 [126] | Biomed. Inform. Insights | Death/NLP of Suicide Notes | 0.57 | 3
Wicentowski et al. 2012 [127] | Biomed. Inform. Insights | Death/NLP of Suicide Notes | 0.69 | 3
Sohn, 2012 [128] | Biomed. Inform. Insights | Death/NLP of Suicide Notes | 0.61 | 3
Yang, 2012 [129] | Biomed. Inform. Insights | Death/NLP of Suicide Notes | 0.58 | 3
Investigations (N = 13) by study subset and ML parameters. Outcome focused on sentiment detection of suicide decedent notes using NLP. Notes: Quality ratings were performed according to the Oxford Centre for Evidence-Based Medicine Protocol; ML = machine learning; NLP = natural language processing; Precision = positive predictive value (PPV); NR = not reported.
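Since precision (PPV) is the sole metric Table 2 reports, it is worth spelling out the definition: of all note statements the classifier labels with a given emotion or topic, the fraction that truly carry it. A minimal sketch with hypothetical counts (not taken from any of the thirteen studies):

```python
def precision(true_positives, false_positives):
    """Positive predictive value: TP / (TP + FP)."""
    return true_positives / (true_positives + false_positives)

# Hypothetical: a classifier tags 100 sentences with an emotion, 60 correctly,
# yielding a precision of 0.60 (the level several Table 2 systems report).
print(precision(60, 40))  # 0.6
```

Note that precision says nothing about recall: a system could achieve the table-topping 1.00 by labeling only a handful of unambiguous statements, which is why PPV alone is a limited basis for comparing these systems.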


Bernert, R.A.; Hilberg, A.M.; Melia, R.; Kim, J.P.; Shah, N.H.; Abnousi, F. Artificial Intelligence and Suicide Prevention: A Systematic Review of Machine Learning Investigations. Int. J. Environ. Res. Public Health 2020, 17, 5929.


