Article

Evaluating the Potential of Decision Tree Modeling to Augment Return-to-Duty Decisions Following Major Limb Injury

1	Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
2	Department of Physical Medicine and Rehabilitation, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, USA
3	Center for the Intrepid, Brooke Army Medical Center, San Antonio, TX 78234, USA
4	DoD/VA Extremity Trauma and Amputation Center of Excellence, Bethesda, MD 20889, USA
*	Author to whom correspondence should be addressed.
John Fergason contributed substantially to the design and execution of the study and drafting of the manuscript, qualifying as an author prior to passing away. Concurrence for submission of this manuscript has been provided by his family.
Technologies 2026, 14(2), 107; https://doi.org/10.3390/technologies14020107
Submission received: 5 September 2025 / Revised: 3 November 2025 / Accepted: 2 January 2026 / Published: 8 February 2026

Abstract

Advances in medical care now enable significant functional recovery after traumatic limb injuries. However, the return-to-duty decision-making process is highly variable and dependent on multiple factors. Retaining service members (SMs) post-injury requires a robust method to inform the decision-making process. The collection of outcome data combined with decision tree analysis has the potential to support the development of an efficient decision support tool. Data were combined from two previous research studies on 31 injured SMs (26 with limb salvage wearing custom dynamic ankle–foot orthoses and 5 with varying levels of lower limb amputation wearing prostheses). Forty-two factors across military, demographic, injury, and outcome measures were used to develop categorical tree models to classify return to duty after injury. The feasibility of the final pruned model was evaluated using a 10-fold cross-validation to calculate sensitivity, specificity, and misclassification rate. The overall misclassification rate for the final pruned model was 29% (9/31). The model classified participants as successfully returning to duty based on two factors: (1) Post-Concussion Symptom Scale score < 20 and (2) age at time of assessment ≥ 34 years. These preliminary results suggest that decision tree modeling could be an effective approach to augmenting the return-to-duty decision-making process.

1. Introduction

The United States Military expends significant resources to train, outfit, and deploy a service member (SM) [1]. The SM then gains experience from successive deployments, becoming increasingly valuable to their unit. Unfortunately, major lower extremity injury is prevalent in the military, with approximately 15,000 combat-related extremity injuries occurring during recent conflicts and almost 5000 of those requiring medical evacuation from the theater [2]. These figures represent a significant burden and do not include non-combat-related extremity injuries, such as those from training or motor vehicle accidents. These injuries often result in complex limb salvage operations or amputation, culminating in a large negative impact on military readiness. With advances in orthotic and prosthetic (O&P) care as well as surgical and rehabilitation interventions, injured SMs often regain high levels of function following their injury [3]. Despite this, only 33% of SMs using a custom dynamic ankle–foot orthosis combined with specific training [4], and less than 15% of SMs with amputation [5], return to duty (RTD) in some form following their injury. Given that SMs can return to high function, these low RTD rates suggest that SMs using orthoses or prostheses may be separated from the military unnecessarily. Thus, it is imperative that the RTD decision is as informed as possible to ensure that this crucial experience is retained.
Decision-making for an individual SM to RTD, or not, is a complex process that takes into consideration multiple factors. A range of stakeholders are involved in the RTD decision-making process, including the SM and their family, clinicians, unit leadership, the Medical Evaluation Board, and the Physical Evaluation Board (PEB). Additional factors that influence the final RTD decision include the SM’s current occupation, nature of injury, and time in service prior to the injury. There is a wide range of performance and outcome measures that can be assessed and included in the injured SM’s documentation. Standard clinical measures (e.g., Four-Square-Step-Test, Lower Extremity Functional Scale) have been shown to be relevant and reliable in the O&P patient population [6]. However, these measures often lack validity with stakeholders, as it can be difficult to interpret how assessment results translate to performance on military-specific occupational tasks. Moreover, there are no standardized or validated assessments that evaluate an SM’s capacity to perform their military duties. Previous research has developed and evaluated military-relevant assessments for SMs with severe lower limb injuries requiring O&P care. These assessments include the Stand–Prone–Stand (SPS), Stand–Kneel–Stand (SKS) [7], and Readiness Evaluation during simulated Dismounted Operations (REDOp) [7,8]. The SPS and SKS measures require military-relevant movements (rapid transitions between standing, kneeling, and prone positions) that are notably challenging for O&P patients. Both measures demonstrated excellent inter-session and inter-rater reliability (ICC > 0.8) for both able-bodied and military populations using O&P devices [7,8]. The more comprehensive REDOp requires load carriage and target engagement with a simulated weapon during a simulated dismounted patrol over variable terrain.
The embedded measures have shown promising psychometrics to be able to reliably assess SMs and identify differences between able-bodied and injured SMs [8]. Despite their military and clinical relevance, none of these assessments have been validated for their ability to inform the RTD decision-making process. An effective RTD assessment needs to not only evaluate the multifaceted demands of military service but also be easily understood and interpreted by non-technical personnel.
There are many analysis methods that can generate models that classify samples based on input variables. As such, these types of analyses lend themselves well to the development of decision support tools. Each method has its strengths and weaknesses regarding the required size of the training dataset, computational complexity, and interpretability. Common methods include regression (specifically logistic regression), neural networks, and decision trees [9]. Logistic regression generates an equation that takes in measured variables and predicts the probability of a binary outcome. These models can be created with a moderate amount of training data but are largely limited to binary classification and require a threshold to translate the probability into a decision (yes/no). While an equation may be technically interpretable, it is not user-friendly for clinical personnel. Neural networks are powerful at predicting and classifying outcomes based on a wide range of inputs. However, they often require a large training dataset and are “black boxes”, making it very difficult to interpret how they reach a decision. Unlike these methods, decision trees are well-suited as decision support tools because they transparently communicate to a lay audience, in an easy-to-understand fashion and without additional calculations, what decision should be made.
In decision tree analysis, a tree of decision points (i.e., yes/no questions), called nodes, is constructed by selecting, at each step, the variable and associated cut-off score that best classify the data. As a result, a tree model identifies which subset of assessments is most relevant, along with the corresponding cut-off scores that can be used as clinical benchmarks during rehabilitation. Decision tree models are effective at utilizing multiple data types and can handle missing data points. Overall, the decision tree analysis produces a finalized model that is understandable and interpretable to a diverse audience.
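The node-selection step described above can be sketched in a few lines of code. The following is an illustrative pure-Python example only (the study itself used R and the rpart package); the toy data, the variable layout, and the use of Gini impurity as the splitting criterion are assumptions for illustration.

```python
# Minimal sketch of how one decision-tree node is chosen: try every
# variable/threshold pair and keep the split with the lowest weighted
# Gini impurity. Illustrative only; not the authors' rpart implementation.
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels (0 = perfectly pure)."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels):
    """Return (feature index, threshold) minimizing weighted Gini impurity."""
    best = (None, None, float("inf"))
    for j in range(len(rows[0])):
        for threshold in sorted({r[j] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[j] < threshold]
            right = [y for r, y in zip(rows, labels) if r[j] >= threshold]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
            if score < best[2]:
                best = (j, threshold, score)
    return best[0], best[1]

# Hypothetical toy rows: [PCSS-like score, age]; labels are RTD categories.
X = [[5, 36], [8, 28], [25, 35], [30, 40], [12, 30], [40, 25]]
y = ["Full", "Full", "None", "None", "Full", "None"]
print(best_split(X, y))  # → (0, 25): splits on the first variable at 25
```

Repeating this search on each resulting subset of rows grows the tree; pruning then trims splits that do not generalize.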
The purpose of this study was to examine decision tree analysis models applied to previously collected datasets on injured SMs during O&P care, along with follow-up surveys, to predict RTD status following lower limb injury. The goal was to establish the feasibility of a decision support tool that would optimize the RTD decision-making process for military personnel who use O&P devices. This manuscript describes the datasets used in the generation and validation of the models, the model generation and validation process, and presents the model and validation results along with discussions of limitations and future directions.

2. Methods

2.1. Participants

To develop and evaluate the decision support tool, we leveraged the military-relevant assessment data collected on injured SMs as part of previous research efforts. The dataset comprised 31 participants (30 males; age: 34 ± 8 years; height: 1.79 ± 0.09 m; weight: 92.8 ± 16.8 kg) drawn from two previous studies [8,10]. All participants received their rehabilitation and O&P care at the same location. Injuries included 26 individuals with limb salvage using orthoses (22 unilateral and 4 bilateral) and 5 individuals with unilateral transtibial amputation. Informed consent was obtained from all individual participants included in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the San Antonio Institutional Review Board (22-15641) on 28 February 2022.

2.2. Data Collection

Participants from the two previous studies were contacted to inquire about details related to their duty status following the completion of their rehabilitation. The information included whether the SM returned to duty following the completion of their rehabilitation and in what capacity (e.g., same MOS, reclassified, or limited duty). This information was combined with previously captured data to form the final dataset (Table 1).

2.3. Variables

Forty-two variables were used in the analysis, including six demographic measures, seven data points related to the injury, five data points related to their military service, and twenty-four clinical outcome measures (Table 1). Since data were drawn from two different studies, not all participants completed all 24 clinical outcome measures. The number of participants from each study included in the dataset is reported for each variable (Table 1).
Below is a summary of the clinical outcome measures included in the analysis. More details about their implementation can be found in their individual references.
The Patient-Reported Outcomes Measurement Information System (PROMIS) assesses patient-reported health status across a range of domains using a collection of instruments [11,12]. Participants completed PROMIS measures for Self-Efficacy, Pain Interference, Pain Behavior, and Cognitive Function. We used the PTSD Checklist—Military to assess symptoms in response to “stressful military experiences” [13]. Self-reported symptoms related to concussive injury were captured using the Post-Concussion Symptom Scale (PCSS) [14,15]. We used the Lower Extremity Functional Scale [6], Modified Oswestry Low Back Pain Questionnaire [16], Roland–Morris Disability Questionnaire [17], and numerical pain scale [18] to evaluate physical function and pain. The Veterans RAND 36 was completed to evaluate health-related quality of life [19]. The dataset included both the physical and mental sub-scores. The NASA Task Load Index recorded perception of overall workload when completing a military simulation [20].
We evaluated physical function using standard physical performance measures. These measures included the SPS and SKS on both the injured and uninjured sides [7] and the Four-Square Step Test [21]. We also used the REDOp assessment to evaluate performance within a military-relevant scenario [8,10]. During the REDOp assessment, participants complete as much as they can of a simulated dismounted patrol in a simulator consisting of a 300° dome screen and a treadmill mounted on a six-degree-of-freedom motion platform. During the simulated patrol, participants walked over simulated variable terrain and completed an ambush shooting task in which they had to make shoot/no-shoot decisions, all while their motion was tracked using a twenty-six-camera infrared motion capture system. Several variables were captured from the REDOp assessment. Distance completed was used as a surrogate measure of activity tolerance. Reaction time was calculated during the ambush task as the average time from when a target was revealed to when a shot was registered. Shooting accuracy was calculated as the percent of correct responses out of all responses, and precision as the percent of shot targets that were supposed to be shot. Participants’ stability while walking on variable terrain was calculated using the whole-body angular momentum in the frontal, transverse, and sagittal planes [22,23].

2.4. Data Analysis

Participants were grouped into the following three categories based on their RTD outcome: (1) Full RTD, defined as RTD with no restrictions or limitations; (2) Limited RTD, defined as RTD but with some limitations to activity or deployment; and (3) None, if they were unable to RTD. We used a categorical decision tree analysis to determine which factors and cut-off scores classify those groups. The initial unpruned model perfectly classifies the data but is overly complex and overfits the data, resulting in poor predictive ability. To create the final model, cost-complexity pruning based on 10-fold cross-validation was utilized. For cross-validation, the data are randomly divided into 10 parts called folds. Then, 10 models are created, each using 9 of the 10 folds as training data with the remaining fold acting as a test dataset, so that each fold serves as the test dataset exactly once. The cross-validation error rate was calculated as the sum of the out-of-sample error rates for the 10 models. The cross-validation error was calculated for different model complexities (i.e., numbers of splits). To account for randomness in the calculation of the cross-validation error rate, one standard error was added to the error rate for each split. The number of splits, and the corresponding model, that minimized this error was chosen as the final model, as it represents the optimal balance between classification performance and complexity.
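The fold mechanics described above can be sketched as follows. This is not the authors' R/rpart implementation; `fit` is a hypothetical stand-in for any model-fitting routine (exercised here with a trivial majority-class classifier), and the returned error is the summed out-of-sample misclassification count divided by the sample size.

```python
# Sketch of k-fold cross-validation: each fold is held out exactly once
# while a model is trained on the remaining folds. Illustrative only.
import random
from collections import Counter

def cross_validation_error(X, y, fit, k=10, seed=0):
    """Out-of-sample misclassification rate estimated with k-fold CV.

    X: list of feature vectors; y: list of labels;
    fit: callable (X_train, y_train) -> model, where model(row) -> label.
    """
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)       # random fold assignment
    folds = [idx[i::k] for i in range(k)]  # k roughly equal folds
    wrong = 0
    for fold in folds:                     # each fold is the test set once
        train = [i for i in idx if i not in fold]
        model = fit([X[i] for i in train], [y[i] for i in train])
        wrong += sum(model(X[i]) != y[i] for i in fold)
    return wrong / len(X)

# Hypothetical stand-in learner: always predict the majority training label.
def majority_fit(X_train, y_train):
    majority = Counter(y_train).most_common(1)[0][0]
    return lambda row: majority

X = [[i] for i in range(20)]
y = ["Full"] * 14 + ["None"] * 6
print(cross_validation_error(X, y, majority_fit))  # → 0.3 (6 of 20 wrong)
```

In cost-complexity pruning, this error estimate is computed for each candidate tree size, and the smallest tree within one standard error of the minimum is kept.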
The decision tree analysis is considered a “greedy” algorithm since it only keeps the best variable/threshold combination at each step. As a result, other variables that are important for classifying the data may be missed due to the specific dataset. The Variable Importance score (VIMP), also known as the permutation (Breiman–Cutler) importance, is a quantitative measure of how important each explanatory variable is in influencing the model prediction. The computation of the VIMP involves a nested simulation loop in which the outer loop performs bootstrapping and the inner loop permutes predictors (randomly changing the order of a predictor's values). In each step of the bootstrapping outer loop, rows are drawn randomly with replacement. The randomly selected rows are called In-the-Bag (ITB), and the rows left out are called Out-of-the-Bag (OOB). In each bootstrap simulation, the overall data are dichotomized into training and test data, with the ITB and OOB datasets serving their respective purposes, and a tree model is built using the training ITB data. The inner simulation loop permutes the values of a given predictor variable in the OOB portion of the dataset. The VIMP is then computed by comparing the predictions from the true predictor to the predictions from the permuted predictor in the OOB portion of the dataset. The chief idea behind permutation comes from the permutation test: if we see large changes between the permuted predictions and the true predictions, this can only be because the tree model actually used this predictor. Similarly, if permuting the predictor variable changes the predictions from the tree model only slightly, then the predictor must be unimportant, and it will receive a small VIMP. Overall, the VIMP is a useful quantitative score for ranking how important each variable is as a tree prediction classifier.
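The inner permutation loop can be illustrated with a short sketch. This is not the implementation used in the study; the model, OOB data, and scoring (accuracy drop after shuffling one column) are assumptions chosen to show the core idea.

```python
# Sketch of permutation (Breiman–Cutler) importance: a predictor's score is
# how much out-of-bag accuracy drops when its column is shuffled.
import random

def permutation_importance(model, X_oob, y_oob, feature, n_repeats=100, seed=0):
    """Mean drop in OOB accuracy after shuffling one predictor's column.

    model: callable row -> label; X_oob: list of feature vectors (lists).
    """
    rng = random.Random(seed)
    n = len(y_oob)
    base_acc = sum(model(r) == t for r, t in zip(X_oob, y_oob)) / n
    drops = []
    for _ in range(n_repeats):
        col = [r[feature] for r in X_oob]
        rng.shuffle(col)                   # permute just this predictor
        permuted = [r[:feature] + [v] + r[feature + 1:]
                    for r, v in zip(X_oob, col)]
        acc = sum(model(r) == t for r, t in zip(permuted, y_oob)) / n
        drops.append(base_acc - acc)
    return sum(drops) / n_repeats

# Hypothetical model that uses only the first predictor (a PCSS-like score).
model = lambda row: "RTD" if row[0] < 20 else "None"
X_oob = [[5, 1], [30, 2], [10, 3], [40, 4], [15, 5], [25, 6]]
y_oob = ["RTD", "None", "RTD", "None", "RTD", "None"]
print(permutation_importance(model, X_oob, y_oob, feature=0) > 0)  # True
print(permutation_importance(model, X_oob, y_oob, feature=1))      # 0.0
```

Shuffling the predictor the model actually uses degrades its predictions, yielding a positive importance; shuffling an ignored predictor changes nothing, yielding a score of zero.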

2.5. Statistical Analysis

The validity of our final model was evaluated based on the sensitivity (true positive rate), specificity (true negative rate), and overall misclassification rate calculated from the 10-fold cross-validation, as well as the balanced accuracy. The balanced accuracy is the arithmetic average of the sensitivity and specificity and is a measure of overall classifier performance under the assumption that false positives and false negatives are equally important. A balanced accuracy of 70–80% is considered ‘acceptable’, 80–90% ‘good’, 90–95% ‘excellent’, and 95–100% ‘outstanding’ [24]. The validation calculations were only run on the final, pruned model. As the model classifies samples into three RTD categories (Full, Limited, and None), there are separate values for each outcome. Treating the Full and Limited outcomes together as positive, we were able to evaluate the outcome of the model as a whole. All models were generated in R (v 4.5.1) [25] using the rpart package (v 4.1.24) [26].

3. Results

Two predictive tree models were created: (1) an initial, unpruned model and (2) a final pruned model. For our analysis, the real-world RTD classification was as follows: 12 had Full RTD, 11 had Limited RTD, and 8 had no RTD.

3.1. Unpruned Model

The overfit model included multiple variables to partition nodes as follows: PCSS, age, SPS, SKS on the injured side, PROMIS General Self-Efficacy Score, and REDOp shooting accuracy (Figure 1).

3.2. Finalized Pruned Model

The finalized model only contained two nodes: PCSS < 20 and age ≥ 34 years (Figure 2). Overall, the model correctly classified 22/31 (71%) participants. The model classified 11/12 participants as Full, with a specificity of 93.6%, a sensitivity of 73.3%, and a balanced accuracy of 83.5%. The model classified 5/11 participants as Limited RTD, with a specificity of 75%, a sensitivity of 71.4%, and a balanced accuracy of 73.2%. The model classified 6/8 participants as No RTD, with a specificity of 90.9%, a sensitivity of 66.7%, and a balanced accuracy of 78.8%. Treating Full and Limited as positive, the model’s binary performance was an accuracy of 84%, sensitivity of 67%, specificity of 91%, and balanced accuracy of 79%.
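As a quick arithmetic check, each balanced accuracy reported above follows directly from the definition in Section 2.5 (the arithmetic mean of sensitivity and specificity). The snippet below verifies this for each outcome, allowing a small tolerance because the reported values are rounded to one decimal place.

```python
# Consistency check: balanced accuracy = (sensitivity + specificity) / 2,
# using the per-outcome values reported for the final pruned model.
reported = [
    ("Full RTD",    73.3, 93.6, 83.5),
    ("Limited RTD", 71.4, 75.0, 73.2),
    ("No RTD",      66.7, 90.9, 78.8),
]
for outcome, sensitivity, specificity, balanced in reported:
    computed = (sensitivity + specificity) / 2
    # Tolerance covers one-decimal rounding in the reported values.
    assert abs(computed - balanced) < 0.06, outcome
```

All three reported balanced accuracies are consistent with their sensitivity/specificity pairs.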

3.3. Variable Importance Scores

Consistent with the final pruned model, the PCSS had the highest VIMP of 4.809 (Table 2). While it was not included in the tree model, the PTSD Checklist—Military had the second-highest VIMP at 3.438.

4. Discussion

The final model had a “good” balanced accuracy for Full RTD, and “acceptable” balanced accuracies for Limited and No RTD. The model was able to effectively classify the participants using two clinic-friendly assessments, PCSS and age at time of assessment. This suggests that these assessments are important to capture in injured SMs and that this classification tree model may be useful for informing RTD decision-making.
Both the tree model and the VIMP identified the PCSS as the most important variable for classifying RTD outcome. The PCSS is a self-reported rating of symptoms associated with a concussion; it includes 22 questions covering physical, cognitive, emotional, and sleep domains, each rated on a 0–6 scale for symptom severity. The maximum score is 132, with higher scores indicating greater severity of concussive symptoms. The decision tree analysis identified a cutoff score of 20 on the PCSS as effective for classifying RTD outcome. Eagle et al. determined PCSS cutoff scores for sports-related concussion, where a score of seven was able to differentiate groups (i.e., concussion group versus healthy controls) [27]. Most of the participants’ injuries were the result of trauma, which may have coincided with a concussive event. The analysis used the total PCSS score, which is an aggregate of all reported symptoms and their severities. While the total PCSS score is an effective measure of overall symptoms, without evaluating the individual responses it is difficult to say which specific aspects of the PCSS account for its strong ability to classify RTD outcomes.
In addition to the PCSS, age ≥ 34 years was the only other factor that separated Full from Limited RTD. It is important to note that the age variable used in the analysis was the age at the time of the data collection. While this is not necessarily the age at injury or at the time of the RTD determination, participants were still receiving some level of O&P care at the time of data collection. It may seem that younger SMs would be more likely to RTD since they would be more fit and better able to recover from their injuries; however, there are a few reasons that older SMs may be more likely to RTD. Older SMs likely have more experience in the military, which can be invaluable to mission safety and success, and these SMs may be selected to RTD in order to retain that crucial experience. While age was a factor, rank was not, suggesting that more than just experience is associated with the RTD classification. An SM who enlisted at 18 years old would, at 34, have been in the military for 16 years, close to the 20 years required for full retirement eligibility. Proximity to retirement may therefore be a factor, but the fact that age separated the Full, rather than the Limited, RTD group suggests that something else related to age is associated with RTD outcomes. This underscores recent findings on the variety of factors that go into RTD decisions and how SMs with extremity trauma felt physical barriers and limitations were easier to overcome than institutional barriers and personal factors [28]. Those factors (e.g., interpersonal, health care system, and institutional) would be heavily influenced by age; thus, age being a predictor here could reflect RTD factors outside the scope of the current study.
Out of all the demographic and clinical outcome measures that were included in the analysis, only age and PCSS were included in the decision tree model as factors predictive of RTD outcome following injury. While it may be expected that measures of physical performance would be most important to RTD determination, the model suggests that more personal and emotional factors are more predictive. This is also shown in the VIMP, with the top three variables being the PCSS, PCL-M, and age. The SPS is the only physical performance measure on the list (Table 2), and the large difference in VIMPs between SPS and PCSS makes it unlikely that it would be included in future models. This is in line with the findings of Wilson et al. (2021) [28] that showed injured SMs view the injury and physical limitations as less important to the RTD decision. Instead, they noted factors of mental health and time in service, which align with the age and PCSS measures in the models and PCL-M from the VIMPs [28]. This further emphasizes the need for multidimensional measures (e.g., acceptance by organizational command) to be included in these decision support tools.

Study Limitations

While the model was able to effectively classify the participants, there are aspects of its development that may limit its adoption. The model was created using a limited sample size of 31 SMs. Further, the participants all had major lower extremity injuries, which is not representative of all the possible injuries seen across the services. Therefore, research is needed to determine whether these factors are consistent across injury types or whether injury-specific models need to be created. While the process of cost-complexity pruning reduces the effects of overfitting, the small sample size and specific injury characteristics limit the generalizability of the model and make it inappropriate for broader SM populations. The data for this study are a combination of two different datasets. These previous studies had differing research questions and, subsequently, different dependent variables, albeit with some overlap. A more comprehensive dataset could provide a more robust model and rely less upon surrogate measures. Additionally, while the analysis included a wide range of clinical and demographic measures as predictors, it did not include all potential measures. Other factors or assessments may be better predictors of RTD outcome; for example, including the individual responses on the PCSS may provide more insights.

5. Conclusions

A decision tree model was generated that can predict RTD outcomes for SMs with O&P devices. The model indicated that an important measure of RTD status (i.e., Full and Limited) was the PCSS. This questionnaire is easy to administer and can provide meaningful clinical information. This study examined the feasibility of decision tree modeling for classifying RTD outcomes and underscores the potential of similar analytical processes in the creation of decision support tools. However, this model is based on a limited dataset, so it is better treated as a proof of concept than a fully validated tool ready for clinical implementation. Like all decision support tools, it can provide useful information to inform the decision, but it is only one aspect that stakeholders need to consider when making the RTD determination. The study demonstrates the potential of leveraging machine learning methods and, with sufficient datasets, creating decision support tools that can help inform the RTD decision-making process as well as other military and health care questions.

Author Contributions

R.C.S. designed this research, analyzed the data, and drafted the original manuscript. N.A.L. provided project management and assisted in data processing. D.K. developed and ran the statistical analysis and wrote the statistical sections. W.L.C., J.F., M.L., and J.A. provided subject matter expertise and support to the project. All authors have read and agreed to the published version of the manuscript.

Funding

The original data collection was supported by the Center for Rehabilitation Sciences Research awards HU00012120074 and HU00012220038. The current project was supported by the CDMRP OPORP award W81XWH2120021.

Institutional Review Board Statement

All studies were approved by the San Antonio Institutional Review Board (22-15641) on 28 February 2022.

Informed Consent Statement

Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

The data underlying this article will be shared on reasonable request to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

GSW SF T-Score	PROMIS General Self-Efficacy Score
ITB	In-the-Bag
IRB	Institutional Review Board
NASA	National Aeronautics and Space Administration
O&P	Orthotic and Prosthetic
OOB	Out-of-the-Bag
PEB	Physical Evaluation Board
PCL-M	PTSD Checklist—Military
PCSS	Post-Concussion Symptom Scale
PROMIS	Patient-Reported Outcomes Measurement Information System
PTSD	Post-Traumatic Stress Disorder
REDOp	Readiness Evaluation during Simulated Dismounted Operations
RTD	Return to duty
SM	Service member
SPS	Stand–Prone–Stand
SKS-(R/L)	Stand–Kneel–Stand (Right or Left Leg)
VIMP	Variable Importance

References

  1. Niebuhr, D.W.; Page, W.F.; Cowan, D.N.; Urban, N.; Gubata, M.E.; Richard, P. Cost-effectiveness analysis of the U.S. Army Assessment of Recruit Motivation and Strength (ARMS) program. Mil. Med. 2013, 178, 1102–1110. [Google Scholar] [CrossRef]
  2. Belmont, P.J.; Schoenfeld, A.J.; Goodman, G. Epidemiology of combat wounds in Operation Iraqi Freedom and Operation Enduring Freedom: Orthopaedic burden of disease. J. Surg. Orthop. Adv. 2010, 19, 2–7. [Google Scholar]
  3. Mazzone, B.; Farrokhi, S.; Depratti, A.; Stewart, J.; Rowe, K.; Wyatt, M. High-Level Performance After the Return to Run Clinical Pathway in Patients Using the Intrepid Dynamic Exoskeletal Orthosis. J. Orthop. Sports Phys. Ther. 2019, 49, 529–535. [Google Scholar] [CrossRef]
  4. Franklin, N.; Hsu, J.R.; Wilken, J.; McMenemy, L.; Ramasamy, A.; Stinner, D.J. Advanced Functional Bracing in Lower Extremity Trauma: Bracing to Improve Function. Sports Med. Arthrosc. Rev. 2019, 27, 107–111. [Google Scholar] [CrossRef]
  5. Belisle, J.G.; Wenke, J.C.; Krueger, C.A. Return-to-duty rates among US military combat-related amputees in the global war on terror: Job description matters. J. Trauma Acute Care Surg. 2013, 75, 279–286. [Google Scholar] [CrossRef] [PubMed]
  6. Binkley, J.M.; Stratford, P.W.; Lott, S.A.; Riddle, D.L. The Lower Extremity Functional Scale (LEFS): Scale development, measurement properties, and clinical application. North American Orthopaedic Rehabilitation Research Network. Phys. Ther. 1999, 79, 371–383. [Google Scholar] [PubMed]
  7. Sheehan, R.C.; Ohm, K.A.; Wilken, J.M.; Rabago, C.A. Novel Metrics for Assessing Mobility During Ground-Standing Transitions. Mil. Med. 2023, 188, e1975–e1980. [Google Scholar] [CrossRef] [PubMed]
  8. Rabago, C.A.; Sheehan, R.C.; Schmidtbauer, K.A.; Vernon, M.C.; Wilken, J.M. A novel assessment for Readiness Evaluation during Simulated Dismounted Operations: A reliability study. PLoS ONE 2019, 14, e0226386. [Google Scholar] [CrossRef]
  9. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed.; Springer Series in Statistics; Springer: New York, NY, USA, 2009; pp. XXII–745. ISBN 978-0-387-84857-0. [Google Scholar] [CrossRef]
  10. Sheehan, R.C.; Fain, A.C.; Wilson, J.B.; Wilken, J.M.; Rabago, C.A. Inclusion of a Military-specific, Virtual Reality-based Rehabilitation Intervention Improved Measured Function, but Not Perceived Function, in Individuals with Lower Limb Trauma. Mil. Med. 2021, 186, e777–e783. [Google Scholar] [CrossRef]
  11. Hays, R.D.; Bjorner, J.B.; Revicki, D.A.; Spritzer, K.L.; Cella, D. Development of physical and mental health summary scores from the patient-reported outcomes measurement information system (PROMIS) global items. Qual. Life Res. 2009, 18, 873–880. [Google Scholar] [CrossRef]
  12. Tucker, C.A.; Escorpizo, R.; Cieza, A.; Lai, J.S.; Stucki, G.; Ustun, T.B.; Kostanjsek, N.; Cella, D.; Forrest, C.B. Mapping the content of the Patient-Reported Outcomes Measurement Information System (PROMIS(R)) using the International Classification of Functioning, Health and Disability. Qual. Life Res. 2014, 23, 2431–2438. [Google Scholar] [CrossRef]
  13. Keen, S.M.; Kutter, C.J.; Niles, B.L.; Krinsley, K.E. Psychometric properties of PTSD Checklist in sample of male veterans. J. Rehabil. Res. Dev. 2008, 45, 465–474. [Google Scholar] [CrossRef] [PubMed]
  14. Lovell, M. The neurophysiology and assessment of sports-related head injuries. Phys. Med. Rehabil. Clin. N. Am. 2009, 20, 39–53. [Google Scholar] [CrossRef] [PubMed]
  15. Lovell, M.R.; Iverson, G.L.; Collins, M.W.; Podell, K.; Johnston, K.M.; Pardini, D.; Pardini, J.; Norwig, J.; Maroon, J.C. Measurement of symptoms following sports-related concussion: Reliability and normative data for the post-concussion scale. Appl. Neuropsychol. 2006, 13, 166–174. [Google Scholar] [CrossRef]
  16. Page, S.J.; Shawaryn, M.A.; Cernich, A.N.; Linacre, J.M. Scaling of the revised Oswestry low back pain questionnaire. Arch. Phys. Med. Rehabil. 2002, 83, 1579–1584. [Google Scholar] [CrossRef] [PubMed]
  17. Roland, M.; Morris, R. A study of the natural history of back pain. Part I: Development of a reliable and sensitive measure of disability in low-back pain. Spine 1983, 8, 141–144. [Google Scholar] [CrossRef]
  18. Cook, K.F.; Dunn, W.; Griffith, J.W.; Morrison, M.T.; Tanquary, J.; Sabata, D.; Victorson, D.; Carey, L.M.; MacDermid, J.C.; Dudgeon, B.J.; et al. Pain assessment using the NIH Toolbox. Neurology 2013, 80, S49–S53. [Google Scholar] [CrossRef]
  19. Ware, J.E., Jr.; Sherbourne, C.D. The MOS 36-item short-form health survey (SF-36). I. Conceptual framework and item selection. Med. Care 1992, 30, 473–483. [Google Scholar] [CrossRef]
  20. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Human Mental Workload; Hancock, P.A., Meshkati, N., Eds.; North Holland Press: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183. [Google Scholar]
  21. Dite, W.; Temple, V.A. A clinical test of stepping and change of direction to identify multiple falling older adults. Arch. Phys. Med. Rehabil. 2002, 83, 1566–1571. [Google Scholar] [CrossRef]
  22. Herr, H.; Popovic, M. Angular momentum in human walking. J. Exp. Biol. 2008, 211, 467–481. [Google Scholar] [CrossRef]
  23. Sheehan, R.C.; Beltran, E.J.; Dingwell, J.B.; Wilken, J.M. Mediolateral angular momentum changes in persons with amputation during perturbed walking. Gait Posture 2015, 41, 795–800. [Google Scholar] [CrossRef] [PubMed]
  24. Varoquaux, G.; Colliot, O. Evaluating machine learning models and their diagnostic value. In Machine Learning for Brain Disorders; Colliot, O., Ed.; Humana: New York, NY, USA, 2023; pp. 601–630. [Google Scholar]
  25. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021. [Google Scholar]
  26. Therneau, T.M.; Atkinson, E.J. An Introduction to Recursive Partitioning Using the RPAT Routines; Mayo Foundation: Rochester, MN, USA, 2023. [Google Scholar]
  27. Eagle, S.R.; Womble, M.N.; Elbin, R.J.; Pan, R.; Collins, M.W.; Kontos, A.P. Concussion Symptom Cutoffs for Identification and Prognosis of Sports-Related Concussion: Role of Time Since Injury. Am. J. Sports Med. 2020, 48, 2544–2551. [Google Scholar] [CrossRef] [PubMed]
  28. Wilson, J.B.; Rábago, C.A.; Hoppes, C.W.; Harper, P.L.; Gao, J.; Russell Esposito, E. Should I stay or should I go? Identifying intrinsic and extrinsic factors in the decision to return to duty following lower extremity injury. Mil. Med. 2021, 186, 430–439. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Initial unpruned decision tree for classifying return to duty. Factors include Post-Concussion Symptom Scale (PCSS), age, Stand–Prone–Stand (SPS2), Stand–Kneel–Stand on the injured side (SKS_I), PROMIS General Self Efficacy Score (GSW SF T-Score), and REDOp shooting accuracy.
Figure 2. Final pruned decision tree model. Post-Concussion Symptom Scale (PCSS) separated Return to Duty (RTD) from No RTD. Age further separated Full RTD and Limited RTD.
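The two splits in the final pruned tree can be expressed as a short classification rule. Below is a minimal Python sketch using the PCSS < 20 and age ≥ 34 cutpoints reported for the model; note that the assignment of the older-age branch to Full RTD (rather than Limited RTD) is an illustrative assumption, since the caption does not state which side of the age split maps to which outcome.

```python
def classify_rtd(pcss: float, age: float) -> str:
    """Classify return-to-duty (RTD) status using the two splits of the
    final pruned tree: PCSS < 20, then age >= 34.

    The thresholds come from the reported model; which age branch maps
    to Full vs. Limited RTD is an illustrative assumption here.
    """
    if pcss >= 20:   # high post-concussion symptom burden -> no return to duty
        return "No RTD"
    if age >= 34:    # hypothetical assignment of the older-age branch
        return "Full RTD"
    return "Limited RTD"
```

For example, `classify_rtd(pcss=5, age=40)` would fall into the low-symptom, older-age leaf.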
Table 1. List of the variables included in the analysis and the number of participants from each study that had values for each variable.
| Category | Variable | Study 1 (n = 14) | Study 2 (n = 17) |
| --- | --- | --- | --- |
| Military | Years of service prior to injury | 14 | 17 |
| | Desire to return to duty | 14 | 17 |
| | Officer or Enlisted | 14 | 17 |
| | Combat or Support role | 14 | 17 |
| | Branch | 14 | 17 |
| Demographic | Height | 14 | 17 |
| | Weight | 14 | 17 |
| | Age | 14 | 17 |
| | Sex | 14 | 17 |
| | Education | 14 | 17 |
| | Race | 14 | 17 |
| Injury | Change in injury status since assessment | 14 | 17 |
| | Injury Side | 14 | 17 |
| | Type (amputation or limb salvage) | 14 | 17 |
| | Joint injured | 14 | 17 |
| | Nerve involvement | 14 | 17 |
| | Time Since Injury | 14 | 17 |
| | Time to first ambulation after injury | 14 | 17 |
| Clinical Outcomes | Stand–Prone–Stand (SPS) | 14 | 17 |
| | Stand–Kneel–Stand Left (SKS-L) | 14 | 16 |
| | Stand–Kneel–Stand Right (SKS-R) | 14 | 16 |
| | Four Square Step Test | 8 | 0 |
| | PROMIS—Self-Efficacy | 0 | 17 |
| | PROMIS—Pain Interference | 0 | 17 |
| | PROMIS—Pain Behavior | 0 | 17 |
| | PROMIS—Cognitive Function | 0 | 17 |
| | PTSD Checklist—Military | 14 | 17 |
| | Post-Concussion Symptom Scale (PCSS) | 14 | 17 |
| | Lower Extremity Functional Scale | 14 | 17 |
| | Modified Oswestry Low Back Pain Questionnaire | 0 | 17 |
| | Roland–Morris Disability Questionnaire | 0 | 17 |
| | NASA Task Load Index | 13 | 0 |
| | Veterans RAND 36 Physical | 14 | 0 |
| | Veterans RAND 36 Mental | 14 | 0 |
| | Baseline Pain Level | 14 | 17 |
| REDOp | Distance Completed | 14 | 17 |
| | Shooting Accuracy | 14 | 17 |
| | Shooting Precision | 14 | 17 |
| | Reaction Time | 14 | 17 |
| | Angular Momentum Frontal Plane | 14 | 11 |
| | Angular Momentum Transverse Plane | 14 | 11 |
| | Angular Momentum Sagittal Plane | 14 | 11 |
Table 2. Variable importance scores (VIMPs) for dependent variables used in the finalized logic tree model.
| Dependent Variable | VIMP |
| --- | --- |
| Post-Concussion Symptom Scale (PCSS) | 4.809 |
| PTSD Checklist—Military (PCL-M) | 3.438 |
| Age | 2.997 |
| Stand–Prone–Stand | 2.621 |
| Pain at Baseline | 1.803 |
| Lower Extremity Functional Scale | 1.803 |
| Time Since Injury | 1.635 |
| Weight | 1.202 |
| Distance Completed | 1.09 |
| Height | 1.09 |
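The feasibility evaluation summarized in the Abstract (10-fold cross-validation yielding sensitivity, specificity, and a misclassification rate) can be illustrated with a small sketch. The study fit categorical trees with R's rpart; the NumPy version below substitutes a single-threshold ("stump"-like) classifier and fully synthetic data, so every value and variable name is illustrative rather than a reproduction of the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 31-participant dataset (illustrative only):
# a PCSS score and a binary RTD outcome loosely correlated with it.
n = 31
pcss = rng.uniform(0, 60, n)
rtd = (pcss + rng.normal(0, 10, n) < 20).astype(int)  # 1 = returned to duty

def fit_threshold(scores, labels):
    """Pick the cutoff minimizing training misclassification,
    predicting RTD when the score falls below the cutoff."""
    cands = np.unique(scores)
    errs = [np.mean((scores < c).astype(int) != labels) for c in cands]
    return cands[int(np.argmin(errs))]

# 10-fold cross-validation: fit the cutoff on 9 folds, predict the held-out fold.
folds = np.array_split(rng.permutation(n), 10)
preds = np.empty(n, dtype=int)
for fold in folds:
    train = np.setdiff1d(np.arange(n), fold)
    cut = fit_threshold(pcss[train], rtd[train])
    preds[fold] = (pcss[fold] < cut).astype(int)

miscls = np.mean(preds != rtd)
sens = np.mean(preds[rtd == 1] == 1)   # true-positive rate among RTD cases
spec = np.mean(preds[rtd == 0] == 0)   # true-negative rate among No-RTD cases
print(f"misclassification={miscls:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
```

Because every participant is predicted exactly once from a model that never saw them, the pooled misclassification rate estimates out-of-sample performance, which is the role the 10-fold procedure played in the paper's feasibility check.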

Share and Cite

MDPI and ACS Style

Sheehan, R.C.; Levine, N.A.; King, D.; Childers, W.L.; Fergason, J.; Loftsgaarden, M.; Alderete, J. Evaluating the Potential of Decision Tree Modeling to Augment Return-to-Duty Decisions Following Major Limb Injury. Technologies 2026, 14, 107. https://doi.org/10.3390/technologies14020107

