Review

Clinical Utility of Ocular Assessments in Sport-Related Concussion: A Scoping Review

Department of Sports, Exercise, and Nutrition, Atlantic Technological University, H91 T8NW Galway City, Ireland
* Author to whom correspondence should be addressed.
J. Funct. Morphol. Kinesiol. 2024, 9(3), 157; https://doi.org/10.3390/jfmk9030157
Submission received: 2 August 2024 / Revised: 27 August 2024 / Accepted: 27 August 2024 / Published: 4 September 2024
(This article belongs to the Special Issue Sports Medicine and Public Health)

Abstract

Background/objectives: Ocular tools and technologies may be used in the diagnosis of sport-related concussions (SRCs), but their clinical utility can vary. The following study aimed to review the literature pertaining to the reliability and diagnostic accuracy of such assessments. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for scoping reviews was adhered to. Reference standards for reliability (RSR ≥ 0.75) and diagnostic accuracy (RSDA ≥ 0.80) were implemented to aid interpretation. Results: In total, 5223 articles were screened against the PCC (Population, Concept, Context) criteria, with 74 included in the final analysis. Assessments included the King-Devick (KD) test (n = 34), vestibular-ocular motor screening (VOMs) and/or near point of convergence (NPC) (n = 25), and various alternative tools and technologies (n = 20). The KD met the RSR, but its RSDA beyond amateur sport was limited. The NPC met the RSR but did not meet the RSDA for identifying SRCs. The VOMs had conflicting RSR for its total score and did not meet the RSR in its individual tests; however, the VOMs total score did perform well in RSDA for SRCs. No alternative tool or technology met both the RSR and RSDA. Conclusion: Ocular tools are useful, rapid screening tools but should remain within a multi-modal assessment for SRCs at this time.

1. Introduction

Sport-related concussions (SRCs) are mild traumatic brain injuries whereby forces are transmitted to the brain via direct head contact or via contact elsewhere in the body [1]. Such an injury induces a metabolic cascade and transient symptomatology in athletes, the duration of which is variable but typically resolves within a month [1]. Although these symptoms are undesirable, they do provide useful insights for clinicians to initially diagnose and even provide tailored rehabilitation to patients [2]. Both quantitative and qualitative research have shown under-reporting of SRCs in some sports and difficult biopsychosocial barriers to best practice in SRC management [3,4].
The recommended SRC assessment of athletes at this time is the Concussion in Sport Group’s Sport Concussion Assessment Tool 6 (SCAT6) [5,6,7]. However, this multi-modal assessment can only be conducted by medical professionals and can be difficult to conduct in non-specialized environments [8]. Simplified sideline assessments and emerging technologies have attempted to bridge this gap between specialized and non-specialized (i.e., community athlete care) through the inception of neuropsychological [9], haematological [10], and ocular assessment tools [11].
Ocular tools are among the most commonly used SRC assessments, with their development extending as far back as the 1970s [12,13]. These tools assess a variety of ocular movements such as saccades, smooth pursuits, near point of convergence (NPC), vestibular-ocular reflexes, and visual motion sensitivity. Two of the most popular assessments in this area are the King-Devick (KD) test [14] and vestibular-ocular motor screening (VOMs) [15]. The VOMs is included within the office-based SCAT6, and both the KD and VOMs are included in National Collegiate Athletic Association—Department of Defense Concussion Assessment, Research and Education (NCAA-DoD CARE) Consortium assessments, which collect and publish a large body of work on SRC assessment in both student-athletes and military personnel [16]. The eye movements assessed by these tests (e.g., saccades and smooth pursuits) are a key focus of emerging ocular technologies in SRC research [17,18]; here, "emerging" refers to new, untested technologies or technologies still in their infancy in research or practice.
Ensuring the reliability and validity of assessment tools is imperative for high-quality SRC diagnosis. Reliability is the repeatability or consistency of test measures [19] and is traditionally assessed through intra-class correlation coefficients (ICCs), although simpler association tests may also be considered. Validity refers to how truthful a measure is [19] and is assessed using sensitivity (the ability to identify a true positive, i.e., concussed) [20], specificity (the ability to identify a true negative, i.e., non-concussed) [20], and receiver operating characteristic (ROC) curves, which plot the relationship of both simultaneously [21]. These statistics allow researchers and practitioners to determine (a) the reproducibility of an assessment and (b) the diagnostic accuracy of an assessment. It is common for studies utilising traditional ocular assessments (VOMs and KD) to cite a limited number of seminal studies for reliability or diagnostic accuracy statistics [14,15,22], while little is known about the reliability and diagnostic accuracy of emerging ocular technologies.
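To make these definitions concrete, the sketch below (in Python; the counts are hypothetical) shows how sensitivity and specificity are computed from the four cells of a standard 2 × 2 diagnostic table:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """True positive rate: proportion of concussed athletes correctly flagged."""
    return true_pos / (true_pos + false_neg)


def specificity(true_neg: int, false_pos: int) -> float:
    """True negative rate: proportion of non-concussed athletes correctly cleared."""
    return true_neg / (true_neg + false_pos)


# Hypothetical screening outcomes: 40 concussed athletes (32 flagged),
# 60 healthy controls (51 cleared).
print(sensitivity(true_pos=32, false_neg=8))  # 0.80
print(specificity(true_neg=51, false_pos=9))  # 0.85
```

An ROC curve is then obtained by recomputing this pair of values at every candidate cutoff of the underlying test score.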
With this in mind, the following study aimed to explore and summarize the current literature on the reliability and diagnostic accuracy of ocular tools used in the diagnosis of SRCs in athletes and military personnel. This study aims to guide further research in sport and emerging technology development and to quantify the presence and clinical utility of all available ocular tools. Given this broad research aim, a scoping review methodology was deemed appropriate.

2. Materials and Methods

Ethical approval was not required for this study. Registration was completed on the Open Science Framework (OSF) on 1 November 2023, as advised by the Joanna Briggs Institute [23], and this study is reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews (PRISMA-ScR) [24]. Please see Appendix A for further reporting detail.
The key research question supporting this study was: Is there research to support the use of ocular assessment tools commonly used to diagnose SRCs in athletes? More specifically, this study had three questions to answer: Which ocular tools provide (1) optimal reliability, (2) internal consistency, and (3) diagnostic accuracy in the diagnosis of SRCs?
Studies were identified via four databases (SPORTDiscus, PubMed, Web of Science, and CINAHL). Initial search terms and strings were developed by AW, based on a previous systematic review of ocular technologies for SRCs, which investigated variables and measures of interest (but not reliability and diagnostic accuracy) for future research [25]. These were later refined based on the initial searches, and agreement was reached on the final search terms by AW, LR, and ED on 26 October 2023. The final search was conducted on 2 November 2023. The included search terms were as follows: Concussion OR Brain injur* OR Head injur* OR Athletic injur* OR “Sports concussion” OR “Sports related concussion” OR “mild traumatic brain injury” OR mTBI OR TBI OR “craniocerebral trauma” OR “brain damage” OR SRC AND Athlete* OR Sport* OR Player* OR “Physically Active” OR Healthy OR Active OR Rugby OR Soccer OR Football OR Gaelic Football OR Camogie OR Hurling OR Hockey OR AFL OR NFL OR “Australian Rules” AND Eye OR “Eye movement” OR Eye track* OR Gaze track* OR Oculomotor OR “Pupil dilation” OR “Pupil size” OR Pupillometer OR Pupillometry OR Saccad* OR “Smooth pursuit” OR Visuomotor OR “saccadic dysfunction” OR “saccadic eye movement” OR vestibular OR ocular OR “ocular microtremor” OR “rapid eye movements” OR “Near point of convergence” OR “Balance vision reflex” OR “Visual motion sensitivity” OR “Contrast sensitivity”.
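For illustration, the sketch below shows how the three concept blocks above (injury, population, and ocular assessment) combine under standard Boolean syntax. The term lists are deliberately truncated, and exact field tags and wildcard handling vary by database, so this is an assumption-laden reconstruction rather than the literal submitted query:

```python
# Each concept block is OR-joined internally; blocks are AND-joined together.
injury = ["Concussion", "Brain injur*", "Head injur*", "mTBI", "SRC"]        # truncated
population = ["Athlete*", "Sport*", "Player*", "Rugby", "Soccer"]            # truncated
ocular = ["Eye movement", "Oculomotor", "Saccad*", "Smooth pursuit",
          "Near point of convergence"]                                       # truncated


def block(terms: list[str]) -> str:
    """OR-join a concept block, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"


query = " AND ".join(block(b) for b in (injury, population, ocular))
print(query)
```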
Appropriate studies were selected using the Population, Concept, Context (PCC) protocol for inclusion and exclusion criteria. Inclusion criteria were as follows: otherwise healthy athletes or active military personnel (with or without an SRC) involved in observational, cross-sectional, case-control, or intervention-based research whereby SRC-related ocular assessments were conducted; reliability, sensitivity/specificity, or ROC data were reported; published between 2000 and 2023; and published in English. Studies were excluded if the participants were not specifically defined as athletes or active military personnel, or if their mTBI was not confirmed to be sport-related; if the study design was computer-modelled or qualitative; if the ocular assessment was not specifically related to SRCs or associated head impacts; if no reliability, internal consistency, sensitivity, and/or specificity data were reported; or if the research was non-English or published outside the year range stated above. Both full-text articles and conference abstracts were included to adequately represent emerging technology research, increase the comprehensiveness of our findings, and reduce reporting bias [26].
Study searches were exported to EndNote Desktop V.20 (Clarivate, London, UK) and screened in three phases for inclusion and exclusion: phase one (title screening), phase two (abstract screening), and phase three (full-text screening). A coding hierarchy was applied using the ‘Research Notes’ function in EndNote to exclude studies based on duplicates (SRD), language (SRL), year (SRY), population (SRP), outcome (SRO), not a study of interest (SRNS), or miscellaneous (SRM). An additional code (SRR) was retrospectively added, as retracted articles were identified in the output. To confirm agreement on the application of the inclusion criteria and subsequent coding, a random sample of 25 articles was independently coded by AW and LR before consensus was reached. AW conducted all other screening but used the SRM code to flag issues for discussion during each phase. An overview of the screening process is available in Figure 1. Grey literature and citation searches were conducted to ensure all relevant studies were included in the analysis, and corresponding authors were contacted via email or via ResearchGate to obtain articles where required.
Data extraction was conducted using a modified version of a data extraction tool previously used by this research team [3] in Excel 2023 (Microsoft, Redmond, WA, USA). Specifically, amendments were made to obtain greater detail on each ocular tool and statistical analysis. Data were extracted regarding study design, setting, population, demographics, ocular tools, statistics reported, and any additional data deemed relevant. Data extraction was conducted by AW with oversight by LR and ED, and issues were discussed and resolutions agreed upon weekly throughout this process. Data were collated into a single Microsoft Excel spreadsheet for summary and subgroup analysis. Risk of bias for internal consistency and reliability was assessed using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) tools [27]; the internal consistency tool has five key questions, while the reliability tool has eight. The revised tool for the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) was used to assess diagnostic accuracy across four domains (patient selection, index tests, reference standards, flow and timing) related to the risk of bias and applicability concerns of studies [28]. No reference or gold standard of diagnosis currently exists for SRCs; thus, medical diagnosis was used as the reference standard for this study.
Charted data were supported and interpreted using a number of classification criteria for the related statistical analyses. Intra-class correlation coefficients (ICCs) are commonly used to assess the reliability of continuous variables. The ICC employed depends on model, type, and definition, and values are graded as follows: ≤0.50 = poor, 0.50–0.75 = moderate, 0.75–0.90 = good, and >0.90 = excellent reliability [29]. Pearson’s (r) and Spearman’s (ρ) correlations are also commonly used for continuous data but are generally less preferred than ICCs. These statistics are interpreted as follows: 0.00–0.30 = negligible, 0.30–0.50 = low, 0.50–0.70 = moderate, 0.70–0.90 = high, and 0.90–1.00 = very high correlation [30]. Kappa (κ) is often used to assess agreement in qualitative or categorical data and was thus used in the VOMs reliability analysis given the variability in symptom provocation in the test. It is interpreted as follows: <0 = no agreement; 0.01–0.20 = slight agreement; 0.21–0.40 = fair agreement; 0.41–0.60 = moderate agreement; 0.61–0.80 = substantial agreement; and 0.81–0.99 = almost perfect agreement. The reference standard of reliability (RSR) for clinical utility was set at 0.75 [31,32]. Internal consistency reflects the extent to which the individual subscales of an instrument assess the various aspects of a construct. It is represented via Cronbach’s alpha (α) and is interpreted as follows: <0.50 = unacceptable, 0.50–0.60 = poor, 0.60–0.70 = questionable, 0.70–0.80 = acceptable, 0.80–0.90 = good, and 0.90–1.00 = excellent [33]. As previously mentioned, sensitivity is the ability to identify a true positive in diagnosis, while specificity is the ability to identify a true negative. Sensitivity and specificity values are interpreted as follows: 0.80–0.89 = acceptable and 0.90–1.00 = good to excellent [34]. ROC area under the curve (AUC) analysis allows for the assessment of classifier performance and an exploration of the trade-off between sensitivity and specificity. AUC scores are interpreted as follows: <0.50 = poor; 0.51–0.69 = fair; 0.70–0.80 = acceptable; 0.80–0.90 = excellent; and ≥0.90 = outstanding [35,36]. The reference standard of diagnostic accuracy (RSDA) for clinical utility was set at 0.80 [37,38]. Significance was set at p < 0.05 for all analyses.
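As an illustration of how these thresholds can be applied during data charting, the sketch below encodes the ICC and AUC bands above together with the RSR and RSDA reference standards; the handling of values falling exactly on a band boundary is an assumption, as the cited ranges overlap at the edges:

```python
RSR, RSDA = 0.75, 0.80  # reference standards for reliability and diagnostic accuracy


def grade_icc(icc: float) -> str:
    """Grade an intra-class correlation coefficient per the bands above [29]."""
    if icc <= 0.50:
        return "poor"
    if icc < 0.75:
        return "moderate"
    if icc <= 0.90:
        return "good"
    return "excellent"


def grade_auc(auc: float) -> str:
    """Grade a ROC area under the curve per the bands above [35,36]."""
    if auc < 0.50:
        return "poor"
    if auc < 0.70:
        return "fair"
    if auc < 0.80:
        return "acceptable"
    if auc < 0.90:
        return "excellent"
    return "outstanding"


icc, auc = 0.82, 0.77
print(grade_icc(icc), icc >= RSR)   # good True
print(grade_auc(auc), auc >= RSDA)  # acceptable False
```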

3. Results

3.1. Study Characteristics

In total, 5223 articles were screened, with 71 full articles and three abstracts [39,40,41] included in the final analysis. The majority of studies were conducted in the USA (n = 59), with a large emphasis on high school and collegiate populations (n = 32). Included studies used the KD (n = 34), the VOMs in conjunction with independent use of the NPC assessment (n = 28), or alternative tools and emerging technologies (n = 20). Only four studies included active military personnel. Studies were mostly mixed sex (n = 51), and only one study included a female-only population. Findings for the KD, VOMs/NPC, and alternative tools and technologies are discussed in three succinct sections below. Further information on each study’s aims is available in Supplementary File S1 (Tables S1.1–S1.6).
Risk of bias analysis for all assessed metrics is available in Supplementary File S3. For risk of bias in internal consistency studies (n = 14), 10 were graded very good, three were graded doubtful, and one was graded unclear due to a lack of clarity as to whether Cronbach’s α was calculated for individual subscales (Table S3.4). For risk of bias in reliability studies (n = 46), four were rated very good, thirty-seven were rated adequate, a further four were rated doubtful, and one was rated inadequate (Tables S3.2 and S3.3). In the analysis of risk of bias in diagnostic accuracy studies (n = 28), all but two studies met all applicability criteria (Table S3.1). Studies were primarily exposed to potential bias due to convenience sampling, non-blinding of the index/reference standard findings (or not stating whether such findings were known), or inappropriate follow-up timelines following an SRC (e.g., retesting 21 days post-injury).

3.2. Clinical Utility of the King-Devick (KD) Assessment

Thirty-four studies conducted the KD test and reported on reliability (n = 23), internal consistency (n = 5), sensitivity and specificity (n = 10), and ROC analysis (n = 6). A summary of included studies is available in Table 1, and further information is available in Supplementary Table S2.1. Reliability of the KD via ICC was primarily good to excellent, with 94.12% (n = 48) of all included ICCs reaching the RSR (ICC ≥ 0.75) for clinical utility. In instances where this threshold was not met, the retest comparison was of longer duration: pre-post season [42], a median retest interval of 392 days [32], and between consecutive years [31]. The KD performed well in both card and digital versions, in the presence of exercise [43,44,45], in combat sport [46,47], following SRCs [43], and when administered by parents [47]. The KD also performed well with alternative reliability measures such as Pearson’s [48,49], Spearman’s [50,51], and Kappa [43,52] analyses, while internal consistency of the KD cards 1–3 was acceptable to excellent [14,53,54,55], even in the presence of oculomotor fatigue [41].
Findings in the assessment of diagnostic accuracy were not as clear. Using the ROC reference standard for clinical utility (AUC ≥ 0.80), two studies observed outstanding diagnostic accuracy for the KD test, with a cutoff of a 2 s time increase suggested to be optimal [56,57]. Meanwhile, another study found both the digital KD test (not the card version) and those with learning difficulties (but not those without) to have good diagnostic accuracy [36]. In comparison, the tool performed poorly in male elite rugby union athletes [58], and the utilization of eye tracking integration (KD-ET) would not be advised at this time (p > 0.05) [59,60]. Despite not meeting the RSDA, findings from Le et al. (2023) appear to show that the KD test may achieve higher diagnostic accuracy at 0–6 h/24–48 h post-SRC and decrease thereafter [36].
When sensitivity and specificity were assessed in isolation, the KD performed well in amateur rugby league and union [50,52] and in male sub-elite Australian Rules [43] but performed poorly in male elite rugby union [58] and semi-professional rugby union [61], and had low sensitivity in elite American football [45]. When exploring optimal cutoff times, any increase in KD time seemed to have the greatest combined sensitivity and specificity [32], with decreasing sensitivity but increasing specificity when >3 s or >5 s cutoffs were used. Comparatively, Le et al. explored optimal cutoffs to achieve sensitivities of 0.70 and 0.80; this resulted in poor specificity at all time points, and no optimal cutoff existed once athletes became asymptomatic [36].
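To illustrate the cutoff logic discussed above, the sketch below flags a possible SRC when an athlete’s post-injury KD completion time worsens from baseline by at least a chosen cutoff (2 s here, per [57]); the athlete data are hypothetical:

```python
def kd_flag(baseline_s: float, post_s: float, cutoff_s: float = 2.0) -> bool:
    """Flag a possible SRC when KD completion time worsens by at least the cutoff."""
    return (post_s - baseline_s) >= cutoff_s


# Hypothetical (baseline, post-injury) KD times in seconds.
for baseline, post in [(41.2, 44.8), (39.5, 40.1), (45.0, 50.3)]:
    print(baseline, post, kd_flag(baseline, post))  # True, False, True
```

Raising the cutoff to >3 s or >5 s flags fewer athletes, which is exactly the sensitivity-for-specificity trade-off described above.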
Table 1. Overview of included studies using the King-Devick (KD) assessment.
Citation | Population | Sample Size (% Female)
Galetta et al., 2011a [46] | Amateur Boxing/MMA | 39 (2.56%)
Galetta et al., 2011b [14] | Collegiate Multi-Sport | 219 (16.89%)
King et al., 2012 [53] | Amateur Rugby League | 50
King et al., 2013 [54] | Amateur Rugby Union | 37
Leong et al., 2013 [47] | Amateur Boxing | 33 (12.12%)
Yevseyenkov et al., 2013 [39] | High School American Football | 47
Galetta et al., 2015 [56] | Youth/Collegiate Multi-Sport | 322 (18.67%)
King et al., 2015a [50] | Amateur Junior Rugby League | 19 (23.62%)
King et al., 2015b [52] | Amateur Rugby Union/League | 104
Leong et al., 2015 [62] | Collegiate Football/Basketball | 127 (6.30%)
Vartiainen et al., 2015 [48] | Professional Ice Hockey | 185
Alsalaheen et al., 2016 [63] | High School American Football | 62
Smolyansky et al., 2016 [64] | Elite Junior Olympians | 54 (43.00%)
Walsh et al., 2016 [51] | Active Military | 100 (21.00%)
Dhawan et al., 2017 [57] | High School Ice Hockey | 141
Oberlander et al., 2017 [65] | Adolescent Athletes | 68 (39.71%)
Weise et al., 2017 [66] | Adolescent Multi-Sport | 619 (25.3%)
Broglio et al., 2018 [31] | Collegiate Multi-Sport/Active Military | 4874 (41.09%)
Hecimovich et al., 2018b [43] | Sub-Elite Australian Football | 22
Moran and Covassin 2018a [55] | Youth American Football/Soccer | 422 (34.12%)
Worts et al., 2018 [44] | High-School Multi-Sport | 45 (46.67%)
Breedlove et al., 2019 [67] | Collegiate Multi-Sport | 3248 (44.70%)
Fuller et al., 2019 [58] | Elite Rugby Union | 261
Hecimovich et al., 2018a [59] | Youth Australian Football | 19
Naidu et al., 2018 [45] | Elite American Football | 231
King et al., 2020 [68] | Amateur Rugby Union | 69 (100%)
Molloy et al., 2017 [61] | Semi-Pro Rugby Union | 52
Guzowski et al., 2017 [40] | American Football | 124
White-Schwoch et al., 2019 [49] | Youth Tackle Football | 82
Elbin et al., 2019 [42] | High School Multi-Sport | 69 (46.40%)
Worts et al., 2020 [41] | Adolescent Multi-Sport | 121 (41.32%)
Harmon et al., 2021 [32] | Collegiate Multi-Sport | 82 (41.00%)
Hecimovich et al., 2022 [60] | Collegiate Rugby Union | 49 (48.98%)
Le et al., 2023 [36] | Collegiate Multi-Sport | 1559 (48.17%)

3.3. Clinical Utility of Vestibular-Ocular Motor Screening (VOMs) and/or Near Point of Convergence (NPC) Assessments

Twenty-eight studies included the NPC and/or VOMs assessment tools and reported on reliability (n = 16), internal consistency (n = 9), sensitivity and specificity (n = 4), and ROCs (n = 8). A summary of these studies is available in Table 2, and further information is available in Supplementary Table S2.2.
The NPC assessment had excellent intra-rater and inter-rater reliability via ICC [69,70,71] and Pearson’s correlations [72,73]. The within-session reliability of NPC also met the threshold for clinical utility (ICC ≥ 0.75), even in convergence insufficiency [74], across various NPC font sizes [75], and when compared across baseline, pre-practice, and intra-practice assessments [44]. However, the reliability of NPC reduced when observed over longer durations. Kontos et al. found moderate to good reliability between initial and 6-month follow-ups for NPC distance and symptoms [22]; despite this, the former reached the clinical utility threshold and the latter was close behind. Additionally, poor to good one-year reliability for NPC and NPCbreak [76], poor to moderate NPC reliability over 18 months [77], and moderate reliability between consecutive years were observed [38]. Kappa analysis found fair agreement over 18 months for convergence [77], while fair and moderate agreement was observed for convergence and NPC distance, respectively, between consecutive years [31].
The VOMs total score had conflicting reliability via ICC, with one study achieving moderate to good reliability over 6 months [22], reaching the reference standard for clinical utility, while another found poor reliability between consecutive years [38]. However, Kontos et al. (2020) also investigated subscales of the VOMs and found the moderate to good reliability of visual motion sensitivity, vertical VOR, and vertical saccades to achieve the RSR [22]. Agreement via Kappa was low, as values for individual subscales ranged from no agreement to fair [31,77].
Internal consistency of the VOMs total and NPC was found to be excellent in almost all instances [15,22,77,78,79,80], including in the presence of ocular motor fatigue [41]. However, in subscale investigations, Iverson et al. (2019) found that only visual motion sensitivity and smooth pursuits had good consistency, while NPC and vertical saccades had acceptable consistency [80].
Due to the number of assessments within the VOMs, the investigation of diagnostic accuracy was more nuanced. Four studies found VOMs total scores to be sufficient to achieve the RSDA [38,81,82,83], while pretest symptoms also seemed to be accurate in the detection of SRCs [83]. The RSDA was not met in identifying normal versus protracted recovery in males or females [84]. The VOMs change score achieved the RSDA in one study [83] but performed worse in another when a cutoff of ≥3 was implemented [85]. A clinical cutoff of ≥4 was suggested to achieve the RSDA for the VOMs and modified VOMs totals [83]; however, a higher AUC was achieved by Kontos et al. using a cutoff of ≥8 for the VOMs total [82]. When looking at individual subscales as a diagnostic tool, one study found a combination of horizontal VOR, visual motion sensitivity, and NPC distance to be effective but did not find any individual subscale to reach the RSDA [15]. Poor subscale performance was also identified in mixed sex adolescent athletes [85]; however, both Kontos et al. (2021) and Ferris et al. (2022) found all subscales except NPC distance to have high diagnostic accuracy [82,83]. In fact, of the studies examining convergence or NPC distance [15,38,82,83,85,86], only two found it to reach the RSDA for SRCs, suggesting limited utility of this measure to distinguish SRCs from controls.
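A minimal sketch of how a VOMs total-score cutoff might be applied is shown below. It assumes each subscale contributes a single symptom-provocation score that is summed (the actual instrument rates several symptoms per item), so the scoring helper is illustrative, while the ≥4 and ≥8 cutoffs follow [83] and [82], respectively:

```python
SUBSCALES = ["smooth_pursuits", "horizontal_saccades", "vertical_saccades",
             "horizontal_vor", "vertical_vor", "visual_motion_sensitivity", "npc"]


def voms_total(provocation: dict[str, int]) -> int:
    """Sum symptom-provocation scores across the VOMs subscales."""
    return sum(provocation[s] for s in SUBSCALES)


# Hypothetical post-injury provocation scores.
scores = {"smooth_pursuits": 1, "horizontal_saccades": 0, "vertical_saccades": 2,
          "horizontal_vor": 1, "vertical_vor": 1, "visual_motion_sensitivity": 3,
          "npc": 0}
total = voms_total(scores)
print(total, total >= 4, total >= 8)  # 8 True True
```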
Table 2. Summary of included studies using the vestibular-ocular motor screening (VOMs) and/or near point of convergence (NPC) assessments.
Vestibular-Ocular Motor Screening (VOMs)
Citation | Population | Sample Size (% Female)
Mucha et al., 2014 [15] | Unspecified Athletes | 142 (34.51%)
Kontos et al., 2016 [78] | Collegiate Multi-Sport | 263 (36.88%)
Broglio et al., 2018 [31] | Collegiate Multi-Sport/Active Military | 4874 (41.09%)
Moran and Covassin 2018b [79] | Youth Multi-Sport | 423 (34.28%)
Worts et al., 2018 [44] | High-School Multi-Sport | 45 (46.67%)
Ferris et al., 2021a [81] | Collegiate Multi-Sport | 388 (36.9%)
Iverson et al., 2019 [80] | Youth Ice Hockey | 387
Kontos et al., 2020 [22] | Active Military | 108 (14.8%)
Buttner et al., 2020 [87] | Adult Amateur | 100 (28.00%)
Knell et al., 2021 [84] | Multi-Sport Athletes | 549 (43.20%)
Kontos et al., 2021 [82] | Collegiate Multi-Sport/Cadet | 570 (23.16%)
Elbin et al., 2022 [85] | Adolescent Multi-Sport | 294 (42.52%)
Ferris et al., 2022 [83] | Collegiate Multi-Sport | 3444 (47.80%)
Ferris et al., 2021b [38] | Collegiate Multi-Sport | 3958 (47.70%)
Moran et al., 2023 [77] | Youth Soccer | 51 (54.90%)
Anderson et al., 2024 [88] | Youth Soccer | 110 (40.00%)

Near Point of Convergence (NPC)
Citation | Population | Sample Size (% Female)
Pearce et al., 2015 [74] | Collegiate Athletes | 78 (42.30%)
Kawata et al., 2015 [73] | Soccer Athletes | 20 (25.00%)
Kawata et al., 2016 [72] | Collegiate American Football | 29
McDevitt et al., 2016 [86] | Collegiate Multi-Sport | 72 (41.67%)
DuPrey et al., 2017 [89] | Unspecified Athletes | 270 (45.56%)
Aloosh et al., 2020 [76] | Elite Athletes | 16 (56.25%)
Zonner et al., 2018 [69] | High School American Football | 12
Worts et al., 2020 [41] | Adolescent Multi-Sport | 121 (41.32%)
Heick et al., 2021 [75] | Recreational Multi-Sport | 75 (78.67%)
De Rossi, 2022 [90] | High School Multi-Sport | 718 (19.64%)
Kalbfell et al., 2023 [70] | Adult Soccer | 43 (37.21%)
Zuidema et al., 2023 [71] | High School American Football | 99

3.4. Clinical Utility of Alternative Tools and Technologies

Twenty studies reported reliability (n = 14), internal consistency (n = 1), sensitivity/specificity (n = 4), and ROC (n = 8) data on alternative assessments and emerging technologies. The primary assessments were of saccades and smooth pursuits (n = 5) [32,76,91,92,93], dynamic visual acuity (DVA) (n = 4) [94,95,96,97], and ImPACT visual motor speed (VMS) (n = 7) [31,38,98,99,100,101,102]. A summary of included studies is available in Table 3, and further information is available in Supplementary Table S2.3.
ImPACT visual motor speed reliability was primarily assessed between consecutive years and either achieved the RSR [38,100] or had moderate to good reliability [31,101]. When observed over shorter durations, reliability was moderate to good between 7, 14, 30, 44, and 198 days, achieving the RSR only at 7 and 44 days [99]. ImPACT VMS did not achieve the RSDA to distinguish SRCs versus controls [38,99,102], nor could it predict normal versus protracted recovery [102].
Five studies used emerging technologies to assess saccades and smooth pursuits (not including the KD eye-tracking previously stated). A proprietary algorithm for saccades could not reach the RSR [76], while the neuro-otologic test chair only achieved the RSR when assessing optokinetic gain at 60° counter-clockwise [91]. In another study, optokinetic stimulation signs and symptoms achieved the RSDA, and this was further increased when used in conjunction with NPC [86]. A broad evaluation of the SMI Red250Mobile found only the smooth pursuit saccade count at 10° to reach the RSR in adults, while smooth pursuit diagonal gain, antisaccades, and fast memory-guided sequences reached the RSR in youths [93]. However, the tool also had poor to questionable internal consistency. Two studies evaluated EYE-SYNC smooth pursuits: one found the tool to meet neither the RSR nor the RSDA in tangential or radial variability [32], while another found both tangential and radial variability, along with three other EYE-SYNC variables, to meet the RSR at both pre- and post-practice [92].
DVA reached the RSR in only one of three studies [94,95,96,97] and did not reach the RSDA for identifying SRCs versus controls [97]. PLM did not reach the RSDA across all variables explored [103], while no variable explored using the EyeLink 1000 reached the RSR for clinical utility [104]. The visio-vestibular evaluation (VVE) did not reach the RSDA across different repetition ranges and movements; however, ≤20 repetitions was promising [105]. Gaze stability achieved the RSR in high school students, although only in the yaw plane and not in university students [95], and, although close, did not reach the RSDA using signs and symptoms [86]. No alternative tool or emerging technology met both the RSR and RSDA.
Table 3. Summary of included studies using alternative ocular assessments and emerging technologies.
Citation | Population | Sample Size (% Female)
Dynamic Visual Acuity
Scherer et al., 2013 [94] | Active Military | 20 (10.00%)
Kaufman et al., 2013 [95] | High School/Collegiate American Football | 50
Patterson et al., 2017 [96] | Collegiate/Club Multi-Sport | 28 (28.57%)
Feller et al., 2021 [97] | Collegiate Multi-Sport | 86 (40.70%)
Saccade and Smooth Pursuit Technologies
Cochrane et al., 2019 [91] | Collegiate Multi-Sport | 115 (43.48%)
Sundaram et al., 2019 [92] | Collegiate Multi-Sport | 150 (55.00%)
Aloosh et al., 2020 [76] | Elite Athletes | 16 (56.25%)
Harmon et al., 2021 [32] | Collegiate Multi-Sport | 82 (41.00%)
Snegireva et al., 2021 [93] | Multi-Sport | 92
ImPACT Visual Motor Speed
Gardener et al., 2012 [98] | Non-Elite Rugby Union | 92
Tsushima et al., 2016 [101] | Mixed Sex, High School | 212 (40.57%)
Nelson et al., 2016 [99] | High School/Collegiate Multi-Sport | 331 (16.62%)
Brett et al., 2016 [100] | High-School Multi-Sport | 1150 (45.39%)
Sufrinko et al., 2017 [102] | Multi-Sport | 69 (26.00%)
Broglio et al., 2018 [31] | Collegiate Multi-Sport/Active Military | 4874 (41.09%)
Ferris et al., 2021b [38] | Collegiate Multi-Sport | 3958 (47.70%)
Additional Assessments
Kaufman et al., 2014 [95] | High School/Collegiate American Football | 50
McDevitt et al., 2016 [86] | Collegiate Multi-Sport | 72 (41.67%)
Howell et al., 2018 [104] | Adolescent Multi-Sport | 31 (39.00%)
Master et al., 2020 [103] | Adolescent Multi-Sport | 232 (57.33%)
Storey et al., 2022 [105] | Multi-Sport | 138 (51.45%)

4. Discussion

This study aimed to summarize the available literature pertaining to the reliability and diagnostic accuracy of ocular assessment tools and emerging technologies used for SRC diagnosis in athletes and active military personnel. The findings are discussed below in the context of the current literature.

4.1. Summary of Key Findings

The findings of this study confirm that ocular tools have potential in SRC diagnosis but may not be advised as stand-alone assessments at this time. The current literature in this area carries a risk of bias due to convenience sampling, a lack of blinding between reference and index standards, and the inherent subjectivity of assessments such as the VOMs.
The KD had good to excellent reliability, achieving the RSR in almost all instances, while internal consistency was acceptable to excellent. Sensitivity and specificity were high in multiple studies but not in male semi-pro rugby union [61] or elite male rugby union [58]; the tool also had low sensitivity in male elite American football. This may perhaps be linked to the rudimentary scoring system employed with the tool (total time), enabling inflated baselines in elite male athletes, among whom poor attitudes towards this injury are prevalent [45,106]. Outstanding diagnostic accuracy and the RSDA of the KD were observed using change from baseline [56] or an optimal cutoff of two seconds [57]. Excellent diagnostic accuracy and the RSDA were observed using the digital KD and in those with learning disorders [36], while exploration of cutoff scores suggested using any increase in time, as >3 s and >5 s cutoffs had increasing specificity and decreasing sensitivity [32]. Le et al. (2023) [36] also explored cutoffs to achieve sensitivity at 70–80% across various time points post-SRC; however, this could not be achieved without obtaining low specificity [36]. Integrating eye tracking technology with the KD would not be advised at this time due to an inability to achieve high sensitivity, specificity, and the RSDA [59,60]. Ultimately, the findings indicate the KD may be a useful assessment to support suspected diagnosis in amateur sport settings.
The NPC achieved the RSR (via ICC and Pearson’s r) in intra-rater [69,71,72,73], inter-rater [70,71], within-session [74,75], and post-exercise [44] assessments, but repeated testing may be required, as reliability seemed to reduce over longer durations [38,76,77]. The VOMs total had poor reliability between years in mixed sex collegiate athletes [38] but achieved the RSR with moderate to good reliability in military personnel [22]. Reliability via Kappa was mostly fair for VOMs subscales [31,77], while internal consistency of the total VOMs and NPC was primarily excellent [15,22,41,77,78,79]; however, this lowered when individual subscales were assessed [80]. Although the NPC had high reliability, a limited ability to achieve the RSDA to distinguish SRCs from controls was found [15,38,82,83,85], and the NPC could not predict prolonged recovery [89]. The VOMs total achieved the RSDA with excellent to outstanding diagnostic accuracy [38,81,82,83], but conflicting findings existed for VOMs subscales, as two studies achieved the RSDA [82,83] while another two did not [15,85]. However, it is difficult to truly compare these studies given the differences in the cutoffs implemented. Therefore, the NPC assessment should not be used to diagnose SRCs, but the VOMs assessment may be useful if reliability is ensured.
Additional information regarding the remaining alternative tools and technologies (as per Table 3) is available in Supplementary Table S2.3. However, many of these studies were exploratory or validation studies, and none met both the RSR and RSDA. Thus, none of these tools or technologies are advised for diagnosing SRCs at this time.

4.2. Findings in Context

SRCs remain challenging injuries to diagnose accurately in amateur sport. This is exemplified by the majority of included studies being conducted in the USA, where legal incentives and structured programmes such as the CARE Consortium [16] produce the majority of the research available. Only three studies were conducted in Europe [87], and two of these were conducted on professional athletes [48,58].
The KD and VOMs/NPC are commonly implemented in pre-season baseline testing, particularly in US high school and collegiate athletes. Both tools are quick and easy to administer, but the VOMs does require clinical expertise to conduct. In comparison, the KD is cheap, requires minimal equipment, and was shown to have high reliability even when administered by sport parents [47]. If implemented, changes from baseline [56] or a cutoff of 2 s may provide the greatest accuracy for SRCs [57]. The tool may also not be effective in elite or semi-pro male athletes given the rudimentary scoring system, as previously mentioned [45,53,58]. This may be where an emerging technology such as the KD-ET could improve the diagnostic accuracy of the tool following further validation [59,60]. The tool might also only be effective for initial diagnosis, as diagnostic accuracy may be reduced after 0–6 h/24–48 h post-SRC [36]. Similar findings have been observed in the SCAT assessment, whereby clinical utility can diminish after 72 h [1]. If medical expertise is available, the VOMs total seems to achieve high internal consistency and diagnostic accuracy. However, given the reliability concerns presented in this study, establishing intra- and inter-rater reliability is advised, as is the use of regular baseline testing to observe changes over time.
Emerging ocular technologies provide an attractive avenue for future research. Such tools could provide an objective assessment of SRCs from grassroots to professional sport. Achieving truly objective assessment is a pertinent point, as athletes may attempt to falsify assessments [107], while allied medical staff may feel pressure and bias from coaches or athletes during SRC assessments [4]. Research in athletic therapy students has also shown varying levels of self-efficacy in the assessment and management of concussion [108]. For instance, mean self-efficacy in the KD was 15.05 ± 30.30% with a 7.5 ± 20.34% use on clinical placement (r = 0.71, p < 0.001), while the VOMs obtained a mean self-efficacy of 57.71 ± 36.98% and a use of 31.44 ± 35.02% (r = 0.60, p < 0.001) [108]. Technologies may therefore help reduce the reliance and burden on medical professionals in initial diagnosis, reduce the financial burden on athletes, parents, and sport clubs, and also reduce the burden on healthcare systems. No eye tracking technology included in the present study has achieved both sufficient reliability and validity for use to date, a finding consistent with a previous systematic review of eye tracking technology in SRCs [25]. The review did, however, suggest that challenging measures of executive function (i.e., memory-based saccades, antisaccades) may be a promising area for future research. Subsequent research by the same author has been included in the present study and is available in Supplementary Table S2.3. That study found that antisaccades using an emerging technology achieved moderate to good reliability in youth and adult participants, but many other explored variables performed poorly [93].
Why eye-tracking technologies have not been successful to date is unclear. It should be stated, firstly, that all oculomotor tools are indirect measures of brain function and thus are not as accurate as a more direct measure of brain injury. It could also be suggested that not all athletes who sustain an SRC present with oculomotor deficits, thereby impacting the diagnostic accuracy of such devices. As evidenced by the findings of these studies in Supplementary Table S2.3, it remains unclear which variables provide the greatest reliability (if these do exist), thus further research and optimisation of methodologies and analysis methods are required. With these findings in mind, caution must be exercised before implementing such tools in both amateur and elite sport at this time.

4.3. Future Research and Study Limitations

This study was conducted to explore the current status of the reliability and diagnostic accuracy of ocular tools for SRCs. It is intended for researchers and practitioners to use this study as a basis to guide future research and development in this area, although this study does have some limitations. Firstly, given the exploratory nature of a scoping review, study quality and risk of bias were not used to exclude studies, and thus these findings must be interpreted in this context. A narrow scope of statistical analysis was also utilized in this study. A subsequent systematic review or meta-analysis on this topic may consider additional analyses such as odds ratios, reliable change indices, and further measures of reliability and diagnostic accuracy. It should also be made clear that reference standards were used to aid the reader in the interpretation and summarisation of a large dataset across studies; while these are regularly cited, they are somewhat arbitrary, and some authors have suggested higher thresholds [31,109]. The risk of bias analysis reflected the subjective nature of much of the literature in this area, where studies are often convenience-sampled and unblinded. This is another potential avenue for emerging technologies to help reduce subjectivity during data collection in such studies.
The authors are also aware of the existence of other emerging ocular technologies being researched both internally by the present research team and externally by others, which were excluded from this study. The rapidly developing nature of technology, machine learning, and artificial intelligence will likely mean that research will continue to accelerate in the coming years; thus, regular revision of the literature may be advised. Limited information is also available on the acceptability of such technologies to date. This is expected but should be considered in subsequent research in this area. Only one included study contained a female-only population, and much of the mixed sex literature was made possible through the well-established US concussion research systems. Future research in less-resourced environments (i.e., amateur female sport), where best practice may not be adhered to [4], and in sports where SRC surveillance may be limited [3], could add additional insights into the utility and acceptability of these tools. At this time, no ocular tool could predict prolonged recovery from SRCs, nor could any be used to determine return to play. Only two studies explored the feasibility of this: ImPACT VMS could not predict duration of recovery (p > 0.05), while the KD did not meet the RSDA at the commencement of SRC-RTP (p > 0.05) [36,102]. This capability is secondary to ensuring accurate initial diagnosis, but if such a tool were developed, it would greatly assist medical professionals’ confidence in this process.

5. Conclusions

SRCs are challenging but increasingly prevalent injuries in amateur and elite sport. Ocular tools may play an important role in assisting with sideline and office-based evaluation of athletes, but each tool may have challenges in achieving the standards of reliability and diagnostic accuracy required for clinical utility. The KD may be useful in amateur sport, while the VOMs may be useful if reliability is ensured. The NPC assessment should not be used as a standalone assessment for SRCs. For these reasons, the SCAT remains the recommended tool for the assessment of SRCs at this time.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jfmk9030157/s1, Supplementary File S1 (Summary of Study Aims), Supplementary File S2 (List of Extracted Findings), Supplementary File S3 (Risk of Bias Analysis).

Author Contributions

A.W., E.D. and L.R. contributed to study conceptualization. A.W. conducted data collection, data analysis, and initial manuscript preparation. The final manuscript was prepared and agreed upon by A.W., E.D. and L.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data are available within this manuscript and associated references.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR)

Section | Item | PRISMA-ScR Checklist Item | Reported on Page
Title
Title | 1 | Identify the report as a scoping review. | Page 1
Abstract
Structured summary | 2 | Provide a structured summary that includes (as applicable): background, objectives, eligibility criteria, sources of evidence, charting methods, results, and conclusions that relate to the review questions and objectives. | Page 1
Introduction
Rationale | 3 | Describe the rationale for the review in the context of what is already known. Explain why the review questions/objectives lend themselves to a scoping review approach. | Page 1
Objectives | 4 | Provide an explicit statement of the questions and objectives being addressed with reference to their key elements (e.g., population or participants, concepts, and context) or other relevant key elements used to conceptualize the review questions and/or objectives. | Page 2
Methods
Protocol and registration | 5 | Indicate whether a review protocol exists; state if and where it can be accessed (e.g., a Web address); and if available, provide registration information, including the registration number. | Page 2
Eligibility criteria | 6 | Specify characteristics of the sources of evidence used as eligibility criteria (e.g., years considered, language, and publication status), and provide a rationale. | Page 2
Information sources | 7 | Describe all information sources in the search (e.g., databases with dates of coverage and contact with authors to identify additional sources), as well as the date the most recent search was executed. | Page 2
Search | 8 | Present the full electronic search strategy for at least 1 database, including any limits used, such that it could be repeated. | Page 2
Selection of sources of evidence | 9 | State the process for selecting sources of evidence (i.e., screening and eligibility) included in the scoping review. | Page 3
Data charting process | 10 | Describe the methods of charting data from the included sources of evidence (e.g., calibrated forms or forms that have been tested by the team before their use, and whether data charting was done independently or in duplicate) and any processes for obtaining and confirming data from investigators. | Pages 3–4
Data items | 11 | List and define all variables for which data were sought and any assumptions and simplifications made. | Pages 3–4
Critical appraisal of individual sources of evidence | 12 | If done, provide a rationale for conducting a critical appraisal of included sources of evidence; describe the methods used and how this information was used in any data synthesis (if appropriate). | Page 3
Synthesis of results | 13 | Describe the methods of handling and summarizing the data that were charted. | Pages 3–4
Results
Selection of sources of evidence | 14 | Give numbers of sources of evidence screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally using a flow diagram. | Page 4, Figure 1
Characteristics of sources of evidence | 15 | For each source of evidence, present characteristics for which data were charted and provide the citations. | Tables 1–3, Supplementary File S1
Critical appraisal within sources of evidence | 16 | If done, present data on critical appraisal of included sources of evidence (see item 12). | Supplementary File S3
Results of individual sources of evidence | 17 | For each included source of evidence, present the relevant data that were charted that relate to the review questions and objectives. | Supplementary Files S1–S3
Synthesis of results | 18 | Summarize and/or present the charting results as they relate to the review questions and objectives. | Pages 5–6, Supplementary Files S1–S3
Discussion
Summary of evidence | 19 | Summarize the main results (including an overview of concepts, themes, and types of evidence available), link to the review questions and objectives, and consider the relevance to key groups. | Page 10
Limitations | 20 | Discuss the limitations of the scoping review process. | Page 12
Conclusions | 21 | Provide a general interpretation of the results with respect to the review questions and objectives, as well as potential implications and/or next steps. | Pages 11–12
Funding
Funding | 22 | Describe sources of funding for the included sources of evidence, as well as sources of funding for the scoping review. Describe the role of the funders of the scoping review. | Page 12

References

1. Patricios, J.S.; Schneider, K.J.; Dvorak, J.; Ahmed, O.H.; Blauwet, C.; Cantu, R.C.; Davis, G.A.; Echemendia, R.J.; Makdissi, M.; McNamee, M.; et al. Consensus statement on concussion in sport: The 6th International Conference on Concussion in Sport-Amsterdam, October 2022. Br. J. Sports Med. 2023, 57, 695–711.
2. Galeno, E.; Pullano, E.; Mourad, F.; Galeoto, G.; Frontani, F. Effectiveness of Vestibular Rehabilitation after Concussion: A Systematic Review of Randomised Controlled Trial. Healthcare 2022, 11, 90.
3. Walshe, A.; Daly, E.; Ryan, L. Epidemiology of sport-related concussion rates in female contact/collision sport: A systematic review. BMJ Open Sport Exerc. Med. 2022, 8, e001346.
4. Walshe, A.; Daly, E.; Ryan, L. Existence ≠ adherence. Exploring barriers to best practice in sports-related concussion return to play (SRC-RTP) in Irish amateur female sport. Phys. Ther. Sport 2023, 63, 1–8.
5. Sport Concussion Assessment Tool 6 (SCAT6). Br. J. Sports Med. 2023, 57, 622–631.
6. Davis, G.A.; Echemendia, R.J.; Ahmed, O.H.; Anderson, V.; Blauwet, C.; Brett, B.L.; Broglio, S.; Bruce, J.M.; Burma, J.S.; Gioia, G.; et al. Child SCAT6. Br. J. Sports Med. 2023, 57, 636–647.
7. Daly, E.; Pearce, A.; Finnegan, E.; Cooney, C.; McDonagh, M.; Scully, G.; McCann, M.; Doherty, R.; White, A.; Phelan, S.; et al. An assessment of current concussion identification and diagnosis methods in sports settings: A systematic review. BMC Sports Sci. Med. Rehabil. 2022, 14, 125.
8. Salmon, D.M.; Sullivan, S.J.; Murphy, I.; Mihalik, J.K.R.; Dougherty, B.; McCrory, G. Square peg round hole—Time to customise a concussion assessment tools for primary care: The New Zealand experience? A call for a GP-SCAT. Brain Inj. 2020, 34, 1794–1795.
9. Alsalaheen, B.; Stockdale, K.; Pechumer, D.; Broglio, S.P. Validity of the Immediate Post Concussion Assessment and Cognitive Testing (ImPACT). Sports Med. 2016, 46, 1487–1501.
10. Rauchman, S.H.; Pinkhasov, A.; Gulkarov, S.; Placantonakis, D.G.; De Leon, J.; Reiss, A.B. Maximizing the Clinical Value of Blood-Based Biomarkers for Mild Traumatic Brain Injury. Diagnostics 2023, 13, 3330.
11. Akhand, O.; Balcer, L.J.; Galetta, S.L. Assessment of vision in concussion. Curr. Opin. Neurol. 2019, 32, 68–74.
12. King, A. The Proposed King-Devick Test and Its Relation to the Pierce Saccade Test and Reading Levels; The Carl Shepherd Memorial Library, Illinois College of Optometry: Chicago, IL, USA, 1976.
13. Oride, M.K.; Marutani, J.K.; Rouse, M.W.; DeLand, P.N. Reliability study of the Pierce and King-Devick saccade tests. Am. J. Optom. Physiol. Opt. 1986, 63, 419–424.
14. Galetta, K.M.; Brandes, L.E.; Maki, K.; Dziemianowicz, M.S.; Laudano, E.; Allen, M.; Lawler, K.; Sennett, B.; Wiebe, D.; Devick, S.; et al. The King-Devick test and sports-related concussion: Study of a rapid visual screening tool in a collegiate cohort. J. Neurol. Sci. 2011, 309, 34–39.
15. Kontos, A.P.; Mucha, A.; Collins, M.W.; Elbin, R.J.; Troutman-Enseki, C.; DeWolf, R.; Furman, J. Brief Vestibular and Ocular Motor Screening (VOMS) Assessment: Preliminary Findings in Patients following Sport-related Concussion. Med. Sci. Sports Exerc. 2014, 46, 279–280.
16. Broglio, S.P.; McCrea, M.; McAllister, T.; Harezlak, J.; Katz, B.; Hack, D.; Hainline, B. A National Study on the Effects of Concussion in Collegiate Athletes and US Military Service Academy Members: The NCAA-DoD Concussion Assessment, Research and Education (CARE) Consortium Structure and Methods. Sports Med. 2017, 47, 1437–1451.
17. Walshe, A.; Daly, E.; Pearce, A.J.; Ryan, L. In-Season Test–Retest Reliability of Visual Smooth-Pursuit (EyeGuide Focus) Baseline Assessment in Female and Male Field-Sport Athletes. J. Funct. Morphol. Kinesiol. 2024, 9, 46.
18. Pearce, A.J.; Daly, E.; Ryan, L.; King, D. Reliability of a Smooth Pursuit Eye-Tracking System (EyeGuide Focus) in Healthy Adolescents and Adults. J. Funct. Morphol. Kinesiol. 2023, 8, 83.
19. Bailey, C. Quantitative Analysis in Exercise and Sport Science; University of North Texas Libraries: Denton, TX, USA, 2021.
20. Swift, A.; Heale, R.; Twycross, A. What are sensitivity and specificity? Evid. Based Nurs. 2020, 23, 2–4.
21. Nahm, F.S. Receiver operating characteristic curve: Overview and practical use for clinicians. Korean J. Anesthesiol. 2022, 75, 25–36.
22. Kontos, A.P.; Monti, K.; Eagle, S.R.; Thomasma, E.; Holland, C.L.; Thomas, D.; Bitzer, H.B.; Mucha, A.; Collins, M.W. Test-retest reliability of the Vestibular Ocular Motor Screening (VOMS) tool and modified Balance Error Scoring System (mBESS) in US military personnel. J. Sci. Med. Sport 2021, 24, 264–268.
23. Peters, M.D.J.; Marnie, C.; Tricco, A.C.; Pollock, D.; Munn, Z.; Alexander, L.; McInerney, P.; Godfrey, C.M.; Khalil, H. Updated methodological guidance for the conduct of scoping reviews. JBI Evid. Synth. 2020, 18, 2119–2126.
24. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.J.; Horsley, T.; Weeks, L.; et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann. Intern. Med. 2018, 169, 467–473.
25. Snegireva, N.; Derman, W.; Patricios, J.; Welman, K.E. Eye tracking technology in sports-related concussion: A systematic review and meta-analysis. Physiol. Meas. 2018, 39, 12tr01.
26. Scherer, R.W.; Saldanha, I.J. How should systematic reviewers handle conference abstracts? A view from the trenches. Syst. Rev. 2019, 8, 264.
27. Mokkink, L.B.; Prinsen, C.A.; Bouter, L.M.; Vet, H.C.; Terwee, C.B. The COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) and how to select an outcome measurement instrument. Braz. J. Phys. Ther. 2016, 20, 105–113.
28. Whiting, P.F.; Rutjes, A.W.S.; Westwood, M.E.; Mallett, S.; Deeks, J.J.; Reitsma, J.B.; Leeflang, M.M.G.; Sterne, J.A.C.; Bossuyt, P.M.M. QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies. Ann. Intern. Med. 2011, 155, 529–536.
29. Koo, T.K.; Li, M.Y. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J. Chiropr. Med. 2016, 15, 155–163.
30. Mukaka, M.M. Statistics corner: A guide to appropriate use of correlation coefficient in medical research. Malawi Med. J. 2012, 24, 69–71.
31. Broglio, S.P.; Katz, B.P.; Zhao, S.; McCrea, M.; McAllister, T.; Investigators, C.C. Test-Retest Reliability and Interpretation of Common Concussion Assessment Tools: Findings from the NCAA-DoD CARE Consortium. Sports Med. 2018, 48, 1255–1268.
32. Harmon, K.G.; Whelan, B.M.; Aukerman, D.F.; Bohr, A.D.; Nerrie, J.M.; Elkinton, H.A.; Holliday, M.; Poddar, S.K.; Chrisman, S.P.D.; McQueen, M.B. Diagnostic accuracy and reliability of sideline concussion evaluation: A prospective, case-controlled study in college athletes comparing newer tools and established tests. Br. J. Sports Med. 2022, 56, 144–150.
33. Taber, K.S. The Use of Cronbach’s Alpha When Developing and Reporting Research Instruments in Science Education. Res. Sci. Educ. 2018, 48, 1273–1296.
34. Plante, E.; Vance, R. Selection of preschool language tests: A data-based approach. Lang. Speech Hear. Serv. Sch. 1994, 25, 15–24.
35. Hosmer, D.; Lemeshow, S. Assessing the Fit of the Model. In Applied Logistic Regression; Wiley: Hoboken, NJ, USA, 2000; p. 162.
36. Le, R.K.; Ortega, J.; Chrisman, S.P.; Kontos, A.P.; Buckley, T.A.; Kaminski, T.W.; Meyer, B.P.; Clugston, J.R.; Goldman, J.T.; McAllister, T.; et al. King-Devick Sensitivity and Specificity to Concussion in Collegiate Athletes. J. Athl. Train. 2023, 58, 97–105.
37. Mandrekar, J.N. Receiver operating characteristic curve in diagnostic test assessment. J. Thorac. Oncol. 2010, 5, 1315–1316.
38. Ferris, L.M.; Kontos, A.P.; Eagle, S.R.; Elbin, R.J.; Collins, M.W.; Mucha, A.; McAllister, T.W.; Broglio, S.P.; McCrea, M.; Pasquina, P.F.; et al. Utility of VOMS, SCAT3, and ImPACT Baseline Evaluations for Acute Concussion Identification in Collegiate Athletes: Findings from the NCAA-DoD Concussion Assessment, Research and Education (CARE) Consortium. Am. J. Sports Med. 2022, 50, 1106–1119.
39. Yevseyenkov, V.; Kaupke, K.; Lebsock, S.; Kaminsky, M. Concussion Screening in High School Football Using the King-Devick Test. Investig. Ophthalmol. Vis. Sci. 2013, 54, 2344.
40. Guzowski, N.; McCrea, M.A.; Nelson, L.D. Head-to-Head Comparison of Popular Clinical Assessments Tools Used in the Management of Sport-Related Concussion (SRC). J. Neurotrauma 2017, 34, A3.
41. Worts, P.R.; Mason, J.R.; Kontos, A.P.; Schatz, P.; Burkhart, S.O. Oculomotor Fatigue Is Present in Some Adolescent Student-athletes Following Sport-related Concussion. Med. Sci. Sports Exerc. 2020, 52, 309–310.
42. Elbin, R.J.; Schatz, P.; Möhler, S.; Covassin, T.; Herrington, J.; Kontos, A.P. Establishing Test-Retest Reliability and Reliable Change for the King-Devick Test in High School Athletes. Clin. J. Sport Med. 2021, 31, e235–e239.
43. Hecimovich, M.; King, D.; Dempsey, A.R.; Murphy, M. The King-Devick test is a valid and reliable tool for assessing sport-related concussion in Australian football: A prospective cohort study. J. Sci. Med. Sport 2018, 21, 1004–1007.
44. Worts, P.R.; Schatz, P.; Burkhart, S.O. Test Performance and Test-Retest Reliability of the Vestibular/Ocular Motor Screening and King-Devick Test in Adolescent Athletes During a Competitive Sport Season. Am. J. Sports Med. 2018, 46, 2004–2010.
45. Naidu, D.; Borza, C.; Kobitowich, T.; Mrazik, M. Sideline Concussion Assessment: The King-Devick Test in Canadian Professional Football. J. Neurotrauma 2018, 35, 2283–2286.
46. Galetta, K.M.; Barrett, J.; Allen, M.; Madda, F.; Delicata, D.; Tennant, A.T.; Branas, C.C.; Maguire, M.G.; Messner, L.V.; Devick, S.; et al. The King-Devick test as a determinant of head trauma and concussion in boxers and MMA fighters. Neurology 2011, 76, 1456–1462.
47. Leong, D.F.; Balcer, L.J.; Galetta, S.L.; Liu, Z.; Master, C.L. The King-Devick test as a concussion screening tool administered by sports parents. J. Sports Med. Phys. Fit. 2014, 54, 70–77.
48. Vartiainen, M.V.; Holm, A.; Peltonen, K.; Luoto, T.M.; Iverson, G.L.; Hokkanen, L. King-Devick test normative reference values for professional male ice hockey players. Scand. J. Med. Sci. Sports 2015, 25, e327–e330.
49. White-Schwoch, T.; Krizman, J.; McCracken, K.; Burgess, J.K.; Thompson, E.C.; Nicol, T.; LaBella, C.R.; Kraus, N. Performance on auditory, vestibular, and visual tests is stable across two seasons of youth tackle football. Brain Inj. 2020, 34, 236–244.
  50. King, D.; Hume, P.; Gissane, C.; Clark, T. Use of the King-Devick test for sideline concussion screening in junior rugby league. J. Neurol. Sci. 2015, 357, 75–79. [Google Scholar] [CrossRef]
  51. Walsh, D.V.; Capó-Aponte, J.E.; Beltran, T.; Cole, W.R.; Ballard, A.; Dumayas, J.Y. Assessment of the King-Devick® (KD) test for screening acute mTBI/concussion in warfighters. J. Neurol. Sci. 2016, 370, 305–309. [Google Scholar] [CrossRef]
  52. King, D.; Gissane, C.; Hume, P.A.; Flaws, M. The King-Devick test was useful in management of concussion in amateur rugby union and rugby league in New Zealand. J. Neurol. Sci. 2015, 351, 58–64. [Google Scholar] [CrossRef]
  53. King, D.; Clark, T.; Gissane, C. Use of a rapid visual screening tool for the assessment of concussion in amateur rugby league: A pilot study. J. Neurol. Sci. 2012, 320, 16–21. [Google Scholar] [CrossRef]
  54. King, D.; Brughelli, M.; Hume, P.; Gissane, C. Concussions in amateur rugby union identified with the use of a rapid visual screening tool. J. Neurol. Sci. 2013, 326, 59–63. [Google Scholar] [CrossRef]
  55. Moran, R.N.; Covassin, T. King-Devick test normative reference values and internal consistency in youth football and soccer athletes. Scand. J. Med. Sci. Sports 2018, 28, 2686–2690. [Google Scholar] [CrossRef] [PubMed]
  56. Galetta, K.M.; Morganroth, J.; Moehringer, N.; Mueller, B.; Hasanaj, L.; Webb, N.; Civitano, C.; Cardone, D.A.; Silverio, A.; Galetta, S.L.; et al. Adding Vision to Concussion Testing: A Prospective Study of Sideline Testing in Youth and Collegiate Athletes. J. Neuro-Ophthalmol. 2015, 35, 235–241. [Google Scholar] [CrossRef]
  57. Dhawan, P.S.; Leong, D.; Tapsell, L.; Starling, A.J.; Galetta, S.L.; Balcer, L.J.; Overall, T.L.; Adler, J.S.; Halker-Singh, R.B.; Vargas, B.B.; et al. King-Devick Test identifies real-time concussion and asymptomatic concussion in youth athletes. Neurol.-Clin. Pract. 2017, 7, 464–473. [Google Scholar] [CrossRef] [PubMed]
  58. Fuller, G.W.; Cross, M.J.; Stokes, K.A.; Kemp, S.P.T. King-Devick concussion test performs poorly as a screening tool in elite rugby union players: A prospective cohort study of two screening tests versus a clinical reference standard. Br. J. Sports Med. 2019, 53, 1526–1532. [Google Scholar] [CrossRef] [PubMed]
  59. Hecimovich, M.; King, D.; Dempsey, A.; Gittins, M.; Murphy, M. In situ use of the King-Devick eye tracking test and changes seen with sport-related concussion: Saccadic and blinks counts. Physician Sportsmed. 2019, 47, 78–84. [Google Scholar] [CrossRef]
  60. Hecimovich, M.; Murphy, M.; Chivers, P.; Stock, P. Evaluation and Utility of the King-Devick with Integrated Eye Tracking as a Diagnostic Tool for Sport-Related Concussion. Orthop. J. Sports Med. 2022, 10, 23259671221142255. [Google Scholar] [CrossRef] [PubMed]
  61. Molloy, J.H.; Murphy, I.; Gissane, C. The King-Devick (K-D) test and concussion diagnosis in semi-professional rugby union players. J. Sci. Med. Sport 2017, 20, 708–711. [Google Scholar] [CrossRef]
  62. Leong, D.F.; Balcer, L.J.; Galetta, S.L.; Evans, G.; Gimre, M.; Watt, D. The King-Devick test for sideline concussion screening in collegiate football. J. Optom. 2015, 8, 131–139. [Google Scholar] [CrossRef]
  63. Alsalaheen, B.; Haines, J.; Yorke, A.; Diebold, J. King-Devick Test reference values and associations with balance measures in high school American football players. Scand. J. Med. Sci. Sports 2016, 26, 235–239. [Google Scholar] [CrossRef]
  64. Smolyansky, V.; Morettin, C.E.; Hitzman, S.A.; Beckerman, S. Test-Retest Reliability of the King-Devick Test in Elite Junior Olympic Athletes. Optom. Vis. Perform. 2016, 4, 147–154. [Google Scholar]
  65. Oberlander, T.J.; Olson, B.L.; Weidauer, L. Test-Retest Reliability of the King-Devick Test in an Adolescent Population. J. Athl. Train. 2017, 52, 439–445. [Google Scholar] [CrossRef]
  66. Weise, K.K.; Swanson, M.W.; Penix, K.; Hale, M.H.; Ferguson, D. King-Devick and Pre-season Visual Function in Adolescent Athletes. Optom. Vis. Sci. 2017, 94, 89–95. [Google Scholar] [CrossRef]
  67. Breedlove, K.M.; Ortega, J.D.; Kaminski, T.W.; Harmon, K.G.; Schmidt, J.D.; Kontos, A.P.; Clugston, J.R.; Chrisman, S.P.D.; McCrea, M.A.; McAllister, T.W.; et al. King-Devick Test Reliability in National Collegiate Athletic Association Athletes: A National Collegiate Athletic Association-Department of Defense Concussion Assessment, Research and Education Report. J. Athl. Train. 2019, 54, 1241–1246. [Google Scholar] [CrossRef] [PubMed]
  68. King, D.; Hume, P.A.; Clark, T.N.; Pearce, A.J. Use of the King-Devick test for the identification of concussion in an amateur domestic women’s rugby union team over two competition seasons in New Zealand. J. Neurol. Sci. 2020, 418, 117162. [Google Scholar] [CrossRef] [PubMed]
  69. Zonner, S.W.; Ejima, K.; Fulgar, C.C.; Charleston, C.N.; Huibregtse, M.E.; Bevilacqua, Z.W.; Kawata, K. Oculomotor Response to Cumulative Subconcussive Head Impacts in US High School Football Players: A Pilot Longitudinal Study. JAMA Ophthalmol. 2019, 137, 265–270. [Google Scholar] [CrossRef]
  70. Kalbfell, R.M.; Rettke, D.J.; Mackie, K.; Ejima, K.; Harezlak, J.; Alexander, I.L.; Wager-Miller, J.; Johnson, B.D.; Newman, S.D.; Kawata, K. The modulatory role of cannabis use in subconcussive neural injury. iScience 2023, 26, 106948. [Google Scholar] [CrossRef]
  71. Zuidema, T.R.; Bazarian, J.J.; Kercher, K.A.; Mannix, R.; Kraft, R.H.; Newman, S.D.; Ejima, K.; Rettke, D.J.; Macy, J.T.; Steinfeldt, J.A.; et al. Longitudinal Associations of Clinical and Biochemical Head Injury Biomarkers with Head Impact Exposure in Adolescent Football Players. JAMA Netw. Open 2023, 6, e2316601. [Google Scholar] [CrossRef]
  72. Kawata, K.; Rubin, L.H.; Jong Hyun, L.; Sim, T.; Takahagi, M.; Szwanki, V.; Bellamy, A.; Darvish, K.; Assari, S.; Henderer, J.D.; et al. Association of Football Subconcussive Head Impacts with Ocular Near Point of Convergence. JAMA Ophthalmol. 2016, 134, 763–769. [Google Scholar] [CrossRef]
  73. Kawata, K.; Tierney, R.; Phillips, J.; Jeka, J.J. Effect of Repetitive Sub-concussive Head Impacts on Ocular Near Point of Convergence. Int. J. Sports Med. 2016, 37, 405–410. [Google Scholar] [CrossRef]
  74. Pearce, K.L.; Sufrinko, A.; Lau, B.C.; Henry, L.; Collins, M.W.; Kontos, A.P. Near Point of Convergence After a Sport-Related Concussion: Measurement Reliability and Relationship to Neurocognitive Impairment and Symptoms. Am. J. Sports Med. 2015, 43, 3055–3061. [Google Scholar] [CrossRef] [PubMed]
  75. Heick, J.D.; Bay, C. Determining Near Point of Convergence: Exploring a Component of the Vestibular/Ocular Motor Screen Comparing Varied Target Sizes. Int. J. Sports Phys. Ther. 2021, 16, 21–30. [Google Scholar] [CrossRef]
  76. Aloosh, M.; Leclerc, S.; Long, S.; Zhong, G.; Brophy, J.M.; Schuster, T.; Steele, R.; Shrier, I. One-year test-retest reliability of ten vision tests in Canadian athletes. F1000Research 2019, 8, 1032. [Google Scholar] [CrossRef]
  77. Moran, R.N.; Bretzin, A.C. Long-term test-retest reliability of the vestibular/ocular motor screening for concussion in child athletes: A preliminary study. Appl. Neuropsychol. Child 2023, 1–6. [Google Scholar] [CrossRef] [PubMed]
  78. Kontos, A.P.; Sufrinko, A.; Elbin, R.J.; Puskar, A.; Collins, M.W. Reliability and Associated Risk Factors for Performance on the Vestibular/Ocular Motor Screening (VOMS) Tool in Healthy Collegiate Athletes. Am. J. Sports Med. 2016, 44, 1400–1406. [Google Scholar] [CrossRef]
  79. Moran, R.N.; Covassin, T.; Elbin, R.J.; Gould, D.; Nogle, S. Reliability and Normative Reference Values for the Vestibular/Ocular Motor Screening (VOMS) Tool in Youth Athletes. Am. J. Sports Med. 2018, 46, 1475–1480. [Google Scholar] [CrossRef] [PubMed]
  80. Iverson, G.L.; Cook, N.E.; Howell, D.R.; Collings, L.J.; Kusch, C.; Sun, J.; Virji-Babul, N.; Panenka, W.J. Preseason Vestibular Ocular Motor Screening in Children and Adolescents. Clin. J. Sport Med. 2021, 31, E188–E192. [Google Scholar] [CrossRef] [PubMed]
  81. Ferris, L.M.; Kontos, A.P.; Eagle, S.R. Predictive Accuracy of the Sport Concussion Assessment Tool 3 and Vestibular/Ocular-Motor Screening, Individually and In Combination: A National Collegiate Athletic Association-Department of Defense Concussion Assessment, Research and Education Consortium Analysis. Am. J. Sports Med. 2021, 49, 1040–1048, Corrigendum to Am. J. Sports Med. 2021, 49, NP66–NP67. [Google Scholar] [CrossRef]
  82. Kontos, A.P.; Eagle, S.R.; Marchetti, G.; Sinnott, A.; Mucha, A.; Port, N.; Ferris, L.M.; Elbin, R.J.; Clugston, J.R.; Ortega, J.; et al. Discriminative Validity of Vestibular Ocular Motor Screening in Identifying Concussion among Collegiate Athletes: A National Collegiate Athletic Association-Department of Defense Concussion Assessment, Research, and Education Consortium Study. Am. J. Sports Med. 2021, 49, 2211–2217. [Google Scholar] [CrossRef]
  83. Ferris, L.M.; Kontos, A.P.; Eagle, S.R.; Elbin, R.J.; Clugston, J.R.; Ortega, J.; Port, N.L. Optimizing VOMS for identifying acute concussion in collegiate athletes: Findings from the NCAA-DoD CARE consortium. Vis. Res. 2022, 200, 108081. [Google Scholar] [CrossRef]
  84. Knell, G.; Caze, T.; Burkhart, S.O. Evaluation of the vestibular and ocular motor screening (VOMS) as a prognostic tool for protracted recovery following paediatric sports-related concussion. BMJ Open Sport Exerc. Med. 2021, 7, e000970. [Google Scholar] [CrossRef]
  85. Elbin, R.J.; Eagle, S.R.; Marchetti, G.F.; Anderson, M.; Schatz, P.; Womble, M.N.; Stephenson, K.; Covassin, T.; Collins, M.W.; Mucha, A.; et al. Using change scores on the vestibular ocular motor screening (VOMS) tool to identify concussion in adolescents. Appl. Neuropsychol.-Child 2022, 11, 591–597. [Google Scholar] [CrossRef]
  86. McDevitt, J.; Appiah-Kubi, K.O.; Tierney, R.; Wright, W.G. Vestibular and Oculomotor Assessments May Increase Accuracy of Subacute Concussion Assessment. Int. J. Sports Med. 2016, 37, 738–747. [Google Scholar] [CrossRef] [PubMed]
  87. Büttner, F.; Howell, D.R.; Doherty, C.; Blake, C.; Ryan, J.; Delahunt, E. Clinical Detection and Recovery of Vestibular and Oculomotor Impairments among Amateur Athletes Following Sport-Related Concussion: A Prospective, Matched-Cohort Study. J. Head Trauma Rehabil. 2021, 36, 87–95. [Google Scholar] [CrossRef] [PubMed]
  88. Anderson, M.; Tomczyk, C.P.; Zynda, A.J.; Pollard-McGrandy, A.; Loftin, M.C.; Covassin, T. Preliminary Baseline Vestibular Ocular Motor Screening Scores in Pediatric Soccer Athletes. J. Sport Rehabil. 2023, 33, 5–11. [Google Scholar] [CrossRef] [PubMed]
  89. DuPrey, K.M.; Webner, D.; Ellis, J.T.; Lyons, A.; Cronholm, P.F.; Kucuk, C.H. Convergence Insufficiency Identifies Athletes at Risk of Prolonged Recovery from Sport-Related Concussion. Am. J. Sports Med. 2017, 45, 2388–2393. [Google Scholar] [CrossRef] [PubMed]
  90. Del Rossi, G. Examination of Near Point of Convergence Scores in High-School Athletes: Implications for Identifying Binocular Vision Dysfunction after Concussion Injury. Clin. J. Sport Med. 2022, 32, E451–E456. [Google Scholar] [CrossRef]
  91. Cochrane, G.D.; Christy, J.B.; Almutairi, A.; Busettini, C.; Swanson, M.W.; Weise, K.K. Visuo-oculomotor Function and Reaction Times in Athletes with and without Concussion. Optom. Vis. Sci. 2019, 96, 256–265. [Google Scholar] [CrossRef]
  92. Sundaram, V.; Ding, V.Y.; Desai, M.; Lumba-Brown, A.; Little, J. Reliable sideline ocular-motor assessment following exercise in healthy student athletes. J. Sci. Med. Sport 2019, 22, 1287–1291. [Google Scholar] [CrossRef]
  93. Snegireva, N.; Derman, W.; Patricios, J.; Welman, K. Eye tracking to assess concussions: An intra-rater reliability study with healthy youth and adult athletes of selected contact and collision team sports. Exp. Brain Res. 2021, 239, 3289–3302. [Google Scholar] [CrossRef]
  94. Scherer, M.R.; Claro, P.J.; Heaton, K.J. Sleep Deprivation Has No Effect on Dynamic Visual Acuity in Military Service Members Who Are Healthy. Phys. Ther. 2013, 93, 1185–1196. [Google Scholar] [CrossRef] [PubMed]
  95. Kaufman, D.R.; Puckett, M.J.; Smith, M.J.; Wilson, K.S.; Cheema, R.; Landers, M.R. Test-retest reliability and responsiveness of gaze stability and dynamic visual acuity in high school and college football players. Phys. Ther. Sport 2014, 15, 181–188. [Google Scholar] [CrossRef] [PubMed]
  96. Patterson, J.N.; Murphy, A.M.; Honaker, J.A. Examining Effects of Physical Exertion on the Dynamic Visual Acuity Test in Collegiate Athletes. J. Am. Acad. Audiol. 2017, 28, 36–45. [Google Scholar] [CrossRef] [PubMed]
  97. Feller, C.N.; Goldenberg, M.; Asselin, P.D.; Merchant-Borna, K.; Abar, B.; Jones, C.M.C.; Mannix, R.; Kawata, K.; Bazarian, J.J. Classification of Comprehensive Neuro-Ophthalmologic Measures of Postacute Concussion. JAMA Netw. Open 2021, 4, e210599. [Google Scholar] [CrossRef]
  98. Gardner, A.; Shores, E.A.; Batchelor, J.; Honan, C.A. Diagnostic efficiency of ImPACT and CogSport in concussed rugby union players who have not undergone baseline neurocognitive testing. Appl. Neuropsychol. Adult 2012, 19, 90–97. [Google Scholar] [CrossRef]
  99. Barr, W.B.; Guskiewicz, K.; Hammeke, T.A.; LaRoche, A.A.; Lerner, E.B.; McCrea, M.A.; Nelson, L.D.; Pfaller, A.Y.; Randolph, C. Prospective, Head-to-Head Study of Three Computerized Neurocognitive Assessment Tools (CNTs): Reliability and Validity for the Assessment of Sport-Related Concussion. J. Int. Neuropsychol. Soc. 2016, 22, 24–37. [Google Scholar] [CrossRef]
  100. Brett, B.L.; Smyk, N.; Solomon, G.; Baughman, B.C.; Schatz, P. Long-term Stability and Reliability of Baseline Cognitive Assessments in High School Athletes Using ImPACT at 1-, 2-, and 3-year Test-Retest Intervals. Arch. Clin. Neuropsychol. 2016, 31, 904–914. [Google Scholar] [CrossRef]
  101. Tsushima, W.T.; Siu, A.M.; Pearce, A.M.; Zhang, G.; Oshiro, R.S. Two-year Test-Retest Reliability of ImPACT in High School Athletes. Arch. Clin. Neuropsychol. 2016, 31, 105–111. [Google Scholar] [CrossRef]
  102. Sufrinko, A.M.; Marchetti, G.F.; Cohen, P.E.; Elbin, R.J.; Re, V.; Kontos, A.P. Using Acute Performance on a Comprehensive Neurocognitive, Vestibular, and Ocular Motor Assessment Battery to Predict Recovery Duration After Sport-Related Concussions. Am. J. Sports Med. 2017, 45, 1187–1194. [Google Scholar] [CrossRef]
  103. Master, C.L.; Podolak, O.E.; Ciuffreda, K.J.; Metzger, K.B.; Joshi, N.R.; McDonald, C.C.; Margulies, S.S.; Grady, M.F.; Arbogast, K.B. Utility of Pupillary Light Reflex Metrics as a Physiologic Biomarker for Adolescent Sport-Related Concussion. JAMA Ophthalmol. 2020, 138, 1135–1141. [Google Scholar] [CrossRef]
  104. Howell, D.R.; Brilliant, A.N.; Master, C.L.; Meehan, W.P. Reliability of Objective Eye-Tracking Measures among Healthy Adolescent Athletes. Clin. J. Sport Med. 2020, 30, 444–450. [Google Scholar] [CrossRef] [PubMed]
  105. Storey, E.P.; Corwin, D.J.; McDonald, C.C.; Arbogast, K.B.; Metzger, K.B.; Pfeiffer, M.R.; Margulies, S.S.; Grady, M.F.; Master, C.L. Assessment of Saccades and Gaze Stability in the Diagnosis of Pediatric Concussion. Clin. J. Sport Med. 2022, 32, 108–113. [Google Scholar] [CrossRef] [PubMed]
  106. Williams, J.M.; Langdon, J.L.; McMillan, J.L.; Buckley, T.A. English professional football players concussion knowledge and attitude. J. Sport Health Sci. 2016, 5, 197–204. [Google Scholar] [CrossRef]
  107. O’Connell, E.; Molloy, M.G. Concussion in rugby: Knowledge and attitudes of players. Ir. J. Med. Sci. 2016, 185, 521–528. [Google Scholar] [CrossRef] [PubMed]
  108. Postawa, A.P.; Whyte, E.F.; O’Connor, S. Are Irish Athletic Therapy Students Confident in Concussion Assessment and Management? A Cross-Sectional Study of Final Year Students’ Self-Efficacy. Int. J. Athl. Ther. Train. 2024, 29, 141–148. [Google Scholar] [CrossRef]
  109. Randolph, C.; McCrea, M.; Barr, W.B. Is neuropsychological testing useful in the management of sport-related concussion? J. Athl. Train. 2005, 40, 139–152. [Google Scholar] [PubMed]
Figure 1. Flow chart of the preferred reporting items for systematic reviews and meta-analyses extension for scoping reviews (PRISMA-ScR).