Systematic Review

The Potential of Automated Assessment of Cognitive Function Using Non-Neuroimaging Data: A Systematic Review

by Eyitomilayo Yemisi Babatope 1,*, Alejandro Álvaro Ramírez-Acosta 2, José Alberto Avila-Funes 3 and Mireya García-Vázquez 1,*

1 Instituto Politécnico Nacional, Centro de Investigación y Desarrollo de Tecnología Digital, Tijuana 22435, Mexico
2 MIRAL R&D&I Multimedia, San Diego, CA 92154, USA
3 Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán—INCMNSZ, Mexico City 14080, Mexico
* Authors to whom correspondence should be addressed.
J. Clin. Med. 2024, 13(23), 7068; https://doi.org/10.3390/jcm13237068
Submission received: 9 October 2024 / Revised: 15 November 2024 / Accepted: 19 November 2024 / Published: 22 November 2024
(This article belongs to the Section Clinical Neurology)

Abstract:
Background/Objectives: The growing incidence of cognitive impairment among older adults has a significant impact on individuals, family members, caregivers, and society. Current conventional cognitive assessment tools face several limitations. Recent evidence suggests that automating cognitive assessment holds promise, potentially resulting in earlier diagnosis, timely intervention, improved patient outcomes, and higher chances of response to treatment. Despite these advantages and ongoing technological advancements, automated cognitive assessment has yet to gain widespread use, especially in low- and lower-middle-income countries. This review highlights the potential of automated cognitive assessment tools and presents an overview of existing tools. Methods: This review includes 87 studies carried out with non-neuroimaging data, alongside their performance metrics. Results: The identified articles automated the cognitive assessment process and were grouped into five categories based on either the tools' design or the data-analysis approach: game-based tools, digital versions of conventional tools, original computerized tests and batteries, virtual reality/wearable sensor/smart home technologies, and artificial intelligence-based (AI-based) tools. Each category is explained, and its strengths and limitations are evaluated to strengthen adoption in clinical practice. Conclusions: The comparative metrics of the conventional and automated assessment approaches suggest that the automated approach is a strong alternative to the conventional approach. Additionally, the results of the review show that the use of automated assessment tools is more prominent in countries ranked as high-income and upper-middle-income. This trend merits further social and economic study to understand the impact of this global reality.

1. Introduction

Over the years, there has been an increase in the use of cognitive screening tools, particularly among older adults, due to the need to provide better management for individuals with impaired cognition. Cognitive impairment is a major symptom of neurodegenerative diseases and can vary in severity, from mild to severe, as seen in dementia [1]. Consequently, cognitive screening in primary care centers [2] plays a crucial role in the early detection of cognitive impairment [3,4], thereby enhancing early intervention, management, and patient outcomes [4]. Common causes of cognitive impairment include neurodegenerative diseases such as Alzheimer's disease (AD), Parkinson's disease, vascular dementia, and brain injury, among others [1,5]. Treatment for cognitive impairment depends on its cause, and currently, no curative pharmacological treatments are available [5].
In clinical settings, cognitive assessment is conducted through a combination of interviews, standardized cognitive tests using screening tools like the Mini-Mental State Examination (MMSE), observational assessment, and laboratory and imaging tests [6]. The current cognitive screening tools, predominantly pen-paper-based, must be administered by highly skilled professionals or specialists and are not without challenges. These challenges underscore the need to automate cognitive assessment to improve efficiency and accessibility. Given technological advancements [7] and the increased effectiveness observed in healthcare, automating cognitive screening is a promising solution that can assist and support healthcare professionals in various tasks, with no intention of replacing them [8].
Many studies have explored the use of technology in assessing cognitive function with improved precision and scalability, as seen in technological designs such as games [9,10], devices such as computerized test batteries [11,12,13], and wearable technologies [14]. Others have applied artificial intelligence algorithms to medical data analysis to predict cognitive decline [15,16]. Previous reviews have analyzed the application of AI and machine learning to dementia-related data [17], the standardization and automation of testing [18], and predictive models for Alzheimer's disease (AD) risk using public medical databases like ADNI [19]. Other studies explored the state of computer-based cognitive testing [20,21] and digital cognitive assessment [22,23]. Additionally, some studies compared computerized tests and pen-paper-based tests in detecting MCI and dementia [24] and examined primary care physicians' views on computer-based assessment [25]. This review article highlights the potential of automated cognitive assessment by presenting an overview of existing tools, explaining the diverse mobile and digital applications of technology and AI, ranging from digital neuropsychological test batteries to technology-based tests, wearable and nonwearable devices, and computer and smartphone applications. In addition, we present a comparative analysis of the conventional and automated assessment approaches, briefly discussing the strengths (by stating the performance metrics) and limitations of both, with an emphasis on the potential contributions of automated cognitive assessment tools for healthcare providers, patients, caregivers, and society at large. The aim is to build trust in these automated approaches among healthcare professionals.
The rest of the paper is organized as follows: Section 2 details the article selection process, search strategy, and inclusion and exclusion criteria used in our review, and provides background information on key terms and common conventional assessment tools. Section 3 presents our analysis of existing automated tools and the five categories into which we grouped them; in this section, we also compare conventional and automated tools and discuss their advantages. Section 4 provides the discussion and analysis of our findings, the limitations of the review, and our opinion of the tools. We draw conclusions in Section 5.

2. Methods

This systematic review was not registered in a public registry. The protocol was developed following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, as shown in Figure 1. Articles were gathered from public and academic repositories, and studies discussing automated cognitive assessment across all ranges of diseases were considered. In addition, we searched the references of selected articles and performed a manual search for additional papers.

2.1. Search Strategy

Two central data repositories, Web of Science and PubMed, which cover biomedical, healthcare, and interdisciplinary research and have a well-rounded collection of high-quality, high-impact, peer-reviewed articles, were searched for studies of interest using a broad range of keywords. The keywords used in our search include: “Computerized trail making test”, “Electronic trail making test”, “comparison of screening tools”, “Cognitive impairment”, “automated assessment”, “Automated assessment of cognition”, “Automated assessment of cognitive impairment”, “Automated assessment of cognitive dysfunction”, “Computerized assessment of cognition”, “Computerized assessment of cognitive impairment”, “Computerized assessment of cognitive dysfunction”, “Automatic cognitive screening”, “Convolutional neural networks to predict cognitive impairment”, “Convolutional neural networks to predict cognitive dysfunction”, “Convolutional neural networks to predict cognition”, “Deep learning to predict cognitive impairment”, “Deep learning to predict cognitive dysfunction”, “Deep learning to predict cognition”, “Machine learning to predict cognitive impairment”, “Machine learning to predict cognitive dysfunction”, “Machine learning to predict cognition”, “Artificial intelligence to predict cognition”, “Artificial intelligence to predict cognitive impairment”, “Artificial intelligence to predict cognitive dysfunction”, “Digital cognitive assessment”, and “Computerized cognitive testing”. Relevant studies were retrieved and selected for this review based on these keywords and the date range (January 2000–June 2024). Only articles published in English were considered. The retrieved articles were screened based on titles, abstracts, and full-text availability, and duplicates were removed. The references of the included articles were consulted in some cases to strengthen and support the objective of this article. As shown in Figure 1, the final number of articles included in this study is 87.
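For illustration, the sketch below consolidates a subset of these keywords into a single boolean query of the kind such a search might use. This is a hypothetical reconstruction: the review does not report the exact query syntax used, and the keyword subset and PubMed date-of-publication filter shown here are our assumptions.

```python
# Hypothetical reconstruction of the search strategy as one boolean query.
# The keyword subset and the PubMed [dp] (date of publication) filter are
# illustrative; the review does not state the exact query string it used.
keywords = [
    "Computerized trail making test",
    "Electronic trail making test",
    "Automated assessment of cognitive impairment",
    "Computerized assessment of cognition",
    "Machine learning to predict cognitive impairment",
    "Deep learning to predict cognitive impairment",
    "Digital cognitive assessment",
    "Computerized cognitive testing",
]
term_block = " OR ".join(f'"{kw}"' for kw in keywords)
date_filter = '("2000/01/01"[dp] : "2024/06/30"[dp])'
query = f"({term_block}) AND {date_filter}"
print(query)
```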

2.2. Exclusion and Inclusion Criteria

A systematic approach was used to select studies that fit the search criteria, and articles meeting any of the following criteria were excluded from this review:
  • Studies conducted in a language other than English.
  • Studies that have focused on automated cognitive assessment using medical imaging data, such as magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT) scans.
  • Studies assessing cognitive impairment associated with diseases, such as HIV, cancer, stroke, peri- and post-operative procedures, etc.
  • Studies of cognitive assessment in children, adolescents, or nonhuman participants (for example, monkeys and chimpanzees).
  • Articles whose full text was not freely available online.
  • Studies that discuss detection or diagnosis within the scope of conversion from MCI to AD.
  • Studies that provide a limited description of data modalities, subjects, AI techniques, devices, or performance metrics.
Studies that met the following criteria were included and reviewed:
  • Studies assessing the diagnosis of cognitive impairment or cognitive function associated with neurodegenerative diseases.
  • Studies distinguishing between control and cognitively impaired participants.
  • Studies predicting cognitive scores with artificial intelligence algorithms or statistical analysis using non-neuroimaging data.
  • Studies comparing the conventional approach of assessment with automated assessment.
  • Studies that discuss digital, computerized, or automated assessment of cognitive decline.
After applying the inclusion and exclusion criteria, 87 articles were included in this study. These 87 research articles were carried out in 27 countries with a study population covering diverse categories of over 50,000 participants. The study participants include 7765 participants with Alzheimer's disease and dementia, five with dementia with Lewy bodies, 289 with frontotemporal dementia, 19,992 with mild cognitive impairment, 115 with Parkinson's disease, three with Parkinson's disease with MCI, 41 with cognitive frailty, five experiencing cognitive difficulty, six at risk of cognitive difficulty, 80 at risk of cognitive difficulty, 15 with a functional memory disorder, 9876 functionally impaired, 13,443 controls (cognitively normal), 145 with schizophrenia, 300 with systemic lupus erythematosus, 454 with ischemic disease, 100 with multiple sclerosis, 25 with multiple system atrophy with predominant cerebellar ataxia, eight with multiple system atrophy with predominant parkinsonism, and 10 with other neurological disorders.

2.3. Definitions of What Is Known

2.3.1. Background and Concepts

Depending on the clinical stage of the disease, disability in instrumental activities of daily living (IADLs) is common across all syndromes of dementia [26]. IADLs are tasks necessary to live independently and require a higher level of cognition; a decline in the ability to perform them is a significant marker of cognitive decline [21]. For cognitive assessment, a comprehensive evaluation is carried out by collecting information from physical, neurological, and mental status examinations to better understand the extent of the deficit experienced by the individual [5]. Healthcare professionals review the patient's clinical history and administer cognitive screening tools, with some common tools discussed in Table 1. A challenge with these tools is their inability to identify subtle changes [27]. Additional information is often sought from family members or caregivers regarding an individual's cognitive abilities and changes in behavior.
With advancements in technology, such as smart and portable devices, wearable sensors, robust software, and artificial intelligence algorithms, automation continues to spread across all fields [28], including healthcare [29]. Automation involves the use of a system or device to accomplish, partially or fully, a function previously performed by humans [30]. Automating the manual evaluation of cognitive impairment is likely to benefit healthcare professionals and individuals, as automation promotes efficiency and increases accuracy. Over the years, various automated cognitive assessment approaches have been developed [31,32,33,34]. Several authors have examined and assessed the performance of the pen-paper-based approach alongside technology-based or digital devices for the assessment of cognitive impairment [35,36]. However, these approaches have not been widely adopted in clinical settings, especially in low- and middle-income countries, where the conventional pen-paper-based method remains dominant. This review analyzes existing automated cognitive assessment methods and reports their performance metrics. Overall, automated assessment tools aim to reduce human error, streamline evaluation, improve access to timely assessment, and support clinicians in decision-making.

2.3.2. Conventional Assessment Tools

Several conventional tools exist for evaluating cognitive and functional impairment. Cognitive assessments using standardized tools are part of a comprehensive evaluation to guide diagnosis, treatment planning, and intervention strategies. As mentioned earlier, healthcare professionals, including neurologists, geriatricians, psychologists, and occupational therapists, often administer many of these tools to assess cognitive and functional impairment in clinical settings. Some of the commonly used conventional assessment tools are shown in Table 1, along with a summary of the domains tested by each tool. These tools evaluate different cognitive domains such as executive functions, visuospatial ability, and verbal and visual memory [37]. In clinical practice, patients are assigned simple tasks such as naming the current date, identifying everyday objects or pictures of animals, copying a drawing of a shape or object [38,39], and drawing a clock [38,40]. At the end, each session is scored, and the sum score is calculated and interpreted to ascertain the level of impairment. In addition, details about patients' performance of simple daily activities are often based on reports from patients, family members, and caregivers [41]. However, this information may be inaccurate [41], as Loewenstein et al. [42] showed that caregivers tend to overestimate it.
Table 1. Common conventional assessment tools for evaluating cognitive and functional status.

| Tool | Purpose | Domain | Maximum Score Possible | Administration Time |
|---|---|---|---|---|
| LABIS (Graf et al., 2008 [43]) | IADL screening (functional evaluation) | Eight domains: ability to use the telephone, shopping, food preparation, housekeeping, laundry, transportation, responsibility for own medications, and ability to handle finances | 8 points | 10 to 15 min |
| Katz ADL Index (Katz et al., 1970 [44]) | ADL screening (functional evaluation) | Six domains: bathing, dressing, toileting, transferring, continence, and feeding | 6 points | Less than 5 min |
| MMSE (Folstein et al., 1975 [39]) | Cognitive screening (cognitive evaluation) | Five domains: orientation (to time and place), memory (immediate and delayed recall), concentration, attention and calculation, three-word recall, language, and visual construction | 30 points | Between 5 and 10 min |
| Mini-cog (Borson et al., 2000 [40]) | Cognitive evaluation | Two domains: a 3-item recall component and a clock drawing test | 5 points | Less than 3 min |
| MoCA (Nasreddine et al., 2005 [38]) | MCI and dementia screening | Eight domains: visuospatial/executive, naming, memory, attention, language, abstraction, delayed recall, and orientation (to time and place) | 30 points | Approximately 10 min |
ADL—Activity of daily living, LABIS—Lawton and Brody IADL scale, MCI—mild cognitive impairment, Mini-cog (mini-cognitive), MMSE—Mini-Mental State Examination, and MoCA—Montreal cognitive assessment.
Functional assessment measures an individual's ability to perform specific tasks independently and can be categorized as self-reported or performance-based [45]. Performance-based functional assessments, such as timed walks and other tasks related to motor function, are an objective alternative to self-reported measures in the form of questionnaires [46]. In this approach, direct observation is required while the patient demonstrates IADLs. This approach is difficult to administer in the clinical setting but is suited for academic purposes and yields more accurate results [46]. Self-reported measures are primarily used [45]. Functional assessment is performed using standardized tools like Lawton's IADL scale [43] and the Katz ADL index [44], while cognitive assessment tools include the MoCA [38] and MMSE [39], among others.
These conventional tools face several limitations, including the time required to score the patient and the need for a specialist to administer the test [47,48,49]. They are unsuited for long-term tracking due to the lack of alternative forms [50]. They cannot be adapted to an individual's competence level and are unsuitable for retesting due to the static nature of the questions [49]. Other challenges are human-related, such as bias in caregivers' reports of impaired AD patients' functional abilities, and fatigue and distraction during the assessment [42]. These challenges make it difficult to diagnose patients accurately. Moreover, premorbid status, such as intelligence or education, dramatically affects the validity of some tools like the MMSE [4].

3. Results

3.1. Automated Assessment Tools

The advent of technology has made it easier to assess cognitive domains. Existing automated approaches for assessing cognitive function include digital versions of established standardized tests and new computerized tests. These automated approaches leverage technology to enhance cognitive assessment. They often provide objective and quantitative measures, allowing efficient and standardized evaluations. While several tools assess various cognitive domains, a common focus among them is the assessment of memory and function. In many of these studies, automated cognitive assessment is conducted using various platforms. We carefully considered the existing tools and categorized them into five groups based on their design and analysis approach: game-based tools, digital versions of conventional tools, original computerized tests and batteries, virtual reality/wearable sensors/smart home technologies, and artificial intelligence-based (AI-based) tools. Each category is discussed in the subsequent subsections. Additionally, a comprehensive table comparing the 87 automated cognitive assessment tools categorized according to this classification is included in Table S1a–e as a reference for readers.

3.1.1. Game-Based

In recent years, several studies have used games beyond the purpose of entertainment, and this has helped in accurately assessing cognitive and functional impairment [9,10,48,51,52]. Here, we discuss using games as a medium for cognitive health assessment. Our analysis is based on 10 articles in which different games assess human cognitive function. This approach assesses cognitive and functional skills by measuring correctness, accuracy, and task completion while participants carry out tasks such as shape matching, visuomotor tracking, and drawing. Devices such as touchscreen computers and tablets, which are relatively affordable, are used to administer these tasks. Lindenmayer et al. [9] used VRFCAT, a game-based environment, to predict functional ability among schizophrenia patients; its correlations with the UPSA-B at baseline and for the total score were significant (p = 0.005 and p = 0.01, respectively) [9]. A serious game was employed among AD patients to test for cognitive impairment and was found to be user-friendly, accommodating the functional deterioration in patients [10]. Some research has also explored the correlation between game-based assessments and the conventional questionnaire approach. For example, Cheng et al. [51] administered a game-based system as an automated cognitive assessment tool to 80 participants; its correlations with the Wechsler Adult Intelligence Scale, 4th Edition (WAIS-IV) ranged from 0.34 to 0.51. Some of these tools test judgment ability and memory function and are sensitive in identifying subtle cognitive decline [53]. Although older adults are presumed to be uncomfortable with games due to a lack of gaming experience, Yang et al. observed that games like MahjongBrain are user-friendly for older adults [54]. This approach offers flexibility in testing and is clinically valuable for assessing cognitive impairment [55]. From our analysis of these articles on the game-based approach (see Supplementary Table S1), we observe a moderate correlation (greater than or equal to 0.5) between this approach and conventional tools [56] such as the MMSE. Some games, like the EVO Monitor, a digital cognitive assessment developed by Akili Interactive Labs (Akili, Boston, MA, USA), are available online and can run on tablets or touchscreen computers [55]. Others, like the NAIHA Neuro Cognitive Test (NNCT) [56], are designed by professional research groups and may be available on request.

3.1.2. Digital Versions of Conventional Tools

Some existing conventional tools have been fully digitized, as seen in the eMoCA [57,58] and the MMSE mobile application [36]. Others digitize parts of existing conventional tools, such as the eCDT, mPDT [59], and eTMT [60,61]. This digital format uses an electronic pen/stylus and a tablet to perform the same task as the conventional approach, and scoring is based on software or AI models. The automated scoring introduced in the digitized versions has greatly improved efficiency and reduced human bias. Here, we analyzed 12 articles that focus on this approach to cognitive assessment. Some of the digital features measured include pen movement, time to complete the task, and the number of strokes made while performing the task. These digitized versions of conventional tools have shown a positive correlation with their conventional counterparts, as observed in the MMSE mobile application [36] (r = 0.9) and in the adequate convergent validity of 0.84 between the conventional MoCA and the eMoCA [57,58]. They can measure cognition in the same way as the conventional versions, as observed in the eTMT, where the correlations between scores derived from the pen-paper TMT and the eTMT range between 0.51 and 0.67 and the intraclass correlation coefficient (ICC) values range between 0.90 and 0.95 [60]. A positive correlation of 0.651 was observed between the pen-paper TMT-B and the eTMT-B [62], and the predicted eTMT score correlates with clinical scores at a value of 0.98 [61]. Additionally, this approach has the potential to screen for MCI, as seen in the eCDT [35], which demonstrates higher sensitivity than the conventional CDT, with a difference of 0.18 in sensitivity value [63]. Cognitive assessments using the digitized versions can be conducted on mobile devices, tablets, and computers. Though older adults may have limited digital skills, this method is promising, as it offers wider accessibility to cognitive evaluations. In addition, it provides ease of adaptation to different languages [36]. Furthermore, it supports group screening, wherein physicians can administer screening tools due to the portability of the software used [64]. Free versions of digitized tools like the MoCA test are available online for healthcare professionals and academia in multiple languages.
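As a rough illustration of the digital features mentioned above (pen movement, completion time, and stroke count), the sketch below derives them from a hypothetical stylus event log. The PenEvent structure and feature definitions are our own assumptions for illustration, not the implementation of any reviewed tool.

```python
# Illustrative sketch (assumed event format, not from the reviewed studies):
# deriving simple digital features of the kind digitized tests such as the
# eTMT or eCDT record: completion time, stroke count, and pen travel distance.
from dataclasses import dataclass

@dataclass
class PenEvent:
    t: float        # timestamp in seconds
    x: float        # tablet x-coordinate
    y: float        # tablet y-coordinate
    pen_down: bool  # True while the stylus touches the surface

def extract_features(events: list[PenEvent]) -> dict:
    """Compute completion time, stroke count, and total pen travel distance."""
    if not events:
        return {"completion_time_s": 0.0, "stroke_count": 0, "path_length": 0.0}
    strokes, path = 0, 0.0
    prev = events[0]
    for ev in events[1:]:
        if ev.pen_down and not prev.pen_down:   # a new stroke begins on pen-down
            strokes += 1
        if ev.pen_down and prev.pen_down:       # accumulate in-stroke distance
            path += ((ev.x - prev.x) ** 2 + (ev.y - prev.y) ** 2) ** 0.5
        prev = ev
    if events[0].pen_down:
        strokes += 1                            # count a stroke already in progress
    return {
        "completion_time_s": events[-1].t - events[0].t,
        "stroke_count": strokes,
        "path_length": path,
    }
```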

3.1.3. Original Computerized Tests and Batteries

This category includes original computerized test batteries and standalone computerized tests. We analyzed 35 articles using these batteries and tests to assess cognitive function. Computerized batteries are collections of standardized cognitive tests that are fully automated, assess several cognitive domains, and are administered through a computer. These batteries are not adapted from traditional/conventional tools. They have shown a moderate correlation with the conventional approach, as in the case of Minnemera, whose correlations lie between 0.34 and 0.67 [27]. ANAM was found to be more effective than the MMSE in detecting cognitive impairment among heart failure patients [65]. Computerized cognitive tests are designed to assess cognitive and/or executive function using software or applications that generate scores and sometimes interpret results on a computer. Some of these tools have shown high performance based on high correlation with standardized tools and their sensitivity and/or specificity. The CST (computer self-test) [66] performed better than the MMSE and Mini-cog in classifying cognitively impaired subjects, achieving 96% accuracy, while the MMSE and Mini-cog achieved 71% and 69%, respectively. Computerized cognitive screening (CCS) [67] showed a high correlation with the conventional MoCA, with a value of 0.78, and a sensitivity of 0.94, similar to MoCA's value of 0.95, while screening for cognitive impairment. In addition, mSTS-MCI [68] also showed a high correlation of 0.773 with the Korean version of the MoCA and higher sensitivity and specificity in screening for MCI. As with other technological tools, older adults may be unfamiliar with these tools and uninterested in using them for assessment. However, computerized batteries are good tools for cognitive assessment since they have standardized administration and are sensitive to subtle change. Some computer-designed tests, like the Hong Kong Vigilance and Memory Test (HK-VMT) [69], are available online and can be used on touchscreen computers.

3.1.4. Virtual Reality/Wearable Sensors/Smart Home Technologies

This category covers virtual reality approaches, wearable sensors, and smart home technologies. We analyzed 10 articles in this category. Virtual reality has been used to simulate real-life tasks and assess patients based on their performance on these simulated tasks. Smart home technologies with sensors have been used to gather information related to everyday life. These smart home technologies and virtual reality systems extract features, analyze them, and make assessments by considering features such as the time taken to carry out an activity and the completeness of the activity, among others. Analysis or prediction based on the information gathered uses statistical or artificial intelligence algorithms. Studies have shown that this approach can potentially predict patients' cognitive health [70,71]. CAAB (Clinical Assessment using Activity Behavior) showed a high correlation of 0.72 with the cognitive scores provided by clinicians [70,71]. CAVIRE (Cognitive Assessment by Virtual Reality), a virtual reality system, takes less time to complete the assessment than the conventional pen-paper-based MoCA, with a mean difference of 74.94 s in assessing healthy Asian adults [14]. The high cost of virtual reality software, sensors, and technological equipment associated with this approach is a significant drawback. Nevertheless, this approach provides real-world or at-home data collection and monitoring opportunities, and the ability to track daily activities can support the identification of changes in cognitive function.
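The sketch below illustrates, on made-up synthetic data, the general pattern these systems follow: extract activity features (here, assumed task duration and completeness), fit a model to predict a clinician-provided cognitive score, and report agreement as a correlation. It is a minimal sketch under our own assumptions, not the CAAB implementation.

```python
# Minimal sketch (synthetic data, assumed features; not the CAAB method):
# predicting a clinician-assigned cognitive score from smart-home activity
# features, with agreement reported as a correlation, as in the cited studies.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# Hypothetical features per participant: [mean task duration (s), completeness (0-1)]
X = rng.normal(loc=[120.0, 0.8], scale=[30.0, 0.1], size=(60, 2))
y = 30 - 0.05 * X[:, 0] + 10 * X[:, 1] + rng.normal(0, 1.5, 60)  # synthetic scores

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:40], y[:40])
predicted = model.predict(X[40:])                 # held-out participants
r, _ = pearsonr(predicted, y[40:])                # agreement with "clinician" scores
print(f"correlation between predicted and clinician scores: r = {r:.2f}")
```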

3.1.5. Artificial Intelligence-Based (AI-Based) Tools

Artificial intelligence has emerged as a promising tool in healthcare, especially for analyzing medical data in cognitive assessment [72]. It has been used to screen, predict, and analyze large datasets of cognitive test results, digital biomarkers, and medical records. Artificial intelligence (AI)-based techniques for cognitive assessment have been employed in several ways, from scoring [73] to analyzing [74] and predicting [75] cognitive impairment. This approach is applied to different types of data, such as imaging, behavioral, and non-neuroimaging data. This category focuses on AI-based approaches using non-neuroimaging data, and we analyzed 24 articles. An additional eight articles were already categorized into one of the four classes above; however, since their authors used AI for data analysis, we have also included them in this category. Different AI algorithms have been applied to different data by different authors. Some applied machine learning algorithms to speech data [15,76,77,78,79,80] for analysis and prediction. Others applied deep learning algorithms to image data and assessed patients by automatically scoring drawn images [74,81,82,83]. These algorithms learn from datasets, extract the necessary features, and make predictions. Techniques used in cognitive assessment include machine learning [34], deep learning [82], and natural language processing [80], among others. Machine learning techniques involve the use of Bayesian methods, support vector machines, random forests, logistic regression, and decision trees, among others [34,48,84]. Deep learning algorithms use deep neural networks and require large datasets for training [82]. Natural language processing is used to understand human verbal and written communication [80]; it is applied to analyze audio or speech recordings [79,80] and includes speech recognition and sentiment analysis. Sato et al. [74] built a deep neural network (DNN) model for scoring drawn CDTs, achieving a high performance of approximately 90% for executive dysfunction and 77% for probable dementia. Using convolutional neural network algorithms, Youn et al. [75] achieved 71% accuracy in classifying control, mildly impaired, and severely impaired persons from CDT and RCFT-copy data. Nakaoku et al. [85] developed a predictive model using power monitoring data to detect cognitive impairment and achieved good performance values of 0.82, 0.48, and 0.96 for accuracy, sensitivity, and specificity, respectively. Rykov et al. [81,86] developed an explainable self-attention deep neural network, which achieved an accuracy of 0.81. A deep learning algorithm was applied to the CDT, and accuracies of 0.97 and 0.99 were achieved for screening and scoring, respectively [82]. These tools have shown potential in analyzing cognitive performance data to provide predictions that support diagnosis. In addition, the models are often available online but require fine-tuning for use. The major drawback is the need for a large volume of quality labeled data for training AI models. Furthermore, there is a strong need for the predictions made by AI models to be interpretable, but with the help of explainable AI [81], this challenge can be overcome. The AI-based category offers improved efficiency, improved scoring accuracy, prompt assessment, and, overall, early detection.
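To make the deep learning approach concrete, the following is a minimal sketch of a convolutional classifier for CDT images. The architecture, input size, and class labels are illustrative assumptions on our part and do not reproduce any model from the cited studies, which train far larger networks on large labeled datasets.

```python
# Illustrative sketch (assumed architecture, not a published model): a small
# convolutional network for binary screening (impaired vs. unimpaired) from
# clock drawing test (CDT) images.
import torch
import torch.nn as nn

class CDTScreener(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A 128x128 grayscale drawing; two pooling stages halve it twice, to 32x32.
model = CDTScreener()
logits = model(torch.randn(1, 1, 128, 128))   # dummy input in place of a scan
probs = logits.softmax(dim=1)                 # screening probabilities
print(probs.shape)  # torch.Size([1, 2])
```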
In all five categories, automated cognitive assessment has proven to be a strong alternative, as reported in the comparative analysis in the subsequent subsection.

3.2. Comparative Analysis of Automated and Conventional Cognitive Assessment Tools

Here, we evaluate several studies detailing the different cognitive screening tools (Table 2 and others in the Supplementary Materials). Table 2, an excerpt from the Supplementary Materials, presents an analysis of the tools based on the performance metrics reported by the authors. The first part of Table 2 compares the conventional and automated approaches side by side; these performance metrics include sensitivity, specificity, and, where available, AUC. The latter part presents individual automated tools and their performance metrics. We report the success of these screening tools based on the performance metrics (correlation (r), area under the ROC curve (AUC), sensitivity (sens), and specificity (spec)) provided by the authors. Additional metrics like accuracy, precision, and other statistical measures are reported as described by the authors in the Supplementary Materials.
As the MoCA is considered a better screening tool for MCI than the MMSE in the literature [87], we selected from Table S1a–e additional works comparing the performance of the automated approach with the MoCA and present them in the summarized Table 2 below, which shows the performance of both automated and conventional approaches. According to the literature [88], sensitivity is the ability of a screening tool to detect true positives, that is, people with the condition of interest, in this case those who are cognitively impaired. Specificity is the ability of a screening tool to detect true negatives, that is, to identify people who do not have the condition of interest, in this case those who are not cognitively impaired. A high sensitivity (sens) indicates a high probability of, or effectiveness in, identifying cognitively impaired individuals (true positives); a high specificity (spec) indicates a high probability of, or effectiveness in, identifying individuals who are not cognitively impaired (true negatives). Correlation measures the association between two variables [89]; the correlation (r) in this context indicates the degree to which the conventional and automated approaches agree. The area under the ROC curve (AUC) measures the probability that a model correctly distinguishes a diseased from a nondiseased individual, in this case cognitively impaired and unimpaired individuals [90]; the higher the AUC, the better the approach distinguishes between groups of participants. In Table 2 and Supplementary Table S1, the correlation, sensitivity, specificity, and AUC values all range from 0 to 1. Values between 0 and 0.49 indicate low to moderate performance, values between 0.5 and 0.99 indicate moderate to high performance, and a value of 1 indicates perfect performance.
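To make these definitions concrete, the short sketch below computes sensitivity, specificity, AUC, and Pearson's r from a small made-up example; the data are illustrative only and do not come from any reviewed study.

```python
# Minimal sketch (assumed example data) showing how the metrics reported in
# Table 2 are computed: sensitivity, specificity, AUC, and Pearson correlation.
from sklearn.metrics import confusion_matrix, roc_auc_score
from scipy.stats import pearsonr

y_true  = [1, 1, 1, 0, 0, 0, 1, 0]                    # 1 = cognitively impaired
y_score = [0.9, 0.8, 0.4, 0.2, 0.3, 0.1, 0.7, 0.6]    # tool's risk scores
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]     # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate: impaired correctly flagged
specificity = tn / (tn + fp)   # true-negative rate: unimpaired correctly cleared
auc = roc_auc_score(y_true, y_score)

# Agreement between an automated tool's scores and a conventional tool's scores
automated    = [22, 25, 18, 27, 24, 20]
conventional = [23, 26, 19, 28, 22, 21]
r, _ = pearsonr(automated, conventional)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} AUC={auc:.2f} r={r:.2f}")
```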
Table 2. Summarized performance evaluation of cognitive assessment tools.

(a) Automated tools compared with conventional tools like MoCA:

| Tool | Participants | Domain Assessed by the AA | Comparative Metrics Reported for Both the Conventional Approach (CA) and Automated Approach (AA) | Time Taken to Administer | Observation | Reference |
|---|---|---|---|---|---|---|
| MoCA (CA); ACE-R (CA); CANS-MCI (AA) | 35 participants (20 CN and 15 MCI) | Memory, executive function, and language/spatial fluency | AUC (MoCA) = **0.890**; AUC (ACE-R) = 0.822; Sens (CA) = **0.90**; Spec (CA) = 0.67 (sens and spec values apply to both MoCA and ACE-R); AUC (CANS-MCI) = 0.867; Sens (AA) = 0.89; Spec (AA) = **0.73** | MoCA ~10 min; ACE-R ~15 min; CANS-MCI ~30 min | Of the 3 examples cited here, AA and CA appear to have a close and competitive outcome. | [91] |
| CDT (CA); CDT (AA) | 70 patients (20 AD, 30 MCI, and 20 CN) | Executive and visual-spatial function | Sens (CA) = 0.63; Spec (CA) = **0.83**; Sens (AA) = **0.81**; Spec (AA) = 0.72 | NA | | [63] |
| MoCA-K (CA); mSTS-MCI (AA) | 177 participants (103 CN and 74 MCI) | Memory, attention, and executive function | AUC (CA) = 0.819; Sens (CA) = 0.94; Spec (CA) = 0.60; AUC (AA) = **0.985**; Sens (AA) = **0.99**; Spec (AA) = **0.93** | mSTS-MCI ~10–15 min | | [68] |

(b) Automated tools with high correlation when compared with the conventional approach:

| Tool | Participants | Domain Assessed by the AA | Metrics Reported | Time Taken to Administer | Observation | Reference |
|---|---|---|---|---|---|---|
| mSTS-MCI | 177 participants (103 CN and 74 MCI) | Memory, attention, and executive function; reaction time is assessed for attention, while the other two measure performance | r = 0.773 (correlation with MoCA-K, the Korean version of the MoCA); Sens = 0.99; Spec = 0.93 (sens and spec at optimal cutoff) | 10–15 min | Findings reflect a positively high association between both approaches. | [68] |
| CoCoSc | 160 participants (59 CI and 101 CN) | Six subtests covering five cognitive domains, including learning and memory, executive functions, orientation, attention and working memory, and time- and event-based prospective memory, scored based on completion of the task | r = 0.71 (correlation with MoCA); AUC = 0.78; Sens = 0.78; Spec = 0.69 | 15 min | | [92] |
| CCS | 60 participants (20 CN and 40 with mild-to-moderate dementia, of whom only 34 completed the CCS task) | Three domains assessed (concentration, memory, and visuospatial) with related tasks, scored on correct responses provided within 1 min per task | r = 0.78 (correlation with MoCA); Sens = 0.94; Spec = 0.60; AUC = 0.94 | 1 min per task | | [67] |
| C-ABC (Computerized Assessment Battery for Cognition) | 701 participants (422 dementia, 145 MCI, and 574 CN) | Sensorimotor skills, attention, orientation, and immediate memory, among others | r = 0.753 (correlation with MMSE score); Sens = 0.77; Spec = 0.71 (average values for distinguishing MCI from CN) | ~5 min | | [33] |
| MoCA-CC | 176 participants (83 CN and 93 MCI) | Eight cognitive domains: executive function, memory, language, and visuoconstructional skills, among others | r = 0.93 (correlation with MoCA-BJ); AUC = 0.97; Sens = 0.958; Spec = 0.871 | ~10 min | | [64] |
AA (automated assessment), AD (Alzheimer’s disease), AUC (area under the ROC curve), CA (conventional assessment or conventional approach), CI (cognitively impaired), CN (cognitively normal/healthy adult), MCI (mild cognitive impairment), NA (not available), r (Pearson correlation), Sens (sensitivity), and Spec (specificity). Please note that the values in bold show the highest value obtained when comparing both the automated and conventional tools based on the performance metrics reported.
In the first three rows, where the performance of both conventional and automated assessment is compared, we observe that in the three studies discussed [63,68,91], both the conventional and automated assessment approaches showed good performance metrics, as detailed above. However, in the first row, with the MoCA, the conventional approach has a slightly higher sensitivity (0.90) and AUC (0.890), while the automated approach has a higher specificity (0.73). The higher sensitivity of the conventional approach indicates that it is highly effective in identifying cognitively impaired persons, while the automated approach's higher specificity suggests that it is effective in identifying those who are not cognitively impaired. In the clock drawing test study [63], the automated approach showed higher sensitivity (0.81), while the conventional approach showed higher specificity (0.83); again, this indicates the automated approach's effectiveness in identifying cognitively impaired individuals and the conventional approach's effectiveness in identifying individuals who are not cognitively impaired. In the MoCA-K study [68], in the third row, the automated approach showed higher AUC (0.985), sensitivity (0.99), and specificity (0.93), presenting it as a tool capable of effectively identifying both cognitively impaired and unimpaired individuals. Overall, across these three studies, the automated approach demonstrated sensitivity and specificity that were competitive with, and in several cases better than, those of the conventional approach.
Considering the other automated tools presented in Table 2, namely the mSTS-MCI [68], CoCoSc [92], CCS [67], C-ABC [33], and MoCA-CC [64], the results show high correlations with the conventional tools, with values ranging from 0.71 to 0.93. Also, among these five studies [33,64,67,68,92], the sensitivity values range from 0.77 to 0.99, while the specificity values range from 0.60 to 0.93. This demonstrates that these automated tools can be used as an alternative to the pen-paper approach. The high sensitivity of these tools also shows that they can identify subtle changes, unlike the MMSE, which is known in the literature to have low sensitivity (0.65) [93] compared to the MoCA in diagnosing cognitive impairment [87]. The high sensitivity (0.99) [68] achieved with an automated approach like the mSTS-MCI highlights its potential for supporting early diagnosis, leading to earlier intervention.
Based on the performance metrics reported in Table 2, the automated assessment approach can effectively measure several cognitive domains, much like the conventional approach. Further information on additional automated assessment tools is provided in Table S1a–e in the annex.

3.3. Advantages of Automated Assessment

After reviewing the 87 articles that met our inclusion and exclusion criteria, we identified some notable advantages of automated cognitive assessment. Our analysis of these automated screening tools found that experts are not needed to administer the tests: anyone can be trained to operate some of them, like CAVIRE [14], while others can be self-administered [13,70,94]. Many of these tools can be used remotely at home or in primary healthcare settings and do not require a trained specialist [14], making them more efficient than the conventional pen-paper approach.

They can be standardized and are not affected by human bias [21]. They are accurate and sensitive tools for screening MCI and are strongly focused on memory tests [70]. Automated screening and scoring are achievable with these tools using different AI algorithms and software [82]. These tools are scalable [14] and can support triaging individuals, which may relieve health practitioners and promote timely access based on the severity of the individual's condition. Results from automated assessments can be stored or transferred into patients' electronic medical record systems [36]. These tools can be used in very diverse populations, as the language can be switched based on the user's preference [36], and they can greatly support clinicians [36]. They can also increase access to cognitive assessment [58,61].

Additional features like reaction time can be captured, further supporting other research focused on behavioral analysis [95]. Additional digital/performance features related to mobility and time, such as the quality of tasks performed and the time taken to transition the stylus, among others, may help monitor cognitive processes not captured on paper [61]. Owing to automated screening and scoring, these tools are practical for assessing large cohorts [96]. They can potentially increase the reliability and efficiency of cognitive assessment [53]. Some possess high sensitivity and specificity and are highly efficient in correctly discriminating between MCI and CN, as seen in CAMCI [13] and mSTS-MCI [68].

Overall, the automated cognitive assessment approach is efficient and cost-effective, supports standardization and prompt assessment, increases access to assessment, encourages frequent testing, and can be self-administered. Automated scoring and screening make it suitable for screening large cohorts and eliminating human bias, and hence reliable and efficient. All of this gives automated cognitive assessment an edge over the conventional pen-paper approach.

4. Discussion

In this review, we evaluate the potential of automated cognitive assessment based on the reported performance metrics alongside the advantages presented by the use of diverse automated cognitive assessment tools. These two aspects (the performance metrics and the advantages) present it as a strong alternative to conventional tools. The effectiveness of the reviewed automated assessment tools in screening for cognitive impairment is demonstrated by their high sensitivity, specificity, accuracy, and AUC values. From our analysis of the 87 articles, we categorized the automated cognitive assessment tools into five groups: game-based, digital versions of existing cognitive tools, computerized tools and batteries, virtual reality/wearable sensors/smart home technology, and artificial intelligence-based tools. As shown in Table S1a, six (6) of the reviewed articles belong to the game-based method. Of these, Panoramix [48] shows a promising performance of 100% in identifying cognitive impairment. In addition, EVO Monitor [55] showed a moderate correlation with a brief assessment tool (SDMT), while NAIHA [56] showed a moderate correlation with the MMSE. Furthermore, twelve (12) of the reviewed articles were categorized as digitized versions of existing conventional tools with significant performance, such as high correlation with conventional tools, as observed in the MMSE (app) [36] and MoCA-BJ [64]; in addition, high sensitivity and specificity (>85%) are seen in MoCA-BJ [64], ePDT [59], and eCDT [35]. The computerized tests and batteries method accounts for the largest category, totaling thirty-five (35) articles. Of these, high correlation with conventional tools (>0.7) was observed in [33,67,68,73,92,97,98,99,100,101]; high sensitivity and specificity (>60%) in [11,33,66,69,91,102,103,104,105,106], showing the capacity to correctly identify impairment; high AUC values (>0.7) in [66,67,68,69,91,92,102,103,105,107,108,109,110]; and moderate performance with greater than 80% correct classification in [12]. The virtual reality/wearable sensors/smart technologies category has ten (10) articles, of which high sensitivity and specificity (>80%) were observed in [13] and a moderate correlation (>0.5) between predicted and observed/clinician scores in [70,111]. The last category, the artificial intelligence-based method, has twenty-four (24) articles, with potentially high accuracy (>70%) as seen in [34,74,75,76,81,82,83,85,112,113,114,115,116], high AUC values (≥0.70) as seen in [76,77,78,84,114,115,117,118], moderate correlation with conventional tools [86,119], and relatively high sensitivity, specificity, and accuracy as seen in [76,114,115,116,120].
The efficacy of these tools is evident in the high sensitivity and specificity recorded for tools such as CANTAB, CAMCI, ANAM, CognoSpeak, and BrainCheck [11,13,35,59,63,102,104,120], and they are as reliable as conventional tools in screening for cognitive impairment, as in the case of the eMoCA [57]. This further underscores the reliability of automated assessment in accurately screening for cognitive impairment. In addition, automated tools like CANS-MCI, CST, HK-VMT, and BHA [66,69,91,103,105,114] displayed high sensitivity, specificity, and AUC, showcasing them as powerful tools for correctly identifying and classifying individuals with or without cognitive impairment. Other tools, like CogEvo and Brain on Track [69,78,107,108,109,121], can also classify individuals with or without cognitive impairment based on the high AUC values reported. Moreover, some of these automated tools, such as the eMoCA, CoCoSc, mSTS-MCI, and CCS, have shown a strong correlation with conventional tools like the MoCA [36,57,60,61,67,68,73,92], indicating that they measure cognitive function similarly and thereby suggesting them as possible alternatives.
Of the 27 countries identified in the 87 papers reviewed here, 71% belong to high-income countries, and the remaining 29% are categorized as upper-middle-income countries according to the World Bank country classifications by income level for 2024–2025. Low-income and lower-middle-income countries have yet to adopt the automated approach, which may be due to a lack of basic infrastructure, such as an uninterrupted electricity supply, and poor or average access to the internet. It is projected that by 2050, 68% of the global prevalence and burden of dementia will be concentrated in low- and middle-income countries [122]. Adopting the automated assessment approach and encouraging its use by clinicians may support early diagnosis in low-, lower-middle-, and high-income countries alike.
Furthermore, our analysis of the 87 papers and their metrics shows that 22 automated tools have documented correlation values with conventional tools, and four report a significant correlation whose value was not stated. This positive correlation between conventional and automated tools shows that both are consistent and related in their measurement of cognitive impairment. Additionally, three of the articles report an ability to distinguish between the cognitively impaired and unimpaired without a supporting value. Nineteen of the reviewed papers were shown to be potentially useful for cognitive screening. One has sensitivity and specificity greater than 60%, and 20 papers have sensitivity and specificity greater than 70%; these high values indicate that the tools are capable of correctly identifying cognitive impairment and correctly identifying the unimpaired, respectively. Ten have accuracy values greater than 70%, one records high sensitivity and specificity without specific values, seven record AUC values greater than 70%, and two record over 80% correct classification of the MCI and control groups. This shows that the automated approach is a reliable alternative to the conventional approach, and considering its advantage of increased access to testing, its use should be strongly encouraged, especially in primary healthcare centers where specialists may not be readily available.
Compared with pen-paper-based tests, automated assessment offers cost-effectiveness, the ability to store patients' data, and accurate recording of responses [21], which is consistent with other reviews. Evaluating cognitive function is crucial for diseases associated with memory loss, as this information is essential for decision-making. Automating the assessment of cognitive status may facilitate the prediction of cognitive impairment, which could help in the diagnosis of neurodegenerative diseases such as Alzheimer's. Automated tools like the eMoCA, CCS, and CANS-MCI, among others, can be used for monitoring cognitive health [71], screening for probable dementia or decreased executive function [74], and scoring and screening of cognitive impairment [82]. Additionally, most of these automated approaches are appropriate for both natural (smart home) [71] and clinical environments. Some automated approaches are more effective than conventional methods like the MMSE for screening cognitive impairment, as observed with ANAM [65]. With automated scoring, results can be described in a way that is easy to understand and interpret [123]. This approach offers an opportunity to measure subtle changes in executive functions [63] and alterations in language features that may not be detectable by conventional methods [15], thereby increasing early detection of cognitive impairment.
Prior experience or familiarity with technology and related devices has been observed to aid performance, as in [32], where individuals with more experience using touchscreen devices performed better on the eMoCA than their contemporaries without such experience. On the other hand, Scanlon et al. [67] found no difference in automated assessment scores between those with and without prior computer experience. This issue is unlikely to be a problem in the future, as the current generation is increasingly familiar with technology and digital devices. Overall, automating the assessment of cognitive function shows potential that justifies its inclusion in the diagnosis and detection of cognitive decline.

4.1. Limitations

One limitation of this review lies in the potential for publication bias, as inclusion was restricted to articles available in the selected databases with free full text and within the specified timeframe (January 2000–June 2024). The exclusion of studies documented in languages other than English may have removed the potential contribution of such works. Additionally, the time it takes to complete these automated tests and the costs associated with these approaches are not stated in this review due to the limited availability of this information. It is fair to mention that most of these automated approaches cannot be used as standalone diagnostic tools but rather as support for clinical decision-making.

4.2. Authors’ Opinion

This section discusses the implications of adopting technology-driven cognitive assessment tools. Some of these automated assessment approaches, such as the eMoCA [57], eCDT [35,63], eTMT [60,61], ePDT [59], CoCoSc [92], and CCS [67], among others, may be challenging for people with visual impairment. The automated approach may also pose a challenge to individuals who are not familiar with computers or technology in general. However, as the current generation ages, technology will likely become less of a challenge, as people, even in developing or underdeveloped countries, interact with technology daily. These tools only support, not replace, clinical assessment tools and healthcare practitioners. With the future in view, clinicians and individuals should embrace this evolving approach to encourage technology developers, improve the performance of automated tools, and, overall, improve access to care and treatment. Continuous collaboration between medical and technology experts will further strengthen the potential of these automated tools and facilitate their acceptance.

5. Conclusions

The potential of the automated tools identified in this review is evident in their ability to accurately classify individuals with or without cognitive impairment and in their correlation with existing conventional tools, as shown by the 87 articles. These tools are scalable and readily available, thereby increasing accessibility for screening with minimal or no human intervention, as many of them can be self-administered. This capability can lead to early diagnosis and intervention, ultimately improving individuals' quality of life. Some studies identified longer test completion times [58] and limited familiarity with the devices or technological approaches used as disadvantages, which may impact performance or misrepresent a patient's cognitive status [32]. However, the advantages of automated assessment were evident throughout this review and outweigh the identified weaknesses. These automated approaches have shown significant potential for early screening before other tests are conducted, facilitating early intervention and allowing for comprehensive patient care plans. The automated approaches reviewed showed diagnostic performance comparable to their pen-paper-based counterparts across the included articles, with correlation values ranging between 0.4 and 0.9 and sensitivity and specificity ranging between 0.7 and 0.9. Collectively, these studies highlight the promising impact of automating the assessment of cognitive function. Considering the performance metrics reported (sensitivity, specificity, accuracy, correlation, and AUC), these tools offer timely intervention, improved access to care, prompt triaging, and effective patient monitoring, which can be highly beneficial for clinical trials. The high correlation, sensitivity, and specificity values support the validity of these automated tools and show their potential for use in cognitive assessment.
Integrating technology into healthcare practice, especially for diagnosing cognitive impairment and for analyzing medical data to predict it, can transform the assessment process, enhance diagnosis and treatment, and improve patient outcomes. Collaboration between healthcare professionals and technology developers will further strengthen the use of technological tools and algorithms and address the challenges associated with their use. Achieving this will greatly advance the diagnosis of diseases related to cognitive and functional impairment. These automated approaches offer the possibility of developing clinical devices that are highly sensitive, noninvasive, and cost-effective for detecting cognitive decline. This review also shows that automated assessment tools are more widely used in high- and upper middle-income countries. Future work may evaluate the reasons for their low uptake in low- and lower middle-income countries.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm13237068/s1. Table S1: Performance evaluation of automated cognitive assessment tools based on the five categories: (a) game-based, (b) digitized versions of some conventional tools, (c) original computerized tests and batteries, (d) virtual reality/wearable sensors/smart home technologies, and (e) artificial intelligence-based (AI-based) techniques. References [124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140] are cited in the Supplementary Materials.

Author Contributions

M.G.-V. and A.Á.R.-A.: conceptualization, methodology, resources, supervision, project administration, review, and editing. J.A.A.-F.: medical expert consultation, supervision, project administration, review, and editing. E.Y.B.: methodology, investigation, validation, and writing—original draft. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Instituto Politécnico Nacional, Mexico (projects IPN-SIP-2019 to SIP-2024), and by CONAHCYT, Mexico.

Acknowledgments

We express our gratitude to the Instituto Politécnico Nacional, Mexico, for the support provided through projects IPN-SIP-2019 to SIP-2024. We are also grateful to CONAHCYT, Mexico, for the doctoral scholarship awarded to Eyitomilayo Yemisi Babatope.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AA: Automated Assessment
Accu: Accuracy
ACE-R: Addenbrooke’s Cognitive Examination-Revised
AD: Alzheimer’s Disease
ADL: Activity of Daily Living
AI: Artificial Intelligence
ANAM: Automated Neuropsychological Assessment Metrics
AUC: Area Under the ROC (Receiver Operating Characteristics) Curve
BHA: Brain Health Assessment
CA: Conventional Assessment
CAAB: Clinical Assessment using Activity Behavior
C-ABC: Computerized Assessment Battery for Cognition
CAMCI: Computer Assessment of Mild Cognitive Impairment
CANS-MCI: Computer-Administered Neuropsychological Screen for Mild Cognitive Impairment
CANTAB: Cambridge Neuropsychological Test Automated Battery
CAVIRE: Cognitive Assessment by Virtual Reality
CCC: Concordance Correlation Coefficients
CCS: Computerized Cognitive Screening
CDT: Clock Drawing Test
CI: Cognitively Impaired
CoCoSc: Computerized Cognitive Screen
CN: Cognitively Normal/Healthy Adult
CST: Computer Self-Test
CT scan: Computed Tomography scan
dTMT: Digital Trail Making Test
eCDT: Electronic Clock Drawing Test
eMoCA: Electronic Montreal Cognitive Assessment
ePDT: Electronic Pentagon Drawing Test
eTMT: Electronic Trail Making Test
FCD: Functional Cognitive Disorder
GPCOG: General Practitioner Assessment of Cognition
HIV: Human Immunodeficiency Virus
HK-VMT: Hong Kong–Vigilance and Memory Test
IADL: Instrumental Activity of Daily Living
LABIS: Lawton and Brody IADL Scale
MCI: Mild Cognitive Impairment
MRI: Magnetic Resonance Imaging
MIS: Memory Impairment Screen
MMSE: Mini-Mental State Examination
Mini-Cog: Mini-Cognitive
MoCA: Montreal Cognitive Assessment
MoCA-K: Korean version of the Montreal Cognitive Assessment
MoCA-BJ: Montreal Cognitive Assessment–Beijing version
mSTS-MCI: Mobile Screening Test System for Screening Mild Cognitive Impairment
NNCT: NAIHA Neuro Cognitive Test
NHATS: National Health and Aging Trends Study
PC-based: Personal Computer-based
PD: Parkinson’s Disease
PDT: Pentagon Drawing Test
PET scan: Positron Emission Tomography scan
r: Pearson Correlation
r1: Bivariate Correlation Coefficients
SATURN: Self-Administered Tasks Uncovering Risk of Neurodegeneration
Sens: Sensitivity
SLE: Systemic Lupus Erythematosus
Spec: Specificity
TIA: Transient Ischemic Attack
TMT: Trail Making Test
UPSA-B: University of California, San Diego Performance-Based Skills Assessment-Brief
WAIS-IV: Wechsler Adult Intelligence Scale, Fourth Edition
WTMT: Walking Trail Making Test
VPC: Web-based Visual-Paired Comparison
VRFCAT: Virtual Reality Functional Capacity Assessment Tool

References

1. Mohamed, A.A.; Marques, O. Diagnostic Efficacy and Clinical Relevance of Artificial Intelligence in Detecting Cognitive Decline. Cureus 2023, 15, e47004.
2. Roebuck-Spencer, T.M.; Glen, T.; Puente, A.E.; Denney, R.L.; Ruff, R.M.; Hostetter, G.; Bianchini, K.J. Cognitive Screening Tests Versus Comprehensive Neuropsychological Test Batteries: A National Academy of Neuropsychology Education Paper. Arch. Clin. Neuropsychol. 2017, 32, 491–498.
3. Cullen, B.; O’Neill, B.; Evans, J.J.; Coen, R.F.; Lawlor, B.A. A Review of Screening Tests for Cognitive Impairment. J. Neurol. Neurosurg. Psychiatry 2007, 78, 790–799.
4. Ismail, Z.; Rajji, T.K.; Shulman, K.I. Brief Cognitive Screening Instruments: An Update. Int. J. Geriatr. Psychiatry 2010, 25, 111–120.
5. Dhakal, A.; Bobrin, B.D. Cognitive Deficits. In StatPearls; StatPearls Publishing: Treasure Island, FL, USA, 2023.
6. Porsteinsson, A.P.; Isaacson, R.S.; Knox, S.; Sabbagh, M.N.; Rubino, I. Diagnosis of Early Alzheimer’s Disease: Clinical Practice in 2021. J. Prev. Alzheimers Dis. 2021, 8, 371–386.
7. Chen, L.; Zhen, W.; Peng, D. Research on Digital Tool in Cognitive Assessment: A Bibliometric Analysis. Front. Psychiatry 2023, 14, 1227261.
8. Bohr, A.; Memarzadeh, K. The Rise of Artificial Intelligence in Healthcare Applications. In Artificial Intelligence in Healthcare; Elsevier: Amsterdam, The Netherlands, 2020; pp. 25–60. ISBN 978-0-12-818438-7.
9. Lindenmayer, J.-P.; Goldring, A.; Borne, S.; Khan, A.; Keefe, R.S.E.; Insel, B.J.; Thanju, A.; Ljuri, I.; Foreman, B. Assessing Instrumental Activities of Daily Living (iADL) with a Game-Based Assessment for Individuals with Schizophrenia. Schizophr. Res. 2020, 223, 166–172.
10. Vallejo, V.; Wyss, P.; Rampa, L.; Mitache, A.V.; Müri, R.M.; Mosimann, U.P.; Nef, T. Evaluation of a Novel Serious Game Based Assessment Tool for Patients with Alzheimer’s Disease. PLoS ONE 2017, 12, e0175999.
11. Juncos-Rabadán, O.; Pereiro, A.X.; Facal, D.; Reboredo, A.; Lojo-Seoane, C. Do the Cambridge Neuropsychological Test Automated Battery Episodic Memory Measures Discriminate Amnestic Mild Cognitive Impairment? Int. J. Geriatr. Psychiatry 2014, 29, 602–609.
12. Junkkila, J.; Oja, S.; Laine, M.; Karrasch, M. Applicability of the CANTAB-PAL Computerized Memory Test in Identifying Amnestic Mild Cognitive Impairment and Alzheimer’s Disease. Dement. Geriatr. Cogn. Disord. 2012, 34, 83–89.
13. Saxton, J.; Morrow, L.; Eschman, A.; Archer, G.; Luther, J.; Zuccolotto, A. Computer Assessment of Mild Cognitive Impairment. Postgrad. Med. 2009, 121, 177–185.
14. Wong, W.T.; Tan, N.C.; Lim, J.E.; Allen, J.C.; Lee, W.S.; Quah, J.H.M.; Paulpandi, M.; Teh, T.A.; Lim, S.H.; Malhotra, R. Comparison of Time Taken to Assess Cognitive Function Using a Fully Immersive and Automated Virtual Reality System vs. the Montreal Cognitive Assessment. Front. Aging Neurosci. 2021, 13, 756891.
15. Beltrami, D.; Gagliardi, G.; Rossini Favretti, R.; Ghidoni, E.; Tamburini, F.; Calzà, L. Speech Analysis by Natural Language Processing Techniques: A Possible Tool for Very Early Detection of Cognitive Decline? Front. Aging Neurosci. 2018, 10, 369.
16. Javed, A.R.; Fahad, L.G.; Farhan, A.A.; Abbas, S.; Srivastava, G.; Parizi, R.M.; Khan, M.S. Automated Cognitive Health Assessment in Smart Homes Using Machine Learning. Sustain. Cities Soc. 2021, 65, 102572.
17. Veneziani, I.; Marra, A.; Formica, C.; Grimaldi, A.; Marino, S.; Quartarone, A.; Maresca, G. Applications of Artificial Intelligence in the Neuropsychological Assessment of Dementia: A Systematic Review. J. Pers. Med. 2024, 14, 113.
18. Wesnes, K.A. Cognitive Function Testing: The Case for Standardization and Automation. Br. Menopause Soc. J. 2006, 12, 158–163.
19. Wang, X.; Zhou, S.; Ye, N.; Li, Y.; Zhou, P.; Chen, G.; Hu, H. Predictive Models of Alzheimer’s Disease Dementia Risk in Older Adults with Mild Cognitive Impairment: A Systematic Review and Critical Appraisal. BMC Geriatr. 2024, 24, 531.
20. Wild, K.; Howieson, D.; Webbe, F.; Seelye, A.; Kaye, J. Status of Computerized Cognitive Testing in Aging: A Systematic Review. Alzheimers Dement. 2008, 4, 428–437.
21. Zygouris, S.; Tsolaki, M. Computerized Cognitive Testing for Older Adults: A Review. Am. J. Alzheimers Dis. Dement. 2015, 30, 13–28.
22. Cubillos, C.; Rienzo, A. Digital Cognitive Assessment Tests for Older Adults: Systematic Literature Review. JMIR Ment. Health 2023, 10, e47487.
23. Öhman, F.; Hassenstab, J.; Berron, D.; Schöll, M.; Papp, K.V. Current Advances in Digital Cognitive Assessment for Preclinical Alzheimer’s Disease. Alzheimers Dement. Diagn. Assess. Dis. Monit. 2021, 13, e12217.
24. Chan, J.Y.C.; Kwong, J.S.W.; Wong, A.; Kwok, T.C.Y.; Tsoi, K.K.F. Comparison of Computerized and Paper-and-Pencil Memory Tests in Detection of Mild Cognitive Impairment and Dementia: A Systematic Review and Meta-Analysis of Diagnostic Studies. J. Am. Med. Dir. Assoc. 2018, 19, 748–756.e5.
25. Millett, G.; Naglie, G.; Upshur, R.; Jaakkimainen, L.; Charles, J.; Tierney, M.C. Computerized Cognitive Testing in Primary Care: A Qualitative Study. Alzheimer Dis. Assoc. Disord. 2018, 32, 114–119.
26. Giebel, C.M.; Knopman, D.; Mioshi, E.; Khondoker, M. Distinguishing Frontotemporal Dementia From Alzheimer Disease Through Everyday Function Profiles: Trajectories of Change. J. Geriatr. Psychiatry Neurol. 2021, 34, 66–75.
27. Björngrim, S.; Van Den Hurk, W.; Betancort, M.; Machado, A.; Lindau, M. Comparing Traditional and Digitized Cognitive Tests Used in Standard Clinical Evaluation—A Study of the Digital Application Minnemera. Front. Psychol. 2019, 10, 2327.
28. Frazier, S.; Pitts, B.J.; McComb, S. Measuring Cognitive Workload in Automated Knowledge Work Environments: A Systematic Literature Review. Cogn. Technol. Work 2022, 24, 557–587.
29. Sirilertmekasakul, C.; Rattanawong, W.; Gongvatana, A.; Srikiatkhachorn, A. The Current State of Artificial Intelligence-Augmented Digitized Neurocognitive Screening Test. Front. Hum. Neurosci. 2023, 17, 1133632.
30. Parasuraman, R.; Sheridan, T.B.; Wickens, C.D. A Model for Types and Levels of Human Interaction with Automation. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2000, 30, 286–297.
31. Kehl-Floberg, K.E.; Marks, T.S.; Edwards, D.F.; Giles, G.M. Conventional Clock Drawing Tests Have Low to Moderate Reliability and Validity for Detecting Subtle Cognitive Impairments in Community-Dwelling Older Adults. Front. Aging Neurosci. 2023, 15, 1210585.
32. Wallace, S.E.; Donoso Brown, E.V.; Simpson, R.C.; D’Acunto, K.; Kranjec, A.; Rodgers, M.; Agostino, C. A Comparison of Electronic and Paper Versions of the Montreal Cognitive Assessment. Alzheimer Dis. Assoc. Disord. 2019, 33, 272–278.
33. Noguchi-Shinohara, M.; Domoto, C.; Yoshida, T.; Niwa, K.; Yuki-Nozaki, S.; Samuraki-Yokohama, M.; Sakai, K.; Hamaguchi, T.; Ono, K.; Iwasa, K.; et al. A New Computerized Assessment Battery for Cognition (C-ABC) to Detect Mild Cognitive Impairment and Dementia around 5 Min. PLoS ONE 2020, 15, e0243469.
34. Yao, L.; Shono, Y.; Nowinski, C.; Dworak, E.M.; Kaat, A.; Chen, S.; Lovett, R.; Ho, E.; Curtis, L.; Wolf, M.; et al. Prediction of Cognitive Impairment Using Higher Order Item Response Theory and Machine Learning Models. Front. Psychiatry 2024, 14, 1297952.
35. Chan, J.Y.C.; Bat, B.K.K.; Wong, A.; Chan, T.K.; Huo, Z.; Yip, B.H.K.; Kowk, T.C.Y.; Tsoi, K.K.F. Evaluation of Digital Drawing Tests and Paper-and-Pencil Drawing Tests for the Screening of Mild Cognitive Impairment and Dementia: A Systematic Review and Meta-Analysis of Diagnostic Studies. Neuropsychol. Rev. 2022, 32, 566–576.
36. Devos, P.; Debeer, J.; Ophals, J.; Petrovic, M. Cognitive Impairment Screening Using M-Health: An Android Implementation of the Mini-Mental State Examination (MMSE) Using Speech Recognition. Eur. Geriatr. Med. 2019, 10, 501–509.
37. Chatzidimitriou, E.; Ioannidis, P.; Moraitou, D.; Konstantinopoulou, E.; Aretouli, E. The Cognitive and Behavioral Correlates of Functional Status in Patients with Frontotemporal Dementia: A Pilot Study. Front. Hum. Neurosci. 2023, 17, 1087765.
38. Nasreddine, Z.S.; Phillips, N.A.; Bédirian, V.; Charbonneau, S.; Whitehead, V.; Collin, I.; Cummings, J.L.; Chertkow, H. The Montreal Cognitive Assessment, MoCA: A Brief Screening Tool For Mild Cognitive Impairment. J. Am. Geriatr. Soc. 2005, 53, 695–699.
39. Folstein, M.F.; Folstein, S.E.; McHugh, P.R. Mini-Mental State. J. Psychiatr. Res. 1975, 12, 189–198.
40. Borson, S.; Scanlan, J.; Brush, M.; Vitaliano, P.; Dokmak, A. The Mini-Cog: A Cognitive ‘Vital Signs’ Measure for Dementia Screening in Multi-Lingual Elderly. Int. J. Geriatr. Psychiatry 2000, 15, 1021–1027.
41. Cipriani, G.; Danti, S.; Picchi, L.; Nuti, A.; Fiorino, M.D. Daily Functioning and Dementia. Dement. Neuropsychol. 2020, 14, 93–102.
42. Loewenstein, D.A.; Arguelles, S.; Bravo, M.; Freeman, R.Q.; Arguelles, T.; Acevedo, A.; Eisdorfer, C. Caregivers’ Judgments of the Functional Abilities of the Alzheimer’s Disease Patient: A Comparison of Proxy Reports and Objective Measures. J. Gerontol. B Psychol. Sci. Soc. Sci. 2001, 56, P78–P84.
43. Graf, C. The Lawton Instrumental Activities of Daily Living Scale. AJN Am. J. Nurs. 2008, 108, 52–62.
44. Katz, S.; Downs, T.D.; Cash, H.R.; Grotz, R.C. Progress in Development of the Index of ADL. Gerontologist 1970, 10, 20–30.
45. Nielsen, L.M.; Kirkegaard, H.; Østergaard, L.G.; Bovbjerg, K.; Breinholt, K.; Maribo, T. Comparison of Self-Reported and Performance-Based Measures of Functional Ability in Elderly Patients in an Emergency Department: Implications for Selection of Clinical Outcome Measures. BMC Geriatr. 2016, 16, 199.
46. Royall, D.R.; Lauterbach, E.C.; Kaufer, D.; Malloy, P.; Coburn, K.L.; Black, K.J. The Cognitive Correlates of Functional Status: A Review From the Committee on Research of the American Neuropsychiatric Association. J. Neuropsychiatry Clin. Neurosci. 2007, 19, 249–265.
47. Ye, S.; Ko, B.; Phi, H.; Eagleman, D.; Flores, B.; Katz, Y.; Huang, B.; Hosseini Ghomi, R. Validity of Computer Based Administration of Cognitive Assessments Compared to Traditional Paper-Based Administration: Psychiatry and Clinical Psychology. medRxiv 2020.
48. Valladares-Rodriguez, S.; Pérez-Rodriguez, R.; Fernandez-Iglesias, J.M.; Anido-Rifón, L.; Facal, D.; Rivas-Costa, C. Learning to Detect Cognitive Impairment through Digital Games and Machine Learning Techniques: A Preliminary Study. Methods Inf. Med. 2018, 57, 197–207.
49. Sternin, A.; Burns, A.; Owen, A.M. Thirty-Five Years of Computerized Cognitive Assessment of Aging—Where Are We Now? Diagnostics 2019, 9, 114.
50. Zeng, Z.; Fauvel, S.; Hsiang, B.T.T.; Wang, D.; Qiu, Y.; Khuan, P.C.O.; Leung, C.; Shen, Z.; Chin, J.J. Towards Long-Term Tracking and Detection of Early Dementia: A Computerized Cognitive Test Battery with Gamification. In Proceedings of the 3rd International Conference on Crowd Science and Engineering, Singapore, 28–31 July 2018; ACM: New York, NY, USA, 2018; pp. 1–10.
51. Cheng, X.; Gilmore, G.C.; Lerner, A.J.; Lee, K. Computerized Block Games for Automated Cognitive Assessment: Development and Evaluation Study. JMIR Serious Games 2023, 11, e40931.
52. Lee, K.; Jeong, D.; Schindler, R.C.; Short, E.J. SIG-Blocks: Tangible Game Technology for Automated Cognitive Assessment. Comput. Hum. Behav. 2016, 65, 163–175.
53. Kawahara, Y.; Ikeda, Y.; Deguchi, K.; Kurata, T.; Hishikawa, N.; Sato, K.; Kono, S.; Yunoki, T.; Omote, Y.; Yamashita, T.; et al. Simultaneous Assessment of Cognitive and Affective Functions in Multiple System Atrophy and Cortical Cerebellar Atrophy in Relation to Computerized Touch-Panel Screening Tests. J. Neurol. Sci. 2015, 351, 24–30.
54. Yang, J.; Jiang, R.; Ding, H.; Au, R.; Chen, J.; Li, C.; An, N. Designing and Evaluating MahjongBrain: A Digital Cognitive Assessment Tool Through Gamification. In HCI International 2023—Late Breaking Papers; Gao, Q., Zhou, J., Duffy, V.G., Antona, M., Stephanidis, C., Eds.; Lecture Notes in Computer Science; Springer Nature Switzerland: Cham, Switzerland, 2023; Volume 14055, pp. 264–278. ISBN 978-3-031-48040-9.
55. Hsu, W.-Y.; Rowles, W.; Anguera, J.A.; Zhao, C.; Anderson, A.; Alexander, A.; Sacco, S.; Henry, R.; Gazzaley, A.; Bove, R. Application of an Adaptive, Digital, Game-Based Approach for Cognitive Assessment in Multiple Sclerosis: Observational Study. J. Med. Internet Res. 2021, 23, e24356.
56. Oliva, I.; Losa, J. Validation of the Computerized Cognitive Assessment Test: NNCT. Int. J. Environ. Res. Public Health 2022, 19, 10495.
57. Berg, J.-L.; Durant, J.; Léger, G.C.; Cummings, J.L.; Nasreddine, Z.; Miller, J.B. Comparing the Electronic and Standard Versions of the Montreal Cognitive Assessment in an Outpatient Memory Disorders Clinic: A Validation Study. J. Alzheimers Dis. 2018, 62, 93–97.
58. Snowdon, A.; Hussein, A.; Kent, R.; Pino, L.; Hachinski, V. Comparison of an Electronic and Paper-Based Montreal Cognitive Assessment Tool. Alzheimer Dis. Assoc. Disord. 2015, 29, 325–329.
59. Park, I.; Kim, Y.J.; Kim, Y.J.; Lee, U. Automatic, Qualitative Scoring of the Interlocking Pentagon Drawing Test (PDT) Based on U-Net and Mobile Sensor Data. Sensors 2020, 20, 1283.
60. Park, S.-Y.; Schott, N. The Trail-Making-Test: Comparison between Paper-and-Pencil and Computerized Versions in Young and Healthy Older Adults. Appl. Neuropsychol. Adult 2022, 29, 1208–1220.
61. Dahmen, J.; Cook, D.; Fellows, R.; Schmitter-Edgecombe, M. An Analysis of a Digital Variant of the Trail Making Test Using Machine Learning Techniques. Technol. Health Care 2017, 25, 251–264.
62. Heimann-Steinert, A.; Latendorf, A.; Prange, A.; Sonntag, D.; Müller-Werdan, U. Digital Pen Technology for Conducting Cognitive Assessments: A Cross-over Study with Older Adults. Psychol. Res. 2021, 85, 3075–3083.
63. Müller, S.; Preische, O.; Heymann, P.; Elbing, U.; Laske, C. Increased Diagnostic Accuracy of Digital vs. Conventional Clock Drawing Test for Discrimination of Patients in the Early Course of Alzheimer’s Disease from Cognitively Healthy Individuals. Front. Aging Neurosci. 2017, 9, 101.
64. Yu, K.; Zhang, S.; Wang, Q.; Wang, X.; Qin, Y.; Wang, J.; Li, C.; Wu, Y.; Wang, W.; Lin, H. Development of a Computerized Tool for the Chinese Version of the Montreal Cognitive Assessment for Screening Mild Cognitive Impairment. Int. Psychogeriatr. 2015, 27, 213–219.
65. Xie, S.S.; Goldstein, C.M.; Gathright, E.C.; Gunstad, J.; Dolansky, M.A.; Redle, J.; Hughes, J.W. Performance of the Automated Neuropsychological Assessment Metrics (ANAM) in Detecting Cognitive Impairment in Heart Failure Patients. Heart Lung 2015, 44, 387–394.
66. Dougherty, J.H.; Cannon, R.L.; Nicholas, C.R.; Hall, L.; Hare, F.; Carr, E.; Dougherty, A.; Janowitz, J.; Arunthamakun, J. The Computerized Self Test (CST): An Interactive, Internet Accessible Cognitive Screening Test For Dementia. J. Alzheimers Dis. 2010, 20, 185–195.
67. Scanlon, L.; O’Shea, E.; O’Caoimh, R.; Timmons, S. Usability and Validity of a Battery of Computerised Cognitive Screening Tests for Detecting Cognitive Impairment. Gerontology 2016, 62, 247–252.
68. Park, J.-H.; Jung, M.; Kim, J.; Park, H.Y.; Kim, J.-R.; Park, J.-H. Validity of a Novel Computerized Screening Test System for Mild Cognitive Impairment. Int. Psychogeriatr. 2018, 30, 1455–1463.
69. Fung, A.W.-T.; Lam, L.C.W. Validation of a Computerized Hong Kong—Vigilance and Memory Test (HK-VMT) to Detect Early Cognitive Impairment in Healthy Older Adults. Aging Ment. Health 2020, 24, 186–192.
70. Dawadi, P.N.; Cook, D.J.; Schmitter-Edgecombe, M. Automated Cognitive Health Assessment From Smart Home-Based Behavior Data. IEEE J. Biomed. Health Inform. 2016, 20, 1188–1194.
71. Dawadi, P.N.; Cook, D.J.; Schmitter-Edgecombe, M. Automated Cognitive Health Assessment Using Smart Home Monitoring of Complex Tasks. IEEE Trans. Syst. Man Cybern. Syst. 2013, 43, 1302–1313.
72. Javed, A.R.; Saadia, A.; Mughal, H.; Gadekallu, T.R.; Rizwan, M.; Maddikunta, P.K.R.; Mahmud, M.; Liyanage, M.; Hussain, A. Artificial Intelligence for Cognitive Health Assessment: State-of-the-Art, Open Challenges and Future Directions. Cogn. Comput. 2023, 15, 1767–1812.
73. Vacante, M.; Wilcock, G.K.; De Jager, C.A. Computerized Adaptation of The Placing Test for Early Detection of Both Mild Cognitive Impairment and Alzheimer’s Disease. J. Clin. Exp. Neuropsychol. 2013, 35, 846–856.
74. Sato, K.; Niimi, Y.; Mano, T.; Iwata, A.; Iwatsubo, T. Automated Evaluation of Conventional Clock-Drawing Test Using Deep Neural Network: Potential as a Mass Screening Tool to Detect Individuals with Cognitive Decline. Front. Neurol. 2022, 13, 896403.
75. Youn, Y.C.; Pyun, J.-M.; Ryu, N.; Baek, M.J.; Jang, J.-W.; Park, Y.H.; Ahn, S.-W.; Shin, H.-W.; Park, K.-Y.; Kim, S. Use of the Clock Drawing Test and the Rey–Osterrieth Complex Figure Test-Copy with Convolutional Neural Networks to Predict Cognitive Impairment. Alzheimers Res. Ther. 2020, 13, 85.
76. Kaser, A.N.; Lacritz, L.H.; Winiarski, H.R.; Gabirondo, P.; Schaffert, J.; Coca, A.J.; Jiménez-Raboso, J.; Rojo, T.; Zaldua, C.; Honorato, I.; et al. A Novel Speech Analysis Algorithm to Detect Cognitive Impairment in a Spanish Population. Front. Neurol. 2024, 15, 1342907.
77. Hajjar, I.; Okafor, M.; Choi, J.D.; Moore, E.; Abrol, A.; Calhoun, V.D.; Goldstein, F.C. Development of Digital Voice Biomarkers and Associations with Cognition, Cerebrospinal Biomarkers, and Neural Representation in Early Alzheimer’s Disease. Alzheimers Dement. Diagn. Assess. Dis. Monit. 2023, 15, e12393.
78. Chen, L.; Asgari, M.; Gale, R.; Wild, K.; Dodge, H.; Kaye, J. Improving the Assessment of Mild Cognitive Impairment in Advanced Age with a Novel Multi-Feature Automated Speech and Language Analysis of Verbal Fluency. Front. Psychol. 2020, 11, 535.
79. Robin, J.; Xu, M.; Kaufman, L.D.; Simpson, W. Using Digital Speech Assessments to Detect Early Signs of Cognitive Impairment. Front. Digit. Health 2021, 3, 749758.
80. Yeung, A.; Iaboni, A.; Rochon, E.; Lavoie, M.; Santiago, C.; Yancheva, M.; Novikova, J.; Xu, M.; Robin, J.; Kaufman, L.D.; et al. Correlating Natural Language Processing and Automated Speech Analysis with Clinician Assessment to Quantify Speech-Language Changes in Mild Cognitive Impairment and Alzheimer’s Dementia. Alzheimers Res. Ther. 2021, 13, 109.
81. Ruengchaijatuporn, N.; Chatnuntawech, I.; Teerapittayanon, S.; Sriswasdi, S.; Itthipuripat, S.; Hemrungrojn, S.; Bunyabukkana, P.; Petchlorlian, A.; Chunamchai, S.; Chotibut, T.; et al. An Explainable Self-Attention Deep Neural Network for Detecting Mild Cognitive Impairment Using Multi-Input Digital Drawing Tasks. Alzheimers Res. Ther. 2022, 14, 111.
82. Chen, S.; Stromer, D.; Alabdalrahim, H.A.; Schwab, S.; Weih, M.; Maier, A. Automatic Dementia Screening and Scoring by Applying Deep Learning on Clock-Drawing Tests. Sci. Rep. 2020, 10, 20854.
83. Park, J.-H. Non-Equivalence of Sub-Tasks of the Rey-Osterrieth Complex Figure Test with Convolutional Neural Networks to Discriminate Mild Cognitive Impairment. BMC Psychiatry 2024, 24, 166.
84. Bergeron, M.F.; Landset, S.; Zhou, X.; Ding, T.; Khoshgoftaar, T.M.; Zhao, F.; Du, B.; Chen, X.; Wang, X.; Zhong, L.; et al. Utility of MemTrax and Machine Learning Modeling in Classification of Mild Cognitive Impairment. J. Alzheimers Dis. 2020, 77, 1545–1558.
85. Nakaoku, Y.; Ogata, S.; Murata, S.; Nishimori, M.; Ihara, M.; Iihara, K.; Takegami, M.; Nishimura, K. AI-Assisted In-House Power Monitoring for the Detection of Cognitive Impairment in Older Adults. Sensors 2021, 21, 6249.
86. Rykov, Y.G.; Patterson, M.D.; Gangwar, B.A.; Jabar, S.B.; Leonardo, J.; Ng, K.P.; Kandiah, N. Predicting Cognitive Scores from Wearable-Based Digital Physiological Features Using Machine Learning: Data from a Clinical Trial in Mild Cognitive Impairment. BMC Med. 2024, 22, 36.
87. Jia, X.; Wang, Z.; Huang, F.; Su, C.; Du, W.; Jiang, H.; Wang, H.; Wang, J.; Wang, F.; Su, W.; et al. A Comparison of the Mini-Mental State Examination (MMSE) with the Montreal Cognitive Assessment (MoCA) for Mild Cognitive Impairment Screening in Chinese Middle-Aged and Older Population: A Cross-Sectional Study. BMC Psychiatry 2021, 21, 485.
88. Trevethan, R. Sensitivity, Specificity, and Predictive Values: Foundations, Pliabilities, and Pitfalls in Research and Practice. Front. Public Health 2017, 5, 307.
89. Senthilnathan, S. Usefulness of Correlation Analysis. SSRN Electron. J. 2019.
90. Janssens, A.C.J.W.; Martens, F.K. Reflection on Modern Methods: Revisiting the Area under the ROC Curve. Int. J. Epidemiol. 2020, 49, 1397–1403.
91. Ahmed, S.; De Jager, C.; Wilcock, G. A Comparison of Screening Tools for the Assessment of Mild Cognitive Impairment: Preliminary Findings. Neurocase 2012, 18, 336–351.
92. Wong, A.; Fong, C.; Mok, V.C.; Leung, K.; Tong, R.K. Computerized Cognitive Screen (CoCoSc): A Self-Administered Computerized Test for Screening for Cognitive Impairment in Community Social Centers. J. Alzheimers Dis. 2017, 59, 1299–1306.
93. Larner, A.J. Screening Utility of the Montreal Cognitive Assessment (MoCA): In Place of—or as Well as—the MMSE? Int. Psychogeriatr. 2012, 24, 391–396.
94. Tierney, M.C.; Naglie, G.; Upshur, R.; Moineddin, R.; Charles, J.; Liisa Jaakkimainen, R. Feasibility and Validity of the Self-Administered Computerized Assessment of Mild Cognitive Impairment with Older Primary Care Patients. Alzheimer Dis. Assoc. Disord. 2014, 28, 311–319.
95. Phillips, M.; Rogers, P.; Haworth, J.; Bayer, A.; Tales, A. Intra-Individual Reaction Time Variability in Mild Cognitive Impairment and Alzheimer’s Disease: Gender, Processing Load and Speed Factors. PLoS ONE 2013, 8, e65712.
96. Lehr, M.; Prud’hommeaux, E.; Shafran, I.; Roark, B. Fully Automated Neuropsychological Assessment for Detecting Mild Cognitive Impairment. In Proceedings of the Interspeech 2012, Portland, OR, USA, 9–13 September 2012; ISCA: Singapore, 2012; pp. 1039–1042.
97. Calamia, M.; Weitzner, D.S.; De Vito, A.N.; Bernstein, J.P.K.; Allen, R.; Keller, J.N. Feasibility and Validation of a Web-Based Platform for the Self-Administered Patient Collection of Demographics, Health Status, Anxiety, Depression, and Cognition in Community Dwelling Elderly. PLoS ONE 2021, 16, e0244962.
98. Bissig, D.; Kaye, J.; Erten-Lyons, D. Validation of SATURN, a Free, Electronic, Self-administered Cognitive Screening Test. Alzheimers Dement. Transl. Res. Clin. Interv. 2020, 6, e12116.
99. Ip, E.H.; Barnard, R.; Marshall, S.A.; Lu, L.; Sink, K.; Wilson, V.; Chamberlain, D.; Rapp, S.R. Development of a Video-Simulation Instrument for Assessing Cognition in Older Adults. BMC Med. Inform. Decis. Mak. 2017, 17, 161.
100. Satoh, T.; Sawada, Y.; Saba, H.; Kitamoto, H.; Kato, Y.; Shiozuka, Y.; Kuwada, T.; Shima, S.; Murakami, K.; Sasaki, M.; et al. Assessment of Mild Cognitive Impairment Using CogEvo: A Computerized Cognitive Function Assessment Tool. J. Prim. Care Community Health 2024, 15, 21501319241239228.
101. Dwolatzky, T.; Dimant, L.; Simon, E.S.; Doniger, G.M. Validity of a Short Computerized Assessment Battery for Moderate Cognitive Impairment and Dementia. Int. Psychogeriatr. 2010, 22, 795–803.
102. Ye, S.; Sun, K.; Huynh, D.; Phi, H.Q.; Ko, B.; Huang, B.; Hosseini Ghomi, R. A Computerized Cognitive Test Battery for Detection of Dementia and Mild Cognitive Impairment: Instrument Validation Study. JMIR Aging 2022, 5, e36825.
103. Patrick, K.S.; Chakrabati, S.; Rhoads, T.; Busch, R.M.; Floden, D.P.; Galioto, R. Utility of the Brief Assessment of Cognitive Health (BACH) Computerized Screening Tool in Identifying MS-Related Cognitive Impairment. Mult. Scler. Relat. Disord. 2024, 82, 105398.
104. Yuen, K.; Beaton, D.; Bingham, K.; Katz, P.; Su, J.; Diaz Martinez, J.P.; Tartaglia, M.C.; Ruttan, L.; Wither, J.E.; Kakvan, M.; et al. Validation of the Automated Neuropsychological Assessment Metrics for Assessing Cognitive Impairment in Systemic Lupus Erythematosus. Lupus 2022, 31, 45–54.
105. Rodríguez-Salgado, A.M.; Llibre-Guerra, J.J.; Tsoy, E.; Peñalver-Guia, A.I.; Bringas, G.; Erlhoff, S.J.; Kramer, J.H.; Allen, I.E.; Valcour, V.; Miller, B.L.; et al. A Brief Digital Cognitive Assessment for Detection of Cognitive Impairment in Cuban Older Adults. J. Alzheimers Dis. 2021, 79, 85–94.
106. Fukui, Y.; Yamashita, T.; Hishikawa, N.; Kurata, T.; Sato, K.; Omote, Y.; Kono, S.; Yunoki, T.; Kawahara, Y.; Hatanaka, N.; et al. Computerized Touch-Panel Screening Tests for Detecting Mild Cognitive Impairment and Alzheimer’s Disease. Intern. Med. 2015, 54, 895–902.
107. Takechi, H.; Yoshino, H. Usefulness of CogEvo, a Computerized Cognitive Assessment and Training Tool, for Distinguishing Patients with Mild Alzheimer’s Disease and Mild Cognitive Impairment from Cognitively Normal Older People. Geriatr. Gerontol. Int. 2021, 21, 192–196.
108. Kouzuki, M.; Miyamoto, M.; Tanaka, N.; Urakami, K. Validation of a Novel Computerized Cognitive Function Test for the Rapid Detection of Mild Cognitive Impairment. BMC Neurol. 2022, 22, 457.
109. Ruano, L.; Sousa, A.; Severo, M.; Alves, I.; Colunas, M.; Barreto, R.; Mateus, C.; Moreira, S.; Conde, E.; Bento, V.; et al. Development of a Self-Administered Web-Based Test for Longitudinal Cognitive Assessment. Sci. Rep. 2016, 6, 19114.
110. Curiel, R.E.; Crocco, E.; Rosado, M.; Duara, R.; Greig, M.T.; Raffo, A.; Loewenstein, D.A. A Brief Computerized Paired Associate Test for the Detection of Mild Cognitive Impairment in Community-Dwelling Older Adults. J. Alzheimers Dis. 2016, 54, 793–799.
111. Dawadi, P.N.; Cook, D.J.; Schmitter-Edgecombe, M.; Parsey, C. Automated Assessment of Cognitive Health Using Smart Home Technologies. Technol. Health Care 2013, 21, 323–343.
112. Maito, M.A.; Santamaría-García, H.; Moguilner, S.; Possin, K.L.; Godoy, M.E.; Avila-Funes, J.A.; Behrens, M.I.; Brusco, I.L.; Bruno, M.A.; Cardona, J.F.; et al. Classification of Alzheimer’s Disease and Frontotemporal Dementia Using Routine Clinical and Cognitive Measures across Multicentric Underrepresented Samples: A Cross Sectional Observational Study. Lancet Reg. Health Am. 2023, 17, 100387.
113. Tsai, C.-F.; Chen, C.-C.; Wu, E.H.-K.; Chung, C.-R.; Huang, C.-Y.; Tsai, P.-Y.; Yeh, S.-C. A Machine-Learning-Based Assessment Method for Early-Stage Neurocognitive Impairment by an Immersive Virtual Supermarket. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 2124–2132.
114. Xiao, Y.; Jia, Z.; Dong, M.; Song, K.; Li, X.; Bian, D.; Li, Y.; Jiang, N.; Shi, C.; Li, G. Development and Validity of Computerized Neuropsychological Assessment Devices for Screening Mild Cognitive Impairment: Ensemble of Models with Feature Space Heterogeneity and Retrieval Practice Effect. J. Biomed. Inform. 2022, 131, 104108.
115. Lagun, D.; Manzanares, C.; Zola, S.M.; Buffalo, E.A.; Agichtein, E. Detecting Cognitive Impairment by Eye Movement Analysis Using Automatic Classification Algorithms. J. Neurosci. Methods 2011, 201, 196–203.
116. Kang, M.J.; Kim, S.Y.; Na, D.L.; Kim, B.C.; Yang, D.W.; Kim, E.-J.; Na, H.R.; Han, H.J.; Lee, J.-H.; Kim, J.H.; et al. Prediction of Cognitive Impairment via Deep Learning Trained with Multi-Center Neuropsychological Test Data. BMC Med. Inform. Decis. Mak. 2019, 19, 231.
117. Xiao, H.; Fangfang, H.; Qiong, W.; Shuai, Z.; Jingya, Z.; Xu, L.; Guodong, S.; Yan, Z. The Value of Handgrip Strength and Self-Rated Squat Ability in Predicting Mild Cognitive Impairment: Development and Validation of a Prediction Model. Inq. J. Health Care Organ. Provis. Financ. 2023, 60, 004695802311552.
118. Na, K.-S. Prediction of Future Cognitive Impairment among the Community Elderly: A Machine-Learning Based Approach. Sci. Rep. 2019, 9, 3335.
119. Kalafatis, C.; Modarres, M.H.; Apostolou, P.; Marefat, H.; Khanbagi, M.; Karimi, H.; Vahabi, Z.; Aarsland, D.; Khaligh-Razavi, S.-M. Validity and Cultural Generalisability of a 5-Minute AI-Based, Computerised Cognitive Assessment in Mild Cognitive Impairment and Alzheimer’s Dementia. Front. Psychiatry 2021, 12, 706695.
120. O’Malley, R.P.D.; Mirheidari, B.; Harkness, K.; Reuber, M.; Venneri, A.; Walker, T.; Christensen, H.; Blackburn, D. Fully Automated Cognitive Screening Tool Based on Assessment of Speech and Language. J. Neurol. Neurosurg. Psychiatry 2021, 92, 12–15.
121. Zhou, H.; Park, C.; Shahbazi, M.; York, M.K.; Kunik, M.E.; Naik, A.D.; Najafi, B. Digital Biomarkers of Cognitive Frailty: The Value of Detailed Gait Assessment Beyond Gait Speed. Gerontology 2022, 68, 224–233.
122. Alzheimer’s Association. 2024 Alzheimer’s Disease Facts and Figures. Alzheimers Dement. 2024, 20, 3708–3821.
123. Handzlik, D.; Richmond, L.L.; Skiena, S.; Carr, M.A.; Clouston, S.A.P.; Luft, B.J. Explainable Automated Evaluation of the Clock Drawing Task for Memory Impairment Screening. Alzheimers Dement. Diagn. Assess. Dis. Monit. 2023, 15, e12441.
124. Wei, W.; Zhào, H.; Liu, Y.; Huang, Y. Traditional Trail Making Test Modified into Brand-New Assessment Tools: Digital and Walking Trail Making Test. J. Vis. Exp. 2019, 153, e60456.
125. Drapeau, C.E.; Bastien-Toniazzo, M.; Rous, C.; Carlier, M. Nonequivalence of Computerized and Paper-and-Pencil Versions of Trail Making Test. Percept. Mot. Skills 2007, 104, 785–791.
126. Sacco, G.; Ben-Sadoun, G.; Bourgeois, J.; Fabre, R.; Manera, V.; Robert, P. Comparison between a Paper-Pencil Version and Computerized Version for the Realization of a Neuropsychological Test: The Example of the Trail Making Test. J. Alzheimers Dis. 2019, 68, 1657–1666.
127. Jee, H.; Park, J. Feasibility of a Novice Electronic Psychometric Assessment System for Cognitively Impaired. J. Exerc. Rehabil. 2020, 16, 489–495.
128. Cahn-Hidalgo, D.; Estes, P.W.; Benabou, R. Validity, Reliability, and Psychometric Properties of a Computerized, Cognitive Assessment Test (Cognivue®). World J. Psychiatry 2020, 10, 1–11.
129. Tornatore, J.B.; Hill, E.; Laboff, J.A.; McGann, M.E. Self-Administered Screening for Mild Cognitive Impairment: Initial Validation of a Computerized Test Battery. J. Neuropsychiatry Clin. Neurosci. 2005, 17, 98–105.
130. Shopin, L.; Shenhar-Tsarfaty, S.; Ben Assayag, E.; Hallevi, H.; Korczyn, A.D.; Bornstein, N.M.; Auriel, E. Cognitive Assessment in Proximity to Acute Ischemic Stroke/Transient Ischemic Attack: Comparison of the Montreal Cognitive Assessment Test and MindStreams Computerized Cognitive Assessment Battery. Dement. Geriatr. Cogn. Disord. 2013, 36, 36–42.
131. Ritsner, M.S.; Blumenkrantz, H.; Dubinsky, T.; Dwolatzky, T. The Detection of Neurocognitive Decline in Schizophrenia Using the Mindstreams Computerized Cognitive Test Battery. Schizophr. Res. 2006, 82, 39–49.
132. Hammers, D.; Spurgeon, E.; Ryan, K.; Persad, C.; Barbas, N.; Heidebrink, J.; Darby, D.; Giordani, B. Validity of a Brief Computerized Cognitive Screening Test in Dementia. J. Geriatr. Psychiatry Neurol. 2012, 25, 89–99.
133. Segkouli, S.; Paliokas, I.; Tzovaras, D.; Lazarou, I.; Karagiannidis, C.; Vlachos, F.; Tsolaki, M. A Computerized Test for the Assessment of Mild Cognitive Impairment Subtypes in Sentence Processing. Aging Neuropsychol. Cogn. 2018, 25, 829–851.
134. Gills, J.L.; Bott, N.T.; Madero, E.N.; Glenn, J.M.; Gray, M. A Short Digital Eye-Tracking Assessment Predicts Cognitive Status among Adults. GeroScience 2021, 43, 297–308.
135. Larøi, F.; Canlaire, J.; Mourad, H.; Van Der Linden, M. Relations between a Computerized Shopping Task and Cognitive Tests in a Group of Persons Diagnosed with Schizophrenia Compared with Healthy Controls. J. Int. Neuropsychol. Soc. 2010, 16, 180–189.
136. Da Motta, C.; Carvalho, C.B.; Castilho, P.; Pato, M.T. Assessment of Neurocognitive Function and Social Cognition with Computerized Batteries: Psychometric Properties of the Portuguese PennCNB in Healthy Controls. Curr. Psychol. 2021, 40, 4851–4862.
137. Mackin, R.S.; Rhodes, E.; Insel, P.S.; Nosheny, R.; Finley, S.; Ashford, M.; Camacho, M.R.; Truran, D.; Mosca, K.; Seabrook, G.; et al. Reliability and Validity of a Home-Based Self-Administered Computerized Test of Learning and Memory Using Speech Recognition. Aging Neuropsychol. Cogn. 2022, 29, 867–881.
138. Cavedoni, S.; Chirico, A.; Pedroli, E.; Cipresso, P.; Riva, G. Digital Biomarkers for the Early Detection of Mild Cognitive Impairment: Artificial Intelligence Meets Virtual Reality. Front. Hum. Neurosci. 2020, 14, 245.
139. Lim, J.E.; Wong, W.T.; Teh, T.A.; Lim, S.H.; Allen, J.C.; Quah, J.H.M.; Malhotra, R.; Tan, N.C. A Fully-Immersive and Automated Virtual Reality System to Assess the Six Domains of Cognition: Protocol for a Feasibility Study. Front. Aging Neurosci. 2021, 12, 604670.
140. Jamshed, M.; Shahzad, A.; Riaz, F.; Kim, K. Exploring Inertial Sensor-Based Balance Biomarkers for Early Detection of Mild Cognitive Impairment. Sci. Rep. 2024, 14, 9829.
Figure 1. PRISMA flow chart adapted for this study.
