Project Report

A Study of the Effectiveness Verification of Computer-Based Dementia Assessment Contents (Co-Wis): Non-Randomized Study

1 Corporate Research Institute of Clupea, Inc., Daegu 41585, Korea
2 Division of Clinical Psychology, Department of Psychiatry, Yeungnam University, Daegu 42415, Korea
3 Department of Psychiatry, College of Medicine, Yeungnam University, Daegu 42415, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(5), 1579; https://doi.org/10.3390/app10051579
Submission received: 30 January 2020 / Revised: 20 February 2020 / Accepted: 21 February 2020 / Published: 26 February 2020
(This article belongs to the Special Issue Data Technology Applications in Life, Diseases, and Health)

Featured Application

Computer-based dementia assessment content (Co-Wis) has been developed to support efficient and accurate dementia prevention and screening. In this study, we verified the reliability and validity of the developed Co-Wis, which we anticipate will play an important role in future big data-based dementia management systems.

Abstract

Computer-based neuropsychological assessments have many advantages over traditional neuropsychological assessments. However, limited data are available on the validity and reliability of computer-based assessments. The purpose of this study was to examine the reliability and validity of the computer-based dementia assessment contents (Co-Wis). This study recruited 113 participants from Yeungnam University Medical Center in Daegu from June 2019 to December 2019 and received ethical approval. Participants were evaluated using standard and objective cognitive test tools for dementia, such as the Korean version of the Mini-Mental State Examination (K-MMSE), the Clinical Dementia Rating Scale (CDR), and the Standardized Seoul Neuropsychological Screening Battery-II (SNSB-II). To verify the effectiveness of Co-Wis, concurrent validity and test–retest reliability (Pearson’s correlation coefficients), construct validity (factor analysis), and signal detection analysis (ROC curve) were assessed. In most of the Co-Wis subtests, concurrent validity and test–retest reliability showed statistically significant correlations (p < 0.05, p < 0.01). The factor analysis showed that Co-Wis assessed the major cognitive domains (Tucker–Lewis Index (TLI) = 0.876, Comparative Fit Index (CFI) = 0.897, RMSEA = 0.088). Thus, Co-Wis appears clinically applicable, with high reliability and validity. In the future, the tests should be further developed using standardized scores and big data-based machine learning.

1. Introduction

Dementia is most commonly diagnosed in the elderly. In the Republic of Korea, more patients are being diagnosed with dementia due to the rapid aging of the population [1]. In 2017, it was estimated that the prevalence of dementia among those aged 65 or above was 9.94% and that approximately 700,000 individuals had dementia [2]. Along with advances in pharmacological treatments and interventions, modifiable dementia risk factors need to be addressed to prevent this disease [3].
Because dementia is a degenerative disease, its early diagnosis is critical, and considerable effort has been invested in preventing and curing it [4]. A diagnosis of dementia requires diverse medical assessments encompassing the interpretation of neuropsychological tests, imaging or pathology, medical history, and the assessment of activities of daily living [5]. Neuropsychological assessments are used to identify signs of cognitive impairment consistent with dementia or with neurological disorders [6] and include traditional paper-based as well as computer-based assessment methods [7].
Computer-based neuropsychological assessments have many advantages over traditional assessments, including savings in cost and time, consistency of results, accurate recording of responses, and the ability to automatically store and compare a person’s performance across assessment sessions [8]. However, most computer-based neuropsychological assessments are limited to those that use a keyboard and mouse as the primary inputs, and limited data are available on their validity and reliability [9,10,11]. Although previous studies on patients with brain injuries evaluated the usefulness of computerized neurocognitive tests [12,13], further research is needed to overcome their reported limitations regarding ethics approval, the number of subjects, and the heterogeneity of research methods. Thus, the present study aimed to validate the effectiveness of computer-based dementia assessment content (Co-Wis).

2. Materials and Methods

2.1. Clinical Trial Ethics (Institutional Review Board Approval)

The study procedures were performed in accordance with the Declaration of Helsinki and were approved by the Institutional Review Board of Yeungnam University (IRB No.: YUMC 2019-04-058-001). Coded identifiers were assigned to study subjects who visited the testing agency (medical institution) and provided written informed consent for study participation. The written study information described the general content of the study, and participants took part only after fully understanding it (Figure 1).

2.2. Clinical Trial Design

The purpose of this clinical trial was to confirm the effectiveness of the Co-Wis for the early diagnosis and prevention of dementia symptoms. Subjects were recruited from the Division of Clinical Psychology of the Department of Psychiatry at Yeungnam University Hospital in Daegu, Korea, from June 2019 to December 2019. All individuals with subjective cognitive decline who attended the unit to undergo neurocognitive assessment were screened, and among them, 113 subjects with objective cognitive decline were included in the study. Participants were evaluated using standard and objective cognitive test tools for dementia, such as the Korean version of the Mini-Mental State Examination (K-MMSE), the Clinical Dementia Rating Scale (CDR), and the Standardized Seoul Neuropsychological Screening Battery-II (SNSB-II). Additionally, participants underwent a comprehensive clinical evaluation including a detailed medical history, neurological examinations, and a neuropsychological evaluation. The overall clinical trial progression is depicted in Figure 2.
Subjects were included in the present study if they were aged between 45 and 90 years, had given informed consent, had a diagnosis of neurocognitive disorder according to DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition; APA, 2013) criteria, had a total K-MMSE score of 10–26 obtained within one year before or after screening, had a CDR score of 0.5–2, had completed the SNSB-II within one year before or after screening, and had no alterations in sensory perception or motor function that could interfere with the trial. Subjects were excluded if they required hospitalization or outpatient care for severe medical conditions, showed persistent behavioral problems requiring frequent hospitalization or inpatient care, required medical reassessment due to rapidly progressing dementia symptoms, had other medical illnesses, were using drugs affecting cognitive function, or had participated in clinical trials or other studies related to this study within 3 months before screening.
This was a non-randomized, single-group clinical trial. Only a single-blind design was applied, hiding the purpose of the study from the subjects, since blinding the researchers was difficult to ensure given the behavioral nature of neuropsychological testing [14]. A total of 11 researchers participated in the clinical trial. The SNSB-II was administered at the screening stage and the Co-Wis at Visits 1 and 2. The test–retest interval ranged between two weeks and three months. All tests were conducted in a quiet and focused environment to accurately examine the subject’s cognitive function [15,16,17]. Two tablet PCs with a built-in Co-Wis application were prepared. Tests were conducted in a pre-specified order and, if discontinued, were re-administered from the point of interruption. To avoid learning bias, the same procedure was repeated with Co-Wis between two weeks and three months after the first administration. The contents and procedures implemented in this study were standardized and applied in the same way to all participants.

2.3. Computer-Based Dementia Assessment Contents (Co-Wis)

Co-Wis evaluates the attention, memory, language, visuospatial, and executive function domains [18] (Table 1).
To verify the effectiveness of the contents, concurrent validity, test–retest reliability, construct validity, and signal detection analysis were used (Figure 3).
The Co-Wis was designed as a computer-based tool for dementia prevention, with the aim of making its contents accessible to psychology and rehabilitation clinics worldwide. Furthermore, the system was improved to support multiple dementia prevention sessions simultaneously, and patients used an application created with the Unity game engine. At the initial stage of the setup, the user was prompted to enter the identifier provided by the therapist. The application used the identifier to find the patient within the assessment database. Each therapist had access to his or her Co-Wis account and was able to accept new patients and track their historical session data. Co-Wis was designed to predict dementia by applying machine learning based on the big data collected through content execution (Figure 4 and Figure 5).
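As an illustration only (the actual Co-Wis application is built with Unity, and its data model is not described in detail here), the minimal sketch below mirrors the workflow just described: a therapist-provided identifier is used to locate a patient record in the assessment database, and completed sessions are appended to that record for later review and machine learning analysis. All names (PatientRecord, find_patient, the example identifier) are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List

@dataclass
class SessionRecord:
    visit: int                     # e.g., 1 or 2 in this trial
    test_date: date
    raw_scores: Dict[str, float]   # subtest name -> raw score

@dataclass
class PatientRecord:
    identifier: str                # coded identifier assigned by the clinic
    sessions: List[SessionRecord] = field(default_factory=list)

def find_patient(db: Dict[str, PatientRecord], identifier: str) -> PatientRecord:
    """Return the patient record matching the identifier entered at app start-up."""
    return db[identifier]          # raises KeyError if the identifier is unknown

# Hypothetical usage:
# patient = find_patient(assessment_db, "YU-2019-0042")
# patient.sessions.append(SessionRecord(visit=1, test_date=date.today(),
#                                       raw_scores={"digit_coding": 8}))
```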

2.4. Data Analysis

Data were analyzed using the IBM Statistical Package for the Social Sciences (SPSS) version 25.0. To examine the concurrent validity of the Co-Wis, Pearson correlation coefficients were calculated between Co-Wis and SNSB-II scores. A value of p < 0.05 was considered statistically significant [19,20]. The test–retest reliability of Co-Wis was also evaluated using Pearson correlation coefficients within the clinical trial process [21,22]. Factor analyses (exploratory and confirmatory) were conducted for the cognitive domain subtests to investigate the feasibility of the Co-Wis composition [23,24]. To assess model fit, we examined the CMIN/df, the Tucker–Lewis Index (TLI), the Comparative Fit Index (CFI), and the Root Mean Square Error of Approximation (RMSEA), which are less sensitive to sample size and reflect the simplicity of the model [25]. To evaluate diagnostic utility, we examined the sensitivity and specificity of Co-Wis using a receiver operating characteristic (ROC) curve [26].
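As a rough, non-authoritative sketch of the correlation analyses (the study itself used SPSS), the snippet below shows how the concurrent validity and test–retest coefficients could be computed with SciPy; the DataFrames co_wis, snsb2, visit1, visit2 and the column names in `pairs` are hypothetical placeholders.

```python
import pandas as pd
from scipy.stats import pearsonr

def concurrent_validity(co_wis: pd.DataFrame, snsb2: pd.DataFrame, pairs: dict) -> pd.DataFrame:
    """Pearson r between each Co-Wis subtest and its matched SNSB-II subtest.

    `pairs` maps a Co-Wis column name to the corresponding SNSB-II column name.
    """
    rows = []
    for cw_col, snsb_col in pairs.items():
        # Drop subjects with a missing score in either test before correlating.
        both = pd.concat([co_wis[cw_col], snsb2[snsb_col]], axis=1).dropna()
        r, p = pearsonr(both.iloc[:, 0], both.iloc[:, 1])
        rows.append({"co_wis": cw_col, "snsb2": snsb_col, "r": r, "p": p})
    return pd.DataFrame(rows)

def test_retest(visit1: pd.DataFrame, visit2: pd.DataFrame) -> pd.DataFrame:
    """Pearson r between Visit 1 and Visit 2 scores for every shared subtest column."""
    rows = []
    for col in visit1.columns.intersection(visit2.columns):
        both = pd.concat([visit1[col], visit2[col]], axis=1).dropna()
        r, p = pearsonr(both.iloc[:, 0], both.iloc[:, 1])
        rows.append({"subtest": col, "r": r, "p": p})
    return pd.DataFrame(rows)

# Example usage (hypothetical column names):
# pairs = {"word_memory_total": "svlt_e_total", "digit_coding": "digit_coding"}
# print(concurrent_validity(co_wis, snsb2, pairs))
```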

3. Results

3.1. Descriptive Statistics

Demographic information and screening data of the participants are presented in Table 2. The average age of the sample was 70.68 years (SD = 9.31), and the average length of education was 9.05 years (SD = 4.67). The mean K-MMSE score was 23.11 (SD = 3.62), and the mean CDR score was 0.80 (SD = 0.44) (Table 2).

3.2. Concurrent Validity

To verify the concurrent validity of the Co-Wis, the study subjects also completed the SNSB-II within a year of the first test administration, and the results were compared. Pearson’s correlation coefficients between the raw scores of the Co-Wis subtests and the corresponding SNSB-II subtest scores were calculated. All the Co-Wis and SNSB-II subtests were significantly correlated, with correlation coefficients ranging from 0.409 to 0.798. Upon examination of convergent validity, we determined that the Co-Wis and SNSB-II results were highly correlated (Table 3).

3.3. Test–Retest Reliability

In this study, the correlation coefficient between the Visit 1 and Visit 2 trials was calculated to verify the test–retest reliability of the Co-Wis. The test–retest interval ranged between two weeks and three months. Most items showed significant test–retest reliability, with coefficients ranging from 0.529 to 0.872. Only the trail making test (A type) showed low test–retest reliability (0.251) (Table 4).

3.4. Construct Validity

In this study, we explored the factor structure of the Co-Wis subtests. Factor analyses were conducted to see whether the Co-Wis subtests had a factor structure that corresponded to the presumed cognitive domains. We extracted Co-Wis factors using principal component analysis in SPSS version 25.0. Through the Kaiser–Meyer–Olkin (KMO) and Bartlett tests, the adequacy of the selection of the subtests and the suitability of the factor analysis model were verified, as were the commonality results (Table 5).
The Kaiser–Meyer–Olkin (KMO) value was 0.928, meaning that the number of variables and cases used in the exploratory factor analysis was appropriate. Bartlett’s test results indicated that factor analysis was appropriate (p < 0.05). Across the five cognitive domains, all the Co-Wis subtests had communalities above 0.3, indicating a high commonality among the items (Table 6).
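For readers reproducing this exploratory step outside SPSS, the sketch below (assuming the third-party factor_analyzer package and a hypothetical DataFrame `subtests` holding one column per Co-Wis subtest score) computes the KMO measure, Bartlett's test, and item communalities under principal-component extraction; the varimax rotation is an assumption, and communalities do not depend on it.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def efa_adequacy_and_communalities(scores: pd.DataFrame, n_factors: int = 5) -> pd.DataFrame:
    """Check sampling adequacy (KMO), sphericity (Bartlett), and item communalities."""
    chi2, p = calculate_bartlett_sphericity(scores)   # H0: correlation matrix is an identity matrix
    _, kmo_overall = calculate_kmo(scores)            # overall KMO, e.g., 0.928 in this study
    print(f"Bartlett chi2 = {chi2:.3f}, p = {p:.4f}; KMO = {kmo_overall:.3f}")

    # Principal-component extraction, mirroring the SPSS analysis described above.
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
    fa.fit(scores)
    return pd.DataFrame({"communality": fa.get_communalities()}, index=scores.columns)

# Usage (hypothetical): `subtests` has one column per Co-Wis subtest score.
# communalities = efa_adequacy_and_communalities(subtests)
# print((communalities["communality"] > 0.3).all())   # all items above the 0.3 criterion
```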
Subsequently, confirmatory factor analysis was performed to verify the five sub-factors assumed in this study. The results of the confirmatory factor analysis and goodness-of-fit indices are shown in Table 7, and the path model is shown in Figure 6 (TMT A, B: trail making A, B type; FD I: figure drawing immediate; FD R: figure drawing recall; WM I: word memory immediate; WM D: word memory delayed; M R: word memory recall; PN: picture naming; VF A: verbal fluency category: animal; VF M: verbal fluency category: market; VF ㄱ: verbal fluency initial word: ㄱ; VF ㅅ: verbal fluency initial word: ㅅ; VF ㅇ: verbal fluency initial word: ㅇ; CD: clock drawing; PZ: puzzle construction; AR: four fundamental arithmetic; DC: digit coding; ST W: stroop word; ST C: stroop color). The fit of the five-factor model of Co-Wis was CMIN/df = 1.864, TLI = 0.876, CFI = 0.897, and RMSEA = 0.088. A CMIN/df of three or less was considered acceptable, TLI and CFI values above 0.90 were judged to indicate good agreement, and RMSEA values below 0.06 indicated good fit and below 0.08 acceptable fit.
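The reported fit indices can also be recomputed directly from chi-square statistics; the sketch below uses the standard formulas for CMIN/df, TLI, CFI, and RMSEA. The baseline (independence) model chi-square and degrees of freedom are not reported in the paper, so the values passed in the example call are placeholders only.

```python
from math import sqrt

def fit_indices(chi2_model, df_model, chi2_null, df_null, n):
    """Standard fit indices computed from model and baseline chi-square values."""
    # TLI (non-normed fit index): compares chi2/df ratios of baseline and target models.
    tli = ((chi2_null / df_null) - (chi2_model / df_model)) / ((chi2_null / df_null) - 1)
    # CFI: 1 minus the ratio of the model's non-centrality to the baseline's non-centrality.
    cfi = 1 - max(chi2_model - df_model, 0) / max(chi2_null - df_null, max(chi2_model - df_model, 0))
    # RMSEA: root mean square error of approximation.
    rmsea = sqrt(max(chi2_model - df_model, 0) / (df_model * (n - 1)))
    return {"CMIN/df": chi2_model / df_model, "TLI": tli, "CFI": cfi, "RMSEA": rmsea}

# With the values reported in Table 7 (chi2 = 264.696, df = 142) and this study's n = 113,
# CMIN/df = 1.864 and RMSEA is approximately 0.088. The baseline chi-square needed for
# TLI/CFI is not reported, so the values below are placeholders.
print(fit_indices(264.696, 142, chi2_null=1400.0, df_null=171, n=113))
```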

3.5. Signal Detection Analysis

All the Co-Wis subtests showed significant diagnostic value, with areas under the curve ranging from 0.694 to 0.860 (Figure 7, Table 8). We also analyzed the sensitivity and specificity of each subtest (Table 9).
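A minimal sketch of this signal detection analysis, assuming scikit-learn, a vector of subtest scores, and a 0/1 label marking the more impaired reference group (how that group was defined is not fully specified in the text): it returns the AUC and the sensitivity/specificity at the Youden-optimal cut-off, with an option to flip the orientation for timed tests such as trail making, where higher scores indicate poorer performance.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def roc_summary(scores: np.ndarray, impaired: np.ndarray, higher_is_worse: bool = False):
    """AUC plus sensitivity/specificity at the Youden-optimal cut-off.

    `impaired` is a 0/1 label (assumption: 1 = the more impaired reference group);
    set `higher_is_worse=True` for timed tests such as trail making.
    """
    # Orient the score so that larger values point toward the impaired class.
    oriented = scores if higher_is_worse else -scores
    auc = roc_auc_score(impaired, oriented)
    fpr, tpr, thresholds = roc_curve(impaired, oriented)
    j = tpr - fpr                                   # Youden's J statistic at each threshold
    best = int(np.argmax(j))
    cutoff = thresholds[best] if higher_is_worse else -thresholds[best]
    return {"auc": auc,
            "sensitivity": tpr[best],
            "specificity": 1 - fpr[best],
            "cutoff": cutoff}

# Usage (hypothetical arrays):
# print(roc_summary(trail_making_a_seconds, impaired_labels, higher_is_worse=True))
```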

4. Discussion

Cognitive assessment can favor the early detection of neurological disorders comorbid with psychiatric disorders, and the need for computer-based neuropsychological tests rather than traditional paper-based ones is increasing [27]. The purpose of this clinical trial was to verify the validity and effectiveness of the Co-Wis against the SNSB-II, similar to a previous study on elderly participants that compared the prototype of a machine-administered cognitive test with a traditional paper-based psychometric tool [28,29]. To confirm the consistency of the test results, we also checked the test–retest reliability between Visits 1 and 2 at an interval of two weeks to three months, chosen to avoid fatigue and carryover effects linked to repeated measurements [30,31,32]. Exploratory and confirmatory factor analyses were conducted to investigate the factor structure of the Co-Wis subtests.
We found that the Co-Wis scores were significantly correlated with the SNSB-II scores. Test–retest reliability across the two visits confirmed the stability of the results for most subtests. The exception was the trail making test A type, which showed a test–retest reliability of 0.251, which is considered low. Given that the trail making test measures the time to completion, this result seems to reflect a practice effect [32,33]: participants showed a large difference in trail making test A performance between the first and repeated trials.
Before the exploratory factor analysis, the KMO measure and Bartlett test were used to examine the suitability of the data for factor analysis. The KMO measure was 0.928, and the Bartlett test was significant (p < 0.001). KMO values are interpreted as marvelous above 0.90, meritorious at 0.80–0.89, middling at 0.70–0.79, mediocre at 0.60–0.69, poor at 0.50–0.59, and unacceptable below 0.50. Bartlett’s test of sphericity tests the null hypothesis that no commonality exists; small significance values (less than 0.05, rejecting the null hypothesis) indicate that factor analysis may be useful for the data [23]. Using the exploratory factor analysis results of the Co-Wis subtests, we confirmed communalities above 0.3 for all subtest items [23]. Five cognitive components were derived: attention, memory, language, visuospatial, and executive function. Confirmatory factor analysis showed that the five-factor structure of the Co-Wis was acceptable. Considering the standardization coefficients of each factor, all factor coefficients were statistically significant (p < 0.001), supporting the stability of the structural model [24,34,35]. In the signal detection analysis, the overall scale showed good sensitivity and specificity, indicating that the Co-Wis contents can identify cognitive impairment as well as existing dementia assessment tools.
Our results are in line with previous studies by Park and Heo (2017), Park et al. (2017), and Lee et al. (2019) on the development of computerized neurocognitive tests and the assessment of the elderly and of patients with mild cognitive impairment or stroke. Additionally, Co-Wis showed high reliability and validity compared with paper-based assessments. This study confirmed the effectiveness of the Co-Wis, and the results will be of great value in the field, as existing studies were conducted on small samples without IRB approval and formal procedures [36,37,38]. The Co-Wis contents will provide guidelines for the development of big data-based dementia care systems that address the demands facing dementia health experts.
However, further research and data collection are needed. First, Co-Wis still requires standardization scores and big data collection in various environments. Second, to develop a computerized neurocognitive test that is effective in predicting and detecting dementia across multi-national data, an English version needs to be developed and its validity and reliability verified [39]. Third, the role of Co-Wis in dementia prevention and diagnosis should be further verified by comparing patients with healthy controls in a randomized controlled trial (RCT). Co-Wis could play an important role in advancing dementia prevention, screening, and care. Thus, using big data collection and machine learning analysis, we may be able to reduce the social impact of dementia while also mitigating the symptoms of this disease.

5. Patents

Clupea, Inc., Co-Wis, KIPO trademark application 40-2018-0112761, 14 August 2018; and Dementia testing management server using machine learning and dementia testing method using the same, KIPO patent 10-2019-0171912, 20 December 2019.

Author Contributions

S.I.S. wrote the report and contributed to building the project. H.S.J., J.Y.K., D.S.B., and B.H.K. also supported the writing of the report. J.P.P. developed the Co-Wis. H.G.K., D.H.C., and G.H.K. supervised the research and the project. All authors contributed substantially within their areas of expertise. All authors have read and agreed to the published version of the manuscript.

Funding

This project was supported by the Institute of Advanced Convergence Technology (IACT) grant funded by the Ministry of Science and ICT (Republic of Korea government) (MSICT) (1100-1141-305-029-19, Dementia Prevention and Diagnosis Content Development Project of ICT fund business).

Acknowledgments

We thank the subjects and staff of each hospital who contributed their efforts to the project. We would like to thank Editage (www.editage.co.kr) for English language editing.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jo, J.H.; Kim, B.S.; Chang, S.M. Major Depressive Disorder in Family Caregivers of Patients with Dementia. J. Korean Soc. Biol. Ther. Psychiatry 2019, 25, 95–100. [Google Scholar]
  2. Lee, D.W.; Seong, S.J. Korean national dementia plans: From 1st to 3rd. J. Korean Med. Assoc. 2018, 61, 298–303. [Google Scholar] [CrossRef]
  3. Sarant, J.S.; Harris, D.; Busby, P.; Maruff, P.; Schembri, A.; Lemke, U.; Launer, S. The Effect of Hearing Aid Use on Cognition in Older Adults: Can We Delay Decline or Even Improve Cognition Function? J. Clin. Med. 2020, 9, 254. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Lee, J.E.; Shin, D.W.; Han, K.; Kim, D.; Yoo, J.E.; Lee, J.; Kim, S.Y.; Son, K.Y.; Cho, B.; Kim, M.J. Change in Metabolic Syndrome Status and Risk of Dementia. J. Clin. Med. 2020, 9, 122. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Pasquier, F. Early Diagnosis of Dementia. J. Korean Acad. Fam. Med. 1999, 246, 6–15. [Google Scholar]
  6. Ahn, H.J.; Chin, J.H.; Park, A.; Lee, B.H.; Suh, M.K.; Seo, W.S.; Na, D.L. Seoul Neuropsychological Screening Battery-Dementia Version(SNSB-D): A Useful Tool for Assessment and Monitoring Cognitive Impairments in Dementia Patients. J. Korean Med. Sci. 2009, 25, 1071–1076. [Google Scholar] [CrossRef] [Green Version]
  7. Jun, J.Y.; Song, S.E.; Park, J.P. A Study of the Reliability and the Validity of Clinical Data Interchange Standards Consortium(CDISC) based Nonpharmacy Dementia Diagnosis Contents(Co-Wis). J. Korean Contents Assoc. 2019, 19, 638–649. [Google Scholar]
  8. Zygouris, S.; Tsolaki, M. Computerized Cognitive Testing for Older Adults: A review. Am. J. Alzheimer’s Dis. Other Dement. 2014, 13, 1–16. [Google Scholar] [CrossRef]
  9. Park, J.H. A Systematic Review of Computerized Cognitive Function Tests for the Screening of Mild Cognitive Impairment. Korean Soc. Occup. Ther. 2016, 24, 19–31. [Google Scholar] [CrossRef]
  10. Tierney, M.C.; Szalai, J.P.; Snow, W.G.; Fisher, R.H.; Nores, A.; Nadon, G.; Dunn, E. George-Hyslop, P.H. St. Prediction of probable Alzheimer’s disease in memory-impaired patients: A prospective longitudinal study. Am. Acad. Neurol. 1996, 46, 661–665. [Google Scholar]
  11. Werner, P.; Korczyn, A.D. Willingness to use computerized systems for the diagnosis of dementia testing a theoretical model in an Israeli sample. Alzheimer Dis. Assoc. Disord. 2012, 26, 171–178. [Google Scholar] [CrossRef] [PubMed]
  12. Kim, Y.H.; Shin, S.H.; Park, S.H.; Ko, M.H. Cognitive Assessment for Patient with Brain Injury by Computerized Neuropsychological. Korean Acad. Rehabil. Med. 2001, 25, 209–216. [Google Scholar]
  13. You, Y.S.; Jung, I.K.; Lee, J.H.; Lee, H.S. The Study of the Usefulness of Computerized Neuropsychological Test (STIM) in Traumatic Brain-injury Patients. Korean J. Clin. Psychol. 1998, 17, 133–147. [Google Scholar]
  14. Lim, S.C.; Seo, J.C.; Kim, K.U.; Seo, B.M.; Kim, S.W.; Lee, S.Y.; Jung, T.Y.; Han, S.W.; Lee, H. The Comparison of Acupuncture Sensation Index Among Three Different Acupuncture Devices. J. Korean Acupunct. Moxibustion Soc. 2004, 21, 209–219. [Google Scholar]
  15. Cheon, J.S. Neurocognitive Assessment of Geriatric Patients. J. Soc. Biol. Ther. Psychiatry 2000, 6, 126–139. [Google Scholar]
  16. Jung, Y.J.; Kang, S.W. Difference in Sleep, Fatigue, and Neurocognitive Function between Shift Nurses and Non-shift Nurses. Korean J. Adult Nurs. 2017, 29, 190–199. [Google Scholar] [CrossRef] [Green Version]
  17. Song, S.J.; Shim, H.J.; Park, C.H.; Lee, S.H.; Yoon, S.W. Analysis of Correlation between Cognitive Function and Speech Recognition in Noise. Korean J. Otorhinolaryngol. Head Neck Surg. 2010, 53, 215–220. [Google Scholar] [CrossRef]
  18. American Occupational Therapy Association. Occupational therapy practice framework: Domain & process(3rd edition). Am. J. Occup. Ther. 2014, 68, 1–48. [Google Scholar]
  19. Montes, R.M.; Lobete, L.D.; Pereira, J.; Schoemaker, M.M.; Riego, S.S.; Pousada, T. Identifying Children with Developmental Coordination Disorder via Parental Questionnaires. Spanish Reference Norms for the DCDDaily-Q-ES and Correlation with the DCDQ-ES. Int. J. Environ. Res. Public Health 2020, 17, 555. [Google Scholar] [CrossRef] [Green Version]
  20. Wattad, R.; Gabis, L.V.; Shefer, S.; Tresser, S.; Portnoy, S. Correlations between Performance in a Virtual Reality Game and the Movement Assessment Battery Diagnostic in Children with Developmental Coordination Disorder. Appl. Sci. 2020, 10, 833. [Google Scholar] [CrossRef] [Green Version]
  21. Eun, H.J.; Kwon, T.W.; Lee, S.M.; Kim, T.H.; Choi, M.R.; Cho, S.J. A Study on Reliability and Validity of the Korean Version of Impact of Event Scale-Revised. J. Korean Neuropsychiatry Assoc. 2005, 44, 303–310. [Google Scholar]
  22. Edwards, J.D.; Vance, D.E.; Wadley, V.G.; Cissell, G.M.; Roenker, D.L.; Ball, K.K. The Reliability and Validity of Useful Field of View Test. Rehabil. Welf. Eng. Assist. Technol. 2005, 11, 529–543. [Google Scholar]
  23. Ku, M.H.; Kim, H.J.; Kwon, E.J.; Kim, S.H.; Lee, H.S.; Ko, H.J.; Jo, S.M.; Kim, D.K. A Study on the Reliability and Validity of Seoul-Instrumental Activities of Daily Living(S-IADL). Korean Neuropsychiatry Assoc. 2004, 43, 189–199. [Google Scholar]
  24. Ha, K.S.; Kwon, J.S.; Lyoo, I.G.; Kong, S.W.; Lee, D.W. Development and Standardization Process, and Factor Analysis of the Computerized Cognitive Function Test System for Korea Adults. Korean Neuropsychiatry Assoc. 2002, 41, 551–562. [Google Scholar]
  25. Kang, H.C. A Guide on the Use of Factor Analysis in the Assessment of Construct Validity. J. Korean Acad. Nurs. 2013, 43, 587–594. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Hong, S.H. The Criteria for Selecting Appropriate Fit Indices in Structural Equation Modeling and Their Rationales. Korean J. Clin. Psychol. 2000, 19, 161–177. [Google Scholar]
  27. Choi, J.E.; Hwang, S.K. Predictive Validity of Pressure Ulcer Risk Assessment Scales among Patient in a Trauma Intensive Care Unit. J. Korean Crit. Care Nurs. 2019, 12, 26–38. [Google Scholar] [CrossRef]
  28. Di Nuovo, A.; Varrasi, S.; Lucas, A.; Conti, D.; McNamara, J.; Soranzo, A. Assessment of Cognitive skills via Human-robot Interaction and Cloud Computing. J. Bionic Eng. 2019, 16, 526–539. [Google Scholar] [CrossRef]
  29. Rossi, S.; Santangelo, G.; Staffa, M.; Varrasi, S.; Conti, D.; Di Nuovo, A. Psychometric Evaluation Supported by a Social Robot: Personality Factors and Technology Acceptance. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication RO-MAN2018, Nanjing, China, 27–31 August 2018. [Google Scholar]
  30. Kim, J.S.; Kim, K. Effect of Motor Imagery Training with Visual and Kinesthetic Imagery Training on Balance Ability in Post Stroke Hemiparesis. J. Korean Soc. Phys. Med. 2010, 5, 517–525. [Google Scholar]
  31. Lee, J.Y.; Kim, H.; Seo, Y.K.; Kang, H.W.; Kang, W.C.; Jung, I.C. A Research to Evaluate the Reliability and Validity of Pattern Identifications Tool for Cognitive Disorder: A Clinical Study Protocol. J. Orient. Neuropsychiatry 2018, 29, 255–266. [Google Scholar]
  32. De Gruijter, D.N.; Leo, J.T. Statistical Test Theory for the Behavioral Science; CRC Press: London, UK, 2007. [Google Scholar]
  33. Bartels, C.; Wegrzyn, M.; Wiedl, A.; Ackermann, V.; Ehrenreich, H. Practice effects in healthy adults: A longitudinal study on frequent repetitive cognitive testing. BMC Neurosci. 2010, 11, 118. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Han, S.S.; Lee, S.C. Nursing and Health Statistical Analysis; Hannarae Publishing: Seoul, Korea, 2018. [Google Scholar]
  35. Hooper, D.; Coughlan, J.; Mullen, M.R. Structural Equation Modeling: Guidelines for Determining Model Fit. Electron. J. Bus. Res. Methods 2008, 6, 53–60. [Google Scholar]
  36. Park, H.S.; Heo, S.Y. Mobile Screening Test System for Mild Cognitive Impairment: Concurrent Validity with the Montreal Cognitive Assessment and Inter-rater Reliability. Korean Soc. Cogn. Rehabil. 2017, 6, 25–42. [Google Scholar]
  37. Park, H.S.; Yang, N.Y.; Moon, J.H.; Yu, C.H.; Jeong, S.M. The Validity of Reliability of Computerized Comprehensive Neurocognitive Function Test in the Elderly. J. Rehabil. Welf. Eng. Assist. Technol. 2017, 11, 339–348. [Google Scholar]
  38. Lee, J.M.; Won, J.H.; Jang, M.Y. Validity and Reliability of Tablet PC-Based Cognitive Assessment Tools for Stroke Patients. Korean Soc. Occup. Ther. 2019, 27, 45–56. [Google Scholar] [CrossRef]
  39. Takahashi, J.; Kawai, H.; Suzuki, H.; Fujiwara, Y.; Watanabe, Y.; Hirano, H.; Kim, H.; Ihara, K.; Miki, A.; Obuchi, S. Development and validity of the Computer-Based Cognitive Assessment Tool for intervention in community-dwelling older individuals. Geriatr. Gerontol. Int. 2020, 1–5. [Google Scholar] [CrossRef]
Figure 1. IRB approval process for the clinical trial on computer-based dementia assessment (Co-Wis).
Figure 2. Clinical trial of the computer-based dementia assessment contents (Co-Wis).
Figure 3. Clinical trial process for the computer-based dementia assessment contents (Co-Wis).
Figure 4. Component block diagram of the dementia testing management server using machine learning.
Figure 5. Flow chart of the component block diagram of the dementia testing management server using machine learning.
Figure 6. Confirmatory factor analysis model of Co-Wis.
Figure 7. Receiver operating characteristic of computer-based dementia assessment contents (Co-Wis). (a) Positive ROC curve (trail making), (b) negative ROC curve (figure drawing, word memory, picture naming, verbal fluency, four fundamental arithmetic, digit coding, stroop, clock drawing, puzzle construction).
Table 1. Cognition area construction of Co-Wis.
Cognition Domain | Subtest | Subtest Item
Attention | Trail Making Test | A, B type
Memory | Figure Drawing | Immediate, Recall
Memory | Word Memory | Immediate, Delay, Recognition
Language | Picture Naming |
Language | Verbal Fluency | Animal, Market, Initial Sound
Visuospatial | Clock Drawing |
Visuospatial | Puzzle Construction |
Executive Function | Four Fundamental Arithmetic | Add, Subtract, Multiply, Divide
Executive Function | Digit Coding |
Executive Function | Stroop | Word, Color
Table 2. Demographic information of study subjects.
Demographics | Male, N (%) or Mean (SD) | Female, N (%) or Mean (SD) | Total, N (%) or Mean (SD)
Sex | 46 (40.71) | 67 (59.29) | 113 (100)
Age, range | 47–86 | 45–86 | 45–86
Age, 45–59 | 7 (15.22) | 7 (10.45) | 14 (12.39)
Age, 60–69 | 7 (15.22) | 21 (31.34) | 28 (24.78)
Age, 70–79 | 27 (58.70) | 27 (40.30) | 54 (47.79)
Age, over 80 | 5 (10.87) | 12 (17.91) | 17 (15.04)
Age, total | 70.11 (10.29) | 71.08 (8.63) | 70.68 (9.31)
Education, 0–10 | 17 (36.96) | 52 (77.61) | 69 (61.06)
Education, 11–20 | 29 (63.04) | 15 (22.39) | 44 (38.94)
Education, total | 11.50 (4.35) | 7.37 (4.12) | 9.05 (4.67)
K-MMSE, 10–18 | 6 (13.04) | 12 (17.91) | 18 (15.93)
K-MMSE, 19–26 | 40 (86.96) | 55 (82.09) | 95 (84.07)
K-MMSE, total | 23.22 (3.58) | 22.99 (3.63) | 23.08 (3.60)
CDR, 0.5 | 28 (60.87) | 37 (55.22) | 65 (57.52)
CDR, 1 | 13 (28.26) | 25 (37.31) | 38 (33.63)
CDR, 2 | 5 (10.87) | 5 (7.46) | 10 (8.85)
CDR, total | 0.80 (0.48) | 0.80 (0.42) | 0.80 (0.44)
1 Total sample (n) = 113.
Table 3. Concurrent validity of Co-Wis.
Co-Wis Subtest | Co-Wis Mean (SD) | SNSB-II Subtest | SNSB-II Mean (SD) | r
Word memory (immediate) | 11.39 (4.98) | SVLT-E (immediate) | 13.87 (5.64) | 0.607 **
Word memory (delay) | 1.69 (2.20) | SVLT-E (delay) | 2.99 (2.77) | 0.534 **
Word memory (recognition) | 5.88 (2.49) | SVLT-E (recognition) | 8.53 (3.02) | 0.409 **
Word memory (total) | 18.96 (8.39) | SVLT-E (total) | 25.39 (9.76) | 0.654 **
Figure drawing (immediate) | 14.81 (4.78) | RCFT (copy) | 26.14 (10.45) | 0.501 **
Figure drawing (recall) | 4.65 (5.46) | RCFT (delay) | 8.50 (8.03) | 0.592 **
Clock drawing | 2.24 (0.98) | Clock drawing | 2.32 (0.91) | 0.639 **
Puzzle construction | 1.84 (1.82) | Clock drawing | 2.32 (0.91) | 0.468 **
Four fundamental arithmetic (total) | 6.19 (3.30) | Calculation (total) | 8.74 (3.18) | 0.776 **
Digit coding | 7.91 (6.83) | Digit coding | 8.74 (3.18) | 0.776 **
Stroop (word) | 40.72 (17.15) | K-CWST-60 (word) | 51.87 (26.93) | 0.494 **
Stroop (color) | 18.56 (14.97) | K-CWST-60 (color) | 27.89 (16.78) | 0.655 **
Picture naming | 6.01 (1.99) | S-K-BNT | 10.11 (2.98) | 0.664 **
Verbal fluency (total) | 31.06 (16.99) | COWAT (total) | 38.47 (21.17) | 0.793 **
Trail making (A type) | 47.91 (56.66) | Trail making (A type) | 56.05 (55.88) | 0.422 **
Trail making (B type) | 135.04 (115.67) | Trail making (B type) | 145.04 (112.74) | 0.798 **
1 * p < 0.05, ** p < 0.01.
Table 4. Test–Retest reliability of Co-Wis.
Co-Wis Subtest | Co-Wis Visit 1, Mean (SD) | Co-Wis Visit 2, Mean (SD) | r
Word memory (immediately) | 11.39 (4.95) | 13.42 (6.22) | 0.772 **
Word memory (delay) | 1.69 (2.20) | 2.72 (2.79) | 0.626 **
Word memory (recognition) | 5.88 (2.49) | 6.46 (2.42) | 0.683 **
Word memory (total) | 18.96 (8.39) | 22.60 (10.47) | 0.816 **
Figure drawing (immediately) | 14.81 (4.78) | 15.48 (5.10) | 0.564 **
Figure drawing (recall) | 4.65 (5.46) | 8.82 (6.87) | 0.553 **
Clock drawing | 2.24 (0.98) | 2.38 (0.81) | 0.588 **
Puzzle construction | 1.84 (1.82) | 1.90 (1.86) | 0.529 **
Four fundamental arithmetic | 6.19 (3.30) | 6.48 (3.39) | 0.759 **
Digit coding | 7.91 (6.83) | 9.02 (7.64) | 0.872 **
Stroop (word) | 40.72 (17.15) | 41.49 (15.82) | 0.587 **
Stroop (color) | 18.56 (14.97) | 21.39 (15.57) | 0.765 **
Picture naming | 6.01 (1.99) | 6.47 (1.99) | 0.754 **
Verbal fluency (total) | 31.06 (16.99) | 32.75 (17.05) | 0.829 **
Trail making (A type) | 47.91 (56.66) | 39.53 (40.35) | 0.251 **
Trail making (B type) | 135.04 (115.67) | 109.49 (105.60) | 0.696 **
1 * p < 0.05, ** p < 0.01.
Table 5. Appropriateness selection of Co-Wis subtest and suitability of factor analysis model.
Measure | Result Value
Kaiser–Meyer–Olkin | 0.928
Bartlett | 0.000 **
1 Kaiser–Meyer–Olkin (correlation coefficient), Bartlett (probability value), ** p < 0.001.
Table 6. Commonality of Co-Wis exploratory factor analysis.
Co-Wis Subtest | Initial | Extraction
Word memory (immediately) | 1.000 | 0.636
Word memory (delay) | 1.000 | 0.635
Word memory (recognition) | 1.000 | 0.456
Figure drawing (immediately) | 1.000 | 0.374
Figure drawing (recall) | 1.000 | 0.502
Clock drawing | 1.000 | 0.574
Puzzle construction | 1.000 | 0.527
Four fundamental arithmetic | 1.000 | 0.610
Digit coding | 1.000 | 0.734
Stroop (word) | 1.000 | 0.641
Stroop (color) | 1.000 | 0.691
Picture naming | 1.000 | 0.576
Verbal fluency (category: animal) | 1.000 | 0.645
Verbal fluency (category: market) | 1.000 | 0.636
Verbal fluency (initial word: ㄱ) | 1.000 | 0.783
Verbal fluency (initial word: ㅅ) | 1.000 | 0.748
Verbal fluency (initial word: ㅇ) | 1.000 | 0.728
Trail making (A type) | 1.000 | 0.577
Trail making (B type) | 1.000 | 0.710
Table 7. Confirmatory factor analysis model validity of Co-Wis.
Co-Wis Model | χ2 | df | CMIN/df | TLI | CFI | RMSEA
5 factor | 264.696 | 142 | 1.864 | 0.876 | 0.897 | 0.088
Table 8. Receiver operating characteristic curve of Co-Wis.
Co-Wis Result Subtest | Area | Standard Error | p Value | 95% CI Lower | 95% CI Upper
Trail making A type | 0.800 | 0.043 | 0.001 | 0.715 | 0.885
Trail making B type | 0.824 | 0.043 | 0.001 | 0.739 | 0.909
Figure drawing immediately | 0.779 | 0.043 | 0.001 | 0.598 | 0.791
Figure drawing recall | 0.694 | 0.049 | 0.001 | 0.770 | 0.912
Word memory total | 0.860 | 0.035 | 0.001 | 0.791 | 0.929
Picture naming | 0.702 | 0.051 | 0.001 | 0.602 | 0.801
Verbal fluency total | 0.848 | 0.037 | 0.001 | 0.776 | 0.920
Four fundamental arithmetic | 0.800 | 0.044 | 0.001 | 0.714 | 0.886
Digit coding | 0.856 | 0.036 | 0.001 | 0.785 | 0.927
Stroop semantic | 0.784 | 0.044 | 0.001 | 0.698 | 0.871
Stroop color | 0.840 | 0.037 | 0.001 | 0.768 | 0.912
Clock drawing | 0.767 | 0.049 | 0.001 | 0.672 | 0.862
Puzzle construction | 0.753 | 0.047 | 0.001 | 0.661 | 0.845
Table 9. Sensitivity and specificity of Co-Wis.
Co-Wis Result Subtest | Sensitivity (%) | Specificity (%) | Cut-Off Score
Trail making A type | 75.0 | 72.3 | 31
Trail making B type | 77.1 | 78.5 | 89
Figure drawing immediately | 70.8 | 70.8 | 16
Figure drawing recall | 68.8 | 63.1 | 3
Word memory total | 81.3 | 83.1 | 18
Picture naming | 70.8 | 53.8 | 7
Verbal fluency total | 77.1 | 72.3 | 29
Four fundamental arithmetic | 79.2 | 72.3 | 7
Digit coding | 83.3 | 76.9 | 7
Stroop semantic | 70.8 | 72.3 | 43
Stroop color | 79.2 | 73.8 | 16
Clock drawing | 66.7 | 75.4 | 3
Puzzle construction | 83.3 | 58.5 | 3
