Article
Peer-Review Record

A Pilot Study to Improve Cognitive Performance and Pupil Responses in Mild Cognitive Impaired Patients Using Gaze-Controlled Gaming

by Maria Solé Puig 1, Patricia Bustos Valenzuela 2, August Romeo 2 and Hans Supèr 1,2,3,4,5,6,*
Reviewer 1:
Reviewer 2:
Submission received: 17 November 2023 / Revised: 12 March 2024 / Accepted: 18 April 2024 / Published: 24 April 2024
(This article belongs to the Section Visual Neuroscience)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The authors present an initial investigation on the possible efficacy of gaze-controlled video games as a treatment for mild cognitive impairment. The authors show altered pupillary response in an oddball discrimination task for target stimuli following exposure to the gaze-controlled game, with this change not present for control participants who played a mouse-controlled version of the game. Experimental participants showed improvements in performance on cognitive assessment tests (MoCA, CANTAB), within a limited set of domains.

 

Main Comments:

 

I have several concerns with the statistical processes applied in the manuscript, and associated data presentation:

 

1. Why did the authors not conduct an overall comparison test (e.g., 2-way ANOVA or, better, a non-parametric variant given the unequal sample sizes)? This would seem particularly useful, given the apparent differences in MoCA scores between experimental and control groups.

 

2. Familywise error rates:

 

When comparing pre and post training performance on MoCA and CANTAB tests, did the authors correct significance levels for comparisons of sub-scales? This would seem to be particularly relevant and appropriate here. This would likely reduce the overall number of significant effects.

 

Similarly, for CANTAB results, could the authors provide details of statistical test results for each domain in addition to the breakdown of the RVP tests? Given the number of tests here, corrections of significance levels are likely also appropriate.

 

3. Presentation of individual data:

 

It would be very useful to see plots of pre- vs post-training performance on MoCA and CANTAB tests that include individual data (or at least better show the distribution of per-participant changes). The authors state the individual number of experimental group participants who show test improvements but do not provide this information for the control group.

 

In addition to these points, it would be useful to see some additional consideration of the differences between the experimental and control groups. For example, the authors report a lack of correlation between number of training sessions and change in MoCA scores, but could there be a correlation between mean MoCA score and pre- vs post-training difference? This could be an important aspect of the data to examine, given the general differences in MoCA scores between experimental and control groups. Perhaps training is only effective for patients with MoCA scores falling within a particular range? If this is the case, then the observed effects may simply be due to practice. The authors would only be able to rule such an effect out if the control group was expanded to include participants with lower overall MoCA performance levels.

 

Finally, it would be good to see some additional presentation of data from the training task and from the oddball task. Could the authors show some analysis of behavioural performance on the oddball task, comparing experimental and control groups? This would also help resolve one small area of confusion in the current manuscript, where it is often unclear whether the authors are referring to the game-based task or to the oddball task. I would encourage them to clarify these points in their presentation of these results.

 

 

Away from methodological and data analysis issues, it would be good to see the authors expand further on the theoretical issues underpinning their game-based intervention. Why should gaze-controlled gaming affect attentional performance and what do the authors think is happening when participants play their game? How could these issues be examined in future? This would seem to be particularly important if the authors wish to make a clear argument for their particular intervention rather than other gaze-based tasks.

 

Comments on the Quality of English Language

The paper is generally very well-written, with some occasional areas where grammar requires minor corrections (e.g., missing definite and indefinite articles). These are very minor, and did not impact my ability to evaluate the manuscript.

Author Response

Main Comments:

 

  1. Why did the authors not conduct an overall comparison test (e.g., 2-way ANOVA or, better, a non-parametric variant given the unequal sample sizes)? This would seem particularly useful, given the apparent differences in MoCA scores between experimental and control groups.

 

Research studies show mixed results regarding the effects of video gaming on attention. We believe that training the oculomotor system is a relevant element of attentional behaviour. Therefore, our primary aim was to see whether gaze-controlled games improve attention processing in patients with MCI, and a control group was added to support this idea. For this reason, our initial statistical analysis did not include an overall multi-group comparison.

 

A Wilcoxon rank sum test was conducted and showed significant differences in MoCA scores between the experimental and control groups both before (p < 0.0002) and after (p = 0.0003) the gaming sessions.

 

We also included a Kruskal–Wallis test comparing the pupil responses of the experimental and control groups. The Kruskal–Wallis ANOVA was conducted to compare the effect of gaming on cognitive pupil responses in the gaze- and mouse-controlled conditions, and showed a significant effect of gaming (F(3,82) = 26.51, p = 7 × 10⁻⁶).

 

These results have been included in the revised manuscript.
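To illustrate how tests of this kind can be run, the following is a minimal sketch in Python using scipy.stats. The group values below are made up for illustration; they are not the study's data.

```python
# Illustrative sketch, NOT the study's data: a rank-sum comparison of
# two independent groups and a Kruskal-Wallis test across conditions.
from scipy import stats

# Hypothetical MoCA scores for experimental and control groups
experimental = [18, 20, 19, 17, 21, 16, 18, 19]
control = [25, 26, 24, 27, 25]

# Wilcoxon rank-sum test comparing the two independent groups
stat, p = stats.ranksums(experimental, control)
print(f"rank-sum statistic = {stat:.2f}, p = {p:.4f}")

# Kruskal-Wallis test across more than two conditions,
# e.g. pupil responses under several gaming conditions (made-up values)
cond_a = [0.12, 0.15, 0.11, 0.14]
cond_b = [0.22, 0.25, 0.21, 0.24]
cond_c = [0.18, 0.17, 0.19, 0.16]
h, p_kw = stats.kruskal(cond_a, cond_b, cond_c)
print(f"H = {h:.2f}, p = {p_kw:.4f}")
```

The rank-sum test makes no normality assumption, which is one reason it is often preferred over a t-test for small, unbalanced samples such as these.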

 

 

2. Familywise error rates:

When comparing pre and post training performance on MoCA and CANTAB tests, did the authors correct significance levels for comparisons of sub-scales? This would seem to be particularly relevant and appropriate here. This would likely reduce the overall number of significant effects.

We tested significance after Bonferroni correction for multiple comparisons. The results show that the differences between pre- and post-test outcomes remain significant for the MoCA scores in the visuospatial domain as well as for the RVPa scores of the CANTAB.

 

The results are included in the revised manuscript.
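A Bonferroni correction over a family of sub-scale comparisons can be sketched as follows; the p-values and sub-scale names here are hypothetical, not those reported in the study.

```python
# Minimal sketch of a Bonferroni correction over sub-scale comparisons.
# The p-values below are hypothetical, not taken from the study.
raw_p = {
    "MoCA visuospatial": 0.004,
    "MoCA memory": 0.03,
    "CANTAB RVP A'": 0.006,
    "CANTAB RTI": 0.20,
}

alpha = 0.05
m = len(raw_p)  # number of comparisons in the family

# A sub-scale stays significant only if p < alpha / m
significant = {name: p < alpha / m for name, p in raw_p.items()}
for name, sig in significant.items():
    print(f"{name}: {'significant' if sig else 'n.s.'} at alpha/m = {alpha/m:.4f}")
```

Dividing alpha by the number of comparisons controls the familywise error rate at alpha, at the cost of reduced power; this is why such corrections tend to reduce the number of significant effects.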

 

 

Similarly, for CANTAB results, could the authors provide details of statistical test results for each domain in addition to the breakdown of the RVP tests? Given the number of tests here, corrections of significance levels are likely also appropriate.

 

The statistical results for each domain of the different CANTAB tests are presented in the Annex. As most of them are not significant, we opted not to include these data in the main manuscript.

 

 

3. Presentation of individual data:

It would be very useful to see plots of pre- vs post-training performance on MoCA and CANTAB tests that include individual data (or at least better show the distribution of per-participant changes).

 

We have included plots displaying the individual pre- vs post-training results for the MoCA scores in the visuospatial domain and the CANTAB RVPA scores for both groups.

 

 

The authors state the individual number of experimental group participants who show test improvements but do not provide this information for the control group.

We provide these data in the revised manuscript.

 

 

In addition to these points, it would be useful to see some additional consideration of the differences between the experimental and control groups. For example, the authors report a lack of correlation between number of training sessions and change in MoCA scores, but could there be a correlation between mean MoCA score and pre- vs post-training difference? This could be an important aspect of the data to examine, given the general differences in MoCA scores between experimental and control groups. Perhaps training is only effective for patients with MoCA scores falling within a particular range? If this is the case, then the observed effects may simply be due to practice. The authors would only be able to rule such an effect out if the control group was expanded to include participants with lower overall MoCA performance levels.

 

We calculated the correlation between the MoCA scores and the pre- vs post-training difference for the patient group, but no significant correlation (R close to 0) was observed. We repeated the calculation using the MoCA scores in the visual domain and again found no correlation.

Indeed, there may be a ceiling effect. Controls have higher MoCA scores, making it more difficult for them to improve further, while the participants in the experimental group have more room to improve. In future studies, patients with different types of dementia who score low on the MoCA could be tested to evaluate practice effects. We mention this in the text of the revised manuscript.

 

Finally, it would be good to see some additional presentation of data from the training task and from the oddball task. Could the authors show some analysis of behavioural performance on the oddball task, comparing experimental and control groups? This would also help resolve one small area of confusion in the current manuscript, where it is often unclear whether the authors are referring to the game-based task or to the oddball task. I would encourage them to clarify these points in their presentation of these results.

 

Besides the gaming performance, we now include the behavioural performance on the oddball task. Furthermore, in the revised manuscript we better differentiate between the different tests/tasks we used to avoid confusion.

 

 

Away from methodological and data analysis issues, it would be good to see the authors expand further on the theoretical issues underpinning their game-based intervention. Why should gaze-controlled gaming affect attentional performance and what do the authors think is happening when participants play their game? How could these issues be examined in future? This would seem to be particularly important if the authors wish to make a clear argument for their particular intervention rather than other gaze-based tasks.

 

We have expanded on these issues in the discussion.

 

Reviewer 2 Report

Comments and Suggestions for Authors

Thank you for allowing me to review the manuscript “A Pilot Study to Improve Cognitive Performance and Pupil Responses in MCI Patients Using Gaze-Controlled Gaming”.

Generally, the text is well written, covers an important topic and is worth publishing in your journal. Prior to publication, however, I would like the authors to consider the following points: 

 

Lines 44-48: Please give a few references for your correct statement that MCI may progress to more severe forms of cognitive decline.

Lines 77-78: How would the results be if an intent-to-treat analysis (entering the drop-out’s pretest scores as posttest scores) were performed by the authors? 

Lines 78-79: Why are the two groups so uneven in numbers? How were the participants allocated to the experimental and the control group? Did the authors perform randomization? How many eligible patients were asked to participate? What were the inclusion and exclusion criteria?

Lines 111-121: Please give references for the MoCA and the CANTAB

Lines 158-163: I do not think that before-after comparisons using t-tests or Wilcoxon tests are the correct choice for assessing the effectiveness of the training. Please repeat the evaluations with analyses of variance with a repeated measures factor (time point: pretest vs. posttest) and a group factor (experimental group vs. control group). Or were the comparisons only performed for the experimental group? If so, why?

Lines 204-215: Why were only the CANTAB scores and not the MoCA scores analyzed?

Discussion: Maybe the authors should include a “shortcomings” section in which they reflect on the methodological weaknesses of their study and recommend replication of the results using studies of higher quality.

 

Author Response

Reply to reviewer 2

Lines 44-48: Please give a few references for your correct statement that MCI may progress to more severe forms of cognitive decline.

In the revised manuscript we have cited some papers that show the risk of MCI patients developing more severe forms of dementia

 

 

Lines 77-78: How would the results be if an intent-to-treat analysis (entering the drop-out’s pretest scores as posttest scores) were performed by the authors? 

Our protocol and consent forms do not allow us to analyze any data from patients who drop out of the study, so we cannot perform this analysis.

 

 

Lines 78-79: Why are the two groups so uneven in numbers? How were the participants allocated to the experimental and the control group? Did the authors perform randomization? How many eligible patients were asked to participate? What were the inclusion and exclusion criteria?

 

Research studies show mixed results regarding the effects of video gaming on attention. We believe that training the oculomotor system is a relevant element of attentional behaviour. Therefore, our primary aim was to see whether gaze-controlled games improve attention processing in patients with MCI. A control group was added to support this idea.

 

Therefore, we first recruited patients to assess cognitive improvement after gaze-controlled gaming. Recruitment took place in day-care centres, where the coordinators invited patients to participate; we do not know how many patients were asked. The daily training was demanding for the participating patients, and some dropped out and did not want to continue in the study. This information has been added to the revised paper.

 

The recruitment of the control group was done after testing the patients. As the project funding ended, we needed to finish the study, so the control group was not as large as the experimental group. In the revised manuscript, we mention the unbalanced number of participants in the two groups.

 

We also mention the inclusion and exclusion criteria.

 

 

Lines 111-121: Please give references for the MoCA and the CANTAB

 

We have now included some references.

 

 

Lines 158-163: I do not think that before-after comparisons using t-tests or Wilcoxon tests are the correct choice for assessing the effectiveness of the training. Please repeat the evaluations with analyses of variance with a repeated measures factor (time point: pretest vs. posttest) and a group factor (experimental group vs. control group). Or were the comparisons only performed for the experimental group? If so, why?

 

Our primary aim was to see whether gaze-controlled games improve attention processing in patients with MCI. The control group was added to support this idea. Therefore, our initial statistical analysis did not include an overall multi-group comparison.

 


Lines 204-215: Why were only the CANTAB scores and not the MoCA scores analyzed?

To study the potential therapeutic effects of the gaze-controlled game, we tested the patients with the CANTAB and MoCA tests and compared the pre- and post-scores on these tests. In the manuscript, we show the results of both tests (e.g., Figures 1 and 2).

A Wilcoxon rank sum test was conducted and showed significant differences in MoCA scores between the experimental and control groups both before (p < 0.0002) and after (p = 0.0003) the gaming sessions.

 

We also included a Kruskal–Wallis test comparing the pupil responses of the experimental and control groups. The Kruskal–Wallis ANOVA was conducted to compare the effect of gaming on cognitive pupil responses in the gaze- and mouse-controlled conditions and showed a significant effect of gaming (F(3,82) = 26.51, p = 7 × 10⁻⁶).

 

These results have been included in the revised manuscript.

 

Discussion: Maybe the authors should include a “shortcomings” section in which they reflect on the methodological weaknesses of their study and recommend replication of the results using studies of higher quality.

Thank you for the suggestion. We have now included a "Shortcomings" section.

 

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The revision addresses the main comments I had raised during my initial review. 

 

I appreciate the authors’ efforts to provide a stronger theoretical grounding linking gaze control to cognitive function. I would still like to see this link spelled out more clearly, however. Currently, the section on the link between the locus coeruleus, cognition and eye movements reads as somewhat disconnected from the rest of the discussion. This could be resolved if the authors were to add a simple sentence at the start of this section, along the lines of “One way in which training with gaze control tasks may impact cognitive functioning is through its impact on the locus coeruleus.” It would also be worth suggesting potential future work that could address this link (e.g., neuroimaging work).

 

One further minor comment is that I have noted the occasional use of ‘subject’ as opposed to ‘participant’. Could the authors please ensure that the use of terminology is appropriate and consistent in this regard.

Comments on the Quality of English Language

Some lingering typographical and grammatical issues remain, although these are relatively minor. I would ask the authors to check over their manuscript again to ensure that these are removed.

Author Response

I appreciate the authors’ efforts to provide a stronger theoretical grounding linking gaze control to cognitive function. I would still like to see this link spelled out more clearly, however. Currently, the section on the link between the locus coeruleus, cognition and eye movements reads as somewhat disconnected from the rest of the discussion. This could be resolved if the authors were to add a simple sentence at the start of this section, along the lines of “One way in which training with gaze control tasks may impact cognitive functioning is through its impact on the locus coeruleus.” It would also be worth suggesting potential future work that could address this link (e.g., neuroimaging work).

 

We have included your sentence and suggested future work that can address the link between training oculomotor control and cognitive functions.

 

 

One further minor comment is that I have noted the occasional use of ‘subject’ as opposed to ‘participant’. Could the authors please ensure that the use of terminology is appropriate and consistent in this regard.

 

We have replaced "subjects" with "participants".

 

 

Some lingering typographical and grammatical issues remain, although these are relatively minor. I would ask the authors to check over their manuscript again to ensure that these are removed.

We have checked the manuscript for grammatical issues and corrected the mistakes.

 

 

Reviewer 2 Report

Comments and Suggestions for Authors

+++ Lines 44-48: Please give a few references for your correct statement that MCI may progress to more severe forms of cognitive decline.

You have added one review article by Jongsiriyanyong & Limpawattana. However, this text does not contain information about the progression of MCI to dementia.

+++ Lines 77-78: How would the results be if an intent-to-treat analysis (entering the drop-out’s pretest scores as posttest scores) were performed by the authors? 

Okay, that’s a pity. Would you mind adding this information to your manuscript?

+++ Lines 78-79: Why are the two groups so uneven in numbers? How were the participants allocated to the experimental and the control group? Did the authors perform randomization? How many eligible patients were asked to participate? What were the inclusion and exclusion criteria?

Thank you. Would you please include the information regarding the control group in your “participants” section?

+++ Lines 158-163: I do not think that before-after comparisons using t-tests or Wilcoxon tests are the correct choice for assessing the effectiveness of the training. Please repeat the evaluations with analyses of variance with a repeated measures factor (time point: pretest vs. posttest) and a group factor (experimental group vs. control group). Or were the comparisons only performed for the experimental group? If so, why?

I see, but you compare and discuss possible improvements in both groups. I apologize for being so persistent, but I would ask you to carry out the analysis of variance I recommended.

Since you want to make statements about differential changes concerning the MCI and the control group, significant differences between the groups at the pre- and posttest are not helpful. By the way, why do both groups of MCI patients differ significantly in their MoCA scores? Does a control group that already differs significantly from the experimental group in the pretest make sense?

 

Author Response

Reply to reviewer 2

+++ Lines 44-48: Please give a few references for your correct statement that MCI may progress to more severe forms of cognitive decline. You have added one review article of Jongsiriyanyong & Limpawattana. However, this text does not contain information about the progression of MCI to dementia.

We have included the following references:

Petersen RC. Mild cognitive impairment. Continuum (Minneap Minn). 2016;22(2 Dementia):404-18. doi: 10.1212/CON.0000000000000313.

Flicker C, Ferris SH, Reisberg B. Mild cognitive impairment in the elderly: predictors of dementia. Neurology. 1991;41:1006-9.

Morris JC, Storandt M, Miller JP, McKeel DW, Price JL, Rubin EH, et al. Mild cognitive impairment represents early-stage Alzheimer disease. Arch Neurol. 2001;58:397-405. doi: 10.1001/archneur.58.3.397.

Boyle PA, Wilson RS, Aggarwal NT, Tang Y, Bennett DA. Mild cognitive impairment: risk of Alzheimer disease and rate of cognitive decline. Neurology. 2006;67:441-5.

Gauthier S, Reisberg B, Zaudig M, Petersen RC, Ritchie K, Broich K, et al. Mild cognitive impairment. Lancet. 2006;367:1262-70.

 

 

+++ Lines 77-78: How would the results be if an intent-to-treat analysis (entering the drop-out’s pretest scores as posttest scores) were performed by the authors? Okay, that’s a pity. Would you mind adding this information to your manuscript?

We have added this to the manuscript.

 

+++ Lines 78-79: Why are the two groups so uneven in numbers? How were the participants allocated to the experimental and the control group? Did the authors perform randomization? How many eligible patients were asked to participate? What were the inclusion and exclusion criteria? Thank you. Would you please include the information regarding the control group in your “participants” section?

We have included this information in the revised manuscript.

 

+++ Lines 158-163: I do not think that before-after comparisons using t-tests or Wilcoxon tests are the correct choice for assessing the effectiveness of the training. Please repeat the evaluations with analyses of variance with a repeated measures factor (time point: pretest vs. posttest) and a group factor (experimental group vs. control group). Or were the comparisons only performed for the experimental group? If so, why? I see, but you compare and discuss possible improvements in both groups. I apologize for being so persistent, but I would ask you to carry out the analysis of variance I recommended.

 

I understand your point. We have now applied an analysis of variance (ANOVA) with a repeated measures factor (pretest vs. posttest) and a group factor (experimental groups low, mid, high vs. control group).

The following results have been included in the revised manuscript.

Since the pre-test scores were higher in the control group than in the experimental group, a ceiling effect may have influenced the results. Therefore, we divided the MoCA scores in the visuospatial domain of the experimental group into ‘low’, ‘mid’ and ‘high’, depending on their score. Then, ANOVA was applied to the factor ‘group’ (‘low’, ‘mid’, ‘high’ and ‘control’), in addition to the factor ‘session’ (‘pre’, ‘post’). The marginal means for the ‘group’ factor were 1.44 ± 0.86, 1.67 ± 1.03, 3.67 ± 1.08 and 4.12 ± 0.72 for ‘low’, ‘mid’, ‘high’ and ‘control’, respectively. Significant differences are present (F(3,62) = 38.7, p = 3.2 × 10⁻¹⁴), and the Tukey–Kramer procedure indicates that the ‘low’–‘high’, ‘mid’–‘high’, ‘low’–‘control’ and ‘mid’–‘control’ differences are significant. The session factor also shows significant differences (F(1,62) = 5.23, p = 0.026). After restricting session comparisons to each of the three patient groups, we observe that the significant difference between ‘pre’ and ‘post’ sessions comes from the ‘low’ patient group (‘pre’: 1.00 ± 0.00, ‘post’: 1.89 ± 1.05, F(1,16) = 6.4, p = 0.022) and not from the ‘mid’ (‘pre’: 1.33 ± 0.50, ‘post’: 2.00 ± 1.32, F(1,16) = 2.0, p = 0.18) or ‘high’ (‘pre’: 3.44 ± 0.88, ‘post’: 3.89 ± 1.27, F(1,16) = 0.74, p = 0.40) patient groups.
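The banding-and-comparison procedure described above can be sketched in Python. All scores and cut-offs below are made up for illustration; they are not the study's data or its actual band boundaries.

```python
# Illustrative sketch, NOT the study's data: split participants into
# 'low'/'mid'/'high' bands by pre-test score, then compare pre vs post
# within each band with a one-way ANOVA (scipy.stats.f_oneway).
import numpy as np
from scipy import stats

# Hypothetical visuospatial MoCA scores for the experimental group
pre = np.array([1, 1, 1, 1, 2, 2, 3, 4, 4])
post = np.array([2, 1, 3, 2, 2, 3, 4, 4, 5])

def band(score):
    # Illustrative cut-offs, not the ones used in the study
    return "low" if score <= 1 else ("mid" if score <= 2 else "high")

bands = np.array([band(s) for s in pre])

# Pre vs post comparison restricted to each band
for b in ("low", "mid", "high"):
    mask = bands == b
    f, p = stats.f_oneway(pre[mask], post[mask])
    print(f"{b}: F = {f:.2f}, p = {p:.3f}")
```

A full mixed-design analysis (group × session, with session as a repeated measure) would additionally model the within-participant correlation, e.g. via statsmodels' AnovaRM; the sketch above only shows the per-band restriction step.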

 

By the way, why do both groups of MCI patients differ significantly in their MoCA scores? Does a control group that already differs significantly from the experimental group in the pretest make sense?

We subsequently included the control group, but regrettably, the MCI patients in this group presented with higher pre-test MoCA scores. This higher baseline may have introduced a ceiling effect, making it more challenging to observe improvements. Therefore, we divided the MoCA scores in the visuospatial domain of the experimental group into ‘low’, ‘mid’ and ‘high’, depending on their score, and compared the results to the scores of the control group (see previous point). We address this issue in the revised manuscript.

 

 

 
