Peer-Review Record

Video Motion Analysis as a Quantitative Evaluation Tool for Essential Tremor during Magnetic Resonance-Guided Focused Ultrasound Thalamotomy

Neurol. Int. 2023, 15(4), 1411-1422; https://doi.org/10.3390/neurolint15040091
by Mayumi Kaburagi 1,2,†, Futaba Maki 1,2,†, Sakae Hino 1,2, Masayuki Nakano 3, Toshio Yamaguchi 4,5, Masahito Takasaki 6, Hirokazu Iwamuro 7, Ken Iijima 8, Jinichi Sasanuma 3, Kazuo Watanabe 3, Yasuhiro Hasegawa 1,2 and Yoshihisa Yamano 1,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 21 August 2023 / Revised: 17 November 2023 / Accepted: 20 November 2023 / Published: 29 November 2023

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This is a well-written and interesting paper. The technology reported herein for quantifying upper extremity tremor is likely to be of interest to both clinical and research communities. For the most part, the methods are sound, and the conclusions are supported by the data. I have just a few comments regarding the statistical approach, should the authors have an opportunity to revise.

 

1. It is not clear how the motion parameters were calibrated for the postural tremor task. It is stated beginning on line 169: "After video recording, three anatomical regions of interest (finger, wrist, and elbow) were manually selected from the video, and their XY coordinate values were automatically tracked for each frame (25 fps). Motion parameters such as average speed (mm/s), average acceleration (mm/s2), average frequency (Hz), and average amplitude (mm) were calculated." Please expand on the data reduction procedure and calibration for extracting motion parameters from video images. Specifically, how were the dynamic parameters calculated from a video segment of the finger, etc.? Calibration and data reduction are less of an issue for the line drawing task, which was apparently handled by the Movalyzer software.

2. Statistical analyses included inter-rater reliability, where three raters independently performed video analyses. I assume these assessments were based on previously completed videographic recordings from which the four or five motion parameters were automatically calculated by software. If so, what were the potential sources of human error or judgment that could contribute anything other than perfect reliability? I have trouble understanding the role of a human evaluator in the calculation of motion parameters supporting the claim of high inter-rater reliability (Table 2). Inter-rater reliability would be a critical statistic if multiple raters performed the assessments on the same patient, but this appears not to be the case. Please clarify.

3. Regarding the correlation coefficients reported in Table 3, it is not clear that the motion parameters themselves are statistically independent. It is highly probable that velocity and acceleration are strongly correlated, while frequency and amplitude could be inversely related. These inter-parameter relationships should also be reported. If these motion parameters are found not to be truly independent, the statement on page 6, line 220 is incorrect. The authors should address this concern, particularly with regard to the R2-R2 change scores entered into the regression models (and reported in Table 6). It is possible that, while collinearity may exist across parameters obtained for a single assessment, the change scores may be independent.

4. I assume that the asterisk next to the term "Combined model" in Tables 6 and 7 indicates that these are stepwise regression models. A footnote would be less confusing than the asterisk in the text (e.g., line 338, page 11 and line 350, page 12).
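The collinearity check requested in point 3 is straightforward once the four motion parameters are tabulated per patient: pairwise correlations plus variance inflation factors. A minimal sketch in plain NumPy, using synthetic illustrative values (not the study's data; the kinematic relations used to generate them are assumptions for demonstration only):

```python
import numpy as np

# Illustrative per-patient motion parameters (NOT the study's data).
# Speed and acceleration are generated to depend on frequency*amplitude,
# mimicking the collinearity the reviewer suspects.
rng = np.random.default_rng(0)
n = 43
freq = rng.normal(5.0, 1.0, n)                          # average frequency (Hz)
amp = rng.lognormal(1.0, 0.5, n)                        # average amplitude (mm)
speed = 2 * np.pi * freq * amp + rng.normal(0, 2, n)    # average speed (mm/s)
accel = (2 * np.pi * freq) ** 2 * amp + rng.normal(0, 20, n)  # (mm/s2)

params = np.column_stack([speed, accel, freq, amp])
names = ["speed", "accel", "freq", "amp"]

# Pairwise Pearson correlations between motion parameters.
r = np.corrcoef(params, rowvar=False)
for i in range(4):
    for j in range(i + 1, 4):
        print(f"r({names[i]}, {names[j]}) = {r[i, j]:+.2f}")

# Variance inflation factors: VIF_k = 1 / (1 - R^2_k), where R^2_k comes
# from regressing parameter k on the other three.
for k in range(4):
    y = params[:, k]
    X = np.column_stack([np.ones(n), np.delete(params, k, axis=1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - resid.var() / y.var()
    print(f"VIF({names[k]}) = {1 / (1 - r2):.1f}")
```

A VIF well above 5-10 for speed or acceleration would support the reviewer's concern that the parameters are not independent predictors.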

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The methods are not described adequately, so it is not possible for me to evaluate the significance of this study.  The following questions need to be addressed.

1. What is the lowest amplitude of tremor that can be detected videographically? Tremor must be distinguished from other adventitious movement. This can be done with spectral analysis. Of the 43 patients, how many had a statistically significant spectral peak at the tremor frequency for postural tremor and line drawing tremor?

2. The wing posture is not part of the Fahn-Tolosa-Marin (FTM) scale. It is odd that this task was used to assess the ceiling effect in the assessment of postural tremor by the FTM scale. The limitations of the FTM are well documented. The wing posture is used in the TETRAS scale, which does not have a significant ceiling effect. The results of videography should be correlated with TETRAS ratings.

3. I have no idea how average speed (mm/s), average acceleration (mm/s2), average frequency (Hz), and average amplitude (mm) were calculated. The clinical ratings pertain to tremor, not other movements. The relevant measures of tremor are acceleration, velocity, displacement, and frequency, computed from a spectral peak.

4. The test-retest ICCs are too high to believe. There is considerable within-subject test-retest variability in tremor amplitude. I suspect the authors simply analyzed the same video twice, 1-3 days apart. They need to assess two separate exams, as in the paper by Ondo et al. (ref. 19).

5. The authors repeatedly state that "even with lightweight and small devices, their attachment can influence the tremor severity". To my knowledge, there is no evidence for this statement. In fact, the authors' videographic method should be compared with an inertial measurement unit on the hand (wing posture test) and with a digitizing tablet (line drawing task).

6. Which line or lines of the FTM were used in the analysis? Again, I have no idea how the authors computed cumulative length, velocity, acceleration, and amplitude.

7. What is the rationale for comparing single videographic measures with the CRST subscores?

The authors cite the Elble-Ellenbogen paper comparing FTM (CRST) spiral ratings with quantitative tremor measurements using a digitizing tablet.  The authors need to use a similar approach to assess the value of videography vs TETRAS wing posture ratings and vs CRST line drawing ratings.  Test-retest reliability and MDC should be based on two separate measurements.
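The spectral test the reviewer calls for in points 1 and 3 amounts to computing a power spectrum of the tracked displacement trace and checking for a dominant peak in the tremor band. A minimal NumPy sketch with a synthetic 25 fps trace; the signal and the ten-times-median significance threshold are illustrative assumptions, not the study's method:

```python
import numpy as np

fs = 25.0                     # video frame rate (Hz); Nyquist limit is 12.5 Hz
t = np.arange(0, 20, 1 / fs)  # 20 s recording, 500 samples

# Synthetic vertical-displacement trace: 5 Hz tremor + slow drift + noise.
rng = np.random.default_rng(1)
x = 4.0 * np.sin(2 * np.pi * 5.0 * t) + 0.5 * t + rng.normal(0, 1.0, t.size)

# Remove the linear drift, then take a simple periodogram.
xd = x - np.polyval(np.polyfit(t, x, 1), t)
pxx = np.abs(np.fft.rfft(xd)) ** 2 / xd.size
f = np.fft.rfftfreq(xd.size, d=1 / fs)

# Crude significance criterion (an assumption for illustration): the peak
# in the 3-12 Hz tremor band must stand far above the median broadband power.
band = (f >= 3) & (f <= 12)
peak_freq = f[band][np.argmax(pxx[band])]
is_tremor = pxx[band].max() > 10 * np.median(pxx)
print(f"spectral peak at {peak_freq:.1f} Hz, significant: {is_tremor}")
```

Tremor amplitude, velocity, and acceleration can then be derived from the height and frequency of that spectral peak, which is the approach the reviewer contrasts with unspecified averaging over the raw trace.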

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

No remaining concerns

Author Response

Thank you for reviewing.

Reviewer 2 Report

Comments and Suggestions for Authors

This paper is much improved, but there are still issues that need to be addressed.

First, I apologize for confusion on my part. I thought that the video measurements were done intraoperatively. In fact, they were simply done before and after surgery. I somehow got the impression that the authors were testing the validity of their device in the scanner during focused ultrasound thalamotomy. Statements such as “However, these devices cannot be brought into the MRI room, limiting their use in monitoring treatment effectiveness during MRgFUS thalamotomy” caused my confusion. This particular statement is irrelevant in the context of this study and should be deleted in the introduction and also in the discussion. A timeline of the procedures performed in this study (experimental protocol) would be very helpful.

Similarly, the discussion beginning on line 358 seems irrelevant. Again, the authors suggest that their videographic method is of value in the OR and in other settings. This has not been demonstrated. Please restrict all comments to your experimental hypothesis and study results.

The videos in the supplement appear to be normal volunteers mimicking tremor. The videos require legends.

The new figure 1C comparing TETRAS to videography clearly illustrates the advantage of TETRAS over the Fahn-Tolosa-Marin scale. The authors should use these data to compute the Weber-Fechner relationship between TETRAS ratings and tremor amplitude.

In the new Table 3, the authors report the means of the videographic measures. Are these arithmetic means or geometric means? I suspect these data are not normally distributed.

The authors state “While TETRAS demonstrates a clearer and more accurate quantification of essential tremor severity compared to CRST (Figure 1A), there was variability in the amplitude values from video analysis, even for patients with the same TETRAS score”. This is not surprising and has been reported many times before. Videographic and other transducer measures of tremor will always be more sensitive to within-subject variability. However, this within-subject variability may be purely random and have no clinical significance relative to a treatment effect. I know of no study that has shown a device to have greater sensitivity to a treatment effect, compared to TETRAS. In table 3, the authors should include the minimum detectable change of TETRAS. They should also include an estimate of the TETRAS minimum detectable change using the Weber-Fechner relationship, as performed in the paper by Elble and Ellenbogen. This would give us a better idea of the sensitivity to treatment effect of clinical ratings (i.e., TETRAS) versus videography. The authors are probably aware that motion transducers have not been more reliable than clinical ratings in published clinical trials. Videography may ultimately be the exception to this rule.
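The two computations requested here are both simple: the Weber-Fechner relationship is a log-linear fit of clinical rating against measured amplitude, and the minimum detectable change (MDC) follows from test-retest reliability. A minimal sketch on synthetic, illustrative data; the `mdc95` helper, the generated ratings, and the ICC value of 0.90 are assumptions for demonstration, not the study's results:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 43

# Illustrative data: tremor amplitude (mm) and 0-4 TETRAS-style ratings
# generated from a Weber-Fechner law: rating ~ a + b * log10(amplitude).
amp = rng.lognormal(mean=1.0, sigma=0.8, size=n)
rating = np.clip(2.0 + 1.5 * np.log10(amp) + rng.normal(0, 0.3, n), 0, 4)

# Fit rating = a + b * log10(amplitude) by least squares.
b, a = np.polyfit(np.log10(amp), rating, 1)
print(f"rating ~ {a:.2f} + {b:.2f} * log10(amplitude)")

# Minimum detectable change from test-retest reliability:
# SEM = SD * sqrt(1 - ICC); MDC95 = 1.96 * sqrt(2) * SEM.
def mdc95(scores: np.ndarray, icc: float) -> float:
    sem = scores.std(ddof=1) * np.sqrt(1.0 - icc)
    return 1.96 * np.sqrt(2.0) * sem

print(f"MDC95 of rating at ICC = 0.90: {mdc95(rating, 0.90):.2f}")
```

Inverting the fitted log-linear relationship converts a rating MDC into an equivalent amplitude change, which is what would allow the head-to-head comparison of rating-scale versus videographic sensitivity that the reviewer describes.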

Finally, motion of the hand in the wing posture is not two-dimensional, especially in patients with severe tremor.  The authors must have calibrated their videography in some way to measure amplitude. Motion toward or away from the camera would have an effect on the perceived amplitude in the vertical direction. Please explain how the videography was calibrated for amplitude.

Comments on the Quality of English Language

N/A

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
