Feasibility of a Mobile-Based System for Unsupervised Monitoring in Parkinson’s Disease

Mobile health (mHealth) has emerged as a potential solution for providing valuable ecological information about the severity and burden of Parkinson's disease (PD) symptoms in real-life conditions. Objective: The objective of our study was to explore the feasibility and usability of an mHealth system for continuous and objective real-life measurement of patients' health and functional mobility in unsupervised settings. Methods: Patients with a clinical diagnosis of PD who were able to walk unassisted and had an Android smartphone were included. Patients were asked to answer a daily survey, to perform three weekly active tests, and to attend a monthly in-person clinical assessment. Feasibility and usability were explored as the primary and secondary outcomes. An exploratory analysis was performed to investigate the correlation between data from the mKinetikos app and clinical assessments. Results: Seventeen participants (85%) completed the study. Sixteen participants (94.1%) showed a medium-to-high level of compliance with the mKinetikos system. A 6-point drop in the total score of the Post-Study System Usability Questionnaire was observed. Conclusions: Our results support the feasibility of the mKinetikos system for continuous and objective real-life measurement of a patient's health and functional mobility. The observed correlations of mKinetikos metrics with clinical data suggest that this mHealth solution is a promising tool to support clinical decisions.


Introduction
Parkinson's disease (PD) is a complex neurodegenerative disorder, with a multitude of fluctuating and heterogeneous motor and non-motor manifestations [1]. The currently available therapeutic interventions drastically improve the symptoms and quality of life of patients with early-stage PD [2]. However, after a few years of dopaminergic therapy, patients suffer from motor and non-motor complications, leading to a deterioration in their quality of life, an increased caregiver burden, and greater healthcare resource consumption [2].
Currently, the ability to provide optimized and personalized care is based on a clinical interview, diaries, and scales, performed during short in-person meetings, which take place at best every 3 or 6 months [3][4][5]. As Parkinson's disease is a complex disease with symptoms that vary across the day and with the medication cycle, a patient's condition during visits may not accurately reflect the degree and nature of their disability, limiting the clinician's ability both to capture an accurate picture of the patient's health and, consequently, to manage the disease [3]. Moreover, according to the published evidence, a patient's performance differs substantially between in-clinic (supervised) and real-life (unsupervised) assessments [6].
To overcome these limitations, several studies have been conducted to explore the role of digital measurement tools, including mobile and wearable technologies, and remote monitoring [4,5,7,8]. These allow for (1) capturing, with higher frequency, the full complexity and diversity of PD symptoms; (2) providing a more realistic portrayal of patients' functionality; and (3) enabling closer monitoring of the response to therapy [6,7,9,10].
In this context, mobile health (mHealth) technologies, which can collect and connect clinical and non-clinical information to feed existing health informatics systems (e.g., electronic medical records), seem a valuable solution to address the growing challenges of cost and provision of quality healthcare [3]. The key features of mobile phones (i.e., their pervasiveness, portability, ubiquity, and immediacy) make them a very attractive tool, not only for improving, in a cost-effective way, continuous monitoring, clinical decision-making, and communication between stakeholders, but also for empowering patients to self-manage their disease and for reaching a larger number of patients [3].
Although mHealth seems a promising new avenue for PD monitoring in a real-life environment, its technical and clinical feasibility in real-life conditions remains to be determined [11,12]. Moreover, this type of tool is frequently associated with adherence problems: it is necessary to find solutions that enable its long-term use [13,14].
In this study, we aim to test the feasibility (including patients' satisfaction, adherence, and compliance) and usability of an mHealth system (mKinetikos) for continuous and objective real-life measures of patients' health and functional mobility, in unsupervised settings.

Study Design
A prospective feasibility and usability clinical study was conducted.

Primary
To test the feasibility (including patients' satisfaction, adherence, and compliance) of an mHealth system for continuous and objective real-life measures of patients' health, in unsupervised settings.

Secondary
To test the usability of an mHealth system for continuous and objective real-life measures of patients' health, in unsupervised settings.

Exploratory Analysis
To correlate the mHealth system metrics, created based on the data collected during the study, with the recommended clinical tools for evaluating changes in patients' health and functional mobility, to understand their ability to accurately evaluate these constructs and monitor changes over time.

Participants
Study participants were recruited from the CNS (Campus Neurológico Sénior), a tertiary specialized movement disorders center in Portugal. Patients were eligible if they had a diagnosis of PD according to the International Parkinson and Movement Disorder Society criteria, were able to walk unassisted in the OFF phase, had a smartphone compatible with the mKinetikos app (i.e., Android mobile operating system version 7 or higher), were able to understand and comply with the study procedures, and agreed to participate. Exclusion criteria were the presence of a cardiovascular, pulmonary, or musculoskeletal condition that, according to the clinician's best judgment, could affect a patient's ability to participate in the study, and the inability to correctly respond to the assessment protocol, according to the investigators' best clinical judgment. The study was undertaken with the understanding and written consent of each participant, received approval from the CNS Ethics Committee (Reference 06-2019), and was in compliance with national legislation and the Declaration of Helsinki. Participants were required to agree to all aspects of the study and were able to leave the study at any time.

Study Supplies
The mKinetikos app (Kinetikos, Coimbra, Portugal) provides a platform that pairs a mobile application for continuous patient monitoring in unsupervised settings with a cloud-based dashboard that dynamically displays all collected information and allows clinicians to remotely interact with patients.

•
Mobile-based application (mHealth app): It allows for passive, long-term, unsupervised functional mobility quantification and outdoor position tracking; remote active testing (e.g., a 1-min balance test, finger tapping, and a walk test); and on-demand (one-time or regular) self-reported questionnaires to easily quantify and track a user's progress and treatment response over time. Additionally, users can manage their medication and use a direct communication channel with their healthcare team. Users receive a weekly report of their functional mobility and their performance in the active tests.

•
Online dashboard: It is a web-based dashboard where the clinical team has access to patients' information and can interact with them.

Assessment Protocol
Patients were assessed at baseline and for seven months (Table 1 and Appendix A). At baseline, the mKinetikos application was installed and synced with the Kinetikos online dashboard, and all patients were given an explanation of how technical support could be reached during the study. Moreover, after the installation, patients performed the active tests, in the ON medication state, in the presence of the team, so they could learn how to use the application and perform each of the active tests. They were also instructed to keep the phone on them as much as possible during the day and to perform the active tests in the ON medication state.
Daily, data on kinematics-based functional mobility features and displacements (quantity, not geographic location) were collected through kinematic algorithms that run continuously and unobtrusively in the background (mKinetikos passive test).
Participants were asked to answer a daily survey including three 5-point Likert scales about their perception of well-being, daily task limitations, and symptom severity. To minimize recall bias, the survey was sent every day at 7:00 p.m. and stayed available for five hours (with a reminder after three hours).
Additionally, every week, at noon of the same weekday, participants were asked to complete a set of three active mKinetikos tests: a 1-min quiet stance, a 3 × 10-m walk test, and a 1-min finger-tapping test with the dominant hand (the 1-min duration allows for significant data collection and does not seem to induce significant fatigue) [15,16]. The active tests were released each week at noon to avoid the morning, which can be difficult for certain patients, but they remained available until the following set was launched, allowing participants to complete the tests whenever they wanted and thereby enhancing compliance. Participants received a reminder every 2 days at 2:15 p.m. if they had not yet taken any of the tests for that week.
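The daily survey window described above can be expressed as a small scheduling rule. This is a minimal sketch: the 7:00 p.m. send time, five-hour window, and three-hour reminder come from the protocol, while the function name and the handling of exact window boundaries are assumptions.

```python
from datetime import datetime, timedelta

SURVEY_SEND_HOUR = 19     # survey sent daily at 7:00 p.m.
SURVEY_WINDOW_H = 5       # stays available for five hours
SURVEY_REMINDER_H = 3     # reminder sent after three hours

def survey_state(now: datetime) -> str:
    """Return 'closed', 'open', or 'reminder' for the daily survey window.

    Behavior exactly at the window edges is an assumption; the protocol
    only specifies the send time, window length, and reminder offset.
    """
    sent = now.replace(hour=SURVEY_SEND_HOUR, minute=0,
                       second=0, microsecond=0)
    if now < sent:
        return "closed"            # survey not yet sent today
    elapsed = now - sent
    if elapsed >= timedelta(hours=SURVEY_WINDOW_H):
        return "closed"            # five-hour window expired
    if elapsed >= timedelta(hours=SURVEY_REMINDER_H):
        return "reminder"          # past the three-hour reminder mark
    return "open"
```

A rule of this shape makes the recall-bias rationale concrete: answers can only be recorded in the evening of the day they refer to.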
Participants received a report at the end of each week that included a global score of their performance in the active tests, as well as a summary of their daily survey scores, allowing for a comparison of outcomes over time.
Due to the lockdown caused by the COVID-19 pandemic, in-person assessments from months 4 to 6 had to be canceled. They were replaced by phone assessments that included the instruments suitable for administration at a distance (questionnaires). At month 7, respecting all COVID-19 safety measures, we were able to perform the complete assessment protocol. (Table 1 lists the assessment battery: MDS-UPDRS [17], Hoehn and Yahr stage [17,18], PDQ-39, TUG [19][20][21], CGI-S [22], CGI-C [22], PGI-S [22], PGI-C [22], the mKinetikos active tests (1-min quiet stance, 10 m walk test, and finger-tapping test), the mKinetikos passive tests (kinematics-based gait and balance features and displacements), and the PSSUQ.)

Analysis of mKinetikos Data
All reported mKinetikos metrics correspond to averages over the two weeks before each clinical evaluation [23]. The mobile Patient Global Impression (mPGI) metric corresponds to the average score of the daily survey answers. From the tapping active tests, we extracted the total number of touches and the maximum distance between two consecutive touches over the duration of the test. From the balance active tests, we calculated the centroidal frequency from the spectral density computed on the XY projection of the acceleration signal [24]. Regarding displacements, walking minutes were determined by using the ActivityRecognitionClient API (https://developers.google.com/android/reference/com/google/android/gms/location/ActivityRecognitionClient (accessed on 20 July 2021)), and distance traveled was calculated from the smartphone GPS (including walking, car travel, etc.). Moreover, this automatic walking detection is used as a trigger to collect sensor data (accelerometer, gyroscope, and/or magnetometer, upon availability) at 100 Hz. The vertical component of the acceleration was calculated by using the available sensors: directly from the acceleration [25], using the SAAM algorithm [26] when the magnetometer is available, and using a Madgwick filter [27] when all three sensors are available. From these unsupervised walking events, we calculated stride lengths and stance durations [28]. Heel-strikes and toe-offs correspond to alternating minima in the vertical component of the acceleration [29]; before this evaluation, a low-pass Butterworth filter with a 2 Hz cutoff was applied. Stance duration corresponds to the time between a heel-strike and the subsequent toe-off. Stride length was calculated according to Equation (1) of Zhang et al., 2018 [30], an empirical equation that depends on the stride frequency and the variance of the vertical acceleration. The smartphone location is not expected to have a significant impact on the collected gait metrics [31].
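The passive gait pipeline above can be sketched as follows. This is only an illustration under simplifying assumptions: the strict alternating pairing of minima, the minimum peak spacing, and the spectral-centroid definition of centroidal frequency are choices of ours, not the production algorithms.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks, welch

FS = 100  # sensor sampling rate (Hz), as in the study

def low_pass(signal, cutoff=2.0, fs=FS, order=4):
    """Low-pass Butterworth filter with a 2 Hz cutoff, as described above."""
    b, a = butter(order, cutoff, btype="low", fs=fs)
    return filtfilt(b, a, signal)

def gait_events(vertical_acc, fs=FS):
    """Detect minima of the filtered vertical acceleration.

    Successive minima are paired as alternating heel-strikes and
    toe-offs; this strict alternation is a simplifying assumption.
    """
    filtered = low_pass(vertical_acc, fs=fs)
    minima, _ = find_peaks(-filtered, distance=int(0.25 * fs))
    return minima[0::2], minima[1::2]  # heel-strikes, toe-offs

def stance_durations(heel_strikes, toe_offs, fs=FS):
    """Stance = time between a heel-strike and the subsequent toe-off."""
    n = min(len(heel_strikes), len(toe_offs))
    return (toe_offs[:n] - heel_strikes[:n]) / fs

def centroidal_frequency(acc_xy, fs=FS):
    """Spectral centroid of the XY-projected acceleration (one common
    estimator; the study's exact computation may differ)."""
    f, pxx = welch(acc_xy, fs=fs, nperseg=min(len(acc_xy), 256))
    return np.sum(f * pxx) / np.sum(pxx)
```

Stride length would then be obtained from the stride frequency and vertical-acceleration variance via the empirical Equation (1) of Zhang et al. [30], which is not reproduced here.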

Statistical Analysis
Demographic, clinical, feasibility, and usability data were analyzed by using descriptive statistics. Continuous outcomes were presented as a mean ± standard deviation (SD).
The primary outcome was feasibility and included the following:
• Patients' satisfaction, measured through Item 1 of the Post-Study System Usability Questionnaire (PSSUQ) [32];
• Adherence, measured through the number of dropouts at the end of the study;
• Average compliance (throughout the whole study), measured as the average of the following: (1) the percentage of daily surveys answered, (2) the percentage of expected weekly active tests performed, and (3) the percentage of expected medication registrations (based on an individual's medication schedule). Patients were then divided into three groups of global compliance over the whole study: "low" (≤0.33), "medium" (0.33-0.66), and "high" (≥0.66). Moreover, the temporal evolution of this outcome was measured at months 1, 3, and 7 (after one month, immediately before the COVID-19 confinement, and at the end of the study).
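The global compliance measure can be sketched as follows. The averaging of the three percentages and the group cut-offs come from the description above; the tie-breaking at exactly 0.33 and 0.66 is an assumption of ours, since the published intervals overlap at those points.

```python
def global_compliance(surveys_done, surveys_expected,
                      tests_done, tests_expected,
                      meds_done, meds_expected):
    """Average of the three compliance fractions (daily surveys, weekly
    active tests, medication registrations), plus the compliance bucket.
    """
    fractions = (surveys_done / surveys_expected,
                 tests_done / tests_expected,
                 meds_done / meds_expected)
    avg = sum(fractions) / len(fractions)
    if avg <= 0.33:          # boundary handling assumed
        level = "low"
    elif avg < 0.66:
        level = "medium"
    else:
        level = "high"
    return avg, level
```

For example, a patient who answered 100 of 200 surveys, performed 40 of 50 active tests, and registered 30 of 60 medication intakes averages 0.6 and falls into the "medium" group.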
The secondary outcomes were usability, measured with PSSUQ and technical feasibility as measured by the percentage of data that were correctly streamed during the 7 months of the study.
As an exploratory analysis, we evaluated the correlation of a set of mKinetikos variables with the clinical outcomes. For this purpose, linear regression was used to find which variables best correlated with the results of (1) the Timed Up and Go (TUG) test, (2) the MDS-UPDRS finger-tapping score (item 3.4), (3) the MDS-UPDRS balance score (items 3.11 and 3.12), (4) an estimation of the MDS-UPDRS gait and balance score (calculated as the sum of items 3.10 (gait), 3.11 (freezing), and 3.12 (postural instability)), (5) the Patient Global Impression-Severity (PGI), and (6) the Clinician Global Impression-Severity (CGI). After this, the Spearman correlation coefficient was used to estimate the validity of each mKinetikos variable. These variables were based on the data from the active and passive tests and followed a general expression with fitting parameters A, B, and C. The fitting procedure was performed by using the Levenberg-Marquardt algorithm implemented in SciPy. All data processing and analyses were performed by using Python 3.7.6. Graphical representations were generated by using gnuplot 5.2. All variations throughout the study were evaluated by using the Friedman test (for repeated data) followed by Dunn's multiple comparisons post hoc test.
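The fitting-and-validation step can be sketched as follows. The study's general expression is not reproduced in this text, so the three-parameter form below is purely illustrative, as are the synthetic data; what matches the description above is the use of SciPy's Levenberg-Marquardt fit followed by a Spearman rank correlation.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import spearmanr

def model(x, A, B, C):
    # Illustrative three-parameter form; the study's actual general
    # expression (with fitting parameters A, B, C) is not reproduced here.
    return A * np.power(x, B) + C

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 40)                         # an mKinetikos variable
y = 2.0 * x ** 1.5 + 3.0 + rng.normal(0, 0.5, x.size)  # a clinical score

# method="lm" selects the Levenberg-Marquardt algorithm, as in the study.
params, _ = curve_fit(model, x, y, p0=[1.0, 1.0, 1.0], method="lm")

# Spearman correlation between fitted predictions and observed scores,
# used here as the validity estimate for the candidate variable.
rho, p = spearmanr(model(x, *params), y)
```

In the study, this procedure was repeated per candidate variable, and the variables with the strongest Spearman coefficients were retained as the composite mKinetikos scores.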

Cohort General Data
Twenty PD patients were assessed for eligibility and included in the study between November and December 2019. The mean age of participants was 60.8 ± 11.2 years, and 14 (70%) were men. The average disease duration was 7.7 ± 5.9 years, 40% (n = 8) had motor fluctuations, and dyskinesias and freezing were present in 45% (n = 9) of the participants. The mean Hoehn and Yahr stage was 2.0 ± 0.5. Patients' demographics and clinical characteristics at baseline are summarized in Table 2.

Satisfaction
The mean level of satisfaction at the end of the study was 1.5 ± 1.1 (with 1 representing the highest level of satisfaction).

Adherence
Of the 20 patients included, 17 (85%) participants completed the 7 months of the study. The reasons for dropping out were problems with the balance tests, unwillingness to continue using the app, not attending follow-up, and family problems (Table 3).

Compliance
Of the 17 participants who completed the study, only 5.9% (n = 1) showed a low level of global compliance, 47.1% (n = 8) a medium level, and 47.1% (n = 8) a high level of global compliance with the system measures. Table 4 characterizes the sample per level of compliance. When compared with participants with a medium level of compliance, those with a high level of compliance were younger, had a shorter disease duration, and had lower MDS-UPDRS scores. Interestingly, all patients showed moderate compliance values in the first month (Table 3 and Figure 1). Throughout the study, different behaviors were observed: the low-compliance patient gradually stopped using the app, while, of the other 16 patients, eight kept their compliance constant and eight gradually increased it.
The mean percentage of compliance with the active tests ranged from 80.9 to 94.1% during the first month, with the finger-tapping test showing the highest and the walk test the lowest level of compliance. During the study, there was a non-significant drop in compliance of 19.1% for the balance and finger-tapping active tests and of 11.8% for the walk test. All baseline assessments were performed on a Saturday, and 46.2% of the active tests were performed with a mean delay of 2.5 ± 1.0 days. According to Figure 2, the early hours of the morning and late afternoon seem to be the most convenient times.
Compliance with the daily survey at the end of the first month was 55.5%, registering a non-significant drop of 16% by the end of the study (Figure 3). Compliance with medication alerts was 58.0% at the end of the first month and remained approximately constant during the 7 months of the study (Table 3 and Figure 1). Chat usage was negligible, since most patients opted for direct contact with the clinical and technological teams (phone calls and chat applications).
Displacements in months 1 and 3 are similar, which indicates a similar use of the smartphone (for the 7th month, the values are much smaller due to the COVID-19 confinement).

Usability
Generally, the PSSUQ showed high satisfaction with the use of mKinetikos, with an average total score of 31.3 by the end of the study (the scale ranges from 0 to 133, where 0 corresponds to the best usability). Moreover, a 6-point drop in the total PSSUQ score was observed throughout the study. In the sub-scales of system usefulness, information quality, and interface quality, the drop ranged from one to two points (Table 3).
We analyzed the items with the worst (highest) scores at baseline and at the end of the trial to determine the areas in which participants had the greatest problems at baseline and those with the most potential for improvement; however, the PSSUQ is not validated for use in this format. The items with the worst mean scores at month 1 (2.6 and 3.7, respectively, on a 7-point Likert scale, where 1 represents "Strongly agree") were related to users' ability to recover from mistakes when using the system (Item 8) and to the clarity of the information provided (Item 9). At the end of the study, all items maintained or improved their scores except for users' satisfaction with the system interface (Item 10), their ability to recover from mistakes when using the system (Item 8), and their ability to efficiently complete tasks (Item 7), which worsened by 0.2 points.

Technical Feasibility
During the 7 months of the study, 99.9% of the expected data were correctly streamed.

mKinetikos Scores
We explored several mKinetikos variables and a combination of variables to predict clinical outcomes. From this approach, we obtained six moderate to strong correlations between mKinetikos metrics and clinical data (Table 5 and Figure 4).
The mTUG showed a strong correlation with the TUG test (r = 0.69, p ≤ 0.001). The other mKinetikos scores (mTapping, mMDS_FM, and mMDS_Balance) showed a moderate correlation with the corresponding clinical outcomes, with r values ranging from 0.51 to 0.64, p ≤ 0.01 (Table 5, Figure 2, and Appendix B).

Discussion
The present study explores the feasibility and usability of mKinetikos, an mHealth system for continuous, unsupervised, and objective real-life measurement of PD patients' clinical status. Patients were able to use the mKinetikos app for 7 months, with a high level of satisfaction and compliance. The system also proved to be technically feasible, with 99.9% of the data correctly streamed, and showed high usability.
In the United States and Europe, more than 40% of older individuals with PD do not consult a PD specialist or neurologist. According to published research, these people are at a higher risk of falling, being admitted to a skilled care facility, or dying, demanding the development of new, valid, and feasible mHealth monitoring solutions [33,34]. mKinetikos responds to this need by collecting and merging unsupervised information, at a distance, from different sources (i.e., patient surveys, active tests, and passive data).

Feasibility
Eighty-five percent (n = 17) of the 20 participants enrolled completed the study, maintaining an approximately constant level of satisfaction throughout. Taking into account the 7-month follow-up, our dropout rate (15%, n = 3) is lower than that of previous similar studies, which reported 24-39% dropouts over 6 months of follow-up [5,7,23,35]. Some factors may have contributed to this difference. First, the number of surveys and active tests requested: in our study, participants were asked to answer one daily survey (three questions) and weekly active tests, whereas in other studies participants were requested to perform active tests three to four times a day for a month [23]. Moreover, although it is important to have different sources of information, we believe that an mHealth solution's primary source of data should be passively collected, not only to improve patient adherence but also to provide more ecological knowledge about users' average performance rather than their best. This poses a challenge in terms of data collection and compliance, as continuous data collection is needed. Second, the number of sensors required by the mKinetikos system: according to a review by the Movement Disorders Society Task Force on Technology, the number of sensors needed to accurately monitor PD symptoms can negatively affect patients' adherence [7]. The fact that mKinetikos only requires the normal use of a mobile phone may have contributed to the increased adherence. Third, the availability of a weekly report on the patient's performance, which we believe acted as a motivational factor, contributing to maintaining patient adherence over time [36]. In a similar study, where participants used the system for 11.6 days, the authors highlight the importance of motivational aspects (e.g., community interactions and personalized feedback) for dealing with the longitudinal decline in the use of such technologies [36].
The two main reasons for dropout in our study were difficulties managing system errors and unwillingness to continue using the system. This is in line with a previous study suggesting that the reasons for dropout are likely multifactorial, including study fatigue, loss of novelty, and device-specific and technical issues [5].
The compliance level was approximately constant throughout the study, and higher than levels previously reported in the literature. According to a review on this topic, 26% of smartphone apps are used only once and 74% of apps are not used more than 10 times [7]. The analysis of the demographic and clinical characteristics per level of compliance revealed that participants with a high level of compliance are younger, with shorter disease duration and a lower level of disability. These results are supported by a previous study that states that patients who were more willing to use mHealth systems are younger and better educated [35]. Younger patients are more familiar with technology and are more likely to recognize the benefits of mHealth devices [35]. However, PD is a disease associated with aging. Even a patient who is comfortable with technology may be unable to interpret the information offered, visualize the interface, and/or have the dexterity to manage the app due to difficulties associated with age and more severe stages of disease (cognitive, visual, and motor dexterity impairments). When developing this type of solution, developers should keep this in mind [7,37].
Interestingly, after 7 months, we still observed high compliance values for the active tests. The analysis of compliance per active test showed that users complied more with tests like finger-tapping and the 1-min balance test than with the walk test. The mKinetikos walk test required participants to walk 10 m, which, in addition to requiring greater effort, may not be easy for some patients to perform at home. This may explain the differences between compliance rates in the active tests.
There was also a difference in the compliance levels between the daily survey (55.5% ± 34.0) and the weekly active tests (88.2% ± 28.3). Two factors may have contributed to this: (1) the fatigue associated with answering a daily survey for 7 months; and (2) the daily survey was only available for five hours, while the active tests were available until the release of the next one (i.e., for one week). Although the notification for the active tests was sent on the same day of the week and participants were asked to perform the test on the day it became available, 46.2% of the tests were performed with a mean delay of 2.5 ± 1.0 days (Figure 1). This delay did not have a direct impact on our results, since we used the time at which the test was performed as a reference. In a study evaluating the feasibility of digital motor diaries, the average response delay was >4 h [8]. The time of day when the tests are made available may interfere with compliance: according to our results, the early hours of the morning and late afternoon seem to be the most convenient times (Figure 1).
Regarding the chat usage, most patients contacted the clinical team during the trial. However, as all patients were already used to being in close contact with clinicians, the mKinetikos chat was used less. We think that, in a broader study, where the patients do not have frequent contact with the clinical team, the chat functionality will be much more important. Having more data and easier communication with the patient does not guarantee a higher level of care. Healthcare professionals should learn how to incorporate these data into their daily routines to be a useful resource rather than a burden. Future research should investigate this topic.
Compliance with medication notifications increased over time. According to a previous study, patients with better drug compliance were more willing to use apps [35]. We hypothesize that the medication notifications, besides increasing drug compliance, increase compliance with the app. Even if patients fail to perform the active tests, they start relying on the medication reminders, continuing to use the system and allowing passive data to continue to be collected. Furthermore, they will enable clinicians to better interpret the results of the daily survey, as they will help them to determine if patients were in an "ON" or "OFF" state of medication at the time they filled out the questionnaires.

Usability
According to the PSSUQ results, the system has a good level of usability. Moreover, the observed drop in global usability is within the range of the SD, which indicates that this value was approximately constant throughout the study. However, the system interface (Item 10), the clarity of the information provided (Item 9), the resources to recover from mistakes in using the system (Item 8), and the ability to efficiently complete tasks (Item 7) can be improved. In a previous study [36], in which 61% of participants had difficulties recovering from errors, the authors suggested better explaining the functioning of the areas more prone to error and providing user-friendly tools or additional information so that users can recover from errors by themselves [36]. Since most PD patients are older adults, who are often unfamiliar with technological solutions, such guidance should be very clear and easy to understand. To avoid usability issues, future technology solutions should be designed with input from patients, caregivers, and healthcare professionals.

mKinetikos Scores Validity
The mKinetikos variables that best correlated with the clinical outcomes resulted in five composite mKinetikos scores: mTapping, mMDS_Gait&Balance, mMDS_Balance, mTUG, and mPGI. We observed a strong correlation between mTUG and the TUG test (r = 0.69, p ≤ 0.001) and a moderate correlation between the other mKinetikos scores and the respective clinical outcomes (r values ranging from 0.51 to 0.64, p ≤ 0.01). It is noteworthy that the strong correlation of mTUG with the TUG test was achieved without any active test, as it depends only on gait parameters that are passively obtained (Table 5). Despite the huge advances in technology and the multitude of devices currently available, their widespread use continues to be limited by the need for valid and meaningful algorithms. For this reason, we included this exploratory analysis in our study.
According to the literature, the design of mHealth systems should be driven by global and functional outcomes and be meaningful to patients, rather than disease-specific [3,7]. The focus of mKinetikos on functional mobility tries to respond to this need. The previous literature showed that patients valued functional mobility for being a meaningful outcome that is easy to describe. Health professionals find it a useful outcome to facilitate patients' abilities to describe their disability and to help clinicians adopt a more patient-centered approach and to provide individualized care [38]. The TUG test is the gold-standard outcome measure to evaluate functional mobility in PD [39].

Limitations of the Study
This study presents two main limitations: the small sample size and the interruption of monthly in-person assessment between months 4 and 6, due to the COVID-19 pandemic. We believe that the more difficult access to healthcare services may have contributed to increasing compliance with the system and may influence the results in favor of patients' satisfaction. However, we think that the information generated here is useful for clinicians and for guiding future studies on mHealth systems.

Conclusions
Overall, mKinetikos seems to be a feasible and usable mHealth system for continuous and remote monitoring of PD patients' functional mobility and global health status. Throughout the 7-month study, users maintained a high level of satisfaction and compliance. The findings suggest that it may be a useful tool for clinicians to better understand patients' everyday experiences and provide more patient-centered care.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent was obtained from the patients to publish this paper.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author.

•
The Timed Up and Go (TUG) test [19][20][21] Construct assessed: Functional mobility. Test description: The TUG test is a clinical test in which the participant is asked to rise from a chair, walk three meters at a comfortable and safe pace, turn, and return to the chair.
• Movement Disorder Society-Unified Parkinson's Disease Rating Scale (MDS-UPDRS) [17,40] Construct assessed: Motor and non-motor symptoms of PD patients. Test description: The MDS-UPDRS is a four-part rating scale: Part I (non-motor experiences of daily living), Part II (motor experiences of daily living), Part III (motor examination), and Part IV (motor complications). Part I contains two sections: IA and IB. IA covers a variety of behaviors that the investigator assesses with all relevant information from patients and carers, and IB is completed by the patient, with or without the assistance of the caregiver, but independently of the investigator. The rater completes Part III, which contains instructions for the rater to provide or demonstrate to the patient. Part IV contains instructions for the rater as well as information for the patient to read; the rater completes this section, which combines patient-derived data with the rater's clinical observations and judgments.
• Parkinson's Disease Questionnaire (PDQ-39) [41] Construct assessed: Health-related quality of life.
• 3 × 10-m walk test Test description: The patient was instructed to mark a 10 m straight path (i.e., without changes of direction) and walk it three times.
• Finger-tapping test (1 min) Test description: The patient was asked to tap on the screen for 1 min.
• One-minute quiet stance