Implementation of Online Behavior Modification Techniques in the Management of Chronic Musculoskeletal Pain: A Systematic Review and Meta-Analysis

Purpose: The main aim of this systematic review and meta-analysis (MA) was to assess the effectiveness of online behavior modification techniques (e-BMT) in the management of chronic musculoskeletal pain. Methods: We searched Medline (PubMed), the Cumulative Index to Nursing and Allied Health Literature (CINAHL), Web of Science, APA PsycInfo, and the Psychology and Behavioral Sciences Collection, from inception to 30 August 2021. The main outcome measures were pain intensity, pain interference, kinesiophobia, pain catastrophizing, and self-efficacy. The statistical analysis was conducted using RStudio software. To compare the outcomes reported by the studies, we calculated the standardized mean difference (SMD) over time and the corresponding 95% confidence interval (CI) for the continuous variables. Results: Regarding pain intensity (vs. usual care/waiting list), we found a statistically significant trivial effect size in favor of e-BMT (n = 5337; SMD = −0.17; 95% CI −0.26, −0.09). Regarding pain intensity (vs. in-person BMT), we found a statistically significant small effect size in favor of in-person BMT (n = 486; SMD = 0.21; 95% CI 0.15, 0.27). With respect to pain interference (vs. usual care/waiting list), a statistically significant small effect size in favor of e-BMT was found (n = 1642; SMD = −0.24; 95% CI −0.44, −0.05). Finally, similar results were found for kinesiophobia, catastrophizing, and self-efficacy (vs. usual care/waiting list), where we found statistically significant small effect sizes in favor of e-BMT. Conclusions: e-BMT seems to be an effective option for the management of patients with chronic musculoskeletal pain, although it does not appear superior to in-person BMT in terms of improving pain intensity.


Introduction
The serious health crisis the world is currently experiencing as a result of coronavirus disease 2019 (COVID-19) is affecting virtually all social and professional spheres [1]. At the clinical level, conventional rehabilitation consultations have had to be suspended, and many patients have had to interrupt their standard or conventional (face-to-face) therapy. A small percentage of patients have begun undergoing therapy through telematic channels [1]. Although it is still too early to determine the actual percentage of clinicians who have incorporated telerehabilitation (TR) into their portfolio of services, we suspect that there are few. TR is defined as the implementation of a virtual, technology-based clinical-healthcare intervention in order to deliver care at a distance [2].
The person-centered model of care encompasses a number of dimensions, including the therapist-patient alliance, behavioral analysis, the patient as a whole, patient empowerment and, finally, the therapist's perspective [3].

Population
The participants selected for the studies were patients older than 18 years with any kind of chronic musculoskeletal disorder, regardless of gender. We excluded patients with musculoskeletal pain due to an oncologic or traumatic process.

Intervention and Control
The intervention was e-BMT delivered through a technological device (website, telephone, or mobile application). The intervention could be applied alone or embedded in another treatment, provided the control group received only the additional treatment. The control group could be usual care, a waiting list, no intervention, or an equivalent in-person BMT.

Outcomes
The measures used to assess the results were pain intensity, pain interference, kinesiophobia, pain catastrophizing and self-efficacy. The time of measurement was restricted to post-treatment results.

Study Design
We only included randomized studies (randomized controlled trials (RCTs) or randomized parallel design-controlled trials) given the amount of literature available in this area.
The search strategy was adapted to the other electronic databases. In addition, we manually checked the reference lists of the studies included in the review and the studies included in systematic reviews related to this topic. The search was also adapted and performed in Google Scholar due to its capacity to retrieve relevant articles and grey literature [30,31]. No language restrictions were applied, as recommended by international criteria [32]. The different search strategies used are detailed in Appendix B.
Two independent reviewers conducted the search using the same methodology, and the differences were resolved by consensus moderated by a third reviewer. We used Rayyan software to organize studies, assess studies for eligibility and remove duplicates [33].

Selection Criteria and Data Extraction
The two phases of study selection (title/abstract screening and full-text evaluation) were performed by two independent reviewers. First, they assessed the relevance of the studies regarding the study questions and aims, based on information from the title, abstract and keywords of each study. If there was no consensus or the abstracts did not contain sufficient information, the full text was reviewed. In the second phase of the analysis, the full text was used to assess whether the studies met all the inclusion criteria. Differences between the two independent reviewers were resolved by a consensus process moderated by a third reviewer [34]. Data described in the results were extracted by means of a structured protocol that ensured that the most relevant information was obtained from each study [35].

Risk of Bias and Methodological Quality Assessment
The Risk of Bias 2 (RoB 2) tool was used to assess randomized trials [36]. It covers a total of five domains: (1) bias arising from the randomization process, (2) bias due to deviations from the intended interventions, (3) bias due to missing outcome data, (4) bias in measurement of the outcome, and (5) bias in selection of the reported result. Each study was categorized as having (a) low risk of bias if all domains showed low risk of bias, (b) some concerns if one domain was rated as having some concerns and none as having high risk of bias, and (c) high risk of bias if at least one domain was rated as having high risk of bias or multiple domains were rated as having some concerns.
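As an illustration, the overall decision rule above can be sketched as a small helper (our own function, not part of the official RoB 2 tool):

```python
def rob2_overall(domains):
    """Derive an overall RoB 2 judgement from five domain ratings.

    domains: list of five ratings, each "low", "some concerns", or "high".
    Per the rules described above: any "high" domain, or more than one
    "some concerns", gives "high"; exactly one "some concerns" (and no
    "high") gives "some concerns"; otherwise "low".
    """
    highs = domains.count("high")
    concerns = domains.count("some concerns")
    if highs >= 1 or concerns > 1:
        return "high"
    if concerns == 1:
        return "some concerns"
    return "low"
```

For example, a study rated "low" in four domains and "some concerns" in one would be judged as having some concerns overall.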
Two independent reviewers examined the quality and the risk of bias of all the selected studies using the same methodology. Disagreements between the reviewers were resolved by consensus with a third reviewer. The concordance between the results (inter-rater reliability) was measured using Cohen's kappa coefficient (κ) as follows: (1) κ > 0.7 indicated a high level of agreement between assessors; (2) κ = 0.5-0.7 indicated a moderate level of agreement; and (3) κ < 0.5 indicated a low level of agreement [39].
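For illustration, Cohen's κ and the agreement thresholds above can be sketched as follows (a minimal two-rater implementation; function names are ours):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    categories = set(ratings_a) | set(ratings_b)
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

def agreement_level(kappa):
    """Interpretation thresholds used in this review."""
    if kappa > 0.7:
        return "high"
    if kappa >= 0.5:
        return "moderate"
    return "low"
```

For two raters agreeing on 3 of 4 binary judgements with marginal rates of 2/4 vs. 1/4, κ works out to 0.5, a moderate level of agreement.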

Certainty of Evidence
The certainty of evidence analysis was based on classifying the results into levels of evidence according to the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) framework, which is based on five domains: study design, imprecision, indirectness, inconsistency and publication bias [40]. The assessment of the five domains was conducted according to GRADE criteria [41,42]. Evidence was categorized into the following four levels accordingly: (a) High quality. Further research is very unlikely to change our confidence in the effect estimate. All five domains are met; (b) Moderate quality. Further research is likely to have an important impact on our confidence in the effect estimate and might change the effect estimate. One of the five domains is not met; (c) Low quality. Further research is very likely to have a significant impact on our confidence in the effect estimate and is likely to change the estimate. Two of the five domains are not met; and (d) Very low quality. Any effect estimate is highly uncertain. Three of the five domains are not met [41,42].
For the risk of bias domain, the recommendations were downgraded one level in the event of an uncertain or high risk of bias and serious limitations in the effect estimate (more than 25% of the participants came from studies with high risk of bias, as measured by the RoB 2 tool). In terms of inconsistency, the recommendations were downgraded one level when the point estimates varied widely among studies, the confidence intervals showed minimal overlap, or the I² was substantial or large (greater than 50%). For the indirectness domain, recommendations were downgraded when severe differences in interventions, study populations, or outcomes were found (i.e., in the absence of direct comparisons between the interventions of interest, when key outcomes were missing and the recommendation was based only on intermediate outcomes, or when more than 50% of the participants were outside the target group). For the imprecision domain, the recommendations were downgraded one level if there were fewer than 300 participants for the continuous data [43]. Finally, the recommendations were downgraded for publication bias if the results changed significantly after adjusting for it.
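The mapping from unmet GRADE domains to certainty levels described above can be sketched as follows (an illustrative helper; an actual GRADE judgement also involves qualitative considerations):

```python
def grade_level(unmet_domains):
    """Map the number of unmet GRADE domains to a certainty level,
    per the scheme described above: 0 -> high, 1 -> moderate,
    2 -> low, 3 or more -> very low."""
    levels = {0: "high", 1: "moderate", 2: "low"}
    return levels.get(unmet_domains, "very low")
```

So an outcome downgraded for both inconsistency and imprecision, for example, would be rated as low-quality evidence.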

Data Synthesis and Analysis
The statistical analysis was conducted using RStudio software (RStudio, PBC, Boston, MA, USA) following the guide by Harrer et al. [44]. To compare the outcomes reported by the studies, we calculated the standardized mean difference (SMD) over time and the corresponding 95% confidence interval (CI) for the continuous variables. The pooled SMD was computed as Hedges' g to account for a possible overestimation of the true population effect size in small studies [45]. The estimated SMDs were interpreted as described by Hopkins et al. [46]: an SMD of 4.0 or more represented an extremely large clinical effect, 2.0-4.0 a very large effect, 1.2-2.0 a large effect, 0.6-1.2 a moderate effect, 0.2-0.6 a small effect, and 0.0-0.2 a trivial effect. If necessary, CIs and standard errors (SEs) were converted to standard deviations (SDs) using the formulas recommended by the Cochrane Handbook for Systematic Reviews of Interventions, version 6.2: SD = √N × (upper limit − lower limit)/3.92 and SD = √N × SE, respectively [47]. We used the same inclusion criteria for the systematic review and the meta-analysis, with three additional criteria for the latter: (1) the results included detailed comparative statistical data on the exposure factors, therapeutic interventions, and treatment responses; (2) the intervention was compared with a similar control group; and (3) data on the analyzed variables were reported in at least three studies.
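The conversion formulas and the Hopkins et al. thresholds can be expressed as follows (a minimal sketch; we assume lower bounds are inclusive, and the function names are ours):

```python
import math

def sd_from_ci(n, lower, upper):
    """SD recovered from a 95% CI of a mean, per the Cochrane Handbook:
    SD = sqrt(N) * (upper limit - lower limit) / 3.92."""
    return math.sqrt(n) * (upper - lower) / 3.92

def sd_from_se(n, se):
    """SD recovered from a standard error: SD = sqrt(N) * SE."""
    return math.sqrt(n) * se

def interpret_smd(smd):
    """Hopkins et al. magnitude labels for an (absolute) SMD."""
    magnitude = abs(smd)
    for cutoff, label in [(4.0, "extremely large"), (2.0, "very large"),
                          (1.2, "large"), (0.6, "moderate"), (0.2, "small")]:
        if magnitude >= cutoff:
            return label
    return "trivial"
```

Under this scheme, the pooled SMD of −0.17 for pain intensity reported below is labeled trivial, while −0.24 for pain interference is labeled small.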
Since we pooled different treatments, we could not assume a unique true effect. We therefore anticipated between-study heterogeneity and used a random-effects model to pool effect sizes. To calculate the heterogeneity variance τ², we used the restricted maximum likelihood (REML) estimator, as recommended for continuous outcomes [48,49]. To calculate the confidence interval around the pooled effect, we used the Knapp-Hartung adjustment [50,51].
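The random-effects logic can be illustrated with a short sketch. Note that the review itself used the REML estimator with Knapp-Hartung adjustments; for compactness, the example below uses the simpler DerSimonian-Laird estimator of τ², so it illustrates the approach rather than reproducing our analysis:

```python
def random_effects_pool(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2
    estimator (illustrative; the review used REML with Knapp-Hartung).

    effects: per-study effect sizes (e.g., SMDs).
    variances: their within-study sampling variances.
    Returns (pooled_effect, tau2).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]          # fixed-effect weights
    sum_w = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum_w
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum_w - sum(wi ** 2 for wi in w) / sum_w
    tau2 = max(0.0, (q - (k - 1)) / c)        # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2
```

With homogeneous study results, τ² is estimated as zero and the model collapses to the fixed-effect (inverse-variance) estimate.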
To pool the catastrophizing variable and the different subscales of the Pain Catastrophizing Scale [52], we ran a subgroup analysis using a fixed-effects (plural) model [53]. First, we pooled effect sizes within each subgroup (pain catastrophizing or other overall catastrophizing score, helplessness, magnification, and rumination) using a random-effects model. Then, we used a fixed-effect model to combine the pooled effects of the different subgroups.
We estimated the degree of heterogeneity among the studies using Cochran's Q test (a p-value < 0.05 was considered significant), the inconsistency index (I²), and the prediction interval (PI) based on the between-study variance τ² [46]. Cochran's Q test allows us to assess the presence of between-study heterogeneity [54]. Despite its common use to quantify heterogeneity, the I² index only represents the percentage of variability in the effect sizes that is not caused by sampling error [55]. Therefore, as recommended, we additionally report PIs, which represent a range within which the effects of future studies are expected to fall based on the current data [55,56].
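These heterogeneity statistics can be sketched as follows (a normal-approximation PI is used here for brevity; textbook formulas use a t quantile with k − 2 degrees of freedom, and τ² is taken as given):

```python
import math

def heterogeneity_stats(effects, variances, tau2):
    """Cochran's Q, I^2 (%), and an approximate 95% prediction interval
    around the random-effects pooled estimate, given tau2."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Q: weighted squared deviations from the fixed-effect estimate
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    # I^2: share of variability not attributable to sampling error
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    # Random-effects pooled estimate and its standard error
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se_pooled = math.sqrt(1.0 / sum(w_star))
    # PI widens the CI by the between-study variance tau2
    half = 1.96 * math.sqrt(tau2 + se_pooled ** 2)
    return q, i2, (pooled - half, pooled + half)
```

Because the PI adds τ² to the variance of the pooled estimate, it is always at least as wide as the confidence interval, which is why a significant pooled effect can still have a PI that crosses zero.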
To detect the presence of outliers that could influence the estimated pooled effect and to assess the robustness of our results, we applied an influence analysis based on the leave-one-out method [57]. If a study's results had an important influence on the pooled effect, we conducted a sensitivity analysis without it. We additionally produced a drapery plot, which is based on p-value functions and shows the p-value curve for the pooled estimate across all possible alpha levels [58].
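The leave-one-out method can be sketched generically (the `pool` argument stands for any pooling function, e.g., a random-effects estimator; the fixed-effect pool below is only for demonstration):

```python
def fixed_pool(effects, variances):
    """Simple inverse-variance (fixed-effect) pooled estimate,
    used here only to demonstrate the influence analysis."""
    w = [1.0 / v for v in variances]
    return sum(wi * e for wi, e in zip(w, effects)) / sum(w)

def leave_one_out(effects, variances, pool):
    """Re-pool the effect sizes with each study removed in turn;
    a result far from the full-sample estimate flags an influential study."""
    results = []
    for i in range(len(effects)):
        sub_e = effects[:i] + effects[i + 1:]
        sub_v = variances[:i] + variances[i + 1:]
        results.append(pool(sub_e, sub_v))
    return results
```

Comparing each leave-one-out estimate with the full-sample estimate shows how strongly a single study (e.g., an outlier) pulls the pooled effect.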
To detect publication bias, we performed a visual evaluation of the Doi plot and the funnel plot, seeking asymmetry [59]. We also calculated the Luis Furuya-Kanamori (LFK) index, a quantitative measure that has been shown to be more sensitive than the Egger test in detecting publication bias in meta-analyses with a low number of studies [60]. An LFK index within ±1 indicates no asymmetry, between ±1 and ±2 minor asymmetry, and beyond ±2 major asymmetry. If there was significant asymmetry, we applied a small-study-effect method to correct for publication bias using the Duval and Tweedie trim-and-fill method [61].
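The LFK interpretation bands can be expressed as a small helper (our own function; computing the index itself requires the Doi plot quantiles and is omitted here):

```python
def interpret_lfk(lfk):
    """Classify an LFK index value per the thresholds described above:
    |LFK| <= 1 no asymmetry, 1 < |LFK| <= 2 minor, |LFK| > 2 major."""
    magnitude = abs(lfk)
    if magnitude <= 1:
        return "no asymmetry"
    if magnitude <= 2:
        return "minor asymmetry"
    return "major asymmetry"
```

For instance, the LFK index of −1.79 reported below for pain intensity falls in the minor-asymmetry band, while −2.36 for the in-person comparison indicates major asymmetry.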

Meta-Analysis Results
The overall strength of evidence for each variable and the reasons for downgrading are detailed in Table 2.

Pain Intensity (vs. Usual Care/Waiting List)
The influence analyses revealed that two studies were outliers [66,110], so we ran a sensitivity analysis without them (Appendix F). The sensitivity analysis showed a statistically significant trivial effect size (38 RCTs; n = 5337; SMD = −0.17; 95% CI −0.26, −0.09) of e-BMT on pain intensity, with significant heterogeneity (Q = 67.4 (p < 0.01), I² = 44% (18%, 62%), PI −0.48, 0.13) and a low strength of evidence (Figure 1). Since the PI crosses zero, we cannot be confident that future studies will not find contradictory results. The drapery plot revealed that the statistical significance of the results is robust across different p-value functions (Appendix F). With respect to publication bias, visual evaluation of the funnel and Doi plots showed an asymmetrical pattern, with minor asymmetry (LFK index = −1.79) (Appendix F). When the sensitivity analysis was adjusted for publication bias, the effect was no longer statistically significant (Appendix F). Subgroup analyses are detailed in Table 3.

Pain Intensity (vs. In-Person BMT)
The influence analyses revealed no outliers (Appendix G). The statistical analysis showed a statistically significant small effect size (5 RCTs; n = 486; SMD = 0.21; 95% CI 0.15, 0.27) of in-person BMT on pain intensity, with no significant heterogeneity (Q = 0.23 (p = 0.99), I² = 0% (0%, 79.2%), PI 0.14, 0.28) and a moderate strength of evidence (Figure 2). Since the PI does not cross zero, we can be confident that future studies will not find contradictory results. The drapery plot revealed that the statistical significance of the results is robust across different p-value functions (Appendix G). With respect to publication bias, visual evaluation of the funnel and Doi plots showed an asymmetrical pattern, with major asymmetry (LFK index = −2.36) (Appendix G). However, adjusting for publication bias did not influence the results (Appendix G).

Figure 1. Sensitivity analysis of the pain intensity variable for online behavioral techniques versus usual care or waiting list. The forest plot summarizes the results of the included studies (sample size, mean, standard deviation (SD), standardized mean difference (SMD), and weight). The squares represent the point estimates of the effect size, with square size reflecting sample size; the lines on either side of each square represent the 95% confidence interval (CI).

Pain Interference (vs. Usual Care/Waiting List)
The influence analyses revealed no outliers (Appendix H). The statistical analysis showed a statistically significant small effect size (13 RCTs; n = 1642; SMD = −0.24; 95% CI −0.44, −0.05) of e-BMT on pain interference, with significant heterogeneity (Q = 28.78 (p < 0.01), I² = 58% (23%, 77%), PI −0.79, 0.31) and a low strength of evidence (Figure 3). Since the PI crosses zero, we cannot be confident that future studies will not find contradictory results. Moreover, the drapery plot revealed that the statistical significance of the results did not hold at p = 0.01, so we cannot be confident in the significance of our results (Appendix H). With respect to publication bias, visual evaluation of the funnel and Doi plots showed a symmetrical pattern, with no asymmetry (LFK index = −0.21) (Appendix H). Subgroup analyses are detailed in Table 3.

Kinesiophobia (vs. Usual Care/Waiting List)
The influence analyses revealed that the study from Friesen et al. was an outlier [69], so we ran a sensitivity analysis without it (Appendix H). The sensitivity analysis showed a statistically significant small effect size (3 RCTs; n = 340; SMD = −0.57; 95% CI −1.08, −0.06) of e-BMT on kinesiophobia, with no significant heterogeneity (Q = 2.09 (p = 0.35), I² = 4% (0%, 90%)) and a moderate strength of evidence (Figure 4). The drapery plot revealed that the statistical significance of the results is robust across different p-value functions (Appendix H). With respect to publication bias, visual evaluation of the funnel and Doi plots showed an asymmetrical pattern, with major asymmetry (LFK index = −4.12) (Appendix H). When the sensitivity analysis was adjusted for publication bias, there was still a statistically significant small effect (Appendix H).

Catastrophizing (vs. Usual Care/Waiting List)
The influence analyses revealed that the studies from Ruehlman et al. and Trudeau et al. were outliers [93,103], so we ran a sensitivity analysis without them (Appendix I). The sensitivity analysis showed a statistically significant small effect size (16 RCTs; n = 1613; SMD = −0.40; 95% CI −0.48, −0.32) of e-BMT on catastrophizing, with no significant heterogeneity (Q = 1.76 (p = 0.62), I² = 31% (0%, 56%)) and a moderate strength of evidence (Figure 5). All the subscales of the pain catastrophizing scale showed statistically significant improvements. The drapery plot revealed that the statistical significance of the results is robust across different p-value functions (Appendix I). With respect to publication bias, visual evaluation of the funnel and Doi plots showed a symmetrical pattern, with no asymmetry (LFK index = −0.34) (Appendix I).

Self-Efficacy (vs. Usual Care/Waiting List)
The influence analyses revealed that the study from Kleiboer et al. was an outlier [79] (Appendix J), so we ran a sensitivity analysis without it. The sensitivity analysis showed a statistically significant small effect size (20 RCTs; n = 2811; SMD = 0.38; 95% CI 0.17, 0.60) of e-BMT on self-efficacy, with significant heterogeneity (Q = 50.41 (p < 0.01), I² = 62% (29%, 80%), PI −0.14, 0.91) and a low strength of evidence (Figure 6). Since the PI crosses zero, we cannot be confident that future studies will not find contradictory results. The drapery plot revealed that the statistical significance of the results is robust across different p-value functions (Appendix J). With respect to publication bias, visual evaluation of the funnel and Doi plots showed a minor asymmetry (LFK index = 1.78) (Appendix J). When the sensitivity analysis was adjusted for publication bias, there was still a statistically significant small effect (Appendix J). Subgroup analyses are detailed in Table 3.

Discussion
The aim of this systematic review was to assess the effectiveness of e-BMT on pain-related variables in patients with chronic musculoskeletal pain. We found a trivial effect of e-BMT on pain intensity when compared with usual care or a waiting list. Subgroup analyses showed that e-BMT seems to be more effective in fibromyalgia, in internet-based interventions, and in interventions applied for more than 1 month. However, e-BMT showed a statistically significant smaller improvement in pain intensity than an equivalent in-person BMT. There was a small effect on pain interference, kinesiophobia, and self-efficacy when compared with usual care or a waiting list. Subgroup analyses showed that e-BMT seems to be more effective in unspecified chronic pain, in CBT or self-management interventions, and in interventions lasting between 1 and 3 months. There was a small effect on catastrophizing when compared with usual care or a waiting list; when analyzed per subscale, all the subscales (helplessness, rumination, and magnification) and the overall score showed a small effect in favor of e-BMT.
Dario et al. reviewed the effect of e-BMT on patients with chronic LBP and found no effect on pain intensity [27]. We found that e-BMT had an overall significant effect on pain intensity; however, our subgroup analysis revealed no statistically significant effect for chronic LBP, which confirms their results. Unlike us, they included only four studies in their meta-analysis. Du et al. reviewed the effect of online self-management on chronic LBP [24] and found that online BMT had an effect on pain intensity similar to that of in-person BMT; in the present systematic review, we add a quantitative analysis indicating that in-person BMT is more effective. We want to emphasize that there are no systematic reviews that provide meta-analyses of the effect of e-BMT, exclusively in adults, compared with usual care/waiting list on different important variables in patients with chronic pain (e.g., catastrophizing, pain interference, kinesiophobia, self-efficacy), nor that provide a quantitative comparison with in-person BMT.
The COVID-19 pandemic has confronted us with an important barrier to the appropriate management of patients with chronic pain: social distancing [13,14]. Their treatments were undermined by this situation, resulting in a worsening of their condition [13,14]. Despite current improvements, the COVID-19 pandemic has not concluded and the future is uncertain [121,122]. This leaves us with a question from which we must learn in order to prepare for the future: how can we provide effective rehabilitation to patients with chronic pain when physical presence is impossible? TR and the use of new technologies appear to be a serious answer to this problem and have been recommended worldwide [14,123]. Patients with chronic pain highlight the importance of health professionals giving them the tools to cope with the burden of chronic pain [124]. e-BMT offers the possibility of giving patients tools to self-manage their condition through different BMTs (e.g., CBT, ACT), whatever the patient's situation: from geographic isolation to social distancing. In the present systematic review, we found that e-BMT is effective in the management of patients with chronic pain.
We found that in-person BMT was superior to e-BMT in improving pain intensity. Lewis et al. studied how patients perceived the transition from in-person to online treatment and found that 40% of patients thought the transition may have affected the effectiveness of the treatment; moreover, 68% said they would not want to continue online once in-person treatment became possible again [125]. Our results could be explained by some patients' preference for face-to-face treatment; such patients may therefore have had worse expectations about their treatment. Future studies should evaluate patient expectations of e-BMT as a possible confounding factor. Finally, the data must be considered with caution due to the heterogeneity of the sample, although a subgroup analysis was carried out to assess the effect of each intervention within BMTs and within each specific clinical population. One question the authors reflect on is whether the results obtained are generalizable to all patients with persistent pain of musculoskeletal origin. The answer is that it depends. First, it must be determined whether psychosocial variables such as catastrophic thoughts, movement-related fear, or lack of self-efficacy are present. If these variables are not present, it would make little sense to implement interventions aimed at improving them. However, if they are present and can have an impact on the lives of patients with persistent pain, these tools should be considered. Future studies are necessary, especially in order to homogenize the sample, something that is always sought in the treatment of patients with pain.

Practical Implications
Regarding clinical implications, the results favor e-BMT. This provides an effective treatment window in the COVID-19 era, allowing a greater impact on patients with persistent pain. In addition, the decentralization of interventions may have positive effects such as improving adherence to treatment through easier accessibility, lowering barriers to access, and facilitating follow-up. Future studies should focus on longer follow-ups to confirm this effectiveness and should evaluate variables such as motivation and adherence to chronic pain treatments. Finally, telemedicine-based rehabilitation may lead to lower costs for both patients and therapists, which may reduce waiting lists for clinical treatments.

Limitations
Despite the use of subgroup analyses to study the heterogeneity between studies, the differences between e-BMT protocols prevent us from offering health professionals a specific intervention design to implement. After adjusting for publication bias, our results on pain intensity versus usual care were no longer statistically significant, so they should be interpreted with caution. Our results on pain intensity, pain interference, and self-efficacy are supported by only very low to low quality of evidence; the true effects might be, or probably are, different from our estimated effects [126]. No study showed a low risk of bias according to the RoB 2 tool; future studies should improve their quality to increase the confidence we can have in their results.

Conclusions
Based on the results obtained, e-BMT seems to be an effective option for the management of patients with chronic musculoskeletal pain, especially in the era of COVID-19, in which social distancing must be prioritized. However, it does not appear superior to in-person BMT in terms of improving pain intensity.

Medline (PubMed)
Pain"[Mesh])) AND (randomized controlled trial[pt] OR controlled clinical trial[pt] OR randomized[tiab] OR placebo[tiab] OR clinical trials as topic[mesh:noexp] OR randomly[tiab] OR trial[ti] NOT (animals[mh] NOT humans[mh]) NOT ("protocol") NOT ("Review"))

CINAHL (173 results)
(web or internet or online or mobile or remote treatment or digital treatment or Internet-Based Intervention or Telerehabilitation or Telemedicine) AND (chronic pain or persistent pain or long term pain or long-term pain) AND (randomized controlled trials or rct or randomised control trials) NOT (systematic review or meta-analysis or literature review or review of literature) NOT (pediatric or child or children or infant or adolescent)

Psychology and Behavioral Sciences Collection (EBSCO) (12 results)
(web or internet or online or mobile or remote treatment or digital treatment or Internet-Based Intervention or Telerehabilitation or Telemedicine) AND (chronic pain or persistent pain or long term pain or long-term pain) AND (randomized controlled trials or rct or randomised control trials) NOT (systematic review or meta-analysis or literature review or review of literature) NOT (pediatric)

APA PsycInfo (75 results)
(web or websites or internet or online or Online Therapy or mobile or Mobile Applications or remote treatment or digital treatment or Digital Interventions or Internet-Based Intervention or Telerehabilitation or Telemedicine) AND (chronic pain or persistent pain or long term pain or long-term pain) AND (randomized controlled trials or rct or randomised control trials) NOT (systematic review or meta-analysis or literature review or review of literature) NOT (pediatric or child or children or infant or adolescent)

Web of Science (49 studies)
TI = (Web OR eHealth OR mHealth OR remote treatment OR digital treatment OR Mobile Applications OR Software OR Online OR Telephone OR Cell phone OR e-therapy OR Internet OR Telerehabilitation OR Internet-Based Intervention OR Telemedicine) AND TI = (Chronic pain) AND TI = (randomi?ed controlled trial* OR rct)

Google Scholar
("web" OR "online" OR "internet" OR "mobile" OR "telerehabilitation" OR "telemedicine") AND [allintitle:"chronic pain" OR "persistent pain"] AND ("randomized controlled trial" OR "randomised controlled trial" OR "RCT") -review

Notes: 1: subject choice criteria are specified; 2: random assignment of subjects to groups; 3: concealed allocation; 4: groups were similar at baseline; 5: all subjects were blinded; 6: all therapists were blinded; 7: all evaluators were blinded; 8: measures of at least one key outcome were obtained from more than 85% of baseline subjects; 9: intention-to-treat analysis was performed; 10: results of statistical comparisons between groups were reported for at least one key outcome; 11: the study provides point and variability measures for at least one key outcome. Item 1 does not contribute to the final score.