The Rethinking Clinical Trials (REaCT) Program. A Canadian-Led Pragmatic Trials Program: Strategies for Integrating Knowledge Users into Trial Design

We reviewed patient and health care provider (HCP) surveys performed through the REaCT program. The REaCT team has performed 15 patient surveys (2298 respondents) and 13 HCP surveys (1033 respondents) that have addressed a broad range of topics in breast cancer management. Over time, the proportion of surveys distributed by paper/regular mail has fallen, with electronic distribution now the norm. For the patient surveys, the median duration of the surveys was 3 months (IQR 2.5–7 months) and the median response rate was 84% (IQR 80–91.7%). For the HCP surveys, the median survey duration was 3 months (IQR 1.75–4 months), and the median response rate, where available, was 28% (IQR 21.2–49%). The survey data have so far led to: 10 systematic reviews, 6 peer-reviewed grant applications and 19 clinical trials. Knowledge users should be an essential component of clinical research. The REaCT program has integrated surveys as a standard step of its trials process. The COVID-19 pandemic and reduced face-to-face interactions with patients in the clinic, as well as the continued importance of social media, highlight the need for alternative means of distributing and responding to surveys.


Introduction
There are many barriers to performing clinical trials, and in recent years the number of adult cancer patients accrued to trials has steadily fallen [1]. The REthinking Clinical Trials (REaCT) Program was created with the intention of overcoming many of these barriers for comparing standard of care interventions, so that more patients could be offered participation in trials, participation would be less onerous, and results would be clinically important [2,3]. While initially developed as an initiative in Ottawa, it became increasingly clear that investigators in other centres were also interested in participating in REaCT trials as well as leading their own studies using the REaCT infrastructure. Thus, over the years the program has expanded to multiple sites across Canada. The key elements of the program are shown in Figure 1, and broadly incorporate: identification of clinically relevant questions; conduct of systematic reviews of the evidence and surveys of end users; performance of pragmatic trials (using simply defined study endpoints, avoidance of superfluous data collection, use of an integrated consent model (ICM) incorporating oral consent [2,4,5], efficient Research Ethics Board (REB) approval [6], web-based randomisation in the clinic, and the use of real-time electronic data capture); economic analyses; and knowledge mobilisation strategies. To date, the REaCT investigators have performed 20 randomized trials at 16 centres and have accrued over 3300 patients. The mandate of these trials has been broad.

Materials and Methods
All surveys performed by the REaCT team since program inception in 2014 were reviewed as were studies performed by the team members that followed the same methodology. Where information was not available from the original publication of each survey, source documentation was sought if feasible.

Patient Survey Outcomes
Outcome data collected from patient surveys included patient demographics (i.e., type of cancer, stage of cancer), how potential survey participants were identified (e.g., from clinic lists), how participants were contacted for survey participation (e.g., approached by an HCP or cold-called by a study clinical research associate), how surveys were distributed to participants, and how survey responses were collected (in clinic, email, mail, or various online platforms such as Microsoft Forms or the institution's electronic medical record (EMR)). Where possible, information on response rates to surveys was also collected.

Health Care Provider Survey Outcomes
Outcome data for HCP surveys included: types of participants (e.g., surgical/medical/radiation oncologists, surgeons, RNs, APNs), how participants were identified (e.g., society listings), how participants were contacted (email, various online platforms such as Microsoft Forms), how surveys were distributed, and how survey responses were collected (in clinic, email, Microsoft Forms). Using a modified Dillman approach, each survey was sent to HCPs at least twice [22]. Where possible, information on response rates to surveys was also collected.

Results
The REaCT team members have performed and published 15 patient and 13 HCP surveys. These are outlined in Tables 1 and 2, respectively.

Process for Designing Surveys
The surveys were consistently designed by a multidisciplinary team with demonstrated expertise in oncology, survey design, and methodology. Each survey was pilot-tested on a limited number of patients, oncologists, advanced practice nurses and non-healthcare professionals before launch. Over time, it has become clear that repeated readings of surveys are needed to ensure that they remain clearly written with unambiguous answers. In addition, keeping surveys as short as possible to ensure compliance is essential [23].

Choice of Research Ethics Board (REB)
As publication of survey results is the intent of most surveys performed, we used either local REBs or, where more than one site would accrue participants, the Ontario Cancer Research Ethics Board (OCREB). In the few examples where there was no intent to publish, no REB approval was sought. This included ad hoc surveys of colleagues in our centre asking what differences in study outcomes would be enough to drive changes in practice, for the purpose of sample size calculations for grant applications. In the current review we only discuss those surveys with a formal protocol that followed the REaCT program processes.

Use of Incentives
A significant issue with surveys is ensuring that the response rate is high enough to make the study findings truly meaningful. Some authors have proposed that survey response rates should achieve at least 60% to ensure that the validity of results is not influenced by nonresponse bias [24]. There is literature on the use of incentives (e.g., a financial reward for completing the survey) as a tool for increasing response rates [25]. However, for an academic, investigator-led program, such incentives could be financially prohibitive to actually performing the study. In addition, any honoraria received are taxable income that should be declared by the recipient [25]. To date, we have only had funds to offer a gift voucher (a coffee card worth $5) to those physicians who sent us an email on completion of one REB-approved survey [26].

Patient Surveys
The 15 patient surveys addressed a broad range of topics, including: perceptions around post-operative radiological staging [27], choices of adjuvant surgery/radiotherapy and endocrine therapy in patients aged ≥70 years [29], toxicities from endocrine therapy (hot flashes [32], urogenital side effects [33]), timing of starting endocrine therapy in patients receiving radiotherapy [78], choice of adjuvant chemotherapy for TNBC [37], ranking of chemotherapy toxicities for both early stage and metastatic patients [38,49], taxane-associated pain syndrome [50], use of filgrastim for primary febrile neutropenia prophylaxis during adjuvant chemotherapy [51], dosing of dexamethasone in patients receiving docetaxel [26], choice of vascular access strategy for chemotherapy administration [56], choice of endpoints for chemotherapy-induced nausea and vomiting (CINV) [58] and de-escalation of adjuvant bisphosphonates [59]. All of these surveys involved patients with breast cancer. Two surveys evaluating the use of bone-modifying agents (BMAs) in patients with bone metastases accrued patients with both breast cancer and castration-resistant prostate cancer (CRPC) [61,68].
Of the 15 surveys performed, 5 required written consent. However, in more recent years, after working closely with local and provincial REBs, all surveys used implied consent: patients gave verbal consent to being approached for a survey and could then choose whether or not to anonymously complete it. This change reflected the increasing recognition that not all surveys require written consent, and indeed that a requirement for written consent could reduce the ability of study findings to reflect as broad a patient population as possible. Potential patients for surveys were often identified in the clinic (11/15); however, in more recent surveys, patients have also been identified and approached through their involvement in other studies [29,32] and through pharmacy lists [50,68]. With the introduction of the MyChart function within the EPIC EMR, patients are also now able to consent to being contacted about other studies [29,32]. While most earlier studies accrued patients through the physician at a clinic visit, studies launched since March 2021 and the COVID-19 restrictions on in-person clinic visits have used a combination of approaches, including cold calling by study CRAs [29,32,68]. However, all eligible patients were approached and presented with the survey by someone in their circle of care. Traditionally, REB approval has required that paper-based copies of any survey be available for all patients to complete either in the clinic or at home, and this was so for all 15 studies. However, there has been an increasing move to responses being made by: telephone (3 surveys), email (9 surveys), a laptop in the clinic (2 surveys), or regular mail (4 surveys). As response rates to mailed-out surveys have proven to be low, we no longer offer this option.
Using these strategies, a total of 2298 of 2624 contacted patients responded to the 15 surveys. The median duration of the surveys was 3 months (IQR 2.5–7 months) and the median response rate was 84% (IQR 80–91.7%). The surveys frequently identified clinical equipoise (Table 1), and all have been either published or are currently under review [29,32]. The survey data lent support to the REaCT program performing: 1 population-based cohort study, 10 systematic reviews, 6 peer-reviewed grant applications, 3 review articles, 4 treatment guidelines and 19 clinical trials.
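As an aside, summary statistics of the kind reported above (a median response rate with its interquartile range across surveys) can be computed directly from per-survey counts. A minimal sketch follows, using hypothetical (responded, contacted) pairs for illustration only, not the actual REaCT survey data:

```python
from statistics import median, quantiles

# Hypothetical (responded, contacted) counts per survey; illustrative only,
# NOT the actual REaCT survey data.
surveys = [(80, 100), (45, 50), (170, 200), (84, 100), (92, 100)]

# Per-survey response rates, expressed as percentages
rates = [100.0 * responded / contacted for responded, contacted in surveys]

# Median and interquartile range (Q1 to Q3) across the surveys
q1, q2, q3 = quantiles(rates, n=4)  # the three quartile cut points
print(f"median response rate: {median(rates):.1f}%")
print(f"IQR: {q1:.1f}%-{q3:.1f}%")
```

Note that different quartile conventions (e.g., inclusive vs. exclusive interpolation) can give slightly different IQR bounds for small numbers of surveys, so the method used is worth stating when reporting.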

Health Care Provider Surveys
Of the 13 HCP surveys performed, the survey topics were similar to those in the patient surveys (Table 2). These topics included: development of a decision aid for breast cancer patients considering contralateral prophylactic mastectomy [73], perceptions around post-operative radiological staging [74], management of lobular cancer [75], choices of adjuvant surgery/radiotherapy and endocrine therapy in patients aged 70 or over [77], timing of starting endocrine therapy in patients receiving radiotherapy [78], choice of chemotherapy for TNBC [37], toxicities from endocrine therapy [81], and supportive care studies for chemotherapy patients. These studies have evaluated: choice of vascular access for chemotherapy administration [83], use of growth factors with neo/adjuvant chemotherapy for breast cancer [51], dexamethasone pre-medication with docetaxel [26], as well as the de-escalation of bone-modifying agents in both the adjuvant [84] and metastatic settings [85,86]. Most studies related to the care of breast cancer patients, while the surveys evaluating bone-modifying agents in the metastatic setting [85,86] also addressed patients with castration-resistant prostate cancer.
A broad range of HCPs were surveyed, including: medical oncologists (13 surveys), radiation oncologists (9), surgical oncologists (7), oncology nurses (including advanced practice nurses (APNs) and nurse practitioners (NPs)) (4), and general practitioners in oncology (2). Participants were initially identified through society membership listings; over time, these lists were used to derive a list of responsive HCPs that was used in 10 further surveys. All surveys included contacting HCPs by email, and 2 also used regular mail. As these studies all received REB approval, they required a documented consent process; for the HCP surveys, completion of the survey (whether on paper or electronic) implied consent to participate in the study.
Using these strategies, a total of 1033 of 3280 contacted HCPs responded. Across the 13 surveys, the median survey duration was 3 months (IQR 1.75–4 months) and the median response rate, where available, was 28% (IQR 21.2–49%). Similar to the patient surveys, the 13 HCP surveys frequently identified clinical equipoise (Table 2). All the surveys were published or are currently under review [77,81]. The survey data lent support to: the development of a decision aid, a population-based cohort study, 6 systematic reviews, 5 peer-reviewed grant applications, 2 review articles, 2 treatment guidelines and 15 clinical trials.

Discussion
Surveys provide an important form of scientific inquiry that aims to gather reliable and unbiased data in an efficient, reasonably inexpensive, and adaptable way from a representative sample of respondents [23–25]. Knowledge user input through surveys is an essential part of the planning for any clinical trial. Knowledge users can provide invaluable information on such diverse issues as clinical equipoise, meaningful study endpoints, clinical importance of the question being asked, elements of study design to enhance pragmatism and improve enrollment, and willingness to participate in clinical trials (whether as a patient or as a treating physician). In this manuscript, we present the experience of the largest pragmatic oncology program that we are aware of in Canada. We also present important lessons learned regarding survey implementation thus far in the engagement of our most vital knowledge users. The lessons learned are particularly important in an era of rapid expansion of social media as well as the impact of the COVID-19 pandemic, when face-to-face visits to the cancer centre are becoming less frequent and will likely remain so in the post-COVID world.
With 15 patient surveys that received feedback from 2298 respondents, and 13 HCP surveys answered by 1033 respondents, covering a broad range of mainly breast cancer-related topics, we feel we have successfully integrated surveys of knowledge users into our trials methodologies. The results of the current study show that planned collection and integration of knowledge user feedback in the Canadian health care system is feasible. These surveys have also provided information on clinical equipoise and endpoints that are important to patients. An example was our CINV patient survey, where it was apparent that patients felt the traditional endpoints used in emesis trials did not reflect the endpoints that were important to them [58]. This feedback led to a change in the design of our most recent study of CINV interventions, where nausea was made the primary endpoint [13]. Another example is the variability in filgrastim use in patients receiving chemotherapy for breast cancer [51]. This demonstration of clinical equipoise led to a successful clinical trial that showed shorter durations of filgrastim were as effective as longer durations but with less toxicity [15]. It is therefore gratifying that our end user surveys have both directly and indirectly led to a number of important outcomes, such as grant applications, systematic reviews, review papers and guidelines, as well as actual clinical trials designed to resolve the clinical equipoise raised by end users.
As in all areas of research, there are many potential limitations to performing surveys. Given the need for a representative sample of respondents [23–25], response rates are important. Indeed, journal reviewers frequently cite low response rates as a limitation, and low rates can also represent a barrier to publication. A growing challenge is establishing what represents an acceptable response rate nowadays, as COVID-19 has fundamentally changed the nature of clinical care, with a significant reduction in face-to-face interactions between HCPs and patients. With respect to patient surveys, we have explored different strategies for enhancing how patients are approached (for example, by using pharmacy lists, as well as the MyChart function in EPIC that allows patients to consent to being approached for research endeavours). There is also an inherent bias in the types of patients approached by HCPs, as they are usually under the care of investigators involved in the particular study and rarely reflect practice across nations as a whole. Our team has also faced low response rates to telephone and mail surveys, and increasingly we are trying to perform all surveys through electronic platforms. There is also the issue that implied consent, as reflected through completion of the survey, may not actually mean that the subject fully understands the objective of the study. Finally, some journals have asked us to link certain survey responses to individual patient data [59]. As surveys are typically anonymous, such post hoc analyses are not possible.
With respect to HCP surveys, a challenge has been relatively low response rates. For some membership listings (e.g., CANO), we were unable to target HCPs treating a specific tumour site, meaning that response rates are at times lowered because many recipients simply do not treat that type of cancer. There is also the inherent bias of the types of HCPs who respond, which is difficult to overcome.
While the use of financial incentives is outlined above, their costs put this type of initiative out of reach of many investigator-initiated studies that lack pharmaceutical company support [25]. Another important challenge is HCP irritation at receiving unsolicited emails for survey participation. We have tried to resolve this by asking HCPs to tell us if they are not interested in receiving these emails. Finally, there is the limitation that the surveys thus far have been predominantly breast cancer-related and have a Canadian bias.
We feel end user feedback will remain an essential component of any clinical research program. Future studies are clearly needed. These could evaluate better strategies for identifying and receiving responses from as broad a range of end users as possible. Such studies could also evaluate the use of social media platforms. For example, for our own patients in Ottawa, harnessing the convenience of the EPIC electronic health record to administer electronic surveys may present interesting ongoing opportunities. Future studies could also potentially allow expansion of the program outside of Canada.

Conclusions
Surveys of knowledge users are an essential component of clinical research. The REaCT program has integrated surveys as a standard step of its trials process, which has resulted in grant applications, systematic reviews, review papers, guidelines and clinical trials. The COVID-19 pandemic, reduced face-to-face interactions with patients in the clinic, and the continued importance of social media highlight the need for alternative means of distributing and responding to surveys.

Conflicts of Interest:
The authors declare no conflict of interest.