Article

Validation of the BC-Brain Patient-Reported Outcome Questionnaire for Patients with Central Nervous System Tumours Treated with Radiotherapy

Ling Yan, Alan Nichol and Robert Olson

1 BC Cancer-Prince George, Prince George, BC V2M 7E9, Canada
2 School of Population and Public Health, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
3 BC Cancer-Vancouver, Vancouver, BC V5Z 4E6, Canada
4 Department of Surgery, University of British Columbia, Vancouver, BC V5Z 1M9, Canada
* Author to whom correspondence should be addressed.
Curr. Oncol. 2022, 29(4), 2798-2807; https://doi.org/10.3390/curroncol29040228
Submission received: 1 March 2022 / Revised: 11 April 2022 / Accepted: 13 April 2022 / Published: 16 April 2022

Abstract

The BC-brain questionnaire was developed by BC Cancer, as part of the Prospective Outcomes and Support Initiative (POSI), to detect health problems during routine clinical care in patients with central nervous system (CNS) tumours treated with radiotherapy (RT). This study aimed to present and validate the BC-brain questionnaire in patients with brain metastases (BrM) treated with RT. The questionnaire comprises three subscales: mobility, thinking and CNS symptoms. Patients with BrM from five BC Cancer centres completed the questionnaire at their first visit and at subsequent follow-up appointments. A total of 365 patients completed the baseline questionnaire and 105 completed the follow-up questionnaire. Summary scores for each subscale were calculated. Mobility, thinking and the subtotal score showed good reliability, with Cronbach's α > 0.7. Multitrait scaling analysis showed good convergent and divergent validity. The correlations between subscales ranged from 0.262 to 0.456 at baseline and from 0.378 to 0.597 at follow-up. Patients on dexamethasone scored worse, and patients with a Karnofsky performance status (KPS) of ≤70 scored worse than patients with a KPS of >70. Overall, the BC-brain questionnaire has good reliability and validity and is suitable for use as a patient-reported outcome (PRO) instrument to measure quality of life in BrM patients treated with RT.

1. Introduction

Brain metastases (BrM) occur in approximately 20–40% of cancer patients during the course of their disease [1,2,3]. The median survival is approximately 3–6 months following whole-brain radiotherapy (WBRT) [2,4], 11 months following stereotactic radiosurgery (SRS) and 2 months with supportive care only [5]. Common symptoms of BrM include headache, nausea, seizures and neurocognitive impairment, which have a major negative impact on patients' quality of life (QoL) [6]. Treatment for patients with BrM therefore requires a careful balance between life expectancy and patients' QoL. Figure 1 displays a solitary BrM on MRI.
Traditionally, a patient's functional status, as measured by the Karnofsky performance score (KPS) or the Eastern Cooperative Oncology Group (ECOG) performance status, is one of the key factors used to assess fitness for treatment [7,8,9]. These are one-dimensional measures based mostly on patients' physical function, and are considered insufficient to describe patients' overall QoL. A more recent concept is health-related QoL (HRQOL), defined by the US Food and Drug Administration as "a multi-domain concept that represents the patient's general perception of the effect of illness and treatment on physical, psychological, and social aspects of life" [10,11,12]. HRQOL was first accepted as a complementary outcome in patient care, but is increasingly being regarded as a primary outcome to guide treatment decisions.
Patient-reported outcomes (PROs) are “any information on the outcomes of health care obtained directly from patients without modification by clinicians or other health care professionals” [13,14,15,16]. A PRO is a good complement to physicians’ assessments to capture information from the patients’ perspective. QoL measures are often examples of PROs, but not all PROs are valid QoL measures, as some are not validated multi-domain questionnaires.
In brain cancer patients, the commonly used PRO instruments for HRQOL were developed for use in clinical trials. They include the functional assessment of cancer therapy–brain (FACT-Br), used with the functional assessment of cancer therapy–general (FACT-G); the European Organization for Research and Treatment of Cancer brain cancer module (EORTC QLQ-BN20), used together with the EORTC core quality of life questionnaire (QLQ-C30); the MD Anderson symptom inventory for brain tumour (MDASI-BT); the brain symptom and impact questionnaire (BASIQ); and their translated versions [17,18,19,20,21]. The first two are the most widely used. However, the pre-existing instruments are either lengthy, not designed or validated for BrM patients receiving RT, or not designed for use in routine clinical care. This motivated the design of a questionnaire for patients with tumours of the central nervous system (CNS), either primary brain tumours or BrM, that is reasonably short yet comprehensive enough to cover brain-metastases-specific symptoms and thus better reflect their HRQOL. Importantly, it was designed to be practically useful in routine clinical care when seeing patients on or shortly after RT.
The objective of this work was to provide an overview of the administration of this BC-brain questionnaire at BC Cancer and examine its validity among patients receiving RT for BrM.

2. Materials and Methods

The BC-brain questionnaire was designed in-house at BC Cancer. BC Cancer (http://www.bccancer.bc.ca/ (accessed on 28 February 2022)), part of the Provincial Health Services Authority, operates six regional cancer centres (Abbotsford, Kelowna, Prince George, Surrey, Vancouver and Victoria), providing assessment and diagnostic services, chemotherapy, radiation therapy and supportive care. Each centre delivers cancer treatment based on provincial standards and guidelines established by BC Cancer [22].
Designed by a group of oncologists and a nurse practitioner with expertise in CNS malignancies, the initial question pool consisted of the symptoms highlighted by this group and questions similar to those found on commonly available HRQOL questionnaires, such as the FACT and EORTC questionnaires. After item reduction, the final version consists of 24 questions. The first question is a general rating of overall QoL in the past three days on a scale from 0 to 10, with a higher score representing better QoL. Questions 2 to 15 rate different aspects of QoL on a Likert scale from 0 to 4 (0 = not at all, 1 = a little bit, 2 = somewhat, 3 = quite a bit and 4 = very much), with higher scores representing more severe symptoms. Questions 16 to 24 were excluded from this validation because they are not QoL-related: questions 16 to 18 cover specific symptoms of rare CNS issues that require specialized management, and questions 19 to 24 were designed for medicine administration (the full questionnaire is attached in the appendix). The work reported in this study focuses on the 14 QoL-related questions (questions 2–15), using question 1 as a reference for validation purposes.
The 14 questions were grouped into three subscales: the first measures mobility and consists of questions 1 to 4; the second measures thinking and consists of questions 5 to 8; and the third measures CNS symptoms and consists of questions 9 to 14.
The questionnaire was first administered to patients on an electronic device at their first visit. Patients could choose to answer or skip any question, and their answers were stored electronically in the BC-brain database. In the absence of an electronic device, a printed version was given to the patient and the answers were later entered into the BC-brain database by trained staff. Follow-up took place two weeks to two months after baseline. Patients who returned to a cancer centre answered the questionnaire on an electronic device, while patients who did not were followed up by phone. The minimum interval was set to reduce the chance of patients remembering their previous answers and to allow time for change, and the maximum interval was set to retain a reasonably large sample at follow-up. In this validation work, a questionnaire was considered completed if the patient skipped no more than five questions.
We recruited patients from 5 of BC Cancer’s regional care centres: Abbotsford Centre, Prince George Centre, Kelowna Centre, Surrey Centre and Vancouver Centre. We included patients if they met the following criteria: aged 18 years or older, diagnosed with BrM and were seen at BC Cancer for RT. Patients were excluded if they were under 18 years of age, were diagnosed with a primary brain tumour or if they did not complete the BC-brain questionnaire.
Data linkage was performed with the cancer agency information system (CAIS), using the BC Cancer ID, to retrieve demographic and treatment information. Subscale scores were linearly converted to a 0–100 scale during the analysis for ease of reporting and interpretation, with higher scores representing more severe symptoms and thus worse HRQOL. Item 1 was converted so that a lower score represents better HRQOL on the 0–100 scale, to be conceptually consistent with the other items. Missing data were checked using Little's test and imputation was performed when applicable.
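For readers who want to follow the scoring mechanics, the Python sketch below illustrates one plausible implementation of the conversion just described: the 0–4 Likert items rescaled linearly to 0–100 with higher values meaning worse symptoms, the general item reverse-coded so that lower means better HRQOL, and subscale scores taken as the mean of the answered items. The function names, the use of item means and the handling of skipped items are assumptions for illustration; the paper does not publish its exact scoring code.

# Minimal sketch of the score conversion described above (illustrative only,
# not the official BC Cancer scoring code). Assumptions: each 0-4 Likert item
# is rescaled linearly to 0-100 with higher = worse symptoms; the 0-10 general
# QoL item is reverse-coded so that lower = better HRQOL; a subscale score is
# the mean of its answered items.

from statistics import mean

# Subscale groupings as given in the text: mobility (items 1-4),
# thinking (items 5-8), CNS symptoms (items 9-14).
SUBSCALES = {
    "mobility": range(1, 5),
    "thinking": range(5, 9),
    "cns_symptoms": range(9, 15),
}

def rescale_likert(value, max_raw=4):
    """Convert a 0-4 Likert response to 0-100 (higher = worse symptoms)."""
    return 100.0 * value / max_raw

def rescale_general(value, max_raw=10):
    """Reverse-code the 0-10 general QoL item so that lower = better HRQOL."""
    return 100.0 * (max_raw - value) / max_raw

def subscale_scores(responses):
    """responses: dict mapping item number (1-14) to a 0-4 Likert answer.
    Skipped items are omitted; each subscale is the mean of its answered
    items on the 0-100 scale, and the subtotal is the mean over all items."""
    scores = {}
    for name, items in SUBSCALES.items():
        answered = [rescale_likert(responses[i]) for i in items if i in responses]
        scores[name] = mean(answered) if answered else None
    answered_all = [rescale_likert(v) for v in responses.values()]
    scores["subtotal"] = mean(answered_all) if answered_all else None
    return scores

# Example: a hypothetical patient with mild mobility symptoms who rated
# their overall QoL as 7 out of 10.
example = {1: 1, 2: 0, 3: 2, 4: 1, 5: 0, 6: 0, 7: 1, 8: 0,
           9: 1, 10: 0, 11: 2, 12: 0, 13: 1, 14: 0}
print(rescale_general(7))        # 30.0 on the 0-100 scale (lower = better)
print(subscale_scores(example))  # per-subscale and subtotal scores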
Internal consistency was tested using Cronbach's α, with α ≥ 0.7 considered statistically reliable [23,24]. Convergent and divergent validity were tested by multitrait scaling analysis [25,26,27]. Convergent validity was established when the item–own scale correlation (corrected for overlap) was greater than 0.4; divergent validity was established when the item–other scale correlations were lower than the item–own scale correlation [21,28,29]. The correlations between subscales were also reviewed. It was hypothesised that the three subscales would be moderately correlated with each other, because they are all aspects of HRQOL, while remaining distinct in focus. Their relations with the general QoL question and the subtotal score (corrected for overlap) were also examined to further establish internal consistency and validity. Ranges were reported and floor and ceiling effects were examined; a floor or ceiling effect was considered present when 15% or more of patients scored at the lowest or highest possible value, respectively [30,31].
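The reliability and validity statistics named in this paragraph can be summarised in a few lines of code. The sketch below (Python with NumPy) computes Cronbach's α, item–own scale correlations corrected for overlap, and the 15% floor/ceiling criterion on a toy data matrix; the data, function names and matrix sizes are illustrative and are not the analysis code used in the study.

# Sketch of the reliability and validity statistics described above
# (illustrative only). Thresholds follow the criteria cited in the text:
# Cronbach's alpha >= 0.7, item-own scale correlation > 0.4, and a floor or
# ceiling effect when >= 15% of patients score at the scale extreme.

import numpy as np

def cronbach_alpha(items):
    """items: (n_patients, n_items) array for a single subscale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_own_scale(items):
    """Correlation of each item with its own scale total, corrected for
    overlap by excluding the item from the total."""
    items = np.asarray(items, dtype=float)
    corrs = []
    for j in range(items.shape[1]):
        rest = items.sum(axis=1) - items[:, j]
        corrs.append(np.corrcoef(items[:, j], rest)[0, 1])
    return np.array(corrs)

def floor_ceiling(scores, low=0.0, high=100.0, threshold=0.15):
    """Return (floor_effect, ceiling_effect) flags for 0-100 subscale scores."""
    scores = np.asarray(scores, dtype=float)
    return np.mean(scores == low) >= threshold, np.mean(scores == high) >= threshold

# Toy example: 8 patients x 4 mobility items (0-4 Likert responses).
rng = np.random.default_rng(0)
mobility = rng.integers(0, 5, size=(8, 4))
print(cronbach_alpha(mobility))
print(corrected_item_own_scale(mobility))
print(floor_ceiling(mobility.mean(axis=1) * 25))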
Known-group validity was tested to examine the capability of this questionnaire to differentiate between patient groups [23]. It was hypothesised that patients with a higher KPS (>70) would have better HRQOL in general and thus lower scores than patients with a lower KPS (≤70). KPS was retrospectively collected from the CAIS system within 14 days before or after the questionnaire administration date. It was also hypothesised that patients on dexamethasone would present with worse symptoms and thus have higher scores than those not on dexamethasone. Dexamethasone usage was reported by the patients in question 20 and collected from the BC-brain questionnaire database. A third hypothesis was that HRQOL would not be related to age (≥60 vs. <60 years) or primary site (lung vs. other sites).
Responsiveness was tested to examine the capability of this questionnaire to detect change in HRQOL over time [23,24,32]. Patients were divided into two groups: a KPS-decreased group (a decrease of 10 or more in KPS) and a KPS unchanged-or-increased group (no decrease of 10 or more in KPS). It was hypothesised that patients with improved KPS would have improved QoL and thus report lower scores at follow-up than at baseline. To allow change over time while keeping the loss to follow-up rate low, the interval between the baseline and follow-up KPS assessment dates was required to be at least 1 week.
t-tests, Spearman's correlation, ANOVA or relevant non-parametric tests were employed for comparisons where applicable. Analyses were two-tailed and a p value of 0.05 or less was considered statistically significant. Data were analysed using SPSS 14.0. This research was approved by the University of British Columbia research ethics board.
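As an illustration of how the group comparisons described above might be run, the following SciPy sketch applies an independent-samples t-test for a known-group comparison, a paired t-test for responsiveness, and Spearman's correlation between two subscales. The arrays are randomly generated stand-ins, not the study data, and the specific test chosen for each comparison in the paper may differ (non-parametric alternatives were used where applicable).

# Illustrative group comparisons of 0-100 subscale scores using randomly
# generated stand-in data (not the study data). Two-tailed p <= 0.05 was
# the significance criterion in the study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Known-group validity: patients with KPS <= 70 were hypothesised to score
# higher (worse) than patients with KPS > 70, e.g. on the mobility subscale.
mobility_low_kps = rng.normal(loc=37, scale=26, size=60).clip(0, 100)
mobility_high_kps = rng.normal(loc=18, scale=19, size=120).clip(0, 100)
t, p = stats.ttest_ind(mobility_low_kps, mobility_high_kps, equal_var=False)
print(f"known-group t-test: t={t:.2f}, p={p:.4f}")

# Responsiveness: paired comparison of baseline vs. follow-up scores within
# one KPS-change group.
baseline = rng.normal(loc=24, scale=15, size=50).clip(0, 100)
followup = baseline + rng.normal(loc=-4, scale=10, size=50)
t, p = stats.ttest_rel(followup, baseline)
print(f"responsiveness paired t-test: t={t:.2f}, p={p:.4f}")

# Inter-scale association: Spearman correlation between two subscales.
thinking = rng.normal(loc=17, scale=17, size=180).clip(0, 100)
cns = (0.4 * thinking + rng.normal(loc=15, scale=12, size=180)).clip(0, 100)
rho, p = stats.spearmanr(thinking, cns)
print(f"Spearman rho={rho:.2f}, p={p:.4f}")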

3. Results

3.1. Patient Population

From July 2016 to October 2018, 419 patients were approached and 365 completed the baseline questionnaire. The age of the 365 patients ranged from 23 to 88 years, with a mean of 63.3 years and a standard deviation of 11.8 years. The majority were female. Patient demographic characteristics are reported in Table 1. A total of 54 of the approached patients did not complete the baseline assessment; reasons for incompletion are listed in Table 2. These patients were older (mean age 69.4 vs. 63.3 years, p < 0.001) and had a lower KPS (65.1 vs. 78.3, p < 0.001) than those who completed the baseline administration.
Among the 365 patients who completed the baseline administration, 105 completed the full follow-up questionnaire in person and 138 completed a short questionnaire over the phone; because the short questionnaire had a different construct from the BC-brain questionnaire, these 138 cases were not counted as complete follow-ups in this validation work. The remaining 122 patients completed neither the long nor the short follow-up questionnaire. They had a lower KPS (74 vs. 80, p < 0.001) and higher baseline scores on the general question, mobility, CNS symptoms and the subtotal (36 vs. 30, p = 0.01; 31 vs. 21, p = 0.001; 28 vs. 23, p = 0.002; and 26 vs. 21, p = 0.001, respectively) than patients who completed the follow-up.

3.2. Summary Scores

A summary of the subscale scores is presented in Table 3. Missing data were low for all items. The number of skipped questions per patient ranged from zero to five, with two patients skipping five questions. Question 1 had the highest missing rate: seven patients (2%) skipped this question. Mobility and CNS symptoms had higher mean scores than thinking. All scores were low in absolute value relative to the 0–100 range.
At baseline, no ceiling effects were observed in any subscale. Floor effects were present in all subscales, most markedly in thinking. The same was true at follow-up.

3.3. Reliability

Internal consistency was assessed using Cronbach's α, and the results are presented in Table 3. Mobility, thinking and the subtotal showed high reliability. At follow-up, internal consistency remained satisfactory for mobility, thinking and the subtotal, with CNS symptoms being the exception.

3.4. Validity

Item–scale correlations are listed in Table 3. Mobility and thinking showed excellent convergent validity, with item–own scale correlations greater than 0.4 (the one exception being the item "Do you feel hyper and/or agitated", which had an item–own scale correlation of 0.34), and good divergent validity, with item–own scale correlations greater than item–other scale correlations. CNS symptoms showed fair convergent and divergent validity.
Inter-scale correlations are reported in Table 4. Correlations at baseline are presented below the diagonal and correlations at follow-up above the diagonal. Correlations among the subscales and the subtotal were corrected for overlap. Correlations were high among the three subscales, all greater than or approaching 0.4, and all subscales correlated well with the general QoL question and with the subtotal.

3.5. Known Group Differentiation

At baseline, 334 patients gave a clear answer as to whether or not they were on dexamethasone, as did 101 patients at follow-up. Comparisons of summary scores between groups are reported in Table 5.
The group on dexamethasone had statistically significantly higher scores than the group not on dexamethasone on all subscales, the general question and the subtotal. This remained true for mobility, CNS symptoms and the subtotal at follow-up.
Patients in the low KPS group (70 and below) had statistically significantly higher scores on all three subscales, the general question and the subtotal than patients in the high KPS group (80 and above). These differences also remained significant at follow-up.
No differences were detected among the age groups or the primary site groups.

3.6. Responsiveness

All 105 patients met the criterion for the minimum interval between KPS assessments. In total, 51 patients had a decreased KPS and 54 patients had an unchanged or increased KPS. In the KPS-decreased group, the score on the general question increased (40 vs. 28, p = 0.002), although the CNS symptoms score decreased slightly (22 vs. 27, p = 0.041). In the KPS unchanged-or-increased group, scores decreased for CNS symptoms (18 vs. 24, p = 0.006) and the subtotal (16 vs. 21, p = 0.003). No significant changes were detected in the other subscales.

4. Discussion

This provincial study demonstrated good validity of the multi-dimensional BC-brain questionnaire developed for patients receiving radiotherapy. The questionnaire was designed for use in both primary and metastatic brain tumour patients and is short in length. In particular, it was developed for routine clinical care, in contrast to many other questionnaires that were designed for clinical trials. This is the first study to validate the questionnaire in patients with metastatic brain tumours.
We observed floor effects but no ceiling effects. This could be explained by the fact that not all cancer patients present with severe symptoms, or that their mobility and thinking functions were only mildly affected. It is also in agreement with their generally high KPS scores. Similar observations were reported for the EORTC QLQ-BN20, where all dimensions had floor effects but no ceiling effects [33,34].
The inter-scale correlation analysis showed close correlations among the three subscales. This agreed with our hypothesis that the three subscales are non-orthogonal key aspects of HRQOL: they are closely related to each other while still reflecting distinct aspects of HRQOL.
Responsiveness over time was not supported when patients were stratified by their KPS changes. Several previous studies reported a "lack of significant changes in scores over time" and "no significant changes in the FACT-G and FACT-Br subscales except for in the physical well-being subscale of the FACT-G" in BrM populations [3,33]. However, other reports observed increased levels of emotional distress, future uncertainty, visual disorder, motor dysfunction, seizures, drowsiness and weakness of both legs in brain cancer patients whose KPS had deteriorated [19]. Our observation on responsiveness could be a result of the following factors: certain BrM patients with rapidly deteriorating health status may be too ill to fill out questionnaires, so those able to complete the follow-up may have relatively high and stable HRQOL [3,17,19,35]; additionally, palliative treatments were actively taking place between baseline and follow-up and may have contributed to the decline in the summary scores [3,35].
This study should be interpreted in the context of its strengths and limitations. We had a large sample size compared with many previous studies [3,17,23,36,37,38,39], and it met the recommendations on sample size for PRO measures in the literature [40,41,42,43]. The large sample size was a direct benefit of integrating the questionnaire into the BC Cancer Prospective Outcomes and Support Initiative (POSI) system [13]. The POSI system is used across BC Cancer to collect reliable, accurate and tractable patient-reported outcomes, although one centre did not adopt the BC-brain questionnaire. Data were collected and managed in a systematic manner by trained staff, and the dataset grew on a daily basis. Thanks to the adoption of the electronic medical record, demographic and treatment information was retrieved from existing systems without patient involvement, which reduced patient burden and avoided recall bias. One limitation of the study is the loss to follow-up. Loss to follow-up has been a long-standing issue in patient-reported outcome research and is also apparent in metastatic brain tumour populations [3,19,35]. Our results showed that patients who did not complete the questionnaire had a lower KPS, which may suggest that they were too unwell to complete it. Overall, the population-based nature of this study is a particular strength with respect to patient coverage and generalizability.

5. Conclusions

The BC-brain questionnaire was designed to be clinically useful in guiding patient care. It has good reliability and validity, is short and is easy to administer without adding much patient burden in routine clinical care. It can serve as one option among PRO instruments to measure quality of life in BrM patients treated with radiotherapy. Future validation work could seek to improve data collection from patients during the follow-up phase, and future research could explore the feasibility of including HRQOL in patient selection and treatment decision making for BrM patients.

Author Contributions

Data curation, L.Y., A.N. and R.O.; Formal analysis, L.Y.; Methodology, R.O.; Project administration, A.N. and R.O.; Supervision, A.N. and R.O.; Writing—original draft, L.Y.; Writing—review and editing, A.N. and R.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of the University of British Columbia, British Columbia Cancer Agency Research Ethics Board (UBC BCCA REB; H14-00647, January 2022).

Informed Consent Statement

Patient consent was waived due to the retrospective nature of the study and the fact that the majority of subjects were deceased at the time of study completion.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to REB approval and BC Cancer Data Access Requests guidelines.

Conflicts of Interest

Robert Olson and Alan Nichol have received grant funding from Varian Medical Systems.

References

  1. Nayak, L.; Lee, E.Q.; Wen, P.Y. Epidemiology of brain metastases. Curr. Oncol. Rep. 2012, 14, 48–54. [Google Scholar] [CrossRef] [PubMed]
  2. Wong, J.; Hird, A.; Kirou-Mauro, A.; Napolskikh, J.; Chow, E. Quality of life in brain metastases radiation trials: A literature review. Curr. Oncol. 2008, 15, 25. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Thavarajah, N.; Ray, S.; Bedard, G.; Zhang, L.; Cella, D.; Wong, E.; Danjoux, C.; Tsao, M.; Barnes, E.; Sahgal, A.; et al. Psychometric validation of the Brain Symptom and Impact Questionnaire (BASIQ) version 1.0 to assess quality of life in patients with brain metastases. CNS Oncol. 2015, 4, 11–23. [Google Scholar] [CrossRef] [PubMed]
  4. Rodin, D.; Banihashemi, B.; Wang, L.; Lau, A.; Harris, S.; Levin, W.; Dinniwell, R.; Millar, B.A.; Chung, C.; Laperriere, N.; et al. The Brain Metastases Symptom Checklist as a novel tool for symptom measurement in patients with brain metastases undergoing whole-brain radiotherapy. Current Oncol. 2016, 23, e239. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Samlowski, W.E.; Watson, G.A.; Wang, M.; Rao, G.; Klimo, P., Jr.; Boucher, K.; Shrieve, D.C.; Jensen, R.L. Multimodality Treatment of Melanoma Brain Metastases Incorporating Stereotactic Radiosurgery (SRS). Cancer 2007, 109, 1855–1862. [Google Scholar] [CrossRef] [PubMed]
  6. Yamada, Y.; Chang, E.; Fiveash, J.B.; Knisely, J. Radiotherapy in Managing Brain Metastases a Case-Based Approach; Springer International Publishing: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  7. Karnofsky, D.A.; Abelmann, W.H.; Craver, L.F.; Burchenal, J.H. The use of the nitrogen mustards in the palliative treatment of carcinoma. With particular reference to bronchogenic carcinoma. Cancer 1948, 1, 634–656. [Google Scholar] [CrossRef]
  8. Karnofsky, D.A. The clinical evaluation of chemotherapeutic agents in cancer. In Evaluation of Chemotherapeutic Agents; CiNii: Tokyo, Japan, 1949; pp. 191–205. [Google Scholar]
  9. Oken, M.M.; Creech, R.H.; Tormey, D.C.; Horton, J.; Davis, T.E.; Mcfadden, E.T.; Carbone, P.P. Toxicity and response criteria of the Eastern Cooperative Oncology Group. Am. J. Clin. Oncol. 1982, 5, 649–656. [Google Scholar] [CrossRef]
  10. Sitlinger, A.; Zafar, S.Y. Health-related quality of life: The impact on morbidity and mortality. Surg. Oncol. Clin. 2018, 27, 675–684. [Google Scholar] [CrossRef]
  11. Fiteni, F.; le Ray, I.; Ousmen, A.; Isambert, N.; Anota, A.; Bonnetain, F. Health-related quality of life as an endpoint in oncology phase I trials: A systematic review. BMC Cancer 2019, 19, 361. [Google Scholar] [CrossRef]
  12. U.S. Food and Drug Administration. Guidance For Industry Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims; U.S. Food and Drug Administration: Silver Spring, MD, USA, 2009.
  13. Olson, R.A.; Howard, F.; Lapointe, V.; Schellenberg, D.; Nichol, A.; Bowering, G.; Curtis, S.; Walter, A.; Brown, S.; Thompson, C.; et al. Provincial development of a patient-reported outcome initiative to guide patient care, quality improvement, and research. Healthc. Manag. Forum 2018, 31, 13–17. [Google Scholar] [CrossRef] [Green Version]
  14. Caissie, A.; Brown, E.; Olson, R.A.; Barbera, L.; Davis, C.A.; Brundage, M.; Milosevic, M. Improving patient outcomes and radiotherapy systems: A pan-Canadian approach to patient-reported outcome use. Med. Phys. 2018, 45, e841–e844. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Faria, R.; Langleben, A.; Kushneryk, A.; Wilson, J. Real world implementation of patient report outcomes: Sustainability constraints and impact on patients health outcomes. J. Clin. Oncol. 2019, 37, 291. [Google Scholar] [CrossRef]
  16. Rivera, S.C.; Kyte, D.G.; Aiyegbusi, O.L.; Slade, A.L.; McMullan, C.; Calvert, M.J. The impact of patient-reported outcome (PRO) data from clinical trials: A systematic review and critical analysis. Health Qual. Life Outcomes 2019, 17, 156. [Google Scholar] [CrossRef] [PubMed]
  17. Weitzner, M.A.; Meyers, C.A.; Gelke, C.K.; Byrne, K.S.; Levin, V.A.; Cella, D.F. The functional assessment of cancer therapy (FACT) scale. Development of a brain subscale and revalidation of the general version (FACT-G) in patients with primary brain tumors. Cancer 1995, 75, 1151–1161. [Google Scholar] [CrossRef]
  18. Cella, D.F.; Tulsky, D.S.; Gray, G.; Sarafian, B.; Linn, E.; Bonomi, A.; Silberman, M.; Yellen, S.B.; Winicour, P.; Brannon, J.; et al. The Functional Assessment of Cancer Therapy scale: Development and validation of the general measure. J. Clin. Oncol. 1993, 11, 570–579. [Google Scholar] [CrossRef] [PubMed]
  19. Osoba, D.; Aaronson, N.K.; Muller, M.; Sneeuw, K.; Hsu, M.A.; Yung, W.A.; Brada, M.; Newlands, E. The development and psychometric validation of a brain cancer quality-of-life questionnaire for use in combination with general cancer-specific questionnaires. Qual. Life Res. 1996, 5, 139–150. [Google Scholar] [CrossRef] [PubMed]
  20. Aaronson, N.K.; Ahmedzai, S.; Bergman, B.; Bullinger, M.; Cull, A.; Duez, N.J.; Filiberti, A.; Flechtner, H.; Fleishman, S.B.; de Haes, J.C.; et al. The European Organization for Research and Treatment of Cancer QLQ-C30: A quality-of-life instrument for use in international clinical trials in oncology. JNCI J. Natl. Cancer Inst. 1993, 85, 365–376. [Google Scholar] [CrossRef]
  21. Armstrong, T.S.; Mendoza, T.; Gring, I.; Coco, C.; Cohen, M.Z.; Eriksen, L.; Hsu, M.; Gilbert, M.R.; Cleeland, C. Validation of the M.D. Anderson Symptom Inventory Brain Tumor Module (MDASI-BT). J. Neuro-Oncol. 2006, 80, 27–35. [Google Scholar] [CrossRef]
  22. A comprehensive Cancer Control Program for BC. Available online: http://www.bccancer.bc.ca/ (accessed on 28 February 2022).
  23. Tsang, S.; Royse, C.F.; Terkawi, A.S. Guidelines for developing, translating, and validating a questionnaire in perioperative and pain medicine. Saudi J. Anaesth. 2017, 11 (Suppl. S1), S80. [Google Scholar] [CrossRef]
  24. Cronbach, L.J. Coefficient alpha and the internal structure of tests. Psychometrika 1951, 16, 297–334. [Google Scholar] [CrossRef] [Green Version]
  25. Hays, R.D.; Hayashi, T. Beyond internal consistency reliability: Rationale and user’s guide for multitrait analysis program on the microcomputer. Behav. Res. Methods Instrum. Comput. 1990, 22, 167–175. [Google Scholar] [CrossRef]
  26. Howard, K.I.; Forehand, G.A. A method for correcting item-total correlations for the effect of relevant item inclusion. Educ. Psychol. Meas. 1962, 22, 731–735. [Google Scholar] [CrossRef]
  27. Ware, J.E., Jr.; Snyder, M.K.; Wright, W.R.; Davies, A.R. Defining and measuring patient satisfaction with medical care. Eval. Program Plan. 1983, 6, 247–263. [Google Scholar]
  28. Shin, Y.S.; Kim, J.H. Validation of the Korean version of the European Organization for Research and Treatment of Cancer brain cancer module (EORTC QLQ-BN20) in patients with brain tumors. Health Qual. Life Outcomes 2013, 11, 145. [Google Scholar] [CrossRef] [Green Version]
  29. Arraras, J.I.; de la Vega, F.A.; Asin, G.; Rico, M.; Zarandona, U.; Eito, C.; Cambra, K.; Barrondo, M.; Errasti, M.; Verdún, J.; et al. The EORTC QLQ-C15-PAL questionnaire: Validation study for Spanish bone metastases patients. Qual. Life Res. 2014, 23, 849–855. [Google Scholar] [CrossRef]
  30. Van Dijk, M.J.; Groen, W.G.; Moors, S.C.; Bekkering, W.P.; Hegeman, A.K.; Janssen, A.; van der Net, J. The Dutch translation of the Childhood Health Assessment Questionnaire: An explorative study of the ceiling effect. Pediatr. Rheumatol. 2008, 6, P110. [Google Scholar] [CrossRef] [Green Version]
  31. Groen, W.; Ünal, E.; Nørgaard, M.; Maillard, S.; Scott, J.; Berggren, K.; Sandstedt, E.; Stavrakidou, M.; Van der Net, J. Comparing different revisions of the Childhood Health Assessment Questionnaire to reduce the ceiling effect and improve score distribution: Data from a multi-center European cohort study of children with JIA. Pediatr. Rheumatol. 2010, 8, 16. [Google Scholar]
  32. Terwee, C.B.; Dekker, F.W.; Wiersinga, W.M.; Prummel, M.F.; Bossuyt, P.M. On assessing responsiveness of health-related quality of life instruments: Guidelines for instrument evaluation. Qual. Life Res. 2003, 12, 349–362. [Google Scholar] [CrossRef]
  33. Zhang, K.; Tian, J.; He, Z.; Sun, W.; Pekbay, B.; Lin, Y.; Wu, D.; Zhang, J.; Chen, P.; Guo, H.; et al. Validation of the Chinese version of EORTC QLQ-BN 20 for patients with brain cancer. Eur. J. Cancer Care 2018, 27, e12832. [Google Scholar] [CrossRef]
  34. Taphoorn, M.J.; Claassens, L.; Aaronson, N.K.; Coens, C.; Mauer, M.; Osoba, D.; Stupp, R.; Mirimanoff, R.O.; van den Bent, M.J.; Bottomley, A.; et al. An international validation study of the EORTC brain cancer module (EORTC QLQ-BN20) for assessing health-related quality of life and symptoms in brain cancer patients. Eur. J. Cancer 2010, 46, 1033–1040. [Google Scholar]
  35. Pulenzas, N.; Ray, S.; Zhang, L.; McDonald, R.; Cella, D.; Rowbottom, L.; Sahgal, A.; Soliman, H.; Tsao, M.; Danjoux, C.; et al. The Brain Symptom and Impact Questionnaire in brain metastases patients: A prospective long-term follow-up study. CNS Oncol. 2016, 5, 31–40. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Gazzotti, M.R.; Alith, M.B.; Malheiros, S.M.; Vidotto, M.C.; Jardim, J.R.; Nascimento, O.A. Functional Assessment of Cancer Therapy-Brain questionnaire: Translation and linguistic adaptation to Brazilian Portuguese. Sao Paulo Med. J. 2011, 129, 230–235. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Arli, S.K.; Gurkan, A. Validity and Reliability of Turkish Version of the Functional Assessment of Cancer Therapy–Brain Questionnaire. Cancer Nurs. 2017, 40, 224–229. [Google Scholar] [CrossRef] [PubMed]
  38. Khoshnevisan, A.; Yekaninejad, M.S.; Ardakani, S.K.; Pakpour, A.H.; Mardani, A.; Aaronson, N.K. Translation and validation of the EORTC brain cancer module (EORTC QLQ-BN20) for use in Iran. Health Qual. Life Outcomes 2012, 10, 54. [Google Scholar] [CrossRef] [Green Version]
  39. Chmielowska, K.; Tomaszewski, K.A.; Pogrzebielski, A.; Brandberg, Y.; Romanowska-Dixon, B. Translation and validation of the Polish version of the EORTC QLQ-OPT30 module for the assessment of health-related quality of life in patients with uveal melanoma. Eur. J. Cancer Care 2013, 22, 88–96. [Google Scholar] [CrossRef]
  40. Perneger, T.V.; Courvoisier, D.S.; Hudelson, P.M.; Gayet-Ageron, A. Sample size for pre-tests of questionnaires. Qual. Life Res. 2015, 24, 147–151. [Google Scholar] [CrossRef] [Green Version]
  41. Blair, J.; Conrad, F.G. Sample size for cognitive interview pretesting. Public Opin. Q. 2011, 75, 636–658. [Google Scholar] [CrossRef]
  42. Anthoine, E.; Moret, L.; Regnault, A.; Sébille, V.; Hardouin, J.B. Sample size used to validate a scale: A review of publications on newly-developed patient reported outcomes measures. Health Qual. Life Outcomes 2014, 12, 2. [Google Scholar] [CrossRef] [Green Version]
  43. Boynton, P.M.; Greenhalgh, T. Selecting, designing, and developing your questionnaire. BMJ 2004, 328, 1312–1315. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Solitary BrM on T1-weighted MRI with gadolinium.
Table 1. Characteristics of patient population.

Characteristics                    No. of Patients (Percent)
Gender
  Female                           211 (58%)
  Male                             154 (42%)
KPS
  90–100                           171 (47%)
  70–80                            129 (35%)
  50–60                            53 (15%)
  <50                              12 (3%)
Primary Tumour Site
  Lung                             207 (57%)
  Breast                           64 (18%)
  Melanoma                         31 (9%)
  Gastro-intestinal                23 (6%)
  Genito-urinary                   20 (5%)
  Others                           20 (5%)
Treatment Centre
  Abbotsford Centre                50 (14%)
  Prince George Centre             26 (7%)
  Surrey Centre                    30 (8%)
  Kelowna Centre                   12 (3%)
  Vancouver Centre                 247 (68%)
Interpreter Used
  Yes                              35 (10%)
  No                               330 (90%)
Follow-up
  Complete in person               105 (29%)
  Complete over the phone          138 (38%)
  Incomplete                       122 (33%)
Table 2. Reason for incompletion.

Reason for Incompletion            Count (Percent)
Baseline (n = 54)
  Declined                         27 (50%)
  Unfit/unresponsive               13 (24%)
  No interpreter                   6 (11%)
  Missed                           4 (7%)
  In hospital/hospice              1 (2%)
  No reason                        3 (6%)
Follow-up (n = 122)
  Unable to contact                31 (25%)
  In hospital/hospice              28 (23%)
  No record in system              26 (21%)
  Deceased                         15 (12%)
  Not treated                      6 (5%)
  Incorrectly coded as complete    5 (4%)
  No interpreter                   3 (2%)
  Unfit/unresponsive               2 (2%)
  Declined                         2 (2%)
  First attempt                    2 (2%)
  Missed                           1 (1%)
  Second attempt                   1 (1%)
Table 3. Summary scores.

Subscale        Mean (SD)     Floor No. (%)   Ceiling No. (%)   Range    Cronbach's α   Item–Own Scale Correlation   Item–Other Scale Correlation
Baseline
  General       31.8 (20.4)   39 (11%)        1 (0%)            0–100    --             --                           --
  Mobility      24.4 (23.3)   202 (55%)       13 (4%)           0–100    0.85           0.56–0.73                    0.29–0.44
  Thinking      17.6 (17.0)   253 (69%)       2 (1%)            0–87.5   0.76           0.34–0.62                    0.26–0.48
  CNS Symptoms  24.8 (15.2)   142 (39%)       0 (0%)            0–75     0.63           0.25–0.48                    0.15–0.57
  Subtotal      22.6 (14.3)   187 (51%)       0 (0%)            0–75     0.83           --                           --
Follow-up
  General       37.6 (19.9)   4 (4%)          0 (0%)            0–90     --             --                           --
  Mobility      22.7 (24.9)   67 (64%)        5 (5%)            0–100    0.88           0.53–0.73                    0.28–0.52
  Thinking      14.7 (19.3)   81 (77%)        1 (1%)            0–81.3   0.81           0.32–0.64                    0.20–0.54
  CNS Symptoms  19.7 (13.1)   57 (54%)        0 (0%)            0–50     0.49           0.14–0.38                    0.03–0.44
  Subtotal      19.2 (14.7)   69 (66%)        0 (0%)            0–57.1   0.84           --                           --
Table 4. Inter-scale correlations.

Baseline \ Follow-Up   General   Mobility   Thinking   CNS Symptoms   Subtotal
General                --        0.48       0.42       0.40           0.54
Mobility               0.46      --         0.60       0.38           0.60
Thinking               0.26      0.44       --         0.42           0.63
CNS Symptoms           0.40      0.43       0.44       --             0.44
Subtotal               0.49      0.51       0.52       0.51           --
All correlations are significant at the 0.01 level (2-tailed). Baseline correlations are shown below the diagonal and follow-up correlations above the diagonal.
Table 5. Known-group comparison.

                     General        Mobility       Thinking       CNS Symptoms   Subtotal
                     Mean (SD)      Mean (SD)      Mean (SD)      Mean (SD)      Mean (SD)
Baseline
  Dex   No use       28.2 (18.1)    16.8 (19.1)    14.8 (15.9)    22.0 (14.9)    18.5 (13.5)
        Use          33.7 (21.1)    29.1 (24.4)    20.0 (18.0)    27.0 (15.3)    25.6 (14.4)
        p            0.010          <0.001         0.006          0.003          <0.001
  KPS   >70          27.9 (18.4)    18.1 (18.8)    15.6 (16.0)    22.6 (14.4)    19.3 (12.6)
        ≤70          39.6 (22.0)    37.3 (26.3)    21.6 (18.3)    29.1 (16.0)    29.3 (15.2)
        p            <0.001         <0.001         0.001          <0.001         <0.001
Follow-up
  Dex   No use       35.1 (18.6)    18.5 (24.5)    11.9 (16.4)    17.3 (12.7)    16.1 (13.5)
        Use          41.7 (21.0)    28.7 (25.2)    19.7 (22.6)    22.6 (13.0)    23.9 (15.4)
        p            0.097          0.045          0.058          0.018          0.010
  KPS   >70          31.5 (17.6)    13.2 (16.2)    9.7 (14.8)     17.3 (13.2)    13.9 (11.7)
        ≤70          45.7 (20.1)    35.4 (28.7)    21.5 (22.4)    23.0 (12.5)    26.1 (15.4)
        p            <0.001         <0.001         0.003          0.026          <0.001
