Phase 1 oncology trials are mainly designed to evaluate the toxicity and pharmacokinetic profiles of investigational agents in order to determine the appropriate dose for subsequent phase 2 testing. Phase 1 trial participants typically have advanced cancer for which standard therapies are either not available or have been exhausted. Though toxic deaths are rare at ~0.5% [1
], nonfatal serious toxicities are often encountered. Overall, nonfatal serious grade 3 or 4 treatment-related toxicities occurred in approximately 10% of 6474 patients who participated in phase 1 clinical trials reported between 1991 and 2002 [2
]. More recently, phase 1 trials involving molecular targeted agents have a similar estimated grade 3 or 4 toxicity rate of 10–15% [3].
Much research has been dedicated to selecting the “fittest” of the oncologic population for early-phase clinical trials. This is not only because of the magnitude of possible toxicities these patients face but also because replacement of patients during the dose-escalation phase, whether due to early clinical deterioration or to non-treatment-related serious adverse events (SAEs), is a common logistical issue that prolongs study duration given the nature of dose-escalation designs. Prior research has identified organ function, tobacco use and performance status as prognostic factors for toxicity independent of the dose administered, while lactate dehydrogenase (LDH) levels and performance status are prognostic factors for survival [5
]. However, while patient performance status and organ function data are routinely used to determine eligibility for phase 1 study involvement, a recent retrospective review showed a 50% SAE rate in cycle 1 among patients participating in phase 2 trials of molecularly-targeted agents [7
]. This demonstrates a gap in our current process for optimizing patient selection to minimize non-treatment related AEs.
The Royal Marsden prognostic score (RMS) was developed in a British center after a retrospective review of 212 phase 1 patients identified LDH, number of metastatic sites and hypoalbuminemia as independent negative prognostic factors for overall survival [8
]. It was subsequently reported to be helpful in prospectively evaluating a selected cohort of phase 1 patients [9
]. However, RMS is still limited by its crude prognostication ability and by its impact of reducing recruitment to phase 1 studies, and thus it is not routinely incorporated into screening procedures for inclusion of patients in phase 1 trials [10].
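To make the score concrete, the RMS sums one point for each of the three factors named above. The exact cutoffs below (albumin < 35 g/L, LDH above the upper limit of normal, more than two metastatic sites) follow the commonly cited published definition and should be treated as an illustrative sketch rather than a clinical tool:

```python
def royal_marsden_score(albumin_g_per_l, ldh, ldh_upper_limit, num_metastatic_sites):
    """Illustrative Royal Marsden prognostic score (0-3).

    One point each for hypoalbuminemia, elevated LDH and >2 metastatic
    sites; cutoffs follow the commonly cited published definition.
    """
    score = 0
    if albumin_g_per_l < 35:       # hypoalbuminemia
        score += 1
    if ldh > ldh_upper_limit:      # LDH above upper limit of normal
        score += 1
    if num_metastatic_sites > 2:   # extensive metastatic disease
        score += 1
    return score
```

A score of 0–1 is conventionally read as the good-prognosis group and 2–3 as the poor-prognosis group.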
The Charlson Comorbidity Index (CCI) is another well-validated measure, developed from a longitudinal study of over 500 patients. It assigns a weighted score for certain medical conditions that co-exist for each patient that affect overall mortality [11
]. A regression model was subsequently created that predicts the occurrence of SAEs during the first cycle of phase 2 trials based on albumin, LDH, number of target lesions, age, performance status and CCI score [7
]. However, while CCI has been extensively applied in health services research in cancer patients, its utility in predicting short-term outcomes appears more limited [12].
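The mechanics of the CCI are simple: each qualifying comorbid condition carries a fixed weight, and the weights are summed. The sketch below shows only an illustrative subset of the published conditions and weights:

```python
# Illustrative subset of the published Charlson weights; the full index
# covers additional conditions (and some variants add age adjustment).
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "diabetes": 1,
    "moderate_severe_renal_disease": 2,
    "any_malignancy": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6,
}

def charlson_index(conditions):
    """Sum the weights of the patient's documented comorbidities."""
    return sum(CHARLSON_WEIGHTS.get(c, 0) for c in conditions)
```

For example, a patient with diabetes and a metastatic solid tumor would score 1 + 6 = 7 under this subset.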
Quality-of-life (QOL) outcome measurements have an established role in oncology clinical trials, including the drug approval process, especially when survival outcomes compared to standard of care are not significantly different [15
]. It has also been shown that among general cancer patients receiving cytotoxic chemotherapy, patients with high QOL lived significantly longer than patients with low QOL, particularly in patients with metastatic disease [17
]. Despite its potential prognostic role, there is lack of data on the utility of QOL evaluation for patient selection in phase 1 oncology clinical trials.
Social support, though often overlooked by physicians, seems to also play an instrumental role in outcomes for cancer patients. Cancer patients with poor social support suffer from increased rates of depression [18
] and decreased compliance with treatment [19
], making them more susceptible to disease progression [20
] and increased mortality [21].
We thus aimed to prospectively evaluate whether assessment of QOL and social support at the time of study screening can serve as a tool for evaluating patient fitness and the risk of SAEs and subject replacement in phase 1 studies.
Between September 2011 and August 2013, patients ≥18 years of age with histologically or cytologically confirmed solid tumors were approached for participation in this prospective, observational study at the time of screening for any of 22 phase 1 clinical trials at Roswell Park Cancer Institute, excluding phase 1 trials involving regional therapies such as radiation, surgery or photodynamic therapy (Supplementary Table S1
). Patients unable to read or understand English were excluded. Informed consent to this study meeting Federal and Institutional requirements was obtained from each patient prior to registration. Institutional review board approval was obtained for this study.
QOL and social support questionnaires were administered at baseline, at any time between screening for the phase 1 trial and the first treatment day. Questionnaires were thereafter administered on day 1 of each subsequent treatment cycle through day 1 of cycle 4 if the patient was still enrolled in the therapeutic study. The three questionnaires used were the Functional Assessment of Cancer Therapy-General (FACT-G) [22
], European Organization for Research and Treatment of Cancer Quality of Life Questionnaire-Core 30 (EORTC QLQ-C30) version 3 [23
] and Medical Outcomes Study Social Support Survey (MOSSSS) [24
]. RMS and CCI were determined at baseline.
Dose-limiting toxicities (DLTs), as defined by the respective treatment study to which the patient was enrolled, the need for patient replacement in the interventional phase 1 trial, and all SAEs occurring within the first four cycles were collected. An SAE was defined as any CTC version 4 grade 3 or higher toxicity, regardless of treatment attribution. All consented patients with available baseline LDH levels for determination of RMS were included in the relevant statistical analysis. Toxicity data, specifically protocol-defined DLTs and SAEs, were prospectively collected from the weekly phase 1 safety meeting minutes and verified by chart review.
A linear transformation was used to standardize the raw QOL score, so that scores range from 0 to 100, with 100 representing the highest level of functioning possible [8
]. Based on an informal review of published data, up to 25% of patients enrolled in phase 1 studies had to be “replaced” during the dose-escalation phase. Thus, with a sample size of 100 patients accrued to the study, there would be approximately 80% power at a 0.05 significance level to detect a minimum odds ratio of 1.9 for patient observations one standard deviation from the mean QOL score.
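The 0–100 standardization mentioned above follows the scale-scoring convention of the EORTC scoring manual: compute the mean item response, then linearly map it onto 0–100, inverting the direction for functional scales so that 100 represents the highest level of functioning. A minimal sketch, assuming 4-point items (range = 3):

```python
def eortc_scale_score(item_responses, item_range, functional=True):
    """Linearly transform a raw EORTC QLQ-C30 scale to 0-100.

    item_responses: the patient's answers on one scale's items.
    item_range: max minus min possible item value (3 for 4-point items).
    Functional scales are inverted so higher = better functioning;
    symptom scales keep higher = more symptomatic.
    """
    raw = sum(item_responses) / len(item_responses)  # mean item score
    if functional:
        return (1 - (raw - 1) / item_range) * 100
    return ((raw - 1) / item_range) * 100
```

For instance, answering every item of a functional scale at the best end (all 1s on 4-point items) maps to 100, and at the worst end (all 4s) maps to 0.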
Descriptive statistics such as frequencies and relative frequencies were computed for categorical variables. Numeric variables were summarized using simple descriptive statistics such as the mean, standard deviation, median and range. The Mann-Whitney-Wilcoxon test was used to test for significant differences between groups for numeric variables, and Fisher’s exact test for categorical variables. Overall survival distributions were estimated using the Kaplan-Meier method, from which summary statistics such as the median survival and its 95% confidence interval were obtained. Observed differences in the survival distributions of groups of interest were assessed using the log-rank test. The Cox proportional hazards model was used to assess the effect of numeric variables in survival analyses. All tests were two-sided at a 0.05 nominal significance level. SAS version 9.4 statistical software (SAS Inc., Cary, NC, USA) was used for all statistical analyses.
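As an illustration of one of these tests, the two-sided Fisher's exact p-value for a 2×2 table (e.g. SAE vs. no SAE by QOL group) can be computed directly from the hypergeometric distribution. The analyses themselves used SAS; this standalone Python sketch is for exposition only and is not a validated statistical routine:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p-value for [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are at least as extreme (no more probable) than the
    observed table.
    """
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def prob(k):  # P(top-left cell equals k) under fixed margins
        return comb(r1, k) * comb(r2, c1 - k) / comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))
```

A perfectly balanced table, e.g. `fisher_exact_2x2(5, 5, 5, 5)`, gives a p-value of 1, while a maximally unbalanced one such as `fisher_exact_2x2(10, 0, 0, 10)` gives a very small p-value.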
While previous research has highlighted the longitudinal prognostic impact of QOL in phases 1–3 trials [26
], this study is the first to demonstrate that baseline QOL scores obtained with a validated tool such as the EORTC QLQ-C30 questionnaire may independently provide an objective measurement of each patient’s risk of incurring an SAE during phase 1 trial participation. Differences in the individual components of the EORTC QLQ-C30 were not substantial on their own, but the cumulative QOL score achieved statistical significance, leading to the conclusion that patients with better median QOL scores incur SAEs less frequently than those with lower scores.
As the EORTC QLQ-C30 questionnaire measures somatic and psychological symptoms, functional status and the overall health of the individual, it would be reasonable to assume that a patient with lower scores is more likely to have a worse mental and physical disease burden and to demonstrate abnormalities at the biochemical level, such as increased interleukin-2 and interleukin-6 levels, anemia and hypoalbuminemia, all of which are known to contribute to poor QOL in cancer patients [29
]. These factors, combined with many others, likely predispose such individuals to SAEs with experimental chemotherapy.
This study found that, unlike the EORTC QLQ-C30, the FACT-G QOL questionnaire was not able to stratify patients according to their risk of SAEs during the trial. The reason may be that, although both questionnaires are validated measures of QOL in cancer patients, there are significant differences in their structure, their social domain questions and their overall tone [35
]. As an example, FACT-G asks patients to reflect on their thoughts and feelings whereas the QLQ-C30 questions focus on more objective aspects of functioning [35
]. Therefore, despite considerable overlap, neither of these two QOL questionnaires can be replaced by the other, nor can a direct comparison between their results be made [36].
Another interesting observation is that MOSSSS scores were higher by cycle 3 compared to baseline. One may hypothesize that this indirectly reflects that patients with better social support are more likely to stay on study treatment, particularly in phase 1 studies, wherein more research-related tests and procedures are generally involved during the first one or two cycles of treatment. However, this is not supported by our other observation that MOSSSS scores were higher among patients who were replaced than among those who were not in the seven phase 1 trials that incurred subject replacement. One possibility is that a patient with a comparatively lower score may look upon the care team itself as an important source of social support and thus remain in the study even if non-DLT toxicities are encountered or additional research-related tests must be repeated, whereas a patient with a higher score may, in such circumstances, be negatively influenced by caregivers to withdraw from the study early.
We believe this study provides a rationale for clinicians to consider stratifying potential enrollees in phase 1 oncologic trials according to their baseline QOL. Obtaining QOL scores can be done in a timely manner, averaging 11 min, with most patients not requiring assistance [37
], and is typically less burdensome to patients than blood draws or CT scans and also more cost-effective. The need for additional measures for patient inclusion in phase 1 trials is highlighted by the fact that despite strict criteria involving performance status, organ function, and LDH levels built into eligibility requirements for phase 1 clinical trials in oncology, there continues to be a considerable degree of early trial discontinuation and patient replacement during phase 1 trials (16% in the series reported by Olmos et al. [10
]). More recently, a simplified risk score was proposed to identify patients at risk for early discontinuation prior to study enrollment to address this logistical issue [38
]. Unfortunately, according to the authors, the proposed model would still indiscriminately exclude seven patients for every three patients accurately excluded, while early discontinuation rates would remain >10%. This deters widespread adoption of the metric in patient selection. In comparison, our pilot study aimed to evaluate the use of QOL as a tool to identify patients at risk of incurring toxicities, SAEs and early discontinuation from phase 1 trials.
Once baseline QOL scores are obtained, those with particularly dismal scores should not be considered candidates for the trials, in the same way traditional criteria such as organ function might exclude patients from experimental chemotherapy, as the risks greatly outweigh the benefit in these cases. Physicians should proceed cautiously with patients with moderately low scores. These patients should be counselled as to the increased risk of SAE occurrence, and prior to enrollment in any trial they should receive aggressive symptom management, either by the oncologist or by a specialist in palliative medicine, with the aim of improving QOL before experimental chemotherapy is begun. Specialist follow-up should continue throughout the trial. Discussion of baseline QOL scores and their implications should be incorporated into the informed consent and decision-making process with cancer patients.
Our intent is not to promote another barrier or exclusion from phase 1 trials based on QOL scores. Indeed there are good reasons for all cancer patients to seek participation in well-designed clinical treatment trials [39
], including phase 1 trials particularly if they have a realistic probability to derive some benefit despite anticipated adverse events and drug toxicities. While benefits of trial participation traditionally range from being exposed to state-of-the-art treatments [40
], to being under the care of physicians participating in clinical trials who, it has been suggested, take better care of all of their patients [41
], recent studies confirm that prognosis for phase 1 patients appears to have improved. Meta-analysis of phase 1 studies sponsored by the Cancer Therapy Evaluation Program from 1991 to 2002 revealed overall response and disease stability rates of 10.6% and 34.1%, with overall toxic death rate remaining low (0.49%) in 11,935 participating patients [42
]. While survival data is difficult to interpret due to the heterogeneous patient samples involved, it is reassuring that analysis of treatment outcomes in contemporary phase 1 oncology trials shows these trials to be safe and associated with clinical benefit in greater than 40% of patients [42
], with median survival of 8.7 months [43
]. By helping physicians better identify patients at increased risk of SAEs during the trial period, the chances of successful patient accrual, fewer subject replacements and maximum clinical benefit to the participants are greatly increased, all of which are key requirements for any trial.
This study has some limitations. Overall, it features a small cohort of patients, and the results need to be verified on a larger scale. It did not confirm the hypothesis that QOL scores can predict the occurrence of DLTs or subject replacement. One reason may be that the study was underpowered to detect such differences given the small sample size: of the 92 patients included in the analysis, only 12 were replaced in the overall cohort. More data also need to be gathered to validate these findings and to model a predictive score for decision algorithms.
While the implications of these findings provide rationale to incorporate QOL assessment into the design and development of phase 1 clinical trials, additional investigation is warranted to validate the role of QOL in patient selection criteria for phase 1 cancer trials. Future studies may involve dividing QOL scores into ranges that define high, average or low scores, allowing patients to be grouped accordingly. This would provide clearer guidelines to physicians when making clinical decisions. Prospective research should also focus on the impact of interventions such as early involvement of palliative care in the management of these patients, to help diminish the risk of SAEs and, potentially, of early discontinuation from the treatment trial. As the physician-patient relationship has also been shown to affect a patient’s QOL [44
], screening for patient satisfaction with their healthcare professional may be able to identify a potentially modifiable contributor towards poorer QOL.