A Simulation Study on Adaptive Assignment Versus Randomizations in Clinical Trials
Abstract
1. Introduction
2. Procedures
- Procedure ER (equal randomized): Half of the n patients are randomly assigned to treatment A, while the other half receive treatment B. This approach ensures an equal representation of both treatments but may not maximize the overall success rate due to its lack of adaptability.
- Procedure RR (repeatedly randomized): Each patient is randomized to treatment A or B with equal probability at each assignment. Unlike ER, exact balance between the two arms is not guaranteed, so this method introduces additional variability into the allocation and into the number of successes.
- Procedure SR (single-randomized): Randomization occurs only at the first assignment, and the selected treatment is used for all subsequent patients. While this method simplifies the decision-making process, it risks poor outcomes if the initial randomization does not reflect the true effectiveness of the treatments.
- Procedure JB (J. Bather): Patients are randomized at each step using the following adaptive procedure. The first two patients are assigned treatments A and B in random order, ensuring that each treatment is used once. Suppose that, by time $t$ ($2 \le t < n$), $t$ patients have been treated, and the numbers of successes and failures on treatments A and B are $s_A, f_A$ and $s_B, f_B$, respectively, so that $s_A + f_A + s_B + f_B = t$. Define the observed success proportions $\hat{p}_A = s_A/(s_A + f_A)$ and $\hat{p}_B = s_B/(s_B + f_B)$, and let each be perturbed by a bias term that shrinks as the number of patients assigned to that treatment grows. Under this procedure, the next patient (patient $t+1$) receives treatment A with a probability that increases with the perturbed difference $\hat{p}_A - \hat{p}_B$, so the apparently better treatment is favored while randomization is never abandoned. This procedure dynamically adjusts treatment assignment based on previous outcomes, potentially enhancing the overall success rate.
- Procedure PW (play-the-winner/switch-from-a-loser): The first patient is randomly assigned to treatment A or B with probability 0.5 each. For patients 2 to n, the treatment given to the previous patient is repeated if it was successful; otherwise, the other treatment is administered. This strategy leverages successful outcomes to inform subsequent choices, adapting the allocation to immediate feedback.
- Procedure RB (robust Bayes): This policy utilizes a randomization strategy based on a uniform prior density on the success probabilities, $\pi(p_A, p_B) = 1$ for $0 \le p_A, p_B \le 1$. By the symmetry of this prior, the two treatments are equivalent for the first patient, so one is chosen at random. If the first patient experiences a success, the second patient receives the same treatment; if a failure, the second patient receives the alternative treatment. Procedure RB therefore mimics procedure PW for the first two assignments. The same treatment is continued as long as it is successful. After a failure, however, switching to the other treatment may or may not be optimal: if the data strongly support the treatment that has just failed, that treatment is used again. Specifically, treatment A is used whenever its current probability of success, the posterior expected value $E(p_A \mid \text{data}) = (s_A + 1)/(s_A + f_A + 2)$ given $s_A$ successes and $f_A$ failures on A, exceeds the corresponding value for treatment B. If both treatments are judged equally effective at any stage, the next assignment is randomized. This approach balances robustness and adaptability, using Bayesian updating to make informed decisions while retaining some randomness.
- Procedure WT (W. Thompson): Similar to procedure JB, this procedure randomizes between treatments A and B for patients 1 through n. The randomization is based on the current posterior distribution of the success probabilities $(p_A, p_B)$, assuming the uniform prior density $\pi(p_A, p_B) = 1$ for $0 \le p_A, p_B \le 1$. The next patient receives treatment A with a probability equal to the current posterior probability that A is the better treatment, $P(p_A > p_B \mid \text{data})$, where the posteriors of $p_A$ and $p_B$ are $\mathrm{Beta}(s_A + 1, f_A + 1)$ and $\mathrm{Beta}(s_B + 1, f_B + 1)$ for $s_A, f_A$ and $s_B, f_B$ successes and failures on A and B. By continuously updating these posterior probabilities, the procedure aims to maximize the chance of success based on the accumulated data.
- Procedure PR (posterior probability ratio): This newly proposed procedure selects treatment A or B based on posterior probabilities. In contrast to procedure JB, the randomization for patients 1 through n is based on the current posterior expected values of $p_A$ and $p_B$ under the uniform prior on $(p_A, p_B)$. The next patient receives treatment A with probability $E(p_A \mid \text{data}) / \left[ E(p_A \mid \text{data}) + E(p_B \mid \text{data}) \right]$, where $E(p_A \mid \text{data}) = (s_A + 1)/(s_A + f_A + 2)$ for $s_A$ successes and $f_A$ failures on A, and similarly for B. This provides a direct comparison of the posterior success probabilities of the two treatments, enabling more informed treatment decisions based on the most current data.
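The assignment rules above can be sketched in code. The following is a minimal illustration rather than the authors' implementation: the function names are ours, the WT probability that treatment A is better is estimated by Monte Carlo sampling from the Beta posteriors rather than by exact integration, and the PR rule is written as the ratio of posterior means, which is our reading of the description above.

```python
import random

def pw_next(prev_treatment, prev_success):
    """PW: repeat the previous treatment after a success, switch after a failure."""
    if prev_success:
        return prev_treatment
    return "B" if prev_treatment == "A" else "A"

def posterior_mean(s, f):
    """Posterior mean of a success probability under a uniform (Beta(1, 1)) prior."""
    return (s + 1) / (s + f + 2)

def wt_prob_A(sA, fA, sB, fB, draws=20000, rng=random):
    """WT: Monte Carlo estimate of P(p_A > p_B | data) under uniform priors,
    i.e. the probability with which the next patient receives treatment A."""
    wins = sum(
        rng.betavariate(sA + 1, fA + 1) > rng.betavariate(sB + 1, fB + 1)
        for _ in range(draws)
    )
    return wins / draws

def pr_prob_A(sA, fA, sB, fB):
    """PR: assign treatment A with probability equal to the ratio of posterior means."""
    mA, mB = posterior_mean(sA, fA), posterior_mean(sB, fB)
    return mA / (mA + mB)
```

For example, after 5 successes and 1 failure on A against 1 success and 5 failures on B, `pr_prob_A(5, 1, 1, 5)` gives 0.75 and `wt_prob_A(5, 1, 1, 5)` is close to 1, so both rules heavily favor A; with no data, both return 0.5.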
3. Probabilities
4. Numerical Studies
- Comparison of randomization procedures:
- Figure 1 and Figure 2 illustrate that the randomization procedures ER and RR perform similarly, with ER slightly outperforming RR. In certain cases, SR is superior, indicating that the choice of randomization procedure materially affects outcomes. Notably, for each value of k, the range of success probabilities over which SR outperforms ER and RR narrows as the success probabilities increase, whereas these ranges widen as k increases. Overall, SR is less effective at larger success probabilities when k is small to intermediate, underscoring the importance of tailoring the randomization procedure to the specific trial conditions.
- Performance of adaptive procedures:
- Figure 3 and Figure 4 reveal that, among the adaptive procedures, PR consistently performs the worst. JB and WT perform similarly, with JB slightly ahead. For small success probabilities, RB generally exhibits the best performance, followed by JB, WT, and PW. As the success probabilities increase, however, JB becomes the most effective procedure, with WT next in line, while PW and RB alternate in rank depending on trial conditions. RB tends to outperform JB when larger k values are combined with moderate to high success probabilities, but it underperforms PW when smaller k values and lower to moderate success probabilities are involved. These performance shifts highlight the sensitivity of the adaptive procedures to k and the success probabilities, emphasizing the necessity of tailoring the choice of procedure to the specific trial conditions, target success probability, and desired number of successes.
- Comparison between SR, ER, JB, and RB:
- Figure 5 and Figure 6 indicate that, for small success probabilities, RB generally exhibits the best performance, followed by either JB or SR, while ER performs least effectively. As the success probabilities increase, however, JB emerges as the most effective procedure, with the rankings of RB, SR, and ER alternating depending on trial conditions. Notably, ER outperforms RB when both success probabilities are large.
- Analysis of heatmaps:
- The series of heatmaps displayed in Figure 7, Figure 8 and Figure 9 illustrates how each allocation procedure performs across varying values of k and the treatment success probabilities. Key observations include the following.
- Performance variation by procedure:
- ER typically exhibits lighter shades of blue across various k values, indicating higher CPL and, thus, less favorable performance compared to the other procedures.
- JB shows darker blue regions in specific areas, particularly at higher k values, suggesting effective performance in minimizing CPL under those conditions.
- RB demonstrates dark blue regions, especially at lower success probabilities, indicating effective performance; however, its effectiveness diminishes as the success probabilities increase.
- SR shows darker blue regions at higher success probabilities, indicating better performance in reducing CPL, although its effectiveness varies across k values.
- Influence of k on procedure performance:
- As k increases from 40 to 80, the distribution of darker blue regions varies across procedures. Notably, JB tends to achieve darker blue intensities (indicating better CPL performance) at higher k values, while RB displays a similar pattern but is more sensitive to changes in the success probabilities.
- Parameter sensitivity:
- The effectiveness of each procedure varies with the success probabilities. For instance, both JB and RB exhibit darker blue regions (lower CPL) at lower success probabilities, while SR shows darker blue at higher ones. This variability underscores the importance of selecting a procedure based on the specific trial conditions, as each procedure achieves low CPL only in particular parameter settings.
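A Monte Carlo comparison along these lines can be sketched as follows. This is an illustrative reconstruction, not the authors' simulation: the fixed-horizon setup and the loss measure (expected successes lost relative to always using the better treatment, a stand-in for the paper's CPL/CESL criteria) are our assumptions, and ER is approximated by deterministic alternation rather than a random balanced split.

```python
import random

def run_trial(n, pA, pB, policy, rng):
    """Simulate one trial of n patients under a given policy; return total successes."""
    s = {"A": 0, "B": 0}  # successes per treatment
    f = {"A": 0, "B": 0}  # failures per treatment
    prev_t, prev_ok = None, None
    total = 0
    for i in range(n):
        t = policy(i, s, f, prev_t, prev_ok, rng)
        ok = rng.random() < (pA if t == "A" else pB)
        (s if ok else f)[t] += 1
        total += ok
        prev_t, prev_ok = t, ok
    return total

def er(i, s, f, prev_t, prev_ok, rng):
    # Balanced assignment, approximated by deterministic alternation A, B, A, B, ...
    return "A" if i % 2 == 0 else "B"

def pw(i, s, f, prev_t, prev_ok, rng):
    # Play-the-winner / switch-from-a-loser.
    if prev_t is None:
        return rng.choice("AB")
    return prev_t if prev_ok else ("B" if prev_t == "A" else "A")

def wt(i, s, f, prev_t, prev_ok, rng):
    # Thompson-type rule: sample from each Beta posterior, pick the larger draw.
    a = rng.betavariate(s["A"] + 1, f["A"] + 1)
    b = rng.betavariate(s["B"] + 1, f["B"] + 1)
    return "A" if a > b else "B"

def expected_successes_lost(n, pA, pB, policy, reps=2000, seed=1):
    """Monte Carlo estimate of n*max(pA, pB) minus the mean number of successes."""
    rng = random.Random(seed)
    mean = sum(run_trial(n, pA, pB, policy, rng) for _ in range(reps)) / reps
    return n * max(pA, pB) - mean
```

With a large treatment difference (say pA = 0.8, pB = 0.2 and n = 50), the adaptive rules pw and wt recover most of the successes that the balanced er allocation forgoes, mirroring the qualitative ordering reported above.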
5. Comparison of CESL and CPL: Methodological Perspectives on Treatment Allocation
6. Concluding Remarks
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Thompson, W.R. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika 1933, 25, 285–294. [Google Scholar] [CrossRef]
- Feldman, D. Contributions to the ‘two-armed bandit’ problem. Ann. Math. Stat. 1962, 33, 847–856. [Google Scholar] [CrossRef]
- Zelen, M. Play the winner rule and the controlled clinical trial. J. Am. Stat. Assoc. 1969, 64, 131–146. [Google Scholar] [CrossRef]
- Sobel, M.; Weiss, G.H. Play-the-winner rule and inverse sampling in selecting the better of two binomial populations. J. Am. Stat. Assoc. 1971, 66, 546–551. [Google Scholar] [CrossRef]
- Berry, D.A. Modified two-armed bandit strategies for certain clinical trials. J. Am. Stat. Assoc. 1978, 73, 339–345. [Google Scholar] [CrossRef]
- Bather, J.A. Randomized allocation of treatments in sequential medical trials (with discussion). J. R. Stat. Soc. Ser. B 1981, 43, 265–292. [Google Scholar] [CrossRef]
- Berry, D.A.; Fristedt, B. Bandit Problems: Sequential Allocation of Experiments; Chapman and Hall: London, UK, 1985. [Google Scholar]
- Berry, D.A.; Eick, S.G. Adaptive assignment versus balanced randomization in clinical trials: A decision analysis. Stat. Med. 1995, 14, 231–246. [Google Scholar] [CrossRef] [PubMed]
- Pocock, S.J.; Simon, R. Sequential treatment assignment with balancing for prognostic factors in the controlled clinical trial. Biometrics 1975, 31, 103–115. [Google Scholar] [CrossRef]
- Chaudhuri, S.; Lo, S.H. A hybrid of response-adaptive and covariate-adaptive randomization for multi-center clinical trials. Stat. Med. 2002, 21, 131–145. [Google Scholar]
- Berry, D.A.; Eick, S.G. The Design and Analysis of Sequential Clinical Trials; Springer: New York, NY, USA, 1995. [Google Scholar]
- Rosenberger, W.F.; Lachin, J.M. Randomization in Clinical Trials: Theory and Practice; John Wiley & Sons: Hoboken, NJ, USA, 2002. [Google Scholar]
- Hu, F.; Rosenberger, W.F. The Theory of Response-Adaptive Randomization in Clinical Trials; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
- Chow, S.C.; Chang, M. Adaptive design methods in clinical trials—A review. Orphanet J. Rare Dis. 2008, 3, 11. [Google Scholar] [CrossRef] [PubMed]
- Porcher, R.; Lecocq, B.; Vray, M. Adaptive methods: When and how should they be used in clinical trials? Therapies 2011, 66, 309–317. [Google Scholar] [CrossRef]
- Laage, T.; Loewy, J.W.; Menon, S.; Miller, E.R.; Pulkstenis, E.; Kan-Dobrosky, N.; Coffey, C. Ethical Considerations in Adaptive Design Clinical Trials. Ther. Innov. Regul. Sci. 2016, 51, 190–199. [Google Scholar] [CrossRef] [PubMed]
- Bhatt, D.L.; Mehta, C. Adaptive Designs for Clinical Trials. N. Engl. J. Med. 2016, 375, 65–74. [Google Scholar] [CrossRef] [PubMed]
- Chang, M.; Balser, J. Adaptive Design - Recent Advancement in Clinical Trials. J. Bioanal. Biostat. 2016, 1, 14. [Google Scholar] [CrossRef]
- Pallmann, P.; Bedding, A.W.; Choodari-Oskooei, B.; Dimairo, M.; Flight, L.; Hampson, L.V.; Holmes, J.; Mander, A.P.; Odondi, L.; Sydes, M.R.; et al. Adaptive designs in clinical trials: Why use them, and how to run and report them. BMC Med. 2018, 16, 29. [Google Scholar] [CrossRef]
- Kelly, L.E.; Dyson, M.P.; Butcher, N.J.; Balshaw, R.; London, A.J.; Neilson, C.J.; Junker, A.; Mahmud, S.M.; Driedger, S.M.; Wang, X. Considerations for adaptive design in pediatric clinical trials: Study protocol for a systematic review, mixed-methods study, and integrated knowledge translation plan. Trials 2018, 19, 572. [Google Scholar] [CrossRef]
- Thorlund, K.; Haggstrom, J.; Park, J.J.; Mills, E.J. Key design considerations for adaptive clinical trials: A primer for clinicians. BMJ 2018, 360, k698. [Google Scholar] [CrossRef] [PubMed]
- Afolabi, M.O.; Kelly, L.E. Non-static framework for understanding adaptive designs: An ethical justification in paediatric trials. J. Med. Ethics 2022, 48, 825–831. [Google Scholar] [CrossRef] [PubMed]
- Kaizer, A.M.; Belli, H.M.; Ma, Z.; Nicklawsky, A.G.; Roberts, S.C.; Wild, J.; Wogu, A.F.; Xiao, M.; Sabo, R.T. Recent innovations in adaptive trial designs: A review of design opportunities in translational research. J. Clin. Transl. Sci. 2023, 7, e125. [Google Scholar] [CrossRef] [PubMed]
- Ben-Eltriki, M.; Rafiq, A.; Paul, A.; Prabhu, D.; Afolabi, M.O.S.; Balshaw, R.; Neilson, C.J.; Driedger, M.; Mahmud, S.M.; Lacaze-Masmonteil, T.; et al. Adaptive Designs in Clinical Trials: A Systematic Review—Part I. BMC Med. Res. Methodol. 2024, 24, 229. [Google Scholar] [CrossRef]
- Efron, B. Forcing a sequential experiment to be balanced. Biometrika 1971, 58, 403–417. [Google Scholar] [CrossRef]
- Zelen, M. The randomization and stratification of patients to clinical trials. J. Chronic Dis. 1974, 27, 365–375. [Google Scholar] [CrossRef] [PubMed]
- Wei, L.J. An application of an urn model to the design of sequential controlled clinical trials. J. Am. Stat. Assoc. 1978, 73, 559–563. [Google Scholar] [CrossRef]
- Pocock, S.J. Clinical Trials: A Practical Approach; Wiley: Hoboken, NJ, USA, 1983. [Google Scholar]
- Rosenberger, W.F.; Lachin, J.M. Randomization in Clinical Trials: Theory and Practice, 2nd ed.; Wiley: Hoboken, NJ, USA, 2016. [Google Scholar]
- Cornfield, J. Randomization by group: A formal analysis. Am. J. Epidemiol. 1978, 108, 100–104. [Google Scholar] [CrossRef]
- Fleiss, J.L. The Design and Analysis of Clinical Experiments; Wiley: Hoboken, NJ, USA, 1986. [Google Scholar]
- R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2024; Available online: https://www.R-project.org/ (accessed on 11 December 2024).
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lin, C.-T.; Li, Y.-W.; Hong, Y.-J. A Simulation Study on Adaptive Assignment Versus Randomizations in Clinical Trials. Mathematics 2025, 13, 44. https://doi.org/10.3390/math13010044