Article

Adaptive Clinical Trials and Sample Size Determination in the Presence of Measurement Error and Heterogeneity

1 Department of Statistics, Quaid-i-Azam University, Islamabad 45320, Pakistan
2 Department of Statistical Sciences, University of Padua, 35121 Padova, Italy
3 Department of Statistics and Operations Research, College of Sciences, King Saud University, Riyadh 11451, Saudi Arabia
4 Department of Mathematics, College of Sciences and Arts (Muhyil), King Khalid University, Muhyil 61421, Saudi Arabia
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Stats 2025, 8(2), 31; https://doi.org/10.3390/stats8020031
Submission received: 16 March 2025 / Revised: 23 April 2025 / Accepted: 23 April 2025 / Published: 25 April 2025

Abstract

Adaptive clinical trials offer a flexible approach for refining sample sizes during ongoing research to enhance their efficiency. This study examines how to improve sample size recalculation through resampling techniques, employing measurement error and mixed distribution models. The research employs diverse sample size-recalculation strategies: standard simulation and the R1 and R2 approaches, where R1 considers the mean and R2 employs both the mean and standard deviation as summary locations. These strategies are tested against observed conditional power (OCP), restricted observed conditional power (ROCP), promising zone (PZ) and group sequential design (GSD). The key findings indicate that the R1 approach, capitalizing on the mean as a summary location, outperforms standard recalculation without resampling because it mitigates variability in recalculated sample sizes across effect sizes. The OCP exhibits superior performance within the R1 approach compared to ROCP, PZ and GSD due to enhanced conditional power. However, the R1 approach tends to inflate the first stage’s sample size, prompting the development of the R2 approach, which considers both the mean and standard deviation. The ROCP in the R2 approach demonstrates robust performance across most effect sizes, although GSD retains superiority within the R2 approach due to its sample size boundary. Notably, standard sample size-recalculation designs perform worse than R1 for specific effect sizes, attributable to inefficiencies in approaching target sample sizes. The resampling-based approaches, particularly R1 and R2, offer improved sample size recalculation over conventional methods. The R1 approach excels in minimizing recalculated sample size variability, while the R2 approach presents a refined alternative.

1. Introduction

Clinical trials play a pivotal role in the evaluation of new medical treatments or interventions, providing reliable evidence for the safety and efficacy of treatments. However, traditional clinical trial designs can be inefficient and inflexible, requiring careful calculation of sample sizes based on accurate parameter assumptions. Incorrect parameter assumptions can lead to sample sizes that are too small or too large, with significant ethical and economic consequences. Adaptive clinical trials (ACTs) offer a promising alternative, allowing for modifications to the trial’s parameters based on accumulating data. This flexibility can maximize efficiency, enhance the trial’s ability to answer research questions effectively and reduce the number of participants required.
One critical aspect of adaptive clinical trials is the sample size, which directly impacts the trial’s statistical power and the ability to draw meaningful conclusions [1]. The traditional fixed-sample size designs often face many challenges, as the required sample size is typically determined before any data are collected, leading to inefficiencies and potential resource wastage. Sample size recalculation in adaptive clinical trials is an important area of research, as it can help to improve the efficiency and accuracy of clinical trials [2]. This research explores a new approach to sample size recalculation based on a measurement error model (MEM) and mixture distribution. The proposed approach generates treatment and control groups and adaptively adjusts the sample size based on interim analysis. The study investigates several approaches to recalculating sample size in adaptive clinical trials, with a focus on using a two-component mixture distribution to generate treatment and control groups. These approaches adaptively adjust the sample size based on interim analysis, ensuring that the trial is appropriately powered and resource-efficient.
Sample size determination is a critical aspect of clinical trial design. It is the process of estimating the number of participants needed to provide sufficient statistical power to detect a meaningful treatment effect. Inadequate sample sizes can lead to inconclusive or false-negative results, while excessively large sample sizes can waste resources. Calculating sample size typically involves using four parameters: Type-I error, power, control group assumptions (response rate and standard deviation) and predicted treatment impact. Measurement error models are statistical models used to account for measurement errors in data. In clinical trials, measurement errors can arise due to various factors, such as variability in assessments or instruments used to measure outcomes. These errors can lead to biased results and reduced statistical power. By accounting for measurement errors, researchers can better distinguish true treatment effects from noise, leading to more reliable and robust conclusions about the effectiveness of interventions. Moreover, the utilization of mixture-distribution models for the generation of treatment and control groups addresses the challenges associated with randomization, covariate balance and group heterogeneity, ultimately enhancing the robustness and credibility of clinical trial outcomes.
Several methods have been proposed to implement resampling in adaptive clinical trials, including Pocock [3] and O’Brien and Fleming [4] boundaries. These methods allow for sample size updates based on the observed interim effect while controlling the overall Type-I error rate of the study. Denne [5] noticed that sample size may be determined by one or more nuisance characteristics, which are typically unknown, to obtain a specific power at a predetermined absolute difference in mean response. Friede and Kieser [6] developed the internal pilot study design, which enables the sample size to be revised for a trial using the estimated variance discovered by interim analysis. Charles et al. [7] noticed that the primary goal of an a priori sample size calculation is to determine the minimum number of participants required to identify clinical significance. Hammouri et al. [8] used urn allocation and the O’Brien and Fleming multiple testing procedure to compare two treatments in clinical trials in a novel way.
Kieser and Friede [9] discussed two-stage techniques, where the variance is reestimated from a subsample and the sample size is modified as needed, which are appealing due to the uncertainty in the design step. They demonstrated analytically that the Type-I error rate of the t-test is not impacted by the use of straightforward, blind variance estimators for sample size recalculation. Harden and Friede [10] and Das et al. [11] analyzed multi-center randomized clinical trials, which are crucial for evidence-based medicine; mixed models are used to address clustering in the data. The existing sample size-calculation methods only consider balanced treatment allocations, which may not be realistic. To overcome this, a new sample size-determination procedure is proposed for multi-center trials comparing two treatment groups [10]. The method incorporated random effects, allowed arbitrary sample sizes and assumed block randomization with fixed block length. Through simulations, the proposed approach demonstrated its superiority over conventional methods, taking into account parameters such as block length and center heterogeneity.
Jennison and Turnbull [12] discussed the common practice of setting the sample size in clinical trials based on a specified treatment effect, disregarding the importance of detecting smaller but clinically significant effects. The proposed group sequential designs focused on reducing the expected sample size while maintaining sufficiency and considering the possibility of small treatment effects at the design stage. The methods are compared with Fisher’s variance spending procedure and show potential advantages. Pritchett et al. [13] compared different types of sample size-recalculation (SSR) designs, including blinded SSR, unblinded SSR and conventional group sequential designs (GSDs). The study also presented statistical methods for unblinded and blinded SSR designs and highlighted the importance of controlling Type-I error rate and estimating the treatment effect accurately. Chakraborty and Gu [14] discussed the prevalent challenges posed by missing values and dropouts in longitudinal studies within medical and public health fields.
Boos and Brownie [15] introduced novel rank-based methods within the framework of mixed linear models for analyzing data from multisite clinical trials. Unlike current rank methods, the proposed procedures specifically assess a drug’s main effect in the presence of a random drug-by-site (or investigator) interaction. Nagin and Odgers [16] highlighted the growing utilization of group-based trajectory models in clinical research for tracking symptom development and gauging diverse responses to clinical interventions. The review furnished a comprehensible overview of both group-based trajectory and growth mixture modeling, coupled with instances showcasing their clinical research applications. Deng et al. [17] discussed a novel two-stage multivariate Mendelian randomization method for investigating the causal effects of clinical factors on various outcomes, especially in cases of mixed correlated outcomes with different distributions. The conventional MR methodology focused on single outcomes, disregarding correlation structures, potentially resulting in reduced statistical power. The proposed method addressed this limitation by jointly analyzing multiple outcomes using genetic instrumental variables. Spanbauer and Sparapani [18] suggested precision medicine’s transformative potential for clinical trials and subsequent treatment strategies. The study highlighted the relevance of modern machine learning techniques, particularly Bayesian additive regression trees (BART), for identifying distinct population segments and devising personalized treatment rules. Liang et al. [19] proposed an innovative mixed-effects varying-coefficient model to address measurement errors in covariates. Morgan and Elashoff [20] discussed the impact of measurement error in prognostic factors, often considered as covariates in clinical trials assessing treatment effects. 
Employing Weibull regression models and asymptotic theory, the study investigated the efficiency of treatment effect estimation when adjusting for a dichotomous and a continuous covariate affected by measurement error. The analysis revealed how such errors can diminish estimation efficiency.
Measurement error in covariates is a critical challenge in biomedical research, introducing bias and imprecision in exposure–outcome relationships. Wang et al. [21] discussed the challenges posed by measurement error in the context of generalized linear mixed models (GLMMs) for clustered data, where one predictor is afflicted by such error. Yang et al. [22] introduced a corrected empirical likelihood approach for statistical inference in generalized linear measurement error models, encompassing Gaussian, Poisson and logistic regressions. Brakenhoff et al. [23] investigated the impact of measurement error in covariates within medical research, particularly its potential to introduce bias and imprecision in exposure–outcome relationships. Despite the acknowledged significance of this issue, the extent to which it is addressed in current research practices remains uncertain. Through a systematic review of the general medicine and epidemiology literature, the study highlighted a lack of consideration for covariate measurement error in the majority of high-impact journal publications. This oversight makes it challenging for readers to assess the robustness of presented results. The research underscored the need for heightened awareness regarding the possible repercussions of measurement error and called for guidance on employing correction methods. Measurement error, arising from inaccuracies in measurement instruments and data, poses a critical challenge to valid inferences in biomedical research. As medical datasets continue to expand, recognizing and addressing covariate measurement error becomes imperative for maintaining the integrity of research findings.
This research aims to improve the efficiency and accuracy of adaptive clinical trials by (i) proposing a new approach to generate treatment and control groups based on a measurement error and mixture model and (ii) investigating the application of different sample size-recalculation methods at interim analyses. The research is structured around a rigorous methodological framework. To achieve these objectives, a simulation-based study is conducted to evaluate the performance of the MEM in generating trial groups. Furthermore, various sample size-recalculation approaches are applied in interim analyses, including observed conditional power (OCP), restricted observed conditional power (ROCP), promising zone (PZ) and group sequential design (GSD). The effectiveness of these methods is assessed in terms of statistical power, Type-I error control and overall trial efficiency.
The rest of the study is organized as follows: Section 2 discusses the measurement error and mixture models. Different methodologies to recalculate the sample size are discussed in Section 3. The performance evaluation criteria of sample size-calculation methods are discussed in Section 4. A simulation study to evaluate the performance assuming mixture and measurement error models is tabulated and discussed in Section 5 and Section 6, respectively. An illustrative study for alleviating pain among osteoarthritis patients is discussed in Section 7. Finally, Section 8 presents concluding and future remarks.

2. Measurement Error Models and Mixture Distribution

2.1. Measurement Error Model

Measurement error models (MEMs) are statistical techniques used to account for inaccuracies and uncertainties in the measurement process when analyzing data. In various research fields, measurements often contain errors that can distort the true relationships between variables. These errors can arise from a variety of sources, such as imperfect instruments, human error, environmental factors and inherent variability in the phenomenon being measured. MEMs help researchers address these issues by providing a framework to estimate the true relationships between variables while considering the impact of measurement errors. These models can be broadly categorized into two main types:
  • Classical Measurement Error Models: In classical MEMs, the error is assumed to be present in the independent (explanatory) variable. This type of model is commonly known as an errors-in-variables (EIV) model. It accounts for the fact that the observed values of the independent variable are subject to measurement errors, leading to biased and inconsistent parameter estimates if not properly addressed. Mathematically, a classical EIV model can be expressed as follows
    true relationship: Y = α + βX + u
    observed relationship: Y_obs = α + βX_obs + ε
    where X_obs = X + η is the observed (error-prone) value of X, ε is the error in the observed Y and η is normally distributed with zero mean and variance σ².
  • Errors-in-Response or Dependent Variable Models: These models focus on measurement errors in the dependent (response) variable. In this case, the observed responses are considered to be measured with errors. Errors-in-response models are less common but are used when the measurement error is primarily concentrated in the outcome variable. Mathematically,
    true relationship: Y = α + βX + u
    observed relationship: Y_obs = Y + ε
    where ε is the error in the observed response variable Y.
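To make the classical EIV setting concrete, the following minimal Python sketch (with illustrative parameter values, not taken from this study) regresses Y on the true covariate X and on the error-prone X_obs. The slope estimated from X_obs is attenuated by the reliability ratio var(X)/(var(X) + σ_η²), which is the bias the text describes:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
alpha, beta = 1.0, 2.0      # true intercept and slope (illustrative)
sigma_eta = 0.8             # SD of the measurement error in X

X = rng.normal(0.0, 1.0, n)                      # true covariate
Y = alpha + beta * X + rng.normal(0.0, 0.5, n)   # true relationship
X_obs = X + rng.normal(0.0, sigma_eta, n)        # error-prone covariate

# OLS slope using the true X versus the error-prone X_obs
b_true = np.polyfit(X, Y, 1)[0]
b_obs = np.polyfit(X_obs, Y, 1)[0]

# Classical attenuation: E[b_obs] ≈ beta * var(X) / (var(X) + sigma_eta^2)
reliability = 1.0 / (1.0 + sigma_eta**2)
print(b_true, b_obs, beta * reliability)
```

With these settings the slope on X_obs shrinks from about 2.0 toward about 2.0/1.64 ≈ 1.22, illustrating why unaddressed covariate error biases estimates toward zero.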
Measurement errors can significantly affect the reliability and validity of the estimates derived from statistical analyses. The key implications of measurement errors on estimates are the following. Measurement errors can introduce bias in parameter estimates, leading to incorrect conclusions about the relationships between variables. For example, if the true relationship between two variables is linear, measurement errors can make it appear non-linear or attenuate the observed relationship. The researchers may overestimate or underestimate the strength of associations, leading to misguided policy recommendations or interventions. Also, measurement errors can reduce the precision and efficiency of parameter estimates. The variability introduced by measurement errors can inflate standard errors, leading to wider confidence intervals and reduced statistical power. Inconsistent estimates occur when the magnitude and direction of bias change across different samples or settings. This can lead to difficulties in replicating research findings and generalizing results. Furthermore, measurement errors can distort hypothesis tests, leading to incorrect p-values and flawed decisions about statistical significance. This can result in both Type-I and Type-II errors.

2.2. Mixed Distribution Models

The mixed distribution model (MDM) is a powerful statistical technique that allows researchers to create well-balanced and comparable groups, taking into account the heterogeneity of the underlying data. This section outlines the process of utilizing the MDM for group assignment, its advantages and the steps involved in its implementation. The MDM is a sophisticated statistical approach that combines elements of probability distributions to create groups that are representative of the underlying population. It allows for the incorporation of various covariates and factors, ensuring that the treatment and control groups are not only randomized but also balanced using relevant characteristics. The MDM takes into consideration both continuous and categorical variables, accommodating the complexity of real-world data. The general form of a mixed distribution model can be expressed as follows:
f(x; θ) = Σ_{i=1}^{k} π_i · f_i(x; θ_i)
where f(x; θ) represents the mixed distribution with parameters θ, k is the number of components in the mixture, π_i are the mixing proportions, satisfying Σ_{i=1}^{k} π_i = 1, and f_i(x; θ_i) represents the ith component distribution with parameters θ_i. For a two-component mixture, f(x; θ) = Σ_{i=1}^{2} π_i · f_i(x; θ_i) = π_1 · f_1(x; θ_1) + π_2 · f_2(x; θ_2), which can further be written as f(x; θ) = π · f_1(x; θ_1) + (1 − π) · f_2(x; θ_2) due to the fact that π_1 + π_2 = 1. The MDM ensures that treatment and control groups are balanced, thereby reducing the risk of confounding variables affecting the results. This balance enhances the internal validity of the clinical trial. Moreover, by accounting for the underlying distribution of the data, the MDM helps ensure that the generated groups accurately represent the population.
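A two-component mixture can be sampled directly from this definition: draw the component label with probability π, then draw from the selected normal component. The sketch below (illustrative parameter values, not from the study) does exactly that:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_two_component_mixture(n, pi, mu1, sd1, mu2, sd2, rng):
    """Draw n values from pi * N(mu1, sd1^2) + (1 - pi) * N(mu2, sd2^2)."""
    component = rng.random(n) < pi            # True -> component 1
    return np.where(component,
                    rng.normal(mu1, sd1, n),
                    rng.normal(mu2, sd2, n))

# Mixture mean is pi*mu1 + (1-pi)*mu2 = 0.7*0.0 + 0.3*3.0 = 0.9
x = sample_two_component_mixture(100_000, 0.7, 0.0, 1.0, 3.0, 0.5, rng)
```

In the trial-simulation context, treatment and control groups would each be generated from such a mixture to reflect population heterogeneity.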

3. Methodology

In this section, a comprehensive overview of the research design, data-analysis techniques and any other pertinent procedures employed to ensure the validity and reliability of the study’s findings is discussed.
We consider a clinical experiment that is two-armed, randomized and controlled. The n observations in treatment group T and control group C follow normal distributions with means μ_T and μ_C and a common variance σ², i.e., X_i^T ~ N(μ_T, σ²), X_i^C ~ N(μ_C, σ²), i = 1, 2, …, n. The current study investigates the one-sided superiority test problem H_0: μ_T − μ_C ≤ 0 vs. H_1: μ_T − μ_C > 0, which refers to a situation where high values of the endpoint are viewed favorably. We investigate an adaptive group sequential design with two stages, which is the most basic and widely used adaptive group sequential design. Consequently, we construct two independent statistics,
T_i = (X̄_i^T − X̄_i^C)/S_pooled,i · √(n_i/2),

where i ∈ {1, 2} denotes the stage, X̄_i^T and X̄_i^C are the means of the treatment and control groups, S_pooled,i is the pooled standard deviation and n_i denotes the sample size per group in stage i, with n_1 + n_2 = n. The well-known formula for S_pooled is given as

S_pooled = √[ ((n_1 − 1)S_1² + (n_2 − 1)S_2²)/(n_1 + n_2 − 2) ],

where n_1 and n_2 are the sample sizes of the two groups and S_1 and S_2 are the standard deviations of the two groups. It must be noted that T_1 only includes data from the first stage and T_2 only includes data from the second stage, both of which have an approximately normal distribution.
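The stage-wise statistic and the pooled standard deviation can be sketched as follows; the function names and the toy data are illustrative assumptions, and balanced groups (equal per-group size within a stage) are assumed as in the design above:

```python
import numpy as np

def pooled_sd(x, y):
    """S_pooled = sqrt(((n1-1)S1^2 + (n2-1)S2^2) / (n1 + n2 - 2))."""
    n1, n2 = len(x), len(y)
    return np.sqrt(((n1 - 1) * np.var(x, ddof=1) + (n2 - 1) * np.var(y, ddof=1))
                   / (n1 + n2 - 2))

def stage_statistic(x_t, x_c):
    """T_i = (mean_T - mean_C) / S_pooled,i * sqrt(n_i / 2), balanced groups."""
    n_i = len(x_t)                      # per-group sample size in this stage
    return (np.mean(x_t) - np.mean(x_c)) / pooled_sd(x_t, x_c) * np.sqrt(n_i / 2)

# Hypothetical stage-1 data: true standardized effect 0.5, 50 per group
rng = np.random.default_rng(1)
x_t = rng.normal(0.5, 1.0, 50)          # treatment
x_c = rng.normal(0.0, 1.0, 50)          # control
t1 = stage_statistic(x_t, x_c)
```

Under these settings T_1 is approximately normal with mean Δ·√(n_1/2) = 0.5·5 = 2.5, which is the quantity the recalculation rules below act on.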
If the interim test statistic T_1 falls within the recalculation area (RA) given as [q_{1−α_0}; q_{1−α_1}), where α_0 denotes the futility stopping bound for the one-sided p-value of stage one, α_1 refers to the local one-sided significance level and q denotes the respective quantile of the standard normal distribution, the trial moves to the second stage. If T_1 ≥ q_{1−α_1}, the trial is stopped after the first stage with an early rejection of the null hypothesis; if T_1 < q_{1−α_0}, the null hypothesis is accepted. Otherwise, all observed data over the two stages are combined using an inverse normal combination test given as
T = (w_1 · T_1 + w_2 · T_2)/√(w_1² + w_2²),

where w_1 and w_2 are the weights, i.e., w_1 = √n_1 and w_2 = √n_2, and T_1 and T_2 are two stochastically independent test statistics. If T ≥ q_{1−α}, where α refers to the local one-sided significance level for the final analysis, then the null hypothesis is rejected at the final analysis. For instance, local significance levels can be determined using the adjustments suggested by Pocock [3] or O’Brien and Fleming [4].
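The combination test is a one-line computation; the sketch below (function name and example values are illustrative, with α = 0.025 assumed for the final analysis) combines two stage-wise statistics and checks the rejection rule:

```python
import math
from statistics import NormalDist

def inverse_normal_combination(t1, t2, n1, n2):
    """Inverse normal combination of two independent stage-wise statistics,
    using the weights w_i = sqrt(n_i)."""
    w1, w2 = math.sqrt(n1), math.sqrt(n2)
    return (w1 * t1 + w2 * t2) / math.sqrt(w1**2 + w2**2)

# Final analysis: reject H0 if T >= q_{1-alpha} (alpha = 0.025 assumed)
alpha = 0.025
q_final = NormalDist().inv_cdf(1 - alpha)
T = inverse_normal_combination(1.2, 2.1, 50, 70)
reject = T >= q_final
```

Because the weights are fixed in advance, the combined statistic remains standard normal under H_0 even when the second-stage sample size is recalculated from the interim data, which is what preserves Type-I error control.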

3.1. Methods for Recalculating Sample Size

Various approaches exist for adjusting sample sizes during an ongoing clinical trial. One straightforward method is the implementation of a group sequential design (GSD), where a fixed predetermined sample size is allocated for each stage. A more flexible variation of this concept is found in adaptive group sequential designs, where interim sample sizes can be determined based on the accumulating data. In the realm of adaptive group sequential designs, a prevalent strategy involves sample size adjustments aimed at attaining a predefined conditional power value. This conditional power metric delineates the probability of accurately rejecting the null hypothesis, given the observed interim test statistic value and the cumulative sample size per group. The calculation of conditional power (CP) is contingent upon the true standardized treatment effect Δ, which gauges the difference (μ_T − μ_C) between the means of the treatment and control groups divided by the shared standard deviation σ, i.e., Δ = (μ_T − μ_C)/σ. This dynamic approach to modifying sample sizes within adaptive designs holds substantial promise for enhancing trial efficiency and statistical robustness based on emerging data trends. Thus,
CP_Δ(t_1, n) =
  0, if the trial ends early due to futility,
  1 − Φ( q_{1−α} · √(w_1² + w_2²)/w_2 − t_1 · w_1/w_2 − Δ · √(n_1/2) · √((n − n_1)/n_1) ), if the sample size is recalculated,
  1, if the trial is stopped early for efficacy.
The subsequent subsections outline three distinct methods for recalculating the sample size by leveraging the observed conditional power. In this context, Δ in the formula is replaced by the observed interim effect t_1 · √(2/n_1). Alternatively, akin strategies plug an assumed effect into Δ, which is often termed the anticipated conditional power. Notably, our current focus is on evaluating the impact of the resampling tool assuming the MEM and MDM, which harmonizes seamlessly with existing recalculation approaches. Given this specific emphasis, we prioritize a comprehensive exploration of recalculation rules hinged on the observed conditional power and omit an exhaustive examination of recalculation strategies that differ primarily in their choice of Δ. Within this framework, we exclusively investigate recalculation rules drawing from the observed conditional power, while imposing an upper limit of n_max on the total sample size per group for practical viability.
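The middle branch of the conditional power formula can be implemented directly. In the sketch below (illustrative, assuming equal planned stage sizes so that w_1 = w_2 = √n_1, and a one-sided α = 0.025), `observed_cp` substitutes the observed interim effect t_1·√(2/n_1) for Δ; the function names are not from the study:

```python
import math
from statistics import NormalDist

def conditional_power(t1, n, n1, delta, alpha=0.025):
    """CP given interim statistic t1, total per-group size n, stage-1 size n1
    and standardized effect delta; weights fixed at w1 = w2 = sqrt(n1)."""
    w1 = w2 = math.sqrt(n1)
    q = NormalDist().inv_cdf(1 - alpha)
    z = (q * math.sqrt(w1**2 + w2**2) / w2
         - t1 * w1 / w2
         - delta * math.sqrt(n1 / 2) * math.sqrt((n - n1) / n1))
    return 1 - NormalDist().cdf(z)

def observed_cp(t1, n, n1, alpha=0.025):
    """Observed conditional power: delta replaced by t1 * sqrt(2 / n1)."""
    return conditional_power(t1, n, n1, t1 * math.sqrt(2 / n1), alpha)
```

For example, with n_1 = 50 and a total size of n = 100 per group, a large effect (Δ = 1) gives conditional power near 1, while Δ = 0 gives conditional power near the significance level.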

3.1.1. The Observed Conditional Power (OCP) Approach

For an observed interim test statistic falling in the recalculation area [q_{1−α_0}; q_{1−α_1}), we must ensure that the group sizes are large enough to detect a real effect. This requirement is captured by the conditional power, which quantifies how likely the trial is to detect a true effect; a higher conditional power (1 − β) is better.
The basic concept is to find the smallest whole number ñ satisfying

ñ ≥ n_1 · [ 1 + ( ( q_β − q_{1−α} · √(w_1² + w_2²)/w_2 + t_1 · w_1/w_2 ) / t_1 )² ].   (5)
Equation (5) determines how many participants are needed per group for the study to achieve the desired conditional power. Using the OCP approach, the overall sample size for each group is determined as follows. If the interim test statistic t_1 falls within the recalculation area, the total sample size per group is the smaller of the recalculated quantity ñ(t_1) and a predefined maximum value n_max. If t_1 does not fall within the recalculation area, the total sample size per group remains fixed at n_1. This method adapts the sample size to the interim results and the level of confidence sought in the study, leading to the following formula
n_OCP(t_1) =
  min(ñ(t_1), n_max), if t_1 ∈ RA,
  n_1, otherwise.

3.1.2. The Restricted Observed Conditional Power Approach

The approach known as the restricted observed conditional power (ROCP) shares similarities with the OCP method; however, as implied by its name, it comes with a specific restriction. An issue raised about the OCP approach centers on a particular scenario: when Formula (5) indicates the need for a larger sample size than the maximum value n_max, the sample size is capped at n_max, irrespective of the conditional power actually achievable with this capped size. A viable alternative is to enlarge the sample size only if doing so ensures a minimum acceptable conditional power, defined as 1 − β_low^ROCP. This adjustment aims to strike a balance between sample size requirements and the attainable level of statistical confidence. As a result, the total sample size per group according to the ROCP approach is as follows
n_ROCP(t_1) =
  min(ñ(t_1), n_max), if t_1 ∈ RA and CP(t_1, n_max) ≥ 1 − β_low^ROCP,
  n_1, otherwise.

3.1.3. The Promising Zone Approach

The innovative promising zone (PZ) approach, introduced by Mehta and Pocock [24], presents a distinct methodology. It commences with the determination of an initial total sample size, denoted as n_ini, for each group. Notably, this initial size is intentionally kept smaller than the maximum allowable total sample size, n_max, for each group. Additionally, the PZ approach establishes a predetermined lower threshold for the conditional power, represented as 1 − β_low^PZ. Importantly, it is worth highlighting that 1 − β_low^PZ is not necessarily equal to 1 − β_low^ROCP, introducing flexibility based on specific requirements.
As the study progresses, the PZ approach facilitates sample size updates contingent on the observed interim test statistic t_1. These updates are governed by two potential pathways: if the conditional power at the initially proposed size n_ini falls in the promising zone, the sample size is increased to the recalculated value ñ from Formula (5), restricted to the maximum value n_max per group; otherwise, the trial continues with the initially proposed total sample size n_ini. This adaptive approach optimally balances sample size requirements with the potential for achieving a robust conditional power, further enhancing the precision and reliability of the study outcomes. Consequently, the total sample size per group according to the promising zone (PZ) approach equals
n_PZ(t_1) =
  min(ñ(t_1), n_max), if t_1 ∈ RA and 1 − β_low^PZ ≤ CP(t_1, n_ini) < 1 − β,
  n_ini, if t_1 ∈ RA and (CP(t_1, n_ini) < 1 − β_low^PZ or CP(t_1, n_ini) ≥ 1 − β),
  n_1, otherwise.
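The PZ decision logic is independent of how the conditional power and recalculated size are computed, so it can be sketched with those pieces injected as callables. Everything here (function names, the stub callables and the thresholds β_low^PZ = 0.5, β = 0.2) is hypothetical and for illustration only:

```python
def n_pz(t1, n1, n_ini, n_max, q_low, q_high, cp, n_new,
         beta_low=0.5, beta=0.2):
    """Promising zone rule. cp(t1, n) returns the conditional power at total
    per-group size n; n_new(t1) returns the recalculated size from the
    conditional-power rule."""
    if not (q_low <= t1 < q_high):
        return n1                          # outside the recalculation area
    cp_ini = cp(t1, n_ini)
    if 1 - beta_low <= cp_ini < 1 - beta:  # promising zone: enlarge
        return min(n_new(t1), n_max)
    return n_ini                           # unfavourable or already favourable

# Toy check with stub callables: CP of 0.6 lies in the zone [0.5, 0.8)
pick = n_pz(1.0, 50, 80, 150, 0.5, 2.8,
            cp=lambda t, n: 0.6, n_new=lambda t: 120)
```

Only a "promising" interim result (conditional power between the two thresholds) triggers an enlargement; clearly favourable or clearly unfavourable results leave the trial at n_ini.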

4. Evaluating the Performance of Sample Size-Recalculation Rules

When our objective centers on enhancing the effectiveness of rules for recalculating sample sizes, it becomes essential to establish appropriate criteria for evaluating their performance. Among the common evaluation criteria are the average sample size and global power, both of which take on a stochastic nature within the framework of adaptive design. Recognizing the significance of not only assessing central tendencies but also incorporating measures of variability, we need a comprehensive perspective.
The evaluation of a sample size-recalculation rule can be approached from different angles. The global perspective examines the scenario before the trial’s commencement, providing an average view of the two options: early trial termination or sample size recalculation at interim stages. However, this perspective presents challenges in interpreting a combination of performance aspects associated with both early stopping and sample size recalculation.
An alternative, conditional perspective prompts the researcher to consider how the sample size should be recalculated if the observed effect falls within the recalculation area during the interim analysis. Here, we assess the recalculation rules under the assumption that the trial continues past the interim point, i.e., t_1 ∈ [q_{1−α_0}; q_{1−α_1}), even though the specific value of t_1 remains unknown. This conditional perspective pertains to the recalculation area rather than a particular t_1 value. In this study, our focus is squarely on this conditional perspective, providing a comprehensive exploration that offers valuable insights into the optimization of sample size-recalculation methods. As a result, we explore ways to adjust sample sizes, looking at how well they perform based on the following criteria:
  • The expected conditional power, denoted as E[CP_Δ | RA].
  • The variability of the conditional power, represented as Var[CP_Δ | RA].
  • The anticipated conditional total sample size per group, expressed as E[CN_Δ | RA], which is the average size per group in the recalculation area.
  • The variability of the conditional total sample size per group, indicated by Var[CN_Δ | RA].
The assessment of performance measures encompasses a range of true standardized effect sizes, quantified by Δ = (μ_T − μ_C)/σ. Notably, these evaluation criteria can be unified into a comprehensive performance metric, known as the conditional performance score (CPS). While we provide an overview of this score’s essential characteristics, the CPS is composed of four distinct components: two components for evaluating the location and variability of the conditional power (e_CP(Δ) and v_CP(Δ)) and two components for the location and variability of the conditional sample size (e_CN(Δ) and v_CN(Δ)).
The fundamental concept underlying the location components is to compare expected values against predefined target values. When the fixed-design sample size does not exceed the maximum allowed sample size and the effect size is non-zero, the initially planned power value of 1 − β serves as the target for the conditional power. Conversely, when circumstances differ, such as when the trial might not merit continuation to the second stage, the target values shift to the first stage’s sample size n_1 and the global one-sided significance level α.
In terms of the variation components, the observed variation is juxtaposed against the maximum feasible variation within the specific context. Each of the four score components can assume values ranging from zero to one, permitting independent evaluation. Moreover, these components can be amalgamated into two sub-scores: the conditional power sub-score S C P ( Δ ) and the conditional sample size sub-score S C N ( Δ ) , or be consolidated into a singular performance value, denoted by the conditional performance score (CPS). Mathematically, this score is calculated as
CPS(Δ) = (1/2) · [S_CP(Δ) + S_CN(Δ)].
In evaluating all (sub-)scores and components, it is important to note that higher values are indicative of superior performance. By choosing the weights, we can assign different levels of importance to the components included in the two sub-scores. The components of CPS(Δ) can be derived as follows [25]. The conditional power sub-score is
S_CP(Δ) = γ_loc · e_CP(Δ) + γ_var · v_CP(Δ), with γ_loc + γ_var = 1.
Here, γ_loc and γ_var represent the weights assigned to the location component e_CP and the variation component v_CP, respectively. A similar approach is taken for the conditional sample size sub-score. We opt for equal weighting of all components, i.e., γ_loc = γ_var = 0.5, while the results for γ_loc = 0.2 are also computed and depicted in Figures S20–S31 of the supplementary text. The fundamental concept underpinning the two location components, e_CP and e_CN, is to compare the calculated average conditional power and the calculated average conditional sample size against predefined target values. The target value for the sample size is specified as
N_target = n_fix, if n_fix ≤ n_max and Δ ≠ 0; n_1, if n_fix > n_max or Δ = 0,
where n_fix is the required sample size of the corresponding fixed design. The target value for the conditional power is as follows:
CP_target = 1 − β, if n_fix ≤ n_max and Δ ≠ 0; α, if n_fix > n_max or Δ = 0,
where α is the global one-sided significance level. For the conditional sample size, the sub-score is defined as

S_CN(Δ) = γ_loc · (1 − |E[CN_Δ^RA(T_1)] − N_target| / (n_max − n_1)) + γ_var · (1 − √(Var[CN_Δ^RA(T_1)] / Var_max[CN_Δ^RA(T_1)])),

so that e_CN(Δ) = 1 − |E[CN_Δ^RA(T_1)] − N_target| / (n_max − n_1) and v_CN(Δ) = 1 − √(Var[CN_Δ^RA(T_1)] / Var_max[CN_Δ^RA(T_1)]), with γ_loc + γ_var = 1. The conditional power sub-score S_CP(Δ) is constructed analogously from e_CP(Δ) and v_CP(Δ), using CP_target as the target value. Both sub-scores, S_CN and S_CP, have a range of [0, 1]; they attain larger values when the respective variations are small and the predefined target values are closely approached. The point-wise total conditional performance score is then

CPS(Δ) = (1/2) · [S_CP(Δ) + S_CN(Δ)].
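As a numerical illustration of how the score components combine, the sketch below computes a location and a variation component for hypothetical conditional power and conditional sample size values. All numbers, the variance bounds and the helper name are our assumptions for illustration, not values taken from the paper:

```python
import numpy as np

def conditional_score(values, target, max_range, var_max, gamma_loc=0.5, gamma_var=0.5):
    """Generic sub-score: the location component compares the mean of the
    conditional quantity to its target; the variation component compares the
    observed spread to a maximum feasible spread.  Helper name is ours."""
    e = 1 - abs(np.mean(values) - target) / max_range       # location component
    v = 1 - np.sqrt(np.var(values, ddof=1) / var_max)       # variation component
    return gamma_loc * e + gamma_var * v

# Illustrative (hypothetical) recalculated sample sizes per group observed
# in the recalculation area, with n1 = 50, n_max = 200, n_fix = 120.
n1, n_max, n_fix = 50, 200, 120
cn = np.array([110.0, 130.0, 120.0, 125.0, 115.0])
var_max_cn = ((n_max - n1) / 2) ** 2    # one possible bound on the spread (assumption)
s_cn = conditional_score(cn, target=n_fix, max_range=n_max - n1, var_max=var_max_cn)

# Illustrative conditional power values, target 1 - beta = 0.8,
# with 0.25 as the maximum variance of a [0, 1]-valued quantity.
cp = np.array([0.78, 0.82, 0.80, 0.79, 0.81])
s_cp = conditional_score(cp, target=0.8, max_range=1.0, var_max=0.25)

cps = 0.5 * (s_cp + s_cn)               # CPS = (1/2)·[S_CP + S_CN]
print(round(cps, 3))                    # 0.966 for these numbers
```

Both sub-scores stay in [0, 1] here because the means hit their targets exactly and the spreads are well below their bounds.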

Resampling Approach for Recalculating Sample Size

To account for potential variations in the interim effect, we may use resampling as a technique to assess the fluctuation of a random variable. The resampling procedure is applied only if the observed interim test statistic falls within the designated recalculation area, indicating that a second stage is proposed. In this case, B test statistics are resampled from a normal distribution with the observed interim test statistic as mean and a standard deviation of one. All resampled test statistics, including those outside the recalculation area, contribute to the computation of the final value for the second-stage sample size. This process unfolds as follows: for each of the B resampled test statistics, the second-stage sample size is recalculated, resulting in an array of sample sizes ñ_(*),1(t_1), ñ_(*),2(t_1), ñ_(*),3(t_1), …, ñ_(*),B(t_1), where (*) indexes the initial sample size-recalculation rule. It is important to acknowledge that some of these “recalculated” sample sizes may indeed correspond to the initial sample size n_1. In the final step, a location metric summarizes the entire set of B sample sizes, ultimately determining the definitive value for the second-stage sample size. This methodology allows us to incorporate the variability of the interim effect into the decision-making process for the sample size adjustment. In our exploration, we distinguish between two different approaches:
  • The simpler approach involves setting the second stage sample size as the average of all the resampled sample sizes:
    n_(*)^R1(t_1) = (1/B) · Σ_{b=1}^{B} ñ_(*),b(t_1)
    We refer to this method as the R1 approach.
  • Considering that the initial sample size of the first stage can greatly influence the resampled sample sizes, we contemplate an alternative. Here, we compute the final second-stage sample size as the mean plus the standard deviation of the resampled sample sizes:
    n_(*)^R2(t_1) = (1/B) · Σ_{b=1}^{B} ñ_(*),b(t_1) + √( (1/(B−1)) · Σ_{b=1}^{B} ( ñ_(*),b(t_1) − (1/B) · Σ_{b′=1}^{B} ñ_(*),b′(t_1) )² )
The inclusion of the standard deviation means that we tend to select larger sample sizes. We term this the R2 approach. It is worth noting that instead of the standard deviation, other measures of the distribution of resampled sample sizes (such as predefined quantiles) could also be used to achieve a similar effect. As such, the R2 approach serves as just one illustrative possibility within this context.
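The R1 and R2 steps described above can be sketched as follows. The recalculation rule `toy_rule`, its cap and the seed are hypothetical placeholders for an actual conditional-power-based rule, not the paper's implementation:

```python
import numpy as np

def recalc_resampled(t1, recalc_rule, B=5000, method="R1", rng=None):
    """Resampling-based sample size recalculation (sketch of R1/R2).
    `recalc_rule` maps a test statistic to a second-stage sample size."""
    rng = np.random.default_rng(rng)
    # B test statistics drawn around the observed interim statistic t1
    t_star = rng.normal(loc=t1, scale=1.0, size=B)
    n_star = np.array([recalc_rule(t) for t in t_star])
    if method == "R1":                        # R1: mean of resampled sizes
        return float(np.mean(n_star))
    # R2: mean plus standard deviation of the resampled sizes
    return float(np.mean(n_star) + np.std(n_star, ddof=1))

# Hypothetical rule: the second-stage size grows with the (non-negative)
# interim statistic and is capped at n_max = 200.
def toy_rule(t, n1=50, n_max=200):
    return min(n_max, n1 + 50 * max(t, 0.0))

n_R1 = recalc_resampled(1.0, toy_rule, method="R1", rng=42)
n_R2 = recalc_resampled(1.0, toy_rule, method="R2", rng=42)
print(n_R1 < n_R2)   # R2 adds the spread, so it selects larger sizes
```

With the same resamples, R2 always returns at least the R1 value, which is exactly the conservatism the text attributes to it.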

5. Simulation Study for Evaluating the Performance of Sample Size-Recalculation Approaches

To comprehensively assess the effectiveness of the various sample size-recalculation methods outlined earlier, a simulation study is conducted. These approaches are evaluated through specific performance measures, including the novel conditional performance score with parameter values γ_loc = γ_var = 0.5.
In this simulation study, we work with groups of equal sizes, with the first stage consisting of n_1 = 50 participants. The initial second-stage sample size per group is established as n_2 = 50, culminating in an initial total sample size of n_ini = n_1 + n_2 = 100. The maximum feasible sample size per group is set at four times the interim sample size n_1, resulting in n_max = 200. The weights for the inverse normal combination test are assigned equally, w_1 = w_2 = √0.5. We fixed the global one-sided significance level at α = 0.025, while the local significance levels are calculated according to Pocock’s method [3], specifically, α_1 = α_2 = 0.0147. Additionally, a futility bound of α_0 = 0.5 is established.
A desired level of conditional power is set at 1 − β = 0.8. For the ROCP, the lower bound for conditional power (1 − β_low^ROCP) is fixed at 0.6. Similarly, for the promising zone (PZ), a lower bound (1 − β_low^PZ) is fixed at 0.36, following the approach proposed by Mehta and Pocock [24]. To explore the performance of these designs across various scenarios, we considered a range of underlying true standardized treatment effects Δ ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5}. To ensure statistical robustness, each scenario underwent 10,000 simulation iterations. Notably, for the resampling methods, a total of B = 5000 samples are used. For comparison, a group sequential design (GSD) with n_1 = n_2 = 50 and the same decision boundaries as described above is also simulated and evaluated side by side. This extensive simulation facilitated a comprehensive exploration of the performance characteristics of the various sample size-adjustment strategies under a wide array of circumstances.
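As a brief illustration of the inverse normal combination test underlying this design, assuming equal weights w_1 = w_2 = √0.5 (so that w_1² + w_2² = 1) and the Pocock-type local level quoted above; the stage-wise statistics are made-up numbers:

```python
from statistics import NormalDist

def inverse_normal_combination(z1, z2, w1=0.5 ** 0.5, w2=0.5 ** 0.5):
    """Combine the stage-wise test statistics z1, z2 with predefined
    weights satisfying w1**2 + w2**2 = 1 (equal-stage case here)."""
    return w1 * z1 + w2 * z2

# Pocock-type local one-sided level for two stages at global alpha = 0.025
alpha_local = 0.0147
crit = NormalDist().inv_cdf(1 - alpha_local)   # common critical value

z = inverse_normal_combination(1.6, 1.8)       # hypothetical stage statistics
print(z > crit)                                # reject H0 at the second stage?
```

Because the weights are fixed in advance, the combined statistic remains standard normal under H0 even after a data-driven sample size recalculation, which is what licenses the adaptive design.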
For the simulation, we initially generated a treatment group from the MEM and a control group from a normal distribution with n = 50, μ = 0.3 and σ = 1. We also generated both the treatment and control groups from the MEM and ran the simulation.
In the case of the mixture distribution, we generated treatment and control groups of size n = 50 from a normal mixture. One component is the standard normal distribution, N(0, 1), while the other is also normal with μ = 0.5 and σ = 1, i.e., N(0.5, 1). The mixing proportion is chosen as π = 0.1, 0.2 and 0.5.
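A minimal sketch of the two data-generating mechanisms follows, assuming a classical additive measurement error model with a noise level tau of our choosing; the paper’s exact MEM parameterization is not restated here:

```python
import numpy as np

rng = np.random.default_rng(2025)
n, mu, sigma = 50, 0.3, 1.0

def normal_mixture(n, pi, rng):
    """Draw from the two-component mixture pi*N(0.5, 1) + (1 - pi)*N(0, 1)
    used for the heterogeneity (MDM) scenarios."""
    component = rng.random(n) < pi
    return np.where(component, rng.normal(0.5, 1.0, n), rng.normal(0.0, 1.0, n))

def with_measurement_error(x, tau, rng):
    """Classical additive measurement error (sketch): the observed value is
    the true value plus N(0, tau^2) noise; tau = 0.5 is our assumption."""
    return x + rng.normal(0.0, tau, size=x.shape)

treatment = with_measurement_error(rng.normal(mu, sigma, n), tau=0.5, rng=rng)
control = rng.normal(0.0, sigma, n)
mixture_sample = normal_mixture(n, pi=0.5, rng=rng)
```

Feeding such samples through the two-stage test for each of the 10,000 iterations reproduces the structure, if not the exact numbers, of the scenarios above.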
First, the results are discussed with the help of the conditional performance score (CPS). A method with a higher CPS is considered a better performer. For the standard sample size recalculation without resampling, one can see from Table 1 that the group sequential design performed relatively well, because there is no variation in its recalculated sample sizes. If we assess the performance of the R1 approach, which uses the mean as the summary location measure, it performs better in terms of the CPS than the respective standard sample size-recalculation approach without resampling for all true standardized effect sizes Δ. The reason is that the resampling (R1) approach reduces the variability in recalculated sample sizes for all Δ.
Furthermore, in the R1 approach, the OCP performed either better than the GSD or had a similar conditional performance score compared to the ROCP and PZ (Table 1, columns 3 and 4), due to the better conditional power of the OCP (Table 2 and Table 3). It should be noted that there is an observable tendency towards inflating the first stage’s sample size, denoted as the mean of n_1, when recalculating the sample size via the R1 approach. This phenomenon arises because, under the R1 approach, test statistics that fall outside the recalculation area (RA) may undergo resampling, even though the interim test statistic falls within the RA. To address this concern, the R2 approach has been developed. This alternative relies on a distinct summary location measure, calculated as the mean of the resampled sample sizes plus their standard deviation. As can be seen in Table 4, the R2 approach tends to move towards the GSD due to n_max acting as the sample size boundary.
In the R2 approach, the ROCP performed well for almost all Δ in terms of the CPS (Table 1, column 5) compared to the OCP and PZ. Overall, the R2 approach secured a distinguished position against the original sample size recalculation without resampling. However, the GSD outperforms the different designs in the R2 approach for all Δ. Note that the CPS of the respective designs in the R2 approach is worse than in the R1 approach for the same values of Δ (0.0, 0.1, 0.2, 0.3, 0.4, 0.5). The reason for this poorer behavior is that the sample size-recalculation designs in the R2 approach do not approach the target sample sizes effectively, which can be seen in the poor values of the conditional sample size sub-score S_CN (Table 3 and Table 4). The results also indicate that a larger mixing proportion tends to favor the R2 approach, while the R1 approach performs better when the mixing proportion is small. The results for mixing proportion π = 0.2 are given in the supplementary text.
Overall, the results indicate that the R1 approach is the most effective method for improving CPS across various designs and effect sizes, while the R2 approach offers some advantages but does not consistently outperform the original methods or the R1 approach. The R1 method’s ability to maintain a balanced sample size contributes to its superior performance, while the R2 approach’s tendency to approach maximum sample sizes may hinder its effectiveness, particularly for smaller true underlying effect sizes.
For the standard sample size recalculation, the GSD outperformed the rest (Table 5), because there is no variation in its recalculated sample sizes. The R1 approach performed well compared to the standard sample size recalculation for all Δ values in all designs; the resampling (R1) approach reduces the variability in recalculated sample sizes for all Δ (Table 6 vs. Table 7). Furthermore, in the R1 approach, the ROCP and PZ have approximately the same CPS as the GSD, while the OCP has a smaller CPS than the GSD, due to the smaller conditional power of the OCP. When recalculating the sample size using the R1 approach, there is a tendency to inflate the initial sample size n_1 (7), because test statistics that fall outside the recalculation area (RA) may undergo resampling even though the interim test statistic falls within the RA. To overcome this issue, the R2 approach is introduced, which relies on the mean as well as the standard deviation of the resampled sample sizes as the summary location measure; the effect of this can be seen in (8). The R2 approach tends to recalculate the sample size up to the maximum allowed sample size n_max. Overall, the R2 approach secured a distinct position against the R1 approach and the standard simulation. However, the GSD in the standard simulation outperformed the various designs in the R2 approach for all Δ. The detailed simulation results are presented in Table 6, Table 7 and Table 8. The box plots for the second-stage conditional power and recalculated sample size for the standard simulation, the R1 approach and the R2 approach are illustrated in Figures S1–S12, respectively.

6. Results from Measurement Error Model (MEM)

From Table 9, one can see that the GSD is the performance winner in standard simulation without resampling. This is due to the fact that recalculated sample sizes have no variation. Moreover, it is interesting to note that the CPS in standard simulation remained the same for Δ = { 0 , 0.1 , 0.2 , 0.3 } . In the R1 approach, the ROCP and PZ have approximately the same CPS, while the OCP has a smaller CPS value. Furthermore, the performance of the R1 approach is better against the standard simulation. This is due to decreased variation by the R1 approach in recalculated sample size. The performance of the R1 approach and R2 approach is the same for all Δ . This is mainly due to the generation of treatment and control groups from the MEM. The detailed simulation results are presented in Table 10, Table 11 and Table 12. The box plots for second stage conditional power and recalculated sample size for standard simulation, R1 approach and R2 approach are illustrated in Figures S13–S17, respectively.
Table 13 lists the CPS of the standard, R1 and R2 approaches. In the standard sample size recalculation (Table 14), the GSD performs well against all other designs for all Δ, owing to the absence of variation in the recalculated sample size. Comparing the designs in the R1 approach (Table 15) with the GSD, one can see that the ROCP and PZ perform approximately as well as the GSD, while the OCP has a slightly smaller score, due to the better conditional power of the ROCP and PZ against the OCP. The R1 approach demonstrates competitive performance across different Δ values; it reduces variability in recalculated sample sizes, contributing to consistent CPSs. The R2 approach tends to converge toward the performance of the GSD, possibly due to the sample size boundary imposed by n_max (Table 16); this behavior can be seen in the CPSs for certain scenarios. The CPS of the R1 approach is higher than that of the R2 approach. The detailed simulation results are presented in Table 14, Table 15 and Table 16. The box plots for the second-stage conditional power and recalculated sample size for the standard simulation, the R1 approach and the R2 approach are illustrated in Figures S18 and S19, respectively.
Previously, we assumed γ_loc = γ_var = 0.5. To assess the effect of unequal weights, we assumed γ_loc = 0.2 and γ_var = 0.8 for the MDM with π = 0.5; the resulting scores are reported in Table 17 for four different design strategies (OCP, ROCP, PZ and GSD) across a range of Δ values from 0.0 to 0.5. The evaluation uses three different approaches, i.e., R1, R2 and standard. The table shows that the standard approach yields the lowest CPS across all designs and Δ values, underscoring the significant effect of unequal weighting on the location parameter. This indicates that the standard method is more sensitive to changes in the location weight. In contrast, both the R1 and R2 approaches outperform the standard method in terms of the CPS, suggesting greater robustness to variations in the location weighting. Among the two, R1 generally performs better than R2, particularly in designs such as the ROCP, PZ and GSD. However, for the OCP design, R1 and R2 produce comparable results, with R2 occasionally achieving slightly higher scores. For larger values of Δ, the performance of R2 improves when a smaller value of γ_loc is used, whereas R1 performs better with a larger γ_loc.

7. An Illustrative Clinical Trial Example

In the context of a clinical trial conducted by Bowden and Mander [26], we investigate the effectiveness of a treatment labeled T and a placebo labeled P in alleviating pain among osteoarthritis patients. The trial aims to assess pain relief over a 2-week period compared to the baseline. Pain relief levels are measured using the McGill pain scale of Melzack and Torgerson [27], which ranges from 0 (indicating no pain) to 50 (indicating the highest pain level). In this trial, the dataset is generated through the MDM with both component distributions normal and a mixing proportion of π = 0.5. Moreover, the pain relief values may be subject to measurement errors and may follow a MEM.
To enhance comprehension of the proposed methods, we have adapted the original clinical trial design based on recommendations from Herrmann and Rauch [28]. Initially, a pilot study indicated the superiority of the new treatment. However, further evidence is needed to quantify its actual effect, considering both MEM and MDM situations. As a result, the following hypotheses are formulated:
H_0: μ^T_{2 weeks} − μ^P_{2 weeks} ≤ 0 vs. H_1: μ^T_{2 weeks} − μ^P_{2 weeks} > 0.
Here, μ^T_{2 weeks} represents the expected pain relief after 2 weeks for the new treatment, while μ^P_{2 weeks} signifies the same for the placebo. Given the potential for measurement errors, the clinical trial employs an adaptive two-stage design, allowing adjustments to the sample size during an interim analysis. Specifically, n_1 = n_2 = 50 is chosen, and the maximum sample size is capped at n_max = 200, recognizing the need for a larger sample size to account for potential measurement inaccuracies; the results from Table 2, Table 3 and Table 4 and Table 10, Table 11 and Table 12 are then utilized. Additionally, the trial incorporates a binding futility stop bound (α_0 = 0.5), a global significance level (α = 0.025) and locally adjusted significance levels using the Pocock method, with adjustments made to account for possible deviations from a strictly normal distribution.
Suppose that during an interim analysis, an interim effect size of Δ = 0.2 is observed, corresponding to an interim test statistic of T 1 = 1 . The focus lies in assessing the conditional performance differences among the OCP, ROCP and PZ approaches, both with and without the R1 resampling approach while taking into account both the MEM and the MDM. The evaluation assigns equal weight to conditional performance score components, with the understanding that recalibrating the sample size should be driven by reasonable adjustments considering the complex data distribution.
While the primary emphasis is on the performance at Δ = 0.2, neighboring effect sizes (Δ = 0.1 and 0.3) are also considered. For the mixed distribution, the performance metrics are presented in Table 1, Table 2, Table 3 and Table 4; for measurement errors, the performance metrics are presented in Table 9, along with Table 10, Table 11 and Table 12. Without resampling and with an interim effect size of Δ = 0.2, the OCP approach suggests a maximum sample size of 164. Conversely, the ROCP approach implies no need for an increased sample size or a second trial stage, taking into account the complexity of the data distribution. The PZ approach advocates adhering to the total sample size of 118, with potential adjustments for measurement errors and the mixed distribution. However, upon implementing the R1 resampling approach, trial continuation is recommended for all three approaches (OCP, ROCP, PZ), with total sample sizes ranging from at least 75 to a maximum of 130.
On comparing overall conditional performance, as quantified by the conditional performance score, the R1 approach outperforms the original approach across all three recalculation rules and the considered effect sizes, especially given the mixed distribution and measurement errors. This improvement primarily results from variance reduction in the conditional sample size and power due to the resampling approach, which is particularly relevant when dealing with complex data distributions (as illustrated in Table 2 and Table 3). For effect sizes of 0.1 and 0.2 , the ROCP R1 resampling approach exhibits the best performance, while for an effect size of 0.3 , the OCP R1 resampling approach takes the lead. This shows the adaptability of these approaches to different effect sizes within the mixed distribution.
For those interested in global performance, the OCP R1 approach attains greater global power than the ROCP and PZ R1 approaches across the considered effect sizes, due to larger sample sizes and robust statistical methods. In conclusion, the integration of resampling techniques accounts for potential measurement errors and the MDM, thereby improving the reliability of the sample size-determination process, especially when dealing with complex and non-normal data distributions.

8. Conclusions

Integrating resampling techniques into established sample size-recalculation rules enhances the robustness of recalculation approaches, leading to significantly improved performance across various individual characteristics and conditional performance scores. This improvement is primarily due to the decreased variance of the conditional sample size and conditional power. It is important to note that the reference values and the weighting scheme for the conditional performance score can also be adjusted. Additionally, the fluctuation of the CPS around Δ = 0.3 is observed to be a common trait of recalculation rules. This pattern emerges because, for small effects, increasing the sample size is not advantageous, while for medium effects and beyond, an increase becomes reasonable.
One could argue against increasing the sample size as the interim test statistic increases, and similarly against substantial jumps in the sample size function, as this implies a drastic change in sample size with minimal alterations in the test statistic. The resampling approach, where conventional rules exhibit issues, presents a compromise between these extremes. It can be contended that any recalculation rule with significant jumps is not inherently reasonable, and thus, the compromise offered by the resampling approach might not be optimal either. A general recommendation suggests configuring design settings to avoid these jumps, such as by setting a smaller maximal sample size n m a x or a larger local significance level ( α 1 + α 2 ). While the resampling approaches surpass the original sample size-recalculation rules concerning the conditional performance score, it does not imply the resulting sample sizes are point-wise optimal. Instead, it mitigates the average risk of selecting a completely incorrect sample size, leading to favorable average outcomes. However, in specific cases, this approach might not be suitable. This characteristic is not a drawback for the resampling approach but is generally applicable to sample size-recalculation rules. Resampling-based sample size-recalculation rules provide a favorable approach to balancing the cost-benefit ratio. By reducing the average deviation from the ideal sample size, the method effectively navigates the costs and benefits of a study, achieving an optimal trade-off. Notably, the similarity between resampling procedures and sample size recalculation for group sequential designs is remarkable. To be more specific, the promising zone approach in conjunction with resampling closely approximates a group sequential design. This arises because the promising zone approach introduces significant sample size adjustments within a narrow range of interim effects, minimally impacting the smoothed sample size curve. 
This observation further supports the notion that group sequential designs hold a distinctive position among designs incorporating sample size recalculation. It is noticed from the results that a larger mixing proportion tends to favor the R2 approach in the presence of heterogeneity. In the case of measurement error, both the R1 and R2 approaches perform similarly when measurement error is assumed in both the treatment and control groups. However, when measurement error is present only in the treatment group, the R1 approach demonstrates better performance. The findings suggest that the R1 approach is the most reliable method for achieving optimal conditional performance scores, while the R2 approach, despite its improvements, does not consistently match the performance of the R1 method or the standard simulation.
Nevertheless, while sample size recalculation based on group sequential designs depends solely on the interim test statistic for early trial termination, integrating resampling into recalculation rules permits basing sample size adjustments on conditional power considerations. This integration effectively mitigates drastic fluctuations in sample size. Consequently, resampling enhances the robustness of sample size-recalculation rules, effectively addressing the inherent randomness in observed interim test statistics.
Considering the resampling techniques outlined in Formulas (18) and (19), the current work can be extended to studies involving various types of endpoints, as long as the test statistics exhibit approximate normal distribution characteristics. For example, in studies with binary endpoints, one can readily apply the resampling approach by utilizing the normal approximation to the binomial. Similarly, to deal with time-to-event endpoints, one can explore the possibility of employing the resampling approach through the utilization of the logrank test within an adaptive design framework. As an alternative path to enhancing the efficacy of sample size recalculation, consider the development of a more direct approach. This approach could involve formulating a sample size-recalculation function that is specifically designed to optimize the conditional performance score. Also, one can explore the idea of implementing the alternative approach within a numerical-constrained optimization framework. This would involve setting up a mathematical optimization problem where the objective is to maximize the conditional performance score while adhering to certain constraints. It is worth mentioning that if the measurement error model is misspecified, it can have several significant consequences, depending on the context of the analysis. For instance, the resulting estimates may be biased and associated with large standard errors, leading to invalid statistical inference. This misspecification can also impact power analysis, mislead model selection criteria and ultimately result in poor predictive performance. Given the seriousness of these implications, the present computational framework can be studied in model misspecification scenarios.
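For instance, a binary-endpoint version could feed a normal-approximation statistic for two proportions into the same resampling scheme. The following is a sketch under that assumption, not the authors’ implementation; the counts are made-up numbers:

```python
import math

def z_two_proportions(x_t, n_t, x_c, n_c):
    """Normal-approximation (pooled) test statistic for comparing two
    proportions -- one way to obtain an approximately normal statistic
    that the resampling approach of Formulas (18) and (19) could reuse."""
    p_t, p_c = x_t / n_t, x_c / n_c
    p = (x_t + x_c) / (n_t + n_c)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n_t + 1 / n_c))  # pooled standard error
    return (p_t - p_c) / se

# Hypothetical interim counts: 30/50 responders vs. 20/50 responders
z = z_two_proportions(30, 50, 20, 50)
```

Since z is approximately standard normal under H0, it can play the role of the interim test statistic t_1 in the resampling step without further changes.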

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/stats8020031/s1. Additional figures and tables are given in the supplementary text. In particular, Section S1 tabulates the results assuming π = 0.2, and Section S2 depicts the box plots of the CPS for the MDM and MEM.

Author Contributions

Conceptualization, H.F. and S.A.; methodology, H.F. and I.S.; software, H.F. and S.A.; validation, S.A., I.S. and M.M.A.A.; formal analysis, H.F. and S.A.; investigation, H.F. and S.A.; resources, I.S., I.A.N. and S.A.; data curation, S.A. and I.S.; writing—original draft preparation, H.F., S.A. and I.A.N.; writing—review and editing, S.A. and I.S.; visualization, H.F. and I.S.; supervision, S.A. and I.S.; project administration, S.A., I.S. and M.M.A.A.; funding acquisition, I.A.N. All authors have read and agreed to the published version of the manuscript.

Funding

The authors received no specific funding for the article.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Acknowledgments

The authors thank the two anonymous reviewers for their constructive comments.

Conflicts of Interest

The authors declare no conflict of interest.

Notations

The following notations are used in this manuscript:
E[CN_Δ^RA]: Expected value of the conditional sample size in the RA.
Var[CN_Δ^RA]: Variance of the conditional sample size in the RA.
e_CN(Δ): Location component of the conditional sample size sub-score.
v_CN(Δ): Variation component of the conditional sample size sub-score.
S_CN(Δ): Conditional sample size sub-score.
E[CP_Δ^RA]: Expected value of the conditional power in the RA.
Var[CP_Δ^RA]: Variance of the conditional power in the RA.
e_CP(Δ): Location component of the conditional power sub-score.
v_CP(Δ): Variation component of the conditional power sub-score.
S_CP(Δ): Conditional power sub-score.
CPS(Δ): Final point-wise conditional performance score.

References

  1. Chow, S.C.; Chang, M. Adaptive design methods in clinical trials—A review. Orphanet J. Rare Dis. 2008, 3, 11. [Google Scholar] [CrossRef] [PubMed]
  2. Herrmann, C.; Kluge, C.; Pilz, M.; Kieser, M.; Rauch, G. Improving sample size recalculation in adaptive clinical trials by resampling. Pharm. Stat. 2021, 20, 1035–1050. [Google Scholar] [CrossRef] [PubMed]
  3. Pocock, S.J. Group sequential methods in the design and analysis of clinical trials. Biometrika 1977, 64, 191–199. [Google Scholar] [CrossRef]
  4. O’Brien, P.C.; Fleming, T.R. A Multiple Testing Procedure for Clinical Trials. Biometrics 1979, 35, 549–556. [Google Scholar] [CrossRef]
  5. Denne, J.S. Sample size recalculation using conditional power. Stat. Med. 2001, 20, 2645–2660. [Google Scholar] [CrossRef]
  6. Friede, T.; Kieser, M. A comparison of methods for adaptive sample size adjustment. Stat. Med. 2001, 20, 3861–3873. [Google Scholar] [CrossRef]
  7. Charles, P.; Giraudeau, B.; Dechartres, A.; Baron, G.; Ravaud, P. Reporting of sample size calculation in randomised controlled trials. BMJ 2009, 338, b1732. [Google Scholar] [CrossRef]
  8. Hammouri, H.; Ali, M.; Alquran, M.; Alquran, A.; Abdel Muhsen, R.; Alomari, B. Adaptive Multiple Testing Procedure for Clinical Trials with Urn Allocation. Mathematics 2023, 11, 3965. [Google Scholar] [CrossRef]
  9. Kieser, M.; Friede, T. Simple procedures for blinded sample size adjustment that do not affect the type I error rate. Stat. Med. 2003, 22, 3571–3581. [Google Scholar] [CrossRef]
  10. Harden, M.; Friede, T. Sample size calculation in multi-centre clinical trials. BMC Med. Res. Methodol. 2018, 18, 156. [Google Scholar] [CrossRef]
  11. Das, S.; Mitra, K.; Mandal, M. Sample size calculation: Basic principles. Indian J. Anaesth. 2016, 60, 652. [Google Scholar] [CrossRef] [PubMed]
  12. Jennison, C.; Turnbull, B.W. Mid-course sample size modification in clinical trials based on the observed treatment effect. Stat. Med. 2003, 22, 971–993. [Google Scholar] [CrossRef] [PubMed]
  13. Pritchett, Y.L.; Menon, S.; Marchenko, O.; Antonijevic, Z.; Miller, E.; Sanchez-Kam, M.; Morgan-Bouniol, C.C.; Nguyen, H.; Prucka, W.R. Sample size re-estimation designs in confirmatory clinical trials—current state, statistical considerations, and practical guidance. Stat. Biopharm. Res. 2015, 7, 309–321. [Google Scholar] [CrossRef]
  14. Chakraborty, H.; Gu, H. A Mixed Model Approach for Intent-to-Treat Analysis in Longitudinal Clinical Trials with Missing Values. Psychiatry Res. 2009, 129, 1–9. [Google Scholar] [CrossRef]
  15. Boos, D.D.; Brownie, C. A rank-based mixed model approach to multisite clinical trials. Biometrics 1992, 48, 61–72. [Google Scholar] [CrossRef]
  16. Nagin, D.S.; Odgers, C.L. Group-based trajectory modeling in clinical research. Annu. Rev. Clin. Psychol. 2010, 6, 109–138. [Google Scholar] [CrossRef]
  17. Deng, Y.; Tu, D.; O’Callaghan, C.J.; Liu, G.; Xu, W. Two-stage multivariate Mendelian randomization on multiple outcomes with mixed distributions. Stat. Methods Med. Res. 2022, 32, 1543–1558. [Google Scholar] [CrossRef]
  18. Spanbauer, C.; Sparapani, R. Nonparametric machine learning for precision medicine with longitudinal clinical trials and Bayesian additive regression trees with mixed models. Stat. Med. 2021, 40, 2665–2691. [Google Scholar] [CrossRef]
  19. Liang, H.; Wu, H.; Carroll, R.J. The relationship between virologic and immunologic responses in AIDS clinical research using mixed-effects varying-coefficient models with measurement error. Biostatistics 2003, 4, 297–312. [Google Scholar] [CrossRef]
  20. Morgan, T.M.; Elashoff, R.M. Effect of covariate measurement error in randomized clinical trials. Stat. Med. 1987, 6, 31–41. [Google Scholar] [CrossRef]
  21. Wang, N.; Lin, X.; Gutierrez, R.G.; Carroll, R.J. Bias analysis and SIMEX approach in generalized linear mixed measurement error models. J. Am. Stat. Assoc. 1998, 93, 249–261. [Google Scholar] [CrossRef]
  22. Yang, Y.; Li, G.; Tong, T. Corrected empirical likelihood for a class of generalized linear measurement error models. Sci. China Math. 2015, 58, 1523–1536. [Google Scholar] [CrossRef]
  23. Brakenhoff, T.B.; Mitroiu, M.; Keogh, R.H.; Moons, K.G.; Groenwold, R.H.; van Smeden, M. Measurement error is often neglected in medical literature: A systematic review. J. Clin. Epidemiol. 2018, 98, 89–97. [Google Scholar] [CrossRef] [PubMed]
  24. Mehta, C.R.; Pocock, S.J. Adaptive increase in sample size when interim results are promising: A practical guide with examples. Stat. Med. 2011, 30, 3267–3284. [Google Scholar] [CrossRef]
  25. Herrmann, C.; Pilz, M.; Kieser, M.; Rauch, G. A new conditional performance score for the evaluation of adaptive group sequential designs with sample size recalculation. Stat. Med. 2020, 39, 2067–2100. [Google Scholar] [CrossRef]
  26. Bowden, J.; Mander, A. A review and re-interpretation of a group-sequential approach to sample size re-estimation in two-stage trials. Pharm. Stat. 2014, 13, 163–172. [Google Scholar] [CrossRef]
  27. Melzack, R.; Torgerson, W.S. On the language of pain. J. Am. Soc. Anesthesiol. 1971, 34, 50–59. [Google Scholar] [CrossRef]
  28. Herrmann, C.; Rauch, G. Smoothing corrections for improving sample size recalculation rules in adaptive group sequential study designs. Methods Inf. Med. 2021, 60, 001–008. [Google Scholar] [CrossRef]
Table 1. Conditional power score with both treatment and control generated from MDM with π = 0.5 .
Δ | Design | Standard Simulation | R1 Approach | R2 Approach
0 | OCP | 0.3780 | 0.4736 | 0.5082
0 | ROCP | 0.4255 | 0.6096 | 0.6604
0 | PZ | 0.4978 | 0.6511 | 0.6684
0 | GSD | 0.6741 | 0.7764 | 0.7764
0.1 | OCP | 0.3780 | 0.4300 | 0.4654
0.1 | ROCP | 0.4255 | 0.5403 | 0.6174
0.1 | PZ | 0.4978 | 0.5954 | 0.6282
0.1 | GSD | 0.6741 | 0.7422 | 0.7422
0.2 | OCP | 0.3780 | 0.3981 | 0.4306
0.2 | ROCP | 0.4255 | 0.4798 | 0.5820
0.2 | PZ | 0.4978 | 0.5490 | 0.5943
0.2 | GSD | 0.6741 | 0.7105 | 0.7104
0.3 | OCP | 0.6242 | 0.6208 | 0.6923
0.3 | ROCP | 0.4072 | 0.3898 | 0.6232
0.3 | PZ | 0.5381 | 0.5272 | 0.6521
0.3 | GSD | 0.6173 | 0.6099 | 0.6099
0.4 | OCP | 0.5279 | 0.5522 | 0.6015
0.4 | ROCP | 0.5257 | 0.5443 | 0.6878
0.4 | PZ | 0.6069 | 0.6222 | 0.7001
0.4 | GSD | 0.7443 | 0.7562 | 0.7562
0.5 | OCP | 0.4685 | 0.5407 | 0.5836
0.5 | ROCP | 0.4662 | 0.5221 | 0.6644
0.5 | PZ | 0.5475 | 0.5923 | 0.6740
0.5 | GSD | 0.6850 | 0.7212 | 0.7212
Table 2. Standard simulation with both treatment and control generated from MDM with π = 0.5 .
Δ | Design | E[CP_Δ^RA] | e_CP(Δ) | Var(CP_Δ^RA) | v_CP(Δ) | S_CP(Δ) | E[CN_Δ^RA] | e_CN(Δ) | Var(CN_Δ^RA) | v_CN(Δ) | S_CN(Δ) | CPS(Δ)
0 | OCP | 0.5421 | 0.4696 | 0.0924 | 0.3922 | 0.4309 | 167.0027 | 0.2200 | 1825.7502 | 0.4303 | 0.3251 | 0.3780
0 | ROCP | 0.4451 | 0.5691 | 0.1504 | 0.2244 | 0.3968 | 102.5864 | 0.6494 | 3087.8035 | 0.2591 | 0.4543 | 0.4255
0 | PZ | 0.4626 | 0.5512 | 0.1244 | 0.2946 | 0.4229 | 117.4977 | 0.5500 | 921.0116 | 0.5954 | 0.5727 | 0.4978
0 | GSD | 0.3869 | 0.6288 | 0.0897 | 0.4010 | 0.5149 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.6741
0.1 | OCP | 0.5421 | 0.4696 | 0.0924 | 0.3922 | 0.4309 | 167.0027 | 0.2200 | 1825.7502 | 0.4303 | 0.3251 | 0.3780
0.1 | ROCP | 0.4451 | 0.5691 | 0.1504 | 0.2244 | 0.3968 | 102.5864 | 0.6494 | 3087.8035 | 0.2591 | 0.4543 | 0.4255
0.1 | PZ | 0.4626 | 0.5512 | 0.1244 | 0.2946 | 0.4229 | 117.4977 | 0.5500 | 921.0116 | 0.5954 | 0.5727 | 0.4978
0.1 | GSD | 0.3869 | 0.6288 | 0.0897 | 0.4010 | 0.5149 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.6741
0.2 | OCP | 0.5421 | 0.4696 | 0.0924 | 0.3922 | 0.4309 | 167.0027 | 0.2200 | 1825.7502 | 0.4303 | 0.3251 | 0.3780
0.2 | ROCP | 0.4451 | 0.5691 | 0.1504 | 0.2244 | 0.3968 | 102.5864 | 0.6494 | 3087.8035 | 0.2591 | 0.4543 | 0.4255
0.2 | PZ | 0.4626 | 0.5512 | 0.1244 | 0.2946 | 0.4229 | 117.4977 | 0.5500 | 921.0116 | 0.5954 | 0.5727 | 0.4978
0.2 | GSD | 0.3869 | 0.6288 | 0.0897 | 0.4010 | 0.5149 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.6741
0.3 | OCP | 0.5421 | 0.7355 | 0.0924 | 0.3922 | 0.5639 | 167.0027 | 0.9386 | 1825.7502 | 0.4303 | 0.6845 | 0.6242
0.3 | ROCP | 0.4451 | 0.6360 | 0.1504 | 0.2244 | 0.4302 | 102.5864 | 0.5092 | 3087.8035 | 0.2591 | 0.3842 | 0.4072
0.3 | PZ | 0.4626 | 0.6539 | 0.1244 | 0.2946 | 0.4743 | 117.4977 | 0.6086 | 921.0116 | 0.5954 | 0.6020 | 0.5381
0.3 | GSD | 0.3869 | 0.5764 | 0.0897 | 0.4010 | 0.4887 | 100.0000 | 0.4919 | 0.0000 | 1.0000 | 0.7460 | 0.6173
0.4 | OCP | 0.5421 | 0.7355 | 0.0924 | 0.3922 | 0.5639 | 167.0027 | 0.2200 | 1825.7502 | 0.4303 | 0.4919 | 0.5279
0.4 | ROCP | 0.4451 | 0.6360 | 0.1504 | 0.2244 | 0.4302 | 102.5864 | 0.9831 | 3087.8035 | 0.2591 | 0.6211 | 0.5257
0.4 | PZ | 0.4626 | 0.6539 | 0.1244 | 0.2946 | 0.4743 | 117.4977 | 0.8837 | 921.0116 | 0.5954 | 0.7395 | 0.6069
0.4 | GSD | 0.3869 | 0.5764 | 0.0897 | 0.4010 | 0.4887 | 100.0000 | 0.9996 | 0.0000 | 1.0000 | 0.9998 | 0.7443
0.5 | OCP | 0.5421 | 0.7355 | 0.0924 | 0.3922 | 0.5639 | 167.0027 | 0.3160 | 1825.7502 | 0.4303 | 0.3731 | 0.4685
0.5 | ROCP | 0.4451 | 0.6360 | 0.1504 | 0.2244 | 0.4302 | 102.5864 | 0.7454 | 3087.8035 | 0.2591 | 0.5023 | 0.4662
0.5 | PZ | 0.4626 | 0.6539 | 0.1244 | 0.2946 | 0.4743 | 117.4977 | 0.6460 | 921.0116 | 0.5954 | 0.6207 | 0.5475
0.5 | GSD | 0.3869 | 0.5764 | 0.0897 | 0.4010 | 0.4887 | 100.0000 | 0.7626 | 0.0000 | 1.0000 | 0.8813 | 0.6850
Table 3. R1 approach with both treatment and control generated from MDM with π = 0.5 .
Δ | Design | E[CP_Δ^RA] | e_CP(Δ) | Var(CP_Δ^RA) | v_CP(Δ) | S_CP(Δ) | E[CN_Δ^RA] | e_CN(Δ) | Var(CN_Δ^RA) | v_CN(Δ) | S_CN(Δ) | CPS(Δ)
0 | OCP | 0.2609 | 0.7580 | 0.0877 | 0.4077 | 0.5828 | 192.0012 | 0.0533 | 592.1704 | 0.6756 | 0.3644 | 0.4736
0 | ROCP | 0.1579 | 0.8637 | 0.0960 | 0.3803 | 0.6220 | 73.1073 | 0.8460 | 2388.8495 | 0.3484 | 0.5972 | 0.6096
0 | PZ | 0.1801 | 0.8409 | 0.0762 | 0.4480 | 0.6445 | 107.3459 | 0.6177 | 513.5704 | 0.6979 | 0.6578 | 0.6511
0 | GSD | 0.1493 | 0.8725 | 0.0470 | 0.5666 | 0.7196 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7764
0.1 | OCP | 0.3398 | 0.6771 | 0.1023 | 0.3604 | 0.5188 | 186.2969 | 0.0914 | 940.2896 | 0.5912 | 0.3413 | 0.4300
0.1 | ROCP | 0.2322 | 0.7875 | 0.1257 | 0.2909 | 0.5392 | 81.5523 | 0.7897 | 2810.1827 | 0.2932 | 0.5414 | 0.5403
0.1 | PZ | 0.2543 | 0.7648 | 0.1018 | 0.3618 | 0.5633 | 110.4791 | 0.5968 | 657.4211 | 0.6581 | 0.6275 | 0.5954
0.1 | GSD | 0.2096 | 0.8107 | 0.0647 | 0.4913 | 0.6510 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7422
0.2 | OCP | 0.4232 | 0.5916 | 0.1058 | 0.3496 | 0.4706 | 179.2309 | 0.1385 | 1335.0099 | 0.5129 | 0.3257 | 0.3981
0.2 | ROCP | 0.3178 | 0.6997 | 0.1460 | 0.2360 | 0.4678 | 90.8845 | 0.7274 | 3112.3824 | 0.2562 | 0.4918 | 0.4798
0.2 | PZ | 0.3361 | 0.6810 | 0.1195 | 0.3088 | 0.4949 | 113.4007 | 0.5773 | 775.1167 | 0.6288 | 0.6031 | 0.5490
0.2 | GSD | 0.2784 | 0.7401 | 0.0798 | 0.4352 | 0.5876 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7105
0.3 | OCP | 0.5090 | 0.7015 | 0.0992 | 0.3701 | 0.5358 | 170.7438 | 0.9636 | 1713.0396 | 0.4482 | 0.7059 | 0.6208
0.3 | ROCP | 0.4099 | 0.5999 | 0.1524 | 0.2194 | 0.4097 | 99.7227 | 0.4901 | 3166.8077 | 0.2497 | 0.3699 | 0.3898
0.3 | PZ | 0.4269 | 0.6174 | 0.1261 | 0.2897 | 0.4535 | 116.5629 | 0.6024 | 903.0076 | 0.5994 | 0.6009 | 0.5272
0.3 | GSD | 0.3556 | 0.5442 | 0.0889 | 0.4036 | 0.4739 | 100.0000 | 0.4919 | 0.0000 | 1.0000 | 0.7460 | 0.6099
0.4 | OCP | 0.5868 | 0.7814 | 0.0829 | 0.4243 | 0.6028 | 160.4308 | 0.5975 | 1987.2072 | 0.4057 | 0.5016 | 0.5522
0.4 | ROCP | 0.5027 | 0.6951 | 0.1419 | 0.2467 | 0.4709 | 106.8062 | 0.9550 | 2911.7071 | 0.2806 | 0.6178 | 0.5443
0.4 | PZ | 0.5168 | 0.7095 | 0.1194 | 0.3089 | 0.5092 | 118.4771 | 0.8772 | 931.1456 | 0.5932 | 0.7352 | 0.6222
0.4 | GSD | 0.4365 | 0.6272 | 0.0906 | 0.3980 | 0.5126 | 100.0000 | 0.9996 | 0.0000 | 1.0000 | 0.9998 | 0.7562
0.5 | OCP | 0.6525 | 0.8487 | 0.0617 | 0.5031 | 0.6759 | 150.6506 | 0.4250 | 2120.4296 | 0.3861 | 0.4055 | 0.5407
0.5 | ROCP | 0.5832 | 0.7776 | 0.1194 | 0.3090 | 0.5433 | 111.9700 | 0.6828 | 2608.7351 | 0.3190 | 0.5009 | 0.5221
0.5 | PZ | 0.5970 | 0.7918 | 0.1023 | 0.3604 | 0.5761 | 119.8581 | 0.6302 | 961.3278 | 0.5866 | 0.6084 | 0.5923
0.5 | GSD | 0.5105 | 0.7031 | 0.0843 | 0.4192 | 0.5612 | 100.0000 | 0.7626 | 0.0000 | 1.0000 | 0.8813 | 0.7212
Table 4. R2 approach with both treatment and control generated from MDM with π = 0.5 .
Δ | Design | E[CP_Δ^RA] | e_CP(Δ) | Var(CP_Δ^RA) | v_CP(Δ) | S_CP(Δ) | E[CN_Δ^RA] | e_CN(Δ) | Var(CN_Δ^RA) | v_CN(Δ) | S_CN(Δ) | CPS(Δ)
0 | OCP | 0.2759 | 0.7427 | 0.1058 | 0.3494 | 0.5460 | 197.8248 | 0.0145 | 30.7337 | 0.9261 | 0.4703 | 0.5082
0 | ROCP | 0.1992 | 0.8213 | 0.0739 | 0.4564 | 0.6389 | 119.5676 | 0.5362 | 167.0806 | 0.8277 | 0.6819 | 0.6604
0 | PZ | 0.2027 | 0.8178 | 0.0735 | 0.4577 | 0.6378 | 129.5780 | 0.4695 | 28.7307 | 0.9285 | 0.6990 | 0.6684
0 | GSD | 0.1493 | 0.8725 | 0.0470 | 0.5666 | 0.7196 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7764
0.1 | OCP | 0.3655 | 0.6508 | 0.1290 | 0.2816 | 0.4662 | 196.9414 | 0.0204 | 46.8099 | 0.9088 | 0.4646 | 0.4654
0.1 | ROCP | 0.2752 | 0.7434 | 0.0975 | 0.3756 | 0.5595 | 122.5617 | 0.5163 | 154.5922 | 0.8342 | 0.6752 | 0.6174
0.1 | PZ | 0.2786 | 0.7399 | 0.0966 | 0.3785 | 0.5592 | 130.6722 | 0.4622 | 25.9024 | 0.9321 | 0.6972 | 0.6282
0.1 | GSD | 0.2096 | 0.8107 | 0.0647 | 0.4913 | 0.6509 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7422
0.2 | OCP | 0.4617 | 0.5521 | 0.1391 | 0.2541 | 0.4031 | 195.5099 | 0.0299 | 72.7119 | 0.8863 | 0.4581 | 0.4306
0.2 | ROCP | 0.3594 | 0.6571 | 0.1136 | 0.3259 | 0.4915 | 125.3510 | 0.4977 | 130.9209 | 0.8474 | 0.6726 | 0.5820
0.2 | PZ | 0.3626 | 0.6538 | 0.1122 | 0.3300 | 0.4919 | 131.5675 | 0.4562 | 22.1661 | 0.9372 | 0.6967 | 0.5943
0.2 | GSD | 0.2784 | 0.7401 | 0.0798 | 0.4352 | 0.5876 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7105
0.3 | OCP | 0.5627 | 0.7566 | 0.1359 | 0.2627 | 0.5097 | 193.6949 | 0.8834 | 100.3212 | 0.8665 | 0.8749 | 0.6923
0.3 | ROCP | 0.4517 | 0.6428 | 0.1197 | 0.3080 | 0.4754 | 127.7445 | 0.6769 | 102.5951 | 0.8650 | 0.7709 | 0.6232
0.3 | PZ | 0.4544 | 0.6456 | 0.1179 | 0.3133 | 0.4794 | 132.2288 | 0.7068 | 18.5098 | 0.9426 | 0.8247 | 0.6521
0.3 | GSD | 0.3556 | 0.5442 | 0.0889 | 0.4036 | 0.4739 | 100.0000 | 0.4919 | 0.0000 | 1.0000 | 0.7460 | 0.6099
0.4 | OCP | 0.6591 | 0.8555 | 0.1199 | 0.3076 | 0.5815 | 191.4547 | 0.3907 | 122.8456 | 0.8522 | 0.6214 | 0.6015
0.4 | ROCP | 0.5450 | 0.7385 | 0.1150 | 0.3217 | 0.5301 | 129.6691 | 0.8026 | 70.1434 | 0.8883 | 0.8455 | 0.6878
0.4 | PZ | 0.5471 | 0.7406 | 0.1130 | 0.3277 | 0.5342 | 132.6528 | 0.7827 | 14.5122 | 0.9492 | 0.8659 | 0.7001
0.4 | GSD | 0.4365 | 0.6272 | 0.0906 | 0.3980 | 0.5126 | 100.0000 | 0.9996 | 0.0000 | 1.0000 | 0.9998 | 0.7562
0.5 | OCP | 0.7418 | 0.9403 | 0.0945 | 0.3851 | 0.6627 | 189.0689 | 0.1688 | 143.8981 | 0.8401 | 0.5045 | 0.5836
0.5 | ROCP | 0.6276 | 0.8232 | 0.0995 | 0.3691 | 0.5961 | 130.8612 | 0.5569 | 46.9952 | 0.9086 | 0.7327 | 0.6644
0.5 | PZ | 0.6293 | 0.8249 | 0.0976 | 0.3751 | 0.5999 | 132.7406 | 0.5444 | 13.1325 | 0.9517 | 0.7480 | 0.6740
0.5 | GSD | 0.5105 | 0.7031 | 0.0843 | 0.4192 | 0.5612 | 100.0000 | 0.7626 | 0.0000 | 1.0000 | 0.8813 | 0.7212
Table 5. Conditional power score with both treatment and control generated from MDM with π = 0.1 .
Δ | Design | Standard Simulation | R1 Approach | R2 Approach
0 | OCP | 0.3781 | 0.5479 | 0.3963
0 | ROCP | 0.4160 | 0.7264 | 0.5437
0 | PZ | 0.4902 | 0.6603 | 0.5554
0 | GSD | 0.6700 | 0.6700 | 0.6700
0.1 | OCP | 0.3781 | 0.5480 | 0.3963
0.1 | ROCP | 0.4160 | 0.7266 | 0.5440
0.1 | PZ | 0.4902 | 0.6603 | 0.5554
0.1 | GSD | 0.6700 | 0.6700 | 0.6700
0.2 | OCP | 0.3781 | 0.5478 | 0.3963
0.2 | ROCP | 0.4160 | 0.7266 | 0.5439
0.2 | PZ | 0.4902 | 0.6603 | 0.5553
0.2 | GSD | 0.6700 | 0.6700 | 0.6700
0.3 | OCP | 0.6284 | 0.6485 | 0.7173
0.3 | ROCP | 0.4213 | 0.5714 | 0.6488
0.3 | PZ | 0.4902 | 0.6156 | 0.6729
0.3 | GSD | 0.6231 | 0.6231 | 0.6231
0.4 | OCP | 0.5391 | 0.6759 | 0.5904
0.4 | ROCP | 0.5298 | 0.6984 | 0.6784
0.4 | PZ | 0.6114 | 0.7391 | 0.6911
0.4 | GSD | 0.7501 | 0.7501 | 0.7501
0.5 | OCP | 0.4797 | 0.6164 | 0.5310
0.5 | ROCP | 0.4703 | 0.7069 | 0.6190
0.5 | PZ | 0.5520 | 0.6797 | 0.6317
0.5 | GSD | 0.6908 | 0.6908 | 0.6908
Table 6. Standard simulation with both treatment and control generated from MDM with π = 0.1 .
Δ | Design | E[CP_Δ^RA] | e_CP(Δ) | Var(CP_Δ^RA) | v_CP(Δ) | S_CP(Δ) | E[CN_Δ^RA] | e_CN(Δ) | Var(CN_Δ^RA) | v_CN(Δ) | S_CN(Δ) | CPS(Δ)
0 | OCP | 0.5638 | 0.4474 | 0.0877 | 0.4079 | 0.4276 | 164.9150 | 0.2339 | 1871.2579 | 0.4233 | 0.3286 | 0.3781
0 | ROCP | 0.4717 | 0.5418 | 0.1469 | 0.2335 | 0.3876 | 105.5823 | 0.6295 | 3087.6155 | 0.2592 | 0.4443 | 0.4160
0 | PZ | 0.4862 | 0.5270 | 0.1218 | 0.3019 | 0.4145 | 118.4798 | 0.5435 | 952.6482 | 0.5885 | 0.5660 | 0.4902
0 | GSD | 0.4063 | 0.6090 | 0.0887 | 0.4044 | 0.5067 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.6700
0.1 | OCP | 0.5638 | 0.4474 | 0.0877 | 0.4079 | 0.4276 | 164.9150 | 0.2339 | 1871.2579 | 0.4233 | 0.3286 | 0.3781
0.1 | ROCP | 0.4717 | 0.5418 | 0.1469 | 0.2335 | 0.3876 | 105.5823 | 0.6295 | 3087.6155 | 0.2592 | 0.4443 | 0.4160
0.1 | PZ | 0.4862 | 0.5270 | 0.1218 | 0.3019 | 0.4145 | 118.4798 | 0.5435 | 952.6482 | 0.5885 | 0.5660 | 0.4902
0.1 | GSD | 0.4063 | 0.6090 | 0.0887 | 0.4044 | 0.5067 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.6700
0.2 | OCP | 0.5638 | 0.4474 | 0.0877 | 0.4079 | 0.4276 | 164.9150 | 0.2339 | 1871.2579 | 0.4233 | 0.3286 | 0.3781
0.2 | ROCP | 0.4717 | 0.5418 | 0.1469 | 0.2335 | 0.3876 | 105.5823 | 0.6295 | 3087.6155 | 0.2592 | 0.4443 | 0.4160
0.2 | PZ | 0.4862 | 0.5270 | 0.1218 | 0.3019 | 0.4145 | 118.4798 | 0.5435 | 952.6482 | 0.5885 | 0.5660 | 0.4902
0.2 | GSD | 0.4063 | 0.6090 | 0.0887 | 0.4044 | 0.5067 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.6700
0.3 | OCP | 0.5638 | 0.7577 | 0.0877 | 0.4079 | 0.5828 | 164.9150 | 0.9247 | 1871.2579 | 0.4233 | 0.6740 | 0.6284
0.3 | ROCP | 0.4717 | 0.6633 | 0.1469 | 0.2335 | 0.4484 | 105.5823 | 0.5292 | 3087.6155 | 0.2592 | 0.3942 | 0.4213
0.3 | PZ | 0.4862 | 0.6781 | 0.1218 | 0.3019 | 0.4900 | 118.4798 | 0.6151 | 952.6482 | 0.5885 | 0.6018 | 0.4902
0.3 | GSD | 0.4063 | 0.5962 | 0.0887 | 0.4044 | 0.5003 | 100.0000 | 0.4919 | 0.0000 | 1.0000 | 0.7460 | 0.6231
0.4 | OCP | 0.5638 | 0.7577 | 0.0877 | 0.4079 | 0.5828 | 164.9150 | 0.5676 | 1871.2579 | 0.4233 | 0.4954 | 0.5391
0.4 | ROCP | 0.4717 | 0.6633 | 0.1469 | 0.2335 | 0.4484 | 105.5823 | 0.9632 | 3087.6155 | 0.2592 | 0.6112 | 0.5298
0.4 | PZ | 0.4862 | 0.6781 | 0.1218 | 0.3019 | 0.4900 | 118.4798 | 0.8772 | 952.6482 | 0.5885 | 0.7328 | 0.6114
0.4 | GSD | 0.4063 | 0.5962 | 0.0887 | 0.4045 | 0.5003 | 100.0000 | 0.9996 | 0.0000 | 1.0000 | 0.9998 | 0.7501
0.5 | OCP | 0.5638 | 0.7577 | 0.0877 | 0.4079 | 0.5828 | 164.9150 | 0.3299 | 1871.2579 | 0.4233 | 0.3766 | 0.4797
0.5 | ROCP | 0.4717 | 0.6633 | 0.1469 | 0.2335 | 0.4484 | 105.5823 | 0.7254 | 3087.6155 | 0.2592 | 0.4923 | 0.4703
0.5 | PZ | 0.4862 | 0.6781 | 0.1218 | 0.3019 | 0.4900 | 118.4798 | 0.6394 | 952.6482 | 0.5885 | 0.6140 | 0.5520
0.5 | GSD | 0.4063 | 0.5963 | 0.0887 | 0.4044 | 0.5003 | 100.0000 | 0.7626 | 0.0000 | 1.0000 | 0.8813 | 0.6908
Table 7. R1 approach with both treatment and control generated from MDM with π = 0.1 .
Δ | Design | E[CP_Δ^RA] | e_CP(Δ) | Var(CP_Δ^RA) | v_CP(Δ) | S_CP(Δ) | E[CN_Δ^RA] | e_CN(Δ) | Var(CN_Δ^RA) | v_CN(Δ) | S_CN(Δ) | CPS(Δ)
0 | OCP | 0.4993 | 0.5136 | 0.0969 | 0.3775 | 0.4455 | 129.9407 | 0.4671 | 155.9233 | 0.8335 | 0.6503 | 0.5479
0 | ROCP | 0.3276 | 0.6896 | 0.0653 | 0.4891 | 0.5894 | 79.6770 | 0.8022 | 31.8719 | 0.9247 | 0.8634 | 0.7264
0 | PZ | 0.4035 | 0.6118 | 0.0799 | 0.4347 | 0.5233 | 101.0799 | 0.6595 | 23.5975 | 0.9352 | 0.7974 | 0.6603
0 | GSD | 0.4063 | 0.6090 | 0.0887 | 0.4044 | 0.5067 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.6700
0.1 | OCP | 0.4992 | 0.5136 | 0.0969 | 0.3776 | 0.4456 | 129.9226 | 0.4672 | 155.9836 | 0.8335 | 0.6503 | 0.5480
0.1 | ROCP | 0.3274 | 0.6898 | 0.0652 | 0.4894 | 0.5896 | 79.6364 | 0.8024 | 31.7193 | 0.9249 | 0.8637 | 0.7266
0.1 | PZ | 0.4035 | 0.6118 | 0.0799 | 0.4347 | 0.5233 | 101.0741 | 0.6595 | 23.6070 | 0.9352 | 0.7974 | 0.6603
0.1 | GSD | 0.4063 | 0.6090 | 0.0887 | 0.4044 | 0.5067 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.6700
0.2 | OCP | 0.4994 | 0.5135 | 0.0969 | 0.3775 | 0.4455 | 129.9531 | 0.4670 | 156.3103 | 0.8333 | 0.6501 | 0.5478
0.2 | ROCP | 0.3275 | 0.6897 | 0.0652 | 0.4893 | 0.5895 | 79.6552 | 0.8023 | 31.7583 | 0.9249 | 0.8636 | 0.7266
0.2 | PZ | 0.4035 | 0.6118 | 0.0799 | 0.4348 | 0.5233 | 101.0739 | 0.6595 | 23.7079 | 0.9351 | 0.7973 | 0.6603
0.2 | GSD | 0.4063 | 0.6090 | 0.0887 | 0.4044 | 0.5067 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.6700
0.3 | OCP | 0.4993 | 0.6915 | 0.0969 | 0.3775 | 0.5345 | 129.9278 | 0.6915 | 155.5575 | 0.8337 | 0.7626 | 0.6485
0.3 | ROCP | 0.3276 | 0.5155 | 0.0653 | 0.4890 | 0.5022 | 79.6738 | 0.3564 | 31.8717 | 0.9247 | 0.6406 | 0.5714
0.3 | PZ | 0.4035 | 0.5933 | 0.0799 | 0.4348 | 0.5140 | 101.0767 | 0.4991 | 23.6722 | 0.9351 | 0.7171 | 0.6156
0.3 | GSD | 0.4063 | 0.5962 | 0.0887 | 0.4044 | 0.5003 | 100.0000 | 0.4919 | 0.0000 | 1.0000 | 0.7460 | 0.6231
0.4 | OCP | 0.4993 | 0.6916 | 0.0969 | 0.3775 | 0.5345 | 129.9513 | 0.8007 | 155.6195 | 0.8337 | 0.8172 | 0.6759
0.4 | ROCP | 0.3275 | 0.5154 | 0.0652 | 0.4892 | 0.5023 | 79.6554 | 0.8640 | 31.7655 | 0.9249 | 0.8944 | 0.6984
0.4 | PZ | 0.4035 | 0.5933 | 0.0799 | 0.4347 | 0.5140 | 101.0715 | 0.9932 | 23.6154 | 0.9352 | 0.9642 | 0.7391
0.4 | GSD | 0.4063 | 0.5962 | 0.0887 | 0.4044 | 0.5003 | 100.0000 | 0.9996 | 0.0000 | 1.0000 | 0.9998 | 0.7501
0.5 | OCP | 0.4994 | 0.6916 | 0.0969 | 0.3774 | 0.5345 | 129.9593 | 0.5629 | 155.2517 | 0.8339 | 0.6984 | 0.6165
0.5 | ROCP | 0.3276 | 0.5155 | 0.0653 | 0.4891 | 0.5023 | 79.6695 | 0.8982 | 31.6864 | 0.9249 | 0.9116 | 0.7070
0.5 | PZ | 0.4035 | 0.5933 | 0.0799 | 0.4348 | 0.5141 | 101.0786 | 0.7554 | 23.7130 | 0.9351 | 0.8453 | 0.6797
0.5 | GSD | 0.4063 | 0.5962 | 0.0887 | 0.4044 | 0.5003 | 100.0000 | 0.7626 | 0.0000 | 1.0000 | 0.8813 | 0.6908
Table 8. R2 approach with both treatment and control generated from MDM with π = 0.1 .
Δ | Design | E[CP_Δ^RA] | e_CP(Δ) | Var(CP_Δ^RA) | v_CP(Δ) | S_CP(Δ) | E[CN_Δ^RA] | e_CN(Δ) | Var(CN_Δ^RA) | v_CN(Δ) | S_CN(Δ) | CPS(Δ)
0 | OCP | 0.6282 | 0.3814 | 0.1240 | 0.2958 | 0.3386 | 192.5196 | 0.0499 | 113.2262 | 0.8581 | 0.4540 | 0.3963
0 | ROCP | 0.5122 | 0.5003 | 0.1152 | 0.3211 | 0.4107 | 129.2916 | 0.4714 | 78.1722 | 0.8821 | 0.6768 | 0.5437
0 | PZ | 0.5146 | 0.4979 | 0.1133 | 0.3268 | 0.4124 | 132.6754 | 0.4488 | 15.2234 | 0.9480 | 0.6984 | 0.5554
0 | GSD | 0.4063 | 0.6090 | 0.0887 | 0.4044 | 0.5067 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.6700
0.1 | OCP | 0.6282 | 0.3814 | 0.1240 | 0.2959 | 0.3386 | 192.5073 | 0.0499 | 113.3504 | 0.8581 | 0.4540 | 0.3963
0.1 | ROCP | 0.5120 | 0.5005 | 0.1152 | 0.3211 | 0.4108 | 129.2288 | 0.4718 | 77.7308 | 0.8825 | 0.6771 | 0.5440
0.1 | PZ | 0.5145 | 0.4979 | 0.1133 | 0.3268 | 0.4124 | 132.6560 | 0.4490 | 15.3417 | 0.9478 | 0.6984 | 0.5554
0.1 | GSD | 0.4063 | 0.6090 | 0.0887 | 0.4044 | 0.5067 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.6700
0.2 | OCP | 0.6282 | 0.3814 | 0.1240 | 0.2958 | 0.3386 | 192.5183 | 0.0499 | 112.9620 | 0.8583 | 0.4541 | 0.3963
0.2 | ROCP | 0.5121 | 0.5004 | 0.1152 | 0.3212 | 0.4108 | 129.2657 | 0.4716 | 77.6728 | 0.8825 | 0.6770 | 0.5439
0.2 | PZ | 0.5146 | 0.4979 | 0.1133 | 0.3268 | 0.4124 | 132.6547 | 0.4490 | 15.5099 | 0.9475 | 0.6982 | 0.5553
0.2 | GSD | 0.4063 | 0.6090 | 0.0887 | 0.4044 | 0.5067 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.6700
0.3 | OCP | 0.6282 | 0.8237 | 0.1240 | 0.2959 | 0.5598 | 192.5155 | 0.8913 | 112.9250 | 0.8583 | 0.8748 | 0.7173
0.3 | ROCP | 0.5122 | 0.7048 | 0.1153 | 0.3211 | 0.5129 | 129.2884 | 0.6872 | 78.1121 | 0.8822 | 0.7847 | 0.6488
0.3 | PZ | 0.5146 | 0.7072 | 0.1133 | 0.3268 | 0.5170 | 132.6620 | 0.7097 | 15.3983 | 0.9477 | 0.8287 | 0.6729
0.3 | GSD | 0.4063 | 0.5962 | 0.0887 | 0.4044 | 0.5003 | 100.0000 | 0.4919 | 0.0000 | 1.0000 | 0.7460 | 0.6231
0.4 | OCP | 0.6282 | 0.8238 | 0.1240 | 0.2958 | 0.5598 | 192.5257 | 0.3835 | 112.8584 | 0.8584 | 0.6209 | 0.5904
0.4 | ROCP | 0.5121 | 0.7048 | 0.1153 | 0.3210 | 0.5129 | 129.2641 | 0.8053 | 77.7168 | 0.8825 | 0.8439 | 0.6784
0.4 | PZ | 0.5145 | 0.7072 | 0.1133 | 0.3268 | 0.5170 | 132.6471 | 0.7827 | 15.4134 | 0.9477 | 0.8652 | 0.6911
0.4 | GSD | 0.4063 | 0.5962 | 0.0887 | 0.4044 | 0.5003 | 100.0000 | 0.9996 | 0.0000 | 1.0000 | 0.9998 | 0.7501
0.5 | OCP | 0.6282 | 0.8238 | 0.1240 | 0.2958 | 0.5598 | 192.5379 | 0.1457 | 112.3631 | 0.8587 | 0.5022 | 0.5310
0.5 | ROCP | 0.5121 | 0.7047 | 0.1152 | 0.3210 | 0.5129 | 129.2849 | 0.5674 | 77.5956 | 0.8826 | 0.7250 | 0.6190
0.5 | PZ | 0.5145 | 0.7072 | 0.1133 | 0.3269 | 0.5171 | 132.6613 | 0.5449 | 15.2100 | 0.9480 | 0.7464 | 0.6317
0.5 | GSD | 0.4063 | 0.5962 | 0.0887 | 0.4044 | 0.5003 | 100.0000 | 0.7626 | 0.0000 | 1.0000 | 0.8813 | 0.6908
Table 9. Conditional power score with both treatment and control generated from MEM.
Δ | Design | Standard Simulation | R1 Approach | R2 Approach
0 | OCP | 0.4355 | 0.6210 | 0.6210
0 | ROCP | 0.5436 | 0.7960 | 0.7960
0 | PZ | 0.5997 | 0.7329 | 0.7329
0 | GSD | 0.7475 | 0.7475 | 0.7475
0.1 | OCP | 0.4355 | 0.6209 | 0.6209
0.1 | ROCP | 0.5436 | 0.7960 | 0.7960
0.1 | PZ | 0.5997 | 0.7330 | 0.7329
0.1 | GSD | 0.7475 | 0.7475 | 0.7475
0.2 | OCP | 0.4355 | 0.6209 | 0.6209
0.2 | ROCP | 0.5436 | 0.7960 | 0.7960
0.2 | PZ | 0.5997 | 0.7329 | 0.7329
0.2 | GSD | 0.7475 | 0.7475 | 0.7475
0.3 | OCP | 0.6045 | 0.6170 | 0.6170
0.3 | ROCP | 0.3408 | 0.5380 | 0.5380
0.3 | PZ | 0.5050 | 0.5939 | 0.5940
0.3 | GSD | 0.5954 | 0.5954 | 0.5954
0.4 | OCP | 0.4776 | 0.6300 | 0.6300
0.4 | ROCP | 0.4677 | 0.6650 | 0.6650
0.4 | PZ | 0.5974 | 0.7095 | 0.7095
0.4 | GSD | 0.7223 | 0.7223 | 0.7223
0.5 | OCP | 0.4181 | 0.5706 | 0.5706
0.5 | ROCP | 0.4710 | 0.6888 | 0.6888
0.5 | PZ | 0.5380 | 0.6501 | 0.6501
0.5 | GSD | 0.6631 | 0.6631 | 0.6631
Table 10. Standard simulation results with both treatment and control generated from MEM.
Δ | Design | E[CP_Δ^RA] | e_CP(Δ) | Var(CP_Δ^RA) | v_CP(Δ) | S_CP(Δ) | E[CN_Δ^RA] | e_CN(Δ) | Var(CN_Δ^RA) | v_CN(Δ) | S_CN(Δ) | CPS(Δ)
0 | OCP | 0.3319 | 0.6852 | 0.1003 | 0.3668 | 0.5260 | 187.4409 | 0.0837 | 872.6132 | 0.6062 | 0.3449 | 0.4355
0 | ROCP | 0.2240 | 0.7959 | 0.1226 | 0.2997 | 0.5478 | 81.2458 | 0.7917 | 2857.3641 | 0.2873 | 0.5395 | 0.5436
0 | PZ | 0.2453 | 0.7740 | 0.0987 | 0.3717 | 0.5729 | 110.4077 | 0.5973 | 666.4830 | 0.6558 | 0.6265 | 0.5997
0 | GSD | 0.2011 | 0.8194 | 0.0615 | 0.5039 | 0.6617 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7475
0.1 | OCP | 0.3319 | 0.6852 | 0.1003 | 0.3668 | 0.5260 | 187.4409 | 0.0837 | 872.6132 | 0.6062 | 0.3449 | 0.4355
0.1 | ROCP | 0.2240 | 0.7959 | 0.1226 | 0.2997 | 0.5478 | 81.2458 | 0.7917 | 2857.3641 | 0.2873 | 0.5395 | 0.5436
0.1 | PZ | 0.2453 | 0.7740 | 0.0987 | 0.3717 | 0.5729 | 110.4077 | 0.5973 | 666.4830 | 0.6558 | 0.6265 | 0.5997
0.1 | GSD | 0.2011 | 0.8194 | 0.0615 | 0.5039 | 0.6617 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7475
0.2 | OCP | 0.3319 | 0.6852 | 0.1003 | 0.3668 | 0.5260 | 187.4409 | 0.0837 | 872.6132 | 0.6062 | 0.3449 | 0.4355
0.2 | ROCP | 0.2240 | 0.7959 | 0.1226 | 0.2997 | 0.5478 | 81.2458 | 0.7917 | 2857.3641 | 0.2873 | 0.5395 | 0.5436
0.2 | PZ | 0.2453 | 0.7740 | 0.0987 | 0.3717 | 0.5729 | 110.4077 | 0.5973 | 666.4830 | 0.6558 | 0.6265 | 0.5997
0.2 | GSD | 0.2011 | 0.8194 | 0.0615 | 0.5039 | 0.6617 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7475
0.3 | OCP | 0.3319 | 0.5199 | 0.1003 | 0.3668 | 0.4434 | 187.4409 | 0.9251 | 872.6132 | 0.6062 | 0.7656 | 0.6045
0.3 | ROCP | 0.2240 | 0.4092 | 0.1226 | 0.2997 | 0.3544 | 81.2458 | 0.3669 | 2857.3641 | 0.2873 | 0.3271 | 0.3408
0.3 | PZ | 0.2453 | 0.4311 | 0.0987 | 0.3717 | 0.4014 | 110.4077 | 0.5613 | 666.4830 | 0.6558 | 0.6086 | 0.5050
0.3 | GSD | 0.2011 | 0.3858 | 0.0615 | 0.5039 | 0.4448 | 100.0000 | 0.4919 | 0.0000 | 1.0000 | 0.7460 | 0.5954
0.4 | OCP | 0.3319 | 0.5199 | 0.1003 | 0.3668 | 0.4434 | 187.4409 | 0.4174 | 872.6132 | 0.6062 | 0.5118 | 0.4776
0.4 | ROCP | 0.2240 | 0.4092 | 0.1226 | 0.2997 | 0.3544 | 81.2458 | 0.8746 | 2857.3641 | 0.2873 | 0.5810 | 0.4677
0.4 | PZ | 0.2453 | 0.4311 | 0.0987 | 0.3717 | 0.4014 | 110.4077 | 0.9310 | 666.4830 | 0.6558 | 0.7934 | 0.5974
0.4 | GSD | 0.2011 | 0.3858 | 0.0615 | 0.5039 | 0.4448 | 100.0000 | 0.9996 | 0.0000 | 1.0000 | 0.9998 | 0.7223
0.5 | OCP | 0.3319 | 0.5199 | 0.1003 | 0.3668 | 0.4434 | 187.4409 | 0.1797 | 872.6132 | 0.6062 | 0.3929 | 0.4181
0.5 | ROCP | 0.2240 | 0.4092 | 0.1226 | 0.2997 | 0.3544 | 81.2458 | 0.8877 | 2857.3641 | 0.2873 | 0.5875 | 0.4710
0.5 | PZ | 0.2453 | 0.4311 | 0.0987 | 0.3717 | 0.4014 | 110.4077 | 0.6933 | 666.4830 | 0.6558 | 0.6745 | 0.5380
0.5 | GSD | 0.2011 | 0.3858 | 0.0615 | 0.5039 | 0.4448 | 100.0000 | 0.7626 | 0.0000 | 1.0000 | 0.8813 | 0.6631
Table 11. R1 approach with both treatment and control generated from MEM.
Δ | Design | E[CP_Δ^RA] | e_CP(Δ) | Var(CP_Δ^RA) | v_CP(Δ) | S_CP(Δ) | E[CN_Δ^RA] | e_CN(Δ) | Var(CN_Δ^RA) | v_CN(Δ) | S_CN(Δ) | CPS(Δ)
0 | OCP | 0.2674 | 0.7514 | 0.0835 | 0.4220 | 0.5867 | 134.2499 | 0.4383 | 91.6999 | 0.8723 | 0.6553 | 0.6210
0 | ROCP | 0.1566 | 0.8650 | 0.0426 | 0.5874 | 0.7262 | 75.0924 | 0.8327 | 57.6779 | 0.8987 | 0.8657 | 0.7960
0 | PZ | 0.2043 | 0.8161 | 0.0587 | 0.5155 | 0.6658 | 103.4696 | 0.6435 | 10.7147 | 0.9564 | 0.8000 | 0.7329
0 | GSD | 0.2011 | 0.8194 | 0.0615 | 0.5040 | 0.6617 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7475
0.1 | OCP | 0.2674 | 0.7514 | 0.0835 | 0.4221 | 0.5867 | 134.2495 | 0.4383 | 92.3015 | 0.8719 | 0.6551 | 0.6209
0.1 | ROCP | 0.1566 | 0.8650 | 0.0426 | 0.5874 | 0.7262 | 75.0968 | 0.8327 | 57.6619 | 0.8988 | 0.8657 | 0.7960
0.1 | PZ | 0.2042 | 0.8162 | 0.0587 | 0.5155 | 0.6658 | 103.4566 | 0.6436 | 10.6550 | 0.9565 | 0.8001 | 0.7330
0.1 | GSD | 0.2011 | 0.8194 | 0.0615 | 0.5040 | 0.6617 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7475
0.2 | OCP | 0.2674 | 0.7513 | 0.0835 | 0.4221 | 0.5867 | 134.2559 | 0.4383 | 92.4370 | 0.8718 | 0.6551 | 0.6209
0.2 | ROCP | 0.1566 | 0.8651 | 0.0425 | 0.5875 | 0.7263 | 75.0861 | 0.8328 | 57.7243 | 0.8987 | 0.8657 | 0.7960
0.2 | PZ | 0.2042 | 0.8162 | 0.0587 | 0.5155 | 0.6658 | 103.4633 | 0.6436 | 10.6586 | 0.9565 | 0.8000 | 0.7329
0.2 | GSD | 0.2011 | 0.8194 | 0.0615 | 0.5040 | 0.6617 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7475
0.3 | OCP | 0.2674 | 0.4537 | 0.0835 | 0.4221 | 0.4379 | 134.2269 | 0.7201 | 92.1377 | 0.8720 | 0.7961 | 0.6170
0.3 | ROCP | 0.1566 | 0.3401 | 0.0426 | 0.5873 | 0.4637 | 75.0970 | 0.3259 | 57.8272 | 0.8986 | 0.6123 | 0.5380
0.3 | PZ | 0.2042 | 0.3889 | 0.0587 | 0.5157 | 0.4523 | 103.4598 | 0.5150 | 10.7746 | 0.9562 | 0.7356 | 0.5939
0.3 | GSD | 0.2011 | 0.3858 | 0.0615 | 0.5039 | 0.4448 | 100.0000 | 0.4919 | 0.0000 | 1.0000 | 0.7460 | 0.5954
0.4 | OCP | 0.2675 | 0.4538 | 0.0835 | 0.4219 | 0.4379 | 134.2471 | 0.7721 | 91.7246 | 0.8723 | 0.8222 | 0.6300
0.4 | ROCP | 0.1565 | 0.3401 | 0.0425 | 0.5875 | 0.4638 | 75.0896 | 0.8336 | 57.4948 | 0.8989 | 0.8662 | 0.6650
0.4 | PZ | 0.2042 | 0.3889 | 0.0587 | 0.5157 | 0.4523 | 103.4654 | 0.9773 | 10.8016 | 0.9562 | 0.9667 | 0.7095
0.4 | GSD | 0.2011 | 0.3858 | 0.0615 | 0.5039 | 0.4448 | 100.0000 | 0.9996 | 0.0000 | 1.0000 | 0.9998 | 0.7223
0.5 | OCP | 0.2674 | 0.4537 | 0.0835 | 0.4221 | 0.4379 | 134.2260 | 0.5345 | 92.0832 | 0.8721 | 0.7033 | 0.5706
0.5 | ROCP | 0.1566 | 0.3401 | 0.0425 | 0.5875 | 0.4638 | 75.0960 | 0.9287 | 57.5004 | 0.8989 | 0.9138 | 0.6888
0.5 | PZ | 0.2042 | 0.3889 | 0.0587 | 0.5157 | 0.4523 | 103.4638 | 0.7395 | 10.7710 | 0.9562 | 0.8479 | 0.6501
0.5 | GSD | 0.2011 | 0.3858 | 0.0615 | 0.5039 | 0.4448 | 100.0000 | 0.7626 | 0.0000 | 1.0000 | 0.8813 | 0.6631
Table 12. R2 approach with both treatment and control generated from MEM.
Δ | Design | E[CP_Δ^RA] | e_CP(Δ) | Var(CP_Δ^RA) | v_CP(Δ) | S_CP(Δ) | E[CN_Δ^RA] | e_CN(Δ) | Var(CN_Δ^RA) | v_CN(Δ) | S_CN(Δ) | CPS(Δ)
0 | OCP | 0.2674 | 0.7514 | 0.0835 | 0.4220 | 0.5867 | 134.2499 | 0.4383 | 91.6999 | 0.8723 | 0.6553 | 0.6210
0 | ROCP | 0.1566 | 0.8650 | 0.0426 | 0.5874 | 0.7262 | 75.0924 | 0.8327 | 57.6779 | 0.8987 | 0.8657 | 0.7960
0 | PZ | 0.2042 | 0.8161 | 0.0587 | 0.5155 | 0.6658 | 103.4696 | 0.6435 | 10.7147 | 0.9564 | 0.8000 | 0.7329
0 | GSD | 0.2011 | 0.8194 | 0.0615 | 0.5039 | 0.6617 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7475
0.1 | OCP | 0.2674 | 0.7514 | 0.0835 | 0.4221 | 0.5867 | 134.2495 | 0.4383 | 92.3015 | 0.8719 | 0.6551 | 0.6209
0.1 | ROCP | 0.1566 | 0.8650 | 0.0426 | 0.5874 | 0.7262 | 75.0968 | 0.8327 | 57.6619 | 0.8988 | 0.8657 | 0.7960
0.1 | PZ | 0.2042 | 0.8162 | 0.0587 | 0.5155 | 0.6658 | 103.4566 | 0.6436 | 10.6550 | 0.9565 | 0.8001 | 0.7329
0.1 | GSD | 0.2011 | 0.8194 | 0.0615 | 0.5039 | 0.6617 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7475
0.2 | OCP | 0.2674 | 0.7513 | 0.0835 | 0.4221 | 0.5867 | 134.2559 | 0.4383 | 92.4370 | 0.8718 | 0.6551 | 0.6209
0.2 | ROCP | 0.1566 | 0.8651 | 0.0425 | 0.5875 | 0.7263 | 75.0861 | 0.8328 | 57.7243 | 0.8987 | 0.8657 | 0.7960
0.2 | PZ | 0.2042 | 0.8162 | 0.0587 | 0.5155 | 0.6658 | 103.4633 | 0.6436 | 10.6586 | 0.9565 | 0.8000 | 0.7329
0.2 | GSD | 0.2011 | 0.8194 | 0.0615 | 0.5039 | 0.6617 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7475
0.3 | OCP | 0.2674 | 0.4537 | 0.0835 | 0.4221 | 0.4379 | 134.2269 | 0.7201 | 92.1377 | 0.8720 | 0.7961 | 0.6170
0.3 | ROCP | 0.1566 | 0.3401 | 0.0426 | 0.5873 | 0.4637 | 75.0970 | 0.3259 | 57.8272 | 0.8986 | 0.6123 | 0.5380
0.3 | PZ | 0.2042 | 0.3889 | 0.0587 | 0.5157 | 0.4523 | 103.4599 | 0.5150 | 10.7746 | 0.9562 | 0.7356 | 0.5940
0.3 | GSD | 0.2011 | 0.3858 | 0.0615 | 0.5040 | 0.4448 | 100.0000 | 0.4919 | 0.0000 | 1.0000 | 0.7460 | 0.5954
0.4 | OCP | 0.2675 | 0.4538 | 0.0835 | 0.4220 | 0.4379 | 134.2471 | 0.7721 | 91.7246 | 0.8723 | 0.8222 | 0.6300
0.4 | ROCP | 0.1565 | 0.3401 | 0.0425 | 0.5875 | 0.4638 | 75.0896 | 0.8336 | 57.4948 | 0.8989 | 0.8662 | 0.6650
0.4 | PZ | 0.2042 | 0.3889 | 0.0587 | 0.5157 | 0.4523 | 103.4654 | 0.9773 | 10.8016 | 0.9562 | 0.9667 | 0.7095
0.4 | GSD | 0.2011 | 0.3858 | 0.0615 | 0.5040 | 0.4448 | 100.0000 | 0.9996 | 0.0000 | 1.0000 | 0.9998 | 0.7223
0.5 | OCP | 0.2674 | 0.4537 | 0.0835 | 0.4221 | 0.4379 | 134.2260 | 0.5345 | 92.0832 | 0.8721 | 0.7033 | 0.5706
0.5 | ROCP | 0.1566 | 0.3401 | 0.0425 | 0.5875 | 0.4638 | 75.0960 | 0.9287 | 57.5004 | 0.8989 | 0.9138 | 0.6888
0.5 | PZ | 0.2042 | 0.3889 | 0.0587 | 0.5157 | 0.4523 | 103.4638 | 0.7395 | 10.7710 | 0.9562 | 0.8479 | 0.6501
0.5 | GSD | 0.2011 | 0.3858 | 0.0615 | 0.5040 | 0.4448 | 100.0000 | 0.7626 | 0.0000 | 1.0000 | 0.8813 | 0.6631
Table 13. Conditional power score where only the treatment group is generated from the MEM and the control group from a normal distribution.
Δ | Design | Standard Simulation | R1 Approach | R2 Approach
0 | OCP | 0.4797 | 0.6577 | 0.5135
0 | ROCP | 0.6176 | 0.8263 | 0.6648
0 | PZ | 0.6608 | 0.7650 | 0.6725
0 | GSD | 0.7794 | 0.7794 | 0.7794
0.1 | OCP | 0.4797 | 0.6576 | 0.5135
0.1 | ROCP | 0.6176 | 0.8263 | 0.6649
0.1 | PZ | 0.6608 | 0.7649 | 0.6723
0.1 | GSD | 0.7794 | 0.7794 | 0.7794
0.2 | OCP | 0.4797 | 0.6576 | 0.5135
0.2 | ROCP | 0.6176 | 0.8262 | 0.6648
0.2 | PZ | 0.6608 | 0.7649 | 0.6724
0.2 | GSD | 0.7794 | 0.7794 | 0.7794
0.3 | OCP | 0.6059 | 0.6162 | 0.6472
0.3 | ROCP | 0.3480 | 0.5372 | 0.5701
0.3 | PZ | 0.5159 | 0.5975 | 0.6142
0.3 | GSD | 0.5974 | 0.5974 | 0.5974
0.4 | OCP | 0.4790 | 0.6292 | 0.5202
0.4 | ROCP | 0.4749 | 0.6640 | 0.6341
0.4 | PZ | 0.6205 | 0.7116 | 0.6434
0.4 | GSD | 0.7243 | 0.7243 | 0.7243
0.5 | OCP | 0.4196 | 0.5698 | 0.4608
0.5 | ROCP | 0.5081 | 0.6948 | 0.5748
0.5 | PZ | 0.5611 | 0.6522 | 0.5840
0.5 | GSD | 0.6650 | 0.6650 | 0.6650
Table 14. Standard simulation where only the treatment group is generated from the MEM and the control group from a normal distribution.
Δ | Design | E[CP_Δ^RA] | e_CP(Δ) | Var(CP_Δ^RA) | v_CP(Δ) | S_CP(Δ) | E[CN_Δ^RA] | e_CN(Δ) | Var(CN_Δ^RA) | v_CN(Δ) | S_CN(Δ) | CPS(Δ)
0 | OCP | 0.2484 | 0.7708 | 0.0858 | 0.4141 | 0.5925 | 192.1952 | 0.0520 | 569.4070 | 0.6819 | 0.3669 | 0.4797
0 | ROCP | 0.1523 | 0.8694 | 0.0933 | 0.3890 | 0.6292 | 72.2506 | 0.8517 | 2302.4635 | 0.3602 | 0.6060 | 0.6176
0 | PZ | 0.1713 | 0.8500 | 0.0737 | 0.4570 | 0.6535 | 106.7504 | 0.6217 | 458.6497 | 0.7145 | 0.6681 | 0.6608
0 | GSD | 0.1426 | 0.8794 | 0.0458 | 0.5718 | 0.7256 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7794
0.1 | OCP | 0.2484 | 0.7708 | 0.0858 | 0.4141 | 0.5925 | 192.1952 | 0.0520 | 569.4070 | 0.6819 | 0.3669 | 0.4797
0.1 | ROCP | 0.1523 | 0.8694 | 0.0933 | 0.3890 | 0.6292 | 72.2506 | 0.8517 | 2302.4635 | 0.3602 | 0.6060 | 0.6176
0.1 | PZ | 0.1713 | 0.8500 | 0.0737 | 0.4570 | 0.6535 | 106.7504 | 0.6217 | 458.6497 | 0.7145 | 0.6681 | 0.6608
0.1 | GSD | 0.1426 | 0.8794 | 0.0458 | 0.5718 | 0.7256 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7794
0.2 | OCP | 0.2484 | 0.7708 | 0.0858 | 0.4141 | 0.5925 | 192.1952 | 0.0520 | 569.4070 | 0.6819 | 0.3669 | 0.4797
0.2 | ROCP | 0.1523 | 0.8694 | 0.0933 | 0.3890 | 0.6292 | 72.2506 | 0.8517 | 2302.4635 | 0.3602 | 0.6060 | 0.6176
0.2 | PZ | 0.1713 | 0.8500 | 0.0737 | 0.4570 | 0.6535 | 106.7504 | 0.6217 | 458.6497 | 0.7145 | 0.6681 | 0.6608
0.2 | GSD | 0.1426 | 0.8794 | 0.0458 | 0.5718 | 0.7256 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7794
0.3 | OCP | 0.2484 | 0.4343 | 0.0858 | 0.4141 | 0.4242 | 192.1952 | 0.8934 | 569.4070 | 0.6819 | 0.7876 | 0.6059
0.3 | ROCP | 0.1523 | 0.3357 | 0.0933 | 0.3890 | 0.3624 | 72.2506 | 0.3069 | 2302.4635 | 0.3602 | 0.3336 | 0.3480
0.3 | PZ | 0.1713 | 0.3552 | 0.0737 | 0.4570 | 0.4061 | 106.7504 | 0.5369 | 458.6497 | 0.7145 | 0.6257 | 0.5159
0.3 | GSD | 0.1426 | 0.3258 | 0.0458 | 0.5718 | 0.4488 | 100.0000 | 0.4919 | 0.0000 | 1.0000 | 0.7460 | 0.5974
0.4 | OCP | 0.2484 | 0.4343 | 0.0858 | 0.4141 | 0.4242 | 192.1952 | 0.3857 | 569.4070 | 0.6819 | 0.5338 | 0.4790
0.4 | ROCP | 0.1523 | 0.3357 | 0.0933 | 0.3890 | 0.3624 | 72.2506 | 0.8146 | 2302.4635 | 0.3602 | 0.5874 | 0.4749
0.4 | PZ | 0.1713 | 0.3552 | 0.0737 | 0.4570 | 0.4061 | 106.7504 | 0.9554 | 458.6497 | 0.7145 | 0.8349 | 0.6205
0.4 | GSD | 0.1426 | 0.3258 | 0.0458 | 0.5718 | 0.4488 | 100.0000 | 0.9996 | 0.0000 | 1.0000 | 0.9998 | 0.7243
0.5 | OCP | 0.2484 | 0.4343 | 0.0858 | 0.4141 | 0.4242 | 192.1952 | 0.1480 | 569.4070 | 0.6819 | 0.4149 | 0.4196
0.5 | ROCP | 0.1523 | 0.3357 | 0.0933 | 0.3890 | 0.3624 | 72.2506 | 0.9476 | 2302.4635 | 0.3602 | 0.6539 | 0.5081
0.5 | PZ | 0.1713 | 0.3552 | 0.0737 | 0.4570 | 0.4061 | 106.7504 | 0.7176 | 458.6497 | 0.7145 | 0.7161 | 0.5611
0.5 | GSD | 0.1426 | 0.3258 | 0.0458 | 0.5718 | 0.4488 | 100.0000 | 0.7626 | 0.0000 | 1.0000 | 0.8813 | 0.6650
Table 15. R1 approach where only the treatment group is generated from the MEM and the control group from a normal distribution.
Δ | Design | E[CP_Δ^RA] | e_CP(Δ) | Var(CP_Δ^RA) | v_CP(Δ) | S_CP(Δ) | E[CN_Δ^RA] | e_CN(Δ) | Var(CN_Δ^RA) | v_CN(Δ) | S_CN(Δ) | CPS(Δ)
0 | OCP | 0.1944 | 0.8262 | 0.0664 | 0.4848 | 0.6555 | 134.2621 | 0.4383 | 79.1251 | 0.8814 | 0.6598 | 0.6577
0 | ROCP | 0.1095 | 0.9134 | 0.0310 | 0.6481 | 0.7807 | 73.0042 | 0.8466 | 59.4450 | 0.8972 | 0.8719 | 0.8263
0 | PZ | 0.1459 | 0.8760 | 0.0445 | 0.5782 | 0.7271 | 103.8817 | 0.6408 | 6.8861 | 0.9650 | 0.8029 | 0.7650
0 | GSD | 0.1426 | 0.8794 | 0.0458 | 0.5718 | 0.7256 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7794
0.1 | OCP | 0.1945 | 0.8262 | 0.0664 | 0.4846 | 0.6554 | 134.2819 | 0.4381 | 79.2867 | 0.8813 | 0.6597 | 0.6576
0.1 | ROCP | 0.1095 | 0.9134 | 0.0310 | 0.6477 | 0.7805 | 73.0023 | 0.8467 | 59.3560 | 0.8973 | 0.8720 | 0.8263
0.1 | PZ | 0.1459 | 0.8760 | 0.0445 | 0.5781 | 0.7271 | 103.8963 | 0.6407 | 6.9131 | 0.9649 | 0.8028 | 0.7649
0.1 | GSD | 0.1426 | 0.8794 | 0.0458 | 0.5718 | 0.7256 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7795
0.2 | OCP | 0.1944 | 0.8262 | 0.0664 | 0.4847 | 0.6554 | 134.2665 | 0.4382 | 79.0940 | 0.8814 | 0.6598 | 0.6576
0.2 | ROCP | 0.1095 | 0.9133 | 0.0310 | 0.6477 | 0.7805 | 73.0069 | 0.8466 | 59.4505 | 0.8972 | 0.8719 | 0.8262
0.2 | PZ | 0.1459 | 0.8760 | 0.0445 | 0.5782 | 0.7271 | 103.8917 | 0.6407 | 6.9135 | 0.9649 | 0.8028 | 0.7650
0.2 | GSD | 0.1426 | 0.8794 | 0.0458 | 0.5718 | 0.7256 | 100.0000 | 0.6667 | 0.0000 | 1.0000 | 0.8333 | 0.7794
0.3 | OCP | 0.1945 | 0.3790 | 0.0664 | 0.4846 | 0.4318 | 134.2649 | 0.7204 | 79.7490 | 0.8809 | 0.8005 | 0.6162
0.3 | ROCP | 0.1095 | 0.2918 | 0.0310 | 0.6478 | 0.4697 | 72.9981 | 0.3119 | 59.3621 | 0.8973 | 0.6046 | 0.5372
0.3 | PZ | 0.1459 | 0.3291 | 0.0445 | 0.5783 | 0.4537 | 103.8943 | 0.5179 | 6.9698 | 0.9648 | 0.7413 | 0.5975
0.3 | GSD | 0.1426 | 0.3258 | 0.0458 | 0.5718 | 0.4488 | 100.0000 | 0.4919 | 0.0000 | 1.0000 | 0.7460 | 0.5974
0.4 | OCP | 0.1944 | 0.3789 | 0.0664 | 0.4848 | 0.4318 | 134.2287 | 0.7722 | 79.5930 | 0.8811 | 0.8266 | 0.6292
0.4 | ROCP | 0.1096 | 0.2918 | 0.0311 | 0.6473 | 0.4696 | 73.0094 | 0.8197 | 59.5854 | 0.8971 | 0.8584 | 0.6640
0.4 | PZ | 0.1459 | 0.3292 | 0.0445 | 0.5781 | 0.4536 | 103.9047 | 0.9743 | 6.9338 | 0.9649 | 0.9696 | 0.7116
0.4 | GSD | 0.1426 | 0.3258 | 0.0458 | 0.5718 | 0.4488 | 100.0000 | 0.9996 | 0.0000 | 1.0000 | 0.9998 | 0.7243
0.5 | OCP | 0.1944 | 0.3789 | 0.0664 | 0.4848 | 0.4318 | 134.2600 | 0.5342 | 79.3819 | 0.8812 | 0.7077 | 0.5698
0.5 | ROCP | 0.1095 | 0.2918 | 0.0310 | 0.6477 | 0.4697 | 73.0023 | 0.9426 | 59.5091 | 0.8971 | 0.9199 | 0.6948
0.5 | PZ | 0.1459 | 0.3291 | 0.0445 | 0.5782 | 0.4537 | 103.8885 | 0.7367 | 6.9205 | 0.9649 | 0.8508 | 0.6522
0.5 | GSD | 0.1426 | 0.3258 | 0.0458 | 0.5718 | 0.4488 | 100.0000 | 0.7626 | 0.0000 | 1.0000 | 0.8813 | 0.6650
Table 16. R2 approach where only the treatment group is generated from the MEM and the control group from a normal distribution.
Δ    Design  E[CP_Δ^RA]  e_CP(Δ)  Var(CP_Δ^RA)  v_CP(Δ)  SCP(Δ)  E[CN_Δ^RA]  e_CN(Δ)  Var(CN_Δ^RA)  v_CN(Δ)  SCN(Δ)  CPS(Δ)
0    OCP     0.2631      0.7558   0.1039        0.3552   0.5555  197.8316    0.0145   28.8411       0.9284   0.4714  0.5135
0    ROCP    0.1901      0.8306   0.0724        0.4618   0.6462  118.9021    0.5407   170.0101      0.8262   0.6834  0.6648
0    PZ      0.1934      0.8273   0.0721        0.4630   0.6451  129.2970    0.4714   28.8894       0.9283   0.6998  0.6725
0    GSD     0.1426      0.8795   0.0458        0.5718   0.7256  100.0000    0.6667   0.0000        1.0000   0.8333  0.7794
0.1  OCP     0.2631      0.7558   0.1040        0.3552   0.5555  197.8344    0.0144   28.6883       0.9286   0.4715  0.5135
0.1  ROCP    0.1901      0.8307   0.0724        0.4618   0.6462  118.8988    0.5407   169.4012      0.8265   0.6836  0.6649
0.1  PZ      0.1935      0.8272   0.0721        0.4629   0.6450  129.3287    0.4711   29.0120       0.9282   0.6997  0.6723
0.1  GSD     0.1426      0.8794   0.0458        0.5718   0.7256  100.0000    0.6667   0.0000        1.0000   0.8333  0.7794
0.2  OCP     0.2631      0.7558   0.1040        0.3552   0.5555  197.8328    0.01445  28.6835       0.9286   0.4715  0.5135
0.2  ROCP    0.1901      0.8307   0.0725        0.4617   0.6462  118.9015    0.5407   169.7098      0.8263   0.6835  0.6648
0.2  PZ      0.1934      0.8272   0.0721        0.4630   0.6451  129.3149    0.4712   28.9132       0.9283   0.6998  0.6724
0.2  GSD     0.1426      0.8794   0.0458        0.5718   0.7256  100.0000    0.6667   0.0000        1.0000   0.8333  0.7794
0.3  OCP     0.2631      0.4494   0.1040        0.3552   0.4023  197.8225    0.8559   28.8739       0.9284   0.8921  0.6472
0.3  ROCP    0.1901      0.3744   0.0724        0.4617   0.4181  118.8925    0.6179   169.5006      0.8264   0.7222  0.5701
0.3  PZ      0.1935      0.3779   0.0721        0.4630   0.4204  129.3271    0.6874   28.8702       0.9284   0.8079  0.6142
0.3  GSD     0.1426      0.3258   0.0458        0.5718   0.4488  100.0000    0.4919   0.0000        1.0000   0.7460  0.5974
0.4  OCP     0.2631      0.4493   0.1040        0.3552   0.4023  197.8144    0.3483   29.2819       0.9279   0.6381  0.5202
0.4  ROCP    0.1901      0.3745   0.0725        0.4615   0.4180  118.9108    0.8743   170.1555      0.8261   0.8502  0.6341
0.4  PZ      0.1935      0.3780   0.0722        0.4628   0.4204  129.3448    0.8047   29.0698       0.9281   0.8664  0.6434
0.4  GSD     0.1426      0.3258   0.0458        0.5718   0.4488  100.0000    0.9996   0.0000        1.0000   0.9998  0.7243
0.5  OCP     0.2631      0.4493   0.1039        0.3552   0.4023  197.8254    0.1105   28.8752       0.9284   0.5194  0.4608
0.5  ROCP    0.1901      0.3745   0.0725        0.4616   0.4181  118.8953    0.6367   169.9171      0.8262   0.7314  0.5748
0.5  PZ      0.1935      0.3779   0.0721        0.4629   0.4204  129.3121    0.5672   29.0699       0.9281   0.7477  0.5840
0.5  GSD     0.1426      0.3258   0.0458        0.5718   0.4488  100.0000    0.7626   0.0000        1.0000   0.8813  0.6650
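The data-generating mechanism behind Table 16 — treatment responses contaminated by a classical additive measurement error model (MEM) while the control arm follows an error-free normal distribution — can be sketched in a few lines. The sketch below is illustrative only: the means, common variance, and error variance are placeholder assumptions, not the simulation settings used in this study.

```python
import numpy as np

def generate_arms(n, mu_t=0.3, mu_c=0.0, sigma=1.0, sigma_me=0.5, seed=2025):
    """Sketch: treatment observed with additive measurement error, control error-free.

    Observed treatment response W = X + U, with true X ~ N(mu_t, sigma^2)
    and error U ~ N(0, sigma_me^2); control Y ~ N(mu_c, sigma^2).
    All parameter values are illustrative placeholders.
    """
    rng = np.random.default_rng(seed)
    x_true = rng.normal(mu_t, sigma, n)    # latent treatment responses
    u = rng.normal(0.0, sigma_me, n)       # classical measurement error
    w_obs = x_true + u                     # observed (error-contaminated) treatment
    y_ctrl = rng.normal(mu_c, sigma, n)    # control arm, no measurement error
    return w_obs, y_ctrl

w, y = generate_arms(100_000)
# Measurement error inflates the observed treatment variance from sigma^2
# to sigma^2 + sigma_me^2 (here 1.0 + 0.25 = 1.25).
```

Because the error variance adds to the response variance, the standardized effect seen at the interim look is attenuated, which helps explain why recalculation procedures tend to demand larger sample sizes under the MEM setting.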
Table 17. Comparison of conditional power score for R1, R2 and standard approaches using γ_loc = 0.2 and MDM with π = 0.5.
Δ    Design  Standard Approach  R1 Approach  R2 Approach
0.0  OCP     0.3575             0.6679       0.5845
0.0  ROCP    0.6095             0.7897       0.6477
0.0  PZ      0.6293             0.7641       0.6816
0.0  GSD     0.7100             0.7789       0.7789
0.1  OCP     0.3113             0.6289       0.5410
0.1  ROCP    0.5015             0.7580       0.6084
0.1  PZ      0.5535             0.7271       0.6427
0.1  GSD     0.6617             0.7425       0.7425
0.2  OCP     0.2917             0.6089       0.5210
0.2  ROCP    0.4088             0.7409       0.5914
0.2  PZ      0.4898             0.7073       0.6246
0.2  GSD     0.6243             0.7231       0.7231
0.3  OCP     0.2905             0.6149       0.6163
0.3  ROCP    0.3083             0.6459       0.6011
0.3  PZ      0.3967             0.6534       0.6382
0.3  GSD     0.5195             0.6665       0.6665
0.4  OCP     0.3109             0.6430       0.5979
0.4  ROCP    0.3712             0.7099       0.6468
0.4  PZ      0.4304             0.7121       0.6700
0.4  GSD     0.5459             0.7264       0.7264
0.5  OCP     0.3481             0.6374       0.5988
0.5  ROCP    0.4207             0.7221       0.6459
0.5  PZ      0.4503             0.6993       0.6652
0.5  GSD     0.5631             0.7130       0.7130
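The mixed distribution model (MDM) referenced in Table 17, with mixing proportion π = 0.5 and location parameter γ_loc = 0.2, can likewise be sketched as a two-component normal mixture. The component means and common variance below are assumptions chosen for illustration; the study's exact parameterization is not reproduced here.

```python
import numpy as np

def generate_mdm(n, pi=0.5, gamma_loc=0.2, mu=0.0, sigma=1.0, seed=2025):
    """Sketch of a mixed distribution model (MDM) for heterogeneous responses.

    With probability pi a response is drawn from the shifted component
    N(mu + gamma_loc, sigma^2), otherwise from N(mu, sigma^2).
    This parameterization is an assumption for illustration only.
    """
    rng = np.random.default_rng(seed)
    component = rng.random(n) < pi                 # mixture-membership indicator
    shift = np.where(component, gamma_loc, 0.0)    # location contamination
    return rng.normal(mu + shift, sigma)           # loc broadcasts over the array

x = generate_mdm(200_000)
# Population mean of this mixture is mu + pi * gamma_loc = 0.1 and its
# variance is sigma^2 + pi * (1 - pi) * gamma_loc^2 = 1.01, so the
# heterogeneity both shifts the location and inflates the spread that
# the recalculation procedures must accommodate.
```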

Share and Cite

MDPI and ACS Style

Farooq, H.; Ali, S.; Shah, I.; Nafisah, I.A.; Almazah, M.M.A. Adaptive Clinical Trials and Sample Size Determination in the Presence of Measurement Error and Heterogeneity. Stats 2025, 8, 31. https://doi.org/10.3390/stats8020031
