Replicates Number for Drug Stability Testing during Bioanalytical Method Validation—An Experimental and Retrospective Approach

Background: The stability of a drug or its metabolites in biological matrices is an essential part of bioanalytical method validation, but the justification of its sample size (number of replicates) is insufficient. The international guidelines differ in the sample size recommended for stability studies, ranging from no recommendation to at least three quality control samples. Testing three samples may lead to results biased by a single outlier. We aimed to evaluate the optimal sample size for stability testing based on 90% confidence intervals. Methods: We conducted experimental, retrospective (264 confidence intervals for the stability of nine drugs during regulatory bioanalytical method validation), and theoretical (mathematical) studies. We generated experimental stability data (40 confidence intervals) for two analytes, tramadol and its major metabolite (O-desmethyl-tramadol), at two concentrations, under two storage conditions, and with five sample sizes (n = 3, 4, 5, 6, or 8). Results: The 90% confidence intervals were wider for low than for high concentrations in 18 out of 20 cases. For n = 5, each stability test passed and the width of the confidence intervals was below 20%. The results of the retrospective study and the theoretical analysis supported the experimental observations that five or six repetitions ensure that confidence intervals fall within the 85–115% acceptance criteria. Conclusions: Five repetitions are optimal for the assessment of analyte stability. We hope to initiate discussion and stimulate further research on the sample size for stability testing.


Introduction
Evaluation of drug or metabolite stability in biological samples, in conditions reflecting sample handling and analysis, during bioanalytical method validation is recommended by international regulatory guidelines [1,2] and the draft ICH M10 guideline [3]. This evaluation includes stability in the biological matrix (short-term, long-term, and freeze-thaw), in processed samples, and in solutions (stock and working solutions). Kaza et al. (2019) [4] discussed the differences and similarities between bioanalytical method validation guidelines [1,2], but omitted the differences in the recommended sample size (number of samples) for stability testing. The European Medicines Agency (EMA) [1] does not recommend any specific sample size, whereas the U.S. Food and Drug Administration (FDA) [2] and the ICH [3] recommend a minimum of three quality control (QC) samples per concentration level (low QC and high QC) to assess the stability of an analyte in a biological matrix. A note from Health Canada advises against examining stability using only one repetition of a QC sample [5].
Analyte stability testing is related to other characteristics of the bioanalytical method. The calibration range helps to select the studied concentrations (low- and high-quality control samples). Method precision, in turn, is important when comparing reference samples (e.g., prepared ex tempore) and test samples (i.e., stored for a specified time under specified conditions). Before any regulatory bioanalytical method validation guideline was published, Timm et al. proposed a stability assessment incorporating precision into the calculation of 95% confidence intervals [6]. However, its application was limited by the assumed equality of variances for the reference and test samples. Rudzki and Leś extended this method to datasets with unequal variances [7]. They also proposed the use of 90% confidence intervals instead of 95% [6] to make the probability equal to that of the bioequivalence recommendations [8]. Confidence intervals are a good tool for testing stability. Since their introduction by Jerzy Spława-Neyman in 1936 [9], they have become widely used, including in clinical research, for example as a bioequivalence criterion [8]. Briefly, the idea of confidence intervals is to define a range of values describing a parameter of interest in the population, based on parameter estimates observed in the sample. This estimation has a defined probability, usually 90%, 95%, or 99%. For example, a 90% confidence interval of 85.1–105.2% for mean stability means that there is a 90% probability that the mean stability lies between 85.1% and 105.2%. In the case of stability testing, the confidence interval combines the central tendency (mean difference between stored and reference samples) and data dispersion (method precision) with a selected probability. This approach is not yet frequently used because it is more restrictive and labor-intensive than the guidelines' recommendations.
Nevertheless, the confirmation of analyte stability in a biological matrix using this method is associated with a low and pre-defined probability of true instability.
The stability assessment proposed in the draft ICH M10 bioanalytical method validation guideline [3] recommends analyzing stored and reference samples but does not include a description of any comparison between them. The lack thereof creates the risk of accepting the method despite a 29.8% instability of an analyte [4]. Moreover, the sample size (number of samples) for the stability evaluation is insufficiently justified. Limiting testing to three samples in each dataset may lead to stability results biased by a single outlier. However, how much do additional analyses increase confidence in the stability results? Is this increase relevant? How should it be balanced against the cost of extra analyses? Although there may be no universal answer to these questions, further research on sample size for stability assessment is needed.
In this paper, we aim to evaluate the optimal sample size for drug stability testing in human plasma based on confidence intervals [6,7] by conducting an experimental study for tramadol and its major metabolite (O-desmethyl-tramadol), as well as a retrospective analysis of data for nine drugs of different structures.

Mass Spectrometric and Chromatographic Conditions
The bioanalytical method was adapted from a previous study [10] with a different chromatographic column and with formic acid in the mobile phase instead of acetic acid. The adapted method was validated according to the EMA guidelines [1], except for long-term stability, which had been confirmed previously. Instrumental analysis was performed on an Agilent 1260 Infinity (Agilent Technologies, Santa Clara, CA, USA) equipped with an autosampler, a degasser, and a binary pump, coupled to a hybrid triple quadrupole/linear ion trap mass spectrometer QTRAP 4000 (ABSciex, Framingham, MA, USA). The Turbo Ion Spray source was operated in positive mode with an ion spray voltage of 5500 V and a source temperature of 550 °C. The curtain gas, ion source gas 1, ion source gas 2, and collision gas (all high-purity nitrogen) were set at 206.84 kPa, 275.79 kPa, 379 kPa, and "high" instrument units, respectively. The target compounds were analyzed in the Multiple Reaction Monitoring (MRM) mode (Table 1). MRM, multiple reaction monitoring; DP, declustering potential; CE, collision energy; CXP, cell exit potential.
Chromatographic separation was achieved with a Kinetex C18 column (100 mm × 4.6 mm, 2.6 µm; Phenomenex, Torrance, CA, USA) using isocratic elution with methanol and 0.1% formic acid in a 40:60 ratio at a flow rate of 0.3 mL/min. The column and autosampler temperatures were 50 ± 1 °C and 20 ± 1 °C, respectively. The injection volume was 5 µL.

Stock Solution, Calibration Standards, and Quality Control Samples
Separate standard stock solutions of tramadol, O-desmethyl-tramadol, tramadol-d6, and O-desmethyl-tramadol-d6 were prepared in 50% methanol (v/v) and stored at −20 °C. The standard working solution was prepared by mixing the stock solutions with an appropriate volume of water. The internal standard working solution (250 ng/mL of tramadol-d6 and 75 ng/mL of O-desmethyl-tramadol-d6) was prepared by mixing both internal standard stock solutions and diluting with water.
All calibration standards and quality control samples were prepared by spiking blank human plasma with a working solution containing both analytes. The calibration standards contained tramadol and O-desmethyl-tramadol at eight concentrations ranging from 5.0 to 750 ng/mL and from 2.5 to 150 ng/mL, respectively. The quality control samples were prepared at concentrations of 15, 350, and 600 ng/mL for tramadol and 7.5, 70, and 120 ng/mL for O-desmethyl-tramadol.

Sample Preparation
Liquid-liquid extraction with tert-butyl methyl ether and 1 M sodium hydroxide was used for sample preparation [10]. Both internal standards were added as a single solution. The ether phase was evaporated under nitrogen gas, and the dry residue was reconstituted with 150 µL of the mobile phase.

Stability Evaluation and Statistical Methods
The short-term stability was evaluated with sets containing an equal number of test and reference quality control (QC) samples: 3, 4, 5, 6, and 8 for the low QC (15/7.5 ng/mL of tramadol and O-desmethyl-tramadol) and the high QC (600/120 ng/mL of tramadol and O-desmethyl-tramadol). The reference and test QC samples (plasma fortified with tramadol and O-desmethyl-tramadol solution) were prepared. The test QC samples were stored at room temperature for 24 and 72 h before extraction and LC-MS analysis. The autosampler stability test performed during method validation confirmed that samples are stable for a minimum of 68 h at room temperature [10]. Reference samples were analyzed immediately after preparation and then after 24 and 72 h of storage in the autosampler at 20 ± 1 °C, in the same sequences as the test samples. Acceptance criteria were met when the whole confidence interval was within the acceptance range of 85–115%.
The statistical analysis of stability was based on the application of 90% confidence intervals [6,7]. Snedecor's F-test (significance level α = 0.01) was applied to test the hypothesis of variance equality. The influence of the number of repetitions and of analyte concentration on the position and width of the confidence interval was analyzed using repeated-measures analysis of variance (ANOVA, α = 0.05). A normal distribution of stability was assumed when estimating the probability that the confidence interval width is below 30%. The probability P(CI ⊂ [85; 115]) was calculated using the equation

P(CI ⊂ [85; 115]) = χ_{n−1}(225·n·(n − 1)/(k²·σ_S²)),

where χ_{n−1} is the cumulative distribution function of the chi-square distribution for degrees of freedom (df) = n − 1, n is the number of repetitions, k is the value of the Student t-distribution quantile at the 0.1 significance level for n − 1 df, and σ_S is the standard deviation of stability. More details on the mathematical calculations can be found in Appendix A.
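As an illustration, the confidence-interval check can be sketched in Python. This is a minimal sketch under stated assumptions, not the exact Rudzki and Leś implementation: the standard error comes from the delta-method linearization used in Appendix A, the pooled degrees of freedom n_test + n_ref − 2 are our assumption, and the t quantiles are standard two-sided 90% table values.

```python
import math
import statistics as st

# Two-sided 90% Student t quantiles t(0.95, df) from a standard t-table,
# for the pooled df values arising from n = 3, 4, 5, 6, 8 per group.
T90 = {4: 2.132, 6: 1.943, 8: 1.860, 10: 1.812, 14: 1.761}

def stability_ci90(test, ref):
    """90% confidence interval for stability S = 100 * mean(test) / mean(ref).

    Sketch only: SE via delta-method linearization of the ratio;
    df = len(test) + len(ref) - 2 is an assumption of this sketch.
    """
    mx, mz = st.mean(test), st.mean(ref)
    s = 100.0 * mx / mz
    # relative variance of the ratio of means: Var(X)/(n_x*muX^2) + Var(Z)/(n_z*muZ^2)
    rel_var = (st.variance(test) / (len(test) * mx ** 2)
               + st.variance(ref) / (len(ref) * mz ** 2))
    se = s * math.sqrt(rel_var)
    k = T90[len(test) + len(ref) - 2]
    return s - k * se, s + k * se

lo, hi = stability_ci90([98, 99, 100, 101, 102], [100, 100, 100, 100, 100])
# The stability test passes when the whole interval lies within 85-115%
print(85 <= lo and hi <= 115)
```

For the illustrative data above (n = 5 per group), the interval is roughly 98.7–101.3%, so the test passes.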

Retrospective Analysis
Stability results for nine drugs were recorded during method validations conducted under Good Laboratory Practice conditions at the former Pharmaceutical Research Institute in Warsaw, Poland ([11–15] and unpublished data). The following types of stability were studied: short-term stability, freeze and thaw stability, and long-term stability at temperatures of −14 °C and −65 °C. Nine drugs with LC-MS and HPLC-UV methods of determination of varying precision were selected to create the datasets. For each drug and each stability test, n = 6 samples were recorded at each of the low and high QC concentrations. To analyze the worst-case scenario, for each dataset the result lying nearest to the mean of the n = 6 results was discarded to obtain an n = 5 dataset. The same procedure was used to obtain datasets of n = 4 and n = 3. The final number of calculated confidence intervals was 264. The width of the confidence intervals was compared between the low and high QC using the Wilcoxon signed-rank test (significance level p < 0.05). To analyze how differences in one variable (the percentage of confidence intervals within the acceptance criteria of 85–115%) can be explained by differences in a second variable (confidence interval width or the number of samples), the coefficient of determination was used.
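The worst-case trimming used to shrink the n = 6 datasets can be sketched as follows. This is a hypothetical re-implementation, not the original analysis script, and recomputing the mean after each removal is our assumption:

```python
def worst_case_subset(values, target_n):
    """Repeatedly discard the value lying nearest to the current mean,
    keeping the extremes, until target_n results remain (worst-case
    trimming as described for the retrospective study)."""
    vals = list(values)
    while len(vals) > target_n:
        m = sum(vals) / len(vals)
        vals.remove(min(vals, key=lambda v: abs(v - m)))
    return vals

# Illustrative data only: the most central result (105) is dropped first,
# so the reduced dataset keeps the widest spread.
print(worst_case_subset([90, 95, 100, 105, 110, 130], 5))  # -> [90, 95, 100, 110, 130]
```

Keeping the extreme values maximizes the sample standard deviation, so the confidence intervals computed from the reduced datasets are the least favorable ones.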

Experimental and Mathematical Studies
Thanks to the design of the experimental study (five sample sizes, two storage durations, two analytes at two concentrations each), we were able to calculate 40 confidence intervals (Figure 1). For the 20 pairs of low and high QC concentrations, we recorded 18 cases (90%) where the 90% confidence interval was wider for the low than for the high concentration. Moreover, the variability of the confidence interval width, presented as relative standard deviation (RSD) in Table 2, was larger for the low concentration. This shows the influence of method precision on stability evaluation, as lower concentrations were measured with worse precision. The precision of O-desmethyl-tramadol determination in quality control samples was 7.38% for the low QC (7.5 ng/mL) and 2.90% for the high QC (120 ng/mL). The precision of tramadol determination was 6.43% for the low QC (15 ng/mL) and 3.07% for the high QC (600 ng/mL). For each studied QC level, the mean extraction recovery was consistent for both analytes and their internal standards: 86.08–87.99% for tramadol, 85.55–86.99% for tramadol-d6, 74.45–78.75% for O-desmethyl-tramadol, and 74.61–79.07% for O-desmethyl-tramadol-d6. Thus, we do not expect that extraction recovery influenced the stability results.
Visual assessment of the low concentration data (Figure 2a) indicates that three and four repetitions are not appropriate, because some confidence intervals are wider than 30%. For five and six repetitions, the width is below 20%, while for eight repetitions it is below 12%. Visual assessment of the high concentration data (Figure 2b) differs somewhat: for three repetitions the confidence interval width exceeds 15% in 3 of 4 cases, while for all other repetition numbers it is below 8%, with one exception of 11% (n = 5).
Additionally, we investigated the relation between precision, confidence interval width, and the number of repetitions. The length of the confidence interval depends on the sample variance: the greater the n, the shorter the interval (its length is inversely proportional to the square root of n) and the higher the chance that the sample variance is assessed correctly. We calculated the probability that, for a given precision, the confidence interval derived from n repetitions falls within a 30% range. As expected, the relation between precision and the number of repetitions is sharp (Figure 4). For example, for 10% precision, the considered probability is 33% for n = 3, 51% for n = 4, 71% for n = 5, 86% for n = 6, and 98% for n = 8. In general, for a smaller number of repetitions there is a considerable probability that even highly precise measurements overestimate the sample variance and, consequently, the length of the confidence interval. The choice of five or six repetitions proves sufficient to ensure that the confidence intervals fall within the 85–115% interval.
Figure 4. Dependence of the probability that the confidence interval width is below 30% on the precision of measurements. Equal precision for the reference and the studied measurements is assumed. Curves are defined for n = 3 (red), n = 4 (orange), n = 5 (purple), n = 6 (green), and n = 8 (blue).
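The probabilities quoted for Figure 4 can be reproduced from the closed-form expression given in the Methods section. The sketch below uses only the standard library: the chi-square CDF is computed via the regularized lower incomplete gamma series, and the t quantiles are standard table values (equal precision in the reference and test measurements is assumed, so σ_S = √2 · precision).

```python
import math

def chi2_cdf(x, df):
    """Chi-square CDF via the series for the regularized lower
    incomplete gamma function P(df/2, x/2)."""
    a, z = df / 2.0, x / 2.0
    term = 1.0 / a
    total = term
    k = 0
    while term >= 1e-12 * total:
        k += 1
        term *= z / (a + k)
        total += term
    return total * math.exp(-z + a * math.log(z) - math.lgamma(a))

# Standard two-sided 90% Student t quantiles, keyed by n (value = t(0.95, n-1))
T_QUANTILE = {3: 2.920, 4: 2.353, 5: 2.132, 6: 2.015, 8: 1.895}

def p_ci_within(n, rsd_pct):
    """P(CI is within [85, 115]) for n replicates when both reference and
    test measurements have the given precision (% RSD), so that the
    stability SD is sigma_S = sqrt(2) * RSD on the percent scale."""
    sigma_s = math.sqrt(2.0) * rsd_pct
    k = T_QUANTILE[n]
    return chi2_cdf(225.0 * n * (n - 1) / (k ** 2 * sigma_s ** 2), n - 1)

# For 10% precision this reproduces roughly 33%, 51%, 71%, 86% for n = 3..6
for n in (3, 4, 5, 6, 8):
    print(n, p_ci_within(n, 10.0))
```

The computed values match the figures quoted in the text to within rounding.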
We postulate that five repetitions of quality control samples at low and high concentration levels are optimal for stability tests during bioanalytical method validation. For each case with n = 5, the stability tests passed and the width of all confidence intervals was below 20%. For n < 5 some of the stability tests failed (part of the confidence interval outside of the acceptance criteria of 85-115%) due to the width of confidence intervals exceeding 30%. Moreover, for n > 5 all stability tests passed and the mean width of the confidence intervals decreased gradually (Table 2).

Retrospective Study
To verify the observations from the experimental and theoretical studies, we analyzed human plasma stability data for nine validated bioanalytical methods (Figures 5 and A1). For all data, the percentage of confidence intervals lying within the acceptance criteria was acceptable for n = 5 (88% for the low and 93% for the high concentration, respectively) and reached 100% for n = 6 (Figure 6a). For n = 5, only 5 of 66 results (including four for the low QC) were outside the acceptance limits. The greatest difference between the confidence interval limits and the acceptance criteria was 1.8%.
As expected, a strong positive correlation (r² > 0.96) was observed between the number of samples and the percentage of confidence intervals within the acceptance criteria (Figure 6a). Consequently, a strong negative correlation (r² > 0.98) was observed between the confidence interval width and the percentage of confidence intervals within the acceptance criteria (Figure 6b). Among the confidence intervals for n = 3, 4, and 5, a more than 2-fold higher percentage of confidence intervals outside the acceptance criteria was observed for the low QC (Figure A2b) than for the high QC (Figure A2c). This is consistent with the higher values of both the width of the confidence interval and its variability expressed as RSD for the low QC (Table 2, Figures 7 and A4).

Figure 5. Retrospective study of nine drugs' stability in human plasma: number of confidence intervals within (positive results) and outside (negative results) the acceptance criteria for nine drugs using n = 3, 4, 5, and 6 samples for stability testing. High and low concentration data are combined.
The dataset consisted of 33 confidence intervals for each concentration level: LQC (circle), low-quality control sample; HQC (triangle), high-quality control sample.
There were no relevant differences in confidence interval width between the stability tests (Figure A3). The highest values for all sample numbers were recorded for the freeze and thaw test, but all other values for each sample number were only 1–2% lower.


Discussion
The results of the experimental, theoretical, and retrospective studies are in good agreement, indicating that the use of 90% confidence intervals requires testing of at least five repetitions of quality controls as references and as stability samples. The retrospective study revealed that the percentage of confidence intervals within the acceptance criteria is strongly correlated with the number of samples used for stability testing (positively) and with the mean width of the confidence intervals (negatively). A statistically significant difference between the low QC and the high QC was observed in the percentage of confidence intervals within the acceptance criteria for a given sample number. The type of stability test did not influence the confidence interval width. It seems that the extra work between n = 5 and n = 8 is not balanced by the benefit of a narrower confidence interval. Moreover, there would be 72 more analyses during a full validation for one analyte, and this number does not include stability testing in solutions. The amount of extra work and resources for additional analyses cannot be assessed in general, because it depends on the particular method characteristics.
Our experimental study used a single bioanalytical method for the determination of two analytes in a single laboratory. To increase confidence in the conclusions, we reused previously generated stability data for nine drugs. Retrospective analyses are very popular in medicine [16,17] and slightly less popular in pharmacy [18,19]. By contrast, in analytical chemistry retrospective analyses are used very rarely [20]. Over 20 years ago, the concept of green analytical chemistry was established to protect the environment. Recently, its extension was proposed: white analytical chemistry, which in addition to green aspects also takes into account analytical and practical attributes [21]. Retrospective analysis has even greater ecological advantages, since no chemical analysis is required and no waste is generated. Considering the large amount of analytical data produced each year in laboratories, it would be beneficial to explore them deeply to draw general conclusions, answer emerging questions, and contribute to the development of international guidelines. The retrospective study enabled comparison of data generated using LC-MS and HPLC-UV methods (Table A3). Narrower stability confidence intervals were recorded for HPLC-UV-determined imatinib than for LC-MS/MS-determined prasugrel (Figure A1). On the other hand, narrower stability confidence intervals were recorded for LC-MS-determined eplerenone than for HPLC-UV-determined ibuprofen (Figure A1). This indicates that the detector type and concentration range are not appropriate indicators of confidence interval width, which depends on method precision.
We limited our study to plasma samples. For neat solutions, due to the lower probability of interferences and the lack of variability introduced by sample preparation, the precision should be better and the optimal number of repetitions could be lower. We avoided the exclusion of outlying results. An alternative approach is to use a smaller number of replicates and remove outliers using statistical tests such as the Dixon Q-test or the Grubbs test. However, this approach, especially for a small number of replicates, may provoke questions from regulatory agencies. Additionally, it does not take into account the precision of the method. Therefore, we do not recommend it. A limitation of the retrospective study is that all confidence intervals for n = 6 were within the acceptance criteria, as we used validated methods. The calculation of a 90% confidence interval may be considered complicated compared with the current bioanalytical method validation guidelines [1,2]. However, the extra effort in data analysis increases the reliability of the stability evaluation.
We assumed a normal distribution of concentration data for stability and reference samples. However, stability is a ratio of stability samples over reference samples, and a ratio of two normally distributed variables is not itself normally distributed. This statistical issue is taken into account in bioequivalence testing, where the acceptance criteria of 80–125% are not symmetric around 100% on the original scale but are symmetric in log space. Thus, acceptance limits of 85–115% may not be appropriate for stability testing; an approach analogous to bioequivalence suggests a criterion of 85.00–117.65%. We opted to use the 85–115% acceptance limits, which are well established in regulatory guidelines [1,2], but their inconsistency with the stability distribution needs further consideration.
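The 85.00–117.65% figure follows from requiring symmetry on the log scale, exactly as for the 80–125% bioequivalence limits (80 × 125 = 100²). A one-line check:

```python
import math

# Log-symmetric limits around 100 satisfy lower * upper = 100**2,
# so an 85% lower limit pairs with 100**2 / 85 = 117.647...%
lower = 85.0
upper = 100.0 ** 2 / lower
print(round(upper, 2))  # -> 117.65
# symmetry in log space: log(lower) and log(upper) average to log(100)
assert math.isclose(math.log(lower) + math.log(upper), 2 * math.log(100.0))
```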
Our results are important because the current recommendation of at least three samples for stability testing [2,3] is not sufficient. The proposed n = 5 is in line with reports from other laboratories [22–24] in which five or six results were used to calculate the 90% confidence intervals for stability. Extending the stability acceptance criteria from deviation from the nominal concentration by adding a test-to-reference ratio may be considered an increase in regulatory burden. On the other hand, the reliability of bioanalytical data is crucial for pharmacokinetic calculations and decisions on dosing schemes, which in turn impact drug efficacy and patient safety. Thus, the proper balance between overly extensive testing and poor data quality requires further discussion. A possible answer may be a hybrid approach: hard criteria for the deviation of the mean from the nominal concentration combined with soft criteria for the 90% confidence interval of the test-to-reference ratio.
Both the experimental and retrospective studies suggest that the optimal number of repetitions is five, as also recommended by the European Bioanalysis Forum [25]. A proper assumption on the relationship between method precision and sample size may be a key factor for successful future simulations. We hope that this paper will initiate discussion and stimulate further research on the optimal sample size for stability testing. We expect that further simulations and retrospective studies from other laboratories will support the need for a bioanalytical guidelines update.

Conclusions
Five sample repetitions are optimal for the assessment of analyte stability during bioanalytical method validation. The experimental, theoretical, and retrospective study results led to similar conclusions. Three or four replicates, despite being acceptable in some guidelines, are insufficient (in some cases, the width of the confidence intervals for stability exceeded 30%, which precluded meeting the acceptance criteria). In contrast, the extra work between n = 5 and n = 8 was not balanced by the benefit of narrower confidence intervals. We hope to initiate a discussion on sample size for stability studies. Such a discussion may result in updated bioanalytical method validation guidelines.
Figure A3. Retrospective study: relation between mean confidence interval width and type of stability test. Combined data from a retrospective study of nine drugs using n = 3, 4, 5, and 6.


Relation between precision in measurements and stability:
Stability S is determined as the ratio of two uncorrelated random variables X (tested samples) and Z (reference samples):

S = X/Z

Our goal is to derive the relation between the standard deviations of X and Z and the standard deviation of S. We start with linearization, which allows us to reformulate the Z variable as

Z = µ_Z + σ_Z·Z̃,

where Z̃ is the centralized Z variable (mean = 0, standard deviation = 1). Using the linear approximation 1/Z ≈ (1/µ_Z)·(1 − (σ_Z/µ_Z)·Z̃), we obtain

E(S²) ≈ (1/µ_Z²)·E(X²) − (2σ_Z/µ_Z³)·E(X²·Z̃) + (σ_Z²/µ_Z⁴)·E(X²·Z̃²) = I + II + III

I. E(X²) = σ_X² + µ_X².
II. The variables are uncorrelated and the expected value of Z̃ is equal to 0, so E(X²·Z̃) = E(X²)·E(Z̃) = 0 and II = 0.
III. Again, the variables are uncorrelated and the standard deviation of Z̃ is equal to 1, so E(X²·Z̃²) = E(X²)·E(Z̃²) = σ_X² + µ_X².

Using linearization, we can approximate E(S) ≈ µ_X/µ_Z, so that

σ_S² = E(S²) − [E(S)]² ≈ σ_X²/µ_Z² + (σ_X² + µ_X²)·σ_Z²/µ_Z⁴ ≈ σ_X²/µ_Z² + µ_X²·σ_Z²/µ_Z⁴.

Finally:

σ_S/µ_S ≈ √((σ_X/µ_X)² + (σ_Z/µ_Z)²),

i.e., the relative standard deviation (precision) of stability is approximately the root sum of squares of the precisions of the tested and reference measurements.
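A quick Monte Carlo check of the propagation formula for the relative standard deviation of stability (the means and standard deviations below are illustrative only, not data from the study):

```python
import math
import random

# Monte Carlo check of RSD(S) ~= sqrt(RSD(X)^2 + RSD(Z)^2) for S = 100*X/Z.
random.seed(0)
mu_x, sd_x = 100.0, 7.0   # tested samples, 7% RSD (illustrative)
mu_z, sd_z = 100.0, 5.0   # reference samples, 5% RSD (illustrative)

s = [100.0 * random.gauss(mu_x, sd_x) / random.gauss(mu_z, sd_z)
     for _ in range(200_000)]
mean_s = sum(s) / len(s)
rsd_s = math.sqrt(sum((v - mean_s) ** 2 for v in s) / (len(s) - 1)) / mean_s

predicted = math.sqrt((sd_x / mu_x) ** 2 + (sd_z / mu_z) ** 2)  # ~0.086
print(round(rsd_s, 3), round(predicted, 3))
```

The simulated RSD agrees with the linearized prediction to within the small higher-order terms dropped in the derivation.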

Probability for the confidence interval:
As demonstrated by Rudzki and Leś, measurements may follow a log-normal distribution [7]. In such a case, the confidence interval can be calculated using logarithmic transformation, which yields a normal distribution of the stability. In order to keep the model simple, from now on we will assume the normal distribution of the stability.
Let us denote the sample standard deviation of stability by s_S. Under the assumption of the normal distribution, the 90% confidence interval of stability has the following form:

CI = [µ_S − k·s_S/√n ; µ_S + k·s_S/√n],

where µ_S is the mean value of stability and k is the value of the Student t-distribution quantile at the 0.1 significance level for n − 1 degrees of freedom (df). In the presented work, we consider only stable analytes, i.e., µ_S = 100%. The probability that the confidence interval lies within the 85–115% interval, P(CI ⊂ [85; 115]), is then equivalent to

P(s_S·k/√n < 15) = P(s_S² < 225·n/k²).

Assuming that the true standard deviation of stability is σ_S:

s_S²·(n − 1)/σ_S² ~ χ²(n − 1).

As a result:

P(CI ⊂ [85; 115]) = χ_{n−1}(225·n·(n − 1)/(k²·σ_S²)),

where χ_{n−1} is the cumulative distribution function of the chi-square distribution for df = n − 1.