Editorial

EU-HTA Guidance for Clinical Validity: Misconceptions and Flawed Processes

Mondher Toumi, Bruno Falissard, Asma Jouini, Samuel Aballéa, Laurent Boyer and Pascal Auquier

1 CEReSS/UR3279—Health Services Research and Quality of Life Center, Aix-Marseille University, 13385 Marseille, France
2 InovIntell, 215 rue du Faubourg St Honoré, 75008 Paris, France
3 CESP, INSERM U1018, Université Paris-Saclay, 94800 Villejuif, France
4 Clever Access, Tunis 1053, Tunisia
5 InovIntell Co., Ltd., 3023 GJ Rotterdam, The Netherlands
* Author to whom correspondence should be addressed.
J. Mark. Access Health Policy 2025, 13(3), 33; https://doi.org/10.3390/jmahp13030033
Submission received: 17 March 2025 / Accepted: 6 May 2025 / Published: 15 July 2025
This review of the European Health Technology Assessment (EU HTA) guidance on the validity of clinical studies, in particular randomized controlled trials (RCTs), highlights several key issues that undermine its practical application and effectiveness, including misconceptions, errors, and inconsistencies [1]. These flaws are attributable to the absence of a systematic literature review (SLR), a lack of transparency, a lack of accountability, and a failure to consider the evidence-based medicine (EBM) principles mandated by the EU HTA Regulation (EU HTAR) [2,3].
A key concern is the contradiction between the EU HTAR's prohibition of judgment [3] and the inherent need for judgment in clinical trial evaluations [4,5]. Assessors must determine comparative effectiveness [3], which requires evaluating study design, inclusion criteria, and outcomes, as well as their impact on validity and statistical precision [6]. Prohibiting judgment therefore leads to flawed assessments [6,7], because bias identification, a core aspect of this guidance, relies on judgment [4,8,9]; banning it undermines the assessment of clinical evidence.
A significant issue with this guidance is its misinterpretation and inaccurate representation of fundamental concepts in clinical research, including internal and external validity. The guidance defines internal validity as a study's freedom from bias [1]. However, internal validity is not just about bias control; it is about establishing causality between an intervention and the observed outcomes [10,11,12]. Even in randomized trials, confounder imbalances and population heterogeneity can weaken causal inferences [10,11,12].
Moreover, the guidance erroneously suggests that biases affect only the internal validity of assessments, whereas in reality multiple biases can also have significant implications for external validity [1]. Additionally, the guidance fails to acknowledge the trade-offs between internal and external validity. Highly controlled trials (high internal validity) may lack real-world applicability (low external validity), while pragmatic trials prioritize generalizability at the cost of causal certainty [12,13,14]. Tools such as PRECIS-2 help to balance these dimensions [15,16], yet the guidance offers no such framework. Expecting a single trial to maximize both types of validity alongside statistical precision contradicts methodological reality.
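For illustration, the sketch below shows how a PRECIS-2-style profile makes this trade-off explicit: each of the tool's nine domains is scored from 1 (very explanatory) to 5 (very pragmatic), and the resulting profile can be summarized at the design or assessment stage. The scores and the summary rule below are assumptions for the example, not taken from the guidance or from any specific trial.

```python
# Minimal sketch of PRECIS-2-style scoring; scores and the summary rule are illustrative.
PRECIS2_DOMAINS = [
    "eligibility", "recruitment", "setting", "organisation",
    "flexibility_delivery", "flexibility_adherence",
    "follow_up", "primary_outcome", "primary_analysis",
]

def summarize(scores):
    """Summarize 1-5 domain scores as leaning explanatory or pragmatic."""
    assert set(scores) == set(PRECIS2_DOMAINS), "every domain must be scored"
    assert all(1 <= s <= 5 for s in scores.values()), "scores range from 1 to 5"
    mean = sum(scores.values()) / len(scores)
    leaning = "pragmatic" if mean >= 3 else "explanatory"
    return f"mean score {mean:.1f}: design leans {leaning}"

# Hypothetical tightly controlled trial: mostly explanatory domain scores.
example_scores = {domain: 2 for domain in PRECIS2_DOMAINS}
example_scores["primary_outcome"] = 4
print(summarize(example_scores))  # mean score 2.2: design leans explanatory
```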
The guidance suggests that external validity can be sufficiently evaluated using qualitative methods on a case-by-case basis [1]. However, this approach may not fully acknowledge the complexity of external validity and the multiple factors, beyond PICO (Population, Intervention, Comparator, Outcome), that influence the generalizability of study results [17]. Several biases specifically affect external validity [9,17,18]. Using standardized tools or checklists to guide the assessment of external validity would ensure a more rigorous and transparent evaluation [9].
Although the guidance states that PICO differences affect external validity, internal validity is also PICO-dependent but is still assessed centrally [19]. A more practical approach would be to assess external validity at the PICO level while leaving broader contextual considerations to Member States (MSs)’ discretion.
The guidance ignores Type I and Type II errors and avoids conclusions of superiority, non-inferiority, or equivalence, focusing instead on whether the tested hypothesis is or is not rejected; yet that framing itself rests on these errors and conclusions [1]. This exclusion appears unreasonable, as this factual information is important for HTAs [1,20], and the arguments invoked for the prohibition of judgment cannot justify it.
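For context, the textbook definitions underlying this point can be stated briefly (standard notation, not material drawn from the guidance):

```latex
\alpha = \Pr(\text{reject } H_0 \mid H_0 \text{ true}), \qquad
\beta  = \Pr(\text{fail to reject } H_0 \mid H_0 \text{ false}), \qquad
\text{power} = 1 - \beta
```

A bare statement that a hypothesis was or was not rejected is only interpretable alongside the α and β (or power) at which that decision was made.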
The Minimal Clinically Important Difference (MCID) is not addressed, although it should be consistent across all MSs and managed within the Joint Clinical Assessment (JCA) [1]. It may be reasonable to ask Health Technology Developers (HTDs) to document the MCID for critical endpoints.
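As an illustration of what such documentation could look like, the sketch below contrasts statistical significance with clinical meaningfulness relative to a pre-specified MCID; all numbers are hypothetical and the decision rules are one common convention, not a requirement of the guidance.

```python
# Hypothetical example: is an estimated effect clinically meaningful,
# not merely statistically significant? All numbers are illustrative.
effect_estimate = 4.0          # e.g., points on a symptom scale
ci_lower, ci_upper = 1.5, 6.5  # 95% confidence interval for the effect
mcid = 3.0                     # pre-specified minimal clinically important difference

statistically_significant = ci_lower > 0          # CI excludes "no effect"
clinically_meaningful = effect_estimate >= mcid   # point estimate reaches the MCID
robustly_meaningful = ci_lower >= mcid            # even the CI lower bound reaches it

print(statistically_significant, clinically_meaningful, robustly_meaningful)
# True True False: significant and nominally meaningful, but not robustly so
```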
The guidance advocates for intention-to-treat (ITT) analysis to control for attrition, but it fails to consider that ITT may not be suitable for progressive diseases [21].
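To make that concern concrete, here is a minimal sketch with invented data contrasting an intention-to-treat analysis with a per-protocol analysis; in progressive diseases, where discontinuation and treatment switching are frequent, the two estimates can diverge substantially.

```python
# Toy two-arm trial; each record is (assigned_arm, completed_protocol, responded).
# The data are invented and only illustrate how the two analyses differ.
records = [
    ("treatment", True, 1), ("treatment", True, 1), ("treatment", False, 0),
    ("control",   True, 0), ("control",   True, 1), ("control",   False, 0),
]

def response_rate(arm, per_protocol=False):
    """ITT keeps everyone as randomized; per-protocol keeps protocol completers only."""
    rows = [r for r in records if r[0] == arm and (r[1] or not per_protocol)]
    return sum(r[2] for r in rows) / len(rows)

itt_difference = response_rate("treatment") - response_rate("control")
pp_difference = (response_rate("treatment", per_protocol=True)
                 - response_rate("control", per_protocol=True))
print(f"ITT: {itt_difference:.2f}, per-protocol: {pp_difference:.2f}")  # ITT: 0.33, per-protocol: 0.50
```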
Several recommendations within the guidance are arbitrary, such as the selection of the original Risk of Bias (RoB) tool over its updated version, RoB 2, for bias assessment [1,8] and the method recommended for assessing external validity [1]. Similarly, the exclusion of Bayesian adaptive trials and pragmatic trials limits the relevance of the guidance, even though these designs can provide valuable perspectives for HTA decisions [1].
Conclusions:
The EU HTA guidance on clinical trial validity is fraught with limitations, stemming from a lack of SLR-based recommendations, a lack of transparency, a lack of accountability, a failure to adhere to EBM principles, and the prohibition of judgment [2,3], leading to significant misconceptions and errors. A thorough revision of the guidance is needed, incorporating SLRs to justify methodological choices, expert consultations, greater transparency, and compliance with EBM principles, as required by the EU HTAR.
Evaluating clinical trials is complex and involves various cognitive skills and judgments [22]. Policymakers likely prohibited judgment in the EU HTAR for good reason: the prohibition probably reflects the belief that appraising the value of an intervention and its positioning should remain with MSs so that national specificities are addressed. It is unlikely that this rule was intended to prevent assessors from exercising scientific judgment, but this requires clarification. Before assessing the certainty of comparative effectiveness, the HTA Coordination Group and the European Commission's DG Santé should resolve the methodological uncertainties in their guidance.

Author Contributions

M.T.: Conceptualized the content and wrote the first draft of the manuscript. B.F., A.J., S.A., L.B. and P.A.: Challenged the concept, edited the manuscript and refined arguments for clarity and coherence. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

M.T. and S.A. are employees of InovIntell. A.J. is an employee of Clever Access Tunisia. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EBM	Evidence-Based Medicine
EU	European Union
EU HTA	European Health Technology Assessment
EU HTAR	European Health Technology Assessment Regulation
HTA	Health Technology Assessment
HTDs	Health Technology Developers
ITT	Intention to Treat
JCA	Joint Clinical Assessment
MCID	Minimal Clinically Important Difference
MS	Member States
PICO	Population, Intervention, Comparator, Outcome
PRECIS	PRagmatic Explanatory Continuum Indicator Summary
RoB	Risk of Bias
SLR	Systematic Literature Review

References

  1. HTA Coordination Group (HTACG). Guidance on the Validity of Clinical Studies for Joint Clinical Assessments. V1.0. 4 July 2024. Available online: https://health.ec.europa.eu/document/download/9f9dbfe4-078b-4959-9a07-df9167258772_en?filename=hta_clinical-studies-validity_guidance_en.pdf (accessed on 30 December 2024).
  2. European Access Academy (EAA). Open Letter to DG Santé and the Member State Coordination Group on HTA. 22 November 2024. Available online: https://irp.cdn-website.com/e52b6f19/files/uploaded/Open_Letter_Methods_EU_HTA.pdf (accessed on 28 January 2025).
  3. European Commission. Regulation (EU) 2021/2282 of the European Parliament and of the Council of 15 December 2021 on Health Technology Assessment and Amending Directive 2011/24/EU. 15 December 2021. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32021R2282 (accessed on 18 December 2024).
  4. Higgins, J. The Cochrane Collaboration’s Tool for Assessing Risk of Bias in Randomised Trials. BMJ 2011, 343, d5928.
  5. Schünemann, H.J.; Mustafa, R.A.; Brozek, J.; Steingart, K.R.; Leeflang, M.; Murad, M.H.; Bossuyt, P.; Glasziou, P.; Jaeschke, R.; Lange, S. GRADE guidelines: 21 part 1. Study design, risk of bias, and indirectness in rating the certainty across a body of evidence for test accuracy. J. Clin. Epidemiol. 2020, 122, 129–141.
  6. Lima, J.P.; Chu, X.; Guyatt, G.H.; Tangamornsuksan, W. Certainty of evidence, why? J. Bras. Pneumol. 2023, 49, e20230167.
  7. US Centers for Disease Control and Prevention (CDC). Chapter 7: GRADE Criteria Determining Certainty of Evidence. Available online: https://www.cdc.gov/acip-grade-handbook/hcp/chapter-7-grade-criteria-determining-certainty-of-evidence/index.html (accessed on 17 March 2025).
  8. Nejadghaderi, S.A.; Balibegloo, M.; Rezaei, N. The Cochrane risk of bias assessment tool 2 (RoB 2) versus the original RoB: A perspective on the pros and cons. Health Sci. Rep. 2024, 7, e2165.
  9. Jung, A.; Balzer, J.; Braun, T.; Luedtke, K. Identification of tools used to assess the external validity of randomized controlled trials in reviews: A systematic review of measurement properties. BMC Med. Res. Methodol. 2022, 22, 100.
  10. Cahit, K. Internal validity: A must in research designs. Educ. Res. Rev. 2015, 10, 111–118.
  11. Slack, M.K.; Draugalis, J.R., Jr. Establishing the internal and external validity of experimental studies. Am. J. Health-Syst. Pharm. 2001, 58, 2173–2181.
  12. Busch, C. The relationship between internal and external validity. Rerum Causae 2017, 9, 71–91.
  13. ASH Clinical News. Who’s in and Who’s out? Available online: https://ashpublications.org/ashclinicalnews/news/2774/Who-s-In-and-Who-s-Out (accessed on 14 January 2025).
  14. Jimenez-Buedo, M.; Miller, L.M. Why a trade-off? The relationship between the external and internal validity of experiments. THEORIA 2010, 25, 301–321.
  15. Loudon, K.; Treweek, S.; Sullivan, F.; Donnan, P.; Thorpe, K.E.; Zwarenstein, M. The PRECIS-2 tool: Designing trials that are fit for purpose. BMJ 2015, 350, h2147.
  16. Wright, P.J.; Pinto, B.M.; Corbett, C.F. Balancing internal and external validity using PRECIS-2 and RE-AIM: Case exemplars. West. J. Nurs. Res. 2021, 43, 163–171.
  17. Rothwell, P.M. External validity of randomised controlled trials: “To whom do the results of this trial apply?” Lancet 2005, 365, 82–93.
  18. Laerd Dissertations & Theses. Threats to External Validity. Available online: https://dissertation.laerd.com/external-validity-p3.php (accessed on 28 January 2025).
  19. Eldridge, S.; Ashby, D.; Bennett, C.; Wakelin, M.; Feder, G. Internal and external validity of cluster randomised trials: Systematic review of recent trials. BMJ 2008, 336, 876–880.
  20. Maltenfort, M. Type I, type II, and occasionally type III: How can we go wrong? Clin. Spine Surg. 2015, 28, 189.
  21. Hernán, M.A.; Hernández-Díaz, S. Beyond the intention-to-treat in comparative effectiveness research. Clin. Trials 2012, 9, 48–55.
  22. Oliveira, M.D.; Mataloto, I.; Kanavos, P. Multi-criteria decision analysis for health technology assessment: Addressing methodological challenges to improve the state of the art. Eur. J. Health Econ. 2019, 20, 891–918.
