Abstract
This article examines the institutional reliability of expert evidence in construction litigation in England and Wales. Drawing on doctrinal analysis, practitioner interviews, and comparative evaluation of Australia, Singapore, and international arbitration, it argues that reliability should be understood not as an ethical virtue of individual experts but as a systemic property of evidentiary governance. Despite the procedural safeguards of Part 35 of the Civil Procedure Rules, expert independence remains undermined by adversarial incentives, methodological inconsistency, limited judicial capacity, and weak enforcement. Comparative models demonstrate that concurrent evidence, expert accreditation, and structured judicial oversight can effectively realign procedural incentives with epistemic integrity. The article proposes four interdependent reforms—accreditation, methodological standardisation, judicial capacity-building, and feedback-based oversight—to embed reliability as a procedural norm within the Technology and Construction Court. By reframing reliability as an institutional obligation rather than a moral aspiration, the study contributes to wider debates on evidentiary governance, procedural justice, and the regulation of expertise in technologically complex adjudication.
1. Introduction
Expert evidence occupies a pivotal yet contested role within civil adjudication. In construction litigation, it serves as the evidential bridge between technical complexity and judicial reasoning, translating engineering, scheduling, and valuation data into legally intelligible form. Within the Technology and Construction Court (TCC), expert witnesses often exercise decisive influence over questions of causation, liability, and quantum [1,2]. Despite the procedural emphasis on impartiality under Part 35 of the Civil Procedure Rules (CPR) [3], judicial criticism of expert performance has been frequent and severe. Cases such as Van Oord v Dragados [4], Imperial Chemical Industries v Merit Merrell [5], and Jones v Kaney [6] illustrate recurring concerns about adversarial bias, methodological inconsistency, and rhetorical overreach—symptoms of systemic fragility rather than isolated professional failure.
We contend that the reliability of expert evidence cannot be secured solely through ethical injunctions to independence. Rather, reliability should be conceived as an institutional and epistemic property—reflecting the legal system’s capacity to ensure that expert knowledge is methodologically robust, transparent, and resistant to adversarial distortion.
Existing doctrinal frameworks treat reliability as an attribute of individual virtue, enforced through duties of candour and potential negligence liability [7,8]. However, empirical evidence and comparative jurisprudence indicate that structural incentives, procedural culture, and market conditions exert far greater influence on how experts behave and how courts assess their work [9,10].
This study integrates doctrinal analysis, empirical interviews, and comparative legal perspectives to examine how institutional conditions shape expert reliability in construction litigation in England and Wales. Three interrelated challenges are identified: adversarial distortion of expert independence, methodological inconsistency across delay and quantum analyses, and uneven judicial scrutiny and enforcement of evidentiary standards. Comparative insights from Australia, Singapore, and international arbitration reveal alternative institutional designs—such as concurrent evidence, accreditation regimes, and structured feedback systems—that provide regulatory pathways for strengthening expert reliability.
1.1. Research Questions
This paper addresses three principal questions:
- How do procedural and institutional factors in England and Wales shape the reliability of expert evidence in construction litigation?
- How do comparative jurisdictions—Australia, Singapore, and international arbitration—address equivalent evidentiary challenges?
- What reforms could embed epistemic reliability within the institutional design of the Technology and Construction Court?
1.2. Theoretical Basis
The theoretical orientation of this study is socio-legal and comparative. Drawing on Edmond’s [9] and Jasanoff’s [11] theories of epistemic governance, we conceptualise expert reliability as a systemic rather than moral phenomenon. This view aligns with Mnookin et al. [10], who argue that evidentiary legitimacy depends on institutional arrangements that align professional incentives with methodological integrity. Our approach also engages procedural-justice theory [12,13], which links the fairness of adjudication to the transparency and accountability of evidentiary processes.
From a comparative-law standpoint, the study follows the functionalist methodology articulated by Zweigert and Kötz [14], emphasising how different legal systems respond to similar practical problems. Australia, Singapore, and international arbitration are examined not as cultural curiosities but as functional laboratories for testing procedural innovation.
Australia exemplifies judicially managed concurrent evidence (“hot-tubbing”) that mitigates adversarial bias; Singapore represents a hybrid model blending common-law procedure with state-backed accreditation of experts; and international arbitration offers a transnational arena where evidentiary standards evolve through contractual governance rather than statute.
Finally, the study situates its inquiry within the emerging literature on digital and algorithmic evidence [15,16], recognising that AI-assisted methodologies introduce novel epistemic risks requiring transparent validation. This theoretical synthesis grounds the paper’s later reform proposals in an integrated framework of institutional reliability, procedural justice, and evidentiary governance.
2. Methodology and Legal Framework
2.1. Research Design and Rationale
We adopt a mixed-methods socio-legal design integrating doctrinal, empirical, and comparative analysis. This design reflects our premise that the reliability of expert evidence is not solely a doctrinal construct but a socially situated practice shaped by professional norms, procedural culture, and institutional architecture [9,12,17].
Our doctrinal analysis draws upon Part 35 of the Civil Procedure Rules 1998 (as amended) [3], the associated Practice Direction 35, and leading case law including Jones v Kaney [6], Imperial Chemical Industries v Merit Merrell [5], and Van Oord v Dragados [4]. We examine how these sources define the expert’s overriding duty to the court and how judicial expectations of independence and methodological transparency are expressed in procedural law.
Complementing this doctrinal foundation, we conducted a qualitative empirical component to explore how practitioners interpret and enact reliability in day-to-day litigation. Ten semi-structured interviews were undertaken with purposively selected professionals drawn from the main communities engaged in construction disputes—barristers, solicitors, chartered quantity surveyors, delay analysts, and technical consultants. This combination of doctrinal and empirical approaches enables triangulation between the normative ideals of reliability and the lived experience of evidentiary practice.
The mixed-methods design is particularly suited to construction litigation, a domain characterised by technical complexity and high evidentiary dependence [18]. It allows us to connect the “black-letter” procedural framework with the socio-professional dynamics that sustain or undermine evidentiary integrity.
2.2. Reflexivity and Research Positionality
We recognise that socio-legal inquiry requires continuous reflexivity regarding researcher positionality [19]. As academic researchers with professional familiarity in UK construction law and dispute resolution, we occupy a semi-insider position that affords contextual insight but also potential interpretive bias. To mitigate this, we employed peer debriefing and iterative theme validation against doctrinal findings and published literature [17].
Reflexivity is thus embedded not as a methodological afterthought but as a substantive safeguard ensuring that interpretive claims are grounded in verifiable legal and empirical evidence. Our stance aligns with the socio-legal understanding of the researcher as both observer and participant in the construction of legal meaning [20].
2.3. Legal and Institutional Context
Expert evidence in England and Wales is governed primarily by Part 35 of the Civil Procedure Rules [3] and Practice Direction 35. Rule 35.3 codifies the expert's duty to help the court, a duty that overrides any obligation to the instructing party. Although this provision embodies the ideal of epistemic neutrality, adversarial incentives and commercial pressures often undermine it [7,13].
The abolition of expert-witness immunity in Jones v Kaney [6] aimed to strengthen accountability by exposing negligent experts to civil liability. Yet empirical observation suggests that this has produced limited deterrence: judicial criticism rarely translates into sanctions, and costs orders for unreliable evidence remain exceptional [8,21].
Within the TCC, expert evidence is central to the adjudication of delay, quantum, and design disputes. While judges possess deep legal expertise, they vary in technical literacy—particularly regarding emerging digital tools such as Building Information Modelling (BIM) and algorithmic delay analysis [22]. This asymmetry generates what Edmond [9] terms epistemic dependence, whereby judicial decisions rely heavily on expert mediation of complex technical data.
Procedural mechanisms intended to mitigate adversarial distortion—joint expert statements under r.35.12 and concurrent evidence or “hot-tubbing” under Practice Direction 35, paragraph 11—are inconsistently applied and frequently resisted for strategic reasons [23]. The resulting variability highlights a gap between procedural aspiration and institutional reality, sustaining what Freckelton and Selby [24] describe as a persistent “culture of partisan expertise”.
2.4. Doctrinal–Empirical Integration
Our analytical framework integrates doctrinal, empirical, and comparative dimensions to generate a composite understanding of reliability. The doctrinal strand provides a normative benchmark; the empirical strand reveals how those norms are operationalised; and the comparative strand situates both within an international field of procedural innovation.
2.4.1. Selection of Comparative Jurisdictions
We selected Australia, Singapore, and international arbitration as comparators because each offers distinct institutional mechanisms for governing expert evidence while sharing common-law foundations (Table 1).
Table 1. Comparative Framework.
- Australia has institutionalised concurrent evidence (“hot-tubbing”) across federal and state courts, significantly enhancing transparency and judicial control [24].
- Singapore combines common-law adversarial procedure with regulatory licensing of expert witnesses under its Evidence (Experts) Rules 2017, overseen by the Supreme Court and Singapore Academy of Law [25].
- International arbitration—guided by the IBA Rules on the Taking of Evidence [26] and CIArb Protocols [27]—provides a transnational procedural model promoting disclosure, comparability, and professional accountability.
These jurisdictions were chosen for their functional proximity to England and Wales and their demonstrable procedural innovations relevant to institutional reliability.
2.4.2. Integration of Empirical and Doctrinal Data
We interviewed ten professionals with direct experience in construction litigation: four barristers and solicitors practising before the TCC, three chartered quantity surveyors and delay analysts involved in UK and international arbitration, two technical consultants specialising in digital forensics, and one Singapore-qualified construction lawyer. Each interview, lasting 45–90 minutes, was conducted online between 2023 and 2024, recorded with consent, and transcribed verbatim.
A common interview guide explored four domains: (1) professional independence and adversarial pressure; (2) methodological consistency; (3) judicial engagement with technical evidence; and (4) cross-jurisdictional practice. Thematic analysis followed Braun and Clarke’s [28] six-stage model, allowing inductive theme generation while anchoring interpretation in theoretical categories of adversarial influence, methodological coherence, and institutional accountability.
By triangulating doctrinal findings, practitioner testimony, and comparative insights, we treat reliability as a systemic property of the evidentiary process rather than a matter of individual virtue. This “triangulation of meaning” enhances explanatory depth and strengthens the study’s internal validity.
2.5. Limitations of the Methodology
While the integration of multiple methods strengthens validity, the purposive and relatively small interview sample (n = 10) limits generalisability.
Our focus on litigation before the TCC excludes arbitration and adjudication forums that display distinct procedural cultures. Nevertheless, the findings provide an empirically grounded foundation for theorising institutional reliability and identifying reform pathways applicable across related dispute-resolution contexts.
The qualitative design also reflects a broader epistemic orientation: our goal is not statistical inference but the critical illumination of the institutional conditions that sustain or compromise expert evidence in construction disputes.
3. Key Challenges Undermining Expert Reliability
Building on the doctrinal, empirical, and comparative analysis outlined above, we identified several recurrent themes illustrating how procedural ideals diverge from evidentiary practice. Despite the safeguards embedded within Part 35 of the Civil Procedure Rules, the reliability of expert evidence in construction litigation remains structurally fragile. Our findings reveal four interconnected challenges: adversarial distortion, methodological inconsistency, limited judicial capacity, and weak procedural enforcement. Together, they demonstrate that reliability functions less as an ethical attribute of individuals and more as an institutional property of the evidentiary system.
3.1. Adversarial Distortion of Independence
Although Rule 35.3 establishes an overriding duty to the court, interviewees consistently described a professional culture in which independence is undermined by adversarial and commercial pressures. Experts are frequently chosen for their perceived alignment with party narratives rather than their neutrality.
One participant observed that professional reputations are often “built on perceived usefulness to the client rather than on neutrality.” This dynamic echoes Freckelton and Selby’s [24] analysis of the “partisan-expert culture” pervasive in common-law litigation and contrasts with Australia’s institutional emphasis on joint testimony and judicial oversight [29].
The removal of witness immunity in Jones v Kaney [6] sought to enhance accountability but has had limited behavioural impact. None of our participants were aware of successful negligence actions against experts, suggesting that deterrence remains symbolic. As Zuckerman [13] and Genn [12] observe, procedural justice cannot rely on moral exhortation alone; it requires incentive structures that align professional independence with procedural integrity. Within the Technology and Construction Court (TCC), the commercial logic of repeat instruction sustains an adversarial marketplace for expertise that systematically erodes neutrality.
3.2. Methodological Inconsistency and Epistemic Opacity
A second challenge concerns the methodological inconsistency of expert analysis. Participants described a “methodological free-for-all” in which delay and quantum experts select techniques—such as as-planned versus as-built, time-impact, or measured-mile analyses—to support client narratives. Identical data sets often yield divergent results, compelling judges to arbitrate between incompatible analytical paradigms.
The absence of binding methodological benchmarks compounds this opacity. While the Society of Construction Law Delay and Disruption Protocol [30] and RICS Practice Statement on Expert Witnesses [31] encourage transparency, their non-mandatory status weakens their influence. In contrast, Australian courts employ concurrent evidence (“hot-tubbing”) procedures that expose methodological weaknesses through direct comparison, while Singapore mandates full disclosure of assumptions and data under its Evidence (Experts) Rules 2017 [25]. Without similar institutional discipline, English courts risk what Edmond [9] describes as “epistemic laissez-faire”—a procedural tolerance of unvalidated methodologies that privileges rhetorical fluency over analytical rigour.
3.3. Judicial Capacity and Cognitive Constraints
Reliability also depends on the judiciary’s capacity to interrogate technically complex material. Although TCC judges possess considerable expertise in construction law, few have formal training in data analytics, delay modelling, or digital forensics. Consequently, they rely heavily on expert mediation of digital evidence such as Building Information Modelling (BIM) and algorithmic delay simulations [15,22].
This dynamic generates what Edmond [9] terms “epistemic dependence,” whereby the court must rely on knowledge it cannot independently verify.
Under cognitive pressure, decision-makers often resort to heuristics such as confidence, fluency, or demeanour [32,33]. As one barrister noted, “The most persuasive expert, not the most rigorous one, often prevails.” Cross-examination accentuates this distortion: designed to test credibility rather than methodology, it rewards rhetorical agility over evidentiary substance. Comparative evidence demonstrates that judicially moderated concurrent evidence, as used in Australian and arbitral forums, reduces this distortion by transforming adversarial confrontation into collaborative clarification. Encouragingly, recent Judicial College initiatives on expert evidence training suggest growing awareness of these cognitive constraints, though systematic technical education remains uneven across the TCC.
3.4. Procedural Laxity and Limited Enforcement
The final challenge is the absence of consistent enforcement mechanisms within the CPR framework. Although judges occasionally criticise experts in obiter dicta—such as Fraser J’s remarks in Van Oord v Dragados [4] regarding “a lack of robust assistance”—such censure has no regulatory consequence. Interviewees reported that unreliable or biased experts often reappear in subsequent cases unchallenged.
Unlike Singapore’s licensing model or Australia’s use of judicial feedback registers, England and Wales maintain no institutional record of expert performance. There is neither a national registry of accredited experts nor a feedback loop connecting judicial evaluation to professional accreditation. This absence of procedural memory perpetuates what Genn [12] calls “managerial minimalism”: a case-management culture prioritising expediency over evidentiary learning. Without feedback mechanisms or post-trial accountability, reliability remains aspirational rather than enforceable.
3.5. Synthesis: Reliability as an Institutional Property
Across these four domains, a consistent pattern emerges. Adversarial economics, methodological pluralism, cognitive constraints, and regulatory inertia interact to produce systemic unreliability. Expert performance is shaped not by isolated moral lapses but by structural incentives embedded in the litigation ecosystem. As Edmond [9] and Mnookin et al. [10] argue, evidentiary reliability depends upon institutional architectures that integrate independence, transparency, and validation into procedural design.
Our findings support a reconceptualisation of reliability as a systemic and institutional phenomenon. It must be cultivated through the procedural environment—via accreditation, methodological standardisation, judicial training, and structured feedback—rather than presumed as an ethical given. In the next section, we translate these findings into a reform framework designed to institutionalise reliability through co-regulation, comparative adaptation, and enhanced evidentiary governance within the Technology and Construction Court.
4. Institutional Pathways for Reform
The preceding analysis demonstrates that the reliability of expert evidence in construction litigation is not secured by professional ethics alone but by the institutional design of evidentiary governance. Reform must therefore move beyond exhortations to impartiality and address the structural incentives, procedural norms, and regulatory mechanisms that shape expert conduct.
Drawing on doctrinal analysis, comparative models, and practitioner testimony, we propose four mutually reinforcing pathways: (1) expert accreditation and co-regulation; (2) methodological standardisation and transparency; (3) judicial capacity-building and evidentiary literacy; and (4) post-trial feedback and data-driven oversight. Together, these pathways aim to embed reliability as a systemic property of the Technology and Construction Court (TCC).
4.1. Expert Accreditation and Co-Regulation
The first reform priority is the creation of a national accreditation framework for expert witnesses in construction disputes. England and Wales currently rely on voluntary professional guidance, such as the RICS Practice Statement on Surveyors Acting as Expert Witnesses [31], but there is no statutory or judicially endorsed system verifying competence or independence. This laissez-faire model contrasts sharply with Singapore, where the Evidence (Experts) Rules 2017 and Supreme Court Practice Directions 2020 establish a court-approved registry of accredited experts. Registration requires professional qualifications, disclosure of prior appointments, and continuing education—conditions that operationalise impartiality through institutional design rather than moral aspiration.
We propose a Co-Regulatory Council for Expert Evidence jointly administered by the judiciary, professional bodies (RICS, ICE, CIArb), and the Civil Justice Council. Accreditation would entail (a) verification of technical credentials, (b) completion of evidentiary-ethics training, and (c) submission to a disciplinary code aligned with CPR r.35.3. A tiered system could differentiate between construction-specific experts (delay, quantum, engineering) and cross-disciplinary specialists (digital forensics, BIM analytics). Judicial nomination from an accredited list would not be mandatory but would create a presumption of reliability analogous to the court-approved expert model in Singapore.
Accreditation also facilitates data collection. Each expert’s registry profile could record anonymised feedback from judges and parties, enabling longitudinal monitoring of performance trends. By embedding accountability within the appointment process, co-regulation aligns professional self-governance with public oversight—satisfying Edmond’s [9] call for institutional mechanisms that secure epistemic integrity through systemic design.
4.2. Methodological Standardisation and Transparency
The second reform path concerns methodological coherence. Our empirical findings reveal that inconsistent analytical techniques—particularly in delay and quantum assessment—generate epistemic opacity and adversarial manipulation. We advocate a move toward methodological standardisation grounded in published, peer-reviewed protocols and transparent data disclosure.
Australia provides a persuasive precedent. The Federal Court Practice Note GPN-EXPT (2016) requires experts to identify assumptions, data sources, and methodologies in a format enabling replication. Singapore’s Form E1 Annex A mandates similar declarations. Adapting these models, the Civil Justice Council could issue a TCC Practice Direction on Expert Methodology, prescribing minimum standards for empirical transparency:
- Disclosure of datasets, software, and algorithms used in delay modelling;
- Explanation of any variance between competing analytical approaches;
- Statement of confidence intervals or error margins where quantification is probabilistic.
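To illustrate how such disclosure requirements might be made machine-readable, the following is a minimal, hypothetical sketch of a structured methodology declaration; all class, field, and value names are illustrative assumptions, not drawn from any existing practice direction or software:

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class MethodologyDeclaration:
    """Hypothetical structured disclosure accompanying an expert report."""
    technique: str          # e.g. "time-impact analysis"
    datasets: list[str]     # source data relied upon
    software: list[str]     # tools and versions used
    assumptions: list[str]  # stated analytical assumptions
    error_margin: float | None = None  # probabilistic margin, if any

    def is_complete(self) -> bool:
        # Complete only if data, tools, and assumptions are all
        # disclosed, mirroring the transparency items listed above.
        return bool(self.datasets and self.software and self.assumptions)

decl = MethodologyDeclaration(
    technique="time-impact analysis",
    datasets=["as-built programme (rev 7)"],
    software=["Primavera P6 v23"],
    assumptions=["contemporaneous progress records are accurate"],
    error_margin=0.05,
)
print(decl.is_complete())  # True: all three disclosure categories present
```

A declaration in this form could be lodged at the proposed early case conference, allowing the court to verify at a glance that competing experts have disclosed comparable analytical inputs.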
Such measures would align construction evidence with broader movements in science-based regulation [10,34]. They would also accommodate the digital turn in evidentiary practice, ensuring that algorithmic tools used in programme analysis meet emerging requirements of explainability and auditability [15]. Transparency thus becomes not only an ethical expectation but a procedural requirement anchored in replicable methodology.
To operationalise these standards, judicial case management—including directions for concurrent evidence under Practice Direction 35—could require early case conferences at which experts jointly declare intended analytical methods. This “methodological pre-clearance” would prevent tactical selection of favourable techniques and reduce later evidentiary conflict. Comparative experience shows that when experts collaborate on framing the analytical architecture, cross-examination focuses on substance rather than semantics—enhancing efficiency and reliability alike.
4.3. Judicial Capacity-Building and Evidentiary Literacy
A third reform pillar is judicial capacity. As Section 3 demonstrated, judges often face epistemic dependence when evaluating digital or algorithmic evidence. Strengthening evidentiary literacy is therefore essential to sustain institutional reliability.
We propose a Judicial College Module on Construction Evidence delivered annually in partnership with technical institutions such as the Institution of Civil Engineers and RICS.
The module would integrate case-based learning on delay analysis, digital-model validation, and concurrent-evidence management. This initiative would extend the Judicial College’s 2023 pilot workshops on expert evidence, formalising technical training within continuing judicial education.
Comparative evidence reinforces the efficacy of such initiatives. Australian courts attribute the success of concurrent-evidence procedures partly to sustained judicial training and peer-learning networks [24].
Singapore likewise integrates evidentiary workshops for judges and accredited experts under the auspices of the Singapore Academy of Law. Similar cross-professional training in the UK would promote shared literacy between bench and bar, reducing cognitive asymmetry and facilitating more informed procedural discretion.
Judicial capacity-building also entails resourcing. The appointment of court scientific advisers—used intermittently in the TCC, and analogous to assessors under CPR r.35.15—could be regularised for high-complexity disputes involving digital or forensic data. These advisers would not determine facts but would assist the court in understanding technical submissions, functioning analogously to amicus curiae. Institutionalising such roles would enhance transparency and support the judiciary’s interpretive independence without compromising adversarial balance.
4.4. Post-Trial Feedback and Data-Driven Oversight
The fourth reform area concerns the system’s capacity for learning. At present, England and Wales lack mechanisms for institutional memory regarding expert performance. Feedback is ad hoc, informal, and rarely recorded. Introducing structured post-trial evaluation would convert episodic experience into regulatory intelligence.
We recommend a Judicial Feedback Protocol modelled on Australia’s Federal Court Expert Evidence Register. After each case, the presiding judge would complete a short confidential report evaluating the expert’s clarity, methodological transparency, and adherence to duty. Aggregated data—anonymised and managed by the Co-Regulatory Council—would inform accreditation renewal, professional development, and research.
In addition, an annual Expert Evidence Review could synthesise case-law trends, feedback data, and practitioner surveys to identify systemic weaknesses. Publishing these findings through the Civil Justice Council would promote transparency and continuous improvement. Over time, such feedback loops would create a virtuous cycle of accountability and learning, transforming reliability from a static principle into a dynamic institutional practice.
Digital technologies could enhance this oversight function. A secure, AI-assisted registry could flag recurring procedural issues, such as inconsistent disclosure or data-handling errors, while preserving confidentiality. This approach aligns with the traceability and human-oversight requirements that the EU Artificial Intelligence Act (2024) [35] imposes on algorithmic decision-support systems, ensuring that technological tools augment rather than obscure evidentiary scrutiny.
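As a purely illustrative sketch of the kind of aggregation such a registry might perform—all issue codes, field names, and the threshold are assumptions, not features of any existing system:

```python
from collections import Counter

def flag_recurring_issues(reports, threshold=3):
    """Count anonymised issue codes across judicial feedback reports
    and flag any issue appearing at least `threshold` times."""
    counts = Counter(issue for report in reports for issue in report["issues"])
    return sorted(issue for issue, n in counts.items() if n >= threshold)

# Example anonymised feedback reports (hypothetical issue codes).
reports = [
    {"case": "A", "issues": ["incomplete-disclosure", "late-joint-statement"]},
    {"case": "B", "issues": ["incomplete-disclosure"]},
    {"case": "C", "issues": ["incomplete-disclosure", "data-handling"]},
]
print(flag_recurring_issues(reports))  # ['incomplete-disclosure']
```

The human-oversight point in the text is preserved by design: the registry only surfaces patterns for review; any consequence for accreditation would remain a decision of the proposed Co-Regulatory Council.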
4.5. Integrative Reform: Toward Institutional Reliability
These four pathways—accreditation, methodological standardisation, judicial training, and feedback—should not be viewed as discrete reforms but as components of an integrated ecosystem. Each addresses a distinct vulnerability within the evidentiary chain:
- Accreditation secures the input quality of experts.
- Standardisation governs the process of analysis and disclosure.
- Capacity-building enhances the adjudicative interface.
- Feedback ensures output accountability and system learning.
Collectively, they operationalise what Edmond [9] calls epistemic governance: the institutional alignment of expertise, procedure, and public accountability. Implementation would not require primary legislation; rather, it could proceed through Civil Procedure Rule amendments, Practice Directions, and inter-institutional memoranda between the judiciary and professional bodies.
Empirical experience supports incremental reform. Interviewees emphasised that even modest procedural adjustments—such as mandatory methodological declarations or routine judicial feedback—would markedly improve transparency. As one practitioner noted, “When everyone knows their work will be reviewed, independence stops being aspirational—it becomes procedural.” This cultural shift represents the most profound reform of all: the normalisation of reliability as a collective procedural responsibility rather than an individual ethical virtue.
4.6. Implications for the TCC and Beyond
Implementing these reforms would strengthen the TCC’s international reputation as a forum for technically complex disputes. By institutionalising reliability, the court would move closer to the procedural coherence exemplified by Singapore’s hybrid system and Australia’s concurrent-evidence model. The reforms would also complement broader judicial modernisation efforts, including digital-case management and AI-assisted document review, by ensuring that evidentiary innovations remain anchored in principles of transparency and accountability.
Beyond construction, these proposals have cross-sectoral relevance for financial, environmental, and digital-forensics litigation, where expert testimony increasingly determines outcomes. Embedding institutional reliability within the evidentiary framework would thus represent a structural contribution to civil-justice reform more broadly, advancing the rule of law through procedural integrity.
4.7. Summary of Reform Pathways
Institutional reliability is neither accidental nor self-executing; it must be designed, monitored, and continually renewed. The four pathways proposed here translate the normative aspirations of CPR Part 35 into practical mechanisms of governance. Through co-regulation, transparency, judicial literacy, and feedback, the TCC can transform expert evidence from a site of contestation into a model of epistemic accountability. In doing so, the English civil-justice system would reaffirm its global leadership in procedural innovation—demonstrating that reliability, once conceived as a matter of professional conscience, can be secured through the architecture of law itself.
5. Conclusions and Future Research Directions
This study has examined the reliability of expert evidence in construction litigation in England and Wales through an integrated doctrinal, empirical, and comparative lens. Our analysis shows that the current system—anchored in Part 35 of the Civil Procedure Rules—continues to privilege adversarial logics and professional autonomy over institutional accountability. Judicial expectations of independence, methodological consistency, and transparency are often frustrated by structural incentives that reward partiality and rhetorical skill.
By combining legal texts, practitioner insights, and comparative models from Australia, Singapore, and international arbitration, we have reframed reliability as an institutional rather than a moral property. The findings reveal four systemic deficits: (1) adversarial distortion of expert independence; (2) methodological incoherence in delay and quantum analysis; (3) uneven judicial capacity to evaluate complex technical material; and (4) procedural laxity in monitoring expert performance. These deficits interact to create an evidentiary environment in which accuracy depends more on professional culture and commercial dynamics than on procedural safeguards.
5.1. From Ethical Virtue to Institutional Design
The reforms proposed in Section 4 respond directly to these challenges. We argue that institutional reliability can be operationalised through four interdependent mechanisms: accreditation, methodological standardisation, judicial capacity-building, and feedback-based oversight. Together, these measures realign procedural incentives with epistemic integrity. The resulting model shifts responsibility for reliability from individual conscience to system design—embodying what Edmond [9] terms epistemic governance.
This theoretical shift carries normative significance. It positions reliability not as an aspirational virtue but as a procedural entitlement of the parties and the public. In this view, the duty of the expert is matched by a reciprocal duty of the court and the profession to sustain the institutional conditions that make impartial expertise possible. Such an approach resonates with contemporary movements in evidence law, where reliability is conceived as a collective, regulable quality akin to due process or open justice [10,12].
5.2. Broader Implications
Institutionalising reliability within the Technology and Construction Court would have implications extending beyond construction disputes. As litigation increasingly involves digital models, algorithmic tools, and cross-disciplinary expertise, evidentiary governance must adapt to new epistemic risks. Transparent methodologies, accredited experts, and informed judges will be essential to maintaining public confidence in technologically mediated adjudication.
The proposed reforms also align with wider regulatory trends. The EU Artificial Intelligence Act 2024 [35] requires traceability and human oversight of algorithmic systems—principles equally relevant to AI-assisted evidence in court. Integrating these standards into the TCC’s procedural framework would ensure that technological innovation enhances rather than undermines accountability. In this sense, construction litigation becomes a testbed for evidentiary reform in the digital age.
5.3. Limitations
While our mixed-methods approach enables triangulation between doctrinal principles and professional experience, its qualitative scope is necessarily limited. The interview sample, though diverse in expertise, is not statistically representative of the construction-law profession. Further empirical work involving judges, clients, and institutional stakeholders would enrich understanding of how reliability is perceived and enacted across the litigation ecosystem. Similarly, the comparative analysis, though systematic, was confined to three jurisdictions; expanding this to include civil-law systems such as Germany or the Netherlands could yield valuable contrasts in evidentiary regulation.
5.4. Future Research Directions
Future research should pursue three lines of inquiry.
First, quantitative mapping of expert-evidence outcomes could identify correlations between methodological approaches, case complexity, and judicial criticism—creating an empirical baseline for monitoring reform impact.
Second, comparative institutional ethnographies of courts employing concurrent evidence (e.g., New South Wales, Singapore) would clarify how procedural culture interacts with legal design.
Third, interdisciplinary collaboration with cognitive psychology and data science could inform the development of decision-support tools that assist judges in evaluating probabilistic or algorithmic evidence without compromising procedural fairness.
These avenues would advance a broader research agenda on institutional epistemology—the study of how legal systems create, manage, and legitimate expert knowledge. Such an agenda has increasing urgency as evidentiary practice becomes entwined with digital and computational technologies.
5.5. Closing Reflection
Reliability in expert evidence cannot be legislated into existence through abstract rules; it must be continually produced through the interaction of law, expertise, and institutional design. By re-centring reliability as a systemic objective rather than a personal virtue, the English civil-justice system can evolve from reactive case management to proactive epistemic governance. The reforms proposed here—accreditation, standardisation, capacity-building, and feedback—represent incremental yet transformative steps toward that goal.
In sum, the pursuit of reliable expertise is inseparable from the pursuit of justice itself. When the evidentiary foundations of judicial reasoning are sound, the legitimacy of adjudication follows. Ensuring that soundness is not merely the task of experts or judges but the collective responsibility of the legal system as a whole.
Funding
This research received no external funding.
Institutional Review Board Statement
The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Department of Architecture Ethics Committee of the University of Strathclyde (REF DEC 025-043, 24 February 2025).
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Coulson, P. The Technology and Construction Court: Practice and Procedure; Sweet & Maxwell: London, UK, 2006. [Google Scholar]
- Gould, N. Experts: The Rules, Recent Developments and Good Practice; Fenwick Elliott Papers; Fenwick Elliott LLP: London, UK, 2012. [Google Scholar]
- Civil Procedure Rules (England and Wales) 1998, as amended. Part 35: Experts and Assessors; Practice Direction 35: Experts and Assessors; Ministry of Justice: London, UK. Available online: https://www.justice.gov.uk/courts/procedure-rules/civil/rules/part35 (accessed on 18 November 2025).
- Van Oord UK Ltd v Dragados UK Ltd [2021] EWHC 774 (TCC).
- Imperial Chemical Industries Ltd v Merit Merrell Technology Ltd [2017] EWHC 1763 (TCC).
- Jones v Kaney [2011] UKSC 13.
- Hughes, K. The abolition of expert witness immunity. Camb. Law J. 2011, 70, 516–518. [Google Scholar] [CrossRef]
- Uff, J. Expert Immunity and the Impact of Kaney. Constr. Law J. 2012, 28, 15–22. [Google Scholar]
- Edmond, G. The Admissibility of Expert Evidence. Oxf. J. Leg. Stud. 2019, 39, 142–168. [Google Scholar] [CrossRef]
- Mnookin, J.L.; Cole, S.A.; Dror, I.E.; Fisher, B.A.; Houck, M.M.; Inman, K.; Risinger, D.M. The Need for a Research Culture in the Forensic Sciences. UCLA Law Rev. 2011, 58, 725–779. [Google Scholar]
- Jasanoff, S. Science at the Bar: Law, Science and Technology in America; Harvard University Press: Cambridge, MA, USA, 1995. [Google Scholar]
- Genn, H. Judging Civil Justice; Cambridge University Press: Cambridge, UK, 2009. [Google Scholar]
- Zuckerman, A.A.S. Civil Justice in Crisis: Comparative Perspectives of Civil Procedure; Oxford University Press: Oxford, UK, 2006. [Google Scholar]
- Zweigert, K.; Kötz, H. An Introduction to Comparative Law, 3rd ed.; Oxford University Press: Oxford, UK, 1998. [Google Scholar]
- Doshi-Velez, F.; Kim, B. Towards a Rigorous Science of Interpretable Machine Learning. arXiv 2017, arXiv:1702.08608. [Google Scholar] [CrossRef]
- Kravtsov, S. Online Dispute Resolution—From Origins to the Present. Jurid. Trib. Rev. Comp. Int. Law 2024, 14, 243–259. [Google Scholar] [CrossRef]
- Banakar, R. Socio-Legal Methodology and the Construction of the Object of Research; Routledge: London, UK, 2011. [Google Scholar]
- Mason, A. Judicial Reasoning and Expert Evidence in Complex Construction Disputes. Constr. Law J. 2017, 33, 355–369. [Google Scholar]
- Finlay, L. Negotiating the Swamp: The Opportunity and Challenge of Reflexivity in Research Practice. Qual. Res. 2002, 2, 209–230. [Google Scholar] [CrossRef]
- Cotterrell, R. The Sociology of Law: An Introduction; Oxford University Press: Oxford, UK, 2018. [Google Scholar]
- Barton, C. Expert Evidence, Professional Integrity and the Adversarial System. Leg. Stud. 2019, 39, 255–274. [Google Scholar] [CrossRef]
- Champion, R. E-Litigation in the Technology and Construction Court: An Expert’s Experience; Society of Construction Law Papers, D122; SCL International: London, UK, 2011. [Google Scholar]
- McClellan, P. New Method with Experts-Concurrent Evidence. J. Ct. Innov. 2010, 3, 259. [Google Scholar]
- Freckelton, I.; Selby, H. Expert Evidence: Law, Practice, Procedure and Advocacy; Thomson Reuters: Sydney, Australia, 2016. [Google Scholar]
- Supreme Court of Singapore. Practice Directions: Expert Evidence, Licensing and Accreditation of Court Experts; Singapore Academy of Law (SAL): Singapore, 2011. [Google Scholar]
- Ashford, P. The IBA Rules on the Taking of Evidence in International Arbitration: A Guide; Cambridge University Press: Cambridge, UK, 2013. [Google Scholar]
- CIArb. Protocol for the Use of Party-Appointed Expert Witnesses in International Arbitration; Chartered Institute of Arbitrators: London, UK, 2007; Available online: https://www.ciarb.org/media/zvijl3kx/7-party-appointed-and-tribunal-appointed-expert-witnesses-in-international-arbitration-2015.pdf (accessed on 18 November 2025).
- Webster, R. What Does Lived Experience Work in Criminal Justice Feel Like? Russell Webster Blog. 26 October 2021. Available online: https://www.russellwebster.com/what-does-lived-experience-work-in-criminal-justice-feel-like/ (accessed on 18 November 2025).
- ACI Operations Pty Ltd v McConnell Dowell Constructors Pty Ltd [2014] NSWSC 1779.
- Society of Construction Law (SCL). Delay and Disruption Protocol; SCL: London, UK, 2017. [Google Scholar]
- RICS. Surveyors Acting as Expert Witnesses: Practice Statement and Guidance Note; RICS: London, UK, 2011. [Google Scholar]
- Kahneman, D. Thinking, Fast and Slow; Penguin: London, UK, 2011. [Google Scholar]
- Rachlinski, J.J.; Wistrich, A.J. Judging the Judiciary by the Numbers. Annu. Rev. Law Soc. Sci. 2017, 13, 203–229. [Google Scholar] [CrossRef]
- National Research Council. Strengthening Forensic Science in the United States: A Path Forward; National Academies Press: Washington, DC, USA, 2009. [Google Scholar]
- EU Artificial Intelligence Act. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on Artificial Intelligence (AI Act). Off. J. Eur. Union 2024, L1689, 1–144. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689 (accessed on 18 November 2025).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).