Article

A Multi-Domain Collaborative Framework for Practical Application of Causal Knowledge Discovery from Public Data in Elite Sports

1 Sports Training Research Center, China Institute of Sport Science, Beijing 100061, China
2 Chinese National Race-Walking Team, Chinese Athletics Association, Beijing 100061, China
3 Sports Artificial Intelligence Research Institute, Capital University of Physical Education and Sports, Beijing 100191, China
4 Exercise Physiology Research Center, China Institute of Sport Science, Beijing 100061, China
* Author to whom correspondence should be addressed.
Appl. Syst. Innov. 2026, 9(2), 43; https://doi.org/10.3390/asi9020043
Submission received: 28 December 2025 / Revised: 4 February 2026 / Accepted: 6 February 2026 / Published: 14 February 2026
(This article belongs to the Special Issue Recent Developments in Data Science and Knowledge Discovery)

Abstract

In elite sports, discovering interdisciplinary causal relationships from public data is critical for gaining a competitive edge. However, the causal knowledge required for these practices is difficult to obtain through either existing intervention-based sports science methods or computational techniques focused on statistical association. This paper formalizes a multi-domain collaborative framework involving three roles: (1) the elite sports team; (2) the sport science expert; and (3) the causal inference expert. Our nine-step workflow, which processes the three core elements of problem, data, and computing, guides these experts through a cycle that systematically transforms practical problems into computational models and, crucially, translates complex analytical outputs back into actionable strategies. The framework also introduces a dual-dimensional “field evaluation” method, encompassing both process and outcome, to quantify the trustworthiness of knowledge in practical settings where a “gold standard” is absent. This framework was applied in an illustrative case study prior to the Paris 2024 Olympics, providing one additional evidence-informed input for the national team. The observed success was interpreted as contextual consistency rather than causal validation. The framework thus enables the practical application of causal discovery in elite sports, offering a repeatable and explainable pathway for generating credible, evidence-based insights from public data for elite sports decision-making.

1. Introduction

In the high-stakes world of elite sports, the margin between victory and defeat is infinitesimally small. This was vividly demonstrated at the Paris 2024 Olympics, where the men’s 100 m champion was determined by a mere five-thousandths of a second, while in the women’s 20 km race walk, world record holder Yang Jiayu finally secured her first Olympic gold just 25 s ahead of the silver medalist after nearly 90 min of grueling competition. The quest to find this competitive edge, formerly the domain of intuition, is now being revolutionized by advancements in AI. With the development of smart sports monitoring [1,2] and big data analytics, teams and athletes are increasingly turning to data-driven decision-making to gain a competitive edge [3,4]. However, a crucial distinction must be made between training and competition environments. While teams collect extensive proprietary data during training, major events often prohibit self-monitoring. This makes the official data publicly released by federations—from competition results to broadcast footage—the most critical, and sometimes the only accessible, dataset. The central challenge in finding a competitive edge, therefore, is transforming this public data into credible, actionable knowledge, i.e., uncovering the underlying causal relationships that drive performance, a task essential to devising winning strategies.
Yet, this pursuit of causal knowledge from public data reveals a significant gap between existing methodologies. On the one hand, traditional sports science relies on intervention-based methods like randomized controlled trials (RCTs), which are considered the gold standard for causal-knowledge acquisition [5]. However, these methods are often time-consuming, ethically complex, and limited to a small number of controlled variables and, in most cases, a single discipline. In elite sports, the quest to find a competitive edge is usually urgent. The small margin between victory and defeat often lies at the intersection of multiple disciplines, like environment [6], public health [7], and even socio-economics [8], creating a gap between interventional studies in labs and real-world competitions that can only be observed [9]. On the other hand, conventional computational techniques, such as machine learning and statistical analysis, excel at identifying patterns and associations within large datasets [10]. Yet, they primarily answer “what” is happening (correlation) rather than “why” it is happening (causation) [11]. In the high-stakes world of elite sports, for a coach or athlete, the path to success must be crystal clear. A strategy based on a spurious correlation can be ineffective or even detrimental, making the “black box” nature of many predictive models a barrier to trust and adoption in the practice of elite sports [12].
Causal discovery is a causal inference technique that uncovers causal paths and directions from unordered observational data by automatically generating directed acyclic graphs (DAGs) [13]. Accordingly, it has been described in a growing number of top sports journals in recent years as a promising approach for causal knowledge acquisition [14,15,16]. Yet, in contrast to these theoretical prospects, as of August 2025, no practical application of causal discovery in sports had been published, particularly in elite sports. The barriers lie not only in the gap between sports practice and scientific research but also in the gap between the intervention-based methodology of sports science and the computation-based methodology of data science. These gaps manifest in various aspects, including problem formulation, data sources, method selection, result implementation, and, last but certainly not least, the evaluation criteria.
To bridge these gaps, pave the way for generating credible and evidence-based insights from public data, and enhance trust and efficacy in elite sports decision-making, this paper formalizes a multi-domain collaborative framework to discover causal knowledge from public data. This work contributes primarily in two ways:
  • An Actionable Framework: We introduce a structured methodology that orchestrates the collaboration between three key roles: the ‘elite sports team’, the ‘sport science expert’, and the ‘causal inference expert’. Through a nine-step workflow, a practical problem is translated into a causal computing problem, and the computational outputs are eventually translated back into practical strategies. We also propose a dual-dimensional field evaluation for the practical application of causal discovery in elite sports, encompassing both outcome evaluation and process evaluation.
  • Demonstrated Real-World Application: We showcase the framework’s adaptability with an exploratory case study in which the framework was used to discover the causal impact of ambient temperature on 20 km race-walking performance. We present the real-life collaboration between experts from the three domains in each of the nine steps to provide one additional evidence-informed input for the national team’s cooling strategy at the Paris 2024 Games. As of December 2025, no original article applying causal discovery algorithms to sport could be identified in Web of Science, IEEE Xplore, ACM DL, or Scopus (keywords: ‘causal discovery’ and ‘sport *’).

2. Related Work

Causality, although now often tacitly assumed by sports scholars to be equivalent to positive results from RCTs, has only been established as the gold standard in its parent discipline of medicine for about the past 50 years [5]. Throughout the long history of philosophy and in recent AI technology, causality has been defined in more generalized terms. Therefore, to address causality issues in elite sports, it is necessary to first clarify exactly what causality we are seeking [17]. Philosophical discussions on causality date back over two thousand years, with Aristotle in Metaphysics interpreting it as “explanatory factors” and categorizing them into four types [18,19]: the substance (essence, form, or pattern), the matter or substratum, the source of change, and the purpose or the good. Among these, the third category of cause—“the source of change”—corresponds to RCT and represents the narrow concept of cause as understood by Bacon, Galileo, and Newton over the past three hundred years [20,21,22].
However, in sports, mental factors such as goals and tactics should instead be interpreted as the purpose or the good, thereby transcending the scope of RCT [23,24]. Therefore, in elite sports, the causality discovered through causal discovery is of a broader nature—namely, Aristotle’s “explanatory factors.”
Causal inference emerged decades ago to acquire causal knowledge computationally outside of experiments. Regarding causal inference, there are three main categories of techniques across fields, all of which have been applied in sports science. The first is the regression or the structural equation model (SEM) from econometrics [25,26], which tests manually specified causal hypotheses using quasi-natural experiments and statistical tests, e.g., difference-in-differences for evaluating the effect of the Olympics on children’s physical activity [27]. The second is Rubin’s causal model (RCM), from epidemiology [28], which also extends the real interventions to virtual ones and makes them more suitable for addressing problems in elite sports where RCTs cannot be practically implemented, like the causal effect of timeouts on outcomes in NBA matches [29]. And the third is the structural causal model (SCM) from computer science [30]. SCM no longer limits itself to simulating interventions but directly attempts to trace the data production process; it has increasingly been applied in recent years in many sciences, including sports [14,15,16]. Unlike earlier frameworks, it unifies all categories of causality under the concept of “difference making” [31], with its causal philosophy aligning more closely with Aristotle’s “explanatory factors.”
Specifically, the SCM not only verifies the existence and strength of causal relationships like the two aforementioned methods but also directly identifies the direction and pathways of “difference making” within disordered data, which are precisely what coaches and athletes require for making decisions in elite sports practice. Therefore, in elite sports, the causality discovered through causal discovery represents the directions and pathways that make a difference in the data generation process.
Causal discovery is the technique proposed in the SCM to identify causal pathways. As the basis of the SCM, causal discovery uncovers causal directions and paths from unordered observational data by automatically generating a directed acyclic graph (DAG). As the only intelligent technique capable of substituting human experts in uncovering causal directions and pathways, causal discovery has been the subject of numerous reviews [13,32,33,34]. Among them, the taxonomy proposed by Zhang et al. stands out for its clarity and practical value, dividing causal discovery algorithms into three distinct classes, each driven by different philosophical logics and suited for different application scenarios, as Table 1 shows [13].
(1)
Constraint-based methods searching for “difference making”: PC treats causality as a set of conditional independence facts implied by d-separation [35], while FCI generalizes PC by allowing unknown hidden common causes denoted as bidirectional edges [36]. They need only reliable independence tests to return a Markov equivalence class, so they scale to tens of thousands of genes in sparse regulatory networks.
(2)
Score-based approaches to maximize “explanatory factors”: GES recasts the task as greedy maximization of a score over equivalence classes, so humans cannot easily trace the “why”; when used alone, these methods assume no confounders [37]. Its speed advantage outweighs the slight loss of interpretability, making GES the default first-cut tool in high-dimensional genomics.
(3)
Functional causal models exploiting the causal asymmetry proposed by Hume, Russell, and Lewis [38,39,40]: LiNGAM, ANM, and PNL encode the physical generative process (Y = f(X, ε) with ε ⊥⊥ X [41]); the ensuing invertibility constraint lets them orient every edge and output a full DAG, making them ideal for small protein-signaling pathways or fMRI effective connectivity graphs where directionality is crucial.
Evaluation of paradigms for causal discovery encompasses two distinct scenarios. The first involves situations where ground truth is available, allowing performance to be assessed using metrics such as precision, recall, and F1 score. The truly challenging scenario, however, is the second one. In the absence of ground truth, validating the computational output of causal discovery is crucial for building trust in practical applications. Existing methods can check the stability of the discovered causal graph using procedures such as bootstrap repeats [42]. Sensitivity analysis can also be employed to assess how robust the conclusions are to potential unmeasured confounding, providing bounds on the possible outcomes [43].
As mentioned above, solutions to date have only been able to verify the model itself [44], not the trustworthiness of the knowledge discovered in the results. But what matters in elite sports is the trustworthiness of the new knowledge, rather than the mere absence of modeling error.
Explainable AI, also known as XAI, is a critical focus for trustworthy, usable, and reliable artificial intelligence, and it is also key to the practical application of AI technology in elite sports. Most existing XAI methods rely on techniques like game theory and probing to provide post hoc explanations for machine learning outcomes [45]. By incorporating causal discovery, the interpretability of the machine learning process itself is enhanced [46]. An explainable process is more comprehensible than an explainable result, and it enhances trustworthiness.
For applying causal inference techniques in elite sports, explainable processes must be computational, sports science-based, and practical, necessitating interdisciplinary explainable collaboration.

3. The Framework

We propose an interdisciplinary collaboration framework to achieve explainability in causal discovery applications for elite sports and thereby enhance trustworthiness. As Figure 1 shows, the center of the framework is the multi-domain collaborative team which includes three key roles:
(1)
The Elite Sports Team: The coach, the athletes, and the managers are the practice and problem owners. They raise practical problems about unknown knowledge, yet are reluctant to trust experimental data from other populations or computations whose rationale remains unclear.
(2)
The Sport Science (SS) Expert: The sports biomechanics expert, the sports physiology expert, and so on, are the known domain knowledge holders, who can translate practical problems into scientific problems, yet lack clues to new knowledge required for the team.
(3)
The Causal Inference (Ci) Expert: They are the computational method owner, yet they struggle with insufficient understanding of application problems and domain knowledge. This is also the primary reason why most data science research fails to be practically applied in elite sports.
Clearly, all the prerequisites for discovering new knowledge are already available: the problem, the knowledge, and the methods are simply held by different roles across distinct domains. All that is needed is to design a proper workflow and, through effective cross-domain collaboration, gradually convert the problem into a solution. This process involves the following nine steps:
Step 1: The Practical Problem: The elite sports team (coaches, athletes, and managers) initiates the cycle by identifying and articulating a critical performance-related challenge from their real-world experience. This process transforms a general, on-the-ground observation (e.g., “we consistently lose momentum in the final phase of a race”) into a well-defined practical problem that requires a solution.
Step 2: The Academic Problem: The sport science (SS) expert takes the practical problem and reframes it using their specialized domain knowledge in areas like biomechanics or physiology. They translate the coach’s qualitative question into a structured, researchable academic problem with measurable variables (e.g., “What is the causal relationship between torso stability and late-race velocity decay?”).
Step 3: The Computing Problem: Collaboratively, the SS expert and the causal inference (CI) expert define the technical pathway to answer the academic question. They transform the abstract academic problem into a specific, solvable computing problem by determining the exact data required, the analytical models to be used, and the algorithms needed for the analysis.
Step 4: The Collected Data: The CI and SS experts work together to execute the data acquisition plan outlined in the previous step. This action transforms the theoretical computing problem and its data requirements into a tangible, collected raw dataset that serves as the foundation for the entire analysis.
Step 5: The Validated Data: The CI expert takes the sole responsibility for rigorously cleaning, pre-processing, and validating the raw data. This critical quality control step transforms the potentially noisy and incomplete collected data into a high-quality, reliable, and validated dataset that is suitable for sophisticated causal analysis.
Step 6: The Selected Computing: Drawing on their methodological expertise, the CI expert evaluates various analytical tools and selects the most appropriate causal inference algorithm or statistical model for the specific research question and dataset. This decision transforms a broad portfolio of potential analytical methods into a single, carefully selected computing strategy.
Step 7: The Implemented Computing: The CI expert applies the chosen computational strategy to the validated dataset, running the analysis to uncover underlying patterns and relationships. This execution phase transforms the data and selected algorithms into quantitative computational results, such as statistical coefficients, p-values, or, most fundamentally, DAGs.
Step 8: The Interpreted Computing: The CI and SS experts collaborate again to interpret the meaning of the computational outputs. The CI expert explains the statistical significance, while the SS expert translates it into a real-world sports context, transforming the abstract numerical results into a scientifically sound and practically relevant causal insight.
Step 9: The Solved Problem: Finally, the actionable insight is presented back to the elite sports team, who then integrates this new knowledge into their training protocols or competition tactics. This final step transforms the validated scientific insight into a practical implemented solution, which directly addresses the initial problem and completes the collaborative cycle.
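For teams that want to keep the cycle auditable, the nine steps and their responsible roles can be recorded in a simple checklist structure. The following Python sketch is purely illustrative (names are ours); the score field anticipates the per-step trustworthiness scores si used in the field evaluation described below.

```python
from dataclasses import dataclass

TEAM, SS, CI = "Elite Sports Team", "Sport Science Expert", "Causal Inference Expert"

@dataclass
class Step:
    name: str
    executors: tuple            # which of the three roles act at this step
    score: int = 0              # per-step trustworthiness score s_i in [-2, +2]

WORKFLOW = [
    Step("Practical Problem", (TEAM,)),
    Step("Academic Problem", (SS,)),
    Step("Computing Problem", (SS, CI)),
    Step("Collected Data", (SS, CI)),
    Step("Validated Data", (CI,)),
    Step("Selected Computing", (CI,)),
    Step("Implemented Computing", (CI,)),
    Step("Interpreted Computing", (SS, CI)),
    Step("Solved Problem", (TEAM,)),
]
```

A structure like this makes explicit that the cycle starts and ends with the elite sports team, while the middle steps alternate between joint and single-expert responsibility.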
In elite sports, the field requirement is to evaluate the trustworthiness of the knowledge itself before adopting it in practice. In fact, as a field where practice has consistently outpaced science, many accomplished coaches and athletes remain unaware of the knowledge underlying their achievements. Yet, they manage to adopt a trustworthy pathway or process to success before games and turn it into an even more trustworthy experience afterward through successful outcomes. Taking this as a reference, we formally define “field evaluation” for causal discovery trustworthiness in the practice of elite sports and other practical fields where RCTs, standard datasets, and even ground truth are unavailable. This evaluation is conducted along two dimensions:
(1)
Process-Based Evaluation: Although it lacks the statistical evaluation measures of traditional methods, causal discovery offers greater interpretability in its computational process. Referencing the criteria coaches and athletes use to adopt knowledge before a competition, cross-domain teams can evaluate the trustworthiness of knowledge at each step and ultimately determine the overall trustworthiness of the entire knowledge discovery process.
(2)
Outcome-Based Evaluation: Without ground truth from RCTs or standard datasets, practice serves as a source of practical feedback for evaluating knowledge utility. Referencing how coaches update beliefs based on post hoc performance, positive outcomes indicate contextual consistency—compatibility with successful performance under specific conditions—rather than universal causal validation. If adopting causally discovered knowledge yields positive outcomes, the knowledge’s practical utility score should increase; conversely, it should decrease.
To formalize this dual-dimension evaluation, we designed a heuristic and decision-support oriented “field evaluation trustworthiness score,” denoted as T. This score yields a value between 0 and 1, with a default of 0.5 representing neutral confidence (i.e., as likely to be trustworthy as not). The score integrates both process-based and outcome-based evaluations. The score T consists of two main components: a process trustworthiness score, denoted as Tp, and an outcome experience factor, represented by A0.
(1)
Process-Based Trustworthiness Score (Tp): This component quantifies the intrinsic trustworthiness of the knowledge discovery process itself, independent of its practical outcomes. si represents the trustworthiness score for each of the nine steps in the framework. It is assessed by the collaborative team and ranges from −2 (indicating a significant flaw or untrustworthy action) to +2 (indicating a highly rigorous and transparent action), with a default value of 0 (step completed but without special validation). The sum of these scores, Σsi, ranges from −18 to +18. Dividing this sum by 36 normalizes it to a scale of −0.5 to +0.5, with a default of 0. This normalized value is then added to a baseline of 0.5, ensuring that Tp is bounded between 0 (a fundamentally flawed process) and 1 (a maximally trustworthy process), with a default of 0.5, as shown in Equation (1):
Tp = (1/36) · Σ(i=1..9) si + 0.5    (1)
Ideally, this evaluation should be conducted by an independent, multi-disciplinary team of SS and CI experts who were not involved in the project, following a structured consensus methodology like the Delphi consensus protocol [47]. When such independent evaluation is not feasible, as in the cases presented in this paper, a structured self-assessment by the project team should be conducted and explicitly reported. Each step is scored on a scale of −2 (e.g., flawed, unjustified) to +2 (e.g., excellent, rigorous), with 0 representing a neutral or standard execution. This scoring system, centered at zero, aligns with classic methods for subjective quality assessment, such as the mean opinion score (MOS), as it mirrors the human cognitive process of judging against a neutral baseline [48].
(2)
Outcome experience factor (A0) dynamically adjusts the score based on empirical results. This factor is calculated by accumulating the number of positive and negative outcomes. Each positive outcome increases the final trustworthiness score without it ever exceeding 1, while each negative outcome decreases it without it ever falling below 0. Thus, the outcome experience factor serves as an expansion factor for the process trustworthiness score, keeping the final score bounded between 0 and 1. In addition, to control the speed of this adjustment, we introduce γ (gamma), a sensitivity parameter (a positive constant with a default of 1) that controls how quickly the metric responds to new outcomes. A smaller γ makes the model more conservative, requiring more evidence for significant adjustments, while a larger γ allows for a more rapid response.
These two components are then integrated into the final trustworthiness score, T, using a logistic function. The process score, Tp, establishes baseline trustworthiness of the discovered causal knowledge before its first adoption, while the outcome factor A0 serves as a heuristic confidence update, reflecting practical utility in the observed context without claiming to validate the causal graph’s universal correctness. It pushes the final score towards the extremes of 1 (certainty of trustworthiness) or 0 (certainty of untrustworthiness). The complete formula for the overall score is given in Equation (2):
T = 1 / (1 + ((1 − Tp) / Tp) · e^(−γ·A0))    (2)
This score, thereby, provides a holistic and dynamic measure, starting with the procedural integrity of the knowledge discovery and iteratively refining it based on real-world performance, mirroring the way trust is practically built and evaluated in elite sports.
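Equations (1) and (2) translate directly into code. The following standard-library Python sketch is a minimal illustration; the function and variable names are ours, not part of the framework specification.

```python
import math

def process_score(step_scores):
    """Equation (1): Tp = (1/36) * sum(s_i) + 0.5, for nine scores in [-2, +2]."""
    assert len(step_scores) == 9 and all(-2 <= s <= 2 for s in step_scores)
    return sum(step_scores) / 36 + 0.5

def trustworthiness(step_scores, a0, gamma=1.0):
    """Equation (2): logistic adjustment of Tp by the outcome experience factor A0.
    a0 > 0 (accumulated positive outcomes) pushes T toward 1; a0 < 0 toward 0."""
    tp = process_score(step_scores)
    if tp in (0.0, 1.0):          # process already certain; logistic form degenerates
        return tp
    return 1 / (1 + ((1 - tp) / tp) * math.exp(-gamma * a0))
```

With a neutral process (all si = 0) and no outcomes (A0 = 0), T = Tp = 0.5; each positive outcome then pushes T toward 1 and each negative outcome toward 0, at a rate controlled by γ.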

4. A Case of Olympic Application

To demonstrate the practical value and effectiveness of our proposed interdisciplinary collaborative framework, we present a case study from the domain of elite sports: investigating the causal impact of ambient temperature and humidity on the performance of elite 20 km race walkers. This task was proposed just two months before the 2024 Paris Olympics, making it extremely time-critical. At the time, the race walking events in Paris were anticipated to take place in high temperatures, and the national team urgently needed scientific knowledge about the impact pathways between ambient temperature and race performance to develop fine-grained cooling and pacing strategies. However, within such a tight timeframe, traditional intervention-based experiments were infeasible, and the only available data were historical records from past major competitions. The complexity, high stakes, and urgent need for interdisciplinary knowledge in this scenario made it an ideal case for validating our framework.
The Collaborative Team: Three Roles
In accordance with our framework’s design, we assembled an interdisciplinary collaborative team composed of three core roles to ensure that the entire process—from problem formulation to solution implementation—was practically relevant, scientifically rigorous, and computationally effective.
(1)
Elite Sports Team: This role was filled by the national race walking team, including the head coach, athletes, and administrative staff. They are the practitioners and the originators of the problem, possessing the most direct on-field experience and the most pressing practical needs. They had observed the team’s performance decline and even instances of heatstroke during competitions in hot weather (such as the 2023 Budapest World Championships), leading them to pose the core practical challenge: “How can we scientifically adjust our tactics to maintain competitiveness in high-temperature environments?”
(2)
Sport Science Expert: This role was undertaken by experts in exercise physiology and sport training from the National Institute of Sport Science. They are the holders of domain expertise, capable of translating the coaching staff’s vague “feelings” and practical issues into a researchable scientific problem. They understand the energy metabolism, thermoregulation mechanisms, and technical-tactical characteristics of race walking, enabling them to provide theoretical support and interpretation for data analysis from physiological and biomechanical perspectives.
(3)
Causal Inference Expert: This role was filled by computational scientists from the National Institute of Sport Science and the Institute of Artificial Intelligence in Sports at Capital University of Physical Education and Sports. They are the experts in computational methods, proficient in data science, statistics, and causal discovery algorithms. They were responsible for processing complex data and for selecting and implementing the most appropriate computational models to uncover the hidden causal relationships within the data, though they have limited professional knowledge of the sport of race walk itself.
The Collaborative Workflow: Nine Steps
Next, we will detail how these three roles collaborated closely within our designed nine-step workflow to progressively transform an urgent practical problem into an actionable Olympic preparation strategy.
Step 1: Formulate the Practical Problem
  • Executor: Elite sports team.
  • Specific Work: With only a few months left before the Paris Olympics, the national race walking team observed a trend of rising ambient temperatures in recent major competitions and noted that the team’s performance in hot races was unsatisfactory. They urgently needed a data-driven, scientific basis that went beyond individual coaching experience to guide athletes on how to implement more refined pacing, replenishment, and cooling strategies in the anticipated high temperatures of Paris to strive for the best possible results.
Step 2: Define the Scientific Problem
  • Executor: Sport science expert.
  • Specific Work: Upon receiving the practical problem, the sport science expert translated it into a clear scientific research question. Based on knowledge of exercise physiology (e.g., that core body temperature, skin temperature, and metabolic rate are key endogenous variables affecting endurance performance), they proposed the core scientific hypothesis: “Mediated by endogenous variables such as core temperature, skin temperature, and metabolic rate, ambient temperature and humidity have a dynamic and evolving impact on pacing strategy across different segments of a race walk.” This question shifted the research focus from vague “tactical optimization” to the precise “dynamic causal relationship between environmental conditions and split times.”
Step 3: Design the Computational Problem
  • Executors: Sport science expert and causal inference expert.
  • Specific Work: The two experts collaborated to convert the scientific problem into a computable and operational data science problem.
    Data Requirements Definition: They determined the need to collect two core types of data: (1) Performance Data: Final times and the most granular split times possible (10 km, 5 km, and 1 km) for elite athletes (top eight finishers) from past major competitions (the Olympics and World Championships) and (2) Environmental Data: Pre-race and post-race ambient temperature and humidity for the corresponding competitions.
    Analysis Pathway Planning: They planned a three-part analysis: (1) Correlation analysis, to initially verify if a relationship exists between the temperature/humidity and performance; (2) causal discovery, to uncover the causal graph among the variables; and (3) regression modeling, to quantify the magnitude of the effects.
Step 4: Collect Data
  • Executors: Sport science expert and causal inference expert.
  • Specific Work: Official result lists and on-site weather reports for seven championships (2015–2023) were downloaded from the World Athletics portal; missing meteorological values were linearly interpolated from Weather Spark (0.5° grid, center ≤ 35 km from the venue) to the local official start time (≤15 min discrepancy). Manual splits were transcribed independently by two researchers from the official on-screen timing displayed in China Central Television (CCTV) race feeds; discrepancies were resolved by a third reviewer, yielding perfect inter-rater agreement (ICC = 1.00). Measurement uncertainties (Weather Spark ± 0.6 °C, ±4% RH; timer ± 0.01 s) were recorded for later propagation. Ultimately, a dataset containing the performance and corresponding environmental data for 56 elite female and 56 elite male athletes was collected, as Table 2 shows. WC denotes World Championships, OC denotes Olympic Games, IAAF denotes International Association of Athletics Federations, CCTV denotes China Central Television, WS denotes Weather Spark, and NA indicates missing data. BT denotes ambient temperature before the event, AT denotes ambient temperature after the event, BH denotes ambient humidity before the event, and AH denotes ambient humidity after the event.
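The interpolation of gridded weather readings to the official start time amounts to simple linear interpolation between the two bracketing observations; a standard-library Python sketch follows (the times and temperatures shown are hypothetical, not values from our dataset):

```python
from datetime import datetime

def interpolate_to_start(t0, v0, t1, v1, start):
    """Linearly interpolate a weather reading (e.g., temperature in deg C)
    between two observation times t0 and t1 to the race start time."""
    assert t0 <= start <= t1
    total = (t1 - t0).total_seconds()
    if total == 0:
        return v0
    w = (start - t0).total_seconds() / total
    return v0 + w * (v1 - v0)

# Hypothetical hourly readings bracketing a 12:30 local start:
t0 = datetime(2023, 8, 20, 12, 0)
t1 = datetime(2023, 8, 20, 13, 0)
temp_at_start = interpolate_to_start(t0, 20.0, t1, 22.0, datetime(2023, 8, 20, 12, 30))
# temp_at_start == 21.0
```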
Step 5: Validate Data
  • Executor: Causal inference expert.
  • Specific Work: The causal inference expert conducted rigorous quality control on the collected raw data. A critical step was to verify the consistency of the environmental data from different sources (World Athletics vs. Weather Spark). By calculating the intraclass correlation coefficient (ICC), they confirmed that the pre-race temperature (ICC = 0.963), post-race temperature (ICC = 0.960), pre-race humidity (ICC = 0.867), and post-race humidity (ICC = 0.865) from both sources were highly consistent (p < 0.001), allowing the data to be merged for analysis. This step ensured the reliability and validity of the subsequent analyses.
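The inter-source consistency check can be illustrated with a two-way, single-measure consistency ICC in pure Python. This is a generic sketch of one common ICC formulation (ICC(3,1)); the exact ICC model the authors used is not specified in the text:

```python
def icc_consistency(pairs):
    """Two-way, consistency, single-measure ICC (ICC(3,1)) for two raters.
    `pairs` is a list of (source_a, source_b) measurements per competition."""
    n, k = len(pairs), 2
    grand = sum(v for p in pairs for v in p) / (n * k)
    row_means = [sum(p) / k for p in pairs]
    col_means = [sum(p[j] for p in pairs) / n for j in range(k)]
    ss_rows = k * sum((r - grand) ** 2 for r in row_means)
    ss_cols = n * sum((c - grand) ** 2 for c in col_means)
    ss_total = sum((v - grand) ** 2 for p in pairs for v in p)
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# A constant offset between sources still yields perfect consistency.
print(icc_consistency([(10, 11), (12, 13), (14, 15)]))  # -> 1.0
```

Note that the consistency form tolerates a systematic offset between sources, which is appropriate when two weather providers calibrate differently but track the same variation.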
Step 6: Select the Computational Method
  • Executor: Causal inference expert.
  • Specific Work: Based on the computational problem and data characteristics, the causal inference expert selected a series of computational methods. First, Pearson correlation (two-tailed, α = 0.05) was used for exploratory analysis. For the core causal discovery phase, considering the potential for unobserved confounding factors in a race walk (such as an athlete’s individual heat acclimatization or in-race cooling interventions), the expert ruled out the PC algorithm, which assumes no latent variables. Instead, they chose the FCI (fast causal inference) algorithm (gCastle 1.2.2, Fisher-Z conditional independence test, α = 0.01, maxP = 3, complete-rule-set), which is capable of handling potential latent variables. This choice reflected a deep understanding of the problem’s complexity and significantly enhanced the robustness of the conclusions.
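The Fisher-Z conditional independence test at the core of the chosen FCI configuration can be illustrated for the unconditional case (|S| = 0). This is a stdlib-only sketch of the standard test statistic, not the gCastle implementation:

```python
import math

def fisher_z_pvalue(x, y, cond_size=0):
    """Fisher-Z independence test on the Pearson correlation r of x and y,
    with |S| = cond_size conditioning variables (shown here for |S| = 0)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    z = 0.5 * math.log((1 + r) / (1 - r))         # Fisher transform
    stat = math.sqrt(n - cond_size - 3) * abs(z)  # approximately standard normal
    p = math.erfc(stat / math.sqrt(2))            # two-sided tail probability
    return r, p

# Strongly dependent toy data: the tiny p-value means the (conditional)
# independence hypothesis is rejected, so FCI would keep the edge at alpha = 0.01.
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [1.1, 2.0, 3.2, 3.9, 5.1, 6.0, 7.2, 7.9, 9.1, 10.0]
r, p = fisher_z_pvalue(x, y)
print(round(r, 3), p < 0.01)
```

In the full FCI procedure, the same statistic is computed on partial correlations for conditioning sets up to the configured maximum size (here maxP = 3).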
Step 7: Implement the Computation
  • Executor: Causal inference expert.
  • Specific Work: The pipeline was executed in Python 3.11. The correlation analysis revealed a significant positive correlation between ambient temperature and split times, especially in the first half of the women’s race (r(54) = 0.782, p < 0.001). Subsequently, the FCI algorithm was applied to the 10 km split data to allow for latent confounders; edges present in ≥80% of 100 bootstrap runs were retained. As a sensitivity check, the same pipeline was re-run after removing the fastest and slowest split sequences from each race (top six), yielding ≥92% edge overlap with the original top-eight PAG and confirming a stable structure. The computational results (as shown in Figure 2) visually displayed the causal arrows between variables and the potential presence of latent variables (indicated by bi-directed arrows).
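The ≥80% bootstrap edge-retention rule can be sketched generically. Here `discover_edges` is a hypothetical stand-in for a single causal discovery run on a resampled dataset, and `toy_discover` is fabricated purely to show how unstable edges get filtered out; neither is the study's actual code:

```python
import random
from collections import Counter

def stable_edges(data, discover_edges, n_boot=100, threshold=0.8, seed=0):
    """Re-run a causal discovery routine on bootstrap resamples of the rows
    and keep only edges appearing in at least `threshold` of the runs."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]  # resample rows with replacement
        counts.update(discover_edges(sample))
    return {e for e, c in counts.items() if c / n_boot >= threshold}

# Hypothetical discovery routine: always finds one edge, and occasionally
# (depending on the resample) a spurious second one.
def toy_discover(sample):
    edges = {("pre_race_temp", "first_half_split")}
    if sum(row[0] for row in sample) % 7 == 0:  # unstable artifact
        edges.add(("humidity", "final_time"))
    return edges

data = [(i, i * 2) for i in range(20)]
kept = stable_edges(data, toy_discover)
print(kept)
```

The stable edge survives every resample, while the artifact appears in only a small fraction of runs and falls below the 80% threshold.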
Step 8: Interpret the Computational Results
  • Executors: Sport science expert and causal inference expert.
  • Specific Work: This was the crucial step for transforming data into knowledge.
    The causal inference expert first interpreted the causal graph from a technical perspective: “The results show that pre-race temperature is a direct cause of the first-half split time, which in turn directly affects the second-half split time for female athletes. Furthermore, the bi-directed arrow between pre-race temperature and the first-half split time indicates the presence of an unobserved confounder that affects both.”
    The sport science expert then translated this technical conclusion into the language of sport science: “This means that high temperature primarily impacts the final result by influencing the pacing during the first half of the race, rather than simply depleting energy in the second half as commonly believed. That ‘confounder’ likely represents individualized interventions, such as pre-race ice vests or in-race water dousing for cooling, which may modulate the body’s response to external heat and thereby influence the chosen pace for the first half.”
Step 9: Solve the Practical Problem
  • Executors: The entire team, with the final delivery to the elite sports team.
  • Specific Work: The team translated the interpreted causal insights into direct, actionable preparation advice for the national team. The final report highlighted: (1) Core Insight: The impact of high temperature on performance primarily manifests in the first half of the race. Therefore, cooling and replenishment strategies must be actively implemented from the very beginning, not just when fatigue sets in during the second half. (2) Tactical Recommendation: It was recommended that when formulating tactics for the Paris Olympics, the team should pre-plan their first-half pacing strategy and cooling plan based on the forecasted starting temperature in order to conserve energy and create favorable conditions for the final push. This report was submitted to the national team one week before the race, providing a crucial quantitative reference for their final decisions.
Field Evaluation of Trustworthiness
  • In the context of real-world applications in elite sports, we often lack a “gold standard” for evaluation (such as a pre-known ground-truth causal graph or parallel randomized controlled trials, RCTs). Therefore, we adopt the “field evaluation” method defined in the framework. This method comprehensively assesses the trustworthiness of the knowledge discovery from two dimensions: process and outcome.
  • Process-Based Trustworthiness
The core of process-based trustworthiness evaluation lies in the collaborative team—comprising the elite sports team, sports scientists, and causal inference experts—jointly scoring the rigor, transparency, and collaborative effectiveness of the nine steps in the knowledge discovery pipeline. The score for each step, sᵢ, ranges from −2 (indicating severe flaws) to +2 (indicating excellent execution), with a default value of 0. To demonstrate this process concretely, we have constructed an illustrative scoring table for this case in Table 3. In this table, the team has scored each step and provided the core justification for the score.
Based on the scores in Table 3, we can calculate the process-based trustworthiness for this case. With the total score ∑sᵢ = 16, Equation (1) gives:
Tp = 16/36 + 0.5 ≈ 0.444 + 0.5 = 0.944
This high score, approaching 1.0 (where the Tp range is 0 to 1), indicates that the collaborative team had established a very high degree of trust in the knowledge generation process even before its application. This trust is not blind but is built on the solid foundation of each step being traceable, interpretable, and evaluable.
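Assuming Equation (1) takes the form implied by the substituted values (Tp = ∑sᵢ/36 + 0.5, which maps the nine step scores, each in [−2, +2], onto [0, 1]), the calculation can be sketched as:

```python
def process_trust(step_scores):
    """Process-based trustworthiness Tp = sum(s_i)/36 + 0.5, mapping nine
    step scores, each in [-2, +2], onto the interval [0, 1]."""
    assert len(step_scores) == 9 and all(-2 <= s <= 2 for s in step_scores)
    return sum(step_scores) / 36 + 0.5

scores = [2, 2, 2, 1, 2, 2, 1, 2, 2]  # the Table 3 scores, totaling +16
print(round(process_trust(scores), 3))  # -> 0.944
```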
  • Outcome-Based Evaluation
The ultimate test of knowledge is its practical application. In elite sports, the final competition result is the most direct and powerful way to evaluate the value of knowledge.
Application and Outcome: The national race-walking team adopted the core insights from this study, adjusting their cooling and pacing strategies for the Paris Olympics to place greater emphasis on energy management and thermoregulation during the first half of the race. A female athlete employed a strategy consistent with our recommendations—using an ice scarf for cooling from the start and establishing an early lead—ultimately achieving an Olympic gold medal. However, we explicitly caution that elite sport performance is multifactorial, involving training, preparation, competition dynamics, and chance; this outcome demonstrates contextual consistency with the framework’s recommendations rather than validating the causal correctness of the discovered pathways.
Trustworthiness Update: This observed outcome provided positive contextual feedback within our evaluation framework. Applying the heuristic outcome evaluation factor A0, this case updates the trustworthiness score via Equation (2), elevating T from 0.944 to 0.978. It is crucial to note that this quantitative update reflects increased confidence in the practical utility of the framework within this specific context, not proof of causal efficacy. The score serves as a decision-support heuristic for future applications, acknowledging that single-case outcomes cannot establish generalizable causal validation.
Substituting the values:
T = 1 / (1 + ((1 − 0.944)/0.944) · e^(−1))
T = 1 / (1 + 0.022) = 1/1.022 ≈ 0.978
This update elevates the trustworthiness from a high baseline of 0.944 to 0.978. It quantitatively demonstrates how a successful real-world result provides positive contextual feedback, turning highly trusted causal knowledge into field-tested, practical experience for the team.
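The substitution above corresponds to a logistic-style odds update. Assuming Equation (2) has the form T = 1 / (1 + ((1 − Tp)/Tp) · e^(−A0)), reconstructed from the substituted values with A0 = 1, a minimal sketch is:

```python
import math

def update_trust(t_p, a0=1.0):
    """Outcome-based trust update: T = 1 / (1 + ((1 - Tp)/Tp) * exp(-A0)).
    A positive field outcome (A0 > 0) shrinks the odds against the knowledge;
    A0 = 0 leaves the process-based score unchanged."""
    odds_against = (1 - t_p) / t_p
    return 1 / (1 + odds_against * math.exp(-a0))

t_p = 16 / 36 + 0.5            # process-based score from Equation (1)
print(round(update_trust(t_p), 4))  # -> 0.9788, reported as 0.978 in the text
```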
Through the complete demonstration of this case, we have shown that the framework is not only theoretically closed-loop but also feasible and effective in high-stakes practices like Olympic preparations. It successfully guided an interdisciplinary team to discover causal knowledge of practical value from public data in a very short timeframe and ultimately provided one additional evidence-informed input among multiple preparation factors for the team. We explicitly disclaim that the framework ‘caused’ the observed Olympic outcome; rather, the case illustrates how the framework generated actionable insights compatible with successful performance.

5. Discussion

5.1. Principal Findings

We provide one of the earliest applications of causal discovery algorithms to Olympic-level race strategy using public data. The nine-step interdisciplinary workflow (Step 1, practical problem → Step 9, solved problem) successfully translated a high-stakes coaching question into an evidence-informed tactical adjustment, demonstrating that:
(1) Quality-controlled, time-synchronized public data can yield actionable causal hypotheses.
(2) The field-evaluation trustworthiness score heuristic offers transparent, Delphi-anchored trust quantification in the absence of RCT ground truth.
(3) Multi-domain collaboration and co-interpretation are essential for turning DAG edges into coach-ready language.

5.2. Methodological Rigor and Limitations

(1) Evaluation Heuristic: The process-based trustworthiness score is intentionally a decision-support tool, not a validated psychometric index. We anchored it to a Delphi consensus protocol and the mean opinion score; however, independent raters were not feasible within the Olympic timeline. Future work should invite external panels and compare the score against experimental benchmarks.
(2) FCI Robustness: The FCI parameters (Fisher-Z, α = 0.01, maxP = 3, complete rule set) and bootstrap stability criteria (≥80%, B = 100) are now fully disclosed. A sensitivity analysis (trimming the fastest and slowest splits per race, retaining the top six) yielded ≥92% edge overlap, indicating that the temperature → first-half pacing pathway is robust to athlete-selection perturbations. Nevertheless, the observational nature of the data precludes definitive causal claims; the DAG should be treated as hypothesis-generating.
(3) Bias Mitigation: We adopted (i) double-blind problem translation, (ii) mandatory "uncertainty edge" review, and (iii) bootstrap stability thresholds. These reduce but do not eliminate the confirmation bias inherent when the same team scores its own process; we now explicitly state this limitation.
(4) Outcome-Validation Decoupling: We explicitly distinguish between the competitive success observed in the case study and the validation of causal claims. The Olympic outcome demonstrates that the framework-generated insights were contextually consistent with successful performance under specific environmental conditions, not that the causal discovery algorithm "caused" the victory. This aligns with the framework's purpose as a decision-support tool rather than a deterministic predictor. Elite sport performance is influenced by numerous uncontrolled factors, including training regimens, athlete preparation, competition dynamics, and chance; our framework provided one evidence-informed input among many preparation strategies.

5.3. Applicability Boundaries

Structured endurance sports with top-N finish lists (cycling, swimming, and rowing) are ideal entry points. The framework is less ready for: noisy team-sport event data where outcome definition is contested; sports without public leader-boards (minor leagues); and settings lacking at least one sport-science and one causal inference expert.
Small clubs can downscale by using open video-analysis tools (OpenPose (v1.7.0) and Tracker) instead of commercial IMU systems, partnering with university labs for causal inference expertise, and adopting a four-step light version (problem → data → FCI → action card) while retaining bootstrap stability checks.
Importantly, we distinguish between theoretical repeatability and practical deployability. Theoretical repeatability refers to the framework’s logical consistency and methodological soundness when applied to similar structured endurance sports with available public data—the computational steps and collaborative workflow remain valid across these contexts. Practical deployability, however, depends on organizational capacity to assemble the required tri-domain expertise, secure data access, and operationalize the nine-step workflow within real-world time constraints.

5.4. Future Directions

While the present study offers a first working proof of concept, it also marks the starting line for a broader research agenda. We have shown that public competition data, when quality-controlled and interpreted through an interdisciplinary causal-discovery loop, can yield coach-ready insights within weeks rather than years. This transforms data from a passive historical record into a proactive tool for discovering the marginal gains that decide championships. To turn this promise into routine practice, we now outline four concrete next steps:
(1) Experimental Validation: Crossover heat-chamber trials to test the temperature → pacing hypothesis under controlled conditions.
(2) Multi-Modal Challenges: The integration of multi-modal data (IMU, video, HRV, and wearables) implies more variables and longer pathways, necessitating the introduction of causal discovery for time series, a topic not yet addressed in this paper.
(3) Semi-Automation: A web-based "nine-step wizard" that guides coaches through problem translation, data upload, bootstrap FCI, and auto-generation of action cards.
(4) Repository: An open repository of anonymized case studies (rowing, swimming, and cycling) to serve as a practical playbook.

5.5. Concluding Statement

This paper offers a repeatable, transparent pathway from public data to coach-ready causal insights, without claiming to “cause” Olympic medals. By openly sharing parameters, bootstrap code, and limitations, we invite the community to stress-test, refine, and extend the workflow—turning the abundance of sports data into genuine, evidence-based marginal gains.

Author Contributions

Conceptualization, D.C., Z.H. and Z.J.; methodology, D.C.; software, D.C.; validation, D.C., Z.J. and Z.H.; formal analysis, D.C.; investigation, X.Z. and W.Y.; resources, Z.J.; data curation, X.Z. and W.Y.; writing—original draft preparation, D.C.; writing—review and editing, D.C. and Z.J.; visualization, D.C.; supervision, D.C., Z.H. and Z.J.; project administration, D.C.; funding acquisition, D.C. and Z.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Project 25-03, funded by the Fundamental Research Funds for the China Institute of Sport Science.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and local ethical requirements. The protocol was reviewed and approved by the Science and Technology Ethics Review Committee of China Institute of Sport Science (approval code CISSLA-20250306, approval date 6 March 2025). All data were de-identified and obtained from open-access repositories; individual informed consent was therefore not required, as confirmed by the above ethics committee.

Data Availability Statement

The data supporting this study’s findings are available from the corresponding author upon reasonable request. The video data were obtained from publicly available sports broadcasting platforms (e.g., WCBA, FIBA official channels). Due to copyright restrictions and privacy protection policies regarding elite athletes, the original video files cannot be publicly archived or redistributed.

Conflicts of Interest

The authors declare no competing financial interests.

Figure 1. The 9-step framework.
Figure 2. DAG of the case of race walk.
Table 1. A comparison of causal discovery methods.

| Class | Representative Algorithms | Philosophical Logic | Mathematical Foundation | Computational Output | Genomics Use-Scenario |
| Constraint-based | PC, FCI | Difference making | Non-parametric CI tests; Causal Markov + Faithfulness | CPDAG/PAG | Sparse regulatory networks |
| Score-based | GES, GFCI | Explanatory factors | Gaussian/multinomial likelihood + sparsity penalty | CPDAG/PAG | High-dimensional genomics |
| Functional causal models | LiNGAM, ANM, PNL | Causal asymmetry | Structural equation Y = f(X, e), e independent of X | DAG + noise-residual function | Where directionality is crucial |
Table 2. Variables and data resources used in this case.

| Year | Game | 20 km Mark | 10 km Mark | 5 km Mark | 1 km Mark | BT | AT | BH | AH |
| 2015 | WC | IAAF | IAAF | IAAF | NA | IAAF | WS | IAAF | WS |
| 2016 | OC | IAAF | CCTV | NA | NA | IAAF | WS | WS | WS |
| 2017 | WC | IAAF | IAAF | IAAF | NA | IAAF | IAAF | IAAF | IAAF |
| 2019 | WC | IAAF | IAAF | IAAF | IAAF | IAAF | IAAF | IAAF | IAAF |
| 2021 | OC | IAAF | CCTV | CCTV | CCTV | IAAF | WS | WS | WS |
| 2022 | WC | IAAF | IAAF | IAAF | IAAF | IAAF | IAAF | IAAF | IAAF |
| 2023 | WC | IAAF | IAAF | IAAF | IAAF | IAAF | IAAF | IAAF | IAAF |
Table 3. Process-based scoring in this case.

| Step | Score (sᵢ) | Justification |
| Step 1: Formulate practical problem | +2 | Highly relevant and urgent: the problem was directly raised by the national team and directly linked to Olympic preparations, giving it immense practical value and timeliness. |
| Step 2: Define scientific problem | +2 | Precise translation: sports scientists successfully translated the coach's vague concerns into a clear, testable scientific hypothesis focusing on "dynamic impact." |
| Step 3: Design computational problem | +2 | Efficient collaboration: sports science and causal inference experts collaborated closely to jointly define clear data requirements and a multi-stage analysis pathway. |
| Step 4: Collect data | +1 | Resourceful but challenging: the team successfully collected data from multiple sources, but the need for manual transcription showed that the data were not perfectly readily available. |
| Step 5: Validate data | +2 | Rigorous methodology: the consistency of data from multiple sources was rigorously validated using the ICC, laying a solid foundation for the reliability of later analyses. |
| Step 6: Select computational method | +2 | Prudent selection: considering potential confounders, the FCI algorithm, capable of handling latent variables, was chosen over the simpler PC algorithm, reflecting a deep understanding of the problem's complexity. |
| Step 7: Execute computation | +1 | Standard execution: the computation was executed smoothly as planned, producing the expected correlation matrices and causal graph, thus completing the technical task. |
| Step 8: Interpret computational results | +2 | Key to interdisciplinary fusion: this was a critical step for value realization. Tri-domain experts jointly interpreted the results, translated the abstract causal graph into a meaningful concept of potential interventions, and made the conclusion understandable. |
| Step 9: Solve practical problem | +2 | Highly actionable: the final conclusion was distilled into a direct, clear tactical recommendation rather than remaining at a theoretical level, successfully closing the loop. |
| Total Score | +16 | The overall process is highly trustworthy. |
