1. Introduction
Physicians recognize disease through distinct patterns in a patient’s physiology or behavior that diverge from what is considered typical for healthy functioning. In order to reliably identify such deviations, it is first necessary to understand the defining characteristics of healthy human physiology. Only then can we detect, and ideally predict at an early stage, the subtle changes that indicate the beginning of pathological processes [
1,
2].
This is particularly important in neurology and psychiatry, where many disorders develop gradually over years. The sooner emerging changes in physiological dynamics are identified, the greater the chance of timely intervention: whether to slow down disease progression, prevent further decline, or preserve quality of life. Early prediction also raises the possibility that, in the future, therapeutic advances may allow for interventions at the most effective time.
A key step towards achieving this goal is to improve our ability to quantify healthy physiological variability. Traditional linear measures only provide a partial picture. By contrast, fractal and non-linear analytical methods offer a richer, more realistic representation of physiological signals by capturing the inherent complexity that characterizes healthy systems.
The conceptual foundations of fractal analysis can be traced back to Benoît Mandelbrot, who coined the term ‘fractal’ (from the Latin ‘fractus’, meaning ‘broken’) in 1975. Mandelbrot observed that natural forms cannot be described by Euclidean geometry: ‘
Clouds are not spheres, mountains are not cones… bark is not smooth, nor does lightning travel in a straight line’ [
3]. Such natural structures display self-similarity across scales. The same is true of physiological signals. For example, when an electrocardiogram (ECG) trace is observed with different magnifications, its rugged fluctuations remain recognizably similar. One segment of the signal resembles another, which is called self-similarity: a defining feature of fractal geometry.
This behavior can be described mathematically. Some fractals are exact, which is characteristic of generated mathematical graphs, whereas statistical fractals better describe natural forms. The mathematical foundations for these concepts were established in the 19th and early 20th centuries by Weierstrass, Cantor, Peano, Koch, Sierpiński and Julia, among others. However, their functions were long considered ‘pathological,’ as they could not be analyzed using traditional methods [
3].
Fractal dimension is a quantitative measure of how a structure occupies space. Physiological structures, such as the folds and gyri of the human brain, maximize space usage in non-Euclidean ways. Similarly, the fractal dimension of a physiological signal estimates its temporal complexity, or how ‘wrinkled’ or irregular it is over time. Of the available algorithms, Higuchi’s fractal dimension (HFD; [
4]) is particularly well-suited to electrophysiological signals, as it captures temporal complexity directly [
5,
6,
7,
8,
9,
10].
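Higuchi’s algorithm can be stated compactly: for each delay k, the signal is split into k interleaved sub-series, the mean normalized ‘curve length’ L(k) is computed, and the fractal dimension is read from the slope of log L(k) versus log(1/k). The following is a minimal illustrative sketch of that idea, not a validated clinical implementation; the choice of kmax and the test signals are our own illustrative assumptions.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Estimate Higuchi's fractal dimension (HFD) of a 1-D signal.

    For each delay k the signal is split into k interleaved sub-series;
    the mean normalized curve length L(k) scales as k**(-FD), so FD is
    the slope of log L(k) versus log(1/k).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lk = []
    for k in ks:
        lengths = []
        for m in range(k):
            sub = x[m::k]                      # x[m], x[m+k], x[m+2k], ...
            if len(sub) < 2:
                continue
            dist = np.abs(np.diff(sub)).sum()  # total variation of the sub-series
            norm = (n - 1) / ((len(sub) - 1) * k)
            lengths.append(dist * norm / k)
        lk.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lk), 1)
    return slope

# Sanity checks: white noise approaches FD ~ 2, a smooth sine stays near 1.
rng = np.random.default_rng(0)
fd_noise = higuchi_fd(rng.standard_normal(2000))
fd_sine = higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 2000)))
```

The two sanity checks reflect the interpretation given above: a rough, irregular signal fills the time–amplitude plane more completely (FD approaching 2) than a smooth periodic one (FD near 1).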
While classical Euclidean geometry describes smooth, regular shapes, biological structures and processes display irregular, non-integer, fractional dimensions [
5,
11,
12]. Fractal objects consist of nested sub-units that resemble the larger whole—a hallmark of self-similarity or scale invariance. This principle is exemplified by many natural systems, including vascular trees, pulmonary structures, Purkinje cell networks, coral reefs, coastlines and cloud formations [
3,
11,
12].
In human physiology, fractal organization enables the efficient distribution of resources across complex, spatially extended systems such as blood flow, nutrient delivery, gas exchange and the electrical conductance of neuronal networks within the central nervous system (CNS). Importantly, fractal concepts apply not only to anatomical form but also to the processes occurring within these structures. Physiological processes generate irregular fluctuations across multiple time scales, and these temporal patterns also exhibit statistical self-similarity [
5,
10,
11,
12,
13,
14].
Recently, it has become normalized for very young researchers who are just entering the field to publish review papers using the PRISMA approach (designed for meta-analytic studies), whereas the original intent of the review format was for the author to give an overview after conducting their own research, offering an experience-based opinion on, and conclusions about, the accomplishments of a research field. Concurrently, many authors use automated agents to search and summarize the literature, despite these tools’ well-known inability to verify sources or provenance (a consequence of the automatization of plagiarism). We are also aware that many publishers lack a systematic way to cope with the problem of citations to nonexistent publications. With this work, we wanted to do something different (and old-fashioned): after years of trying to understand the intricate characteristics of physiological complexity, we focus on how our understanding of physiological complexity processes developed, leading to innovative interpretations and possible practical uses. We thought it would be interesting to illustrate how a researcher’s perspective might change over the years, as change is the only constant. Our research questions evolved from ‘how can we use fractal analysis to detect depression’ (i.e., to differentiate patients from healthy controls based on fractal analysis), to ‘can a cluster of different nonlinear biomarkers capture the state of the aberrated autonomic system in depression’, and then to ‘how can we connect that with the clinical presentation that clinicians see in a patient’.
This work summarizes (via human intelligence only) the application of fractal and nonlinear analysis in clinical neuroscience, drawing on our own research and that of our immediate network. At first, we focused on one or two nonlinear measures as potential biomarkers (also analyzing how compatible they are); we then realized that every single nonlinear measure provides different information about the signal under study, and later that clusters of biomarkers actually represent autonomic profiles, which can be used either for further machine learning or for knowledge discovery via ontology-based application profiles, the latest direction of our work. We also highlight recent advances in the fractal and fractional calculus-based modeling of physiological transport processes (through the highly inhomogeneous structures of human tissue) that support the development of digital twins, as we understand that complex structures require improved mathematical modeling.
This work is organized into sections that cover the necessary theoretical background from the older literature (seldom cited recently, a form of ‘academic amnesia’ we want to correct), a section on the interpretation of regularity changes, and then sections dedicated to different research projects answering specific tasks (from the detection of depression, to differentiating between phases of disease, to cluster biomarkers, to interpreting the therapeutic effect of NIBS, to the early detection of mild cognitive impairment, to the early detection of movement disorders), followed by a methodological comparison of two nonlinear measures, in which we also discuss the technical aspects of preprocessing, the contrasting algorithms used, and the limitations of this approach. Finally, we discuss one recent application of the fractal and fractional approach that is important for the adequate development of digital twins, and close with conclusions and further directions.
2. What Is Decomplexification?
The concept of decomplexification, a pathological reduction in physiological complexity described by Goldberger, Pincus, Hausdorff and colleagues, offers a unifying framework for interpreting neurological disorders [
5]. Within this framework, HFD and related nonlinear metrics quantify multiscale irregularity in neural activity, enabling the early, disease-specific detection of altered brain dynamics.
Early theoretical studies of the healthy heartbeat revealed that physiological systems combine organization (order) and variability (disorder), rather than being simply orderly. As Goldberger and colleagues [
5] observed, this healthy state exhibits hierarchical, long-range organization, which is enabled by scale-invariance or self-similarity. This ‘organized variability’ is the characteristic signature of a natural, healthy process. As Goldberger observed, a fractal system exhibits a type of ‘roughness’ or irregularity that remains statistically similar across a wide range of scales. This underlying scale-invariant mechanism allows for a wide range of interindividual variability—a key challenge for engineers modeling these processes—and aligns with contemporary models of biological self-regulation, such as the Active Inference model [
13,
14,
15]. As this fractal nature appears to be a fundamental mechanism of physiological structures and functions, it follows that standard mechanistic or reductionist (e.g., Fourier-based) approaches to signal analysis are inadequate [
7]. To better model physiological systems, fractal and nonlinear approaches should be adopted instead [
16,
17].
Healthy complexity is often defined as an organism’s natural ability to adapt to constant external and internal changes. As we age, however, this physiological complexity decreases slightly, making the system less adaptable. Disease, by contrast, is different. Rather than a gradual decline, disease often causes an abrupt breakdown of this fractal organization, meaning the physiological system becomes severely disrupted. This breakdown creates opportunities for new approaches to early detection and monitoring. This leads to what Goldberger calls the ‘central paradox’: many illnesses, despite being labeled ‘disorders’, exhibit strikingly predictable (periodic) behavior [
5]. For instance, patients diagnosed with Parkinson’s disease often exhibit an ‘exactly predictable’ tremor, which is an involuntary movement. This ‘stereotype of disease’, where the pathological pattern is rigid and regular, is a key concept for clinicians and is the very opposite of healthy variability [
18,
19].
Our goal is to match this clinical stereotype with a mathematical framework that can quantify it using physiological signals. Drawing on our knowledge of healthy physiology, we can characterize disease as a measurable breakdown of long-range order, resulting in a loss of self-similar structure. This pathological loss of complexity is termed ‘decomplexification’ [
5]. In other words, the measurable loss of physiological complexity, which leads to more predictable, rigid and periodic behavior, is the mathematical counterpart to clinical stereotypy.
This loss of complexity is evident in the emergence of highly periodic dynamics (and a corresponding loss of information) in numerous clinical states. The first recorded instance of this occurred in 1816, when Dr John Cheyne described the ‘highly predictable oscillatory nature’ of congestive heart failure. This was evident in abnormal cyclic breathing patterns (
Figure 1), now known as Cheyne–Stokes breathing. This rigid regularity contrasts sharply with the normally irregular and complex coupling of healthy heart and breathing rhythms. Following Cheyne and Stokes, later researchers (notably Mackey and Glass) cataloged numerous ‘dynamical diseases’ with strong periodicities that contrast sharply with the deterministic chaos of healthy dynamics. This principle applies broadly; decomplexification is evident in the predictable patterns observed in blood cell counts in leukemia, epileptic seizures, certain psychiatric behaviors (e.g., OCD), sudden infant death syndrome (SIDS) and heart failure, to name a few [
19,
20]. In conclusion, these highly structured, periodic patterns represent a significant reduction in complexity (i.e., in healthy variability) under pathological conditions. Many authors have demonstrated this phenomenon in the field of physiological complexity, allowing clinicians to recognize the ‘stereotypy of disease’ [
1,
5,
12,
13,
19,
20]. We will discuss some recent applications of this concept in psychiatry and neurology.
3. What Does Regularity in Physiology Mean?
Steven M. Pincus is an (unjustly) relatively little-known author who has made a significant contribution to our understanding of how we should model physiological processes. His initial work addressed randomness and degrees of irregularity in physiological datasets, clarifying that terms such as ‘random’ or ‘chance’ are often misunderstood and that non-mathematicians often assume that probability theory adequately addresses them. He explained that, in contrast to the moment statistics and variability measures typically used, which neglect the sequential (historical) context of the data, a regularity statistic such as approximate entropy (ApEn) is needed. ApEn can detect irregularities that may indicate emerging clinical symptoms [
19], and forms part of a general theoretical development as the natural information–theoretical parameter for a process [
19,
20]. The loss of information mentioned above in the Decomplexification Section is addressed by entropy-based measures in a more statistical-physics-based context; this perspective is also applied in [
21].
This regularity statistic is directly associated with the concept of decomplexification. It links compromised physiology in multiple systems to more regular, patterned behavior (e.g., the abnormally regular heart rate tracings seen in premature babies at risk of sudden infant death syndrome), whereas normal physiology is linked to greater irregularity and complexity, as discussed by Pincus and Goldberger [
18,
19,
20]. The formulation of ApEn was driven by the need to quantify ensemble regularity versus randomness in physiological signals. ApEn measures the logarithmic likelihood of patterns remaining close over observations: greater regularity produces smaller ApEn values [
18,
19,
20]. When compared to the fractal dimension, greater regularity (higher predictability) usually leads to (or can be interpreted as) lower complexity: two sides of the same coin.
The key difference between ApEn and standard frequency- or time-based measures is the order of the data. Standard statistics are primarily concerned with how ‘smeared’ the data are around the mean, so the sequence of samples is irrelevant. In entropy-based measures such as ApEn, however, the order of the data is the central factor. Pincus demonstrated this by showing that shuffling the order of samples in a physiological dataset produces a white-noise-like series (a completely random structure). This illustrates a loss of intrinsic relationships and highlights that a loss of complexity also entails a loss of information from the system. The primary focus of this parameter is discerning these changes in order, from the apparently random to the very regular [
18,
19,
20]. Consequently, pathologically low complexity (decomplexification) is still a feature of an at least partially organized system, whereas randomness characterizes the total absence of intrinsic structure, which in physiology means the absence of functioning feedback-loop controls.
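Pincus’s regularity statistic, and the shuffling argument above, can be illustrated with a short sketch. The implementation below follows the standard ApEn definition (template matching under the Chebyshev distance, self-matches included); the tolerance r as a fraction of the standard deviation and the test signals are conventional but illustrative choices, not taken from the cited studies.

```python
import numpy as np

def apen(x, m=2, r_frac=0.2):
    """Approximate entropy (ApEn) following Pincus's definition.

    Counts how often length-m templates that match within tolerance r
    (Chebyshev distance, self-matches included) still match at length
    m + 1; greater regularity yields smaller ApEn.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_frac * x.std()

    def phi(mm):
        templ = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        # pairwise Chebyshev distance between all templates
        dist = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        frac = (dist <= r).mean(axis=1)
        return np.log(frac).mean()

    return phi(m) - phi(m + 1)

# A very regular signal versus the very same samples in shuffled order:
regular = np.sin(np.linspace(0, 40 * np.pi, 1000))
shuffled = regular.copy()
np.random.default_rng(1).shuffle(shuffled)
apen_regular, apen_shuffled = apen(regular), apen(shuffled)
```

Because shuffling preserves the mean and variance exactly, moment statistics cannot distinguish the two series, while ApEn rises sharply once the sequential structure is destroyed.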
However, interpreting entropy can be difficult, since physiological signals comprise both stochastic and deterministic components. While low complexity in pathological cases is often clear, high complexity can be ambiguous. In our own research [
22], we found that patients with depression had higher levels of complexity in their EEGs. Surprisingly, their EEG complexity increased further during recovery. This led us to question whether this change was due to a return to healthy, adaptable complexity, or whether it was instead due to ‘pathological randomness’ resulting from a decoupling of underlying central control mechanisms [
23]. This demonstrates the utility of ApEn as a sophisticated tool for probing system dynamics and its value in flagging when something is wrong. Its correspondence to mechanistic inferences regarding subsystem autonomy, feedback and coupling has been demonstrated in various contexts [
24]. For instance, in studies of cortisol secretion in depressed men, ApEn findings indicated a loss of regulatory control [
25]. ApEn has also been applied to daily self-rating mood data to identify mood regulation dynamics that could inform phenotypic classification [
26]. ApEn typically increases with greater system coupling and external influences, thus providing an explicit measure of autonomy. Conversely, greater regularity (lower ApEn) corresponds to greater component and subsystem isolation, as observed in sudden infant death syndrome (SIDS), and is associated with compromised physiology [
18,
19,
20].
4. Applications of Complexity Measures in Psychiatry: Depression Detection and Monitoring of Therapeutic Outcomes
The application of complexity measures in psychiatry has shown particular promise in the detection of major depressive disorder (MDD). This line of inquiry is driven by neurobiological findings such as the demonstration that, in patients with MDD, the uncinate fasciculus—a deep white matter tract connecting the prefrontal cortices with the limbic system—is impaired [
27]. This structural impairment can be as high as 35%, leaving only 65% intact, and provides a potential explanation for why patients with mood disorders struggle to control their emotions. This damage may force the brain to develop compensatory mechanisms, often recruiting atypical areas to perform the same task (‘a different program’). Building on this, in 2014/15, we speculated that, as in movement disorders, this deep-structure compensation might produce detectable changes in cortical excitability, which could be measured using EEG. While other studies have confirmed this fronto-limbic disconnection and disrupted global network topology using fMRI and diffusion tractography [
28], we aimed to establish whether these disturbances could be detected using EEG, which is much more accessible and cost-effective.
Table 1 gives an overview of several similar studies addressing the same classification task.
Several publications have originated from this research. We developed a methodology that uses fractal and entropy measures (specifically HFD and SampEn), extracted from resting-state (closed-eyes) EEG, as features for machine learning models. Our initial results showed good separation between patients with major depressive disorder (MDD) and age-matched healthy controls (HCs) [
23]. A subsequent publication explored this further by comparing seven popular machine learning classifiers on the same dataset. This study demonstrated that, when characterized by these nonlinear features, any of the tested classifiers could accurately distinguish between MDD patients and HCs [
22]. Principal component analysis (PCA) confirmed that the data were clearly separable based on these features; what is more, when two measures are combined, even the first three PCAs would suffice for accurate detection tasks, regardless of the model used. The consistent performance of all the classifiers led to the key conclusion that, for this specific task, the primary diagnostic power may lie in the nonlinear characterization itself, rather than in the sophistication of the machine learning model used. However, this approach is not without limitations. The field continues to suffer from relatively small datasets (in our study, N was 46, in Ahmadlou [
29] it was 24, in [
30] it was 45, in Bairy [
31] it was 60, in [
32] it was 30, and in Bachmann [
33] it was 26; even in newer attempts at the same tasks, the sample size seldom exceeds 100), which carries a significant risk of overfitting. This lack of generalizability, coupled with the ‘curse of dimensionality’ and ‘blind spots’ [
34], prevents clinically useful automation and highlights the need for continued validation [
22,
34]. In addition, we noted that the model also learns so-called ‘nuisance variables’: characteristics of the presented training dataset that we did not specify as important, but which the model learns anyway and then, in essence, tries to identify in a new, unseen dataset.
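As a hedged illustration of the pipeline described above (nonlinear features feeding a simple classifier), the sketch below builds synthetic ‘recordings’ of two groups differing in complexity, extracts two features (a compact Higuchi FD and a successive-difference spread as a crude stand-in for an entropy measure), and runs leave-one-out nearest-centroid classification. The data, features and classifier are illustrative assumptions; they are not our clinical dataset, feature set, or models.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    # compact Higuchi FD estimate: slope of log L(k) versus log(1/k)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lk = [np.mean([np.abs(np.diff(x[m::k])).sum() * (n - 1)
                   / ((len(x[m::k]) - 1) * k * k) for m in range(k)])
          for k in ks]
    return np.polyfit(np.log(1.0 / ks), np.log(lk), 1)[0]

rng = np.random.default_rng(2)

def make_signal(smooth):
    # moving-average smoothing: a wider kernel lowers the signal's complexity
    raw = rng.standard_normal(1500)
    return np.convolve(raw, np.ones(smooth) / smooth, mode="same")

# Two synthetic 'groups' of 20 recordings differing only in smoothness
signals = [make_signal(12) for _ in range(20)] + [make_signal(3) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)
# Two nonlinear-flavored features per recording
feats = np.array([[higuchi_fd(s), np.std(np.diff(s)) / np.std(s)] for s in signals])

# Leave-one-out nearest-centroid classification on the feature vectors
hits = 0
for i in range(len(labels)):
    keep = np.arange(len(labels)) != i
    c0 = feats[keep & (labels == 0)].mean(axis=0)
    c1 = feats[keep & (labels == 1)].mean(axis=0)
    pred = int(np.linalg.norm(feats[i] - c1) < np.linalg.norm(feats[i] - c0))
    hits += int(pred == labels[i])
accuracy = hits / len(labels)
```

The point of the sketch mirrors the conclusion drawn above: when the nonlinear features separate the groups well, even the simplest classifier suffices, so the diagnostic power lies in the characterization rather than in the model.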
In another experiment, we adapted this approach to distinguish between phases of major depressive disorder (MDD), by recording resting-state electroencephalograms (EEGs) from patients over a longer period and comparing their recordings during exacerbation (episode) versus remission. The non-linear measures again provided a clear distinction between these two states, but a surprising finding emerged: the complexity of the patients’ EEG signals was higher during remission than during an acute episode [
22]. We offered a neurophysiological explanation for this, drawing parallels with the changes in excitability seen in movement disorders and the uncinate fasciculus damage known to occur in MDD [
27]. These results suggest that inexpensive, non-invasive EEG analysis using nonlinear methods could potentially be employed for the clinical monitoring of patient progress and the efficacy of therapeutic interventions.
However, this finding also raises a critical question. While the lower complexity during the acute episode aligns with the decomplexification hypothesis, the further increase in complexity during remission is ambiguous. It remains unclear whether this elevated complexity indicates a genuine return to healthy, adaptive variability, or whether it conceals an escalation in pathological randomness—a system whose intrinsic stabilizing mechanisms and feedback loops are impaired, resulting in a loss of overall integrity.
While earlier studies on serious depression [
35] concluded that a brain treated with antidepressants remains distinctly different from a healthy brain (in a biochemical sense, given how easily relapse is triggered by the same challenge), we can also take into account that once compensation (to circumvent missing or insufficient connectivity) occurs, remission does not coincide with a return to ‘healthy’ functioning, but instead achieves affective control by other means. Building on this, a subsequent review [
36,
37] compared the results of various groups that used nonlinear measures (often involving machine learning) with those that used standard spectral or time-based measures. The findings strongly suggested that the non-linear approach is more accurate and has a greater chance of being translated into clinical practice, since many of these measures can be calculated in real time to inform clinical decisions more effectively. Based on the effect sizes, it is evident that this approach is superior and more advisable than the current standard for signal analysis. However, despite this early attention in computational psychiatry, we also recognized significant limitations in the generalizability and practical real-life applications that must be addressed [
38].
As Klonowski noticed [
2], the main reason why these nonlinear approaches are not more widely accepted and used might lie in the simple fact that the majority of electrophysiological equipment (devices that record EEG, ECG and EMG) comes with only frequency-based or time-based (FFT-related) analysis software integrated. We discuss this technical aspect (and others) of signal analysis and ML-related issues further in
Section 9 and
Section 10, below.
5. Electrocardiogram-Extracted Cluster Biomarkers for MDD
While the previous section focused on EEG-based feature extraction, we must recognize that the ECG signal is also an important source for detection, given the close relationship between the brain and the heart as two coupled dynamic systems (cardio/pulmonary regulation). This link is neuroanatomically grounded in the Central Autonomic Network (CAN), which physically connects major neural hubs involved in the pathophysiology of MDD (e.g., the dorsolateral prefrontal cortex (DLPC), the insula and the hippocampi) with the autonomic nervous system (ANS). Furthermore, the vagus nerve is widely accepted as being critical for ANS allostasis, and it is known that the ANS is aberrated in depression. In practice, the ECG signal is simpler and easier to measure, and many medical-grade wearable devices and a well-established telehealth monitoring infrastructure are already available.
Steven Pincus (1994) [
19] was the first to explore the validity of ECG-extracted nonlinear measures systematically. He demonstrated that newborns at risk of sudden infant death syndrome (SIDS) could be identified using approximate entropy (ApEn) analysis of their ECGs. Although the ECG records compared, one from a healthy infant and one from an infant who had survived a near-SIDS event, had almost the same mean, their ApEn values differed seven-fold, which became the basis for an automatic warning control in pediatric intensive care departments at that time [
20]. This approach was later applied to depression and anxiety by Kemp and colleagues [
39,
40,
41], who also used measures such as detrended fluctuation analysis (DFA) and Poincaré plots to distinguish patients from healthy controls successfully. They noted that distinctions were more pronounced with greater depression severity [
39,
40,
41]. The same was found to be significant in detecting anxiety, which often co-occurs with MDD. This line of analysis has even been extended to detect suicidal ideation [
42] and has been corroborated by other physiological measures, such as electrodermal hypoactivity [
43].
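Detrended fluctuation analysis, mentioned above, admits a compact sketch: integrate the mean-removed signal, remove a linear trend within windows of increasing size n, and read the scaling exponent alpha from the slope of log F(n) versus log n. The scales and test signals below are our own illustrative choices; alpha near 0.5 is expected for white noise and near 1.5 for Brownian motion, with healthy 1/f-like fluctuations falling near 1.0.

```python
import numpy as np

def dfa_alpha(x, scales=(16, 32, 64, 128, 256)):
    """Detrended fluctuation analysis (DFA) scaling exponent alpha.

    Integrate the mean-removed signal, remove a linear trend in windows
    of size n, and read alpha from the slope of log F(n) vs log n.
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())              # integrated profile
    flucts = []
    for n in scales:
        t = np.arange(n)
        sq = []
        for s in range(len(y) // n):
            seg = y[s * n:(s + 1) * n]
            coef = np.polyfit(t, seg, 1)     # local linear trend
            sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(sq)))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(5)
a_white = dfa_alpha(rng.standard_normal(4096))             # expect alpha near 0.5
a_brown = dfa_alpha(np.cumsum(rng.standard_normal(4096)))  # expect alpha near 1.5
```

In the HRV literature cited here, the short- and long-range DFA exponents are typically computed separately over different scale ranges; the single-slope version above is deliberately minimal.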
Following years of research, it has become evident that researchers should never rely on just one non-linear measure. Systematic comparisons [
44] show that each measure extracts unique information about the signal. The ‘art’ lies in finding an optimal combination of measures to support clinical decision-making. In a recent pivotal study, Weber and colleagues [
45] re-analyzed the UK Biobank dataset (comprising 15,768 participants) to determine whether clusters of ECG-derived HRV biomarkers could accurately reflect known autonomic nervous system (ANS) abnormalities in depression and correlate with clinical symptoms. Using K-means clustering, they identified distinct psychophysiological profiles. They pinpointed a high-risk cluster (LRS, or low relative sympathetic cluster) that was associated with maladaptive stress-coping strategies. Interestingly, they identified two clusters with low HRV but different risk profiles: one cluster with high relative sympathetic activity (HRS) and a lower breathing rate was associated with greater resilience. In comparison, a cluster with dominant relative parasympathetic activity and a higher breathing rate was associated with a higher prevalence of depression and suicide attempts. These biologically derived clusters aligned perfectly with the psychometric scales, suggesting that susceptibility to depression is associated with more rigid and maladaptive coping strategies [
45]. Although it has been known for quite a long time that chronic stress and depression are associated with low heart rate variability, this particular result is interpreted in line with the Active Inference approach [
16,
17,
18]. In the LRS cluster, rigid priors based on depressogenic inferences lead to a hypervigilant state in which a constantly increased metabolic need is anticipated (i.e., an increased breathing rate, BR), which increases the prediction error. To minimize the prediction error, parasympathetic tonus is elevated in an ongoing attempt to counterbalance an overactive sympathetic nervous system, guiding the body toward sickness behavior and reflecting maladaptive physiological coping [
45].
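The clustering step used in such analyses can be sketched in miniature. The code below runs a plain K-means on synthetic two-feature data (stand-ins for relative sympathetic activity and breathing rate); the data, feature names and parameters are illustrative assumptions only and bear no relation to the UK Biobank analysis itself.

```python
import numpy as np

def kmeans(x, k=2, iters=50, seed=0):
    """Plain K-means: alternate nearest-centroid assignment and mean update."""
    r = np.random.default_rng(seed)
    centers = x[r.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        centers = np.array([x[assign == j].mean(axis=0) if np.any(assign == j)
                            else x[r.integers(len(x))]   # re-seed an empty cluster
                            for j in range(k)])
    return assign, centers

rng = np.random.default_rng(3)
# Synthetic stand-ins for two autonomic profiles (NOT the UK Biobank data):
# column 0 ~ relative sympathetic activity, column 1 ~ breathing rate
grp_a = rng.normal([0.8, 12.0], [0.1, 1.0], size=(100, 2))
grp_b = rng.normal([0.3, 18.0], [0.1, 1.0], size=(100, 2))
data = np.vstack([grp_a, grp_b])
z = (data - data.mean(axis=0)) / data.std(axis=0)   # z-score the features

assign, centers = kmeans(z)
# Fraction of points placed in the 'right' cluster, up to a label swap
match = (assign[:100] == 0).mean() + (assign[100:] == 1).mean()
purity = max(match, 2 - match) / 2
```

Z-scoring before clustering matters because K-means is distance-based; a feature measured on a larger numeric scale (here, breathing rate) would otherwise dominate the assignment.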
This approach strongly supports the recent position paper of the European College of Neuropsychopharmacology (ECNP) regarding their Precision Psychiatry Roadmap [
46]. The paper advocates moving away from the current ineffective nosology and focusing instead on biology-based markers to improve clinical decision-making. Building on this, our current work combines fractal and nonlinear electrophysiological clusters with additional peripheral metallomic biomarkers [
47]. This could enhance the effectiveness of large-scale, out-of-clinic screening, particularly for underserved subpopulations such as perimenopausal and postmenopausal women, who are at a higher risk of cardiovascular disease and mortality when experiencing depression (in print). We have known for quite a long time that reliance on just one measure extracted from an electrophysiological signal is misleading, as Burns and Ryan rightly stated [
48], demonstrating that each slightly different (in a mathematical sense) measure is capable of extracting different information. Our current research already connects the above-mentioned and enhanced (with fractal and several entropy-based) measures with items on the CIDI psychometric scale to form ontology-based application profiles resulting from applied semantic modeling. As a derivative of knowledge modeling and the FAIR guiding principles, these ontologies are machine-actionable, making the data AI-ready and thus allowing our research to move from a couple of statistical learning models of choice to being findable, accessible, interoperable and reusable (in press).
6. Complexity-Based Explanation for Why NIBS Are Effective in Depression Treatment
Information obtained by applying non-linear measures to neurodegenerative disorders has revealed a new application in neurology. For example, in movement disorders, neurologists have long recognized that elevated levels of cortical excitability (which can be detected as elevated complexity) in patients with Parkinson’s disease (PD) are often a result of the brain attempting to compensate for significant structural (deep brain structures/substantia nigra) damage. The brain’s remarkable plasticity enables it to utilize existing functional structures to perform new tasks when a specific structure is damaged, as is evident following a stroke or injury. We believe that this finding of elevated excitability could also be used for diagnostic purposes.
We hypothesized that the same principle might apply to major depressive disorder (MDD). A key feature of MDD is an inability to regulate emotions, which corresponds with a known physical deficiency: the uncinate fasciculus—the deep white matter tract that links the prefrontal cortex with the limbic system—is compromised [
27]. When its functionality is reduced (e.g., to 65% compared to healthy controls), it cannot transmit regulatory messages effectively. We questioned whether, as in PD, this deep-structure compromise in MDD would lead to detectable changes in cortical excitability. First, we confirmed that patients with MDD do exhibit altered levels of cortical excitability (complexity), which we used for detection [
21,
22,
28].
This raises the question of how non-invasive brain stimulation (NIBS), such as rTMS and tDCS, affects this situation. In order to gain an understanding of the physical process of stimulation, we replicated the mathematical approach of Mutanen and colleagues [
39], who used recurrence plots to demonstrate that, following a TMS stimulus, the shift in brain state persists beyond the duration of the stimulation itself. Using our own tDCS data, we confirmed this, finding that the brain remained in a ‘highly improbable state’ for more than half an hour after stimulation ended—a timeframe that meets the neuroscientific benchmark for plasticity [
40]. We hypothesized that the therapeutic efficacy of NIBS lies in its ability to modulate pathological excitability. Our work suggests that rTMS and tDCS help patients with depression by lowering their already elevated excitability for a period of time [
41]. This also corresponds with the fact that responsive patients often require another series of treatments after around six months (maintenance), as the effect is not permanent, but provides significant relief.
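A recurrence plot of the kind used in this analysis can be sketched simply: a binary matrix marking which pairs of samples lie within a tolerance of one another. The toy signal below imitates a persistent state shift (a ‘highly improbable state’ after stimulation): within-half recurrence stays high, while cross-recurrence between the pre- and post-shift halves collapses. The signal, tolerance and embedding (none here, i.e., dimension 1) are illustrative choices, not the cited methodology in full.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot: R[i, j] = 1 when |x_i - x_j| <= eps."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)

rng = np.random.default_rng(4)
# Toy regime change: a baseline oscillation, then a shifted 'post-stimulation' state
pre = np.sin(np.linspace(0, 20 * np.pi, 300)) + 0.05 * rng.standard_normal(300)
post = 3.0 + np.sin(np.linspace(0, 20 * np.pi, 300)) + 0.05 * rng.standard_normal(300)
R = recurrence_matrix(np.concatenate([pre, post]), eps=0.3)

# Within-half recurrence stays high, but cross-recurrence between the
# pre- and post-shift halves collapses once the state has changed:
within_pre = R[:300, :300].mean()
cross = R[:300, 300:].mean()
```

In a real TMS/tDCS analysis, the persistence of the shifted block structure over time is what indicates that the brain has not returned to its pre-stimulation state.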
Colleagues from our immediate network confirmed that changes in complexity can capture this phenomenon. Lebiecka and colleagues [
42] used HFD to analyze 64-channel EEG recordings from patients with MDD and bipolar depressive disorder (BDD) undergoing rTMS. Their results showed that HFD can be a useful marker both for evaluating the effectiveness of rTMS and for differentiating responders from non-responders. Differences in HFD appeared after rTMS, but the trends detected differed between MDD and BDD patients. The researchers measured the stimulation-induced difference after the 1st, 10th and 20th sessions to differentiate responders from non-responders. In MDD responders, FD decreased after longer stimulation (the whole therapeutic cycle of 20 sessions). In MDD non-responders, FD increased after the 1st and the 10th sessions but, over the whole stimulation period, did not change. When MDD responders were compared with MDD non-responders, FD was higher in responders in every band, both before and after stimulation. In BDD, the trends were the opposite. This means that even without testing after the first session, a clinician can estimate the difference between responders and non-responders [
42]. This suggests that changes in HFD can unambiguously indicate whether the effect of stimulation is positive or absent; identifying responders early allows effective and timely adjustments toward the best therapeutic outcome. A natural next step would be to apply a similar methodology to [
49].
7. Applications in Neurology
The concept of decomplexification—a pathological loss of physiological complexity originally articulated by Goldberger, Pincus, Hausdorff and their colleagues—provides a unifying framework for understanding neurological disorders [
5,
19,
50,
51,
52]. Within this framework, HFD and related nonlinear metrics serve as powerful tools for quantifying multiscale irregularity in neural activity, revealing early and disease-specific alterations in brain dynamics.
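For readers who want the metric made concrete, Higuchi's algorithm can be sketched in a few lines of Python. This is a minimal illustration with an assumed default scale parameter `k_max`; published analyses tune it per signal type. HFD is estimated as the slope of log mean curve length against log reciprocal scale.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Estimate the Higuchi fractal dimension of a 1-D signal.

    For each scale k the signal is split into k interleaved sub-series;
    the mean normalized curve length L(k) scales as k**(-D), so D is the
    slope of log L(k) versus log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_inv_k, log_len = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)          # sub-series x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)   # rescale to original length
            lengths.append(dist * norm / k)
        log_inv_k.append(np.log(1.0 / k))
        log_len.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_inv_k, log_len, 1)
    return slope
```

A smooth curve yields an HFD near 1, while white noise approaches the maximum of 2, which is what makes the measure sensitive to loss of complexity.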
Fractal-based biomarkers have been successfully applied to several neurological conditions. In epilepsy, HFD captures aperiodic shifts related to cortical excitability and treatment effects [
41,
53,
54]. In stroke patients, HFD decreases acutely and increases during recovery, tracking neuroplasticity [
5,
42]. Furthermore, HFD distinguishes between different levels of consciousness and depth of anesthesia [
50,
55,
56] and reveals network-specific alterations in migraines [
57,
58,
59,
60]. Our earlier work demonstrated globally elevated cortical excitability in PD [
61,
62], which is likely to reflect cortical compensation for subcortical damage—an effect that can be quantified through nonlinear EEG analysis. This utility also applies to multiple sclerosis (MS), where fractal dimension has been used to quantify the neurodynamic state of the sensorimotor cortex. Fatigue in MS [
62] was found to distort the dynamics in the primary somatosensory cortex (S1), and a personalized neuromodulation (tDCS) treatment that successfully alleviated fatigue was also found to revert these fractal dynamics to a normal state [
5,
55,
63]. In addition, HFD discriminates between levels of consciousness, including the minimally conscious and vegetative states, and captures anesthetic modulation of the balance between excitation and inhibition [
50,
56,
64]. Network-specific reductions in HFD in migraine-related cortical systems [
63,
64] further support the clinical utility of this approach [
50,
51,
53]. Furthermore, after Smits and colleagues [
65], we demonstrated that mild cognitive impairment (MCI) [
66], which is considered a prodromal phase of AD, can be detected early in seemingly healthy aging individuals via HFD extracted from their EEG. By reanalyzing a dataset from a study on which kinds of physical and/or cognitive activity (e.g., exergames) best keep seniors in shape, we showed that around 20% of those considered ‘healthy’ already showed signs of developing MCI (a significant loss of complexity in EEG-extracted HFD over parieto-occipital regions); since the study was performed outside a clinical setting, this was later validated by a dedicated neurologist [
63,
66]. The effect sizes were larger than those in the previous literature. Based on those features, even PCA showed that the subgroups were clearly separable, demonstrating that further classification was not even necessary with such powerful EEG characterization of the signal [
66]. These results could be used to screen the population older than 65, while prevention is still possible.
From previous neuroimaging studies, one expects to observe lesions and, before them, functional and/or structural changes in the occipital and parietal regions in Alzheimer’s disease (AD) [
64]; interestingly, despite many mentions of occipital involvement in the neuroimaging literature, our MCI results were most prominent at parietal positions (P7, P8, P3, P4, in
Figure 2). As we cannot determine the cause of those aberrations from complexity changes in a composite signal recorded at the cortical surface, we can only hypothesize that in MCI the first changes most likely occur in the parietal regions, perhaps becoming prominent enough with progression to be measured over occipital regions as well. In any case, we know from other projects in which we used HFD for detection that we must not claim direct localization (the sources of the composite EEG signal may change every millisecond, and we cannot confidently reconstruct them at this point); such a result should only be treated as a first detection of a change in trend, and interpreted as a red flag to be communicated to neurologists for further investigation of the case. We base this conviction on the fact that many prior studies reported that over 10% of autopsies in this age range show that seemingly healthy older people had already started developing AD without crossing the threshold of clinical presentation needed to confirm it.
Nevertheless, despite the limitations of this methodology, we are convinced that, as in many other successful early-detection strategies, sensing even the slightest quantifiable changes in this particular problem, the prodromal phase of one of the most devastating dementias (AD), could enable low-cost detection outside the clinic (or in primary care), so that timely protective strategies can be put in place.
Beyond specific pathologies, fractal analysis has proven to be a powerful tool for characterizing the complex organization of the brain’s intrinsic, large-scale networks, known as resting state networks (RSNs). It has been used to characterize hemodynamic activity from functional magnetic resonance imaging (fMRI), demonstrating its ability to map the fractal properties of blood oxygen level-dependent (BOLD) signals within these networks [
63]. Importantly, this is not limited to hemodynamics: neuronal dynamics from EEG can also be used to functionally differentiate these same RSNs, providing a robust electrophysiological basis for network complexity [
67]. This mechanistic insight is also crucial for understanding brain stimulation. Recent work shows that the brain’s ‘state’ of complexity before a stimulus affects the response [
42,
67,
68,
69]. Specifically, pre-stimulus fractal dimension, along with oscillatory power (e.g., in the gamma band), can predict the amplitude of TMS-evoked potentials. This supports the model of the brain operating near a state of ‘criticality’ and highlights the potential of FD as a biomarker of brain reactivity and the immediate effects of neuromodulation [
69,
70,
71,
72].
8. Early Detection of Parkinson’s Disease Using sEMG Nonlinear Markers
In addition to EEG, the nonlinear analysis of surface electromyography (sEMG) offers another underutilized method for the early detection of PD [
71,
72]. EMG signals reflect motor unit synchronization and spinal network dynamics [
73,
74,
75] and can be treated as time series, making them highly suitable for fractal- and entropy-based analysis. This is particularly relevant given that PD has a long prodromal phase, which can last four to six years before classical motor symptoms become visible. During this period, 60–80% of the dopaminergic neurons in the substantia nigra may already have degenerated, highlighting the importance of early detection for neuroprotective intervention. Furthermore, given misdiagnosis rates of up to 30% (PD is often confused with essential tremor, MS, or Huntington’s disease), there is an urgent need for robust physiological markers.
Pioneering work by Meigal and colleagues [
76,
77,
78,
79] demonstrated that sEMG nonlinear parameters are more effective than traditional spectral features at distinguishing patients with PD from healthy controls. By treating sEMG as a nonlinear, deterministic–chaotic process, they found that PD was characterized by significantly increased determinism (≈32% vs. 11–17% in controls) and markedly lower sample entropy (SampEn) (≈0.93 vs. 1.02–1.17), as well as a reduced correlation dimension (CD) (≈4.9 vs. 6.8–6.9). In stark contrast, conventional metrics such as RMS amplitude and median frequency showed no discriminative power. These results strongly suggest heightened motor unit synchronization and reduced neural complexity—classic signatures of decomplexification. A follow-up study achieved 85% diagnostic accuracy using these nonlinear parameters alone [
75], underscoring their translational potential. Notably, the reductions in complexity were most pronounced in biceps recordings, which aligns with the tremor-dominant PD phenotype. This suggests that akinetic-rigid variants may require alternative protocols.
During the same period, we experimented with the effect of TMS in PD but could not confirm a statistically significant, lasting effect. A byproduct of that research in our neurophysiology lab was the consistent observation of elevated excitability in the primary motor cortex (M1) of patients with PD, which corroborates our later realization of how specific compensatory processes shape their EEG and supports further applications in yet another field.
9. Comparison of HFD and SampEn Importance for Movement Disorders Tracking
Although sEMG analysis in medicine has traditionally been dominated by spectral or amplitude measures, nonlinear analyses have repeatedly been shown to provide additional, often more sensitive, information. Klonowski [
2] demonstrated the mathematical reason why Fourier-based algorithms, designed for mechanistic processes, remove essential information from raw electrophysiological signals: in the second step of the procedure, when the original signal is decomposed into spectral components, an original frequency component is lost and replaced by two neighboring components, which is the core reason why spectral approaches are inadequate for complex time series analysis [
2]. The reductionist approach simply does not work for complex living systems. These methods can reveal underlying motor strategies [
76,
77], hidden rhythms [
78] and fatigue [
79]. Critically, they can also detect pathological changes [
72,
73]. For instance, fractal analysis has been used to relate muscle force to sEMG complexity [
80,
81,
82,
83,
84,
85,
86,
87] and, as previously mentioned, to distinguish patients with PD from healthy controls [
72].
However, choosing a nonlinear measure is not trivial [
48]. To investigate this, our pilot study [
88] compared HFD and SampEn for assessing sEMG recorded from the first dorsal interosseous (FDI) muscle of healthy volunteers under TMS stimulation. We analyzed sEMG during mild (10–20% of maximal voluntary contraction, MVC), medium (20–40% MVC) and strong (40–70% MVC) contractions, both before (PRE) and after (POST) a single-pulse transcranial magnetic stimulation (spTMS) pulse. HFD estimates complexity in the time domain, whereas SampEn quantifies the probability that patterns remain similar as the sequence progresses.
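To make the contrast concrete, a compact textbook-style SampEn implementation is shown below (standard defaults m = 2 and r = 0.2·SD; this is our sketch, not the study's exact code).

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn = -ln(A/B), where B counts pairs of length-m templates
    matching within tolerance r (Chebyshev distance, self-matches
    excluded) and A counts the same for templates of length m + 1."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        return (d <= r).sum() - len(templates)   # drop self-matches

    return -np.log(match_count(m + 1) / match_count(m))
```

A highly regular signal such as a slow sine yields a low SampEn, while white noise yields a high one, mirroring the "probability that patterns remain similar" interpretation above.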
Figure 3 shows the raw signal recorded at three different contraction levels from the FDI muscle, studied in this particular research project with TMS application.
The results confirmed that sEMG complexity computed by HFD decreases with increasing contraction intensity, and that spTMS decreased it further at all levels. SampEn, however, behaved differently: it decreased from mild to medium contraction but increased from medium to strong contraction, although it too was consistently decreased by spTMS. This discrepancy, whereby HFD and SampEn responded differently to increased muscle contraction, prompted a calibration analysis using mathematically generated signals (sine waves and Weierstrass functions). The test revealed that the two parameters have different frequency sensitivities: SampEn was more accurate at lower frequencies (0–40 Hz), whereas HFD was more accurate at higher frequencies (60–120 Hz).
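Synthetic signals of known fractal dimension make such calibrations possible. The Weierstrass function, for instance, is a superposition of cosines with geometrically increasing frequencies whose graph has the known dimension D = 2 + log(a)/log(b) for 0 < a < 1 and ab > 1. A minimal generator (illustrative parameter values, not those used in the study) is:

```python
import numpy as np

def weierstrass(t, a=0.5, b=3.0, n_terms=24):
    """Partial sum of the Weierstrass function
    W(t) = sum_n a**n * cos(b**n * pi * t).
    For 0 < a < 1 and ab > 1 the limit is continuous everywhere,
    differentiable nowhere, and its graph has fractal dimension
    2 + log(a)/log(b) (about 1.37 for a=0.5, b=3)."""
    t = np.asarray(t, dtype=float)
    return sum(a**n * np.cos(b**n * np.pi * t) for n in range(n_terms))
```

Feeding such signals of known dimension and known frequency content to each estimator provides the ground truth against which their frequency sensitivities can be mapped.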
This finding was crucial. The reduction in sEMG complexity with increased force may seem counterintuitive, given that a higher force involves more motor unit recruitment and higher discharge rates, which one might assume would result in a more complex signal. However, this depends on the muscle’s recruitment strategy. In the FDI muscle, for example, almost all motor units are recruited at ~30% MVC; any further force generated is due to frequency modulation (i.e., increased discharge rates) [
82,
83]. Therefore, the differing results from HFD and SampEn likely reflect their differential sensitivity to the changing frequency components of the sEMG as the contraction intensity rises.
As an illustration, EMG-signal-extracted HFD and SampEn were compared; the calibration curves (
Figure 4) demonstrate the different frequency-content sensitivities of the two measures [
88].
We concluded that changes in sEMG complexity associated with muscle contraction cannot be accurately depicted by a single complexity measure. Our results strongly suggest that sEMG analysis should incorporate both SampEn and HFD, as these measures provide complementary information about the signal’s different frequency components and, by extension, the underlying corticospinal activity.
10. Discussion of Fractal Analysis Application
The evolution of our understanding of how nonlinear analysis should be applied, in order to extract important information from a system under study before pathological changes translate into clinical manifestation, did not end there. Several other research groups addressing similar questions began applying several nonlinear measures together [
48], in the hope that their different mathematical origins would help extract more information from the signal; however, after combining those measures as features in various statistical learning algorithms for automated detection, to yield more accurate classification, it became obvious that the research had hit the plateau of the sigmoidal knowledge curve. We then analyzed the limitations of this research, focusing on the part of the protocol using various machine learning models [
89]. We already knew that, in order to give optimal results, fractal and nonlinear measures should be applied to the broadband signal rather than to sub-bands extracted from the raw signal [
36,
38]. Minimal preprocessing was also advised, owing to the information lost through some forms of filtering that became standard long ago [
36,
38]. After Klonowski demonstrated two decades ago why Fourier-based analyses of electrophysiological signals are, to say the least, suboptimal [
2], Kalauzi [
90,
91] showed that they are also redundant, since HFD can be calculated from the amplitudes of Fourier spectral components as their weighted function. Esteller [
92] and other researchers compared the different methods used to calculate fractal dimension, showing that Katz’s algorithm is the most consistent at isolating epileptic states (likely due to its exponential transformation of FD and its insensitivity to noise), while HFD is more accurate at estimating FD (but more sensitive to noise). The box-counting algorithm is better reserved for space-filling estimation tasks and performs less well on electrophysiological signals; Petrosian’s method is less suitable for analog signal analysis, given its poor reproduction of the dynamic range of synthetic FD. As with comparisons among entropy-based measures, their use depends on context, and each is informative. We showed that characterizing electrophysiological signals with fractal and nonlinear measures is crucial for any classifier to achieve high accuracy [
21].
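For comparison, Katz's estimator, noted above for its consistency, is particularly simple: it uses only the total curve length, the maximal excursion from the first sample and the number of steps. A minimal sketch (our implementation of the published formula):

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension:
    D = log10(n) / (log10(n) + log10(d / L)),
    where L is the total curve length, d the maximal distance from the
    first sample and n the number of steps along the waveform."""
    x = np.asarray(x, dtype=float)
    n = len(x) - 1                      # number of steps
    L = np.abs(np.diff(x)).sum()        # total length of the curve
    d = np.abs(x - x[0]).max()          # maximal excursion from the start
    return np.log10(n) / (np.log10(n) + np.log10(d / L))
```

A straight line gives D = 1 exactly, while irregular signals push D above 1; the logarithmic compression of the d/L ratio is what makes the estimate tolerant of noise.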
In our extensive analysis of the application of ML models to electrophysiology-based nonlinear features, we described several other reasons why this kind of research is not yet ready for translation into clinical practice. One problem was that many researchers from the biomedical and neuroscientific fields who apply ML did not understand the intricacies of statistical learning. For example, they used the most popular models, such as SVM (where embedded regularization frameworks are recommended, at least the least absolute shrinkage and selection operator, LASSO [
89,
93]), when several other models were much better suited to the nature of the problem. Leave-one-out (LOOCV) and k-fold cross-validation were also popular validation procedures for model evaluation, while the models’ generalization capability typically remained untested on independent samples [
94]. They often neglected the so-called ‘curse of dimensionality’ (the issues that arise when the number of data points is small relative to the intrinsic dimension of the data, usually when the problem is treated in a high-dimensional space), especially in neuroimaging studies, where the number of features is often higher than the number of subjects, triggering predictable overfitting. They rarely employed the Vapnik–Chervonenkis dimension [
93], which should be in standard use for model evaluation or reduction. External validation is often missing; samples are too small and are usually collected at a single site, when multi-site collection would in reality be the solution. When developing a model, one does not wish to train the classifier on general sample characteristics; for example, nonlinear measures may differ because some change with age [
89] or may be characteristic of a certain gender [
95,
96]. On top of these subtleties from statistical learning theory, a similar layer of differences between the performance of algorithms, and the stark difference between any spectral measure and the fractal ones, is well documented [
90,
91]. Some authors refer to these as ‘nuisance variables’ because the algorithm learns to recognize that particular data set with all of its characteristics. Berisha [
34] described yet another common problem contributing to overfitting and unwarranted optimism in ML applications (especially in medicine): ‘blind spots’. He demonstrated a striking inefficiency in ML algorithms used to delineate MCI patients from healthy controls: in one run, the developed model classified patients as controls, and in another, vice versa; it had obviously hit a blind spot. This reflects the fact that we can never collect all possible data: we always collect representative samples. In reality, between those collected samples lie large holes, the ‘blind spots’, which any model sooner or later encounters and fails on (in the best-case scenario, a model developed on a certain training set never approaches the larger unknown data). The only solution to this particular problem is to constantly monitor and optimize the algorithms even after deployment, which is expensive and annuls the essential goal of automation. Electrical engineers routinely approximate this as digitization error, but such techniques are rarely considered in the life sciences, neuroscience or medicine. How can most of these problems be overcome? The usual recommendation is to ‘collect more data’, but in our field this is very expensive (as it means far more data than usual) and diminishes the chances of success of the project as a whole. Another question then arises: is automation of the task even sustainable? No matter how appealing the promised efficacy and low cost of classifiers may be, reality denies them as a viable future standard. In our opinion, portable devices (mostly for ECG, many with medical-grade signal quality) might be the answer, but then one has to take all the precautions of working with a platform or another provider’s cloud, where GDPR and the security of health data may become an issue, which is yet another hard problem to solve.
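The 'curse of dimensionality' described above is easy to reproduce numerically. In this hypothetical sketch (purely synthetic random data, not any real EEG feature set), a minimum-norm linear 'classifier' fits pure-noise labels perfectly whenever features outnumber subjects, yet generalizes no better than chance:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_features = 20, 100          # far more features than subjects

X = rng.standard_normal((n_subjects, n_features))
y = rng.choice([-1.0, 1.0], size=n_subjects)   # labels are pure noise

# Minimum-norm least-squares "classifier": with n_features > n_subjects
# the linear system X w = y is underdetermined and can be solved exactly,
# so the training labels are reproduced perfectly.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
train_acc = np.mean(np.sign(X @ w) == y)

# Fresh data drawn from the same label-free process.
X_new = rng.standard_normal((n_subjects, n_features))
y_new = rng.choice([-1.0, 1.0], size=n_subjects)
test_acc = np.mean(np.sign(X_new @ w) == y_new)

print(train_acc, test_acc)   # perfect training fit; generalization hovers around chance
```

Regularization, honest held-out test sets and larger multi-site samples are the standard defenses against exactly this failure mode.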
The next step, inevitable for successful early detection (and, consequently, for monitoring therapeutic outcomes), came with the advent of the cluster biomarkers discussed in
Section 5: it turned out that we had been looking at only one source (the human brain, represented by the EEG it generates) while missing the connection between the heart and the brain, the so-called heart–brain axis. In addition, the more we learned about the intricate dynamics of each system (brain, heart or muscles), the more we realized that they are all connected and navigated by internal control mechanisms known from anatomical connections (the CAN mentioned above, which physically connects brain regions such as the DLPFC, the hippocampi and the insula with the autonomic nervous system, ANS), and that the hierarchical organization of those systems must play a role in their overall complexity, which had previously not been taken into account. The extended CAN is also known as the allostatic network, and the role it plays, with numerous feedback loops acting as control mechanisms, complicated our grasp of its complexity even further. That everything is synchronized became obvious when Friston and his network, including Andy Clark [
17], started advocating for the Active Inference [
16,
18] approach in their quest to understand how our perception and decision-making actually work (with all that complexity hierarchically organized). Contrary to the previous consensus, they hypothesized that the brain is actually a black box relying on information from numerous sensors at the periphery of the body, where the back-propagation of error is crucial for avoiding surprises, which are unwanted for survival. Numerous neuroscientific studies have in effect confirmed the accuracy of this model, from basic biochemistry, to biological psychiatry, to our new way of interpreting cluster biomarkers as ECG-extracted autonomic profiles that correlate with the severity of depression [
49]. Even HRS and LRS cluster biomarkers (mentioned in Weber et al., 2025) are indicators of healthy vs. unhealthy or ‘maladaptive’ ways in which we cope with stress [
49]. Based on those findings, we identified a gap in the research that we are currently trying to fill: ontology-based application profiles that allow clinicians to quantify, via electrophysiological measurements, standardized clinical severity of depression [in press]. Our current FAIR Mind project is attempting to do this. We have a list of electrophysiology-extracted measures (e.g., a cluster of 14 mixed measures) that characterize the state of the patient’s autonomic health. For this, we are working on extending existing semantic artifacts, both to offer clinicians practical use of electrophysiological measurements and to make the newly generated data machine-actionable (following the FAIR Guiding Principles), which in practice means AI-ready (in press). With this innovative application of already known fractal and nonlinear measures (as the basis for semantic modeling), connected via knowledge modeling with clinical presentation, we believe the future reuse and interoperability of data will be much easier and a better candidate for standardization in clinical practice.
To summarize this tortuous path: we eventually recognized that, for complex systems, no single simple biomarker can suffice, as so many factors, both internal and external, are involved. Our understanding progressed from a single fractal biomarker, to several markers applied in parallel to a single signal source, to cluster biomarkers representing the state of the ANS, which is important for the regulation of downstream systems. This new development, in which we connected this electrophysiological research (already tested for viability) with the standardized clinical protocols used to establish the diagnosis and severity of the disorder, awaits replication while we wait for approved access to a larger public dataset. This shows once more that the reality of scientific research can be disappointing at times, but we will eventually bring the fractal and the fractional into practical use.
Following the empirically developed position above, we turn to yet another promising use of the fractal description of human physiological structure (in this case, the skin: a multilayered system with many embedded structures serving various purposes), which motivated a fractional calculus-based modeling framework for digital twins; in our view, this holds promise for other applications as well. The preceding sections on fractal and nonlinear biomarkers serve one particular purpose: making early diagnosis possible and allowing the slightest changes indicating the progression of a patient’s state to be monitored, so that timely intervention becomes possible. That body of work is extensive, but it focuses on the analysis of time series or analog signals, which is limited in a geometrical sense. From another point of view (akin to the separation of fractal analysis into 2D and 3D), the next level is modeling the complex processes that unfold within such complex fractal structures as physiological tissues. If one aims, for example, to develop a new drug (a task that nowadays leans heavily on AI to discover previously unknown links), then besides discovering a promising biochemical candidate, an even more important part of the research is how the target substance is transported through a highly inhomogeneous structure (the digestive system, the tortuous network of blood vessels, the alveolar system, or the network of neurons through which ions or charged particles are transported). In the next section, we give an overview of that research.
11. Fractional Modeling of Human Physiology for Digital Twins
The significant variability in human physiology, particularly in pharmacokinetics and pharmacodynamics, highlights the limitations of traditional modeling approaches. A recent project (2022–2025) focused on developing a digital twin for transdermal fentanyl transport highlighted this issue. Although standard modeling software (COMSOL v6.2) is excellent for many physical processes, it usually models the transport of substances across highly heterogeneous membranes, such as human skin, as straightforward diffusion (Fick’s second law) through a homogeneous gel. Our simulation studies confirmed that this is an obvious oversimplification of real physiological tissue structure.
To better model reality, two crucial concepts were introduced: first, the fractal, tortuous path that a molecule takes through the inhomogeneous skin layers (where it can be trapped and released later, indicating the memory processes of skin), and second, a novel mathematical framework based on fractional calculus. This combined approach is necessary to describe the anomalous diffusion process realistically. The methodology was detailed in two publications: one introducing a time-delayed flux concept [
97], and the other focusing on inertial memory effects in molecular transport across nanoporous membranes [
98].
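The qualitative difference between Fickean and anomalous diffusion can be illustrated with a toy continuous-time random walk (CTRW), a standard model of anomalous transport. This is a generic illustration of subdiffusion, not the fractional flux model of the cited papers: heavy-tailed waiting times with tail exponent α < 1 make the mean squared displacement grow as t^α rather than the Fickean t.

```python
import numpy as np

# CTRW: unit jumps separated by heavy-tailed Pareto waiting times
# psi(t) ~ t**(-1 - alpha). For alpha < 1 the mean wait diverges and
# theory predicts subdiffusion, <x^2>(t) ~ t**alpha.
rng = np.random.default_rng(0)
alpha, n_walkers, n_jumps = 0.6, 5000, 400

waits = rng.pareto(alpha, size=(n_walkers, n_jumps)) + 1.0     # waits >= 1
jump_times = np.cumsum(waits, axis=1)
paths = np.cumsum(rng.choice([-1.0, 1.0], size=(n_walkers, n_jumps)), axis=1)

def msd_at(t):
    """Mean squared displacement over all walkers at time t."""
    n_done = (jump_times <= t).sum(axis=1)          # jumps completed by t
    x = np.where(n_done > 0, paths[np.arange(n_walkers), n_done - 1], 0.0)
    return np.mean(x**2)

t1, t2 = 1e2, 1e4
scaling = np.log(msd_at(t2) / msd_at(t1)) / np.log(t2 / t1)
print(scaling)   # near alpha = 0.6, well below the Fickean exponent of 1
```

With finite-mean waiting times the same script recovers a scaling exponent near 1, which is exactly the distinction that separates the classical and generalized flux models compared in the cited work.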
When observing the transport of a drug molecule from a reservoir (the patch containing the drug) across the skin barrier, a key finding emerged: all generalized models that account for molecular inertia, and thus for the non-locality of flux and the finite propagation speed, predict oscillatory changes and resonances in the flux profiles at high frequencies (
Figure 5).
Crucially, at low frequencies, the spectra of cumulative flux predicted by all generalized models converge with the classical model, indicating that they all predict the same total amount of drug delivered at steady state. This work [
97] provides further clear evidence that in order to accurately model the complex, non-homogeneous nature of human physiology, a fractal and fractional approach is not only beneficial, but necessary.
By visual comparison (
Figure 5), it can be concluded that the Fickean prediction is too simple for complex human physiology, given its oversimplified treatment of the memory properties of this highly nonhomogeneous structure.
When we compared the results of the preexisting digital twin output (using just diffusion for modeling transport,
Figure 6), with all known theoretical pharmacokinetic and pharmacodynamic knowledge included in the developed simulator, the output curve turned out to be a monotonic function, contrary to the real data (courtesy of a colleague who collected healthy participants’ plasma samples to illustrate absorption). The measured absorption followed a completely different mathematical description, a quasi-oscillatory function with damped amplitudes, versus the digital twin’s monotonic function, which was almost identical for all patients [
97].
Shortly before the completion of this work, we published a further study that develops this approach by examining boundary conditions [
99].
If we want to develop digital twins that accurately represent transport processes, drug uptake and excretion from the system, more accurate representations of these extremely complex physiological processes, taking into account the fractal description of structure and the highly inhomogeneous character of human physiology, are urgently needed.
12. Conclusions and Future Direction
Scientists from other fields, including medical professionals, often describe concepts of physiological complexity as ‘novel’, even though the mathematical concepts of fractals and non-linearity in natural shapes, forms and processes are by no means new. One aim of this work is to address this misconception, which is probably a consequence of students on life science courses receiving little to no training in mathematics or advanced statistical learning. We also wanted to offer an overview of the evolution of our understanding of how, in different settings, the results of this analytic approach can be described and appropriately interpreted. Technological advances in computing power and cloud computing, together with the ubiquity of portable and wearable devices with medical-grade signal quality, may contribute to the wider acceptance of technically feasible methods of early detection and monitoring outside the clinic, in real-life situations. In this work, we present how these measures should be applied and interpreted to provide the most accurate descriptions of reality and to foster better approaches to early detection and forecasting in medicine, supported by new theoretical and technological advances. This approach to signal analysis, forecasting, early detection and monitoring has already produced sufficient evidence to qualify for translational studies, which would increase the accuracy and effectiveness of diagnostics, particularly in neurology and psychiatry.
Building on this foundation, the future direction of the field must shift from proof of concept to robust clinical application. This means bringing biomarkers from the laboratory into wearable technology and clinical practice, which requires large-scale validation of these metrics against current standards: for example, sEMG complexity for preclinical PD screening, or the superiority of HFD over spectral methods for neurodegenerative processes. They must also be integrated into telehealth platforms, enabled by the semantic modeling that makes those markers machine-actionable. However, the utility of these metrics extends beyond mere diagnostics. The next frontier is biomarker-guided therapeutics, in which complexity measures such as pre-stimulus fractal dimension predict how a patient will respond to neuromodulatory interventions. Therapeutic success itself is then redefined as the ‘normalization’ of pathological fractal dynamics, as demonstrated in the treatment of MS fatigue and the rTMS treatment of depression. To achieve this, our methodologies must also evolve. The future lies in fusing multiple nonlinear metrics into high-dimensional ‘complexity fingerprints’, recognizing that HFD and SampEn, for example, in combination with other markers (forming cluster biomarkers), provide complementary information. It also lies in building next-generation digital twins on more realistic foundations, such as the fractal geometry and fractional calculus framework, to accurately model the anomalous diffusion and complex dynamics inherent to human physiology.