Review

A Simple Overview of Complex Systems and Complexity Measures

by
Luiz H. A. Monteiro
1,2
1
Escola de Engenharia, Universidade Presbiteriana Mackenzie, São Paulo 01302-907, SP, Brazil
2
Escola Politécnica, Universidade de São Paulo, São Paulo 05508-010, SP, Brazil
Complexities 2025, 1(1), 2; https://doi.org/10.3390/complexities1010002
Submission received: 30 April 2025 / Revised: 29 May 2025 / Accepted: 4 June 2025 / Published: 11 June 2025

Abstract

Defining a complex system and evaluating its complexity typically requires an interdisciplinary approach, integrating information theory, signal processing techniques, principles of dynamical systems, algorithm length analysis, and network science. This overview presents the main characteristics of complex systems and outlines several metrics commonly used to quantify their complexity. Simple examples are provided to illustrate the key concepts. Speculative ideas regarding these topics are also discussed here.

1. Introduction

This article discusses how complexity measures reflect the nontrivial architecture and/or the irregular behavior of a complex system, highlighting the importance of both structural and dynamical aspects. The title of this article, featuring the words “simple” and “complex”, is a rhetorical device used to emphasize the difficulty of defining an adjective without relying on another adjective with either the same or the opposite meaning. For instance, dictionaries often define “big” as something “large” (a synonym) or as something “not small” (an antonym). Similarly, “complex” is described as “intricate” or as “not simple”.
A helpful way to begin understanding the meaning of the adjective “complex” in the context of “complex system” might be to consider how this word is employed as a noun in a wide range of diverse realities. For instance, the terms guilt complex and superiority complex are used in psychology, the industrial complex and the residential complex in civil engineering, and the vitamin B complex and the Golgi complex in biology. In these six examples, there is a diversity of interconnected entities (thoughts, buildings, and organic substances). In the expression “complex system”, the adjective “complex” conveys this same idea: a network of many elements with distinct characteristics [1,2,3,4]. However, the possibility of systems with very different natures being considered complex imposes a difficulty: there is no single definition of a complex system [1,2,3,4]. And if there is no single definition, there is no single way to measure its complexity [1,2,3,4]. A factor contributing to this lack of consensus is the domain-specific dependency: a feature that is relevant in one study may be irrelevant in another. Despite these issues, there is broad agreement that biomes [5], neural circuits [6], protein interactions [7], power grids [8], networks of air transport [9] and maritime transport [10], Earth’s climate [11], the spread of contagious diseases and rumors [12], the global economy [13], and galaxy clusters [14] are complex systems. Recent examples include the COVID-19 pandemic [15], the cryptocurrency transaction recording network [16], the diffusion process of electric vehicles [17], and large language models, such as ChatGPT (Generative Pre-trained Transformer) [18]. Recognizing that a system is complex is often easier than defining what it means to be complex.
In the examples mentioned above, the collective behavior of the systems goes beyond the sum of the individual behaviors of their parts [1,2,3,4]. Indeed, the function of the brain cannot be understood by examining the firing of a single neuron; the performance of the worldwide economic network cannot be predicted by analyzing the actions of a single bank. As a consequence, modeling a complex system is a difficult task because models necessarily involve simplifications, which lead to incomplete descriptions of the reality being studied [1,19,20,21,22]. Neglecting seemingly minor aspects of a complex system in a theoretical study can significantly undermine the reliability of the model’s predictions [1,2,3,4]. The traditional reductionist method, which examines components separately, has been criticized by classical authors on complexity, such as L. von Bertalanffy [23] and E. Morin [24], who advocate for more holistic and integrative approaches.
Although complexity is typically associated with multi-agent interactions, simply structured systems exhibiting chaotic or random behavior are sometimes described as complex due to their irregular dynamics. Usually, the complexity of both isolated and interconnected systems is assessed based on the variability of the time series that they display, changes in their qualitative dynamical behaviors, the length of the algorithms capable of generating them, and the topological properties of the graphs representing them. Here, these topics are explored in Section 2, Section 3, Section 4 and Section 5. In Section 6, the main conclusions and some speculations are presented.
A remark: under a mean-field approximation, complex systems composed of many elements can be roughly described by low-dimensional dynamical systems [25,26,27,28,29,30,31,32]. It is clear that such an approximation implies a loss of complexity. However, for simplicity, all examples examined here are first-order systems.

2. Complexity as Information

From the perspective of telecommunication engineering, the complexity of a system can be quantified based on the variability of its time series. In 1948, C. E. Shannon proposed a mathematical formula to quantify the information contained in a message sent from a source through a communication channel [33,34]. Consider that the probability of the k-th message being transmitted is $p_k$. For Shannon, its information content $h_k$ is related to $p_k$ by $h_k = \log(1/p_k)$. Thus, if $p_k = 1$, then $h_k = 0$ (messages reporting events that are certain to occur lack both variability and novelty); if $p_k \to 0$, then $h_k \to \infty$ (messages reporting rare events carry a lot of information). Consider also that there are A alternative messages (or symbols) that can be transmitted; thus, $\sum_{k=1}^{A} p_k = 1$. The average information content per message sent is defined as H and is obtained from [33,34]:
$$H = \sum_{k=1}^{A} p_k h_k = \sum_{k=1}^{A} p_k \log\left(\frac{1}{p_k}\right) = -\sum_{k=1}^{A} p_k \log p_k \qquad (1)$$
This weighted average of $h_k$ is called Shannon entropy or information entropy. An analogous formula, called thermodynamic entropy S, was introduced in studies on statistical mechanics conducted by J. W. Gibbs about 70 years before Shannon's work. Thermodynamic entropy reflects the amount of disorder associated with the microscopic states of a physical system [35,36]. In Quantum Mechanics, J. von Neumann proposed a similar expression in 1927 [37] to quantify the uncertainty of a quantum system's state.
In Equation (1), the choice of base for the logarithm is arbitrary. If base 2 is taken, H provides the average number of bits needed to represent the information content of the message. For instance, if $A = 2$ and $p_1 = p_2 = 1/2$, then $H = 1$ bit; that is, one bit is enough to specify which of two equally probable events has occurred (for instance, if 0 denotes heads and 1 denotes tails, then one bit is enough to describe the result of a coin toss).
The value of H (in base 2) of a source represents the average number of bits needed to encode its messages in an optimal coding scheme [33,34]. The maximum value of H is obtained for $p_k = 1/A$ for $k = 1, 2, \ldots, A$ [33,34]. In this case, $H = H_{max} = \log A$. Therefore, the entropy is maximized if all messages are equally likely. This case represents the highest uncertainty and the greatest average information gain per message transmitted through the channel. In statistical mechanics, the maximum entropy is given by $S = k_B \ln W$, in which $k_B$ is the Boltzmann constant and W is the number of accessible microstates corresponding to a given macroscopic state [35,36]. Thus, W is the number of microscopic configurations in which the system can be arranged that result in the same macrostate. The formula $S = k \log W$ is inscribed on the tombstone of L. E. Boltzmann, who derived it in 1877. The nontrivial relation between information and thermodynamics was evidenced by the thought experiment known as Maxwell's demon, which suggests that the thermodynamic entropy can be reduced by manipulating information [38].
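For concreteness, a minimal Python sketch of Equation (1) is given below; the function name and the example distributions are illustrative choices, not taken from the original text.

```python
import numpy as np

def shannon_entropy(p, base=2):
    """Shannon entropy H = -sum(p_k log p_k), skipping zero-probability symbols."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                          # terms with p_k = 0 contribute nothing
    return -np.sum(p * np.log(p)) / np.log(base)

print(shannon_entropy([0.5, 0.5]))        # fair coin: 1 bit
print(shannon_entropy([1.0, 0.0]))        # certain event: 0 bits
print(shannon_entropy([0.25] * 4))        # four equally likely symbols: log2(4) = 2 bits
```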
The information entropy H has been used to evaluate the complexity of time series, based on the variability of its symbols (or values). In these applications, $p_k$ represents the probability of the k-th symbol appearing in the time series. For instance, the LMC (López-Ruiz, Mancini, Calbet) complexity measure $C_{LMC}$ is defined as follows [39]:
$$C_{LMC} = H \sum_{k=1}^{A} \left(p_k - \frac{1}{A}\right)^2 \qquad (2)$$
and the SDL (Shiner, Davison, Landsberg) complexity measure as follows [40]:
$$C_{SDL} = \left(\frac{H}{H_{max}}\right)^{a} \left(1 - \frac{H}{H_{max}}\right)^{b} \qquad (3)$$
in which a and b are positive constants (here, $a = b = 1$). Observe that $C_{LMC} = 0$ and $C_{SDL} = 0$ for $H = 0$ (perfect order) and for $p_k = 1/A$, which implies $H = H_{max}$ (absolute disorder). Therefore, for these authors, complexity lies between full predictability and pure randomness. These measures have been employed to analyze, for instance, the sunspot cycle [41], pseudorandom bit generators [42], and the Brazilian exchange rate [43].
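Continuing the sketch above (again with illustrative function names), Equations (2) and (3) can be computed directly from a discrete distribution:

```python
def lmc_complexity(p, base=2):
    """LMC complexity (Eq. 2): entropy times the squared distance to the uniform distribution."""
    p = np.asarray(p, dtype=float)
    A = len(p)
    disequilibrium = np.sum((p - 1.0 / A) ** 2)
    return shannon_entropy(p, base) * disequilibrium

def sdl_complexity(p, a=1.0, b=1.0, base=2):
    """SDL complexity (Eq. 3): (H/Hmax)^a * (1 - H/Hmax)^b, with Hmax = log A."""
    A = len(p)
    h_norm = shannon_entropy(p, base) / (np.log(A) / np.log(base))
    return (h_norm ** a) * ((1.0 - h_norm) ** b)

print(lmc_complexity([1.0, 0.0, 0.0]), sdl_complexity([1.0, 0.0, 0.0]))   # 0, 0 (perfect order)
print(lmc_complexity([1/3, 1/3, 1/3]), sdl_complexity([1/3, 1/3, 1/3]))   # 0, 0 (absolute disorder)
```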
Note that these complexity measures require a finite number A of symbols, but a time series may consist of a long sequence of real numbers. The conversion of real numbers into symbols through a quantization process affects the value of H. The smaller the partition, the more precise the symbolic representation; however, it may also increase sensitivity to noise. For instance, it is intuitive to see that random noise of high magnitude (when compared to the original signal) increases H because it adds uncertainty to the system, leading to a more uniform probability distribution and greater unpredictability.
Information entropy was originally conceived to address stochastic sources of information; however, it, along with its derivative complexity measures, has also been used to characterize the complexity of deterministic sources of apparent randomness [39,40,44,45,46]. Figure 1 illustrates four dynamical behaviors obtained for the sine map given by [47,48]:
$$x(t+1) = F(x(t)) = q \sin(\pi x(t)) \qquad (4)$$
If $0 \leq q \leq 1$, then $0 \leq x(t) \leq 1$. The dynamics of this deterministic discrete-time dynamical system have already been studied [49,50,51,52]. For $t \to \infty$, $x(t)$ tends toward zero for $0 \leq q < 0.318$, it approaches a non-zero steady-state for $0.318 < q \leq 0.720$, and it tends toward either an oscillatory or a chaotic solution for $0.720 < q \leq 1$ (because, in this parameter range, windows of periodic behavior emerge, embedded within regions characterized by chaotic dynamics). Figure 1 shows that $x(t)$ tends toward oscillation (with period 2) for $q = 0.8$ and toward a chaotic regime for $q = 1$. Chaos implies high sensitivity to initial conditions [53,54]. Thus, starting from two arbitrarily close initial conditions ($x(0) = 0.8$ and $x(0) = 0.801$ in this plot), the system generates two bounded and aperiodic solutions that diverge (exponentially) from each other (in the corresponding state space). Since, in practical situations, the initial condition cannot be known with infinite precision, a small deviation in its value can lead to a solution that is radically different from the one observed in the actual chaotic system. The characteristic timescale associated with the divergence of two nearby chaotic solutions is inversely proportional to the system's largest (positive) Lyapunov exponent [48,55].
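The sensitivity to initial conditions mentioned above can be observed directly by iterating Equation (4) from two nearby starting points; the sketch below (with arbitrary parameter choices) prints the growing gap between the two orbits.

```python
import numpy as np

def sine_map(x0, q, n):
    """Iterate x(t+1) = q*sin(pi*x(t)) for n steps and return the whole orbit."""
    x = np.empty(n + 1)
    x[0] = x0
    for t in range(n):
        x[t + 1] = q * np.sin(np.pi * x[t])
    return x

# Chaotic regime (q = 1): two orbits starting 0.001 apart separate quickly
a = sine_map(0.800, q=1.0, n=40)
b = sine_map(0.801, q=1.0, n=40)
print(np.abs(a - b)[::10])   # gap grows from 1e-3 toward order 1 within a few dozen iterations
```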
The complexity of the solutions shown in Figure 1 can be assessed by computing H, $C_{LMC}$, and $C_{SDL}$, as presented in Figure 2. In the plot of H, downward peaks are observed in the low-period periodic windows, which are reflected as upward peaks in the plots of $C_{LMC}$ and $C_{SDL}$. To generate this figure, the sine map was iterated for 2000 time steps, starting from the initial condition $x(0) = 0.8$; then, the first 1200 iterations were discarded to remove transients. Also, the interval $[0, 1]$ was divided into A equally sized subintervals (here, $A = 100$). In this context, $p_k$ corresponds to the relative frequency with which the k-th subinterval is visited by the remaining 800 iterations. Therefore, in this quantization process, a time series of real numbers is represented by a sequence of symbols, with each symbol corresponding to a distinct subinterval. For instance, for $q < 0.720$, only a single subinterval is visited; consequently, $p_k = 1$ for this subinterval and $p_k = 0$ for all others, implying $H = 0$ bits. For $q = 0.8$, $x(t)$ oscillates between 0.475 and 0.798 (after discarding transients). Thus, for the two subintervals containing these values, $p_k = 1/2$; for the others, $p_k = 0$. Therefore, for $q = 0.8$, $H = [(1/2) \times \log_2 2] + [(1/2) \times \log_2 2] = 1$ bit, $C_{LMC} = 1 \times [((1/2) - (1/100))^2 + ((1/2) - (1/100))^2] = 0.480$ bits, and $C_{SDL} = [1/\log_2 100] \times [1 - 1/\log_2 100] = 0.128$. For $q = 1$, the 100 subintervals are visited with approximately equal frequencies; that is, $p_k \approx 1/100$ for the 100 symbols. Hence, $H \approx \log_2 100 = 6.644$ bits, $C_{LMC} \approx 0$ bits, and $C_{SDL} \approx 0$. The chaotic behavior of maps with sinusoidal nonlinearity has been applied, for instance, in image encryption [56].
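A rough reproduction of this quantization procedure (reusing the sine_map, shannon_entropy, lmc_complexity, and sdl_complexity sketches above; the bin count and iteration numbers follow the text) could look like this:

```python
def symbol_probabilities(series, A=100):
    """Quantize a series in [0, 1] into A equal subintervals and return visit frequencies."""
    counts, _ = np.histogram(series, bins=A, range=(0.0, 1.0))
    return counts / counts.sum()

for q in (0.8, 1.0):
    orbit = sine_map(0.8, q, 2000)[1200:]        # discard the first 1200 iterations as transients
    p = symbol_probabilities(orbit, A=100)
    print(q, shannon_entropy(p), lmc_complexity(p), sdl_complexity(p))
# H should come out near 1 bit for q = 0.8 and near log2(100) = 6.64 bits for q = 1.
```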
A variety of entropy measures inspired by information entropy have been proposed [57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75]. Permutation entropy [61,65,76,77] is one of them. It evaluates the temporal structure of a time series by analyzing the relative ordering of its values within short windows. For instance, the first seven values obtained by iterating the sine map from $x(0) = 0.8$ for $q = 1$ are $\{0.800, 0.588, 0.962, 0.118, 0.364, 0.910, 0.280, \ldots\}$. By sliding a window of size $d = 3$ across this time series (without skipping any numbers), five three-element sequences are formed: $\{0.800, 0.588, 0.962\}$, $\{0.588, 0.962, 0.118\}$, $\{0.962, 0.118, 0.364\}$, $\{0.118, 0.364, 0.910\}$, and $\{0.364, 0.910, 0.280\}$. By assigning 0 to the lowest value, 2 to the highest value, and 1 to the intermediate value, these five sequences can be rewritten as follows: $\{1, 0, 2\}$, $\{1, 2, 0\}$, $\{2, 0, 1\}$, $\{0, 1, 2\}$, and $\{1, 2, 0\}$. Recall that by permuting the numbers 0, 1, and 2, six possible sequences can be formed: $\sigma_1 = \{0, 1, 2\}$, $\sigma_2 = \{0, 2, 1\}$, $\sigma_3 = \{1, 0, 2\}$, $\sigma_4 = \{1, 2, 0\}$, $\sigma_5 = \{2, 0, 1\}$, and $\sigma_6 = \{2, 1, 0\}$. Let $\rho_i$ be the relative frequency of the i-th sequence. In this example, $\rho_1 = \rho_3 = \rho_5 = 1/5$, $\rho_4 = 2/5$, and $\rho_2 = \rho_6 = 0$. The permutation entropy P is computed from the following:
$$P = \sum_{i=1}^{d!} \rho_i \log\left(\frac{1}{\rho_i}\right) \qquad (5)$$
in which $d!$ is the maximum number of permutations. For this example, by taking base 2, $P = 3 \times (1/5) \times \log_2 5 + (2/5) \times \log_2(5/2) = 1.922$ bits. The maximum permutation entropy is $P_{max} = \log d!$ (in this example, $P_{max} = \log_2 3! = 2.585$ bits), which can be used to normalize P. Evidently, other window sizes can be considered in this calculation (in this example, for $d = 2$, $P = 1$ bit). A low value of P implies regularity; a high value indicates irregularity (due to randomness or chaos). Figure 3 presents P as a function of q for the sine map for $d = 2$ and $d = 3$ with $0.9 \leq q \leq 1$. In this figure, the same 800 iterations per value of q, as used in Figure 2, were considered. Observe that d strongly affects P.
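A possible implementation of this procedure (a minimal sketch; the function name is an illustrative choice) reproduces the worked example:

```python
from itertools import permutations
from math import log2

def permutation_entropy(series, d=3):
    """Permutation entropy (base 2, Eq. 5) from ordinal patterns of length d."""
    counts = dict.fromkeys(permutations(range(d)), 0)
    for i in range(len(series) - d + 1):
        window = series[i:i + d]
        order = sorted(range(d), key=lambda j: window[j])     # indices sorted by value
        pattern = tuple(order.index(j) for j in range(d))     # rank of each position (0 = lowest)
        counts[pattern] += 1
    total = sum(counts.values())
    rho = [c / total for c in counts.values() if c > 0]
    return -sum(r * log2(r) for r in rho)

x = [0.800, 0.588, 0.962, 0.118, 0.364, 0.910, 0.280]
print(permutation_entropy(x, d=3))   # about 1.922 bits, as computed above
print(permutation_entropy(x, d=2))   # 1 bit
```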
Permutation entropy is generally robust to noise, as small fluctuations may not alter the ordinal patterns within the windows [61]. For the time series with seven numbers shown above, a 10% noise does not affect the relative ordering in the three-element sequences.
Another well-known alternative entropy measure was proposed by C. Tsallis [57]. It is written as follows:
$$T(s) = \kappa \, \frac{1 - \sum_{k=1}^{A} p_k^{\alpha}}{\alpha - 1} \qquad (6)$$
in which $\alpha$ and $\kappa$ are constants. The Tsallis entropy $T(s)$ of the system s reduces to the Shannon entropy H (or to the thermodynamic entropy S) in the limit $\alpha \to 1$ (by applying L'Hôpital's rule). Hence, it can be considered a generalization of the standard entropy. Let the Tsallis entropy of the systems $s_1$ and $s_2$ be obtained from the following:
$$T(s_1) = \kappa \, \frac{1 - \sum_{k=1}^{A} p_k^{\alpha}}{\alpha - 1} \qquad (7)$$

$$T(s_2) = \kappa \, \frac{1 - \sum_{j=1}^{B} p_j^{\alpha}}{\alpha - 1} \qquad (8)$$
with $\sum_{k=1}^{A} p_k = 1$ and $\sum_{j=1}^{B} p_j = 1$, in which A and B are the numbers of available symbols (or microstates) of the systems $s_1$ and $s_2$, respectively. Assume that these systems are statistically independent; that is, $p_{kj} = p_k p_j$. The Tsallis entropy of the combined system $s_1 + s_2$ can be written as follows [78]:
$$T(s_1 + s_2) = \kappa \, \frac{1 - \sum_{k=1}^{A} p_k^{\alpha} \sum_{j=1}^{B} p_j^{\alpha}}{\alpha - 1} = T(s_1) + T(s_2) + (1 - \alpha) \, T(s_1) \, T(s_2) / \kappa \qquad (9)$$
Therefore, the Tsallis entropy is purely additive only when $\alpha = 1$ (in fact, the information entropy and the thermodynamic entropy are additive) [57]. For $\alpha \neq 1$, $T(s)$ is non-additive, suggesting the presence of nontrivial correlations. Under this condition, the whole is not equal to the sum of its parts, which is a common feature of complex systems. Equation (9) shows that $\alpha < 1$ implies super-additivity, while $\alpha > 1$ implies sub-additivity [79]. Tsallis entropy has been employed in numerous studies related to complexity [78,79,80,81,82,83].
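A short numerical check of Equations (6) and (9) (a sketch with arbitrary example distributions and $\kappa = 1$) is shown below:

```python
import numpy as np

def tsallis_entropy(p, alpha, kappa=1.0):
    """Tsallis entropy (Eq. 6): kappa * (1 - sum(p^alpha)) / (alpha - 1)."""
    p = np.asarray(p, dtype=float)
    if np.isclose(alpha, 1.0):                     # limit alpha -> 1 recovers the Shannon form
        p = p[p > 0]
        return -kappa * np.sum(p * np.log(p))
    return kappa * (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)

# Non-additivity for two statistically independent systems (Eq. 9), with kappa = 1
p1, p2, alpha = np.array([0.7, 0.3]), np.array([0.2, 0.5, 0.3]), 2.0
joint = np.outer(p1, p2).ravel()                   # p_kj = p_k * p_j
t1, t2 = tsallis_entropy(p1, alpha), tsallis_entropy(p2, alpha)
print(tsallis_entropy(joint, alpha))               # 0.7796
print(t1 + t2 + (1 - alpha) * t1 * t2)             # same value, confirming Eq. (9)
```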
There is no single universal definition of entropy that applies optimally to all types of data or systems. For instance, Shannon entropy does not take into account the sequential structure within data. Permutation entropy translates a time series into a symbolic sequence that captures its temporal structure. Shannon entropy does not distinguish the complexity of two discrete probability distributions that are similar, potentially missing subtle temporal correlations. Tsallis entropy generalizes Shannon entropy by modifying how probabilities contribute to the overall measure through the inclusion of a tunable parameter α , making it suitable for systems with non-additive behavior.
Entropy measures have also been used to deal with qubits (quantum bits, existing in a superposition of both 0 and 1), particularly when analyzing quantum entanglement and the uncertainty associated with quantum states [84,85,86,87,88].
In this section, only discrete probability distributions have been considered up to now; however, the continuous domain has also been analyzed. Shannon defined information entropy for the one-dimensional continuous case, denoted here by $H_c$, as follows [33,34]:
$$H_c = -k \int_{L_1}^{L_2} p(y) \log p(y) \, dy \qquad (10)$$
in which y is a random variable such that $y \in [L_1, L_2]$. The normalization condition for the probability density function $p(y)$ is expressed as $\int_{L_1}^{L_2} p(y) \, dy = 1$. In the case of an n-dimensional distribution, n integrals are required in Equation (10).
The LMC complexity measure for the one-dimensional continuous case, denoted here by $C_{LMC}^{c}$, is determined from the following [89]:
$$C_{LMC}^{c} = e^{H_c} \int_{L_1}^{L_2} \left(p(y) - p_u\right)^2 dy \qquad (11)$$
in which $p_u$ represents the uniform probability distribution over the domain $L_1 \leq y \leq L_2$; therefore, $p_u = 1/(L_2 - L_1)$ (for $L_1 \to -\infty$ and $L_2 \to +\infty$, $p_u \to 0$). Similar to the discrete formulation, this complexity measure is expressed as the product of a term associated with disorder and another term quantifying the deviation from the equiprobable distribution. It is important to note, however, that unlike the discrete case, in which H is always a non-negative quantity, $H_c$ can assume negative values [90]. Hence, to ensure positivity, as well as invariance under translation and rescaling transformations [89], the exponential form $e^{H_c}$ is used instead of $H_c$. This complexity measure has been applied in various contexts, including the analysis of the periodic table [91] and the structure of neutron stars [92].
Another relevant information measure for the continuous case was proposed by R. A. Fisher in 1925 [93]. For the one-dimensional probability density function $p(y;\theta)$, the Fisher information I is a measure of how much information an observable random variable y carries about an unknown parameter $\theta$ of $p(y;\theta)$ ($\theta$ can be, for instance, the mean or variance of $p(y;\theta)$). The greater the value of I, the more precisely the parameter $\theta$ can be estimated from y. Fisher proposed calculating I as follows [93]:
$$I = \int_{L_1}^{L_2} \left(\frac{\partial \ln p(y;\theta)}{\partial \theta}\right)^2 p(y;\theta) \, dy \qquad (12)$$
Since Fisher information can quantify the degree of structure or predictability in a complex system, it has been used in complexity analyses. For one-dimensional systems, the Fisher–Shannon complexity is defined as $I J$, in which $J = e^{2 H_c}/(2 \pi e)$, with $I J \geq 1$ [94,95,96]. This metric has been computed in studies on different topics, such as atomic and molecular physics [96,97], orthogonal polynomials [98], and soil degradation [99].
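As a numerical illustration (a minimal sketch, assuming the natural logarithm in $H_c$ with $k = 1$, taking $\theta$ as the mean of a Gaussian density, and using SciPy for the integrals), the product $I J$ comes out close to 1 for a Gaussian, consistent with the bound $I J \geq 1$ quoted above:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

sigma = 1.5
p = lambda y: norm.pdf(y, loc=0.0, scale=sigma)
# Differential entropy H_c (natural log), integrated over a range wide enough for the density
Hc, _ = quad(lambda y: -p(y) * np.log(p(y)), -20, 20)
# Fisher information with respect to the mean: (d ln p / d theta)^2 averaged over p
I, _ = quad(lambda y: (y / sigma**2) ** 2 * p(y), -20, 20)
J = np.exp(2 * Hc) / (2 * np.pi * np.e)
print(I * J)   # approximately 1 for the Gaussian
```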

3. Complexity as Bifurcations and Chaos

It is common to read that a complex system is more than the sum of its parts [1,2,3,4,23,24]. The failure of the superposition principle to hold is a characteristic of nonlinear systems [100,101].
Nonlinearity is essential for the occurrence of bifurcation, which happens when there is a change in the qualitative dynamical behavior of a system (and/or in its phase portrait) due to the variation in the value of one or more parameters [100,101]. A system that can exhibit a range of qualitatively different dynamical behaviors may be considered complex, even if it is isolated or simply structured.
Bifurcations can cause, for instance, the transition from a steady-state solution to a periodic one, or from a periodic solution to a chaotic one [100,101]. Figure 4 exhibits the bifurcation diagram of the sine map for $0 \leq q \leq 1$. The vertical axis represents the values of $x(t)$ obtained for $t \to \infty$ (that is, after removing transients). This figure shows that for $q = 0.2$ and $q = 0.6$, $x(t)$ converges to a steady-state; for $q = 0.8$, $x(t)$ tends to oscillate between two values; for $q = 1$, $x(t)$ takes on many distinct values in the interval $[0, 1]$. Clearly, these results are compatible with those presented in Figure 1. Observe that, as q increases, the attractor qualitatively changes. Each change corresponds to a bifurcation.
The sine map for $0 \leq q \leq 1$ and the logistic map [54], written as $x(t+1) = G(x(t)) = \mu x(t)(1 - x(t))$ with $0 \leq \mu \leq 4$ and $0 \leq x(t) \leq 1$, exhibit similar plots for H, $C_{LMC}$, $C_{SDL}$, and the bifurcation diagram. In fact, both are unimodal maps; specifically, both $F = q \sin(\pi x)$ and $G = \mu x(1 - x)$ are continuous functions that have a single maximum for $x \in [0, 1]$ and map this interval to itself. Also, both undergo a period-doubling route to chaos [100,101]. From the perspective of dynamical systems theory, a system that exhibits a cascade of bifurcations and chaos can be viewed as complex.
Figure 5 shows the bifurcation diagram of the sine map for $0 \leq q \leq 5$. The plot for $q > 1$ cannot be reproduced by the logistic map. Thus, the sine map and the logistic map present equivalent complexity for $0 \leq q \leq 1$ and $0 \leq \mu \leq 4$, respectively; however, the sine map can be considered more complex for $q > 1$ due to the richness of its behaviors. In contrast, the dynamics of the logistic map for $\mu > 4$ are trivial: as $t \to \infty$, $x(t)$ diverges to $-\infty$.
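The bifurcation diagrams in Figures 4 and 5 can be approximated with a few lines of code; the sketch below (iteration counts are arbitrary choices) collects post-transient samples of $x(t)$ for each q:

```python
import numpy as np

def bifurcation_points(q_values, n_transient=1200, n_keep=200, x0=0.8):
    """For each q, iterate the sine map, discard transients, and record attractor samples."""
    points = []
    for q in q_values:
        x = x0
        for _ in range(n_transient):
            x = q * np.sin(np.pi * x)
        for _ in range(n_keep):
            x = q * np.sin(np.pi * x)
            points.append((q, x))
    return points

pts = bifurcation_points(np.linspace(0.0, 1.0, 500))
# Plotting pts as a scatter of (q, x) pairs yields a diagram analogous to Figure 4.
```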
A bifurcation usually indicates the boundary between different dynamical regimes, and it can be responsible for the emergence or disappearance of spatiotemporal patterns. From the perspective of dynamical systems theory, self-organization can be linked to a bifurcation. Self-organization, a concept popularized by I. Prigogine [102], is often associated with complexity and it refers to the creation of ordered structures through local interactions among the elements of a system, without any external input or central control. Thus, order may spontaneously arise when the value of a parameter is either above or below a critical threshold. For instance, in a fluid heated between two metallic plates, convection cells form spontaneously when the temperature of the plates surpasses a critical value, leading to a self-organized fluid motion. This example is paradigmatic in studies of climate models, in which the fluid represents the Earth’s atmosphere, as considered in the work of E. N. Lorenz [53]. A pattern that persists thanks to the dissipation of energy in an open system is commonly known as a dissipative structure [102]. Other typical examples [103,104,105,106,107,108] include the Turing instability [109], which leads to the emergence of spatial patterns in reaction–diffusion systems; traveling waves, which preserve their shape while propagating through space over time; and spatiotemporal chaos, characterized by a lack of periodicity in time and irregularity across space.
Studies on bifurcation and chaos have been presented in numerous papers and books on dynamics [47,48,53,54,100,101,102,110,111,112,113,114,115,116].

4. Complexity as Algorithm Length

From the perspective of computer science, A. N. Kolmogorov [117,118] (along with others, like R. J. Solomonoff [119,120] and G. J. Chaitin [121,122]) proposed that the complexity of a piece of information (or a system) is the length of the shortest algorithm capable of generating it when executed on a universal Turing machine [123,124]. Observe that this formulation of algorithmic complexity depends on the theoretical model of computation introduced by A. M. Turing [125,126]. For instance, the string “11111” can be generated by the program “print 1 five times”; however, the program used to generate the string “8h5m1” would be “print 8, then h, then 5, then m, then 1”. The longer the code, the higher the Kolmogorov complexity. Hence, repeating data have lower Kolmogorov complexity than incompressible data.
The Kolmogorov complexity of a time series obtained for $0 \leq q \leq 1$ from the sine map (given by Equation (4)) can be estimated as follows. First, discard transients (as in Figure 2, Figure 3, Figure 4 and Figure 5). Then, convert the remaining real-valued sequence $x(t)$ into a binary sequence $b(t)$, in which $b(t) = 0$ if $0 \leq x(t) \leq 0.5$, and $b(t) = 1$ if $0.5 < x(t) \leq 1$. After compressing $b(t)$, the size of the compressed sequence provides a rough upper bound for the Kolmogorov complexity of $b(t)$.
Compression is the process of encoding a sequence into a more compact form by eliminating redundancy and repeating patterns [127,128]. The compressed sequence must retain the same information as the original and require fewer bits to represent it. For instance, a crude compressed version of 00000111111110 can be written as (5, 0), (8, 1), (1, 0), in which the first number in each pair is the count of the second number [127,128]. The idea of assessing complexity through data compression underlies the metrics derived from the works of A. Lempel and J. Ziv [129,130,131].
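A crude version of the procedure described above (binarizing the sine-map orbit and compressing it) is sketched below; the general-purpose zlib compressor from the Python standard library is an illustrative choice rather than the method prescribed in the text:

```python
import zlib
import numpy as np

def compressed_size_bits(q, n_total=2000, n_transient=1200, x0=0.8):
    """Binarize the post-transient sine-map orbit and return its zlib-compressed size in bits."""
    x, bits = x0, []
    for t in range(n_total):
        x = q * np.sin(np.pi * x)
        if t >= n_transient:
            bits.append('1' if x > 0.5 else '0')
    return 8 * len(zlib.compress(''.join(bits).encode('ascii'), 9))

print(compressed_size_bits(0.8))   # periodic orbit: compresses well (small size)
print(compressed_size_bits(1.0))   # chaotic orbit: compresses poorly (larger size)
```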
A Lempel–Ziv complexity measure can be computed by applying the LZ78 algorithm (introduced in their paper [131], published in 1978). This measure, denoted here as LZ, serves as a practical (computable) approximation of the Kolmogorov complexity. It quantifies the complexity of a binary sequence by identifying patterns, which are used to build a dictionary [132]. To determine LZ, start with an empty dictionary. The original sequence must be read from left to right. At each step, find the shortest subsequence that has not been recorded in the dictionary yet, and then add it to the dictionary. A cut is made after identifying this new subsequence, and the reading of the original sequence must resume from the next bit immediately following the cut. The LZ-complexity is the number of distinct subsequences recorded in the dictionary. These subsequences can be used to reconstruct the original binary sequence. For instance, consider the string 01001. The first cut is made after the first 0, yielding 0|1001, and 0 is added to the dictionary. The second cut is made after the first 1, resulting in 0|1|001, and 1 is added to the dictionary. The third cut is made after the third 0 in order to create a new substring; that is, 0|1|00|1. Note that the final digit, 1, is already in the dictionary. Thus, the dictionary contains three substrings: (0, 1, 00); therefore, LZ = 3. For the string 0000011101, LZ = 5 (because it is parsed as 0|00|001|1|10|1).
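This parsing rule is straightforward to code; the sketch below reproduces the two examples just given (the function name is an illustrative choice):

```python
def lz_complexity(s):
    """LZ78-style complexity: number of distinct phrases added to the dictionary."""
    dictionary, phrase = set(), ''
    for bit in s:
        phrase += bit
        if phrase not in dictionary:       # shortest subsequence not recorded yet
            dictionary.add(phrase)
            phrase = ''                    # cut here and resume from the next bit
    return len(dictionary)                 # a trailing phrase already in the dictionary adds nothing

print(lz_complexity('01001'))        # 3, parsed as 0|1|00|1
print(lz_complexity('0000011101'))   # 5, parsed as 0|00|001|1|10|1
```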
Consider two sequences with the same number of bits. The sequence with a higher LZ has more subsequences in the dictionary than the one with a lower LZ. The higher the LZ of a sequence, the harder it is to compress. Figure 6 shows LZ as a function of q for the sine map. This plot is based on the same data used in Figure 2, after converting $x(t)$ for each q into a binary sequence $b(t)$ as described above. As expected, the LZ value for $q = 1$ is greater than that for $q = 0.8$.
The LZ-complexity has been computed in a wide range of studies [133,134,135,136]. Its computation provides a model-free measure of both the information content and structural complexity of finite sequences.

5. Complexity as Connectivity

Since complex systems are usually made up of many interacting elements (which can be cells, molecules, computers, living beings), it is relevant to characterize the topological properties of the connections among these elements. Hence, graph theory has been employed to analyze complex systems [108,137,138,139,140,141,142]. For instance, a complex network approach has been applied to examine the progression of Alzheimer’s disease [143] and to evaluate the underlying dynamics of electroencephalogram (EEG) signals for diagnosing epilepsy [144]. There are methods for converting time series into graphs [145,146,147,148], enabling the application of graph theory concepts to analyses of temporal patterns.
Simplistically, a complex network can be defined as a graph with a connectivity structure that is neither completely regular nor entirely random [108,137,138,139,140,141,142]. In undirected graphs, this structure can be characterized by the degree k of a node, which is the number of links connected to that node; the degree distribution D(k), which represents the percentage of nodes with degree k; the shortest path length ℓ between two nodes, which is the minimum number of edges connecting them; and the clustering coefficient c of a node, which is defined as the ratio of the number of actual links among its neighbors to the maximum possible number of such links. Additionally, the average degree ⟨k⟩, the average shortest path length ⟨ℓ⟩, and the average clustering coefficient ⟨c⟩ are computed by averaging k, ℓ, and c over all N nodes in the network [108,137,138,139,140,141,142].
The two most commonly used models of complex networks in academic research are known as small-world [149] and scale-free [150]. Both models have strengths and weaknesses [108,151,152,153]. A small-world graph is created by starting with a regular ring network with a fixed number of nodes, in which each node is connected to m neighbors on the right and m neighbors on the left, and then randomly rewiring some of the m regular connections on the right [149]. The rewired graph can present a high average clustering coefficient and a low average shortest path length, as observed in several real-world networks [108,137,140,149]; however, it is formed from an artificially constructed starting point and exhibits a bell-shaped degree distribution, which is rather uncommon in open real-world networks. A scale-free graph can be created by considering that the number of nodes increases as time progresses. In this model, the probability of a new node connecting to an existing node is directly proportional to the degree of that existing node [150]. This graph has a low average shortest path length and a scale-free degree distribution, as found in several real-world networks [108,138,140,150]; however, it has a low average clustering coefficient, and the rule used to grow the network assumes that the new node is aware of the degrees of all nodes in the growing network, which is unrealistic. Despite these criticisms, both models share an important feature: the graphs that they generate are based on simple probabilistic rules that are easy to describe. Also, they inspire the development of models that incorporate topological features found in both small-world and scale-free graphs [154,155,156,157,158,159,160,161].
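For readers who want to experiment with these two models, a brief sketch using the networkx library is given below (the network sizes and parameters are arbitrary choices):

```python
import networkx as nx

# Small-world (Watts-Strogatz): ring of 1000 nodes, each linked to its 10 nearest neighbors,
# with 10% of the connections randomly rewired (the "connected" variant avoids disconnection)
ws = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1)
# Scale-free (Barabasi-Albert): growth with preferential attachment, 5 edges per new node
ba = nx.barabasi_albert_graph(n=1000, m=5)

for name, g in (("small-world", ws), ("scale-free", ba)):
    print(name,
          nx.average_shortest_path_length(g),   # low in both models
          nx.average_clustering(g))             # high for small-world, low for scale-free
```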
A simple way of characterizing the complexity of a graph is through the Shannon entropy H. Accordingly, the degree structure entropy of a graph is computed from Equation (1), in which $p_k$ represents the percentage of nodes with degree k [108,141,162,163,164]. In this context, H quantifies the heterogeneity in the degree distribution D(k). As an example, consider the graphs shown in Figure 7. In (a), all seven nodes have $k = 2$; hence, $p_2 = 1$ and $p_k = 0$ for $k \neq 2$, resulting in $H = 0$ bits. In (b), five nodes have $k = 2$, one node has $k = 1$, and one node has $k = 3$; thus, $p_1 = 1/7$, $p_2 = 5/7$, and $p_3 = 1/7$, leading to $H = 1.149$ bits. Thus, (b) is a more complex graph than (a), since it presents a more irregular connection pattern reflected in a higher value of H. From the perspective of network science, complexity signifies an intricate pattern of connectivity.
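Reusing the shannon_entropy sketch from Section 2, the degree structure entropy of a graph can be computed as follows (a ring of seven nodes stands in for Figure 7a, and the degree distribution quoted above stands in for Figure 7b):

```python
from collections import Counter
import networkx as nx

def degree_structure_entropy(g, base=2):
    """Shannon entropy (Eq. 1) of the degree distribution D(k) of a graph."""
    counts = Counter(d for _, d in g.degree())
    p = [c / g.number_of_nodes() for c in counts.values()]
    return shannon_entropy(p, base)

ring = nx.cycle_graph(7)                      # every node has k = 2, as in Figure 7a
print(degree_structure_entropy(ring))         # 0 bits
print(shannon_entropy([1/7, 5/7, 1/7]))       # degree distribution of Figure 7b: about 1.149 bits
```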
Several methods for evaluating graph complexity, grounded in node connectivity and entropy measures, have been reported in the literature [83,165,166,167,168,169].

6. Discussion and Outlook for Future Research

When examining the time evolution of a system, complexity refers to symbol variability (Section 2 and Section 4) and/or changes in qualitative behavior (Section 3); when examining the physical structure of a system, complexity refers to nontrivial connectivity (Section 5).
From my point of view, complexity measures should explicitly increase with the number of nodes/edges comprising the networked system and with the symbol repertoire of a signal source. For a complex graph, I propose that the node-edge complexity $C_{NE}$ of a graph be calculated simply as $C_{NE} = N \times E$, in which N is the number of nodes and E is the number of edges. By taking this formula into account, the node-edge complexity of the human brain and of GPT-3 can be quantified and compared. For a human brain with $1 \times 10^{11}$ neurons [170] and $1 \times 10^{15}$ synapses [171], $C_{NE} = 1 \times 10^{26}$. For GPT-3 [172], $N = 96 \times 12288$ (96 layers with 12288 hidden units per layer) and $E = 175 \times 10^{9}$ parameters (synaptic weights); therefore, $C_{NE} = 2 \times 10^{17}$, which is comparable to the complexity of the zebrafish brain [173] (by considering $1 \times 10^{3}$ synapses per neuron). Other functions relating $C_{NE}$ to N and E could be suggested (for instance, $C_{NE} = \gamma N + \delta E$, $C_{NE} = \gamma N \log(\delta E + 1)$, and $C_{NE} = N^{\gamma} E^{\delta}$, in which $\gamma$ and $\delta$ are constants). The proposed node-edge complexity may yield more meaningful results in networks exhibiting spatial uniformity; that is, in the absence of characteristic spatial subscales in which the average node degree varies substantially with the position within the network. This may not constitute an original idea; nevertheless, I have not found it in the literature.
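The two figures quoted above follow from straightforward arithmetic, as in this small sketch:

```python
# Node-edge complexity C_NE = N * E for the two examples discussed in the text
brain_N, brain_E = 1e11, 1e15            # neurons and synapses of the human brain
gpt3_N, gpt3_E = 96 * 12288, 175e9       # hidden units and parameters of GPT-3
print(brain_N * brain_E)                 # 1e26
print(gpt3_N * gpt3_E)                   # about 2e17
```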
Computing the variability of a real-valued series is tricky, because it depends on the quantization process: the smaller the partition, the greater the number of symbols A; however, there is no upper limit to A. Hence, it is more convenient to compute a complexity measure when dealing with binary series. I propose that the LZH complexity of a binary series be simply calculated as $C_{LZH} = LZ \times H$, in which LZ is the Lempel–Ziv complexity and H is the Shannon entropy; thus, both the number of essential substrings and the relative frequency of bits are taken into account. The originality of this simplistic proposal is also uncertain.
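Combining the lz_complexity and shannon_entropy sketches given earlier, this proposal can be evaluated for any binary string (the example strings are arbitrary):

```python
def lzh_complexity(bits):
    """LZH complexity: LZ78 phrase count times the Shannon entropy of the bit frequencies."""
    p1 = bits.count('1') / len(bits)
    h = shannon_entropy([p1, 1.0 - p1])            # entropy of the 0/1 relative frequencies
    return lz_complexity(bits) * h

print(lzh_complexity('01010101010101010101'))      # regular string: low LZ, H = 1 bit
print(lzh_complexity('01001101110001011010'))      # irregular string: higher LZ, same H
```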
Evidently, the value obtained by computing any complexity measure usually has no standalone meaning. Such measures are primarily useful for comparing the complexity of time series or graphs in relative terms.
This overview began with a list of the uses of the word “complex” as a noun in three fields of knowledge: psychology, civil engineering, and biology. In these fields, and in all others, there are paradoxical situations. Here are examples for each field: Can the attempt to suppress negative thoughts actually strengthen them, due to the hard-to-understand feedback loops of human cognition? Does the construction of additional roads to reduce traffic actually lead to more congestion by encouraging more people to drive, since the roads become initially less congested? Can the introduction of a new species, which would increase biodiversity, destabilize the system and lead to the extinction of species, thereby reducing biodiversity? Complexity often permeates paradoxes.
This overview ends with a list of speculative (and rather unoriginal) questions about complex systems. Is there a relationship between the structural complexity of tourist complexes and their profits? Can the fluctuations in cryptocurrency values predict economic crises? To what extent has the average genome size of living species influenced, and been influenced by, Darwinian evolution? Is there a relationship between functional complexity and structural complexity in proteins? Is there a minimum node-edge complexity of a neural system that makes the emergence of consciousness possible? How can a signal sent by extraterrestrial intelligent life, potentially much more complex than our own, be recognized? The study of complex systems, which relies on interdisciplinary approaches, is crucial for advancing our understanding of nature and society.

Funding

L.H.A.M. is partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) under the grant #302946/2022-5. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)-finance code 001.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the author upon request.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

References

  1. Batty, M.; Torrens, P.M. Modelling and prediction in a complex world. Futures 2005, 37, 745–766. [Google Scholar] [CrossRef]
  2. Cumming, G.S.; Collier, J. Change and identity in complex systems. Ecol. Soc. 2005, 10, 29. [Google Scholar] [CrossRef]
  3. Ladyman, J.; Lambert, J.; Wiesner, K. What is a complex system? Eur. J. Philos. Sci. 2013, 3, 33–67. [Google Scholar] [CrossRef]
  4. Estrada, E. What is a complex system, after all? Found. Sci. 2024, 29, 1143–1170. [Google Scholar] [CrossRef]
  5. Riva, F.; Graco-Roza, C.; Daskalova, G.N.; Hudgins, E.J.; Lewthwaite, J.M.M.; Newman, E.A.; Ryo, M.; Mammola, S. Toward a cohesive understanding of ecological complexity. Sci. Adv. 2023, 9, eabq4207. [Google Scholar] [CrossRef]
  6. Rubinov, M.; Sporns, O. Complex network measures of brain connectivity: Uses and interpretations. NeuroImage 2010, 52, 1059–1069. [Google Scholar] [CrossRef]
  7. Guruharsha, K.G.; Rual, J.F.; Zhai, B.; Mintseris, J.; Vaidya, P.; Vaidya, N.; Beekman, C.; Wong, C.; Rhee, D.Y.; Cenaj, O.; et al. A protein complex network of Drosophila melanogaster. Cell 2011, 147, 690–703. [Google Scholar] [CrossRef] [PubMed]
  8. Pagani, G.A.; Aiello, M. The Power Grid as a complex network: A survey. Phys. A 2013, 392, 2688–2700. [Google Scholar] [CrossRef]
  9. Zanin, M.; Lillo, F. Modelling the air transport with complex networks: A short review. Eur. Phys. J.-Spec. Top. 2013, 215, 5–21. [Google Scholar] [CrossRef]
  10. Alvarez, N.G.; Adenso-Díaz, B.; Calzada-Infante, L. Maritime traffic as a complex network: A systematic review. Netw Spat. Econ. 2021, 21, 387–417. [Google Scholar] [CrossRef]
  11. Mihailovic, D.T.; Mimic, G.; Arsenic, I. Climate predictions: The chaos and complexity in climate models. Adv. Meteorol. 2014, 2014, 878249. [Google Scholar] [CrossRef]
  12. Brockmann, D.; Helbing, D. The hidden geometry of complex, network-driven contagion phenomena. Science 2013, 342, 1337–1342. [Google Scholar] [CrossRef]
  13. Fan, Y.; Ren, S.T.; Cai, H.B.; Cui, X.F. The state’s role and position in international trade: A complex network perspective. Econ. Model. 2014, 39, 71–81. [Google Scholar] [CrossRef]
  14. Vazza, F. On the complexity and the information content of cosmic structures. Mon. Not. Roy. Astron. Soc. 2017, 465, 4942–4955. [Google Scholar] [CrossRef]
  15. Silva, C.J.; Cantin, G.; Cruz, C.; Fonseca-Pinto, R.; Passadouro, R.; dos Santos, E.S.; Torres, D.F.M. Complex network model for COVID-19: Human behavior, pseudo-periodic solutions and multiple epidemic waves. J. Math. Anal. Appl. 2022, 514, 125171. [Google Scholar] [CrossRef] [PubMed]
  16. Lin, D.; Wu, J.J.; Yuan, Q.; Zheng, Z.B. Modeling and understanding ethereum transaction records via a complex network approach. IEEE Trans. Circuits Syst. II-Express Briefs 2020, 67, 2737–2741. [Google Scholar] [CrossRef]
  17. Zhao, D.; Ji, S.F.; Wang, H.P.; Jiang, L.W. How do government subsidies promote new energy vehicle diffusion in the complex network context? A three-stage evolutionary game model. Energy 2021, 230, 120899. [Google Scholar] [CrossRef] [PubMed]
  18. Floridi, L.; Chiriatti, M. GPT-3: Its nature, scope, limits, and consequences. Minds Mach. 2020, 30, 681–694. [Google Scholar] [CrossRef]
  19. Manson, S.M. Simplifying complexity: A review of complexity theory. Geoforum 2001, 32, 405–414. [Google Scholar] [CrossRef]
  20. Sarukkai, S. Complexity and randomness in Mathematics: Philosophical reflections on the relevance for economic modelling. J. Econ. Surv. 2011, 25, 464–480. [Google Scholar] [CrossRef]
  21. Turner, J.R.; Baker, R.M. Complexity theory: An overview with potential applications for the social sciences. Systems 2019, 7, 4. [Google Scholar] [CrossRef]
  22. Kesic, S. Complexity and biocomplexity: Overview of some historical aspects and philosophical basis. Ecol. Complex. 2024, 57, 101072. [Google Scholar] [CrossRef]
  23. von Bertalanffy, L. General System Theory: Foundations, Development, Applications; George Braziller: New York, NY, USA, 1969. [Google Scholar]
  24. Morin, E. On Complexity; Hampton Press: New York, NY, USA, 2008. [Google Scholar]
  25. McCauley, E.; Wilson, W.G.; de Roos, A.M. Dynamics of age-structured and spatially structured predator-prey interactions: Individual-based models and population-level formulations. Am. Nat. 1993, 142, 412. [Google Scholar] [CrossRef] [PubMed]
  26. Boccara, N.; Cheong, K.; Oram, M. A probabilistic automata network epidemic model with births and deaths exhibiting cyclic behaviour. J Phys.-A Math. Gen. 1994, 27, 1585. [Google Scholar] [CrossRef]
  27. van der Laan, J.D.; Lhotka, L.; Hogeweg, P. Sequential predation: A multi-model study. J. Theor. Biol. 1995, 174, 149–167. [Google Scholar] [CrossRef]
  28. Sherratt, J.A.; Eagan, B.T.; Lewis, M.A. Oscillations and chaos behind predator-prey invasion: Mathematical artifact or ecological reality? Philos. Trans. R. Soc. B-Biol. Sci. 1997, 352, 21–38. [Google Scholar] [CrossRef]
  29. Durrett, R. Stochastic spatial models. SIAM Rev. 1999, 41, 677. [Google Scholar] [CrossRef]
  30. Moreno, Y.; Pastor-Satorras, R.; Vespignani, A. Epidemic outbreaks in complex heterogeneous networks. Eur. Phys. J. B 2002, 26, 521–529. [Google Scholar] [CrossRef]
  31. Silva, H.A.L.R.; Monteiro, L.H.A. Self-sustained oscillations in epidemic models with infective immigrants. Ecol. Complex. 2014, 17, 40–45. [Google Scholar] [CrossRef]
  32. Pastor-Satorras, R.; Castellano, C.; Van Mieghem, P.; Vespignani, A. Epidemic processes in complex networks. Rev. Mod. Phys. 2015, 87, 925. [Google Scholar] [CrossRef]
  33. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  34. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: Chicago, IL, USA, 1998. [Google Scholar]
  35. Callen, H.B. Thermodynamics and an Introduction to Thermostatistics; John Wiley & Sons: New York, NY, USA, 1991. [Google Scholar]
  36. Reif, F. Fundamentals of Statistical and Thermal Physics; Waveland Press: Long Grove, IL, USA, 2009. [Google Scholar]
  37. von Neumann, J. Mathematical Foundations of Quantum Mechanics: New Edition; Princeton University Press: Princeton, NJ, USA, 2018. [Google Scholar]
  38. Maruyama, K.; Nori, F.; Vedral, V. Colloquium: The physics of Maxwell’s demon and information. Rev. Mod. Phys. 2009, 81, 1. [Google Scholar] [CrossRef]
  39. Lopez-Ruiz, R.; Mancini, H.L.; Calbet, X. A statistical measure of complexity. Phys. Lett. A 1995, 209, 321–326. [Google Scholar] [CrossRef]
  40. Shiner, J.S.; Davison, M.; Landsberg, P.T. Simple measure for complexity. Phys. Rev. E 1999, 59, 1459. [Google Scholar] [CrossRef]
  41. Consolini, G.; Tozzi, R.; De Michelis, P. Complexity in the sunspot cycle. Astron. Astrophys. 2009, 506, 1381–1391. [Google Scholar] [CrossRef]
  42. González, C.M.; Larrondo, H.A.; Rosso, O.A. Statistical complexity measure of pseudorandom bit generators. Phys. A 2005, 354, 281–300. [Google Scholar] [CrossRef]
  43. Piqueira, J.R.C.; Mortoza, L.P.D. Brazilian exchange rate complexity: Financial crisis effects. Commun. Nonlinear Sci. Numer. Simulat. 2012, 17, 1690–1695. [Google Scholar] [CrossRef]
  44. Martin, M.T.; Plastino, A.; Rosso, O.A. Statistical complexity and disequilibrium. Phys. Lett. A 2003, 311, 126–132. [Google Scholar] [CrossRef]
  45. Martin, M.T.; Plastino, A.; Rosso, O.A. Generalized statistical complexity measures: Geometrical and analytical properties. Phys. A 2006, 369, 439–462. [Google Scholar] [CrossRef]
  46. Yamano, T. A statistical measure of complexity with nonextensive entropy. Phys. A 2004, 340, 131–137. [Google Scholar] [CrossRef]
  47. Devaney, R.L. A First Course in Chaotic Dynamical Systems; Perseus Books: Cambridge, MA, USA, 1982. [Google Scholar]
  48. Sprott, J.C. Chaos and Time-Series Analysis; Oxford University Press: Oxford, UK, 2003. [Google Scholar]
  49. Lalescu, C.C. Patterns in the sine map bifurcation diagram. arXiv 2010, arXiv:1011.6552. [Google Scholar] [CrossRef]
  50. Griffin, J. The Sine Map. 2013. Available online: https://people.maths.bris.ac.uk/~macpd/ads/sine.pdf (accessed on 30 April 2025).
  51. Zhang, Q.; Xiang, Y.; Fan, Z.H.; Bi, C. Study of universal constants of bifurcation in a chaotic sine map. In Proceedings of the 2013 Sixth International Symposium on Computational Intelligence and Design, Hangzhou, China, 28–29 October 2013; Volume 2, p. 177. [Google Scholar] [CrossRef]
  52. Dong, C.; Rajagopal, K.; He, S.; Jafari, S.; Sun, K. Chaotification of Sine-series maps based on the internal perturbation model. Results Phys. 2021, 31, 105010. [Google Scholar] [CrossRef]
  53. Lorenz, E.N. Deterministic nonperiodic flow. J. Atmos. Sci. 1963, 20, 130–141. [Google Scholar] [CrossRef]
  54. May, R. Simple mathematical models with very complicated dynamics. Nature 1976, 261, 459–467. [Google Scholar] [CrossRef] [PubMed]
  55. Aurell, E.; Boffetta, G.; Crisanti, A.; Paladin, G.; Vulpiani, A. Predictability in the large: An extension of the concept of Lyapunov exponent. J. Phys. A-Math. Gen. 1997, 30, 1. [Google Scholar] [CrossRef]
  56. Liu, W.H.; Sun, K.H.; Zhu, C.X. A fast image encryption algorithm based on chaotic map. Opt. Lasers Eng. 2016, 84, 26–36. [Google Scholar] [CrossRef]
  57. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  58. Pincus, S. Approximate entropy (ApEn) as a complexity measure. Chaos 1995, 5, 110–117. [Google Scholar] [CrossRef]
  59. Gell-Mann, M.; Lloyd, S. Information measures, effective complexity, and total information. Complexity 1996, 2, 44–52. [Google Scholar] [CrossRef]
  60. Feldman, D.P.; Crutchfield, J.P. Measures of statistical complexity: Why? Phys. Lett. A 1998, 238, 244–252. [Google Scholar] [CrossRef]
  61. Bandt, C.; Pompe, B. Permutation entropy: A natural complexity measure for time series. Phys. Rev. Lett. 2002, 88, 174102. [Google Scholar] [CrossRef] [PubMed]
  62. Yamano, T. A statistical complexity measure with nonextensive entropy and quasi-multiplicativity. J. Math. Phys. 2004, 45, 1974–1987. [Google Scholar] [CrossRef]
  63. Rosso, O.A.; De Micco, L.; Larrondo, H.A.; Martín, M.T.; Plastino, A. Generalized statistical complexity measure. Int. J. Bifurc. Chaos 2010, 20, 775–785. [Google Scholar] [CrossRef]
  64. Gao, J.B.; Hu, J.; Tung, W.W. Entropy measures for biological signal analyses. Nonlinear Dyn. 2012, 68, 431–444. [Google Scholar] [CrossRef]
  65. Morabito, F.C.; Labate, D.; La Foresta, F.; Bramanti, A.; Morabito, G.; Palamara, I. Multivariate multi-scale permutation entropy for complexity analysis of Alzheimer’s disease EEG. Entropy 2012, 14, 1186–1202. [Google Scholar] [CrossRef]
  66. Gao, J.B.; Liu, F.Y.; Zhang, J.F.; Hu, J.; Cao, Y.H. Information entropy as a basic building block of Complexity Theory. Entropy 2013, 15, 3396–3418. [Google Scholar] [CrossRef]
  67. Lesne, A. Shannon entropy: A rigorous notion at the crossroads between probability, information theory, dynamical systems and statistical physics. Math. Struct. Comput. Sci. 2014, 24, e240311. [Google Scholar] [CrossRef]
  68. Sánchez-Moreno, P.; Angulo, J.C.; Dehesa, J.S. A generalized complexity measure based on Rényi entropy. Eur. Phys. J. D 2014, 68, 212. [Google Scholar] [CrossRef]
  69. Ribeiro, H.V.; Jauregui, M.; Zunino, L.; Lenzi, E.K. Characterizing time series via complexity-entropy curves. Phys. Rev. E 2017, 95, 062106. [Google Scholar] [CrossRef]
  70. Amigó, J.M.; Balogh, S.G.; Hernández, S. A brief review of generalized entropies. Entropy 2018, 20, 813. [Google Scholar] [CrossRef]
  71. Rohila, A.; Sharma, A. Phase entropy: A new complexity measure for heart rate variability. Physiol. Meas. 2019, 40, 105006. [Google Scholar] [CrossRef] [PubMed]
  72. Kang, H.; Zhang, X.F.; Zhang, G.B. Phase permutation entropy: A complexity measure for nonlinear time series incorporating phase information. Phys. A 2021, 568, 125686. [Google Scholar] [CrossRef]
  73. Li, S.E.; Shang, P.J. A new complexity measure: Modified discrete generalized past entropy based on grain exponent. Chaos Solitons Fractals 2022, 157, 111928. [Google Scholar] [CrossRef]
  74. Zhang, B.Y.; Shang, P.J. Distance correlation entropy and ordinal distance complexity measure: Efficient tools for complex systems. Nonlinear Dyn. 2024, 112, 1153–1172. [Google Scholar] [CrossRef]
  75. Das, D.; Ray, A.; Hens, C.; Ghosh, D.; Hassan, M.K.; Dabrowski, A.; Kapitaniak, T.; Dana, S.K. Complexity measure of extreme events. Chaos 2024, 34, 121104. [Google Scholar] [CrossRef]
  76. Riedl, M.; Müller, A.; Wessel, N. Practical considerations of permutation entropy. Eur. Phys. J.-Spec. Top. 2013, 222, 249–262. [Google Scholar] [CrossRef]
  77. Liu, L.F.; Miao, S.X.; Cheng, M.F.; Gao, X.J. Permutation entropy for random binary sequences. Entropy 2015, 17, 8207–8216. [Google Scholar] [CrossRef]
  78. Tsallis, C. Beyond Boltzmann–Gibbs–Shannon in Physics and elsewhere. Entropy 2019, 21, 696. [Google Scholar] [CrossRef]
  79. Balasis, G.; Eftaxias, K. A study of non-extensivity in the Earth’s magnetosphere. Eur. Phys. J.-Spec. Top. 2009, 174, 219–225. [Google Scholar] [CrossRef]
  80. Baranger, M.; Latora, V.; Rapisarda, A. Time evolution of thermodynamic entropy for conservative and dissipative chaotic maps. Chaos Solitons Fractals 2002, 13, 471–478. [Google Scholar] [CrossRef]
  81. Varotsos, P.A.; Sarlis, N.V.; Skordas, E.S. Tsallis entropy index q and the complexity measure of seismicity in natural time under time reversal before the M9 Tohoku Earthquake in 2011. Entropy 2018, 20, 757. [Google Scholar] [CrossRef] [PubMed]
  82. Keller, S.M.; Gschwandtner, U.; Meyer, A.; Chaturvedi, M.; Roth, V.; Fuhr, P. Cognitive decline in Parkinson’s disease is associated with reduced complexity of EEG at baseline. Brain Commun. 2020, 2, fcaa207. [Google Scholar] [CrossRef]
  83. Wen, T.; Jiang, W. Measuring the complexity of complex network by Tsallis entropy. Phys. A 2019, 526, 121054. [Google Scholar] [CrossRef]
  84. Cerf, N.J.; Adami, C. Negative entropy and information in quantum mechanics. Phys. Rev. Lett. 1997, 79, 5194. [Google Scholar] [CrossRef]
  85. Cerf, N.J.; Adami, C. Information theory of quantum entanglement and measurement. Phys. D 1998, 120, 62–81. [Google Scholar] [CrossRef]
  86. Piqueira, J.R.C.; Serboncini, F.A.; Monteiro, L.H.A. Biological models: Measuring variability with classical and quantum information. J. Theor. Biol. 2006, 242, 309–313. [Google Scholar] [CrossRef] [PubMed]
  87. Müller-Lennert, M.; Dupuis, F.; Szehr, O.; Fehr, S.; Tomamichel, M. On quantum Rényi entropies: A new generalization and some properties. J. Math. Phys. 2013, 54, 122203. [Google Scholar] [CrossRef]
  88. Witten, E. A mini-introduction to information theory. Riv. Nuovo Cimento 2020, 43, 187–227. [Google Scholar] [CrossRef]
  89. Catalán, R.G.; Garay, J.; López-Ruiz, R. Features of the extension of a statistical measure of complexity to continuous systems. Phys. Rev. E 2002, 66, 011102. [Google Scholar] [CrossRef]
  90. Wehrl, A. General properties of entropy. Rev. Mod. Phys. 1978, 50, 221. [Google Scholar] [CrossRef]
  91. Angulo, J.C.; Antolin, J. Atomic complexity measures in position and momentum spaces. J. Chem. Phys. 2008, 128, 164109. [Google Scholar] [CrossRef] [PubMed]
  92. Chatzisavvas, K.C.; Psonis, V.P.; Panos, C.P.; Moustakidis, C.C. Complexity and neutron star structure. Phys. Lett. A 2009, 373, 3901–3909. [Google Scholar] [CrossRef]
  93. Fisher, R.A. Theory of statistical estimation. Proc. Camb. Philos. Soc. 1925, 22, 700–725. [Google Scholar] [CrossRef]
  94. Vignat, C.; Bercher, J.F. Analysis of signals in the Fisher-Shannon information plane. Phys. Lett. A 2003, 312, 27–33. [Google Scholar] [CrossRef]
  95. Rudnicki, L.; Toranzo, I.V.; Sánchez-Moreno, P.; Dehesa, J.S. Monotone measures of statistical complexity. Phys. Lett. A 2016, 380, 377–380. [Google Scholar] [CrossRef]
  96. Sen, K.D.; Antolín, J.; Angulo, J.C. Fisher-Shannon analysis of ionization processes and isoelectronic series. Phys. Rev. A 2007, 76, 032502. [Google Scholar] [CrossRef]
  97. Esquivel, R.O.; Angulo, J.C.; Antolín, J.; Dehesa, J.S.; López-Rosa, S.; Flores-Gallegos, N. Analysis of complexity measures and information planes of selected molecules in position and momentum spaces. Phys. Chem. Chem. Phys. 2010, 12, 7108. [Google Scholar] [CrossRef]
  98. Sánchez-Moreno, P.; Dehesa, J.S.; Manzano, D.; Yáñez, R.J. Spreading lengths of Hermite polynomials. J. Comput. Appl. Math. 2010, 233, 2136–2148. [Google Scholar] [CrossRef]
  99. Aguiar, D.; Menezes, R.S.C.; Antonino, A.C.D.; Stosic, T.; Tarquis, A.M.; Stosic, B. Quantifying soil complexity using Fisher Shannon method on 3D X-ray computed tomography scans. Entropy 2023, 25, 1465. [Google Scholar] [CrossRef]
  100. Alligood, K.T.; Sauer, T.D.; Yorke, J.A. Chaos: An Introduction to Dynamical Systems; Springer: New York, NY, USA, 1996. [Google Scholar]
  101. Argurys, J.; Faust, G.; Haase, M.; Friedrich, R. An Exploration of Dynamical Systems and Chaos; Springer: New York, NY, USA, 2015. [Google Scholar]
  102. Nicolis, G.; Prigogine, I. Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order Through Fluctuations; Wiley: New York, NY, USA, 1977. [Google Scholar]
  103. Edelstein-Keshet, L. Mathematical Models in Biology; McGraw-Hill: Toronto, ON, Canada, 1988. [Google Scholar]
  104. Kolmogorov, A.N.; Petrovsky, I.G.; Piskunov, N.S. Study of the diffusion equation with growth of the quantity of matter and its application to a biology problem. In Dynamics of Curved Fronts; Pelcé, P., Ed.; Academic Press: Cambridge, MA, USA, 1988; pp. 105–130. [Google Scholar] [CrossRef]
  105. Ross, M.C.; Hohenberg, P.C. Pattern-formation outside of equilibrium. Rev. Mod. Phys. 1993, 65, 851. [Google Scholar] [CrossRef]
  106. Murray, J.D. Mathematical Biology II: Spatial Models and Biomedical Applications; Springer: New York, NY, USA, 2003. [Google Scholar]
  107. Malchow, H.; Petrovskii, S.V.; Venturino, E. Spatiotemporal Patterns in Ecology and Epidemiology: Theory, Models, Simulations; CRC Press: London, UK, 2008. [Google Scholar]
  108. Monteiro, L.H.A. Sistemas Dinâmicos Complexos; Livraria da Física: São Paulo, SP, Brazil, 2010. (In Portuguese) [Google Scholar]
  109. Turing, A.M. The chemical basis of morphogenesis. Philos. Trans. R. Soc. B 1952, 237, 37. [Google Scholar] [CrossRef]
  110. Li, T.Y.; Yorke, J.A. Period three implies chaos. Am. Math. Mon. 1975, 82, 985–992. [Google Scholar] [CrossRef]
  111. Eckmann, J.P.; Ruelle, D. Ergodic theory of chaos and strange attractors. Rev. Mod. Phys. 1985, 57, 617. [Google Scholar] [CrossRef]
  112. Ott, E.; Grebogi, C.; Yorke, J.A. Controlling chaos. Phys. Rev. Lett. 1990, 64, 1196. [Google Scholar] [CrossRef] [PubMed]
  113. Guckenheimer, J.; Holmes, P. Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields; Springer: New York, NY, USA, 2002. [Google Scholar]
  114. Kuznetsov, Y. Elements of Applied Bifurcation Theory; Springer: New York, NY, USA, 2004. [Google Scholar]
  115. Monteiro, L.H.A. Sistemas Dinâmicos; Livraria da Física: São Paulo, SP, Brazil, 2002. (In Portuguese) [Google Scholar]
  116. Strogatz, S.H. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  117. Kolmogorov, A.N. Three approaches to the quantitative definition of information. Int. J. Comput. Math. 1968, 2, 157–168. [Google Scholar] [CrossRef]
  118. Kolmogorov, A.N. Logical basis for information theory and probability theory. IEEE Trans. Inf. Theory 1968, 14, 662–664. [Google Scholar] [CrossRef]
  119. Solomonoff, R.J. A formal theory of inductive inference. Part I. Inf. Control 1964, 7, 1–22. [Google Scholar] [CrossRef]
  120. Solomonoff, R.J. A formal theory of inductive inference. Part II. Inf. Control 1964, 7, 224–254. [Google Scholar] [CrossRef]
  121. Chaitin, G.J. On the length of programs for computing finite binary sequences. J. ACM 1966, 13, 547–569. [Google Scholar] [CrossRef]
  122. Chaitin, G.J. Algorithmic information theory. IBM J. Res. Dev. 1977, 21, 350–359. [Google Scholar] [CrossRef]
  123. Wallace, C.S.; Dowe, D.L. Minimum message length and Kolmogorov complexity. Comput. J. 1999, 42, 270–283. [Google Scholar] [CrossRef]
  124. Zenil, H. A review of methods for estimating algorithmic complexity: Options, challenges, and new directions. Entropy 2020, 22, 612. [Google Scholar] [CrossRef]
  125. Turing, A.M. On computable numbers, with an application to the Entscheidungsproblem. Proc. Lond. Math. Soc. 1937, 42, 230–265. [Google Scholar] [CrossRef]
  126. Martin-Delgado, M.A. Alan Turing and the origins of complexity. Arbor-Cienc. Pensam. Cult. 2013, 189, a083. [Google Scholar] [CrossRef]
  127. Rice, R.F.; Plaunt, J.R. Adaptive variable-length coding for efficient compression of spacecraft television data. IEEE Trans. Commun. Technol. 1971, 19, 889–897. [Google Scholar] [CrossRef]
  128. Sayood, K. Introduction to Data Compression; Morgan Kaufmann: Cambridge, MA, USA, 2017. [Google Scholar]
  129. Lempel, A.; Ziv, J. On the complexity of finite sequences. IEEE Trans. Inf. Theory 1976, 22, 75–81. [Google Scholar] [CrossRef]
  130. Ziv, J.; Lempel, A. A universal algorithm for sequential data compression. IEEE Trans. Inf. Theory 1977, 23, 337–343. [Google Scholar] [CrossRef]
  131. Ziv, J.; Lempel, A. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory 1978, 24, 530–536. [Google Scholar] [CrossRef]
  132. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: New York, NY, USA, 1991. [Google Scholar]
  133. Yan, R.Q.; Gao, R.X. Complexity as a measure for machine health evaluation. IEEE Trans. Instrum. Meas. 2004, 53, 1327–1334. [Google Scholar] [CrossRef]
  134. Aboy, M.; Hornero, R.; Abásolo, D.; Alvarez, D. Interpretation of the Lempel-Ziv complexity measure in the context of biomedical signal analysis. IEEE Trans. Biomed. Eng. 2006, 53, 2282–2288. [Google Scholar] [CrossRef]
  135. Kim, K.; Lee, M. The impact of the COVID-19 pandemic on the unpredictable dynamics of the cryptocurrency market. Entropy 2021, 23, 1234. [Google Scholar] [CrossRef]
  136. Li, Y.X.; Geng, B.; Jiao, S.B. Dispersion entropy-based Lempel-Ziv complexity: A new metric for signal analysis. Chaos Solitons Fractals 2022, 161, 112400. [Google Scholar] [CrossRef]
  137. Strogatz, S.H. Exploring complex networks. Nature 2001, 410, 268–276. [Google Scholar] [CrossRef] [PubMed]
  138. Albert, R.; Barabási, A.L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002, 74, 47. [Google Scholar] [CrossRef]
  139. Newman, M.E.J. The structure and function of complex networks. SIAM Rev. 2003, 45, 167. [Google Scholar] [CrossRef]
  140. Boccaletti, S.; Latora, V.; Moreno, Y.; Chavez, M.; Hwang, D.U. Complex networks: Structure and dynamics. Phys. Rep. 2006, 424, 175–308. [Google Scholar] [CrossRef]
  141. Costa, L.D.; Rodrigues, F.A.; Travieso, G.; Boas, P.R.V. Characterization of complex networks: A survey of measurements. Adv. Phys. 2007, 56, 167–242. [Google Scholar] [CrossRef]
  142. Battiston, F.; Cencetti, G.; Iacopini, I.; Latora, V.; Lucas, M.; Patania, A.; Young, J.G.; Petri, G. Networks beyond pairwise interactions: Structure and dynamics. Phys. Rep. 2020, 874, 1–92. [Google Scholar] [CrossRef]
  143. Morabito, F.C.; Campolo, M.; Labate, D.; Morabito, G.; Bonanno, L.; Bramanti, A.; de Salvo, S.; Marra, A.; Bramanti, P. A longitudinal EEG study of Alzheimer’s disease progression based on a complex network approach. Int. J. Neural Syst. 2015, 25, 1550005. [Google Scholar] [CrossRef]
  144. Supriya, S.; Siuly, S.; Wang, H.; Zhang, Y.C. Epilepsy detection from EEG using complex network techniques: A review. IEEE Rev. Biomed. Eng. 2023, 16, 292–306. [Google Scholar] [CrossRef]
  145. Lacasa, L.; Luque, B.; Ballesteros, F.; Luque, J.; Nuno, J.C. From time series to complex networks: The visibility graph. Proc. Natl. Acad. Sci. USA 2008, 105, 4972–4975. [Google Scholar] [CrossRef] [PubMed]
  146. Luque, B.; Lacasa, L.; Ballesteros, F.; Luque, J. Horizontal visibility graphs: Exact results for random time series. Phys. Rev. E 2009, 80, 046103. [Google Scholar] [CrossRef]
  147. Zhang, J.; Small, M. Complex network from pseudoperiodic time series: Topology versus dynamics. Phys. Rev. Lett. 2006, 96, 238701. [Google Scholar] [CrossRef] [PubMed]
  148. Campanharo, A.S.L.O.; Sirer, M.I.; Malmgren, R.D.; Ramos, F.M.; Amaral, L.A.N. Duality between time series and networks. PLoS ONE 2011, 6, e23378. [Google Scholar] [CrossRef] [PubMed]
  149. Watts, D.J.; Strogatz, S.H. Collective dynamics of 'small-world' networks. Nature 1998, 393, 440–442. [Google Scholar] [CrossRef]
  150. Barabási, A.L.; Albert, R. Emergence of scaling in random networks. Science 1999, 286, 509–512. [Google Scholar] [CrossRef]
  151. Lima-Mendez, G.; van Helden, J. The powerful law of the power law and other myths in network biology. Mol. Biosyst. 2009, 5, 1482–1493. [Google Scholar] [CrossRef]
  152. Willinger, W.; Alderson, D.; Doyle, J.C. Mathematics and the Internet: A source of enormous confusion and great potential. Not. Am. Math. Soc. 2009, 56, 586. [Google Scholar] [CrossRef]
  153. Broido, A.D.; Clauset, A. Scale-free networks are rare. Nat. Commun. 2019, 10, 1017. [Google Scholar] [CrossRef]
  154. Klemm, K.; Eguiluz, V.M. Growing scale-free networks with small-world behavior. Phys. Rev. E 2002, 65, 057102. [Google Scholar] [CrossRef]
  155. Holme, P.; Kim, B.J. Growing scale-free networks with tunable clustering. Phys. Rev. E 2002, 65, 026107. [Google Scholar] [CrossRef] [PubMed]
  156. Zhang, Z.Z.; Rong, L.L.; Wang, B.; Zhou, S.G.; Guan, J.H. Local-world evolving networks with tunable clustering. Phys. A 2007, 380, 639–650. [Google Scholar] [CrossRef]
  157. Boccaletti, S.; Hwang, D.U.; Latora, V. Growing hierarchical scale-free networks by means of nonhierarchical processes. Int. J. Bifurc. Chaos 2007, 17, 2447–2452. [Google Scholar] [CrossRef]
  158. Yang, H.X.; Wu, Z.X.; Du, W.B. Evolutionary games on scale-free networks with tunable degree distribution. EPL 2012, 99, 10006. [Google Scholar] [CrossRef]
  159. Colman, E.R.; Rodgers, G.J. Complex scale-free networks with tunable power-law exponent and clustering. Phys. A 2013, 392, 5501–5510. [Google Scholar] [CrossRef]
  160. Wang, L.; Li, G.F.; Ma, Y.H.; Yang, L. Structure properties of collaboration network with tunable clustering. Inf. Sci. 2020, 506, 37–50. [Google Scholar] [CrossRef]
  161. Licciardi, A.N., Jr.; Monteiro, L.H.A. A network model of social contacts with small-world and scale-free features, tunable connectivity, and geographic restrictions. Math. Biosci. Eng. 2024, 21, 4801–4813. [Google Scholar] [CrossRef]
  162. Solé, R.; Valverde, S. Information theory of complex networks: On evolution and architectural constraints. In Complex Networks; Springer: New York, NY, USA, 2004; Volume 650, pp. 189–207. [Google Scholar] [CrossRef]
  163. Wang, B.; Tang, H.W.; Guo, C.H.; Xiu, Z.L. Entropy optimization of scale-free networks’ robustness to random failures. Phys. A 2006, 363, 591–596. [Google Scholar] [CrossRef]
  164. Xu, X.L.; Hu, X.F.; He, X.Y. Degree dependence entropy descriptor for complex networks. Adv. Manuf. 2013, 1, 284–287. [Google Scholar] [CrossRef]
  165. Zhang, Q.; Li, M.Z.; Deng, Y. A new structure entropy of complex networks based on nonextensive statistical mechanics. Int. J. Mod. Phys. C 2016, 27, 1650118. [Google Scholar] [CrossRef]
  166. Lei, M.L.; Liu, L.R.; Wei, D.J. An improved method for measuring the complexity in complex networks based on structure entropy. IEEE Access 2019, 7, 159190–159198. [Google Scholar] [CrossRef]
  167. Shakibian, H.; Charkari, N.M. Link segmentation entropy for measuring the network complexity. Soc. Netw. Anal. Min. 2022, 12, 85. [Google Scholar] [CrossRef]
  168. Zhang, Z.B.; Li, M.Z.; Zhang, Q. A clustering coefficient structural entropy of complex networks. Phys. A 2024, 655, 130170. [Google Scholar] [CrossRef]
  169. Vafaei, N.; Sheikhi, F.; Ghasemi, A. A community-based entropic method to identify influential nodes across multiple social networks. Soc. Netw. Anal. Min. 2025, 15, 23. [Google Scholar] [CrossRef]
  170. Herculano-Houzel, S. The remarkable, yet not extraordinary, human brain as a scaled-up primate brain and its associated cost. Proc. Natl. Acad. Sci. USA 2012, 109, 10661–10668. [Google Scholar] [CrossRef]
  171. Huttenlocher, P.R. Synaptic density in human frontal cortex—Developmental changes and effects of aging. Brain Res. 1979, 163, 195–205. [Google Scholar] [CrossRef]
  172. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language models are few-shot learners. arXiv 2020, arXiv:2005.14165. [Google Scholar] [CrossRef]
  173. Hinsch, K.; Zupanc, G.K.H. Generation and long-term persistence of new neurons in the adult zebrafish brain: A quantitative analysis. Neuroscience 2007, 146, 679–696. [Google Scholar] [CrossRef]
Figure 1. Time evolutions of x(t) for the sine map. The initial condition of the red curves is x(0) = 0.8. The initial condition of the blue curves is x(0) = 0.3 for q = 0.2, q = 0.6, and q = 0.8; and x(0) = 0.801 for q = 1.
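The orbits in Figure 1 can be reproduced with a few lines of code. The sketch below is not the article's code: it assumes the common parameterization x(t+1) = q sin(πx(t)) and an arbitrary number of time steps, and it only reproduces the q = 1 panel with its two nearby initial conditions.

```python
# Minimal sketch (assumed map): iterate x(t+1) = q*sin(pi*x(t)) from two
# nearby initial conditions and plot both orbits, as in the q = 1 panel.
import numpy as np
import matplotlib.pyplot as plt

def sine_map_orbit(q, x0, n_steps=60):
    """Return the orbit x(0), x(1), ..., x(n_steps) of the assumed sine map."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        x[t + 1] = q * np.sin(np.pi * x[t])
    return x

q = 1.0
plt.plot(sine_map_orbit(q, 0.8), "r.-", label="x(0) = 0.8")
plt.plot(sine_map_orbit(q, 0.801), "b.-", label="x(0) = 0.801")
plt.xlabel("t"); plt.ylabel("x(t)"); plt.legend(); plt.show()
```

For q = 1 the two orbits separate after a few iterations despite the nearly identical initial conditions, which is the sensitivity to initial conditions visible in the figure.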
Figure 2. Information entropy, LMC complexity, and SDL complexity of the sine map as functions of q for 0.720 < q ≤ 1. For 0 ≤ q ≤ 0.720 (not shown), H = 0, C_LMC = 0, and C_SDL = 0, because x(t) approaches a stationary solution. As q increases, the variability of the asymptotic values of x(t) changes, as reflected in these three plots.
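A minimal sketch of how H, C_LMC, and C_SDL can be estimated from the asymptotic values of x(t) is given below. It assumes the usual definitions: H = −Σ p_i log₂ p_i over a histogram of the asymptotic values, disequilibrium D = Σ (p_i − 1/N)², C_LMC = H·D, and C_SDL = Δ(1 − Δ) with Δ = H/H_max; the bin count, logarithm base, and normalization used in the article may differ. Repeating the computation over a grid of q values yields curves like those in Figure 2.

```python
# Minimal sketch (standard definitions assumed): H, C_LMC, and C_SDL computed
# from a histogram of the asymptotic values of the assumed sine map.
import numpy as np

def complexity_measures(samples, n_bins=100):
    counts, _ = np.histogram(samples, bins=n_bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p_nz = p[p > 0]
    H = -np.sum(p_nz * np.log2(p_nz))        # information entropy (bits)
    delta = H / np.log2(n_bins)              # normalized disorder
    D = np.sum((p - 1.0 / n_bins) ** 2)      # disequilibrium
    return H, H * D, delta * (1.0 - delta)   # H, C_LMC, C_SDL

# Asymptotic values of x(t) for one value of q (loop over q to build Figure 2).
q, x, orbit = 0.9, 0.3, []
for t in range(5000):
    x = q * np.sin(np.pi * x)
    if t >= 1000:
        orbit.append(x)
print(complexity_measures(np.array(orbit)))
```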
Figure 3. Permutation entropy of the sine map as a function of q for d = 2 and d = 3, with 0.9 ≤ q ≤ 1.
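Permutation entropy follows the standard ordinal-pattern construction: windows of d consecutive values are mapped to the permutation that sorts them, and the Shannon entropy of the pattern frequencies is computed. The sketch below normalizes by log₂(d!); the orbit length and the normalization used in the article are assumptions.

```python
# Minimal sketch: permutation entropy of order d for the assumed sine map,
# normalized by log2(d!) so that the result lies in [0, 1].
import numpy as np
from collections import Counter
from math import factorial, log2

def permutation_entropy(series, d):
    patterns = Counter(tuple(np.argsort(series[i:i + d]))
                       for i in range(len(series) - d + 1))
    total = sum(patterns.values())
    H = -sum((c / total) * log2(c / total) for c in patterns.values())
    return H / log2(factorial(d))

q, x, orbit = 0.95, 0.3, []
for t in range(5000):
    x = q * np.sin(np.pi * x)
    if t >= 1000:
        orbit.append(x)
for d in (2, 3):
    print(d, permutation_entropy(np.array(orbit), d))
```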
Figure 4. Bifurcation diagram of the sine map for 0 ≤ q ≤ 1.
Figure 5. Bifurcation diagram of the sine map for 0 ≤ q ≤ 5.
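Diagrams like those in Figures 4 and 5 can be generated by sweeping q over the desired interval, iterating the assumed map x(t+1) = q sin(πx(t)) long enough to discard the transient, and plotting the surviving iterates at each q. A minimal sketch:

```python
# Minimal sketch (assumed map): bifurcation diagram obtained by sweeping q,
# discarding a transient, and plotting the remaining iterates at each q.
import numpy as np
import matplotlib.pyplot as plt

q_max = 1.0                         # set q_max = 5.0 to reproduce Figure 5
for q in np.linspace(0.0, q_max, 600):
    x = 0.3
    for _ in range(500):            # discard the transient
        x = q * np.sin(np.pi * x)
    xs = []
    for _ in range(100):            # keep the asymptotic points
        x = q * np.sin(np.pi * x)
        xs.append(x)
    plt.plot([q] * len(xs), xs, "k.", markersize=0.5)
plt.xlabel("q"); plt.ylabel("x"); plt.show()
```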
Figure 6. Lempel–Ziv complexity of the sine map for 0.75 ≤ q ≤ 1, after converting x(t) into b(t) with 800 bits. The algorithm LZ78 was used in the computation of LZ.
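For Figure 6, the orbit is first converted into a binary sequence b(t) and then parsed with LZ78, the complexity being the number of distinct phrases in the parse. The sketch below uses a hypothetical binarization rule (thresholding at the median of the asymptotic values); the article states only that 800 bits were used, so the threshold and any normalization of LZ are assumptions.

```python
# Minimal sketch: LZ78 phrase count of a binarized orbit of the assumed sine
# map. The threshold (median of the asymptotic values) is a hypothetical choice.
import numpy as np

def lz78_phrase_count(bits):
    """Number of distinct phrases in the LZ78 incremental parsing of a string."""
    dictionary, phrase, count = set(), "", 0
    for b in bits:
        phrase += b
        if phrase not in dictionary:
            dictionary.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)   # count a trailing incomplete phrase

q, x, orbit = 0.95, 0.3, []
for t in range(1800):
    x = q * np.sin(np.pi * x)
    if t >= 1000:
        orbit.append(x)                   # 800 asymptotic values, as in Figure 6
threshold = np.median(orbit)
b = "".join("1" if v > threshold else "0" for v in orbit)
print(lz78_phrase_count(b))
```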
Figure 7. In (a), all seven nodes have two edges; in (b), node 4 has three edges, and node 1 has one edge.
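Figure 7 contrasts a regular graph, in which every node has the same degree, with a graph whose degrees are heterogeneous. The sketch below uses hypothetical edge lists consistent with the caption, a 7-node ring for (a) and the same ring with edge {1,2} rewired to {2,4} for (b), and computes the Shannon entropy of the degree distribution.

```python
# Minimal sketch with hypothetical edge lists: degrees and degree-distribution
# entropy of the two 7-node graphs suggested by Figure 7.
from collections import Counter
from math import log2

graph_a = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 1)]   # ring
graph_b = [(2, 4), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 1)]   # {1,2} rewired to {2,4}

def degree_entropy(edges):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(deg)
    p = [c / n for c in Counter(deg.values()).values()]   # fraction of nodes per degree
    return dict(deg), -sum(pk * log2(pk) for pk in p)

for name, edges in (("(a)", graph_a), ("(b)", graph_b)):
    degrees, H = degree_entropy(edges)
    print(name, degrees, "degree entropy =", round(H, 3), "bits")
```

Under these hypothetical edge lists, graph (a) has zero degree entropy and graph (b) has about 1.15 bits, illustrating how degree heterogeneity raises degree-based structure entropies.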