
Localizing Synergies of Hidden Factors in Complex Systems: Resting Brain Networks and HeLa Gene Expression Profile as Case Studies

by Marlis Ontivero-Ortega 1,2, Gorana Mijatovic 3, Luca Faes 3,4, Fernando E. Rosas 5,6,7, Daniele Marinazzo 8 and Sebastiano Stramaglia 1,*
1 INFN, Sezione di Bari, Dipartimento Interateneo di Fisica, Università degli Studi di Bari Aldo Moro, 70126 Bari, Italy
2 Cuban Center for Neuroscience, Havana 53-72637112, Cuba
3 Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
4 Dipartimento di Ingegneria, Università di Palermo, 90128 Palermo, Italy
5 Department of Informatics, Center for Consciousness Science, Sussex AI, University of Sussex, Brighton BN1 9RH, UK
6 Center for Psychedelic Research, Centre for Complexity Science, Department of Brain Science, Imperial College London, London SW7 2AZ, UK
7 Center for Eudaimonia and Human Flourishing, University of Oxford, Oxford OX1 2JD, UK
8 Department of Data Analysis, Ghent University, 9000 Ghent, Belgium
* Author to whom correspondence should be addressed.
Entropy 2025, 27(8), 820; https://doi.org/10.3390/e27080820
Submission received: 28 May 2025 / Revised: 28 July 2025 / Accepted: 31 July 2025 / Published: 1 August 2025
(This article belongs to the Special Issue Entropy in Biomedical Engineering, 3rd Edition)

Abstract

Factor analysis is a well-known statistical method to describe the variability of observed variables in terms of a smaller number of unobserved latent variables called factors. Even though latent factors are conceptually independent of each other, their influence on the observed variables is often joint and synergistic. We propose to quantify the synergy of the joint influence of factors on the observed variables using O-information, a recently introduced metric to assess high-order dependencies in complex systems; in the proposed framework, latent factors and observed variables are jointly analyzed in terms of their joint informational character. Two case studies are reported: analyzing resting fMRI data, we find that DMN and FP networks show the highest synergy, consistent with their crucial role in higher cognitive functions; concerning HeLa cells, we find that the most synergistic gene is STK-12 (AURKB), suggesting that this gene is involved in controlling the HeLa cell cycle. We believe that our approach, representing a bridge between factor analysis and the field of high-order interactions, will find wide application across several domains.

1. Introduction

Utilizing latent variables to represent datasets provides a robust approach for identifying the hidden structure and underlying drivers of complex systems, thereby enhancing data manageability and interpretability. Factor analysis (FA) [1] is a powerful statistical method widely used in a variety of fields, including the behavioral sciences [2], social sciences [3], life sciences [4], physical sciences [5] and business [6], to describe variability among observed variables in terms of a smaller number of unobserved latent variables called factors. Specifically, observed variables are modeled as linear combinations of factors plus error or noise terms, and this method is extremely effective in reducing the set of relevant variables in large datasets. Factor analysis is closely related to principal component analysis [7] and therefore to singular value decomposition [8]. Principal component analysis entails a rotation of the variable space so as to capture the maximum variance in the new variables. The three methods become identical when the error terms, or equivalently the variability not explained by the common factors, all have the same variance [9].
Interestingly, even though latent variables derived from factor analysis (FA) are uncorrelated by design, they cooperate to produce effects on the observed variables. This makes it valuable to localize, within the complex system, those observed variables whose behavior relies on several factors, so that we can quantify the synergistic impact of these factors. Intuitively, we expect that such variables, whose role is to integrate the information from several factors, are those depending on more than one factor, as illustrated in Figure 1, where two factors influence the observed variables: the observed variables which depend on both factors are those encoding the joint action of the two factors, thus embodying the cooperation of the two latent variables. This work aims to quantify the cooperative effects of latent factors by using the concept of synergy, a notion recently developed within the framework of partial information decomposition [10], to identify variables responsible for integration in complex systems. In particular, we propose using the O-information metric [11] to quantify cooperation and localize synergistic effects in the complex system. We refer the reader to [12] for an in-depth overview of the informational architecture of complex systems and to [13] for a discussion of biases in O-information estimation. While another method for localizing higher-order effects in complex systems, detailed in [14], utilizes gradients of the O-information over groups of observed variables, our approach is distinctive in introducing latent variables to discard redundancy. This ensures that the net impact of these latent variables on the observed variables is solely synergistic.

2. Methods

Let us recall the definition of O-information [11], a metric measuring the balance between redundancy and synergy, the two basic types of high-order statistical dependencies. Let $\mathbf{x} = \{x_1, \ldots, x_n\}$ denote a set of $n$ stochastic variables, and let $\mathbf{x}_{-i}$ denote the set of all the variables in $\mathbf{x}$ but $x_i$. The O-information is defined as [11]:

$$O_{\mathrm{inf}}(\mathbf{x}) = (n-2)\,H(\mathbf{x}) + \sum_{i=1}^{n}\big[ H(x_i) - H(\mathbf{x}_{-i}) \big], \qquad (1)$$

where $H$ is the Shannon entropy:

$$H(\mathbf{x}) = -\int d\mathbf{x}\; p(\mathbf{x}) \log p(\mathbf{x}),$$

$p(\mathbf{x})$ being the probability density function of $\mathbf{x}$.
If $O_{\mathrm{inf}} > 0$, the system is redundancy-dominated. On the other hand, when $O_{\mathrm{inf}} < 0$, the dependencies are better explained as patterns that can be observed in the joint state of multiple variables but not in subsets of these; in other words, the system is synergy-dominated.
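For Gaussian variables, each entropy in the definition above reduces to a log-determinant of a covariance matrix, so the O-information can be estimated directly from data under a Gaussian (linear) model. A minimal sketch, assuming NumPy; the function names are ours, not from the paper:

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (in nats) of a Gaussian with covariance matrix `cov`."""
    cov = np.atleast_2d(cov)
    d = cov.shape[0]
    return 0.5 * (d * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def o_information(data):
    """O-information of the columns of `data` (samples x variables), estimated
    under a Gaussian model: (n-2) H(x) + sum_i [H(x_i) - H(x_{-i})]."""
    n = data.shape[1]
    cov = np.cov(data, rowvar=False)
    o = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        o += gaussian_entropy(cov[i, i]) - gaussian_entropy(cov[np.ix_(rest, rest)])
    return o
```

On a collider (two independent factors driving one target) this estimator returns a negative value, while on noisy copies of a common driver it returns a positive one, matching the redundancy/synergy interpretation above.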
Now, we recall the generative model for FA, i.e.,

$$\mathbf{x} = L\,\mathbf{f} + \boldsymbol{\eta},$$

where $\mathbf{x}$ is the $N$-dimensional vector of observed variables, $\mathbf{f}$ is a vector of $m$ latent factors, and $L$ is the $N \times m$ matrix of loadings, whilst $\boldsymbol{\eta}$ is a vector of $N$ Gaussian noise terms with variances $\boldsymbol{\sigma}$. Factors are assumed to be Gaussian, zero-mean, unit-variance independent variables; factors and noise terms are uncorrelated. Given a suitable number of samples of $\mathbf{x}$, both loadings and factor scores can be estimated by maximum likelihood.
For clarity, we concentrate now on the case of a group of latent factors influencing a single target variable. Let $f_1, f_2, \ldots, f_m$ be $m$ independent latent variables acting as drivers for the target

$$x = \sum_{i=1}^{m} L_i f_i + \eta,$$

where $\eta$ is a Gaussian noise term with variance $\sigma^2$. The structure corresponding to this equation is a collider, corresponding to net synergy. We recall that all $f$s are assumed to be zero-mean, unit-variance Gaussian variables and that $x$ has unit variance, i.e., $\sum_{i=1}^{m} L_i^2 + \sigma^2 = 1$. The mutual information between the observed variable $x$ and factor $f_i$ is given by $I(x; f_i) = -\frac{1}{2}\log(1 - L_i^2)$, with $I(x; f_i) \approx \frac{1}{2} L_i^2$ for small $L_i$; it follows that the influence of factors on the observed variable is controlled by the loadings $L_i$.
Now we turn to consider higher-order dependencies. Straightforward calculations, based on the formula for the entropy of a multivariate Gaussian distribution [15], provide the O-information for the group of $n = m + 1$ variables $\{f_1, \ldots, f_m, x\}$:

$$O_{\mathrm{inf}} = \frac{1}{2}\,\log \frac{\left(1 - \sum_{i=1}^{m} L_i^2\right)^{m-1}}{\prod_{i=1}^{m}\left(1 - \sum_{k \neq i} L_k^2\right)}. \qquad (2)$$

Note that this quantity is always less than zero or it vanishes, thus confirming the occurrence of net synergy. We remark that $O_{\mathrm{inf}}$ is zero if all $L$s vanish but one. For small $L$s, $O_{\mathrm{inf}}$ is of the order

$$O_{\mathrm{inf}} \sim -\sum_{i \neq k} L_i^2 L_k^2.$$

The formula above quantitatively supports the scenario described in Figure 1, i.e., an observed variable is synergistic for two latent factors only if it depends on both of them. In Appendix A, we report a theorem which generalizes the non-positivity of $O_{\mathrm{inf}}$, shown above for Gaussian variables, to the case of a general probability distribution, when a group of independent variables influences a target variable. This theorem is useful in applications where latent factors are inferred by Independent Component Analysis (ICA) [16].
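The closed-form expression for the collider is easy to check numerically. A sketch (the function name is ours) verifying its stated properties, namely that it vanishes when all loadings but one are zero and is otherwise non-positive:

```python
import numpy as np

def collider_o_inf(L):
    """Closed-form O-information of the multiplet (f_1, ..., f_m, x) for the
    collider x = sum_i L_i f_i + eta, with unit-variance x (so sum L_i^2 < 1)."""
    L = np.asarray(L, dtype=float)
    S = np.sum(L ** 2)
    m = L.size
    num = (m - 1) * np.log1p(-S)           # (m-1) * log(1 - sum_i L_i^2)
    den = np.sum(np.log1p(-(S - L ** 2)))  # sum_i log(1 - sum_{k != i} L_k^2)
    return 0.5 * (num - den)
```

For two equal loadings of 0.6 this gives roughly -0.19, while a single nonzero loading gives exactly zero: synergy appears only when the target genuinely depends on more than one factor.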
The calculation above suggests the following procedure to highlight synergy: (i) fix the number of latent factors; (ii) fit the FA model to the data and extract the latent factor scores; (iii) for each observed variable, evaluate $O_{\mathrm{inf}}$ for the multiplet constituted by that observed variable plus the latent factors, this $O_{\mathrm{inf}}$ representing the net synergy; (iv) the variables with the most negative values of $O_{\mathrm{inf}}$ are the most involved in synergistic behavior; in other words, they are responsible for integration in the system.
We remark that, to evaluate $O_{\mathrm{inf}}$, one may either (i) take the estimated loading matrices and use Equation (2), or (ii) perform a direct evaluation by first estimating samples of the factors and then evaluating the $O_{\mathrm{inf}}$ of factors and measured variables using Equation (1); in this work, we used (ii).
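Option (ii) of the procedure can be sketched as follows: given estimated factor scores and the observed data, evaluate the O-information of each (factors, variable) multiplet. A minimal Gaussian implementation, assuming NumPy and repeating the Gaussian estimator for self-containment; function names are ours:

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (in nats) of a Gaussian with covariance `cov`."""
    cov = np.atleast_2d(cov)
    d = cov.shape[0]
    return 0.5 * (d * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def o_information(data):
    """Gaussian O-information of the columns of `data` (samples x variables)."""
    n = data.shape[1]
    cov = np.cov(data, rowvar=False)
    o = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        o += gaussian_entropy(cov[i, i]) - gaussian_entropy(cov[np.ix_(rest, rest)])
    return o

def synergy_per_variable(F, X):
    """Step (iii): for each observed variable (column of X), the O-information
    of the multiplet (factor scores F, variable). Most negative = most synergistic."""
    return np.array([o_information(np.column_stack([F, X[:, j]]))
                     for j in range(X.shape[1])])
```

On synthetic data, a variable loading on two factors scores clearly negative, a variable loading on a single factor scores near zero, and a pure-noise variable scores near zero, as the closed-form expression predicts.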

3. Results

In this section, we describe two applications of the proposed framework in biomedical engineering: the analysis of resting-state fMRI time series of healthy subjects, and the analysis of gene expression data from HeLa cells.

3.1. fMRI Data

We use the public dataset described in Poldrack et al. [17]. This dataset was obtained from the OpenfMRI database, with accession number ds000030, and was already used in [18]. We use resting-state fMRI data from 121 healthy controls and 152 time points. The demographics are reported in the original paper.
Data were preprocessed with FSL (FMRIB Software Library v5.0) [19]. The volumes were corrected for motion, after which slice timing correction was applied to correct for temporal alignment. All voxels were spatially smoothed with a 6 mm FWHM (full width at half maximum) isotropic Gaussian kernel and, after intensity normalization, a band-pass filter was applied between 0.01 and 0.08 Hz. In addition, linear and quadratic trends were removed. We next regressed out the motion time courses, the average cerebrospinal fluid signal, and the average white matter signal. Global signal regression was not performed. Data were transformed to the MNI152 template, such that a given voxel had a volume of 3 mm × 3 mm × 3 mm. Finally, we averaged the signal in 268 ROIs. In order to localize the results within the intrinsic connectivity networks of the resting brain, we assigned each of these ROIs to one of the nine resting-state networks (seven cortical networks, plus subcortical regions and cerebellum) as described in [20]. Time series from subjects were first z-scored and then concatenated in a matrix with 268 observed variables and 152 × 121 = 18,392 samples.
We applied MATLAB's factor analysis function factoran to fit this matrix, with the number of hidden factors fixed at 20 (we found that the results are robust to slight variations in the number of factors). Some of the factors thus obtained were recognized to be related to common trends in the data; we therefore removed from the subsequent analysis the 3 factors most correlated with the common trend, and were left with 17 factors.
Then, for each region, we evaluated the O-information of that region and the 17 latent factors; the statistical significance of $O_{\mathrm{inf}}$ can be assessed using surrogates obtained by random permutation of the samples of the target region.
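The permutation-surrogate test can be sketched as follows: shuffling the samples of the target destroys its dependence on the factors while preserving its marginal distribution, yielding a null distribution for $O_{\mathrm{inf}}$. A minimal sketch, assuming NumPy (function names are ours); note that the $(2\pi e)$ constants of the Gaussian entropies cancel in the O-information, so only log-determinants are needed:

```python
import numpy as np

def o_information(data):
    """Gaussian O-information of the columns of `data` (samples x variables);
    only log-determinants are needed, since the entropy constants cancel."""
    n = data.shape[1]
    cov = np.cov(data, rowvar=False)
    ld = lambda c: np.linalg.slogdet(np.atleast_2d(c))[1]
    o = (n - 2) * ld(cov)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        o += ld(cov[i, i]) - ld(cov[np.ix_(rest, rest)])
    return 0.5 * o

def surrogate_pvalue(F, x, n_surr=200, seed=0):
    """One-sided permutation test: is O_inf(F, x) more negative than expected
    when the target's samples are permuted, breaking its link to the factors?"""
    rng = np.random.default_rng(seed)
    obs = o_information(np.column_stack([F, x]))
    null = [o_information(np.column_stack([F, rng.permutation(x)]))
            for _ in range(n_surr)]
    p = (1 + sum(v <= obs for v in null)) / (1 + n_surr)
    return obs, p
```

A target genuinely driven by two factors yields a clearly negative observed value and a small p-value, while an unrelated target yields an O-information near zero.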
The results are shown in Figure 2. We find that FP and DMN show the highest synergy and that, in general, synergy is higher in associative areas supporting cognitive functions and lower in sensory-motor areas, consistent with previous analyses which used other tools to evaluate higher-order dependencies [21,22,23].

3.2. HeLa Data

We apply the proposed approach to data from the HeLa cell culture. The data correspond to 94 genes and 48 time points [24], with a one-hour interval separating two successive readings (the HeLa cell cycle lasts 16 h). The 94 genes were selected, from the full dataset described in [25], on the basis of their association with cell cycle regulation and tumor development. Since the number of samples is less than the number of variables, in this case we use principal component analysis instead of FA. As described in [26], the first two principal components show exponentially decaying correlations, whilst the third principal component appears to be connected with the cell cycle, as it oscillates with a period close to 16 h. We discard the first principal component and consider here the second and third components, seeking the genes which exhibit synergy with respect to these two factors.
The results are depicted in Figure 3; 24 genes out of 94 show a synergy which is statistically higher than that from surrogates after Bonferroni correction. The highest synergy is obtained for STK-12 (Aurora Kinase B), which is known to work as a transcriptional brake controlling the expression of genes involved in cellulase production [27]. A synergistic interaction between Aurora B and ZAK has been found in triple-negative breast cancer [28]: our analysis shows that it may also play a key role in the control of the HeLa cell cycle.

4. Discussion

Many complex systems can be effectively described by latent factors that influence observable variables: this representation helps eliminate data redundancy, so as to reveal synergies that might otherwise remain hidden. In this work, we introduce a novel perspective on analyzing higher-order dependencies. We propose evaluating the informational characteristics of circuits that include both latent factors and observed variables. This method allows us to pinpoint specific variables within the system whose behavior synergistically depends on these factors, thus identifying them as information integrators. Specifically, we suggest using O-information to assess the informational character of groups of variables comprising the latent factors and each individual observed variable. This enables us to quantify the synergistic role played by each observed variable.
We applied the proposed methodology to fMRI data from healthy individuals and found less synergy in sensory networks than in networks which support complex cognitive processes such as the planning and execution of goal-directed behavior [29]. Applying our method to genetic data from HeLa cells, we found that the most synergistic gene is STK-12, whose synergistic role had already been assessed by an analysis performed at the level of observed gene expressions.
We believe this approach can offer further insights into complex systems that lend themselves to a suitable representation using latent variables.
Further research will focus on integrating our proposed approach with other methods for inferring latent factors, such as ICA, as well as tackling the crucial problem of selecting the most appropriate factors for analysis. Indeed, choosing the optimal number of factors can significantly impact the results; there is no single perfect method to fix it, and in practice it is often a combination of statistical criteria, theoretical considerations, and practical judgment. We remark that increasing the number of factors cannot decrease the synergy of any observed variable, according to Equation (2); however, the relative synergy of two observed variables may change. A topic of further investigation will be the search for a protocol to select a suitable number of latent variables for measuring synergies. Another interesting area of study will be to explore the relationship between the synergies of factors (as introduced here) and those measurable on groups of observed variables. This essentially means investigating the connection between underlying mechanisms and emergent behaviors within this context [30].

Author Contributions

Methodology, M.O.-O. and S.S.; Investigation, M.O.-O. and F.E.R.; Writing—original draft, M.O.-O., G.M., L.F., D.M. and S.S.; Software, S.S.; Writing—revised draft, M.O.-O., G.M., L.F., F.E.R., D.M. and S.S.; Supervision, G.M., L.F., D.M. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the project “HONEST—High-Order Dynamical Networks in Computational Neuroscience and Physiology: an Information-Theoretic Framework”, Italian Ministry of University and Research (funded by MUR, PRIN 2022, code 2022YMHNPY, CUP: B53D23003020006) (M.O.-O., L.F. and S.S.); and by the project “Higher-order complex systems modeling for personalized medicine”, Italian Ministry of University and Research (funded by MUR, PRIN 2022-PNRR, code P2022JAYMH, CUP: H53D23009130001) (S.S.).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available at https://openfmri.org (accessed on 28 May 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Here we prove a theorem about the non-positivity of the O-information, for arbitrary probability distributions, when a group of independent variables influences a target variable. Consider $n$ random variables $X^n = (X_1, \ldots, X_n)$ that follow a joint distribution with the following structure: $p(\mathbf{x}) = p(x_n \mid x^{n-1}) \prod_{i=1}^{n-1} p(x_i)$.
Statement 1.
$O_{\mathrm{inf}}(X^n) \leq 0$.
Proof. 
Firstly, we recall that the O-information is the difference between the total correlation (TC) and the dual total correlation (DTC) [11], i.e., $O_{\mathrm{inf}}(X^n) = TC(X^n) - DTC(X^n)$. It can be shown that

$$TC(X^n) = \sum_{j=2}^{n} I(X^{j-1}; X_j),$$

$$DTC(X^n) = I(X^{n-1}; X_n) + \sum_{k=2}^{n-1} I(X^{k-1}; X_k \mid X_{k+1}^n),$$

where $X_i^j = (X_i, \ldots, X_j)$ is a shorthand notation (with $X^j \equiv X_1^j$). Then, because of the joint distribution of $X^n$, it is clear that

$$I(X^{i-1}; X_i) = 0$$

for all $i < n$. This implies that, for this particular type of joint distribution, we have

$$TC(X^n) = I(X^{n-1}; X_n).$$

Note that this is the first term in the expression for the DTC above, and hence this implies $TC(X^n) \leq DTC(X^n)$ for this type of distribution. Moreover, we can represent the value of the O-information in this case as follows:

$$O_{\mathrm{inf}}(X^n) = TC(X^n) - DTC(X^n) = -\sum_{k=2}^{n-1} I(X^{k-1}; X_k \mid X_{k+1}^n) \leq 0,$$
which proves the statement. □

References

  1. Everitt, B.S. An Introduction to Latent Variable Models; Chapman and Hall: London, UK, 1984. [Google Scholar]
  2. Carroll, J.B. Human Cognitive Abilities: A Survey of Factor-Analytic Studies; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
  3. Stevens, J.P. Applied Multivariate Statistics for the Social Sciences; Psychology Press: London, UK, 2002. [Google Scholar]
  4. Harman, H.H. Modern Factor Analysis; The University of Chicago Press: Chicago, IL, USA, 1976. [Google Scholar]
  5. Love, D.; Hallbauer, D.K.; Amos, A.; Hranova, R.K. Factor analysis as a tool in groundwater quality management: Two southern African case studies. Phys. Chem. Earth 2004, 29, 1135. [Google Scholar] [CrossRef]
  6. Stewart, D.W. The application and misapplication of factor analysis in marketing research. J. Mark. Res. 1981, 18, 51. [Google Scholar] [CrossRef]
  7. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification; John Wiley & Sons: New York, NY, USA, 2001. [Google Scholar]
  8. Wall, M.E.; Rechtsteiner, A.; Rocha, L. Singular value decomposition and principal component analysis. In A Practical Approach to Microarray Data Analysis; Berrar, D., Dubitzky, W., Granzow, M., Eds.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2003; pp. 91–109. [Google Scholar]
  9. Tipping, M.E.; Bishop, C.M. Probabilistic principal component analysis. J. R. Stat. Soc. Ser. B 1999, 61, 611. [Google Scholar] [CrossRef]
  10. Williams, P.L.; Beer, R.D. Nonnegative decomposition of multivariate information. arXiv 2010, arXiv:1004.2515. [Google Scholar] [CrossRef]
  11. Rosas, F.E.; Mediano, P.A.M.; Gastpar, M.; Jensen, H.J. Quantifying high-order interdependencies via multivariate extensions of the mutual information. Phys. Rev. E 2019, 100, 032305. [Google Scholar] [CrossRef]
  12. Luppi, A.I.; Rosas, F.E.; Mediano, P.A.; Menon, D.K.; Stamatakis, E.A. Information decomposition and the informational architecture of the brain. Trends Cogn. Sci. 2024, 28, 352–368. [Google Scholar] [CrossRef]
  13. Gehlen, J.; Li, J.; Hourican, C.; Tassi, S.; Mishra, P.P.; Lehtimäki, T.; Kähönen, M.; Raitakari, O.; Bosch, J.A.; Quax, R. Bias in O-Information Estimation. Entropy 2024, 26, 837. [Google Scholar] [CrossRef] [PubMed]
  14. Scagliarini, T.; Nuzzi, D.; Antonacci, Y.; Faes, L.; Rosas, F.E.; Marinazzo, D.; Stramaglia, S. Gradients of O-information: Low-order descriptors of high-order dependencies. Phys. Rev. Res. 2023, 5, 013025. [Google Scholar] [CrossRef]
  15. Gut, A. An Intermediate Course in Probability; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  16. Hyvärinen, A.; Hurri, J.; Hoyer, P.O. Independent Component Analysis. In Natural Image Statistics. Computational Imaging and Vision; Springer: London, UK, 2009; Volume 39. [Google Scholar]
  17. Poldrack, R.A.; Congdon, E.; Triplett, W.; Gorgolewski, K.J.; Karlsgodt, K.H.; Mumford, J.A.; Sabb, F.W.; Freimer, N.B.; London, E.D.; Cannon, T.D.; et al. A phenome-wide examination of neural and cognitive function. Sci. Data 2016, 3, 160110. [Google Scholar] [CrossRef]
  18. Sannino, S.; Stramaglia, S.; Lacasa, L.; Marinazzo, D. Visibility graphs for fMRI data: Multiplex temporal graphs and their modulations across resting-state networks. Netw. Neurosci. 2017, 1, 208–221. [Google Scholar] [CrossRef]
  19. Jenkinson, M.; Beckmann, C.F.; Behrens, T.E.; Woolrich, M.W.; Smith, S.M. FSL. NeuroImage 2012, 62, 782–790. [Google Scholar] [CrossRef]
  20. Yeo, B.T.; Krienen, F.M.; Sepulcre, J.; Sabuncu, M.R.; Lashkari, D.; Hollinshead, M.; Roffman, J.L.; Smoller, J.W.; Zöllei, L.; Polimeni, J.R.; et al. The organization of the human cerebral cortex estimated by intrinsic functional connectivity. J. Neurophysiol. 2011, 106, 1125–1165. [Google Scholar] [CrossRef]
  21. Luppi, A.I.; Mediano, P.A.; Rosas, F.E.; Holland, N.; Fryer, T.D.; O’Brien, J.T.; Rowe, J.B.; Menon, D.K.; Bor, D.; Stamatakis, E.A. A synergistic core for human brain evolution and cognition. Nat. Neurosci. 2022, 25, 771–782. [Google Scholar] [CrossRef]
  22. Scagliarini, T.; Sparacino, L.; Faes, L.; Marinazzo, D.; Stramaglia, S. Gradients of O-information highlight synergy and redundancy in physiological applications. Front. Netw. Physiol. 2024, 3, 1335808. [Google Scholar] [CrossRef] [PubMed]
  23. Varley, T.F.; Pope, M.; Faskowitz, J.; Sporns, O. Multivariate information theory uncovers synergistic subsystems of the human cerebral cortex. Commun. Biol. 2023, 6, 451. [Google Scholar] [CrossRef] [PubMed]
  24. Whitfield, M.L.; Sherlock, G.; Saldanha, A.J.; Murray, J.I.; Ball, C.A.; Alexander, K.E.; Matese, J.C.; Perou, C.M.; Hurt, M.M.; Brown, P.O. Identification of genes periodically expressed in the human cell cycle and their expression in tumors. Mol. Biol. Cell 2002, 13, 1977. [Google Scholar] [CrossRef]
  25. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 1970, 12, 55. [Google Scholar] [CrossRef]
  26. Zamparo, M.; Stramaglia, S.; Banavar, J.R.; Maritan, A. Inverse problem for multivariate time series using dynamical latent variables. Physica A 2012, 391, 3159–3169. [Google Scholar] [CrossRef]
  27. Lin, L.; Wang, S.; Li, X.; He, Q.; Benz, J.P.; Tian, C. STK-12 acts as a transcriptional brake to control the expression of cellulase-encoding genes in Neurospora crassa. PLoS Genet. 2019, 15, e1008510. [Google Scholar] [CrossRef]
  28. Tang, J.; Gautam, P.; Gupta, A.; He, L.; Timonen, S.; Akimov, Y.; Wang, W.; Szwajda, A.; Jaiswal, A.; Turei, D.; et al. Network pharmacology modeling identifies synergistic Aurora B and ZAK interaction in triple-negative breast cancer. NPJ Syst. Biol. Appl. 2019, 5, 20. [Google Scholar] [CrossRef]
  29. Hwang, E.J.; Sato, T.R.; Sato, T.K. A canonical scheme of bottom-up and top-down information flows in the frontoparietal network. Front. Neural Circuits 2021, 15, 691314. [Google Scholar] [CrossRef] [PubMed]
  30. Rosas, F.E.; Mediano, P.A.; Luppi, A.I.; Varley, T.F.; Lizier, J.T.; Stramaglia, S.; Jensen, H.J.; Marinazzo, D. Disentangling high-order mechanisms and high-order behaviours in complex systems. Nat. Phys. 2022, 18, 476–477. [Google Scholar] [CrossRef]
Figure 1. Diagram intuitively representing the proposed approach. Two hidden factors influence the observed variables: the overlap between the two loading profiles L 1 and L 2 corresponds to variables which are synergistic for factors f 1 and f 2 .
Figure 2. The opposite of the O-information (measuring net synergy) is plotted for the 268 regions in the fMRI dataset.
Figure 3. The opposite of the O-information (measuring net synergy) is plotted for the 24 genes in the HeLa dataset whose synergy is recognized as statistically significant against surrogates after Bonferroni correction. The highest synergy is obtained in the correspondence of the gene STK12.

