Joint Topology Learning and Latent Input Identification Using Spatio-Temporally Linear Structured SEM
Abstract
1. Introduction
2. Related Work
2.1. Sparse Structural Equation Modeling
2.2. Robust Inference Under Measurement Uncertainty
2.3. Blind Identification and Joint Recovery
3. Proposed Algorithm
3.1. SEM Model
- A single exogenous stimulus may affect a subset of nodes, and a single node may be influenced by multiple distinct stimuli.
- Each stimulus typically persists over a specific duration, with onset and offset times that vary across different inputs.
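The two modeling assumptions above can be exercised with a toy generator. This is a minimal sketch under the standard linear SEM form Y = SY + U + E; the symbols S (adjacency), U (latent inputs), and E (noise) are our placeholders, since the paper's own notation was lost in extraction:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, Q = 10, 200, 2  # nodes, time samples, latent stimuli

# Sparse adjacency with zero diagonal (no self-loops), rescaled so the
# spectral norm is below 1 and (I - S) is guaranteed invertible.
S = rng.random((N, N)) * (rng.random((N, N)) < 0.15)
np.fill_diagonal(S, 0.0)
s = np.linalg.norm(S, 2)
if s > 0:
    S *= 0.5 / s

# Each stimulus hits a small subset of nodes and persists over one
# contiguous on/off window, so U is row-sparse and piecewise constant.
U = np.zeros((N, T))
for _ in range(Q):
    nodes = rng.choice(N, size=2, replace=False)
    on, off = sorted(rng.choice(T, size=2, replace=False))
    U[np.ix_(nodes, range(on, off))] = rng.normal(1.0, 0.1)

E = 0.01 * rng.normal(size=(N, T))          # observation noise
Y = np.linalg.solve(np.eye(N) - S, U + E)   # Y = SY + U + E
```

A node hit by two overlapping stimuli simply accumulates both contributions, matching the first bullet.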
3.2. Problem Formulation
- The sparsity penalty on the adjacency matrix promotes a sparse network topology, consistent with the observation that most real-world networks contain a relatively small number of direct causal links, so the estimate highlights the major dependencies.
- The nuclear-norm penalty encourages a low-rank structure in the input matrix, reflecting the assumption that the network dynamics are driven by a limited number of latent stimuli.
- The row-sparsity penalty identifies the specific nodes that serve as the primary entry points for exogenous influences.
- The temporal-difference penalty enforces piecewise temporal smoothness, ensuring that the identified inputs persist over meaningful durations rather than manifesting as transient noise.
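The four penalties above sit on top of a least-squares fidelity term. As a hedged sketch (the extracted text lost the original notation, so S, U, Y, and the regularization weights below are placeholder symbols, not necessarily the paper's), the composite problem has the form:

```latex
\min_{\mathbf{S},\,\mathbf{U}}\;
\tfrac{1}{2}\,\lVert \mathbf{Y}-\mathbf{S}\mathbf{Y}-\mathbf{U}\rVert_F^2
+\lambda_1\lVert \mathbf{S}\rVert_1
+\lambda_*\lVert \mathbf{U}\rVert_*
+\lambda_{2,1}\lVert \mathbf{U}\rVert_{2,1}
+\lambda_{\mathrm{t}}\sum_{t}\lVert \mathbf{u}_{t+1}-\mathbf{u}_t\rVert_1
\quad\text{s.t.}\quad \operatorname{diag}(\mathbf{S})=\mathbf{0},
```

where the four regularizers correspond, in order, to the sparsity, low-rank, row-sparsity, and piecewise-smoothness bullets.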
Discussion on Identifiability and Well-Posedness
- Violation of topological constraints: The adjacency matrix is constrained to have zero diagonal entries (no self-loops) and is regularized by a sparsity-promoting norm. If a transformation attempts to absorb part of the exogenous input into the adjacency matrix, the resulting matrix would generally acquire non-zero diagonal elements or become densely populated, directly conflicting with the no-self-loop constraint and incurring a large sparsity penalty.
- Violation of input structure: Conversely, transferring endogenous network effects into the input matrix would disrupt its low-rank and piecewise smooth temporal structure. The nuclear-norm penalty would increase because the rank of the perturbed input matrix would likely exceed the true number of latent sources Q. Moreover, the temporal consistency term would penalize any newly introduced abrupt changes that are not characteristic of the actual exogenous stimuli.
3.3. Numerical Method
3.3.1. Update of
3.3.2. Update of
- Dual Update (Projection Step): The dual variable is updated by a gradient-ascent step and then projected back onto its feasible set via element-wise clamping. The step uses the extrapolated primal variable from the previous iteration to ensure convergence stability.
- Primal Update (Proximal Step): The primal update combines the quadratic fidelity term and the row-sparsity penalty. First, an intermediate vector is computed by solving the optimality condition of the quadratic part of the Lagrangian; the primal variable is then obtained by applying the proximal operator (block soft-thresholding) to this intermediate vector.
- Extrapolation Step: The extrapolated variable used in the next dual step is formed by over-relaxing the current primal iterate with a relaxation parameter, which stabilizes the interaction between the primal and dual variables and ensures convergence to the saddle point of the subproblem.
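The three steps above follow the primal-dual hybrid gradient (Chambolle-Pock) pattern. A minimal sketch, assuming a generic subproblem of the form min_U 0.5||U - B||_F^2 + lam_row*||U||_{2,1} + lam_tv*||DU||_1 with D the temporal difference operator (function names, step sizes, and the exact subproblem are our illustrative assumptions, not the paper's):

```python
import numpy as np

def block_soft_threshold(V, tau):
    """Proximal operator of tau * ||.||_{2,1}: shrink each row toward zero."""
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    return V * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def pdhg_input_update(B, lam_row=0.5, lam_tv=0.2, n_iter=300):
    """PDHG sketch for min_U 0.5||U - B||_F^2 + lam_row*||U||_{2,1}
    + lam_tv*||DU||_1, where D differences U along time (columns)."""
    N, T = B.shape
    sigma = tau = 0.49            # sigma * tau * ||D||^2 <= 0.49^2 * 4 < 1
    U = B.copy()
    U_bar = U.copy()
    Z = np.zeros((N, T - 1))
    for _ in range(n_iter):
        # Dual update: ascent step, then element-wise clamping (projection
        # onto the l_inf ball of radius lam_tv), using the extrapolated U_bar.
        Z = np.clip(Z + sigma * np.diff(U_bar, axis=1), -lam_tv, lam_tv)
        # Adjoint of the difference operator.
        DtZ = np.zeros_like(U)
        DtZ[:, :-1] -= Z
        DtZ[:, 1:] += Z
        # Primal update: the optimality condition of the quadratic part gives
        # an intermediate point V, followed by block soft-thresholding.
        V = (U - tau * DtZ + tau * B) / (1.0 + tau)
        U_new = block_soft_threshold(V, tau * lam_row / (1.0 + tau))
        # Extrapolation (relaxation parameter theta = 1) for the next dual step.
        U_bar = 2.0 * U_new - U
        U = U_new
    return U
```

Rows whose energy falls below the row-sparsity threshold are driven exactly to zero, while active rows are retained and temporally smoothed.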
3.3.3. Update of
3.3.4. Update of
| Algorithm 1: Major steps of proposed XLS-SEM |
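The outer loop of Algorithm 1 alternates over the block updates of Sections 3.3.1-3.3.4. A skeleton of that alternating scheme, reduced for brevity to two illustrative blocks (a proximal-gradient step for the adjacency matrix and block soft-thresholding for the input matrix; the low-rank and temporal terms are omitted, and all names and weights here are our assumptions):

```python
import numpy as np

def soft_threshold(X, tau):
    """Element-wise shrinkage, the proximal operator of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def xls_sem_sketch(Y, lam_s=0.1, lam_row=0.5, n_outer=20):
    """Block-coordinate sketch: alternate an S-update and a U-update."""
    N, T = Y.shape
    S = np.zeros((N, N))
    U = np.zeros((N, T))
    step = 1.0 / max(np.linalg.norm(Y @ Y.T, 2), 1e-12)  # 1/Lipschitz of the S-block
    for _ in range(n_outer):
        # S-update: proximal-gradient step on
        # 0.5*||Y - SY - U||_F^2 + lam_s*||S||_1, with the no-self-loop
        # constraint enforced by zeroing the diagonal (the exact proximal
        # map of l1 plus the {diag(S) = 0} indicator).
        grad = -(Y - S @ Y - U) @ Y.T
        S = soft_threshold(S - step * grad, step * lam_s)
        np.fill_diagonal(S, 0.0)
        # U-update: with S fixed, fit the residual with a row-sparse input.
        R = Y - S @ Y
        norms = np.linalg.norm(R, axis=1, keepdims=True)
        U = R * np.maximum(1.0 - lam_row / np.maximum(norms, 1e-12), 0.0)
    return S, U
```

Each block update is the exact minimizer (or a descent step) of the objective with the other block fixed, so the sketched objective is nonincreasing across outer iterations.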
4. Numerical Experiments
- TLS-SEM and sTLS-SEM [20]: Robust variants designed to account for errors in variables. TLS-SEM addresses measurement noise, while sTLS-SEM further imposes sparsity regularization on the input matrix.
4.1. Synthetic Data
- Mean squared error (MSE) of the adjacency matrix, computed between the estimated and ground-truth adjacency matrices, evaluating topology recovery.
- MSE of the signal reconstruction (denoising), computed between the estimated and ground-truth output matrices.
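Both metrics reduce to one helper. This sketch assumes a Frobenius-norm MSE normalized by the ground-truth energy, which is one common convention (the paper's exact formulas were lost in extraction):

```python
import numpy as np

def mse(est, true):
    """Normalized Frobenius-norm error ||est - true||_F^2 / ||true||_F^2."""
    return np.linalg.norm(est - true) ** 2 / np.linalg.norm(true) ** 2
```

The same function scores topology recovery (adjacency matrices) and denoising (output matrices).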
4.1.1. Visual Illustration and Convergence
4.1.2. Impact of Input and Output SNR
4.1.3. Impact of Sample Size T
4.1.4. Impact of Latent Input Complexity (Q and P)
4.2. Real-World Dataset
4.2.1. Diabetes Clinical Records
4.2.2. Beijing Air Quality Data
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| SEM | Structural Equation Modeling |
| ADMM | Alternating Direction Method of Multipliers |
| MSE | Mean Squared Error |
| SNR | Signal-to-Noise Ratio |
| PDHG | Primal-Dual Hybrid Gradient |
References
- Shuman, D.I.; Narang, S.K.; Frossard, P.; Ortega, A.; Vandergheynst, P. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag. 2013, 30, 83–98.
- Leus, G.; Marques, A.G.; Moura, J.M.F.; Ortega, A.; Shuman, D.I. Graph signal processing: History, development, impact, and outlook. IEEE Signal Process. Mag. 2023, 40, 49–60.
- Yan, Y.; Hou, J.; Song, Z.; Kuruoglu, E.E. Signal processing over time-varying graphs: A systematic review. arXiv 2024, arXiv:2412.00462.
- Sandryhaila, A.; Moura, J.M.F. Discrete signal processing on graphs. IEEE Trans. Signal Process. 2013, 61, 1644–1656.
- Ortega, A.; Frossard, P.; Kovacevic, J.; Moura, J.M.F.; Vandergheynst, P. Graph Signal Processing: Overview, Challenges, and Applications. Proc. IEEE 2018, 106, 808–828.
- Kaplan, D. Structural Equation Modeling: Foundations and Extensions, 2nd ed.; Sage Publications: Thousand Oaks, CA, USA, 2009.
- Bentler, P.M.; Weeks, D.G. Linear structural equations with latent variables. Psychometrika 1980, 45, 289–308.
- Jöreskog, K.G. A general method for estimating a linear structural equation system. In Structural Equation Modeling; Academic Press: New York, NY, USA, 1973; pp. 85–112.
- Friston, K.J. Functional and effective connectivity: A review. Brain Connect. 2011, 1, 13–36.
- Giannakis, G.B.; Shen, Y.; Karanikolas, G.V. Monitoring and optimization of cyber-physical networks: A graph signal processing approach. IEEE Signal Process. Mag. 2018, 35, 34–46.
- Mei, J.; Moura, J.M.F. Signal processing on graphs: Estimating the structure of a graph. IEEE Trans. Signal Process. 2017, 65, 2045–2058.
- Baingana, B.; Mateos, G.; Giannakis, G.B. Proximal-gradient algorithms for tracking cascades over social networks. IEEE J. Sel. Top. Signal Process. 2014, 8, 563–575.
- Cai, X.; Bazerque, J.A.; Giannakis, G.B. Inference of gene regulatory networks with sparse structural equation models exploiting genetic perturbations. PLoS Comput. Biol. 2013, 9, e1003068.
- Giannakis, G.B.; Shen, Y.; Karanikolas, G.V. Topology identification and learning over graphs: Accounting for nonlinearities and dynamics. Proc. IEEE 2018, 106, 787–807.
- Goldberger, A.S. Structural equation methods in the social sciences. Econometrica 1972, 40, 979–1001.
- Muthén, B. A general structural equation model with dichotomous, ordered categorical, and continuous latent variable indicators. Psychometrika 1984, 49, 115–132.
- Ning, X.; Karypis, G. SLIM: Sparse linear methods for top-N recommender systems. In Proceedings of the 2011 IEEE 11th International Conference on Data Mining (ICDM), Vancouver, BC, Canada, 11–14 December 2011; pp. 497–506.
- Browne, M.W. Generalized least squares estimators in the analysis of covariance structures. S. Afr. Stat. J. 1974, 8, 1–24.
- Bollen, K.A. An alternative two stage least squares (2SLS) estimator for latent variable equations. Psychometrika 1996, 61, 109–121.
- Ceci, E.; Shen, Y.; Giannakis, G.B.; Barbarossa, S. Graph-based learning under perturbations via total least-squares. IEEE Trans. Signal Process. 2020, 68, 2870–2882.
- Shames, I.; Teixeira, A.M.H.; Sandberg, H.; Johansson, K.H. Distributed identification of network topology with noisy node data. In Proceedings of the 51st IEEE Conference on Decision and Control (CDC), Maui, HI, USA, 10–13 December 2012; pp. 5205–5210.
- Preti, M.G.; Bolton, T.A.W.; Griffa, A.; Van De Ville, D. Graph signal processing for neuroimaging to reveal dynamics of brain structure-function coupling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; pp. 1–5.
- Mazurek, K.; Hohol, M. Revisiting information cascades in online social networks. Mathematics 2024, 13, 77.
- Segarra, S.; Marques, A.G.; Mateos, G.; Ribeiro, A. Blind identification of graph filters. IEEE Trans. Signal Process. 2017, 65, 1146–1159.
- Candès, E.J.; Li, X.; Ma, Y.; Wright, J. Robust principal component analysis? J. ACM 2011, 58, 1–37.
- Thanou, D.; Shuman, D.I.; Frossard, P. Learning parametric dictionaries for graph signals. IEEE Trans. Signal Process. 2017, 65, 2517–2530.
- Tibshirani, R.; Saunders, M.; Rosset, S.; Zhu, J. Sparsity and smoothness via the fused lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 2005, 67, 91–108.
- Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
- Wang, Y.; Liu, J.; Zhang, X. Low-rank matrix recovery via nonconvex optimization and ADMM algorithm. Mathematics 2023, 11, 652.
- Hong, M.; Luo, Z.Q.; Razaviyayn, M. Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems. SIAM J. Optim. 2016, 26, 337–364.
- Liu, Q.; Shen, Z.; Gu, Y. Linearized ADMM for nonconvex nonsmooth optimization with convergence analysis. IEEE Access 2019, 7, 76131–76144.
- Bollen, K.A. Structural Equations with Latent Variables; John Wiley & Sons: New York, NY, USA, 1989.
- Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 1996, 58, 267–288.
- Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
- Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 2011, 40, 120–145.
- Kahn, M. Diabetes Data Set. UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu/dataset/34/diabetes (accessed on 15 February 2026).
| Method | Precision | Recall | F1-Score | MSE (Adjacency) |
|---|---|---|---|---|
| LS-SEM | 0.233 | 0.355 | 0.281 | 0.228 |
| TLS-SEM | 0.320 | 0.316 | 0.318 | 0.218 |
| sTLS-SEM | 0.261 | 0.303 | 0.280 | 0.193 |
| XLS-SEM (Ours) | 0.352 | 0.329 | 0.340 | 0.184 |
Share and Cite
Zhou, J.; Yang, R.; Shi, X.; Feng, S. Joint Topology Learning and Latent Input Identification Using Spatio-Temporally Linear Structured SEM. Mathematics 2026, 14, 837. https://doi.org/10.3390/math14050837