Structure Learning of Gaussian Markov Random Fields with False Discovery Rate Control

1 Computer Science, Hanyang University ERICA, Ansan 15588, Korea
2 Department of Mathematics, Wroclaw University of Science and Technology, 50-370 Wroclaw, Poland
3 Institute of Mathematics, University of Wroclaw, 50-384 Wroclaw, Poland
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(10), 1311; https://doi.org/10.3390/sym11101311
Received: 11 September 2019 / Revised: 14 October 2019 / Accepted: 15 October 2019 / Published: 18 October 2019
In this paper, we propose a new estimation procedure for discovering the structure of Gaussian Markov random fields (MRFs) with false discovery rate (FDR) control, making use of the sorted ℓ1-norm (SL1) regularization. A Gaussian MRF is an undirected graph representing a multivariate Gaussian distribution, where nodes are random variables and edges represent the conditional dependence between the connected nodes. Since the edge structure of a Gaussian MRF can be learned directly from data, Gaussian MRFs provide an excellent way to understand complex data by revealing the dependence structure among many input features, such as genes, sensors, users, or documents. In learning the graphical structure of Gaussian MRFs, the goal is to discover the actual edges of the underlying but unknown probabilistic graphical model. This becomes harder as the number of random variables (features) p grows relative to the number of data points n; in particular, when p ≫ n, it is statistically unavoidable for any estimation procedure to include false edges. Therefore, there have been many attempts to reduce the false detection of edges, in particular by using different types of regularization on the learning parameters. Our method makes use of the SL1 regularization, introduced recently for model selection in linear regression, and we focus on its key benefit: SL1 can be used to control the FDR of detecting important random variables. Adapting SL1 to probabilistic graphical models, we show that SL1 can be used for the structure learning of Gaussian MRFs through our proposed procedure nsSLOPE (neighborhood selection Sorted L-One Penalized Estimation), which controls the FDR of detecting edges.
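To make the idea concrete, the following is a minimal sketch (in Python with NumPy/SciPy; this is not the authors' code) of neighborhood selection with a sorted-ℓ1 (SLOPE) penalty: each variable is regressed on all the others under an SL1 penalty built from a Benjamini–Hochberg-style lambda sequence, and a surviving coefficient is reported as a candidate edge. The helper names (bh_lambdas, prox_sorted_l1, slope_regression, ns_slope_structure), the plain ISTA solver, the lambda scaling, and the AND-rule symmetrization are illustrative assumptions; the paper's nsSLOPE procedure involves additional steps, such as noise-level estimation, that are omitted here.

```python
# Minimal sketch of neighborhood selection with a sorted-L1 (SLOPE) penalty.
# Not the authors' implementation: names and lambda scaling are assumptions.
import numpy as np
from scipy.stats import norm


def bh_lambdas(p, q=0.1):
    """Benjamini-Hochberg-style lambda sequence commonly used with SLOPE."""
    return norm.ppf(1.0 - q * np.arange(1, p + 1) / (2.0 * p))


def _isotonic_decreasing(y):
    """Least-squares non-increasing fit to y (pool-adjacent-violators)."""
    vals, wts = [], []
    for v in y:
        vals.append(float(v))
        wts.append(1.0)
        # merge adjacent blocks whenever the fit would increase
        while len(vals) > 1 and vals[-2] < vals[-1]:
            w = wts[-2] + wts[-1]
            vals[-2] = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w
            wts[-2] = w
            vals.pop()
            wts.pop()
    out = []
    for v, w in zip(vals, wts):
        out.extend([v] * int(round(w)))
    return np.array(out)


def prox_sorted_l1(v, lam):
    """Prox of the sorted-L1 norm: sort |v|, shift by the sorted lambdas,
    project onto the non-increasing non-negative cone, then unsort."""
    sign = np.sign(v)
    order = np.argsort(-np.abs(v))
    x = np.maximum(_isotonic_decreasing(np.abs(v)[order] - lam), 0.0)
    out = np.empty_like(v, dtype=float)
    out[order] = x
    return sign * out


def slope_regression(X, y, lam, n_iter=500):
    """Plain proximal-gradient (ISTA) solver for
    (1/2n)||y - Xb||^2 + sum_i lam_i |b|_(i)."""
    n, p = X.shape
    step = n / (np.linalg.norm(X, 2) ** 2)   # 1/L with L = ||X||_2^2 / n
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = prox_sorted_l1(b - step * grad, step * lam)
    return b


def ns_slope_structure(X, q=0.1, tol=1e-8):
    """Regress each column on the others with a SLOPE penalty; a surviving
    coefficient marks a candidate edge. AND-rule symmetrization at the end."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    lam = bh_lambdas(p - 1, q)
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        b = slope_regression(Xc[:, others], Xc[:, j], lam)
        adj[j, others] = np.abs(b) > tol
    return adj & adj.T
```

As a usage example, calling ns_slope_structure(X, q=0.1) on an n × p data matrix returns a symmetric boolean adjacency matrix of detected edges, where q plays the role of the target FDR level in the BH-style lambda sequence. A practical implementation would also standardize the columns and rescale the lambdas by a per-node noise estimate before fitting.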
Keywords: Gaussian Markov random field; inverse covariance matrix estimation; FDR control

Lee, S.; Sobczyk, P.; Bogdan, M. Structure Learning of Gaussian Markov Random Fields with False Discovery Rate Control. Symmetry 2019, 11, 1311.
