TAN-FGBMLE: Tree-Augmented Naive Bayes Structure Learning Based on Fast Generative Bootstrap Maximum Likelihood Estimation for Continuous-Variable Classification
Abstract
1. Introduction
- This method constructs an optimization framework around a neural-network generator, avoiding the high computational cost of repeatedly re-optimizing weight combinations in the traditional bootstrap. Instead of relying solely on resampling weights, the generator also incorporates an additional source of randomness; combining resampling information with these stochastic perturbations lets the generator capture the essential structure of the underlying optimization problem. By exploiting the expressive capacity of neural networks, this strategy allows the generator to adapt flexibly to complex distributional characteristics, substantially improving both the efficiency and the accuracy of density estimation.
- This paper proposes a novel two-stage algorithm for the FGBMLE estimation process. Unlike traditional bootstrap methods, which require a complete optimization of the likelihood function at every resampling step, our approach condenses this repetitive, costly procedure into a single efficient computation. In the first stage, the neural generator rapidly produces a set of candidate parameters that cover the potential distribution space by leveraging resampling information and additional randomness. In the second stage, maximum likelihood estimation over this finite candidate set yields the optimal configuration of mixture weights.
- The proposed FGBMLE is applied to TAN structure learning. Compared with traditional KDE and GMMs, FGBMLE is more adaptable and more stable in univariate and bivariate density estimation, which makes the class-conditional mutual information estimates more reliable; the optimized TAN therefore recovers more reasonable dependency structures, improving both the stability of mutual information estimation and the reliability of structure learning.
2. Preliminaries
2.1. Latent Mixture Models and Semiparametric Estimation
2.2. Bayesian Networks and TAN Model
3. Structural Learning Process Based on the TAN-FGBMLE
3.1. FGBMLE Two-Stage Algorithm
| Algorithm 1 FGBMLE Stage-I |
| Input: Dataset $\mathcal{D} = \{x_i\}_{i=1}^{n}$; epochs T; candidate number l; initial generator $G_{\theta}$; uniform prior $\mathrm{Dir}(1, \ldots, 1)$; learning rate $\eta$. Output: generator $G_{\hat{\theta}}$ producing candidate set $\{\beta_j\}_{j=1}^{l}$. 1: for $t = 1$ to T do 2: Sample bootstrap weights: $w \sim \mathrm{Dir}(1, \ldots, 1)$ 3: Sample latent noise: $z \sim N(0, I)$ 4: Generate candidate parameters: $\beta = G_{\theta}(w, z)$ 5: Sample index variable: $J \sim \mathrm{Unif}\{1, \ldots, l\}$ 6: Compute objective using Equation (12) 7: Update generator parameters $\theta$ via SGD using learning rate $\eta$ 8: end for 9: return $G_{\hat{\theta}}$ |
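To make Stage I concrete, the following is a minimal PyTorch sketch under simplifying assumptions: the candidates parameterize a single univariate Gaussian component, a Dirichlet(1, ..., 1) draw stands in for the bootstrap weights, and a bootstrap-weighted negative log-likelihood stands in for Equation (12). The index-sampling step (line 5) is folded into the batched loss, and the generator architecture is an illustrative choice rather than the authors' exact configuration.

```python
# Minimal sketch of FGBMLE Stage I (assumptions noted above).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps (bootstrap weights w, latent noise z) to one candidate parameter."""
    def __init__(self, n, noise_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # one candidate: (mu, log_sigma)
        )

    def forward(self, w, z):
        return self.net(torch.cat([w, z], dim=-1))

def stage_one(x, T=2000, l=32, noise_dim=8, lr=1e-3):
    """x: 1-D tensor of n observations."""
    n = x.shape[0]
    G = Generator(n, noise_dim)
    opt = torch.optim.SGD(G.parameters(), lr=lr)
    dirichlet = torch.distributions.Dirichlet(torch.ones(n))
    for _ in range(T):
        w = dirichlet.sample((l,))        # (l, n) bootstrap weights
        z = torch.randn(l, noise_dim)     # (l, noise_dim) latent noise
        theta = G(w, z)                   # (l, 2) candidate parameters
        mu, log_sigma = theta[:, :1], theta[:, 1:]
        # Stand-in for Eq. (12): bootstrap-weighted negative log-likelihood,
        # averaged over the l sampled candidates.
        log_p = torch.distributions.Normal(mu, log_sigma.exp()).log_prob(x)
        loss = -(w * log_p).sum(dim=1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G
```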
| Algorithm 2 FGBMLE Stage-II |
| Input: Trained generator $G_{\hat{\theta}}$; dataset $\mathcal{D} = \{x_i\}_{i=1}^{n}$; tolerance $\epsilon$; number of candidates l. Output: Optimized mixture weights $\hat{\pi} = (\hat{\pi}_1, \ldots, \hat{\pi}_l)$. 1: Initialize: $\hat{\pi} \leftarrow (1/l, \ldots, 1/l)$, $\hat{\pi}^{\mathrm{old}} \leftarrow 0$ 2: while $\|\hat{\pi} - \hat{\pi}^{\mathrm{old}}\|_1 > \epsilon$ do 3: Set $\hat{\pi}^{\mathrm{old}} \leftarrow \hat{\pi}$ 4: for $j = 1$ to l do 5: Sample bootstrap weights: $w \sim \mathrm{Dir}(1, \ldots, 1)$ 6: Sample latent noise: $z \sim N(0, I)$ 7: Update $\hat{\pi}_j$ using Equation (17) 8: end for 9: end while 10: return $\hat{\pi}$ |
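A minimal NumPy sketch of Stage II follows, assuming Equation (17) is the classical EM fixed-point update for the mixing weights of a finite mixture whose component densities are frozen at the Stage-I candidates; the candidate densities are passed in as callables, and the tolerance-based stopping rule mirrors the while-loop above.

```python
# Minimal sketch of FGBMLE Stage II under the EM-update assumption.
import numpy as np

def stage_two(component_pdfs, x, tol=1e-6, max_iter=500):
    """component_pdfs: list of l callables f_j(x), one per Stage-I candidate."""
    F = np.column_stack([f(x) for f in component_pdfs])  # (n, l) density matrix
    l = F.shape[1]
    pi = np.full(l, 1.0 / l)                             # uniform initialization
    for _ in range(max_iter):
        mix = F @ pi                                     # (n,) mixture density at each x_i
        pi_new = pi * (F / mix[:, None]).mean(axis=0)    # EM reweighting of each candidate
        if np.abs(pi_new - pi).sum() < tol:              # L1 convergence check
            return pi_new
        pi = pi_new
    return pi
```

Each iteration of this update keeps the weights on the simplex and does not decrease the likelihood, so the loop converges to the maximum likelihood mixture over the fixed candidate set.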
3.2. TAN-FGBMLE Framework
| Algorithm 3 TAN-FGBMLE |
| Input: Dataset $\mathcal{D} = \{(\mathbf{x}_i, c_i)\}_{i=1}^{n}$; trained generator $G_{\hat{\theta}}$ from Algorithm 1; optimized mixture weights $\hat{\pi}$ from Algorithm 2; number of features d; number of candidate parameters l. Output: TAN structure and classification rule $\hat{c}(\mathbf{x})$. 1: for each class c do 2: Estimate marginal distributions $\hat{f}(x_j \mid c)$ for all $j = 1, \ldots, d$ using Equation (18) 3: Compute conditional mutual information $\hat{I}(X_j; X_k \mid C)$ using Equation (19) 4: end for 5: Construct a weighted complete graph with edge weights $\hat{I}(X_j; X_k \mid C)$ 6: Extract maximum spanning tree to determine feature dependencies 7: Orient the spanning tree and add class node C to obtain the TAN structure 8: for each feature $X_j$ do 9: Compute conditional probability $P(X_j \mid \mathrm{Pa}(X_j), C)$ using Equation (20) 10: end for 11: Define classification rule $\hat{c}(\mathbf{x}) = \arg\max_{c} P(c) \prod_{j=1}^{d} P(x_j \mid \mathrm{Pa}(x_j), c)$ 12: return TAN structure and $\hat{c}(\mathbf{x})$ |
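The skeleton step of Algorithm 3 (lines 5-7) reduces to a maximum-spanning-tree computation. The sketch below assumes the class-conditional mutual information matrix `cmi` has already been obtained from the FGBMLE density estimates via Equation (19); the choice of root feature and the SciPy-based traversal are illustrative.

```python
# Hedged sketch of the TAN skeleton step: max spanning tree + orientation.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

def tan_skeleton(cmi, root=0):
    """cmi: (d, d) symmetric matrix of estimated I(X_j; X_k | C)."""
    # A maximum spanning tree is a minimum spanning tree on negated weights.
    mst = minimum_spanning_tree(-np.asarray(cmi, dtype=float)).toarray()
    undirected = ((mst != 0) | (mst.T != 0)).astype(float)
    # Orient every edge away from the root feature by breadth-first traversal.
    order, parents = breadth_first_order(undirected, root, directed=False)
    return {int(j): int(parents[j]) for j in order if parents[j] >= 0}
```

The returned feature-to-parent map, with the class node added as an extra parent of every feature, fixes the TAN structure used in lines 8-11.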
4. Experiment Results
4.1. Simulation Experiments
4.2. Structure Recovery Experiment
4.3. Comparative Experiments with Extended Naive Bayes and Discriminative Models
4.4. Classification Performance on UCI Benchmark Datasets
4.5. TAN-FGBMLE for Graph Structure Learning on Air Quality Data
4.6. Summary of Strengths and Limitations
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Gelman, A.; Carlin, J.B.; Stern, H.S.; Rubin, D.B. Bayesian Data Analysis; Chapman and Hall: New York, NY, USA, 1995. [Google Scholar] [CrossRef]
- Zhang, J.; Shi, M.H.; Lang, X.S.; You, Q.J.; Jing, Y.L.; Huang, D.Y.; Dai, H.Y.; Kang, J. Dynamic risk evaluation of hydrogen station leakage based on fuzzy dynamic Bayesian network. Int. J. Hydrogen Energy 2024, 50, 1131–1145. [Google Scholar] [CrossRef]
- Zhang, J.F.; Jin, M.; Wan, C.P.; Dong, Z.J.; Wu, X.H. A Bayesian network-based model for risk modeling and scenario deduction of collision accidents of inland intelligent ships. Reliab. Eng. Syst. Saf. 2024, 243, 109816. [Google Scholar] [CrossRef]
- Muñoz-Valencia, C.S.; Quesada, J.A.; Orozco, D.; Barber, X. Employing Bayesian Networks for the Diagnosis and Prognosis of Diseases: A Comprehensive Review. arXiv 2023, arXiv:2304.06400. [Google Scholar] [CrossRef]
- Lewis, D.D. Naive (Bayes) at Forty: The Independence Assumption in Information Retrieval. In Proceedings of the 10th European Conference on Machine Learning (ECML), Chemnitz, Germany, 21–23 April 1998. [Google Scholar] [CrossRef]
- Domingos, P.; Pazzani, M. On the Optimality of the Simple Bayesian Classifier under Zero-One Loss. Mach. Learn. 1997, 29, 103–130. [Google Scholar] [CrossRef]
- Friedman, N.; Geiger, D.; Goldszmidt, M. Bayesian Network Classifiers. Mach. Learn. 1997, 29, 131–163. [Google Scholar] [CrossRef]
- Bielza, C.; Larranaga, P. Discrete Bayesian Network Classifiers: A Survey. ACM Comput. Surv. 2014, 47, 1–43. [Google Scholar] [CrossRef]
- Kraskov, A.; Stogbauer, H.; Grassberger, P. Estimating Mutual Information. Phys. Rev. E 2004, 69, 066138. [Google Scholar] [CrossRef]
- Silverman, B.W. Density Estimation for Statistics and Data Analysis; Routledge: New York, NY, USA, 2018. [Google Scholar] [CrossRef]
- Wei, C.H.; Peng, B.; Li, C.; Liu, Y.Y.; Ye, Z.W.; Zuo, Z.Q. A Two-Stage Optimized Robust Kernel Density Estimation for Bayesian Classification with Outliers. Int. J. Mach. Learn. Cybern. 2025, 1–25. [Google Scholar] [CrossRef]
- Peel, D.; McLachlan, G.J. Robust mixture modelling using the t distribution. Stat. Comput. 2000, 10, 339–348. [Google Scholar] [CrossRef]
- Efron, B.; Tibshirani, R.J. An Introduction to the Bootstrap; Chapman and Hall: New York, NY, USA, 1993. [Google Scholar] [CrossRef]
- Blei, D.M.; Kucukelbir, A.; McAuliffe, J.D. Variational Inference: A Review for Statisticians. J. Am. Stat. Assoc. 2017, 112, 859–877. [Google Scholar] [CrossRef]
- Laird, N.M. Nonparametric Maximum Likelihood Estimation of a Mixing Distribution. J. Am. Stat. Assoc. 1978, 73, 805–811. [Google Scholar] [CrossRef]
- Zhang, C.H. Compound Decision Theory and Empirical Bayes Methods. Ann. Stat. 2003, 31, 379–390. [Google Scholar] [CrossRef]
- Koenker, R.; Mizera, I. Convex Optimization, Shape Constraints, Compound Decisions, and Empirical Bayes Rules. J. Am. Stat. Assoc. 2014, 109, 674–685. [Google Scholar] [CrossRef]
- Feng, L.; Dicker, L.H. Approximate Nonparametric Maximum Likelihood for Mixture Models: A Convex Optimization Approach to Fitting Arbitrary Multivariate Mixing Distributions. Comput. Stat. Data Anal. 2018, 122, 80–91. [Google Scholar] [CrossRef]
- Ronn, B.B.; Skovgaard, I.M. Nonparametric Maximum Likelihood Estimation of Randomly Time-Transformed Curves. Ann. Stat. 2009, 37, 1–17. [Google Scholar] [CrossRef]
- Li, Y.; Ye, Z. Boosting in Univariate Nonparametric Maximum Likelihood Estimation. IEEE Signal Process. Lett. 2021, 28, 623–627. [Google Scholar] [CrossRef]
- Efron, B. Empirical Bayes Deconvolution Estimates. Biometrika 2016, 103, 1–20. [Google Scholar] [CrossRef]
- Shao, H.J.; Yao, S.C.; Sun, D.C.; Zhang, A.; Liu, S.Z.; Liu, D.X.; Wang, J.; Abdelzaher, T. ControlVAE: Controllable Variational Autoencoder. In Proceedings of the 37th International Conference on Machine Learning (PMLR), Virtual, 13–18 July 2020; Volume 119, pp. 8655–8664. Available online: https://dl.acm.org/doi/10.5555/3524938.3525741 (accessed on 24 October 2025).
- Rubin, D.B. The Bayesian Bootstrap. Ann. Stat. 1981, 9, 130–134. [Google Scholar] [CrossRef]
- Lam, H.; Liu, Z.Y. Bootstrap in High Dimension with Low Computation. In Proceedings of the 40th International Conference on Machine Learning (PMLR), Honolulu, HI, USA, 23–29 July 2023; Volume 202, pp. 18419–18453. Available online: https://proceedings.mlr.press/v202/lam23a.html (accessed on 24 October 2025).
- Kagerer, K. A Hat Matrix for Monotonicity Constrained B-Spline and P-Spline Regression; Technical Report; University of Regensburg: Regensburg, Germany, 2015; Available online: https://epub.uni-regensburg.de/31450/ (accessed on 24 October 2025).
- Nadaraya, E.A.; Kotz, S. Nonparametric Estimation of Probability Densities and Regression Curves; Springer: Dordrecht, The Netherlands, 1989; Available online: https://link.springer.com/book/10.1007/978-94-009-2583-0 (accessed on 24 October 2025).
- Efron, B. Bootstrap Methods: Another Look at the Jackknife. In Breakthroughs in Statistics: Methodology and Distribution; Springer: New York, NY, USA, 1992. [Google Scholar] [CrossRef]
- Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Morgan Kaufmann: San Francisco, CA, USA, 2014. [Google Scholar] [CrossRef]
- Wei, C.; Li, C.; Liu, Y.; Chen, S.; Zuo, Z.; Wang, P.; Ye, Z. Causal Discovery and Reasoning for Continuous Variables with an Improved Bayesian Network Constructed by Locality Sensitive Hashing and Kernel Density Estimation. Entropy 2025, 27, 123. [Google Scholar] [CrossRef]
- Spirtes, P.; Glymour, C.N.; Scheines, R. Causation, Prediction, and Search; MIT Press: Cambridge, MA, USA, 2000. [Google Scholar]
- Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley and Sons: New York, NY, USA, 1999. [Google Scholar] [CrossRef]
- Chow, C.K.; Liu, C. Approximating Discrete Probability Distributions with Dependence Trees. IEEE Trans. Inf. Theory 1968, 14, 462–467. [Google Scholar] [CrossRef]
- Kingma, D.P.; Welling, M. Auto-Encoding Variational Bayes. arXiv 2013, arXiv:1312.6114. [Google Scholar] [CrossRef]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS), Montreal, QC, Canada, 8–13 December 2014; Available online: https://dl.acm.org/doi/proceedings/10.5555/2969033 (accessed on 24 October 2025).
- Noh, H.; You, T.; Mun, J.; Han, B. Regularizing Deep Neural Networks by Noise: Its Interpretation and Optimization. In Proceedings of the 31st International Conference on Neural Information Processing System (NIPS), Long Beach, CA, USA, 4–9 December 2017; Available online: https://dl.acm.org/doi/10.5555/3295222.3295264 (accessed on 24 October 2025).
- Hornik, K.; Stinchcombe, M.; White, H. Multilayer Feedforward Networks Are Universal Approximators. Neural Netw. 1989, 2, 359–366. [Google Scholar] [CrossRef]
- Hornik, K. Approximation Capabilities of Multilayer Feedforward Networks. Neural Netw. 1991, 4, 251–257. [Google Scholar] [CrossRef]
- Wei, H.; Xie, R.; Yang, L.; Xu, Z.; Li, Z. MetaInfoNet: Learning Task-Guided Information for Sample Reweighting. arXiv 2020, arXiv:2012.05273. [Google Scholar] [CrossRef]
- Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer: New York, NY, USA, 2009. [Google Scholar] [CrossRef]
- Bottou, L.; Curtis, F.E.; Nocedal, J. Optimization Methods for Large-Scale Machine Learning. SIAM Rev. 2018, 60, 223–311. [Google Scholar] [CrossRef]
- Wei, G.C.; Tanner, M.A. A Monte Carlo Implementation of the EM Algorithm and the Poor Man’s Data Augmentation Algorithms. J. Am. Stat. Assoc. 1990, 85, 699–704. [Google Scholar] [CrossRef]
- Mohamed, S.; Lakshminarayanan, B. Learning in Implicit Generative Models. arXiv 2016, arXiv:1610.03483. [Google Scholar] [CrossRef]
- Kalisch, M.; Buhlmann, P. Estimating High-Dimensional Directed Acyclic Graphs with the PC-Algorithm. J. Mach. Learn. Res. 2007, 8, 613–636. Available online: https://dl.acm.org/doi/10.5555/1314498.1314520 (accessed on 24 October 2025).
- Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein Generative Adversarial Networks. In Proceedings of the 34th International Conference on Machine Learning (ICML), Sydney, Australia, 6–11 August 2017; Available online: https://dl.acm.org/doi/abs/10.5555/3305381.3305404 (accessed on 24 October 2025).
- Tsamardinos, I.; Brown, L.E.; Aliferis, C.F. The max-min hill-climbing Bayesian network structure learning algorithm. Mach. Learn. 2006, 65, 31–78. [Google Scholar] [CrossRef]
- Zhang, C.; Chen, S.; Ke, H. Research on Model Selection-Based Weighted Averaged One-Dependence Estimators. Mathematics 2024, 12, 2306. [Google Scholar] [CrossRef]
- Wang, L.; Zhang, S.; Mammadov, M.; Li, K.; Zhang, X. Semi-supervised Weighting for Averaged One-Dependence Estimators. Appl. Intell. 2022, 52, 4057–4073. [Google Scholar] [CrossRef]
- Yu, L.; Gan, S.; Chen, Y.; Luo, D. A Novel Hybrid Approach: Instance Weighted Hidden Naive Bayes. Mathematics 2021, 9, 2982. [Google Scholar] [CrossRef]
- Jiang, L.; Zhang, L.; Li, C.; Wu, J. A Correlation-Based Feature Weighting Filter for Naive Bayes. IEEE Trans. Knowl. Data Eng. 2019, 31, 201–213. [Google Scholar] [CrossRef]
- Friedman, N.; Goldszmidt, M.; Wyner, A. Data Analysis with Bayesian Networks: A Bootstrap Approach. arXiv 2013, arXiv:1301.6695. [Google Scholar] [CrossRef]
| Distribution | Gaussian Mixture Model (GMM) | Gamma Mixture Model (GaMM) |
|---|---|---|
| d-dimension | | |
| Model | Method | W1 | ISE | MSE | KL |
|---|---|---|---|---|---|
| GMM | FGBMLE | 0.335 | 0.009 | 0.006 | 0.045 |
| GMM | Bootstrap | 0.310 | 0.010 | 0.008 | 0.067 |
| GMM | KDE | 0.482 | 0.026 | 0.014 | 0.112 |
| GaMM | FGBMLE | 0.035 | 0.270 | 0.011 | 0.083 |
| GaMM | Bootstrap | 0.038 | 0.510 | 0.014 | 0.094 |
| GaMM | KDE | 0.072 | 0.693 | 0.023 | 0.141 |
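For reference, the four error measures reported in these tables can be approximated on a uniform evaluation grid. The sketch below is a grid-based stand-in, not the authors' exact evaluation code; `p` and `q` are the true and estimated densities sampled on `grid`.

```python
# Grid-based approximations of the four simulation metrics (W1, ISE, MSE, KL).
import numpy as np

def density_metrics(grid, p, q, eps=1e-12):
    """W1, ISE, MSE and KL between densities p (true) and q (estimate)."""
    dx = grid[1] - grid[0]                               # uniform grid spacing
    ise = np.sum((p - q) ** 2) * dx                      # integrated squared error
    mse = np.mean((p - q) ** 2)                          # pointwise mean squared error
    kl = np.sum(p * np.log((p + eps) / (q + eps))) * dx  # KL(p || q)
    # In one dimension, W1 equals the L1 distance between the two CDFs.
    w1 = np.sum(np.abs(np.cumsum(p) - np.cumsum(q)) * dx) * dx
    return w1, ise, mse, kl
```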
| Method | W1 | ISE | MSE | KL |
|---|---|---|---|---|
| KDE | 0.154 | 0.0314 | 0.0125 | 0.0847 |
| GMM () | 0.102 | 0.0246 | 0.0097 | 0.0632 |
| GMM () | 0.068 | 0.0179 | 0.0064 | 0.0415 |
| TAN-FGBMLE | 0.041 | 0.0103 | 0.0048 | 0.0286 |
| Method | SHD (n = 1000) | SHD (n = 2000) |
|---|---|---|
| TAN-KDE | 0.4 ± 0.5 | 0.2 ± 0.4 |
| TAN-GMM | 0.5 ± 0.4 | 0.3 ± 0.6 |
| TAN-FGBMLE | 0.3 ± 0.4 | 0.2 ± 0.2 |
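The structural Hamming distance (SHD) reported above counts the edge insertions, deletions, and reversals needed to turn the learned graph into the true one. A minimal sketch on adjacency matrices follows, assuming A[i, j] = 1 encodes a directed edge from node i to node j.

```python
# Minimal SHD between two directed graphs given as adjacency matrices.
import numpy as np

def shd(true_adj, learned_adj):
    """Structural Hamming distance between two directed graphs."""
    diff = np.abs(np.asarray(true_adj) - np.asarray(learned_adj))
    # A reversed edge appears as two mismatches; count it as a single error.
    reversals = np.logical_and(diff, diff.T).sum() // 2
    return int(diff.sum() - reversals)
```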
| Model | Accuracy (%) | Log-Likelihood |
|---|---|---|
| NB | 83.5 ± 1.2 | −2.37 ± 0.08 |
| TAN | 86.8 ± 0.9 | −2.13 ± 0.07 |
| AODE | 87.4 ± 0.8 | −2.09 ± 0.05 |
| WAODE | 87.9 ± 0.8 | −2.07 ± 0.05 |
| HNB | 88.1 ± 0.7 | −2.02 ± 0.05 |
| CFWNB | 88.5 ± 0.6 | −1.98 ± 0.04 |
| KDB-2 | 88.6 ± 0.6 | −1.97 ± 0.04 |
| Logistic Regression | 88.9 ± 0.6 | −1.95 ± 0.04 |
| Random Forest | 90.2 ± 0.5 | −1.89 ± 0.03 |
| RBF-SVM | 89.8 ± 0.5 | −1.90 ± 0.04 |
| TAN-FGBMLE | 91.1 ± 0.4 | −1.82 ± 0.03 |
| No. | Dataset | Instances | Attributes | Classes |
|---|---|---|---|---|
| 1 | Abalone | 4177 | 8 | 3 |
| 2 | Breast Cancer | 569 | 30 | 2 |
| 3 | Car Evaluation | 1728 | 6 | 4 |
| 4 | Credit Approval | 690 | 15 | 2 |
| 5 | Dermatology | 366 | 34 | 6 |
| 6 | E. coli | 336 | 7 | 8 |
| 7 | Glass | 214 | 9 | 6 |
| 8 | Haberman | 306 | 3 | 2 |
| 9 | Heart Disease | 303 | 13 | 2 |
| 10 | ILPD | 583 | 9 | 2 |
| 11 | Ionosphere | 351 | 34 | 2 |
| 12 | Iris | 150 | 4 | 3 |
| 13 | Landsat Satellite | 2000 | 36 | 6 |
| 14 | Parkinsons | 195 | 22 | 2 |
| 15 | Pima Indians Diabetes | 768 | 8 | 2 |
| 16 | Student Performance | 649 | 33 | 2 |
| 17 | Vehicle | 846 | 18 | 4 |
| 18 | Wine | 178 | 13 | 3 |
| 19 | Wine Quality | 1599 | 11 | 10 |
| 20 | Yeast | 1484 | 8 | 10 |
| Dataset Name | TAN-KDE | NBC | FBC | KNN | C4.5 | NN | SVM | TAN-FGBMLE |
|---|---|---|---|---|---|---|---|---|
| Abalone | 0.512 ± 0.067 | 0.498 ± 0.058 | 0.505 ± 0.061 | 0.528 ± 0.064 | 0.490 ± 0.055 | 0.545 ± 0.060 | 0.551 ± 0.066 | 0.612 ± 0.059 |
| Breast Cancer | 0.861 ± 0.042 | 0.845 ± 0.046 | 0.850 ± 0.039 | 0.872 ± 0.040 | 0.858 ± 0.044 | 0.875 ± 0.036 | 0.868 ± 0.042 | 0.889 ± 0.038 |
| Car Evaluation | 0.902 ± 0.035 | 0.890 ± 0.033 | 0.896 ± 0.034 | 0.918 ± 0.037 | 0.910 ± 0.038 | 0.922 ± 0.032 | 0.916 ± 0.036 | 0.934 ± 0.031 |
| Credit Approval | 0.823 ± 0.047 | 0.812 ± 0.042 | 0.818 ± 0.043 | 0.834 ± 0.041 | 0.826 ± 0.048 | 0.842 ± 0.039 | 0.838 ± 0.044 | 0.856 ± 0.037 |
| Dermatology | 0.931 ± 0.028 | 0.920 ± 0.030 | 0.925 ± 0.027 | 0.936 ± 0.031 | 0.929 ± 0.029 | 0.942 ± 0.025 | 0.938 ± 0.028 | 0.951 ± 0.023 |
| E. coli | 0.825 ± 0.055 | 0.810 ± 0.052 | 0.814 ± 0.050 | 0.838 ± 0.048 | 0.822 ± 0.053 | 0.846 ± 0.047 | 0.840 ± 0.051 | 0.862 ± 0.046 |
| Glass | 0.673 ± 0.071 | 0.652 ± 0.074 | 0.660 ± 0.070 | 0.684 ± 0.072 | 0.676 ± 0.068 | 0.691 ± 0.066 | 0.687 ± 0.071 | 0.712 ± 0.065 |
| Haberman | 0.738 ± 0.058 | 0.720 ± 0.061 | 0.728 ± 0.057 | 0.741 ± 0.060 | 0.732 ± 0.062 | 0.752 ± 0.054 | 0.746 ± 0.059 | 0.764 ± 0.053 |
| Heart Disease | 0.836 ± 0.046 | 0.820 ± 0.048 | 0.828 ± 0.044 | 0.844 ± 0.047 | 0.832 ± 0.045 | 0.850 ± 0.041 | 0.846 ± 0.046 | 0.868 ± 0.040 |
| ILPD | 0.752 ± 0.062 | 0.740 ± 0.059 | 0.746 ± 0.061 | 0.758 ± 0.057 | 0.749 ± 0.060 | 0.763 ± 0.055 | 0.760 ± 0.058 | 0.778 ± 0.053 |
| Ionosphere | 0.887 ± 0.036 | 0.872 ± 0.039 | 0.880 ± 0.035 | 0.894 ± 0.038 | 0.885 ± 0.037 | 0.901 ± 0.033 | 0.896 ± 0.036 | 0.912 ± 0.032 |
| Iris | 0.955 ± 0.028 | 0.940 ± 0.032 | 0.948 ± 0.029 | 0.958 ± 0.027 | 0.950 ± 0.030 | 0.962 ± 0.026 | 0.959 ± 0.028 | 0.970 ± 0.025 |
| Landsat Satellite | 0.704 ± 0.065 | 0.688 ± 0.068 | 0.694 ± 0.064 | 0.716 ± 0.066 | 0.707 ± 0.067 | 0.724 ± 0.061 | 0.719 ± 0.065 | 0.738 ± 0.060 |
| Parkinsons | 0.823 ± 0.051 | 0.810 ± 0.054 | 0.816 ± 0.050 | 0.829 ± 0.052 | 0.820 ± 0.053 | 0.834 ± 0.049 | 0.828 ± 0.052 | 0.846 ± 0.047 |
| Pima Indians Diabetes | 0.775 ± 0.059 | 0.762 ± 0.061 | 0.770 ± 0.058 | 0.782 ± 0.056 | 0.774 ± 0.060 | 0.788 ± 0.055 | 0.784 ± 0.058 | 0.802 ± 0.054 |
| Student Performance | 0.741 ± 0.064 | 0.728 ± 0.067 | 0.734 ± 0.062 | 0.748 ± 0.066 | 0.739 ± 0.065 | 0.755 ± 0.060 | 0.750 ± 0.063 | 0.768 ± 0.058 |
| Vehicle | 0.746 ± 0.057 | 0.730 ± 0.060 | 0.738 ± 0.056 | 0.751 ± 0.059 | 0.742 ± 0.058 | 0.756 ± 0.054 | 0.752 ± 0.057 | 0.770 ± 0.052 |
| Wine | 0.944 ± 0.030 | 0.930 ± 0.033 | 0.938 ± 0.029 | 0.948 ± 0.031 | 0.940 ± 0.032 | 0.952 ± 0.027 | 0.949 ± 0.030 | 0.960 ± 0.026 |
| Wine Quality | 0.706 ± 0.062 | 0.691 ± 0.065 | 0.698 ± 0.061 | 0.711 ± 0.064 | 0.703 ± 0.063 | 0.718 ± 0.059 | 0.714 ± 0.062 | 0.732 ± 0.058 |
| Yeast | 0.602 ± 0.071 | 0.586 ± 0.074 | 0.592 ± 0.069 | 0.608 ± 0.072 | 0.598 ± 0.070 | 0.614 ± 0.066 | 0.610 ± 0.071 | 0.628 ± 0.065 |
| Average | 0.782 ± 0.054 | 0.768 ± 0.057 | 0.774 ± 0.053 | 0.789 ± 0.056 | 0.780 ± 0.055 | 0.795 ± 0.051 | 0.791 ± 0.054 | 0.812 ± 0.049 |
| Edge (Undirected) | TAN-KDE | TAN-FGBMLE |
|---|---|---|
| PM2.5–PM10 | 0.71 | 0.92 |
| temperature–NO2 | 0.64 | 0.88 |
| Proximity to industrial areas–SO2 | 0.62 | 0.85 |
| Proximity to industrial areas–CO | 0.58 | 0.81 |
| humidity–PM2.5 | 0.49 | 0.74 |
| CO–population density | 0.56 | 0.79 |