DCBAN: A Dynamic Confidence Bayesian Adaptive Network for Reconstructing Visual Images from fMRI Signals
Abstract
1. Introduction
- (1) DCBAN is an end-to-end dynamic confidence Bayesian adaptive diffusion network for fMRI brain-signal decoding and visual image reconstruction. Experiments demonstrate state-of-the-art decoding accuracy and generation quality on both tasks.
- (2) Deep Nested Singular Value Decomposition (DeepSVD) is introduced for robust low-rank feature extraction. By embedding a low-rank structure into each layer of the network, it suppresses noise, strengthens the representation of complex structures, and improves the structural fidelity of the reconstructed images.
- (3) Bayesian Adaptive Fractional Ridge Regression (BAFRR), built on singular-value-space preprocessing, adaptively adjusts the regularization level. This adaptivity accommodates the varying complexity of visual stimuli, improving the model's decoding accuracy and generalization ability.
- (4) The dynamic confidence adaptive fusion (DCAF) diffusion module dynamically predicts the reliability of the decoded features. By accounting for the dynamic characteristics of different decoded features during reconstruction, it significantly enhances the diversity and naturalness of the decoded visual images in complex scenes.
2. Materials and Methods
2.1. Dataset
2.2. Model Overview
2.3. Deep Nested Singular Value Decomposition for Fine-Grained Feature Extraction in fMRI Data
2.3.1. Nested Singular Value Decomposition Optimization
- (1) The output matrix of layer i − 1 is convolved with a 3 × 3 filter to obtain the feature matrix of layer i.
- (2) Singular value decomposition (SVD) is performed on this feature matrix, yielding the left and right singular vector matrices.
- (3) To further enhance the independence and effectiveness of the feature representation, the singular vectors at layer i are placed under stronger constraints by a low-rank constraint objective function, which consists primarily of a signal-matching term, a regularization term, and an orthogonality constraint term [46], as shown in Equation (1).
- (4) The mean squared error (MSE) layer, as shown in Equation (3), measures the error between the network output and the target matrix by squaring their element-wise differences and averaging over all matrix elements.
- (5) The global loss term not only optimizes the overall performance of the network but also applies fine-grained low-rank control at the local level. It is obtained by combining the low-rank constraint objective function with the mean squared error, as shown in Equation (4).
- (6) The Adam optimizer [47] updates the parameters of the deep nested Singular Value Decomposition (DeepSVD) network, as shown in Equation (5). A minimal code sketch of this layer-wise optimization is given after this list.
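Since the referenced equations are not reproduced on this page, the PyTorch sketch below only illustrates the kind of per-layer computation that steps (1)–(6) describe: a 3 × 3 convolution, an SVD of the flattened feature map, a low-rank constraint loss with signal-matching, regularization, and orthogonality terms, a global loss that adds an MSE term, and an Adam update. The layer sizes, tensor names, and weighting coefficients (`lam_reg`, `lam_orth`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepSVDLayer(nn.Module):
    """One nested-SVD layer: a 3x3 convolution followed by an SVD of the
    flattened feature map (steps (1)-(2))."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        feat = self.conv(x)                                     # (1) 3x3 convolution
        mat = feat.flatten(start_dim=2)                         # (B, C, H*W)
        U, S, Vh = torch.linalg.svd(mat, full_matrices=False)   # (2) per-sample SVD
        return feat, mat, U, S, Vh

def low_rank_loss(mat, U, S, Vh, lam_reg=1.0, lam_orth=1.0):
    """(3) Stand-in for Equation (1): signal-matching + regularization +
    orthogonality terms; the paper's exact weighting is not shown here."""
    recon = U @ torch.diag_embed(S) @ Vh
    match = F.mse_loss(recon, mat)                              # signal-matching term
    reg = S.abs().mean()                                        # singular-value (low-rank) penalty
    I_u = torch.eye(U.shape[-1]).expand(U.shape[0], -1, -1)
    I_v = torch.eye(Vh.shape[-2]).expand(Vh.shape[0], -1, -1)
    orth = F.mse_loss(U.transpose(-2, -1) @ U, I_u) + \
           F.mse_loss(Vh @ Vh.transpose(-2, -1), I_v)           # orthogonality constraint
    return match + lam_reg * reg + lam_orth * orth

# (4)-(6): global loss = low-rank term + MSE to a target matrix, optimized with Adam.
layer = DeepSVDLayer(in_ch=1, out_ch=8)
opt = torch.optim.Adam(layer.parameters(), lr=1e-3, betas=(0.9, 0.999), eps=1e-8)
x = torch.randn(2, 1, 16, 16)        # toy fMRI-derived input
target = torch.randn(2, 8, 16, 16)   # hypothetical target matrix

feat, mat, U, S, Vh = layer(x)
loss = low_rank_loss(mat, U, S, Vh) + F.mse_loss(feat, target)  # Equation (4) analogue
opt.zero_grad()
loss.backward()
opt.step()
```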
2.3.2. Calculation of Singular Values and Vectors
- (1) Computation of strongly correlated singular values: the singular values are computed as the norm of each column of the output matrix, as shown in Equation (6).
- (2) Computation of singular vectors: the singular vector matrix is derived by normalizing the output matrix, as shown in Equation (7). A small NumPy sketch of these two steps follows this list.
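As a concrete illustration of the two steps above, the NumPy sketch below treats the singular values as column norms and the singular vectors as the corresponding normalized columns. The matrix `Y` and its dimensions are hypothetical stand-ins for the layer output, since the paper's symbols are not reproduced on this page.

```python
import numpy as np

def strongly_correlated_svd(Y):
    """Singular values as per-column norms of the layer output Y (Equation (6))
    and singular vectors as the corresponding normalized columns (Equation (7)).
    Y is a hypothetical layer-output matrix."""
    sigma = np.linalg.norm(Y, axis=0)   # per-column norms -> singular values
    eps = 1e-12                         # guard against zero-norm columns
    U = Y / (sigma + eps)               # column-wise normalization -> singular vectors
    return sigma, U

Y = np.random.randn(64, 8)              # toy 64-voxel x 8-feature output matrix
sigma, U = strongly_correlated_svd(Y)
print(sigma.shape, np.allclose(np.linalg.norm(U, axis=0), 1.0))
```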
2.4. Bayesian Adaptive Feature Decoding
2.4.1. Automated Regularization Score Optimization
- (1) Target score grid definition.
- (2) Cross-validation evaluation.
- (3) Optimal score selection (a minimal sketch of this selection loop follows the list).
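The scikit-learn sketch below walks through steps (1)–(3), assuming the grid step of 0.05 and the 5-fold cross-validation listed in the hyperparameter table. The mapping from a target score to a ridge penalty (`alpha = (1 - s) / max(s, 1e-6)`) is an illustrative assumption rather than the paper's exact parameterization.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def select_regularization_score(X, y, step=0.05, k=5):
    """(1) define a target score grid in (0, 1], (2) evaluate each candidate by
    k-fold cross-validation, (3) keep the best-scoring value."""
    grid = np.arange(step, 1.0 + 1e-9, step)          # (1) target score grid
    results = []
    for s in grid:
        alpha = (1.0 - s) / max(s, 1e-6)              # illustrative score -> penalty map
        cv = cross_val_score(Ridge(alpha=alpha), X, y, cv=k, scoring="r2")
        results.append(cv.mean())                     # (2) cross-validation evaluation
    return grid[int(np.argmax(results))]              # (3) optimal score selection

X = np.random.randn(200, 50)                          # toy voxel features
y = X @ np.random.randn(50) + 0.1 * np.random.randn(200)
print(select_regularization_score(X, y))
```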
2.4.2. Bayesian Adaptive Fractional Ridge Regression
- (1) Solution of the unregularized coefficients.
- (2) Bayesian adaptive grid optimization:
- (a) Construction of the proxy model.
- (b) Evaluation of candidates.
- (c) Grid optimization.
- (3) Solution by fractional ridge regression (see the sketch after this list).
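The sketch below illustrates steps (1) and (3) in NumPy, following the fractional ridge idea of Rokem and Kay (GigaScience 2020): solve the unregularized problem in the SVD basis, then pick the penalty whose coefficient norm equals a requested fraction of the unregularized norm. Step (2), the Bayesian adaptive grid optimization, is approximated here by a fixed log-spaced grid; a Gaussian-process proxy model (e.g., scikit-learn's GaussianProcessRegressor) would refine that grid in the full method. Variable names and grid bounds are illustrative.

```python
import numpy as np

def fractional_ridge(X, y, frac=0.5, n_grid=50):
    """Minimal fractional-ridge sketch: (1) unregularized solution in the SVD
    basis, grid search over penalties, (3) ridge solution at the penalty whose
    coefficient norm is `frac` of the unregularized norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    uty = U.T @ y
    b_ols = uty / s                                   # (1) unregularized coefficients (rotated basis)
    norm_ols = np.linalg.norm(b_ols)

    alphas = np.logspace(-6, 6, n_grid)               # stand-in for the adaptive grid of step (2)
    fracs = np.array([np.linalg.norm(s / (s**2 + a) * uty) / norm_ols for a in alphas])

    # fracs decreases monotonically with alpha; interpolate to the target fraction
    alpha = np.interp(frac, fracs[::-1], alphas[::-1])

    beta = Vt.T @ (s / (s**2 + alpha) * uty)          # (3) ridge solution at the chosen penalty
    return beta, alpha

X = np.random.randn(120, 40)
y = X @ np.random.randn(40) + 0.1 * np.random.randn(120)
beta, alpha = fractional_ridge(X, y, frac=0.3)
print(alpha, np.linalg.norm(beta))
```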
2.4.3. Semantic Feature Prediction and Text Feature Prediction
2.5. Dynamic Confidence Adaptive Fusion Diffusion Visual Image Reconstruction
- (1) Feature alignment.
- (2) Confidence network for dynamic quantization of semantic features.
- (3) Dynamic adaptive control of text feature generation.
- (4) Diffusion model for visual image reconstruction (a toy sketch of the confidence-weighted fusion follows this list).
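To make the fusion step concrete, the PyTorch toy below shows how a small bottleneck confidence network (bottleneck dimension 4, per the hyperparameter table) could weight decoded semantic features against decoded text features and modulate the guidance applied to a diffusion sampler. The fusion rule, the guidance-scale formula, and the 768-dimensional feature size are illustrative assumptions; the actual diffusion model and its conditioning interface are not reproduced here.

```python
import torch
import torch.nn as nn

class ConfidenceNet(nn.Module):
    """Tiny bottleneck MLP mapping a decoded semantic feature to a confidence in [0, 1]."""
    def __init__(self, dim, bottleneck=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, bottleneck), nn.ReLU(),
            nn.Linear(bottleneck, 1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)                       # (2) dynamic confidence of semantic features

def dcaf_condition(sem, txt, conf_net):
    """(1) assumes sem/txt are already aligned to a shared embedding space;
    (3) the confidence modulates how strongly the text feature steers generation."""
    c = conf_net(sem)                            # shape (B, 1)
    cond = c * sem + (1.0 - c) * txt             # confidence-weighted fusion (illustrative)
    guidance_scale = 5.0 + 5.0 * (1.0 - c)       # hypothetical dynamic control of text guidance
    return cond, guidance_scale

sem = torch.randn(2, 768)                        # decoded CLIP-like semantic features (assumed dim)
txt = torch.randn(2, 768)                        # decoded text features
cond, g = dcaf_condition(sem, txt, ConfidenceNet(dim=768))
# (4) `cond` and `g` would then condition a latent diffusion sampler
#     (e.g., ~50 DDIM steps with eta = 0.0, as listed in the hyperparameter table).
print(cond.shape, g.squeeze(-1))
```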
3. Results
3.1. Experimental Setup
3.2. Evaluation Metrics
- (1) Low-level visual similarity.
- (2) High-level semantic fidelity evaluation (an illustrative example follows this list).
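As a generic illustration of the two metric families (not the paper's exact evaluation code), the snippet below computes SSIM between a reconstruction and its target for the low-level family, and a two-way identification accuracy on deep feature vectors, a common high-level protocol in fMRI reconstruction studies. Image sizes, feature dimensions, and the choice of feature extractor are assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def low_level_similarity(recon, target):
    """Low-level example: SSIM (Wang et al., 2004) between the reconstructed
    and presented images, assumed to be HxWx3 arrays in [0, 1]."""
    return ssim(recon, target, channel_axis=-1, data_range=1.0)

def two_way_identification(feat_recon, feat_true, feat_distractors):
    """High-level example: fraction of trials where the reconstruction's feature
    vector (e.g., from Inception or CLIP) is closer to the true image's features
    than to a random distractor's."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    wins = [cos(r, t) > cos(r, d)
            for r, t, d in zip(feat_recon, feat_true, feat_distractors)]
    return float(np.mean(wins))

recon, target = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)
print(low_level_similarity(recon, target))
print(two_way_identification(np.random.randn(10, 512),
                             np.random.randn(10, 512),
                             np.random.randn(10, 512)))
```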
3.3. Performance Analysis and Evaluation
3.3.1. Qualitative Comparison of the Reconstruction Results Between Our Method and Other Methods
3.3.2. Validation of the Dynamic Control Mechanism in the Generation Phase
3.3.3. Cross-Subject Consistency Evaluation
3.3.4. Quantitative Comparison with Advanced Reconstruction Methods
3.3.5. Ablation Study
3.3.6. Comparison of Feature Extraction Method Effectiveness
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Rakhimberdina, Z.; Jodelet, Q.; Liu, X.; Murata, T. Natural Image Reconstruction from fMRI Using Deep Learning: A Survey. Front. Neurosci. 2021, 15, 795488.
- Du, B.; Cheng, X.; Duan, Y.; Ning, H. fMRI Brain Decoding and Its Applications in Brain–Computer Interface: A Survey. Brain Sci. 2022, 12, 228.
- Xia, W.; De Charette, R.; Oztireli, C.; Xue, J.-H. DREAM: Visual Decoding from REversing HumAn Visual SysteM. In Proceedings of the 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 1–6 January 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 8211–8220.
- Okada, G.; Sakai, Y.; Shibakawa, M.; Yoshioka, T.; Itai, E.; Shinzato, H.; Yamamoto, O.; Kurata, K.; Tamura, T.; Jitsuiki, H.; et al. Examining the Usefulness of the Brain Network Marker Program Using fMRI for the Diagnosis and Stratification of Major Depressive Disorder: A Non-Randomized Study Protocol. BMC Psychiatry 2023, 23, 63.
- Zheltyakova, M.; Kireev, M.; Korotkov, A.; Medvedev, S. Neural Mechanisms of Deception in a Social Context: An fMRI Replication Study. Sci. Rep. 2020, 10, 10713.
- Scarpelli, S.; Alfonsi, V.; Gorgoni, M.; Giannini, A.M.; De Gennaro, L. Investigation on Neurobiological Mechanisms of Dreaming in the New Decade. Brain Sci. 2021, 11, 220.
- Norman, K.A.; Polyn, S.M.; Detre, G.J.; Haxby, J.V. Beyond Mind-Reading: Multi-Voxel Pattern Analysis of fMRI Data. Trends Cogn. Sci. 2006, 10, 424–430.
- Ramezanpour, H.; Thier, P. Decoding of the Other’s Focus of Attention by a Temporal Cortex Module. Proc. Natl. Acad. Sci. USA 2020, 117, 2663–2670.
- Kay, K.N.; Naselaris, T.; Prenger, R.J.; Gallant, J.L. Identifying Natural Images from Human Brain Activity. Nature 2008, 452, 352–355.
- Fujiwara, Y.; Miyawaki, Y.; Kamitani, Y. Modular Encoding and Decoding Models Derived from Bayesian Canonical Correlation Analysis. Neural Comput. 2013, 25, 979–1005.
- Higashi, T.; Maeda, K.; Ogawa, T.; Haseyama, M. Estimation of Visual Features of Viewed Image from Individual and Shared Brain Information Based on fMRI Data Using Probabilistic Generative Model. In Proceedings of the ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1335–1339.
- Akamatsu, Y.; Harakawa, R.; Ogawa, T.; Haseyama, M. Brain Decoding of Viewed Image Categories via Semi-Supervised Multi-View Bayesian Generative Model. IEEE Trans. Signal Process. 2020, 68, 5769–5781.
- Qian, X.; Wang, Y.; Huo, J.; Feng, J.; Fu, Y. fMRI-PTE: A Large-Scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding. arXiv 2023, arXiv:2311.00342.
- Beliy, R.; Gaziv, G.; Hoogi, A.; Strappini, F.; Golan, T.; Irani, M. From Voxels to Pixels and Back: Self-Supervision in Natural-Image Reconstruction from fMRI. Adv. Neural Inf. Process. Syst. 2019, 32, 130–135.
- Du, C.; Du, C.; Huang, L.; Wang, H.; He, H. Structured Neural Decoding with Multitask Transfer Learning of Deep Neural Network Representations. IEEE Trans. Neural Netw. Learn. Syst. 2020, 33, 600–614.
- Kuang, M.; Zhan, Z.; Gao, S. Natural Image Reconstruction from fMRI Based on Node-Edge Interaction and Multi-Scale Constraint. Brain Sci. 2024, 14, 234.
- Qian, X.; Wang, Y.; Fu, Y.; Sun, X.; Xue, X.; Feng, J. Joint fMRI Decoding and Encoding with Latent Embedding Alignment. arXiv 2023, arXiv:2303.14730.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. Adv. Neural Inf. Process. Syst. 2014, 27.
- Zhao, Z.; Jing, H.; Wang, J.; Wu, W.; Ma, Y. Image Structure Reconstruction from fMRI by Unsupervised Learning Based on VAE. In Artificial Neural Networks and Machine Learning—ICANN 2022; Springer Nature: Cham, Switzerland, 2022; pp. 137–148.
- Chen, K.; Ma, Y.; Sheng, M.; Zheng, N. Foreground-Attention in Neural Decoding: Guiding Loop-Enc-Dec to Reconstruct Visual Stimulus Images from fMRI. In Proceedings of the 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, 18–23 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–8.
- Luo, J.; Cui, W.; Liu, J.; Li, Y.; Guo, Y.; Xu, S.; Wang, L. Visual Image Decoding of Brain Activities Using a Dual Attention Hierarchical Latent Generative Network with Multiscale Feature Fusion. IEEE Trans. Cogn. Dev. Syst. 2022, 15, 761–773.
- Gaziv, G.; Beliy, R.; Granot, N.; Hoogi, A.; Strappini, F.; Golan, T.; Irani, M. Self-Supervised Natural Image Reconstruction and Large-Scale Semantic Classification from Brain Activity. NeuroImage 2022, 254, 119121.
- Meng, L.; Yang, C. Semantics-Guided Hierarchical Feature Encoding Generative Adversarial Network for Natural Image Reconstruction from Brain Activities. In Proceedings of the 2023 International Joint Conference on Neural Networks (IJCNN), Gold Coast, Australia, 18–23 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–9.
- Ozcelik, F.; Choksi, B.; Mozafari, M.; Reddy, L.; VanRullen, R. Reconstruction of Perceived Images from fMRI Patterns and Semantic Brain Exploration Using Instance-Conditioned GANs. In Proceedings of the 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, 18–23 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–8.
- Ren, Z.; Li, J.; Xue, X.; Li, X.; Yang, F.; Jiao, Z.; Gao, X. Reconstructing Seen Image from Brain Activity by Visually-Guided Cognitive Representation and Adversarial Learning. NeuroImage 2021, 228, 117602.
- Halac, M.; Isik, M.; Ayaz, H.; Das, A. Multiscale Voxel Based Decoding for Enhanced Natural Image Reconstruction from Brain Activity. In Proceedings of the 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, 18–23 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–7.
- Mozafari, M.; Reddy, L.; VanRullen, R. Reconstructing Natural Scenes from fMRI Patterns Using BigBiGAN. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–8.
- Huang, S.; Sun, L.; Yousefnezhad, M.; Wang, M.; Zhang, D. Functional Alignment-Auxiliary Generative Adversarial Network-Based Visual Stimuli Reconstruction via Multi-Subject fMRI. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 2715–2725.
- Fang, T.; Zheng, Q.; Pan, G. Alleviating the Semantic Gap for Generalized fMRI-to-Image Reconstruction. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2024; Volume 36, pp. 15096–15107.
- Ferrante, M.; Boccato, T.; Toschi, N. Semantic Brain Decoding: From fMRI to Conceptually Similar Image Reconstruction of Visual Stimuli. arXiv 2022, arXiv:2212.06726.
- Lu, Y.; Du, C.; Zhou, Q.; Wang, D.; He, H. MindDiffuser: Controlled Image Reconstruction from Human Brain Activity with Semantic and Structural Diffusion. In Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, ON, Canada, 29 October–3 November 2023; pp. 5899–5908.
- Meng, L.; Yang, C. Dual-Guided Brain Diffusion Model: Natural Image Reconstruction from Human Visual Stimulus fMRI. Bioengineering 2023, 10, 1117.
- Chen, Z.; Qing, J.; Xiang, T.; Yue, W.L.; Zhou, J.H. Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 22710–22720.
- Scotti, P.; Banerjee, A.; Goode, J.; Shabalin, S.; Nguyen, A.; Dempster, A.; Verlinde, N.; Yundler, E.; Weisberg, D.; Norman, K. Reconstructing the Mind’s Eye: fMRI-to-Image with Contrastive Learning and Diffusion Priors. Adv. Neural Inf. Process. Syst. 2024, 36, 24705–24728.
- Sun, J.; Li, M.; Moens, M.F. Decoding Realistic Images from Brain Activity with Contrastive Self-Supervision and Latent Diffusion. arXiv 2023, arXiv:2310.00318.
- Ozcelik, F.; VanRullen, R. Natural Scene Reconstruction from fMRI Signals Using Generative Latent Diffusion. Sci. Rep. 2023, 13, 15666.
- Liu, Y.; Ma, Y.; Zhou, W.; Zhu, G.; Zheng, N. BrainCLIP: Bridging Brain and Visual-Linguistic Representation via CLIP for Generic Natural Visual Stimulus Decoding. arXiv 2023, arXiv:2302.12971.
- Zeng, B.; Li, S.; Liu, X.; Gao, S.; Jiang, X.; Tang, X.; Hu, Y.; Liu, J.; Zhang, B. Controllable Mind Visual Diffusion Model. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 26–27 February 2024; Volume 38, pp. 6935–6943.
- Quan, R.; Wang, W.; Tian, Z.; Ma, F.; Yang, Y. Psychometry: An Omnifit Model for Image Reconstruction from Human Brain Activity. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–24 June 2024; pp. 233–243.
- Liu, Y.; Ma, Y.; Zhu, G.; Jing, H.; Zheng, N. See Through Their Minds: Learning Transferable Neural Representation from Cross-Subject fMRI. arXiv 2024, arXiv:2403.06361.
- Careil, M.; Benchetrit, Y.; King, J.-R. Dynadiff: Single-Stage Decoding of Images from Continuously Evolving fMRI. arXiv 2025, arXiv:2505.14556.
- Caffo, B.S.; Crainiceanu, C.M.; Verduzco, G.; Joel, S.; Mostofsky, S.H.; Bassett, S.S.; Pekar, J.J. Two-Stage Decompositions for the Analysis of Functional Connectivity for fMRI with Application to Alzheimer’s Disease Risk. NeuroImage 2010, 51, 1140–1149.
- Takagi, Y.; Nishimoto, S. High-Resolution Image Reconstruction with Latent Diffusion Models from Human Brain Activity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 14453–14463.
- Scotti, P.S.; Tripathy, M.; Villanueva, C.K.T.; Kneeland, R.; Chen, T.; Narang, A.; Santhirasegaran, C.; Xu, J.; Naselaris, T.; Norman, K.A.; et al. MindEye2: Shared-Subject Models Enable fMRI-to-Image with 1 Hour of Data. In Proceedings of the 41st International Conference on Machine Learning (ICML), Vienna, Austria, 21–27 July 2024.
- Allen, E.J.; St-Yves, G.; Wu, Y.; Breedlove, J.L.; Prince, J.S.; Dowdle, L.T.; Nau, M.; Caron, B.; Pestilli, F.; Charest, I.; et al. A Massive 7T fMRI Dataset to Bridge Cognitive Neuroscience and Artificial Intelligence. Nat. Neurosci. 2022, 25, 116–126.
- Rokem, A.; Kay, K. Fractional Ridge Regression: A Fast, Interpretable Reparameterization of Ridge Regression. GigaScience 2020, 9, giaa133.
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.
- Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA, 2006.
- Mockus, J. The Bayesian Approach to Global Optimization. In System Modeling and Optimization; Springer: Berlin/Heidelberg, Germany, 1982; Volume 38, pp. 473–481.
- Xiao, H.; Liu, S.; Zuo, K.; Xu, H.; Cai, Y.; Liu, T.; Yang, Z. Multiple Adverse Weather Image Restoration: A Review. Neurocomputing 2024, 618, 129044.
- Hoerl, A.E.; Kennard, R.W. Ridge Regression: Biased Estimation for Nonorthogonal Problems. Technometrics 1970, 12, 55–67.
- Takagi, Y.; Nishimoto, S. Improving Visual Image Reconstruction from Human Brain Activity Using Latent Diffusion Models via Multiple Decoded Inputs. arXiv 2023, arXiv:2306.11536.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.

| Module | Parameter Name | Type | Value | Reason for Selection |
|---|---|---|---|---|
| Deep-SVD | Nested Singular Value Decomposition Optimization Layers | Integer | 57 | A deeper structure captures complex spatial patterns; 57 strikes the best balance between performance and computational cost. |
| | Initial Learning Rate | Continuous | 10⁻³ | A larger learning rate speeds early convergence and is paired with learning-rate decay for stable optimization. |
| | Singular Value Decomposition Objective Function Parameters | Continuous | 1, −1, 1 | Controls the positive–negative balance of the decomposition constraints; preliminary experiments show this combination achieves the best reconstruction accuracy. |
| | Adam Optimizer Parameters β₁, β₂, ε | Continuous | β₁ = 0.9, β₂ = 0.999, ε = 10⁻⁸ | Standard stable configuration for training on sparse, high-dimensional fMRI features. |
| BAFRR | SMALL_BIAS | Continuous | 10⁻² | Controls small bias terms in grid initialization to ensure numerical stability. |
| | BIG_BIAS | Continuous | 10⁻³ | Sets a larger bias range to cover the possible solution space. |
| | Log-Space Sampling Step Size | Continuous | 0.2 | Balances search accuracy and computational efficiency. |
| | Kernel Length-Scale Initial Value | Continuous | 1 | Matches the amplitude scale, maintaining initial scale invariance. |
| | Non-Uniform Adaptive Grid Redistribution s | Continuous | 1.75 | Controls the rate of change of the grid density. |
| | Focus Adjustment Parameter c | Continuous | 3 | Enhances fitting accuracy in high-gradient regions. |
| | Bayesian Optimization Iterations | Integer | 20 | Balances search-space exploration and time cost. |
| | Initial Regularization Search Range | Interval | [0, 1] | Covers the most common range of regularization strengths. |
| | Target Score Grid Step Size | Continuous | 0.05 | Ensures search accuracy while reducing computational load. |
| | Cross-Validation Folds k | Integer | 5 | Improves evaluation stability with limited samples. |
| DCAF | Confidence Network Bottleneck Dimension | Integer | 4 | Reduces feature dimension to prevent overfitting while retaining key information. |
| | Time Decay Step Range | Interval | [0, 50] | Controls the confidence decay rate, balancing short-term and long-term dependencies. |
| | Diffusion Sampling Steps | Integer | 50 | Balances generation quality and inference time. |
| | DDIM Step-Size Parameter η | Continuous | 0.0 | Makes DDIM sampling deterministic, ensuring reproducible generation. |
| General Settings | Random Seed | Integer | 42 | Ensures experimental reproducibility. |
| | Early Stopping Strategy | Boolean | Enabled | Prevents overfitting and saves computational resources. |
| Method | PixCorr ↑ | SSIM ↑ | AlexNet(2) ↑ | AlexNet(5) ↑ | Inception ↑ | CLIP ↑ | Eff ↓ |
|---|---|---|---|---|---|---|---|
| MindEyeV2 [44] | 0.322 | 0.431 | 96.1% | 98.6% | 95.4% | 94.5% | 0.619 |
| MindDiffuser [31] | 0.278 | 0.354 | - | - | - | 76.5% | - |
| STTM [40] | 0.333 | 0.334 | 95.7% | 98.5% | 95.8% | 95.7% | 0.611 |
| LDM [43] | 0.246 | 0.410 | 78.1% | 85.6% | 83.8% | 82.1% | 0.811 |
| Ours | 0.361 | 0.423 | 93.3% | 98.8% | 96.0% | 97.8% | 0.609 |
| Method |  |  | Inception ↑ | CLIP ↑ | Eff ↓ |
|---|---|---|---|---|---|
| without DeepSVD | 0.342 | 0.398 | 94.3% | 95.9% | 0.632 |
| without BAFRR | 0.336 | 0.401 | 93.8% | 95.6% | 0.640 |
| without DCAF | 0.331 | 0.386 | 92.5% | 94.2% | 0.651 |
| without DeepSVD and BAFRR | 0.310 | 0.375 | 91.6% | 92.8% | 0.667 |
| Baseline (none of the three modules) | 0.295 | 0.359 | 89.4% | 90.7% | 0.688 |
| Ours | 0.381 | 0.293 | 96.0% | 97.8% | 0.609 |
| PCA | SVD | t-SNE | UMAP | CNN | NMF | PCA + DeepSVD (Ours) | 
|---|---|---|---|---|---|---|
| 0.84 | 0.80 | 0.41 | 0.48 | 0.56 | 0.67 | 0.87 | 