A Comprehensive Study of MCS-TCL: Multi-Functional Sampling for Trustworthy Compressive Learning
Abstract
1. Introduction
- We propose a novel framework, Multi-functional Compressive Sensing Sampling-based Trustworthy Compressive Learning (MCS-TCL), which unifies image sampling, compression, and feature extraction within a single process while explicitly modelling uncertainty in the final predictions.
- To enable direct task execution from compressed measurements without explicit reconstruction, we introduce a method that transforms measurement vectors into structured, image-like representations, allowing seamless integration with convolutional operations (see the first sketch after this list).
- We incorporate evidential deep learning (EDL) into the task-specific network and propose a tailored loss function that quantifies prediction uncertainty and encourages the model to assign stronger evidence to the correct class (see the second sketch after this list).
- Experimental results on image classification tasks demonstrate that MCS-TCL achieves state-of-the-art performance at low sampling rates while reducing model size by 86.57%, highlighting both its effectiveness and efficiency.
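The measurement-to-feature-map idea in the second contribution can be illustrated with a minimal PyTorch sketch. This is not the paper's exact sampling operator: the `BlockCSSampler` module, its block size, and its sampling-rate argument are illustrative assumptions, showing only how per-block linear measurements can be laid out as an image-like tensor that a CNN consumes directly.

```python
import torch
import torch.nn as nn

class BlockCSSampler(nn.Module):
    """Block-based compressive sampling that outputs an image-like feature map.

    Each non-overlapping B x B block is projected by a learned sensing matrix
    (implemented as a stride-B convolution), so an H x W image becomes an
    (H/B) x (W/B) map with m measurement channels, ready for convolutional
    processing without any reconstruction step.
    """

    def __init__(self, block_size: int = 4, sampling_rate: float = 0.25, in_channels: int = 1):
        super().__init__()
        n = in_channels * block_size * block_size   # pixels per block
        m = max(1, round(sampling_rate * n))        # measurements per block
        # A stride-B conv with a B x B kernel is exactly a per-block linear projection.
        self.sample = nn.Conv2d(in_channels, m, kernel_size=block_size,
                                stride=block_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> measurements arranged as (N, m, H/B, W/B)
        return self.sample(x)

if __name__ == "__main__":
    sampler = BlockCSSampler(block_size=4, sampling_rate=0.25, in_channels=1)
    img = torch.randn(2, 1, 256, 256)
    feat = sampler(img)
    print(feat.shape)   # torch.Size([2, 4, 64, 64]) -- image-like and CNN-ready
```

Implementing the per-block projection as a strided convolution keeps the sampling matrix learnable end to end together with the downstream task network, which is the kind of unification of sampling and feature extraction the framework targets.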
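For the third contribution, the uncertainty mechanism builds on evidential deep learning (Sensoy et al. [26]). The sketch below shows one standard EDL instantiation for reference: softplus evidence, Dirichlet parameters α = e + 1, uncertainty u = K/S, and a type-II maximum-likelihood loss with a KL regulariser that discourages evidence on wrong classes. The paper's tailored loss is not reproduced here, and the constant `kl_weight` (often annealed during training) is an illustrative choice.

```python
import torch
import torch.nn.functional as F

def edl_outputs(logits: torch.Tensor):
    """Map raw network outputs to Dirichlet evidence, belief and uncertainty."""
    evidence = F.softplus(logits)               # non-negative evidence per class
    alpha = evidence + 1.0                      # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)   # Dirichlet strength S
    belief = evidence / strength                # per-class belief mass
    uncertainty = logits.shape[1] / strength    # u = K / S
    return alpha, belief, uncertainty

def edl_log_loss(alpha: torch.Tensor, target: torch.Tensor, kl_weight: float = 0.1):
    """Type-II maximum-likelihood EDL loss with a KL regulariser toward Dir(1,...,1)."""
    num_classes = alpha.shape[1]
    y = F.one_hot(target, num_classes).float()
    strength = alpha.sum(dim=1, keepdim=True)
    nll = (y * (torch.log(strength) - torch.log(alpha))).sum(dim=1)

    # Pull the "misleading" part of alpha (evidence on wrong classes) toward the
    # uniform Dirichlet, so strong evidence is only rewarded on the correct class.
    alpha_tilde = y + (1.0 - y) * alpha
    ones = torch.ones_like(alpha_tilde)
    kl = (torch.lgamma(alpha_tilde.sum(1)) - torch.lgamma(ones.sum(1))
          - torch.lgamma(alpha_tilde).sum(1) + torch.lgamma(ones).sum(1)
          + ((alpha_tilde - 1.0) * (torch.digamma(alpha_tilde)
             - torch.digamma(alpha_tilde.sum(1, keepdim=True)))).sum(1))
    return (nll + kl_weight * kl).mean()

if __name__ == "__main__":
    logits = torch.randn(8, 101)                # e.g., 101 Caltech101 classes
    labels = torch.randint(0, 101, (8,))
    alpha, belief, u = edl_outputs(logits)
    print(edl_log_loss(alpha, labels).item(), u.mean().item())
```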
2. Related Works
2.1. Compressive Sensing
2.2. Compressive Learning
2.3. Evidential Deep Learning
3. Proposed Methods
3.1. CS Sampling with Measurement Transformation
3.1.1. Algorithm Conversion from Measurements to Feature Maps
3.1.2. Validity of Algorithm
3.1.3. Characteristics of Generated Feature Maps
3.2. Estimating the Uncertainty
4. Experimental Results
4.1. Datasets
- Caltech101 contains 8677 images spanning 101 classes. Images are roughly 300 × 200 pixels, and the number of images per category ranges from 40 to 800.
- The UC Merced Land Use Dataset contains 21 land-use classes with 100 images per class. All images are 256 × 256 pixels.
- RESISC45 consists of 31,500 images: 700 images of 256 × 256 pixels for each of its 45 classes.
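For the three datasets above, a minimal data-preparation sketch is given below. It assumes the usual class-per-folder layout for UC Merced (and RESISC45) and an 80/20 random split; the exact preprocessing, split protocol, and folder paths used in the paper are not specified in this excerpt.

```python
import torch
from torchvision import datasets, transforms

# Resize everything to a common 256 x 256 resolution; the split ratio and the
# folder paths below are illustrative, not the paper's exact protocol.
preprocess = transforms.Compose([
    transforms.Lambda(lambda im: im.convert("RGB")),  # Caltech101 mixes greyscale and RGB
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])

caltech = datasets.Caltech101(root="data", target_type="category",
                              transform=preprocess, download=True)
# UC Merced and RESISC45 are distributed as class-per-folder archives, so a
# generic ImageFolder loader suffices once they are unpacked (path assumed).
ucmerced = datasets.ImageFolder("data/UCMerced_LandUse/Images", transform=preprocess)

n_train = int(0.8 * len(caltech))
train_set, test_set = torch.utils.data.random_split(
    caltech, [n_train, len(caltech) - n_train])
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
```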
4.2. Metrics
4.3. Implementation Details
4.4. Ablation Study
4.4.1. Effectiveness of Block Size
4.4.2. Effectiveness of Loss Function
| Term 1 | Term 2 | Term 3 | ACC | AUC |
|---|---|---|---|---|
| ✓ | | | 94.21 | 99.68 |
| | ✓ | | 94.21 | 99.74 |
| ✓ | ✓ | | 94.37 | 99.82 |
| | ✓ | ✓ | 94.76 | 99.82 |
| ✓ | ✓ | ✓ | 95.12 | 99.86 |
4.5. Comparison with State-of-the-Art CL Works
4.5.1. Configuration of Models
4.5.2. Qualitative Comparison of Input Images to the Task-Specific Network
4.5.3. Model Size Comparison
4.5.4. Classification Accuracy Comparison
4.5.5. Robustness to Degraded Input Images
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Candès, E.J.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30. [Google Scholar] [CrossRef]
- Liutkus, A.; Martina, D.; Popoff, S.; Chardon, G.; Katz, O.; Lerosey, G.; Gigan, S.; Daudet, L.; Carron, I. Imaging with nature: Compressive imaging using a multiply scattering medium. Sci. Rep. 2014, 4, 5552. [Google Scholar] [CrossRef]
- Lustig, M.; Donoho, D.L.; Santos, J.M.; Pauly, J.M. Compressed sensing MRI. IEEE Signal Process. Mag. 2008, 25, 72–82. [Google Scholar] [CrossRef]
- Duarte, M.F.; Davenport, M.A.; Takhar, D.; Laska, J.N.; Sun, T.; Kelly, K.F.; Baraniuk, R.G. Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag. 2008, 25, 83–91. [Google Scholar] [CrossRef]
- Rousset, F.; Ducros, N.; Farina, A.; Valentini, G.; d’Andrea, C.; Peyrin, F. Adaptive basis scan by wavelet prediction for single-pixel imaging. IEEE Trans. Comput. Imaging 2016, 3, 36–46. [Google Scholar] [CrossRef]
- Wu, Z.; Zhang, J.; Mou, C. Dense deep unfolding network with 3D-CNN prior for snapshot compressive imaging. arXiv 2021, arXiv:2109.06548. [Google Scholar] [CrossRef]
- Dong, W.; Shi, G.; Li, X.; Ma, Y.; Huang, F. Compressive Sensing via Nonlocal Low-Rank Regularization. IEEE Trans. Image Process. 2014, 23, 3618–3632. [Google Scholar] [CrossRef]
- Kim, Y.; Nadar, M.S.; Bilgin, A. Compressed sensing using a Gaussian scale mixtures model in wavelet domain. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; IEEE: New York, NY, USA, 2010; pp. 3365–3368. [Google Scholar]
- Zhang, J.; Zhao, C.; Zhao, D.; Gao, W. Image compressive sensing recovery using adaptively learned sparsifying basis via L0 minimization. Signal Process. 2014, 103, 114–126. [Google Scholar] [CrossRef]
- Zhang, J.; Zhao, D.; Gao, W. Group-based sparse representation for image restoration. IEEE Trans. Image Process. 2014, 23, 3336–3351. [Google Scholar] [CrossRef] [PubMed]
- Zhang, J.; Zhao, C.; Gao, W. Optimization-inspired compact deep compressive sensing. IEEE J. Sel. Top. Signal Process. 2020, 14, 765–774. [Google Scholar] [CrossRef]
- You, D.; Xie, J.; Zhang, J. ISTA-NET++: Flexible deep unfolding network for compressive sensing. In Proceedings of the 2021 IEEE International Conference on Multimedia and Expo (ICME), Shenzhen, China, 5–9 July 2021; IEEE: New York, NY, USA, 2021; pp. 1–6. [Google Scholar]
- Song, J.; Chen, B.; Zhang, J. Memory-augmented deep unfolding network for compressive sensing. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual, 20–24 October 2021; pp. 4249–4258. [Google Scholar]
- Chen, B.; Zhang, J. Content-aware scalable deep compressed sensing. IEEE Trans. Image Process. 2022, 31, 5412–5426. [Google Scholar] [CrossRef]
- Song, J.; Mou, C.; Wang, S.; Ma, S.; Zhang, J. Optimization-Inspired Cross-Attention Transformer for Compressive Sensing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 6174–6184. [Google Scholar]
- Mohassel, P.; Zhang, Y. Secureml: A system for scalable privacy-preserving machine learning. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2017; IEEE: New York, NY, USA, 2017; pp. 19–38. [Google Scholar]
- Calderbank, R.; Jafarpour, S.; Schapire, R. Compressed learning: Universal sparse dimensionality reduction and learning in the measurement domain. Preprint 2009. Available online: https://www.semanticscholar.org/paper/Compressed-Learning-%3A-Universal-Sparse-Reduction-in-Calderbank/627c14fe9097d459b8fd47e8a901694198be9d5d#citing-papers (accessed on 4 September 2025).
- Reboredo, H.; Renna, F.; Calderbank, R.; Rodrigues, M.R. Compressive classification. In Proceedings of the 2013 IEEE International Symposium on Information Theory, Istanbul, Turkey, 7–12 July 2013; IEEE: New York, NY, USA, 2013; pp. 674–678. [Google Scholar]
- Reboredo, H.; Renna, F.; Calderbank, R.; Rodrigues, M.R.D. Projections designs for compressive classification. In Proceedings of the 2013 IEEE Global Conference on Signal and Information Processing, Austin, TX, USA, 3–5 December 2013; pp. 1029–1032. [Google Scholar] [CrossRef]
- Lohit, S.; Kulkarni, K.; Turaga, P.; Wang, J.; Sankaranarayanan, A.C. Reconstruction-free inference on compressive measurements. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Boston, MA, USA, 7–12 June 2015; pp. 16–24. [Google Scholar]
- Adler, A.; Elad, M.; Zibulevsky, M. Compressed learning: A deep neural network approach. arXiv 2016, arXiv:1610.09615. [Google Scholar] [CrossRef]
- Zisselman, E.; Adler, A.; Elad, M. Compressed learning for image classification: A deep neural network approach. In Handbook of Numerical Analysis; Elsevier: Amsterdam, The Netherlands, 2018; Volume 19, pp. 3–17. [Google Scholar]
- Tran, D.T.; Yamaç, M.; Degerli, A.; Gabbouj, M.; Iosifidis, A. Multilinear compressive learning. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 1512–1524. [Google Scholar] [CrossRef]
- Mou, C.; Zhang, J. TransCL: Transformer makes strong and flexible compressive learning. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 5236–5251. [Google Scholar] [CrossRef] [PubMed]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Sensoy, M.; Kaplan, L.; Kandemir, M. Evidential deep learning to quantify classification uncertainty. Adv. Neural Inf. Process. Syst. 2018, 31. [Google Scholar] [CrossRef]
- Kimishima, F. Multi-functional Compressive Sensing Sampling-based Trustworthy Compressive Learning. In Proceedings of the 2025 Data Compression Conference (DCC), Snowbird, UT, USA, 18–21 March 2025. [Google Scholar]
- Chen, S.S.; Donoho, D.L.; Saunders, M.A. Atomic decomposition by basis pursuit. SIAM Rev. 2001, 43, 129–159. [Google Scholar] [CrossRef]
- Zhang, J.; Ghanem, B. ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1828–1837. [Google Scholar]
- Shi, W.; Jiang, F.; Liu, S.; Zhao, D. Image compressed sensing using convolutional neural network. IEEE Trans. Image Process. 2019, 29, 375–388. [Google Scholar] [CrossRef]
- Canh, T.N.; Jeon, B. Multi-scale deep compressive sensing network. In Proceedings of the 2018 IEEE Visual Communications and Image Processing (VCIP), Taichung, Taiwan, 9–12 December 2018; IEEE: New York, NY, USA, 2018; pp. 1–4. [Google Scholar]
- Wu, Y.; Rosca, M.; Lillicrap, T. Deep compressed sensing. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; PMLR: San Diego, CA, USA, 2019; pp. 6850–6860. [Google Scholar]
- Shi, W.; Jiang, F.; Liu, S.; Zhao, D. Scalable convolutional neural network for image compressed sensing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12290–12299. [Google Scholar]
- Zhang, Z.; Liu, Y.; Liu, J.; Wen, F.; Zhu, C. AMP-Net: Denoising-based deep unfolding for compressive image sensing. IEEE Trans. Image Process. 2020, 30, 1487–1500. [Google Scholar] [CrossRef]
- Davenport, M.A.; Duarte, M.F.; Wakin, M.B.; Laska, J.N.; Takhar, D.; Kelly, K.F.; Baraniuk, R.G. The smashed filter for compressive classification and target recognition. In Proceedings of the Computational Imaging V, San Jose, CA, USA, 28 February 2007; SPIE: St. Bellingham, WA, USA, 2007; Volume 6498, pp. 142–153. [Google Scholar]
- Baraniuk, R.G.; Wakin, M.B. Random projections of smooth manifolds. Found. Comput. Math. 2009, 9, 51–77. [Google Scholar] [CrossRef]
- Kulkarni, K.; Turaga, P. Recurrence textures for human activity recognition from compressive cameras. In Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; IEEE: New York, NY, USA, 2012; pp. 1417–1420. [Google Scholar]
- Kulkarni, K.; Turaga, P. Reconstruction-free action inference from compressive imagers. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 772–784. [Google Scholar] [CrossRef]
- Calderbank, R.; Jafarpour, S. Finding needles in compressed haystacks. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012; IEEE: New York, NY, USA, 2012; pp. 3441–3444. [Google Scholar]
- Wimalajeewa, T.; Chen, H.; Varshney, P.K. Performance limits of compressive sensing-based signal classification. IEEE Trans. Signal Process. 2012, 60, 2758–2770. [Google Scholar] [CrossRef]
- Haupt, J.; Castro, R.; Nowak, R.; Fudge, G.; Yeh, A. Compressive Sampling for Signal Classification. In Proceedings of the 2006 Fortieth Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 29 October–1 November 2006; pp. 1430–1434. [Google Scholar] [CrossRef]
- Lohit, S.; Kulkarni, K.; Turaga, P. Direct inference on compressive measurements using convolutional neural networks. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 1913–1917. [Google Scholar] [CrossRef]
- Hollis, B.; Patterson, S.; Trinkle, J. Compressed Learning for Tactile Object Recognition. IEEE Robot. Autom. Lett. 2018, 3, 1616–1623. [Google Scholar] [CrossRef]
- Xu, Y.; Kelly, K.F. Compressed domain image classification using a multi-rate neural network. arXiv 2019, arXiv:1901.09983. [Google Scholar]
- Gal, Y. Uncertainty in Deep Learning. Ph.D. Dissertation, Cambridge University, Cambridge, UK, 2016. Available online: https://scholar.google.com/citations?view_op=view_citation&hl=ja&user=SIayDoQAAAAJ&cstart=300&pagesize=100&sortby=pubdate&citation_for_view=SIayDoQAAAAJ:kNdYIx-mwKoC (accessed on 4 September 2025).
- Guo, C.; Pleiss, G.; Sun, Y.; Weinberger, K.Q. On calibration of modern neural networks. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; PMLR: San Diego, CA, USA, 2017; pp. 1321–1330. [Google Scholar]
- Malinin, A.; Gales, M. Predictive uncertainty estimation via prior networks. Adv. Neural Inf. Process. Syst. 2018, 31, 7047–7058. [Google Scholar]
- Dempster, A.P. Upper and lower probabilities induced by a multivalued mapping. In Classic Works of the Dempster-Shafer Theory of Belief Functions; Springer: Berlin/Heidelberg, Germany, 2008; pp. 57–72. [Google Scholar]
- Dempster, A.P. A generalization of Bayesian inference. J. R. Stat. Soc. Ser. B (Methodol.) 1968, 30, 205–232. [Google Scholar] [CrossRef]
- Gao, J.; Chen, M.; Xu, C. Collecting Cross-Modal Presence-Absence Evidence for Weakly-Supervised Audio-Visual Event Perception. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 18827–18836. [Google Scholar]
- Li, B.; Han, Z.; Li, H.; Fu, H.; Zhang, C. Trustworthy long-tailed classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 6970–6979. [Google Scholar]
- Xu, Z.; Yue, X.; Lv, Y.; Liu, W.; Li, Z. Trusted fine-grained image classification through hierarchical evidence fusion. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 10657–10665. [Google Scholar]
- Han, Z.; Zhang, C.; Fu, H.; Zhou, J.T. Trusted multi-view classification with dynamic evidential fusion. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 2551–2566. [Google Scholar] [CrossRef] [PubMed]
- Ren, J.; Jiang, L.; Peng, H.; Liu, Z.; Wu, J.; Philip, S.Y. Evidential temporal-aware graph-based social event detection via dempster-shafer theory. In Proceedings of the 2022 IEEE International Conference on Web Services (ICWS), Barcelona, Spain, 10–16 July 2022; IEEE: New York, NY, USA, 2022; pp. 331–336. [Google Scholar]
- Ren, J.; Peng, H.; Jiang, L.; Liu, Z.; Wu, J.; Yu, Z.; Philip, S.Y. Uncertainty-guided boundary learning for imbalanced social event detection. IEEE Trans. Knowl. Data Eng. 2023, 36, 2701–2715. [Google Scholar] [CrossRef]
- Gan, L. Block compressed sensing of natural images. In Proceedings of the 2007 15th International Conference on Digital Signal Processing, Cardiff, UK, 1–4 July 2007; IEEE: New York, NY, USA, 2007; pp. 403–406. [Google Scholar]
- Gao, X.; Zhang, J.; Che, W.; Fan, X.; Zhao, D. Block-based compressive sensing coding of natural images by local structural measurement matrix. In Proceedings of the 2015 Data Compression Conference, Snowbird, UT, USA, 7–9 April 2015; IEEE: New York, NY, USA, 2015; pp. 133–142. [Google Scholar]
- Burrus, C.S.; Parks, T. Convolution Algorithms; Citeseer: New York, NY, USA, 1985; Volume 6, p. 15. [Google Scholar]
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images; Technical Report; University of Toronto: Toronto, ON, Canada, 2009; Available online: https://scholar.google.com/citations?view_op=view_citation&hl=ja&user=xegzhJcAAAAJ&citation_for_view=xegzhJcAAAAJ:d1gkVwhDpl0C (accessed on 4 September 2025).
- Li, F.F.; Andreeto, M.; Ranzato, M.; Perona, P. Caltech 101. CaltechDATA. 2022. Available online: https://data.caltech.edu/records/mzrjq-6wc02 (accessed on 4 September 2025).
- Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, San Jose, CA, USA, 2–5 November 2010; pp. 270–279. [Google Scholar]
- Cheng, G.; Han, J.; Lu, X. Remote sensing image scene classification: Benchmark and state of the art. Proc. IEEE 2017, 105, 1865–1883. [Google Scholar] [CrossRef]
- McClish, D.K. Analyzing a portion of the ROC curve. Med. Decis. Mak. 1989, 9, 190–195. [Google Scholar] [CrossRef]
- Loshchilov, I.; Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983. [Google Scholar]
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. Pytorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 2019, 32, 721. [Google Scholar]
- Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
- De Lathauwer, L.; De Moor, B.; Vandewalle, J. A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 2000, 21, 1253–1278. [Google Scholar] [CrossRef]
| | Block Size 2 | Block Size 4 | Block Size 8 | Block Size 16 |
|---|---|---|---|---|
| Image Size | 128 × 128 | 64 × 64 | 32 × 32 | 16 × 16 |
| ACC | 97.94 | 95.12 | 88.16 | 69.92 |
| AUC | 99.96 | 99.86 | 98.35 | 96.87 |
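The "Image Size" row follows directly from block-based sampling: assuming 256 × 256 inputs (consistent with the table), a block size B yields a (256/B) × (256/B) measurement map, so larger blocks trade spatial resolution of the generated feature map for coarser measurements. A quick check:

```python
# Feature-map side length produced by block-based sampling of a 256 x 256 image.
IMAGE_SIDE = 256
for block_size in (2, 4, 8, 16):
    side = IMAGE_SIDE // block_size
    print(f"block {block_size:2d} -> {side} x {side} feature map")
# block  2 -> 128 x 128, block  4 -> 64 x 64, block  8 -> 32 x 32, block 16 -> 16 x 16
```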
| Model Name | Configuration | SR |
|---|---|---|
| MCL | 334 × 334 × 1 | 0.2522 |
| TransCL | 36,864 × 3 | 0.2500 |
| MCS-TCL (ours) | 36,864 × 3 | 0.2500 |
| MCL | 235 × 235 × 1 | 0.1248 |
| TransCL | 18,432 × 3 | 0.1250 |
| MCS-TCL (ours) | 18,432 × 3 | 0.1250 |
| MCL | 163 × 163 × 1 | 0.0601 |
| TransCL | 9216 × 3 | 0.0625 |
| MCS-TCL (ours) | 9216 × 3 | 0.0625 |
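The SR column is consistent with 384 × 384 RGB inputs (384 × 384 × 3 = 442,368 values); this input size is an inference from the numbers rather than a detail stated in this excerpt. Dividing each configuration's measurement count by that total reproduces the listed rates:

```python
# Sanity check of the SR column against an assumed 384 x 384 x 3 input.
ORIGINAL_DIM = 384 * 384 * 3   # 442,368 pixel values

configs = {
    "MCL 334x334x1": 334 * 334,
    "TransCL / MCS-TCL 36,864x3": 36_864 * 3,
    "MCL 235x235x1": 235 * 235,
    "TransCL / MCS-TCL 18,432x3": 18_432 * 3,
    "MCL 163x163x1": 163 * 163,
    "TransCL / MCS-TCL 9,216x3": 9_216 * 3,
}
for name, measurements in configs.items():
    print(f"{name}: SR = {measurements / ORIGINAL_DIM:.4f}")
# 0.2522, 0.2500, 0.1248, 0.1250, 0.0601, 0.0625 -- matching the table
```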
| SR | Method | Meas | Model Size (MB) | Multi-Adds (G) | Number of Params |
|---|---|---|---|---|---|
| 6.25% | MCL [23] | 442,368 | 16,650.11 | 54.4 | 423,250,378 |
| 6.25% | TransCL [24] | 442,368 | 17,556.83 | 54.4 | 423,437,770 |
| 6.25% | MCS-TCL | 27,648 | 2500.32 | 27.23 | 87,502,181 |
| 12.5% | MCL [23] | 442,368 | 16,650.11 | 54.4 | 423,250,378 |
| 12.5% | TransCL [24] | 442,368 | 17,557.66 | 54.4 | 423,643,594 |
| 12.5% | MCS-TCL | 55,296 | 2500.32 | 27.23 | 87,502,181 |
| 25% | MCL [23] | 442,368 | 16,650.11 | 54.4 | 423,250,378 |
| 25% | TransCL [24] | 442,368 | 17,559.23 | 54.4 | 424,036,810 |
| 25% | MCS-TCL | 110,592 | 2500.32 | 27.23 | 87,502,181 |
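Parameter counts and a rough storage estimate of the kind reported above can be reproduced with plain PyTorch, as sketched below with a stand-in ResNet-50 backbone; the actual backbones and the profiling tool used for the multiply-add counts are not specified in this excerpt (a MAC profiler such as thop or ptflops would be needed for that column).

```python
import torch
from torchvision.models import resnet50  # stand-in backbone, not the paper's exact network

def count_params(model: torch.nn.Module) -> int:
    """Total number of learnable parameters."""
    return sum(p.numel() for p in model.parameters())

def model_size_mb(model: torch.nn.Module) -> float:
    """Approximate storage footprint, assuming every parameter is stored in fp32."""
    return count_params(model) * 4 / (1024 ** 2)

net = resnet50(num_classes=101)
print(f"{count_params(net):,} parameters, {model_size_mb(net):.2f} MB (fp32)")
```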
| Dataset | Method | SR = 6.25% | SR = 12.5% | SR = 25% |
|---|---|---|---|---|
| Caltech101 | MCL [23] | 90.67/87.88 | 90.02/88.42 | 91.05/87.52 |
| Caltech101 | TransCL [24] | 94.24/89.45 | 94.28/90.53 | 94.47/90.49 |
| Caltech101 | MCS-TCL | 94.09/88.53 | 94.66/88.54 | 94.58/88.54 |
| UC Merced | MCL [23] | 84.60/98.97 | 86.67/99.27 | 86.19/99.12 |
| UC Merced | TransCL [24] | 93.49/99.21 | 93.33/99.90 | 95.24/99.44 |
| UC Merced | MCS-TCL | 96.83/99.92 | 96.43/99.93 | 96.71/99.92 |
| RESISC45 | MCL [23] | 80.61/98.84 | 80.74/99.03 | 80.78/98.87 |
| RESISC45 | TransCL [24] | 90.78/99.45 | 91.53/99.54 | 91.88/99.42 |
| RESISC45 | MCS-TCL | 93.24/99.75 | 93.19/99.75 | 93.15/99.75 |