Recursive Bayesian Decoding in State Observation Models: Theory and Application in Quantum-Based Inference
Abstract
1. Introduction
1.1. Related Work and State of the Art
| Inference \ Model | Discrete-State | Particle-Based | Gaussian | Gaussian Sum |
|---|---|---|---|---|
| Filtering | [18,19] | [10,24,25,30,31,33] | [10,34,35,36] | [45,46,47,48,49,50,58,62,63] |
| Smoothing | [18,19] | [10,27,29,30] | [10,37,38,39,40] | [32,50,51,52,53,54,56,57,58] |
| Decoding | [20,21,23] | [23,26,28,29] | [23,41,42,43,44] | [11,59,60] |
1.2. Contributions and Outline
- Provides new theoretical insights into the relationships between MAP predecessor decoding and classical inference methods;
- Derives predecessor decoding explicitly for Gaussian and Gaussian mixture models;
- Presents systematically structured overviews of classical methods for these models;
- Compares the decoder with classical methods in an extensive numerical evaluation;
- Offers multiple interpretations of the results and a thorough general discussion.
1.3. Notation
2. Recursive Marginal and MAP Inference in State Observation Models
2.1. Marginal Inference
2.2. MAP Inference (Decoding)
3. Recursive Inference in Gaussian Models
3.1. Gaussian Marginals
3.2. Gaussian Decoding
- (i) Inferring the marginal means;
- (ii) Inferring the marginal MAP assignments;
- (iii) Inferring the MAP predecessors;
- (iv) Inferring the joint MAP sequence (formalized in the sketch below).
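In generic notation (chosen here for illustration; the paper's symbols and conditioning sets may differ), with states $x_k$ and observations $y_{1:K}$, these four targets can be written as:

```latex
\begin{align*}
&\text{(i) marginal means:}            && \mu_{k} = \mathbb{E}\!\left[x_k \mid y_{1:k}\right],\\
&\text{(ii) marginal MAP assignments:} && \hat{x}_k = \arg\max_{x_k}\, p(x_k \mid y_{1:K}),\\
&\text{(iii) MAP predecessors:}        && \hat{x}_{k-1}(x_k) = \arg\max_{x_{k-1}}\, p(x_{k-1} \mid x_k,\, y_{1:k-1}),\\
&\text{(iv) joint MAP sequence:}       && \hat{x}_{1:K} = \arg\max_{x_{1:K}}\, p(x_{1:K} \mid y_{1:K}).
\end{align*}
```

In a Gaussian model the mean and mode of each marginal coincide, so the differences between these four targets stem from the conditioning and from whether the states are treated jointly or marginally.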
Algorithm 1. Filtering and Predecessor Decoding in Gaussian Models (steps: predicting, updating, decoding).
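Since only the step annotations of Algorithm 1 survive above, the following is a minimal sketch of how a filtering-plus-predecessor-decoding recursion can be organized for a linear-Gaussian model $x_k = A x_{k-1} + w_k$, $y_k = C x_k + v_k$ with noise covariances $Q$ and $R$. All function and variable names are my own, and the decoding step uses the standard smoother-type gain; the authors' exact algorithm may differ in detail.

```python
import numpy as np

def kalman_predict(mu, P, A, Q):
    """Prediction step: propagate the filtered mean/covariance one step ahead."""
    mu_pred = A @ mu
    P_pred = A @ P @ A.T + Q
    return mu_pred, P_pred

def kalman_update(mu_pred, P_pred, y, C, R):
    """Update step: correct the prediction with the new observation y."""
    S = C @ P_pred @ C.T + R                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
    mu = mu_pred + K @ (y - C @ mu_pred)
    P = P_pred - K @ S @ K.T
    return mu, P

def map_predecessor(mu_prev, P_prev, mu_pred, P_pred, A, x_k):
    """MAP predecessor of a given state x_k: mode of p(x_{k-1} | x_k, y_{1:k-1}),
    which for Gaussians coincides with its mean (uses an RTS-type gain)."""
    G = P_prev @ A.T @ np.linalg.inv(P_pred)  # smoother-type gain
    return mu_prev + G @ (x_k - mu_pred)

def filter_and_decode(y_seq, mu0, P0, A, C, Q, R):
    """Forward pass: filter; store the quantities needed to decode predecessors."""
    mu, P = mu0, P0
    history = []   # (mu_{k-1|k-1}, P_{k-1|k-1}, mu_{k|k-1}, P_{k|k-1})
    for y in y_seq:
        mu_pred, P_pred = kalman_predict(mu, P, A, Q)      # predicting
        history.append((mu, P, mu_pred, P_pred))
        mu, P = kalman_update(mu_pred, P_pred, y, C, R)    # updating
    # Backward pass: start from the final filtered mean and chain MAP predecessors.
    x = mu
    traj = [x]
    for mu_prev, P_prev, mu_pred, P_pred in reversed(history[1:]):
        x = map_predecessor(mu_prev, P_prev, mu_pred, P_pred, A, x)   # decoding
        traj.append(x)
    return list(reversed(traj))
```

In this linear-Gaussian sketch, chaining MAP predecessors backward from the final filtered mean reproduces the Rauch-Tung-Striebel smoother means, which is one way to see the connection between predecessor decoding and classical smoothing.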
4. Recursive Inference in Gaussian Mixture Models
4.1. Marginal Inference
4.2. Mixture Decoding
Algorithm 2. Filtering and Predecessor Decoding in Gaussian Mixture Models (steps: mixture filtering, mixture decoding, mixture reduction).
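As with Algorithm 1, only the step annotations survive, so the following is a minimal sketch of one Gaussian-sum filtering step under a linear-Gaussian component model. The function name and parameters are my own; here "mixture decoding" is reduced to storing per-component predecessor statistics, and "mixture reduction" is plain pruning, whereas the paper may use merging-based reduction along the lines of [122,124,125].

```python
import numpy as np
from scipy.stats import multivariate_normal

def gm_filter_step(weights, means, covs, y, A, C, Q, R, max_components=8):
    """One Gaussian-sum filtering step: per-component predict/update, weight
    update via the components' predictive likelihoods, then pruning."""
    w_new, mu_new, P_new, pred_stats = [], [], [], []
    for w, mu, P in zip(weights, means, covs):
        # mixture filtering: Kalman prediction and update for this component
        mu_p = A @ mu
        P_p = A @ P @ A.T + Q
        S = C @ P_p @ C.T + R
        K = P_p @ C.T @ np.linalg.inv(S)
        w_new.append(w * multivariate_normal.pdf(y, mean=C @ mu_p, cov=S))
        mu_new.append(mu_p + K @ (y - C @ mu_p))
        P_new.append(P_p - K @ S @ K.T)
        # mixture decoding: keep the smoother-type gain and offsets needed to
        # map a decoded state x_k back to this component's MAP predecessor
        G = P @ A.T @ np.linalg.inv(P_p)
        pred_stats.append((mu, G, mu_p))
    w_new = np.asarray(w_new)
    w_new /= w_new.sum()
    # mixture reduction: keep only the heaviest components (simple pruning)
    keep = np.argsort(w_new)[::-1][:max_components]
    w_kept = w_new[keep] / w_new[keep].sum()
    return (w_kept,
            [mu_new[i] for i in keep],
            [P_new[i] for i in keep],
            [pred_stats[i] for i in keep])
```

Given a decoded state `x_k` and a stored tuple `(mu, G, mu_p)`, that component's MAP predecessor is `mu + G @ (x_k - mu_p)`, mirroring the Gaussian case component-wise.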
5. Quantum-Based Inference
5.1. Observation Models
5.2. Transition Model and Initialization
5.3. Evaluation and Implementation
6. Discussion
- For the Gaussian filtered trajectory (Figure 6a);
- For the Mixture filtered trajectory (Figure 6b);
- For the Mixture smoothed trajectory (Figure 6d);
- Only for the Mixture decoded trajectory (Figure 6f).
- Gaussian vs. Mixture Model is the intended and mathematically motivated interpretation, highlighting the behavior induced by the models across the discussed inference methods. The Gaussian model constitutes an oversimplification; in particular, Gaussian distributions underfit the double-slit pattern. In contrast, the more complex Gaussian mixture model reduces the model error (bias) by better capturing the interference sub-patterns (an illustrative sketch of both observation models follows this list).
- Classical vs. Quantum Particles is an applicable interpretation for the considered experimental setup. For classical or real particles, the expected detection pattern in a double-slit experiment would form a bimodal mixture. However, because the distance between the two slits is small relative to the detector distance, the resulting modes would overlap significantly, effectively resembling a single Gaussian. Thus, the single Gaussian model approximates the classical outcome, supporting this interpretation. Beware, however, of the counterintuitive nature of the subject: if we try to gather additional information about the particle, e.g., by sensing which slit it passes through, this would slightly improve the accuracy with real particles but increase the uncertainty with quantum particles, as the observation at the slit instantly collapses the wave function, so that interference patterns at the detector can no longer be exploited.
- Single vs. Double-Slit remains a reasonable interpretation of the comparison for quantum particles, as the double-slit pattern is enveloped by a single-slit profile and the specified mixture approximation preserves symmetry. Whether and how the addition of more slits, leading to more complex interference patterns, improves the inference results remains an open question.
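The sketch below illustrates the contrast drawn in the first item, assuming a standard Fraunhofer far-field double-slit intensity. All numerical parameters (wavelength, slit width and separation, detector distance) are illustrative placeholders, not values from the paper, and the per-fringe mixture construction is only one possible approximation.

```python
import numpy as np

# Illustrative parameters only (not taken from the paper's experiment).
wavelength  = 50e-9    # de Broglie wavelength [m]
slit_width  = 250e-9   # single-slit width a [m]
slit_sep    = 1000e-9  # slit separation d [m]
screen_dist = 0.5      # slit-to-detector distance L [m]

x = np.linspace(-0.1, 0.1, 2001)   # detector coordinate [m]
theta = x / screen_dist            # small-angle approximation

# Far-field (Fraunhofer) double-slit intensity: cos^2 interference fringes
# modulated by the single-slit diffraction envelope (sinc^2).
beta = np.pi * slit_width * theta / wavelength
delta = np.pi * slit_sep * theta / wavelength
intensity = np.cos(delta) ** 2 * np.sinc(beta / np.pi) ** 2

dx = x[1] - x[0]
pdf = intensity / (intensity.sum() * dx)   # normalized observation density

# Single-Gaussian ("classical-like") approximation via moment matching.
mean = (x * pdf).sum() * dx
std = np.sqrt(((x - mean) ** 2 * pdf).sum() * dx)

# Gaussian-mixture approximation: one component per interference fringe,
# centred at the local maxima of the pattern (a simple illustrative choice).
peaks = [i for i in range(1, len(x) - 1)
         if pdf[i] > pdf[i - 1] and pdf[i] > pdf[i + 1]]

print(f"moment-matched Gaussian: mean = {mean:.2e} m, std = {std:.2e} m")
print(f"mixture components (one per fringe): {len(peaks)}")
```

The moment-matched Gaussian captures only the overall spread (roughly the single-slit envelope), while the fringe-wise mixture can represent the interference sub-patterns, mirroring the Gaussian vs. mixture contrast discussed above.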
General Discussion and Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Abdallah, M.A. Advancing Decision-Making in AI Through Bayesian Inference and Probabilistic Graphical Models. Symmetry 2025, 17, 635. [Google Scholar] [CrossRef]
- Yang, Z.; Zhang, X.; Xiang, W.; Lin, X. A Novel Particle Filter Based on One-Step Smoothing for Nonlinear Systems with Random One-Step Delay and Missing Measurements. Sensors 2025, 25, 318. [Google Scholar] [CrossRef] [PubMed]
- Ali, W.; Li, Y.; Chen, Z.; Raja, M.A.Z.; Ahmed, N.; Chen, X. Application of Spherical-Radial Cubature Bayesian Filtering and Smoothing in Bearings Only Passive Target Tracking. Entropy 2019, 21, 1088. [Google Scholar] [CrossRef]
- Cassidy, M.J.; Penny, W.D. Bayesian nonstationary autoregressive models for biomedical signal analysis. IEEE Trans. Biomed. Eng. 2002, 49, 1142–1152. [Google Scholar] [CrossRef]
- Behnamfar, F.; Alajaji, F.; Linder, T. MAP decoding for multi-antenna systems with non-uniform sources: Exact pairwise error probability and applications. IEEE Trans. Commun. 2009, 57, 242–254. [Google Scholar] [CrossRef]
- Gao, Y.; Tang, X.; Li, T.; Chen, Q.; Zhang, X.; Li, S.; Lu, J. Bayesian Filtering Multi-Baseline Phase Unwrapping Method Based on a Two-Stage Programming Approach. Appl. Sci. 2020, 10, 3139. [Google Scholar] [CrossRef]
- Retkute, R.; Thurston, W.; Gilligan, C.A. Bayesian Inference for Multiple Datasets. Stats 2024, 7, 434–444. [Google Scholar] [CrossRef]
- Huang, Z.; Sarovar, M. Smoothing of Gaussian quantum dynamics for force detection. Phys. Rev. A 2018, 97, 042106. [Google Scholar] [CrossRef]
- Ma, K.; Kong, J.; Wang, Y.; Lu, X.M. Review of the Applications of Kalman Filtering in Quantum Systems. Symmetry 2022, 14, 2478. [Google Scholar] [CrossRef]
- Särkkä, S. Bayesian Filtering and Smoothing; Institute of Mathematical Statistics Textbooks; Cambridge University Press: Cambridge, UK, 2013. [Google Scholar] [CrossRef]
- Rudić, B.; Pichler-Scheder, M.; Efrosinin, D. Valid Decoding in Gaussian Mixture Models. In Proceedings of the 2024 IEEE 3rd Conference on Information Technology and Data Science (CITDS), Debrecen, Hungary, 26–28 August 2024. [Google Scholar] [CrossRef]
- Griffiths, D.J.; Schroeter, D.F. The Wave Function. In Introduction to Quantum Mechanics; Cambridge University Press: Cambridge, UK, 2018; pp. 3–24. [Google Scholar]
- Feynman, R.P.; Leighton, R.B.; Sands, M. The Feynman Lectures on Physics, Vol. III: The New Millennium Edition: Quantum Mechanics; The Feynman Lectures on Physics; Basic Books: New York, NY, USA, 2011. [Google Scholar]
- Cohen-Tannoudji, C.; Diu, B.; Laloë, F. Quantum Mechanics, 2nd ed.; Wiley-VCH: Weinheim, Germany, 2019; Volume 1–3. [Google Scholar]
- Rolleigh, R. The Double Slit Experiment and Quantum Mechanics. In Hendrix College Faculty Archives; Hendrix College: Conway, AR, USA, 2010. [Google Scholar]
- Dodonov, V.V.; Man’ko, M.A. CAMOP: Quantum Non-Stationary Systems. Phys. Scr. 2010, 82, 031001. [Google Scholar] [CrossRef]
- Koller, D.; Friedman, N. Probabilistic Graphical Models: Principles and Techniques; Adaptive Computation and Machine Learning; The MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
- Rabiner, L.R. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE 1989, 77, 257–286. [Google Scholar] [CrossRef]
- Cappé, O.; Moulines, E.; Rydén, T. Filtering and Smoothing Recursions. In Inference in Hidden Markov Models; Springer: New York, NY, USA, 2005; pp. 51–76. [Google Scholar] [CrossRef]
- Viterbi, A.J. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans. Inf. Theory 1967, 13, 260–269. [Google Scholar] [CrossRef]
- Forney, G.D. The Viterbi algorithm. Proc. IEEE 1973, 61, 268–278. [Google Scholar] [CrossRef]
- Zegeye, W.K.; Dean, R.A.; Moazzami, F. Multi-Layer Hidden Markov Model Based Intrusion Detection System. Mach. Learn. Knowl. Extr. 2019, 1, 265–286. [Google Scholar] [CrossRef]
- Rudić, B.; Pichler-Scheder, M.; Efrosinin, D.; Putz, V.; Schimbäck, E.; Kastl, C.; Auer, W. Discrete- and Continuous-State Trajectory Decoders for Positioning in Wireless Networks. IEEE Trans. Instrum. Meas. 2020, 69, 6016–6029. [Google Scholar] [CrossRef]
- Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188. [Google Scholar] [CrossRef]
- Kotecha, J.; Djuric, P. Gaussian sum particle filtering. IEEE Trans. Signal Process. 2003, 51, 2602–2612. [Google Scholar] [CrossRef]
- Godsill, S.; Doucet, A.; West, M. Maximum a Posteriori Sequence Estimation Using Monte Carlo Particle Filters. Ann. Inst. Stat. Math. 2001, 53, 82–96. [Google Scholar] [CrossRef]
- Godsill, S.; Doucet, A.; West, M. Monte Carlo Smoothing for Nonlinear Time Series. J. Am. Stat. Assoc. 2004, 99, 156–168. [Google Scholar] [CrossRef]
- Mejari, M.; Piga, D. Maximum A Posteriori Estimation of Linear Time-Invariant State-Space Models via Efficient Monte-Carlo Sampling. ASME Lett. Dyn. Syst. Control 2021, 2, 011008. [Google Scholar] [CrossRef]
- Klaas, M.; Briers, M.; de Freitas, N.; Doucet, A.; Maskell, S.; Lang, D. Fast particle smoothing: If I had a million particles. In Proceedings of the 23rd International Conference on Machine Learning (ICML ’06), New York, NY, USA, 25–29 June 2006; pp. 481–488. [Google Scholar] [CrossRef]
- Doucet, A.; Johansen, A.M. A Tutorial on Particle Filtering and Smoothing: Fifteen Years Later. In The Oxford Handbook of Nonlinear Filtering; Crisan, D., Rozovsky, B., Eds.; Oxford University Press: Oxford, UK, 2011; pp. 656–704. [Google Scholar]
- Daum, F.E.; Huang, J. Curse of dimensionality and particle filters. In Proceedings of the 2003 IEEE Aerospace Conference Proceedings (Cat. No. 03TH8652), Big Sky, MT, USA, 8–15 March 2003; Volume 4, pp. 1979–1993. [Google Scholar]
- Lolla, T.; Lermusiaux, P.F.J. A Gaussian Mixture Model Smoother for Continuous Nonlinear Stochastic Dynamical Systems: Theory and Scheme. Mon. Weather Rev. 2017, 145, 2743–2761. [Google Scholar] [CrossRef]
- van Leeuwen, P.J.; Künsch, H.R.; Nerger, L.; Potthast, R.; Reich, S. Particle filters for high-dimensional geoscience applications: A review. Q. J. R. Meteorol. Soc. 2019, 145, 2335–2365. [Google Scholar] [CrossRef]
- Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. Trans. ASME J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef]
- Smith, G.L.; Schmidt, S.F.; McGee, L.A. Application of Statistical Filter Theory to the Optimal Estimation of Position and Velocity on Board a Circumlunar Vehicle; NASA Technical Report; National Aeronautics and Space Administration: Washington, DC, USA, 1962.
- Julier, S.J.; Uhlmann, J.K. New extension of the Kalman filter to nonlinear systems. In Proceedings of the Signal Processing, Sensor Fusion, and Target Recognition VI, Orlando, FL, USA, 21–25 April 1997; SPIE: Bellingham, WA, USA, 1997; Volume 3068, pp. 182–193. [Google Scholar] [CrossRef]
- Rauch, H.E.; Tung, F.; Striebel, C.T. Maximum likelihood estimates of linear dynamic systems. AIAA J. 1965, 3, 1445–1450. [Google Scholar] [CrossRef]
- Särkkä, S. Unscented Rauch–Tung–Striebel Smoother. IEEE Trans. Autom. Control 2008, 53, 845–849. [Google Scholar] [CrossRef]
- Fraser, D.C.; Potter, J.E. The optimum linear smoother as a combination of two optimum linear filters. IEEE Trans. Autom. Control 1969, 14, 387–390. [Google Scholar] [CrossRef]
- Cosme, E.; Verron, J.; Brasseur, P.; Blum, J.; Auroux, D. Smoothing Problems in a Bayesian Framework and their Linear Gaussian Solutions. Mon. Weather Rev. 2012, 140, 683–695. [Google Scholar] [CrossRef]
- Ainsleigh, P.L. Theory of Continuous-State Hidden Markov Models and Hidden Gauss-Markov Models; Technical Report 11274; Naval Undersea Warfare Center: Newport, RI, USA, 2001. [Google Scholar]
- Ainsleigh, P.L.; Kehtarnavaz, N.; Streit, R.L. Hidden Gauss-Markov models for signal classification. IEEE Trans. Signal Process. 2002, 50, 1355–1367. [Google Scholar] [CrossRef]
- Turin, W. Continuous State HMM. In Performance Analysis and Modeling of Digital Transmission Systems; Springer: Boston, MA, USA, 2004; Chapter 7; pp. 295–340. [Google Scholar] [CrossRef]
- Ortiz, J.; Evans, T.; Davison, A.J. A visual introduction to Gaussian Belief Propagation. arXiv 2021, arXiv:2107.02308. [Google Scholar]
- Sorenson, H.W.; Alspach, D.L. Recursive Bayesian Estimation using Gaussian Sums. Automatica 1971, 7, 465–479. [Google Scholar] [CrossRef]
- Alspach, D.L.; Sorenson, H.W. Nonlinear Bayesian estimation using Gaussian sum approximations. IEEE Trans. Autom. Control 1972, 17, 439–448. [Google Scholar] [CrossRef]
- Anderson, B.D.; Moore, J.B. Gaussian Sum Estimators. In Optimal Filtering; Kailath, T., Ed.; Information and System Sciences Series; Prentice-Hall: Saddle River, NJ, USA, 1979; pp. 211–222. [Google Scholar]
- Terejanu, G.; Singla, P.; Singh, T.; Scott, P.D. Adaptive Gaussian Sum Filter for Nonlinear Bayesian Estimation. IEEE Trans. Autom. Control 2011, 56, 2151–2156. [Google Scholar] [CrossRef]
- Horwood, J.T.; Aragon, N.D.; Poore, A.B. Gaussian Sum Filters for Space Surveillance: Theory and Simulations. J. Guid. Control. Dyn. 2011, 34, 1839–1851. [Google Scholar] [CrossRef]
- Šimandl, M.; Královec, J. Filtering, Prediction and Smoothing with Gaussian Sum Representation. IFAC Proc. Vol. 2000, 33, 1157–1162. [Google Scholar] [CrossRef]
- Kitagawa, G. The two-filter formula for smoothing and an implementation of the Gaussian-sum smoother. Ann. Inst. Stat. Math. 1994, 46, 605–623. [Google Scholar] [CrossRef]
- Kitagawa, G. Revisiting the Two-Filter Formula for Smoothing for State-Space Models. arXiv 2023, arXiv:2307.03428. [Google Scholar]
- Vo, B.N.; Vo, B.T.; Mahler, R.P.S. Closed-Form Solutions to Forward–Backward Smoothing. IEEE Trans. Signal Process. 2012, 60, 2–17. [Google Scholar] [CrossRef]
- Balenzuela, M.P.; Dahlin, J.; Bartlett, N.; Wills, A.G.; Renton, C.; Ninness, B. Accurate Gaussian Mixture Model Smoothing using a Two-Filter Approach. In Proceedings of the 2018 IEEE Conference on Decision and Control (CDC), Miami, FL, USA, 17–19 December 2018; pp. 694–699. [Google Scholar] [CrossRef]
- Balenzuela, M.P.; Wills, A.G.; Renton, C.; Ninness, B. A new smoothing algorithm for jump Markov linear systems. Automatica 2022, 140, 110218. [Google Scholar] [CrossRef]
- Rudić, B.; Sturm, V.; Efrosinin, D. On the Analytic Solution to Recursive Smoothing in Mixture Models. In Proceedings of the 2024 58th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 27–30 October 2024; pp. 1301–1305. [Google Scholar] [CrossRef]
- Rudić, B.; Sturm, V.; Efrosinin, D. Recursive Bayesian Smoothing with Gaussian Sums. TechRxiv 2025. preprint. [Google Scholar] [CrossRef]
- Cedeño, A.L.; González, R.A.; Godoy, B.I.; Carvajal, R.; Agüero, J.C. On Filtering and Smoothing Algorithms for Linear State-Space Models Having Quantized Output Data. Mathematics 2023, 11, 1327. [Google Scholar] [CrossRef]
- Rudić, B.; Pichler-Scheder, M.; Efrosinin, D. Maximum A Posteriori Predecessors in State Observation Models. In Proceedings of the 23rd International Conference on Information Technologies and Mathematical Modelling (ITMM’2024), Karshi, Uzbekistan, 20–26 October 2024; pp. 391–395. [Google Scholar]
- Rudić, B.; Sturm, V.; Efrosinin, D. On Recursive Marginal and MAP Inference in State Observation Models. In Information Technologies and Mathematical Modelling. Queueing Theory and Related Fields; Dudin, A., Nazarov, A., Moiseev, A., Eds.; Springer: Cham, Switzerland, 2025; pp. 85–97. [Google Scholar] [CrossRef]
- Hennig, C. Methods for merging Gaussian mixture components. Adv. Data Anal. Classif. 2010, 4, 3–34. [Google Scholar] [CrossRef]
- Wills, A.G.; Hendriks, J.; Renton, C.; Ninness, B. A Bayesian Filtering Algorithm for Gaussian Mixture Models. arXiv 2017. [Google Scholar] [CrossRef]
- Crouse, D.F.; Willett, P.; Pattipati, K.; Svensson, L. A look at Gaussian mixture reduction algorithms. In Proceedings of the 14th International Conference on Information Fusion, Chicago, IL, USA, 5–8 July 2011; pp. 1–8. [Google Scholar]
- Ardeshiri, T.; Granström, K.; Özkan, E.; Orguner, U. Greedy Reduction Algorithms for Mixtures of Exponential Family. IEEE Signal Process. Lett. 2015, 22, 676–680. [Google Scholar] [CrossRef]
- Einicke, G.A. Smoothing, Filtering and Prediction: Estimating the Past, Present and Future, 2nd ed.; Prime Publishing: Northbrook, IL, USA, 2019. [Google Scholar]
- Carreira-Perpiñán, M.Á. Mode-finding for mixtures of Gaussian distributions. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1318–1323. [Google Scholar] [CrossRef]
- Carreira-Perpiñán, M.Á. A review of mean-shift algorithms for clustering. arXiv 2015, arXiv:1503.00687. [Google Scholar]
- Pulkkinen, S.; Mäkelä, M.M.; Karmitsa, N. A Continuation Approach to Mode-Finding of Multivariate Gaussian Mixtures and Kernel Density Estimates. J. Glob. Optim. 2013, 56, 459–487. [Google Scholar] [CrossRef]
- Olson, E.; Agarwal, P. Inference on Networks of Mixtures for Robust Robot Mapping. In Robotics: Science and Systems VIII; The MIT Press: Cambridge, MA, USA, 2013; pp. 313–320. [Google Scholar] [CrossRef]
- Pfeifer, T.; Lange, S.; Protzel, P. Advancing Mixture Models for Least Squares Optimization. IEEE Robot. Autom. Lett. 2021, 6, 3941–3948. [Google Scholar] [CrossRef]
- Luo, B.J.; Francis, L.; Rodríguez-Fajardo, V.; Galvez, E.J.; Khoshnoud, F. Young’s double-slit interference demonstration with single photons. Am. J. Phys. 2024, 92, 308–316. [Google Scholar] [CrossRef]
[Summary table: prediction, update (correction), smoothing, and two-filter smoothing recursions in general state observation models.]
[Summary table: Gaussian-model recursions (prediction, update, smoothing), with rows for the graph, mean, covariance, and gain.]
[Summary table: Gaussian-mixture-model recursions (prediction, update, two-filter smoothing), with rows for the graph, weights, means, covariances, indexing, re-indexing, forecasts, forecast covariances, and gains.]
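The mean, covariance, and gain rows of the Gaussian-model table correspond to the standard linear-Gaussian recursions, written here in generic textbook notation [10,34,37]; the paper's exact parameterization may differ.

```latex
\begin{align*}
\text{prediction:}\quad & \mu_{k|k-1} = A\,\mu_{k-1|k-1}, &
  P_{k|k-1} &= A\,P_{k-1|k-1}A^{\top} + Q,\\
\text{update:}\quad & K_k = P_{k|k-1}C^{\top}\bigl(C P_{k|k-1} C^{\top} + R\bigr)^{-1}, &
  \mu_{k|k} &= \mu_{k|k-1} + K_k\bigl(y_k - C\,\mu_{k|k-1}\bigr),\\
  & P_{k|k} = \bigl(I - K_k C\bigr)P_{k|k-1}, & &\\
\text{smoothing:}\quad & G_k = P_{k|k}A^{\top}P_{k+1|k}^{-1}, &
  \mu_{k|K} &= \mu_{k|k} + G_k\bigl(\mu_{k+1|K} - \mu_{k+1|k}\bigr),\\
  & P_{k|K} = P_{k|k} + G_k\bigl(P_{k+1|K} - P_{k+1|k}\bigr)G_k^{\top}. & &
\end{align*}
```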