Article

Uncertainty-Aware Multimodal Fusion and Bayesian Decision-Making for DSS

by
Vesna Antoska Knights
1,*,
Marija Prchkovska
2,
Luka Krašnjak
3 and
Jasenka Gajdoš Kljusurić
4
1
Faculty of Technology and Technical Sciences, University “St. Kliment Ohridski”—Bitola, 1400 Veles, North Macedonia
2
Faculty of Computer Science, Goce Delcev University—Stip, 2000 Štip, North Macedonia
3
Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, 10000 Zagreb, Croatia
4
Faculty of Food Technology and Biotechnology, University of Zagreb, Pierottijeva 6, 10000 Zagreb, Croatia
*
Author to whom correspondence should be addressed.
AppliedMath 2026, 6(1), 16; https://doi.org/10.3390/appliedmath6010016
Submission received: 12 December 2025 / Revised: 3 January 2026 / Accepted: 7 January 2026 / Published: 20 January 2026
(This article belongs to the Section Probabilistic & Statistical Mathematics)

Abstract

Uncertainty-aware decision-making increasingly relies on multimodal sensing pipelines that must fuse correlated measurements, propagate uncertainty, and trigger reliable control actions. This study develops a unified mathematical framework for multimodal data fusion and Bayesian decision-making under uncertainty. The approach integrates adaptive Covariance Intersection (aCI) for correlation-robust sensor fusion, a Gaussian state–space backbone with Kalman filtering, heteroskedastic Bayesian regression with full posterior sampling via an affine-invariant MCMC sampler, and a Bayesian likelihood-ratio test (LRT) coupled to a risk-sensitive proportional–derivative (PD) control law. Theoretical guarantees are provided by bounding the state covariance under stability conditions, establishing convexity of the aCI weight optimization on the simplex, and deriving a Bayes-risk-optimal decision threshold for the LRT under symmetric Gaussian likelihoods. A proof-of-concept agro-environmental decision-support application is considered, where heterogeneous data streams (IoT soil sensors, meteorological stations, and drone-derived vegetation indices) are fused to generate early-warning alarms for crop stress and to adapt irrigation and fertilization inputs. The proposed pipeline reduces predictive variance and sharpens posterior credible intervals (up to 34% narrower 95% intervals and 44% lower NLL/Brier score under heteroskedastic modeling), while a Bayesian uncertainty-aware controller achieves 14.2% lower water usage and 35.5% fewer false stress alarms compared to a rule-based strategy. The framework is mathematically grounded yet domain-independent, providing a probabilistic pipeline that propagates uncertainty from raw multimodal data to operational control actions, and can be transferred beyond agriculture to robotics, signal processing, and environmental monitoring applications.

1. Introduction

Uncertainty quantification and probabilistic modeling have become fundamental aspects of mathematical systems analysis, particularly in domains with data coming from multiple heterogeneous sources. Traditional deterministic formulations are often insufficient when correlations between measurement errors are unknown or when the uncertainty in the data varies dynamically. Recent advances propose multimodal uncertainty quantification frameworks designed to address these challenges [1]. Applications across engineering and computational domains increasingly rely on multimodal sensing, parameter estimation, and robust decision-making under uncertainty [2]. In robotics, it has been demonstrated that symmetry-based modeling and nonlinear control contribute to improving stability and performance in uncertain environments [3]. Safety-oriented robotic systems further illustrate the importance of uncertainty-aware modeling for enhancing operational reliability and risk mitigation [4,5]. From a broader mathematical perspective, optimal-transport-based Bayesian inference methods provide rigorous tools for propagating uncertainty through complex dynamical systems [6]. Collectively, these studies highlight the growing need for unified Bayesian frameworks capable of fusing correlated multimodal observations while preserving mathematically principled uncertainty propagation.
Bayesian inference, multimodal data fusion, and decision theory have therefore been increasingly combined into unified probabilistic frameworks [6,7,8,9]. The state–space model provides a foundational structure for describing stochastic dynamical systems, where latent variables evolve over time while being observed indirectly through noisy measurements [10]. Classical estimation techniques, such as the Kalman filter, provide optimal linear–Gaussian solutions for such models [11]; however, their performance degrades when the correlation between sensors is unknown. To overcome this limitation, the Covariance Intersection (CI) method and its adaptive extension (aCI) have been proposed to fuse estimates conservatively without requiring explicit cross-correlation information [12,13,14,15]. These approaches rely on convex optimization over the space of positive-definite matrices, and guarantee bounded posterior covariance even under uncertainty [12,13,14].
Beyond state estimation, predictive modeling must also account for heteroskedasticity, i.e., data-dependent variance in the observation noise. Heteroskedastic Bayesian regression captures this behavior through probabilistic modeling of the conditional variance and has been widely adopted in recent stochastic modeling and machine learning research [16,17]. Parameter inference is typically achieved through Markov Chain Monte Carlo (MCMC) techniques, including affine-invariant ensemble samplers that ensure convergence in high-dimensional posterior spaces [18,19,20]. MCMC not only yields point estimates but also complete posterior distributions, providing interpretable uncertainty bounds essential for transparent decision-making.
Within the decision-theoretic layer, Bayesian Likelihood-Ratio Tests (LRT) translate posterior probabilities into optimal binary decisions by minimizing the Bayes risk, a criterion balancing false alarms and missed detections [21,22,23,24]. Integrating this with a risk-sensitive control law allows for dynamic systems to adapt their control inputs according to posterior mean and variance, providing stability even under uncertain operating conditions [25].
This paper presents a mathematically rigorous hybrid Bayesian framework that unifies the following:
(i) adaptive covariance-based fusion;
(ii) heteroskedastic Bayesian regression with full posterior sampling;
(iii) Bayesian decision theory;
(iv) risk-sensitive feedback control.
While demonstrated on an agricultural decision-support application, the proposed framework is domain-independent and applicable to any stochastic system requiring uncertainty-aware multimodal integration, including robotics, signal processing, and environmental modeling [26,27,28].
The contribution is a probabilistic fusion-to-decision pipeline that performs the following:
(i) addresses unknown inter-sensor correlations via adaptive Covariance Intersection;
(ii) models time-varying noise through heteroskedastic Bayesian regression;
(iii) performs full Bayesian inference with emcee MCMC;
(iv) optimizes decision thresholds by posterior predictive Bayes risk.
Unlike prior approaches that assume sensor independence, rely on homoskedastic noise, or decouple prediction from decision-making, the proposed framework propagates uncertainty consistently from raw multimodal data to operational actions.
Building on our previous work on ML-based crop recommendation, where six supervised classifiers achieved up to 99% accuracy when classifying 11 crop types from NPK, pH, temperature, humidity, and rainfall data [29], the present study shifts the focus from deterministic crop-type prediction to probabilistic stress assessment and uncertainty-aware irrigation control.
The main contributions of this work are as follows:
An end-to-end uncertainty-aware pipeline that propagates uncertainty from multimodal sensor fusion to probabilistic decision-making and closed-loop control.
The integration of adaptive Covariance Intersection with heteroskedastic Bayesian state–space modeling to ensure robustness under unknown inter-sensor correlations.
A Bayesian decision layer that explicitly accounts for class imbalance and asymmetric risk through Bayes-risk-optimal thresholding.
A closed-loop control strategy that leverages posterior predictive uncertainty rather than point estimates, enabling risk-sensitive actuation.

2. Materials and Methods

2.1. Multimodal Data and Latent State

The dataset contained 1100 field-level observations with 12 predictors and 4 heterogeneous sources: IoT soil sensors measuring macronutrients in soil (N, P, and K) and pH; meteorological stations recording temperature (T), humidity (H), and rainfall (Rf); and drone-borne remote-sensing imagery (RGB, multispectral, thermal) from which vegetation indices such as NDVI, SAVI, and canopy-temperature index (CTI) are derived.
The agro-climatic state vector at a discrete time step k is
x_k = \left[\, N,\ P,\ K,\ pH,\ T,\ H,\ Rf,\ NDVI,\ SAVI,\ CTI \,\right]_k^{T}
All state variables are assumed continuous and jointly distributed with finite second moments, i.e., \mathbb{E}\!\left[\lVert x_k \rVert^{2}\right] < \infty.

2.2. State Dynamics (Forecasting Backbone)

Temporal evolution follows a stochastic linear state–space model:
x_{k+1} = F_k x_k + B_k u_k + \omega_k, \qquad \omega_k \sim \mathcal{N}\!\left(0,\ Q_k\right)
where u_k denotes control or intervention inputs (e.g., irrigation/fertilization).
Matrices F_k ∈ ℝ^{n×n} and B_k ∈ ℝ^{n×r} describe the deterministic transition and control dynamics.
For stability analysis, the spectral radius is assumed to satisfy ρ(F_k) < 1, ensuring bounded propagation of the prior covariance.
Prediction (prior) moments are as follows:
x_{k|k-1} = F_k\, x_{k-1|k-1} + B_k u_{k-1},
P_{k|k-1} = F_k\, P_{k-1|k-1} F_k^{T} + Q_k
Hypothesis 1.
If Q_k ⪰ 0 and ρ(F_k) < 1, then P_{k|k-1} is a uniformly bounded positive-definite sequence.
Proof. 
This follows directly from Lyapunov stability theory: the condition ρ(F_k) < 1 ensures the existence of a unique positive-definite solution to the discrete Lyapunov equation, which implies that P_{k|k-1} converges to a bounded positive-definite matrix. □
The linear–Gaussian state–space formulation in Equation (2) is adopted as an analytically tractable baseline. While agro-environmental processes may exhibit nonlinear and non-Gaussian behavior, the objective of this work is not to claim strict physical linearity, but to establish a mathematically grounded uncertainty-propagation pipeline in which conservative covariance bounds and decision-theoretic guarantees can be rigorously analyzed.
For nonlinear models, analogous boundedness results follow under standard assumptions of local Jacobian stability (Extended Kalman Filter (EKF)/Unscented Kalman Filter (UKF)) or bounded second moments of the state distribution (ensemble and particle-based filters), ensuring that posterior covariance remains finite and suitable for conservative fusion. In such settings, the forecasting and update steps in Equations (2)–(11) can be replaced by locally linearized or sampling-based estimators, yielding modality-specific posterior summaries in the form of mean and covariance estimates.
The adaptive Covariance Intersection (aCI) fusion layer (Section 2.4) operates directly on Gaussian posterior summaries and does not require knowledge of cross-covariances, thereby preserving conservative covariance bounds even when the underlying dynamics are nonlinear or non-Gaussian.

2.3. Observation Model and Stacking

Each modality i = 1, …, M observes a linear projection with Gaussian noise:
z_k^{(i)} = H_k^{(i)} x_k + v_k^{(i)}, \qquad v_k^{(i)} \sim \mathcal{N}\!\left(0,\ R_k^{(i)}\right)
where z_k^{(i)} is the measurement from sensor i, H_k^{(i)} is the measurement matrix for sensor i, v_k^{(i)} represents measurement noise (i.e., uncertainty or random error in the soil sensor readings), and R_k^{(i)} ≻ 0 is the noise covariance for modality i.
For fusion, measurements from M sensors are stacked as:
z_k = \begin{bmatrix} z_k^{(1)} \\ \vdots \\ z_k^{(M)} \end{bmatrix}, \qquad H_k = \begin{bmatrix} H_k^{(1)} \\ \vdots \\ H_k^{(M)} \end{bmatrix}, \qquad R_k = \operatorname{blkdiag}\!\left(R_k^{(1)}, \ldots, R_k^{(M)}\right)
This enables the system to process heterogeneous sensor data within a unified fusion framework. In robustness experiments, measurement noise between heterogeneous sensors may exhibit non-zero cross-correlations. Let v k i and v k j denote measurement noises from modalities i and j . Their cross-sensor correlation coefficient is defined as
\rho_{ij} = \frac{\operatorname{Cov}\!\left(v_k^{(i)}, v_k^{(j)}\right)}{\sqrt{\operatorname{Var}\!\left(v_k^{(i)}\right)\operatorname{Var}\!\left(v_k^{(j)}\right)}}, \qquad \rho_{ij} \in [-1, 1]
When two modalities are analyzed jointly, their joint noise covariance takes the form:
R_k^{(ij)} = \begin{bmatrix} \sigma_i^{2} & \rho_{ij}\sigma_i\sigma_j \\ \rho_{ij}\sigma_i\sigma_j & \sigma_j^{2} \end{bmatrix}
The scalar parameter ρ i j quantifies the extent of dependence between sensor noise sources (e.g., soil sensors influenced by the same irrigation cycle, or drone indices affected by the same illumination changes). Note that the stacked covariance R k in Equation (5) assumes block independence across modalities. The robustness experiments relax this assumption by injecting controlled cross-correlations ρ i j into R k i j .
For Kalman fusion, standard practice assumes ρ = 0 (independence), while adaptive Covariance Intersection (aCI) performs correlation-agnostic fusion without requiring knowledge of ρ. Robustness analysis, therefore, varies ρ ∈ {0.0, 0.3, 0.6, 0.9} to quantify how unknown correlations affect posterior uncertainty.
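For illustration, the following minimal Python sketch (not part of the reported implementation; the sensor standard deviations are placeholder values) constructs the joint noise covariance R_k^{(ij)} above for the correlation levels used in the robustness study and checks the injected correlation empirically:
```python
import numpy as np

def correlated_noise_cov(sigma_i, sigma_j, rho):
    """Joint measurement-noise covariance for two sensor channels
    with standard deviations sigma_i, sigma_j and cross-correlation rho."""
    return np.array([[sigma_i ** 2, rho * sigma_i * sigma_j],
                     [rho * sigma_i * sigma_j, sigma_j ** 2]])

rng = np.random.default_rng(0)
for rho in (0.0, 0.3, 0.6, 0.9):
    R_true = correlated_noise_cov(1.0, 1.5, rho)                  # ground-truth noise covariance
    v = rng.multivariate_normal(np.zeros(2), R_true, size=1000)   # correlated noise samples
    print(f"rho = {rho:.1f}, empirical correlation = {np.corrcoef(v.T)[0, 1]:.2f}")
```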
Kalman update:
S_k = H_k P_{k|k-1} H_k^{T} + R_k,
K_k = P_{k|k-1} H_k^{T} S_k^{-1},
x_{k|k} = x_{k|k-1} + K_k\!\left(z_k - H_k\, x_{k|k-1}\right),
P_{k|k} = \left(I - K_k H_k\right) P_{k|k-1}
where S_k is the innovation covariance, K_k is the Kalman gain, x_{k|k} is the posterior state estimate, and P_{k|k} is the posterior covariance.
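As a concrete illustration of the forecasting and update steps, a minimal NumPy sketch of the Kalman prediction and measurement update is given below (illustrative only; the matrices F, B, Q, H, R are assumed to be supplied by the application):
```python
import numpy as np

def kalman_predict(x, P, F, B, u, Q):
    """Prediction (prior) moments of the linear-Gaussian state-space model."""
    x_pred = F @ x + B @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kalman_update(x_pred, P_pred, z, H, R):
    """Measurement update with stacked observation z, matrix H, and noise covariance R."""
    S = H @ P_pred @ H.T + R                          # innovation covariance S_k
    K = P_pred @ H.T @ np.linalg.inv(S)               # Kalman gain K_k
    x_post = x_pred + K @ (z - H @ x_pred)            # posterior mean x_{k|k}
    P_post = (np.eye(len(x_pred)) - K @ H) @ P_pred   # posterior covariance P_{k|k}
    return x_post, P_post
```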
Lemma 1.
Given full-rank H_k and R_k ≻ 0, the posterior covariance P_{k|k} is positive-definite and satisfies P_{k|k} ⪯ P_{k|k-1}.

2.4. Adaptive Covariance Intersection (aCI) Under Unknown Correlations

When inter-sensor correlations are unknown, a correlation-agnostic fusion is achieved via adaptive Covariance Intersection (aCI).
Given modality-specific posteriors
\left\{ \hat{x}_{k|k}^{(i)},\ P_{k|k}^{(i)} \right\}_{i=1}^{M}
P_{aCI}^{-1} = \sum_{i=1}^{M} \omega_i \left(P_{k|k}^{(i)}\right)^{-1}
x_{aCI} = P_{aCI} \sum_{i=1}^{M} \omega_i \left(P_{k|k}^{(i)}\right)^{-1} \hat{x}_{k|k}^{(i)}
\sum_{i=1}^{M} \omega_i = 1, \qquad \omega_i \geq 0
Weights ω are adapted at each time step by
\omega^{*} = \underset{\omega \in \Delta^{M-1}}{\operatorname{arg\,min}}\ \det P_{aCI}(\omega)
which maximizes the fused information under correlation uncertainty; \Delta^{M-1} denotes the probability simplex.
Theorem 1.
The optimization in Equation (16) is convex in ω because \log\det P_{aCI}(\omega) = -\log\det\!\left(\sum_{i=1}^{M} \omega_i A_i^{-1}\right) is convex for positive-definite A_i. Hence, a unique global minimizer exists.
Proof. 
Let A_i = P_{k|k}^{(i)} ≻ 0 and define
X(\omega) = \sum_{i=1}^{M} \omega_i A_i^{-1}, \qquad \omega \in \Delta^{M-1}
The map ω ↦ X(ω) is affine on the simplex \Delta^{M-1}, and X(ω) is positive definite for all ω > 0 (componentwise). The aCI covariance is P_{aCI}(\omega) = X(\omega)^{-1}.
On the cone of positive-definite matrices, the function f(X) = -\log\det X is convex. Thus, the composite function ω ↦ f(X(ω)) = -\log\det X(\omega) is convex in ω.
Moreover,
\det P_{aCI}(\omega) = \det\!\left(X(\omega)^{-1}\right) = \left(\det X(\omega)\right)^{-1},
so minimizing \det P_{aCI}(\omega) is equivalent to minimizing -\log\det X(\omega). Hence, the optimization in Equation (16) is convex on \Delta^{M-1}, which implies the existence of a unique global minimizer. □
The convexity of the log-determinant objective and the existence of a unique minimizer follow from standard results in convex optimization [30], while covariance intersection under unknown correlations is well-established in multisensor fusion [31,32]. In practice, we solve Equation (16) on the simplex using a projected-gradient (or exponentiated-gradient) method with backtracking line search. Because the objective is convex and the feasible set is compact, initialization affects only convergence speed, not the final solution (up to numerical tolerance). Empirically, we repeat the optimization with multiple initializations (uniform ω_i = 1/M, random Dirichlet draws, or warm starts ω_k ← ω_{k−1}); all runs converge to the same ω* and produce the same fused covariances (max|Δω| < 10^{−3}), including in higher-dimensional and strongly correlated settings.
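A minimal sketch of how the weight optimization in Equation (16) can be solved numerically is given below; it uses SciPy's SLSQP solver on the simplex rather than the projected-gradient scheme described above, which is an implementation choice for illustration, not the exact routine used in the reported experiments:
```python
import numpy as np
from scipy.optimize import minimize

def aci_fuse(means, covs):
    """Adaptive Covariance Intersection: minimize det(P_aCI(w)) over the probability simplex.
    means: list of (n,) posterior means; covs: list of (n, n) posterior covariances."""
    infos = [np.linalg.inv(P) for P in covs]                    # information matrices A_i^{-1}

    def neg_logdet(w):                                          # convex objective -log det(sum_i w_i A_i^{-1})
        X = sum(wi * Ii for wi, Ii in zip(w, infos))
        _, logdet = np.linalg.slogdet(X)
        return -logdet

    M = len(covs)
    w0 = np.full(M, 1.0 / M)                                    # uniform start; minimizer is unique anyway
    res = minimize(neg_logdet, w0, method="SLSQP",
                   bounds=[(1e-6, 1.0)] * M,
                   constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}])
    w = res.x
    X = sum(wi * Ii for wi, Ii in zip(w, infos))
    P_fused = np.linalg.inv(X)                                  # P_aCI
    x_fused = P_fused @ sum(wi * Ii @ m for wi, Ii, m in zip(w, infos, means))
    return x_fused, P_fused, w

# usage with two illustrative 2-D single-sensor posteriors
P1 = np.array([[2.0, 0.3], [0.3, 1.0]])
P2 = np.array([[1.0, -0.2], [-0.2, 3.0]])
x_f, P_f, w = aci_fuse([np.array([1.0, 0.0]), np.array([0.8, 0.2])], [P1, P2])
```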

2.5. Heteroskedastic Bayesian Regression (Response Layer)

Let y_k ∈ ℝ denote a continuous response variable (e.g., crop yield index) predicted from the fused latent state x_k^{*} = \hat{x}_{aCI,k}. In this study, y_k represents a normalized yield–stress index summarizing crop performance under agro-climatic stress conditions (higher values indicate increased crop stress).
The observation model is
y_k = \beta^{T} x_k^{*} + \epsilon_k, \qquad \epsilon_k \sim \mathcal{N}\!\left(0,\ \sigma_k^{2}\right)
with heteroskedastic variance parameterized as:
\log \sigma_k^{2} = \gamma^{T} x_k^{*} + \eta_k, \qquad \eta_k \sim \mathcal{N}\!\left(0,\ \tau^{2}\right)
Priors are specified as weakly informative:
\beta \sim \mathcal{N}\!\left(0,\ \tau_{\beta}^{2} I\right), \qquad \gamma \sim \mathcal{N}\!\left(0,\ \tau_{\gamma}^{2} I\right), \qquad \tau^{2} \sim \Gamma\!\left(a_{\tau}, b_{\tau}\right)
which can be collected in the parameter vector \theta = \left(\beta_1, \ldots, \beta_p,\ \gamma_1, \ldots, \gamma_p,\ \log\tau\right).
Remark 1.
Equation (20) introduces log-variance regression, ensuring positivity of σ_k² and yielding an analytically tractable posterior.
In subsequent analyses, we report posterior summaries for the most influential regression coefficients, defined as the β- and γ-parameters associated with key agro-climatic drivers (temperature, humidity, CTI, and selected soil nutrients) that exhibit large posterior magnitudes and relatively tight credible intervals.

2.6. Log-Likelihood, Log-Prior, and Log-Posterior

Given the dataset D = \left\{ \left(\hat{x}_k^{*}, y_k\right) \right\}_{k=1}^{n}, the conditional log-likelihood is:
\log p\!\left(y \mid \beta, \gamma, \tau, D\right) = -\frac{1}{2} \sum_{k=1}^{n} \left[ \log\!\left(2\pi \sigma_k^{2}\right) + \frac{\left(y_k - \beta^{T} x_k^{*}\right)^{2}}{\sigma_k^{2}} \right], \qquad \sigma_k^{2} = \exp\!\left(\gamma^{T} x_k^{*}\right) + \tau^{2}
The joint log-prior is:
\log p\!\left(\beta, \gamma, \tau\right) = -\frac{1}{2\tau_{\beta}^{2}} \lVert \beta \rVert_2^{2} - \frac{1}{2\tau_{\gamma}^{2}} \lVert \gamma \rVert_2^{2} + \left(a_{\tau} - 1\right)\log\tau^{2} - b_{\tau}\tau^{2} + C
Hence the unnormalized log-posterior becomes:
\log p\!\left(\beta, \gamma, \tau \mid D\right) = \log p\!\left(y \mid \beta, \gamma, \tau, D\right) + \log p\!\left(\beta, \gamma, \tau\right)
Proposition 1.
Under bounded data and finite hyperparameters, the posterior of Equation (24) is log-concave in β conditional on (γ, τ), ensuring unimodality of the conditional distribution.
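For concreteness, a minimal Python sketch of the unnormalized log-posterior in Equation (24) is shown below; the hyperparameter values (τ_β, τ_γ, a_τ, b_τ) are placeholders, and the variance parameterization follows Equation (22):
```python
import numpy as np

def log_posterior(theta, X, y, tau_beta=10.0, tau_gamma=10.0, a_tau=2.0, b_tau=1.0):
    """Unnormalized log-posterior of the heteroskedastic regression.
    theta = (beta[0:p], gamma[p:2p], log_tau); X is the (n, p) matrix of fused states x_k*."""
    n, p = X.shape
    beta, gamma, log_tau = theta[:p], theta[p:2 * p], theta[-1]
    tau2 = np.exp(2.0 * log_tau)

    mu = X @ beta
    sigma2 = np.exp(X @ gamma) + tau2                   # heteroskedastic variance, Eq. (22)
    log_lik = -0.5 * np.sum(np.log(2 * np.pi * sigma2) + (y - mu) ** 2 / sigma2)

    log_prior = (-0.5 * np.sum(beta ** 2) / tau_beta ** 2     # Gaussian prior on beta
                 - 0.5 * np.sum(gamma ** 2) / tau_gamma ** 2  # Gaussian prior on gamma
                 + (a_tau - 1.0) * np.log(tau2) - b_tau * tau2)  # Gamma prior on tau^2
    return log_lik + log_prior
```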

2.7. MCMC Inference via the Affine-Invariant Ensemble Sampler

Posterior inference for (β, γ, τ) employs the affine-invariant ensemble MCMC algorithm [17,18].
Initialization: Walkers are initialized around maximum a posteriori (MAP) estimates; typical configurations use Nw ∈ [100, 200] walkers, a burn-in phase of B ∈ [500, 2000] iterations, and a total of S ∈ [3000, 10,000] iterations (with thinning as needed). In the experiments reported in Section 3, we used a fixed configuration with Nw = 80 walkers and S = 3000 iterations.
Outputs: The sampler produces posterior draws for (β, γ, τ), corner plots of joint marginals, and standard convergence diagnostics, including the integrated autocorrelation time (IAT/IACT), the Gelman–Rubin R ^ statistic (for split chains), and the effective sample size (ESS) for each parameter [33,34].
Posterior summaries reported in the Results are computed from the retained MCMC draws as the posterior median together with the 25–75% interquartile range (IQR).
Convergence diagnostics include the integrated autocorrelation time and Gelman–Rubin statistic [30,31].
The full Bayesian inference procedure using the affine-invariant ensemble sampler is summarized in Algorithm 1.
Algorithm 1. Bayesian Inference with Affine-Invariant Ensemble MCMC
         Input:
                  Fused state inputs x k * from aCI fusion (via Equations (12)–(16))
                  Observed outputs y k
         Steps:
                1. Compute the conditional log-likelihood \log p(y \mid \beta, \gamma, \tau, D) by Equation (22)
                2. Add the log-priors \log p(\beta, \gamma, \tau) using Equation (23)
                3. Form the unnormalized log-posterior \log p(\beta, \gamma, \tau \mid D) using Equation (24)
                4.
Run affine-invariant ensemble sampler;
-
initialize walkers near MAP
-
discard burn-in
-
thin chains if necessary.
                5.
Posterior predictive sampling:
                     For each retained draw l:
                      \sigma_k^{2(l)} = \exp\!\left(\gamma^{(l)T} x_k^{*}\right) + \tau^{2}
                      y_k^{(l)} \sim \mathcal{N}\!\left(\beta^{(l)T} x_k^{*},\ \sigma_k^{2(l)}\right)
         Output:
         Posterior draws for (β, γ, τ)
         Posterior predictive samples y_k^{(l)}
         Convergence diagnostics: IACT, R̂, ESS
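A minimal sketch of how Algorithm 1 can be driven with the emcee ensemble sampler is given below; the synthetic data, the zero MAP initialization, and the thinning factor are placeholders, and log_posterior refers to the sketch in Section 2.6:
```python
import numpy as np
import emcee

# Synthetic stand-in for the fused states X (n, p) and responses y (n,);
# log_posterior(theta, X, y) is the sketch from Section 2.6.
rng = np.random.default_rng(1)
n, p = 300, 4
X = rng.normal(size=(n, p))
y = X @ np.array([0.8, -0.5, 0.3, 0.1]) + rng.normal(scale=0.3, size=n)

ndim = 2 * p + 1                                    # theta = (beta, gamma, log_tau)
nwalkers, nsteps, burn = 80, 3000, 500
p0 = 1e-3 * rng.normal(size=(nwalkers, ndim))       # walkers near a (placeholder) MAP at zero

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=(X, y))
sampler.run_mcmc(p0, nsteps, progress=True)

chain = sampler.get_chain(discard=burn, thin=5, flat=True)   # retained posterior draws
iact = sampler.get_autocorr_time(quiet=True)                 # integrated autocorrelation time
print(chain.shape, iact)
```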

2.8. Bayesian Likelihood-Ratio Test (Decision Layer)

For binary decisions (e.g., stress vs. safe), define hypotheses H 1 ,   H 0 . The Likelihood-Ratio Test (LRT) statistic is
\Lambda\!\left(z_k\right) = \frac{p\!\left(z_k \mid H_1\right)}{p\!\left(z_k \mid H_0\right)}
where H 1 denotes the hypothesis of a stressed condition (e.g., nutrient imbalance, insufficient irrigation),   H 0 represents the safe condition, and η is a predefined threshold.
Using posterior predictive draws y k (Section 2.7), compute the posterior probability of stress
\pi_k = \Pr\!\left(H_1 \mid D,\ z_k\right) = \mathbb{E}\!\left[\mathbb{1}\{\tilde{y}_k \in S\} \mid D\right]
Trigger an alarm if Λ(z_k) > η or, equivalently, if π_k > π*. The Bayes-risk-optimal threshold is
\eta^{*} = \frac{C_{10}\left(1 - \pi\right)}{C_{01}\,\pi}, \qquad \pi^{*} = \frac{C_{10}}{C_{10} + C_{01}}
with C_{10} (false-alarm cost), C_{01} (missed-detection cost), and prior π = p(H_1). The threshold π* minimizes the empirical Bayes risk over posterior predictive samples.
Theorem 2.
Given symmetric Gaussian likelihoods and linear loss, the threshold in Equation (27) minimizes the expected Bayes risk  R ( π * ) .
Proof. 
Consider the Bayes risk
R(\delta) = C_{10}\, p\!\left(\delta(z_k) = H_1 \mid H_0\right) p\!\left(H_0\right) + C_{01}\, p\!\left(\delta(z_k) = H_0 \mid H_1\right) p\!\left(H_1\right)
where δ(z_k) is a deterministic decision rule. Under a Bayesian formulation with prior π = p(H_1), the posterior odds satisfy
\frac{p\!\left(H_1 \mid z_k\right)}{p\!\left(H_0 \mid z_k\right)} = \frac{\pi}{1 - \pi}\, \Lambda\!\left(z_k\right),
where \Lambda\!\left(z_k\right) = p\!\left(z_k \mid H_1\right) / p\!\left(z_k \mid H_0\right) is the likelihood ratio. For linear loss with costs C_{10} (false alarm) and C_{01} (missed detection), it is optimal to decide H_1 whenever
C_{10}\, p\!\left(H_0 \mid z_k\right) \leq C_{01}\, p\!\left(H_1 \mid z_k\right),
which is equivalent to
\Lambda\!\left(z_k\right) \geq \eta^{*} = \frac{C_{10}\left(1 - \pi\right)}{C_{01}\,\pi},
and, in terms of posterior probability,
\pi_k = p\!\left(H_1 \mid z_k\right) \geq \pi^{*} = \frac{C_{10}}{C_{10} + C_{01}}
Under standard Bayesian decision theory, any deviation from this threshold yields higher Bayes risk, making it the optimal decision rule under the assumed model. □
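A minimal sketch of the decision layer, computing π_k from posterior predictive draws and applying the Bayes-risk-optimal threshold of Equation (27), is given below; the cost values, the stress threshold y_crit, and the synthetic draws are illustrative only:
```python
import numpy as np

def bayes_decision(pred_draws, y_crit, c_fa=1.0, c_miss=4.0):
    """Posterior stress probability and Bayes-risk-optimal decision.
    pred_draws: (n_draws, n_samples) posterior predictive draws of the yield-stress index.
    c_fa, c_miss: illustrative false-alarm and missed-detection costs (C10, C01)."""
    pi_k = np.mean(pred_draws < y_crit, axis=0)      # pi_k = Pr(y_k in S | D)
    pi_star = c_fa / (c_fa + c_miss)                 # Bayes-risk-optimal posterior threshold
    alarm = pi_k >= pi_star                          # alarm when posterior stress prob. exceeds pi*
    return pi_k, pi_star, alarm

# usage with synthetic posterior predictive draws for 10 field samples
rng = np.random.default_rng(2)
draws = rng.normal(loc=7.0, scale=1.0, size=(4000, 10))
pi_k, pi_star, alarm = bayes_decision(draws, y_crit=6.5)
print(pi_star, alarm.astype(int))
```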

2.9. Model-Averaging of Discriminative ML (Optional Late Fusion)

If multiple discriminative classifiers f m   are available, Bayesian model-averaging combines their calibrated posteriors as
p\!\left(H_1 \mid \cdot\right) = \sum_{m} \omega_m\, p_m\!\left(H_1 \mid \cdot\right), \qquad \sum_{m} \omega_m = 1
with weights proportional to the marginal likelihoods, ω_m ∝ p(D | M_m); each classifier f_m is trained on x_k^{*} and contributes its calibrated posterior p_m(H_1 | ·) to the average.
Note: Bayesian model-averaging of discriminative classifiers is presented as an optional late-fusion extension of the framework. It was not used in the experiments reported in Section 3, where the primary pipeline relies on aCI-based fusion and heteroskedastic Bayesian regression.
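A minimal sketch of such late fusion (illustrative only, since this extension was not used in the reported experiments) could look as follows:
```python
import numpy as np

def bma_stress_probability(posteriors, log_marginal_liks):
    """Late-fusion Bayesian model-averaging of calibrated stress posteriors.
    posteriors: (M, n) array of per-model P(H1 | x); log_marginal_liks: (M,) values of log p(D | M_m)."""
    w = np.exp(log_marginal_liks - np.max(log_marginal_liks))   # numerically stable normalization
    w /= w.sum()
    return w @ posteriors                                        # averaged posterior P(H1 | x)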

2.10. Control Layer–Irrigation/Fertilization

Closed-loop adjustment uses a PD law informed by the posterior predictive mean Ε y ~ k and uncertainty:
u_k = K_p\!\left(y^{*} - \mathbb{E}\!\left[\tilde{y}_k\right]\right) + K_d\, \Delta\mathbb{E}\!\left[\tilde{y}_k\right], \qquad \Delta\mathbb{E}\!\left[\tilde{y}_k\right] = \mathbb{E}\!\left[\tilde{y}_k\right] - \mathbb{E}\!\left[\tilde{y}_{k-1}\right]
Safety caps can incorporate predictive variance (risk-sensitive control).
Remark 2.
Incorporating predictive variance in the control gain yields a risk-sensitive controller ensuring stability under stochastic perturbations.
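A minimal sketch of the risk-sensitive PD law is given below; the variance-dependent cap is one possible realization of the safety mechanism mentioned above, and u_max and risk_lambda are illustrative parameters rather than values used in the study:
```python
import numpy as np

def pd_control(y_target, mean_k, mean_km1, var_k,
               Kp=0.85, Kd=0.12, u_max=5.0, risk_lambda=1.0):
    """PD law on the posterior predictive mean with a variance-dependent safety cap.
    Kp, Kd follow the gains reported in Section 3.8; u_max, risk_lambda are illustrative."""
    u = Kp * (y_target - mean_k) + Kd * (mean_k - mean_km1)   # PD term on posterior means
    cap = u_max / (1.0 + risk_lambda * np.sqrt(var_k))        # shrink actuation when uncertainty is large
    return float(np.clip(u, -cap, cap))
```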

2.11. Time Synchronization and Data Store

Since the data streams originated from distributed IoT devices, timestamps were synchronized using network protocols:
t_{global} = t_{local} + \Delta t, \qquad \Delta t \sim \mathcal{N}\!\left(0,\ \sigma_t^{2}\right)
We store fused trajectories and posteriors
\mathcal{T}_k = \left\{ x_{0:k|0:k},\ P_{0:k|0:k},\ \Theta_{0:k}^{MCMC},\ c_k \right\}
where \Theta_{0:k}^{MCMC} denotes the retained MCMC samples for continual learning.

2.12. Validation, Calibration, and Decision Utility

Model performance is evaluated using stratified K-fold cross-validation, where each metric is computed on every fold and then averaged:
\bar{M} = \frac{1}{K} \sum_{i=1}^{K} M^{(i)}
To assess predictive calibration and decision quality, three standard probabilistic metrics are reported.
Negative Log-Likelihood (NLL). For each test observation, with predictive mean μ k = E [ y k ] and variance σ k 2 , the NLL is
NLL_k = \frac{1}{2}\left[ \log\!\left(2\pi\,\sigma_k^{2}\right) + \frac{\left(y_k - \mu_k\right)^{2}}{\sigma_k^{2}} \right]
The final score is the average over all test samples:
NLL = \frac{1}{n} \sum_{k=1}^{n} NLL_k
Brier score. For binary stress detection, the predictive probability of stress \hat{p}_k = P\!\left(H_1 \mid x_k\right) is compared with the true label y_k \in \{0, 1\}:
Brier = \frac{1}{n} \sum_{k=1}^{n} \left(\hat{p}_k - y_k\right)^{2}
Credible interval width (CI width). The 95% posterior credible interval is computed as
CI_k = 1.96\sqrt{\sigma_k^{2}}
and the mean width is averaged across the test set.
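A minimal sketch of how these three metrics can be computed from posterior predictive summaries is given below (array shapes and variable names are illustrative):
```python
import numpy as np

def predictive_metrics(y, mu, sigma2, p_stress, stress_label):
    """Gaussian NLL, Brier score, and mean 95% credible-interval width as defined above.
    y, mu, sigma2: (n,) observations, predictive means and variances;
    p_stress, stress_label: (n,) predicted stress probabilities and binary labels."""
    nll = np.mean(0.5 * (np.log(2 * np.pi * sigma2) + (y - mu) ** 2 / sigma2))
    brier = np.mean((p_stress - stress_label) ** 2)
    ci_width = np.mean(1.96 * np.sqrt(sigma2))       # mean 95% credible-interval width
    return nll, brier, ci_width
```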
Ablation studies assess the contribution of individual components by comparing the following: drone indices vs. no-drone inputs; Kalman fusion vs. adaptive Covariance Intersection; homoskedastic vs. heteroskedastic noise modeling; point-estimate (MAP) inference vs. full MCMC sampling; and fixed decision threshold vs. Bayes-risk-optimal threshold.
Performance degradation is quantified by:
\Delta NLL = NLL_{ablated} - NLL_{full} \quad \text{and} \quad \Delta Brier = Brier_{ablated} - Brier_{full}
Positive values indicate reduced predictive or probabilistic quality when a component is removed.
Additional Validation Measures: Further evaluation includes Accuracy, F1-score, PR-AUC, NLL, Brier score, Expected Calibration Error (ECE), reliability diagrams, and PIT histograms to assess predictive calibration and distributional correctness.
Robustness to correlation: Simulate cross-modal ρ ∈ [0, 0.9]; track coverage of 95% intervals and false-alarm rates at fixed recall.
Reliability diagrams and PIT histograms quantify the calibration quality of predictive distributions.

2.13. Hierarchical Structure of the Hybrid Bayesian Framework

Figure 1 presents the Bayesian multimodal fusion and decision architecture of the proposed uncertainty-aware Crop Decision-Support System (DSS). The framework integrates multi-source data (IoT soil sensors, laboratory assays, meteorological stations, and drone imagery) through adaptive covariance intersection (aCI) for robust fusion under unknown correlations.
The fused state vector enters the heteroskedastic Bayesian regression module, inferred via MCMC sampling (emcee), whose posterior distributions feed into the Bayesian likelihood-ratio test (LRT) for risk-sensitive decision-making and closed-loop PD control for irrigation and fertilization.
Table 1 outlines the hierarchical structure of the hybrid Bayesian framework. Each layer represents a mathematically well-defined operator acting on probability measures or state estimates.
The fusion layer performs convex optimization in the space of positive-definite matrices to obtain correlation-agnostic posteriors. The state-estimation layer executes recursive Bayesian updates for a stochastic state–space model. The regression layer introduces heteroskedastic variance modeling and posterior sampling via Markov Chain Monte Carlo. The decision layer applies a Bayes-risk-optimal likelihood-ratio test for binary hypotheses, and the control layer transforms posterior expectations and variances into real-time feedback actions. This decomposition formalizes the flow of uncertainty from data acquisition to decision and establishes a clear correspondence between algorithmic implementation and its underlying mathematical formulation.
Figure 2 presents the detailed probabilistic architecture underlying the multimodal Bayesian fusion and decision framework. The diagram highlights the flow from Kalman-based state estimation and adaptive covariance intersection, through heteroskedastic Bayesian regression inferred with MCMC, to Bayesian likelihood-ratio testing and closed-loop PD control. Each block corresponds to a well-defined probabilistic operator, making explicit how uncertainty propagates from sensor noise to posterior inference and operational decisions.

3. Results

The multimodal Bayesian framework was evaluated on a dataset consisting of 1100 samples and 12 agronomic, environmental, and spectral variables. Eleven crop types are included in the dataset: rice, maize, chickpea, kidney beans, lentil, pomegranate, watermelon, muskmelon, apple, cotton, and grapes, which are characteristic of the Balkans and North Macedonia. These represent multiple agronomic categories (cereals, legumes, fruit crops, and industrial fiber crops), enabling the assessment of the proposed method under heterogeneous biological growth characteristics and stress responses. Their diversity strengthens the generalizability of the probabilistic decision model.

3.1. Performance of Multimodal Fusion

Simulation experiments were conducted to quantify the effect of unknown cross-sensor correlations on posterior uncertainty. A controlled linear–Gaussian system with two sensors was simulated, where the true measurement-noise covariance contained injected cross-correlations ρ ∈ {0.0, 0.3, 0.6, 0.9}. The naïve Kalman fusion baseline assumes independent sensor noise, while the adaptive Covariance Intersection (aCI) method fuses the single-sensor posteriors without requiring knowledge of cross-covariances. To quantitatively compare the uncertainty behavior of the Kalman fusion and the adaptive Covariance Intersection (aCI), the time-averaged posterior covariance trace and the associated 95% credible interval width are evaluated.
For a given filtering method (KF or aCI), the mean posterior covariance trace over a time horizon T is defined as:
\bar{P}_{KF} = \frac{1}{T} \sum_{k=1}^{T} \mathrm{tr}\!\left(P_k^{KF}\right), \qquad \bar{P}_{aCI} = \frac{1}{T} \sum_{k=1}^{T} \mathrm{tr}\!\left(P_k^{aCI}\right)
where P k K F and P k a C I denote the posterior state covariance matrices at time step k for the Kalman filter and aCI, respectively.
Assuming a Gaussian posterior, the 95% credible-interval width, for a scalar state component with posterior variance σ k , i 2 = ( P k ) i i , is
CI_{95\%} = 1.96\sqrt{\sigma_{k,i}^{2}} = 1.96\sqrt{\left(P_k\right)_{ii}}
In the experiments, we report the mean of these 95% credible-interval widths across time and state dimensions for each filtering method, providing a scalar summary of posterior uncertainty that is directly comparable between KF and aCI.
Because the Kalman filter assumes a diagonal measurement covariance R_{model}, its posterior covariance P_k depends solely on the assumed model and is therefore invariant to the injected correlation ρ. Only the empirical MSE reflects the effect of ρ, demonstrating that the filter becomes overconfident and increasingly biased under unmodeled cross-correlation.
The results in Table 2 confirm that the covariance-based metrics of the naïve Kalman filter remain insensitive to the injected correlation ρ, because the filter assumes a fixed diagonal measurement-noise model. For aCI, the reported covariance summaries were observed to be approximately constant in these experiments; however, aCI may change its conservative bound when the single-sensor posteriors diverge more strongly under correlation. Thus, the posterior covariance trace and credible-interval width remain effectively constant across all conditions. In contrast, the empirical mean-squared error (MSE) increases monotonically with ρ, reflecting degradation in estimation accuracy as the true sensor dependence grows. The naïve Kalman filter accumulates this error more rapidly, whereas aCI remains slightly more robust for high ρ (e.g., ρ = 0.9), consistent with its guaranteed conservative behavior under unknown cross-correlations. No divergence was observed, confirming the stability result stated in Theorem 1.

3.2. Effect of Cross-Sensor Correlation on Posterior Uncertainty Propagation

This subsection provides an interpretation of the results in Section 3.1 by linking injected cross-sensor correlation (ρ) to the practical risk of overconfident fusion under an independence assumption. These single-sensor posteriors are subsequently fused to obtain a global posterior used for prediction and decision-making. However, in real agro-environmental systems, sensor noises are often statistically dependent due to common environmental drivers and shared acquisition conditions. To study the impact of such dependencies, cross-sensor correlation was explicitly injected at the measurement-noise level through a controlled covariance structure parameterized by ρ ∈ {0.0, 0.3, 0.6, 0.9}. This procedure allows for systematic evaluation of fusion robustness under increasing violation of the independence assumption.
Two fusion strategies were compared. The naïve Kalman filter (KF) assumes a diagonal measurement-noise covariance matrix and therefore treats all sensor measurements as statistically independent. In contrast, the adaptive Covariance Intersection (aCI) method fuses posterior distributions without requiring knowledge of cross-covariances, guaranteeing conservative uncertainty bounds even under unknown inter-sensor correlations. For both methods, posterior uncertainty was propagated over time and summarized by the mean trace of the posterior covariance and the corresponding 95% credible-interval width, as defined in Equations (42) and (43). This setup isolates the effect of unmodeled correlation on posterior uncertainty estimation and provides the theoretical basis for the comparative results reported in Section 3.1.

3.3. Posterior Inference of Heteroskedastic Bayesian Regression

Posterior inference was performed using an affine-invariant ensemble MCMC sampler with 80 walkers and 3000 iterations, yielding 240,000 total draws, of which N_retained were kept after burn-in. Convergence diagnostics confirmed stable posterior exploration (Table 3).
Posterior summaries (median and interquartile range) for the most influential parameters are shown in Table 4.
Wide interquartile ranges observed for log-variance coefficients γ2 associated with phosphorus indicate partial identifiability, which commonly arises when predictors are correlated or when variance heterogeneity is modest relative to observation noise. This behavior is expected in heteroskedastic regression models and does not compromise the proposed framework. Importantly, uncertainty in γ is fully propagated into the posterior predictive variance σ2ₖ and, consequently, into the stress probability πₖ used for decision-making. In practice, larger uncertainty in γ leads to wider predictive intervals and more conservative decisions, rather than overconfident variance estimates. The operational reliability of the variance model is therefore assessed at the predictive level using calibration diagnostics and distributional metrics (NLL, ECE, PIT), which directly evaluate the adequacy of uncertainty quantification.
Temperature and CTI strongly govern the posterior mean of the yield–stress index, whereas the γ coefficients indicate differential contributions of environmental drivers to predictive variance, with strongest effects attributed to temperature and phosphorus.

3.4. Posterior Predictive Accuracy

Posterior predictive samples y ~ k were generated from the retained MCMC chains. On the normalized yield scale, the model achieved the predictive results shown in Table 5.
Reliability diagrams closely followed the perfect-calibration diagonal, and the PIT histogram was approximately uniform, indicating well-calibrated predictive distributions. The lower RMSE (0.43) reflects normalization; Section 3.9 reports out-of-sample (stratified 3-fold) performance on the original yield_index scale, which is expected to yield larger error and calibration metrics than the in-sample normalized diagnostics reported here.
Figure 3 presents the posterior predictive calibration diagnostics for the proposed Bayesian model.
Figure 3a shows the Probability Integral Transform (PIT) histogram for the normalized yield_index. The histogram is approximately uniform (mean = 0.50, std = 0.31), indicating that the posterior predictive distribution is, in general, well calibrated. A mild U-shaped pattern is visible, with slightly increased mass in the 0–0.2 and 0.8–1.0 intervals, suggesting marginal underdispersion, meaning that predictive intervals are slightly narrower than the true empirical variability. Despite this, the PIT values remain well within acceptable limits for heteroskedastic Bayesian regression models. Combined with the low Expected Calibration Error (ECE = 0.036), these results confirm good overall probabilistic calibration.
Figure 3b provides a reliability diagram (calibration curve) for the posterior stress probabilities πₖ. The calibration curve lies close to the diagonal, demonstrating that the predicted probabilities closely match the empirical frequency of stress across most probability bins. Deviations from perfect calibration are small, which is consistent with the low ECE and further supports that the proposed heteroskedastic Bayesian model achieves well-calibrated probabilistic predictions.

3.5. Bayesian Stress Decision Layer

Building on the posterior predictive distributions obtained in Section 3.3, a probabilistic decision layer was constructed to translate continuous yield–stress predictions into binary stress alarms suitable for agro-environmental decision-making.
Stress was defined using a yield-index threshold on the original (non-normalized) scale:
S = \left\{ y_k < y_{crit} \right\}, \qquad y_{crit} = 6.5
For each sample k the posterior stress probability was estimated from the posterior predictive draws:
\pi_k = \Pr\!\left(y_k \in S \mid D_k\right) \approx \frac{1}{N_{draw}} \sum_{j=1}^{N_{draw}} \mathbb{1}\!\left\{ \tilde{y}_k^{(j)} < y_{crit} \right\}
where y ~ k ( j ) denotes the j -th posterior predictive sample, and I{⋅} is the indicator function.
The dataset exhibits moderate class imbalance, with 762 SAFE and 338 STRESS samples (STRESS ratio ≈ 0.307), making accuracy alone an unreliable metric for threshold selection.
Under a symmetric loss assumption, where false alarms and missed detections have equal costs (C10 = C01), the Bayes-optimal decision threshold becomes:
\pi^{*} = \frac{C_{10}}{C_{10} + C_{01}} = 0.5
A binary stress decision is then obtained as:
\hat{z}_k = \begin{cases} 1, & \text{if } \pi_k \geq \pi^{*} \\ 0, & \text{otherwise} \end{cases}
where z ^ k = 1 denotes STRESS and z ^ k = 0 denotes SAFE. Using π * = 0.5 yields a balanced baseline classifier that detects both SAFE and STRESS samples. However, due to overlapping posterior stress probabilities and class imbalance, this theoretically optimal threshold does not maximize task-specific performance metrics, such as the F1-score.
To analyze the sensitivity of stress detection to the decision threshold, the threshold τ ∈ (0,1) was varied, and the corresponding accuracy and F1-score were evaluated.
Maximizing accuracy leads to a high threshold (τ ≈ 0.94), which trivially predicts all samples as SAFE. While this operating point achieves an accuracy of approximately 0.69, it yields an F1-score of zero and completely suppresses stress detection, rendering it unsuitable for risk-sensitive agricultural applications.
In contrast, maximizing the F1-score yields a low threshold (τ* ≈ 0.04), corresponding to a safety-oriented operating point. This threshold substantially increases recall for the STRESS class at the expense of increased false alarms, reflecting a cost-asymmetric decision rule consistent with agro-climatic risk management, where missed stress events are considerably more costly than false alarms.
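A minimal sketch of the threshold sweep underlying this analysis (illustrative; it assumes posterior stress probabilities π_k and binary labels are available and uses scikit-learn metrics) is shown below:
```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def threshold_sweep(pi_k, y_true, taus=np.linspace(0.01, 0.99, 99)):
    """Sweep the decision threshold and record accuracy / F1 for the STRESS class.
    pi_k: posterior stress probabilities; y_true: binary labels (1 = STRESS)."""
    acc = np.array([accuracy_score(y_true, pi_k >= t) for t in taus])
    f1 = np.array([f1_score(y_true, pi_k >= t, zero_division=0) for t in taus])
    return taus, acc, f1, taus[np.argmax(acc)], taus[np.argmax(f1)]
```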
Table 6 summarizes stress classification performance at representative operating points, clearly illustrating how different optimization criteria lead to fundamentally different decision behaviors.
To illustrate the effect of threshold selection, confusion matrices are reported for two representative operating points: Figure 4a presents the confusion matrix obtained using the symmetric Bayes threshold τ = 0.5, serving as a neutral baseline; Figure 4b presents the confusion matrix obtained using the F1-optimal threshold τ* ≈ 0.04, highlighting the substantial increase in STRESS recall at the cost of increased false alarms.
These results demonstrate that threshold selection critically influences the operational behavior of the stress decision layer. While the symmetric Bayes threshold provides a theoretically sound baseline under equal error costs, practical agro-climatic decision-making typically involves asymmetric risk, where missed stress events (e.g., under-irrigation) are substantially more costly than false alarms. Consequently, optimal decision performance cannot be defined solely in terms of accuracy, but must account for application-specific risk preferences and cost asymmetry. The proposed probabilistic framework enables transparent exploration of this trade-off and supports application-specific threshold selection based on risk tolerance rather than accuracy alone.
To further elucidate the behavior of the Bayesian stress decision layer, the empirical posterior stress probability distributions π_k for SAFE and STRESS samples are shown in Figure 5.
The two distributions exhibit substantial overlap across a wide range of probability values, indicating that stress and non-stress conditions cannot be cleanly separated by a single probability threshold. Importantly, this overlap reflects intrinsic uncertainty and class imbalance in the data, rather than a lack of probabilistic calibration. This overlap directly explains the degeneracy observed at the symmetric Bayes threshold (τ = 0.5), where most samples fall on the SAFE side of the decision boundary, leading to suppressed STRESS recall despite well-calibrated posterior probabilities.
Conversely, the F1-optimal threshold τ* ≈ 0.04 shifts the decision boundary into a low-probability region, intentionally prioritizing sensitivity to stress events at the cost of increased false alarms. This operating point corresponds to a cost-sensitive decision rule that favors recall over precision, which is appropriate in risk-averse agro-environmental applications.
Figure 5 demonstrates that the trade-off between accuracy and recall is not a modeling artifact but an intrinsic consequence of posterior uncertainty and class overlap.
This behavior underscores the necessity of probabilistic decision-making and cost-aware threshold selection in agro-environmental applications, where missed stress events are substantially more costly than conservative interventions. Notably, decision performance is governed at the decision layer, while the underlying Bayesian model remains well-calibrated and uncertainty-aware.
Figure 6 presents accuracy and F1-score as a function of the decision threshold τ. Maximizing accuracy leads to a trivial all-SAFE classifier, while maximizing the F1-score yields a safety-oriented operating point suitable for agro-environmental decision-making.
The strong overlap between posterior stress probability distributions for SAFE and STRESS samples prevents the existence of a single threshold that simultaneously maximizes accuracy and recall. This limitation is inherent to the uncertainty structure of the problem and is exacerbated by class imbalance.
Accordingly, the proposed Bayesian framework ensures calibrated probabilities and supports flexible, cost-aware decision rules, rather than relying on fixed deterministic thresholds or post hoc accuracy optimization.

3.6. Heteroskedastic Noise Modeling

Table 7 compares two noise models: homoskedastic, where the observation noise has constant variance across all conditions; and heteroskedastic, where the variance changes as a function of agro-climatic drivers (temperature, humidity, CTI, etc.), as defined by γ in Equation (20).
To evaluate comparative performance, three probabilistic metrics are reported:
-
NLL (Negative Log-Likelihood): Assesses the overall quality of the predictive distribution—lower values indicate better-calibrated uncertainty.
-
Brier score: Measures the accuracy of probabilistic stress predictions—lower is better.
-
CI width: Average width of the 95% credible interval—narrower intervals indicate reduced predictive uncertainty.
Table 7 demonstrates a clear advantage of heteroskedastic modeling: a 34–44% reduction in predictive uncertainty (narrower credible intervals) and a ≈ 45% improvement in probabilistic decision quality (substantially lower NLL and Brier score).
These results confirm that allowing for the noise variance to depend on environmental stressors (via γ) yields better-calibrated, sharper, and more informative predictive distributions compared with a homoskedastic assumption.

3.7. Ablation Study

MCMC inference and drone vegetation indices are the two largest contributors to uncertainty reduction.
Table 8 reports the ablation results used to quantify the contribution of each component to predictive performance and decision reliability. Removing the aCI fusion mechanism led to higher uncertainty and degraded calibration (ΔNLL = +0.19), confirming that robust aggregation of heterogeneous sensors is essential when cross-correlation is present among soil, weather, and drone modalities. Excluding the drone vegetation indices (NDVI, SAVI, CTI) produced an even larger deterioration (ΔNLL = +0.25), demonstrating that canopy-level spectral measurements capture variance that cannot be explained by ground sensors alone.
Replacing full MCMC inference with a MAP estimator resulted in the largest increase in NLL (+0.31), indicating that accurate uncertainty quantification—not only point predictions—is crucial for reliable stress probability estimation. Finally, enforcing a fixed decision threshold (τ = 0.5) reduced robustness across environmental conditions, primarily degrading decision-level metrics (e.g., F1/recall) under class imbalance, even when the underlying probabilistic predictions remained unchanged.
The ablation study highlights that the combination of MCMC uncertainty estimation, drone-based vegetation indices, and robust aCI fusion provides the greatest improvement in both calibration and decision quality.

3.8. Control Performance

The posterior uncertainty was propagated into a proportional–derivative (PD) control law:
u_k = K_p\!\left(y^{*} - \mathbb{E}\!\left[y_k\right]\right) + K_d\, \Delta\mathbb{E}\!\left[y_k\right]
where y^{*} is the target yield index, \mathbb{E}[y_k] denotes the posterior predictive mean at day k, and \Delta\mathbb{E}[y_k] = \mathbb{E}[y_k] - \mathbb{E}[y_{k-1}]. The proportional and derivative gains were empirically tuned to ensure closed-loop stability, smooth irrigation actuation, and realistic soil–plant dynamics. The final values used in all simulations were K_p = 0.85 and K_d = 0.12. These values were selected by grid search over stabilizing ranges to minimize water usage while avoiding oscillatory moisture dynamics. The same gains were used for both controllers to ensure a fair comparison.
To quantitatively evaluate the effect of incorporating posterior predictive uncertainty into the irrigation controller, a stylized 90-day soil–plant dynamic simulation was performed. The Bayesian proportional–derivative controller was compared to a conventional rule-based strategy across 200 Monte Carlo realizations with stochastic evapotranspiration, sensor noise, and soil-moisture variability.
True soil-moisture dynamics:
M_{k+1} = M_k + I_k - ET_k + \omega_k
where I_k is the irrigation applied by the controller, ET_k is the evapotranspiration (randomly varying daily), and ω_k ∼ N(0, σ_ω²) is process noise.
Sensor model (noisy measurements):
\tilde{M}_k = M_k + v_k, \qquad v_k \sim \mathcal{N}\!\left(0,\ \sigma_v^{2}\right)
which generates occasional false stress signals.
Stress was defined as s t r e s s k = ( y k < y c r i t ) with y c r i t = 6.5 . In this stylized 90-day simulation, soil moisture M k acts as a proxy driver that influences the yield–stress index y k through the predictive model; therefore, irrigation affects stress probability indirectly via the dynamics of M k .
Two controllers were tested:
Rule-based controller: irrigates when the measured moisture falls below a fixed threshold.
Bayesian PD controller: irrigates only when P(M_k < M_crit | D_k) > τ, where τ is the decision threshold selected according to the Bayesian stress decision layer (Section 3.5).
Irrigation was scaled using the PD control law in Equation (41), while Equation (43) specifies the noisy measurement model used for posterior state updates.
For each simulation run, the following are recorded:
Total water usage:
W = k = 1 90 I k
False stress alarms: Days when the controller irrigated due to noise, even though
( M k M c r i t )
After running 200 stochastic simulations, the following are computed:
Water usage:
\bar{W}_{rule} = \frac{1}{200} \sum_{i=1}^{200} W_{rule}^{(i)} = 238.65, \qquad \bar{W}_{bayes} = \frac{1}{200} \sum_{i=1}^{200} W_{bayes}^{(i)} = 204.69
False stress alarms:
\overline{FA}_{rule} = \frac{1}{200} \sum_{i=1}^{200} FA_{rule}^{(i)} = 0.78, \qquad \overline{FA}_{bayes} = \frac{1}{200} \sum_{i=1}^{200} FA_{bayes}^{(i)} = 0.50
The performance metrics (204.69 vs. 238.65 units of water; 0.50 vs. 0.78 false-alarm days) represent the ensemble means across all 200 simulations, providing statistically stable estimates of long-term behavior. Each simulation yields slightly different outcomes due to climatic and sensor uncertainty, and the reported averages thus quantify systematic, rather than incidental, performance differences.
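A compact sketch of a Monte Carlo comparison of this kind is given below; all numerical constants, the simplified measurement-resampling posterior, and the fixed irrigation dose are illustrative placeholders rather than the exact simulation used to produce the reported figures:
```python
import numpy as np

def simulate(controller, days=90, M0=30.0, M_crit=25.0, tau=0.5,
             sigma_w=0.5, sigma_v=1.5, n_draws=200, rng=None):
    """Stylized soil-moisture loop comparing rule-based and Bayesian triggering."""
    if rng is None:
        rng = np.random.default_rng()
    M, water, false_alarms = M0, 0.0, 0
    for _ in range(days):
        ET = rng.uniform(1.0, 3.0)                          # daily evapotranspiration
        M_meas = M + rng.normal(0.0, sigma_v)               # noisy sensor reading
        if controller == "rule":
            irrigate = M_meas < M_crit                      # threshold on the raw measurement
        else:                                               # Bayesian: posterior prob. of low moisture
            draws = M_meas + rng.normal(0.0, sigma_v, size=n_draws)  # simplified posterior draws
            irrigate = np.mean(draws < M_crit) > tau
        I = 3.0 if irrigate else 0.0                        # fixed irrigation dose (placeholder)
        if irrigate and M >= M_crit:
            false_alarms += 1                               # irrigation triggered by noise only
        water += I
        M = M + I - ET + rng.normal(0.0, sigma_w)           # true moisture dynamics
    return water, false_alarms

results = {c: [simulate(c, rng=np.random.default_rng(s)) for s in range(200)]
           for c in ("rule", "bayes")}
for c, runs in results.items():
    W, FA = np.mean([r[0] for r in runs]), np.mean([r[1] for r in runs])
    print(f"{c}: mean water = {W:.2f}, mean false-alarm days = {FA:.2f}")
```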
The Bayesian controller demonstrated substantial gains in both resource efficiency and decision reliability:
  • 14.2% reduction in total water usage (204.69 vs. 238.65 units), achieved by acting only when the posterior predictive distribution indicated a credible risk of plant stress;
  • 35.5% fewer false stress alarms (0.50 vs. 0.78 days), confirming that calibrated uncertainty estimates prevent unnecessary irrigation events;
  • More stable control inputs, because posterior uncertainty regularizes abrupt changes in the expected yield-index trajectory \mathbb{E}[y_k].
Compared to rule-based irrigation, the results report 14.2% lower water usage and 35.5% fewer false stress alarms.
Figure 7 compares the relative irrigation performance of the proposed Bayesian uncertainty-aware controller and the conventional rule-based strategy, normalized to the rule-based baseline. The Bayesian controller achieves approximately 14% lower water consumption and 35% fewer false stress alarms on average. Error bars denote 95% confidence intervals obtained from 200 Monte Carlo simulations. The results demonstrate that integrating posterior predictive uncertainty into the control law leads to both improved water-use efficiency and enhanced decision reliability.

3.9. Cross-Validated Decision Performance

A stratified 3-fold Bayesian cross-validation was conducted following the protocol in Section 2.12. The full dataset (N = 1100) was partitioned into three folds while preserving the stress proportion (≈30.7%, defined as yield_index < 6.5). For each fold, the heteroskedastic Bayesian regression model was re-trained on the training subset, and posterior predictive distributions for the validation subset were obtained using MCMC sampling. Table 9 reports performance across folds using RMSE for the continuous yield_index, Gaussian negative log-likelihood (NLL), Brier score for stress probability, accuracy and F1-score for binary stress classification, and expected calibration error (ECE).
The model achieved an RMSE of 15.08 ± 1.78, indicating moderate reconstruction accuracy of the yield index. The NLL (3.01 ± 0.31) and Brier score (0.32 ± 0.04) show that the model produces informative probabilistic predictions. Stress-detection accuracy averaged 0.69 ± 0.065, with an F1-score of 0.33 ± 0.057, reflecting reasonable discrimination under pronounced class imbalance.
Calibration analysis yielded an ECE of 0.25 ± 0.07, indicating that predicted stress probabilities are not perfectly calibrated but still retain meaningful uncertainty structure. Notably, folds with lower Brier scores tended to show lower ECE, suggesting internal consistency between probabilistic accuracy and calibration.
Overall, these results confirm that the heteroskedastic Bayesian model provides coherent and practically useful uncertainty estimates that can be directly exploited in the risk-sensitive decision and control framework described in Section 2.8 and Section 2.10. Given the class imbalance (≈30% stressed crops), an accuracy near 51% would be expected from a naïve classifier, whereas the observed F1-score of 0.33 demonstrates non-trivial stress-detection capability.

4. Discussion

The results of this study demonstrate the importance of explicitly modeling uncertainty in agro-environmental decision-making systems, particularly when predictions are directly coupled with control actions, such as irrigation. Unlike many existing smart-agriculture pipelines that rely on deterministic regression or classification models and do not propagate uncertainty into the control loop, the proposed framework integrates heteroskedastic Bayesian inference supported by advanced MCMC techniques [7,8,9,18,19,20], robust multimodal sensor fusion via adaptive Covariance Intersection grounded in optimal covariance-bounding theory [14,25,35,36], and a risk-sensitive stress decision layer formulated in accordance with Bayesian decision theory [22]. This holistic probabilistic formulation allows for not only point prediction of crop stress but also calibrated quantification of confidence, which is essential for reliable actuation under stochastic environmental and dynamic-system conditions, consistent with principles of uncertainty-aware control developed in related nonlinear systems [3,4,5,37].
The uncertainty fusion analysis confirmed that unmodeled cross-sensor correlations lead to systematic overconfidence in the naïve Kalman filter, as evidenced by the approximately invariant posterior covariance trace despite increasing injected correlation ρ. Such behavior reflects the well-known inconsistency of Kalman-based fusion when independence assumptions are violated [12,13]. In contrast, the adaptive Covariance Intersection method maintained conservative and stable uncertainty bounds across all correlation levels, while the empirical MSE increased monotonically with ρ. These findings validate the robustness guarantees of CI-type fusion methods [14,38,39] and highlight their importance for real-world agro-ecological sensing systems, where soil, weather, and UAV-derived measurements are often mutually dependent [25,26,40,41].
The heteroskedastic Bayesian regression results further demonstrate that allowing for predictive variance to depend on environmental stressors (such as temperature, moisture imbalance, and CTI) yields sharper and better-calibrated posterior distributions than homoskedastic models. The 34% reduction in credible-interval width and the substantial improvement in NLL and Brier score indicate that heteroskedastic modeling captures meaningful structure in both the mean and variance of the yield–stress process. Similar conclusions regarding the superiority of heteroskedastic uncertainty models in environmental prediction tasks have been reported in recent probabilistic and sensor-fusion studies [3,6,22,39].
From a decision-theoretic perspective, the Bayesian stress decision layer illustrates the practical limitations of symmetric-loss Bayes rules under class imbalance. While the theoretical threshold π* = 0.5 is optimal only when misclassification costs are equal, agro-climatic decision-making is inherently asymmetric, since missed stress events (e.g., under-irrigation) are considerably more costly than false alarms. By optimizing the decision threshold using the F1-score, the practically optimal operating point shifted to τ* ≈ 0.04, resolving the degeneracy of the symmetric-cost classifier and significantly improving recall for the stress class. Similar threshold-adaptation strategies have been advocated in recent agricultural risk-detection and stress-monitoring systems [19,21,24,40,41].
The closed-loop irrigation control experiments further confirm the operational value of uncertainty-aware decision-making. By propagating posterior predictive uncertainty into a proportional–derivative control law, the Bayesian controller achieved a 14.2% reduction in total water usage and a 35.5% reduction in false stress alarms compared with a conventional rule-based strategy. These gains are fully consistent with recent IoT- and ML-enabled smart irrigation systems, which report that closed-loop control combined with probabilistic prediction can substantially reduce water consumption while maintaining yield stability [19,20,21]. However, unlike most existing smart-irrigation schemes that rely on deterministic thresholds or point forecasts [19,20], the proposed controller explicitly integrates posterior uncertainty into the control logic, thereby reducing actuation driven by sensor noise and short-term fluctuations.
In previous work on deterministic crop recommendation, Knights et al. [29] developed a supervised learning pipeline combining exploratory data analysis, mathematical modeling, and six machine learning classifiers (Decision Tree, Random Forest, Gaussian Naïve Bayes, Logistic Regression, XGBoost, and SVM) to recommend the most suitable crop based on soil nutrients and agro-environmental conditions. Using a dataset with 1100 samples and 11 crop types, ensemble and probabilistic models achieved classification accuracies above 99%, confirming that well-structured agro-environmental features can support highly reliable deterministic crop-type prediction [25,27,28].
However, the problem addressed in the present study is fundamentally different. Instead of multi-class crop-type classification under relatively balanced classes, we address binary stress detection derived from a continuous yield_index under class imbalance (≈30% stressed cases), heteroskedastic noise, and explicit uncertainty propagation into irrigation control. Consequently, overall accuracy is not directly comparable between the two regimes: near-perfect deterministic accuracies reflect a simpler classification setting with point predictions, whereas the accuracy of 0.69 and F1-score of 0.33 obtained here arise from a substantially more demanding, risk-sensitive probabilistic decision-making task.
Taken together, the two lines of work are complementary: deterministic ML models offer excellent crop-selection accuracy [9,14,25], whereas the Bayesian framework proposed here enables calibrated stress-risk assessment and robust control under uncertainty in dynamic agro-environmental conditions.
The proposed pipeline is modular by design. Components that are largely domain-agnostic include adaptive Covariance Intersection (aCI) fusion, which operates on Gaussian posterior summaries without requiring cross-covariance knowledge; Bayesian posterior sampling and uncertainty propagation; Bayes-risk-based decision thresholding; and probabilistic calibration and evaluation metrics. Domain-specific elements are the definition of the latent state vector x_k, the measurement operators H_k^(i) and noise models R_k^(i), the event or stress set S used to define posterior event probabilities π_k, and the choice of control law (a PD controller in this study).
To apply the framework to robotic navigation, x_k may represent pose and velocity, measurements may originate from LiDAR or vision sensors, and S may encode collision or localization-failure risk. In environmental monitoring, x_k may represent pollutant concentrations or hydrological states, and S may define regulatory exceedance events. In all cases, the fusion-to-decision structure of the framework remains unchanged.
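To make this modularity concrete, the sketch below shows one possible way to package the domain-specific elements (state names, measurement operators and noise models, event probability, threshold) as plain data passed to a generic decision step. The interface is hypothetical and only illustrates the separation of concerns described above, not the authors' implementation.

```python
# Hypothetical modular interface: domain-specific pieces are supplied as data/functions,
# while the decision step stays generic across agriculture, robotics, or monitoring.
from dataclasses import dataclass
from typing import Callable
import numpy as np
from scipy.stats import norm

@dataclass
class Modality:
    H: np.ndarray          # measurement operator H_k^(i)
    R: np.ndarray          # measurement noise covariance R_k^(i)

@dataclass
class DomainSpec:
    state_names: list[str]                                   # latent state x_k
    modalities: dict[str, Modality]                          # IoT soil sensors, weather, drone indices, ...
    event_prob: Callable[[np.ndarray, np.ndarray], float]    # P(x_k in S | data) from a posterior summary
    threshold: float                                         # Bayes-risk / F1-optimal decision threshold

def decide(spec: DomainSpec, x_fused: np.ndarray, P_fused: np.ndarray) -> bool:
    """Generic decision step: raise an alarm when the posterior event probability exceeds the threshold."""
    return spec.event_prob(x_fused, P_fused) >= spec.threshold

# Agro-environmental instance: stress event S = {soil moisture below 0.3}, Gaussian posterior summary.
agro = DomainSpec(
    state_names=["soil_moisture", "ET", "NDVI"],
    modalities={"soil": Modality(H=np.eye(3)[:1], R=np.array([[0.02]]))},
    event_prob=lambda x, P: norm.cdf(0.3, loc=x[0], scale=np.sqrt(P[0, 0])),
    threshold=0.04,
)
print(decide(agro, x_fused=np.array([0.35, 4.2, 0.6]), P_fused=np.diag([0.01, 0.2, 0.005])))
```

Swapping in a robotics or monitoring application only requires redefining the DomainSpec fields; the fusion-to-decision path is untouched.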
From a broader perspective, these findings support the growing role of Bayesian learning and uncertainty-aware control in smart agriculture, particularly in IoT-enabled systems where sensing reliability, environmental variability, and economic risk must be jointly addressed [15,16,17,18,19,20,21]. The proposed framework provides a principled mathematical bridge between probabilistic inference and real-time actuation, enabling sustainable water management under uncertain climatic conditions and complementing recent UAV–IoT–ML integration efforts in precision farming [17,18,20].
Future research will extend the current decision-support framework beyond multimodal Bayesian inference toward full cyber–physical autonomy.
First, strengthening the IoT network and system security will be essential to ensure reliable data acquisition and to protect the DSS against vulnerabilities, intrusion attempts, and manipulation of sensor streams, in line with previously proposed methodologies for IoT threat detection and prevention [42].
Second, the framework will be advanced toward integration with robotic and mechatronic platforms, enabling autonomous monitoring, precision spraying, soil sampling, and stress mitigation using mobile and anthropomimetic robots. Prior work on safety-oriented robot operation [43] and balance-stability control in anthropomimetic robots [44] provides a strong foundation for incorporating robotic actuation into the DSS architecture. Additional advances in agricultural robotics, particularly in autonomous ground vehicles and UAV-enabled monitoring [45], will further support this integration.
Ultimately, the DSS is envisioned to evolve into a secure, uncertainty-aware cyber–physical ecosystem in which Bayesian reasoning, secure IoT infrastructure, and autonomous robotics jointly support sustainable and resilient agricultural management.

5. Conclusions

This study presented a unified Bayesian framework for multimodal agro-environmental decision support that integrates robust sensor fusion, heteroskedastic Bayesian regression, and a risk-sensitive stress–decision layer within a closed-loop irrigation controller. The results show that explicitly modeling and propagating uncertainty—from sensor measurements to predictive inference and control actions—substantially improves reliability compared with deterministic baselines. Adaptive Covariance Intersection maintained stable uncertainty bounds under unknown cross-sensor correlations, while heteroskedastic modeling produced sharper and better-calibrated predictive distributions. The Bayesian decision layer effectively addressed class imbalance through threshold optimization, and the uncertainty-aware controller achieved meaningful improvements in water efficiency and reduction of false stress alarms.
Collectively, these findings demonstrate that uncertainty-aware Bayesian reasoning is essential for robust smart-agriculture decision systems operating under environmental variability and sensor noise. Future work will extend the framework toward secure IoT network integration and autonomous robotic platforms, paving the way for a fully cyber–physical, uncertainty-aware ecosystem for sustainable agricultural management.

Author Contributions

Conceptualization, V.A.K.; methodology, V.A.K.; software, V.A.K., M.P., L.K. and J.G.K.; investigation, V.A.K. and J.G.K.; resources, V.A.K.; writing—original draft preparation, M.P. and L.K.; writing—review and editing, V.A.K., J.G.K. and L.K.; visualization, M.P. and L.K.; supervision, V.A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research is based on the methodology developed within the project “Mathematical Models in Machine Learning for Smart Agriculture and High-Value Food”, funded by the Ministry of Education and Science of the Republic of North Macedonia under the Call for Financing Scientific Research Projects at Public Universities and Scientific Institutes for 2025–2026 (Contract No. 15-6171/22, dated 7 August 2025). No project funds were received at the time of publication. The article processing charge (APC) was covered by the authors.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors acknowledge the institutional framework of the project “Mathematical Models in Machine Learning for Smart Agriculture and High-Value Food”.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
aCI	Adaptive covariance intersection
AICT	Integrated autocorrelation time
CI	Covariance intersection
CTI	Canopy temperature index
DSS	Decision support system
ECE	Expected calibration error
ESS	Effective sample size
ET	Evapotranspiration
F1	F-measure (harmonic mean of precision and recall)
GP	Gaussian process
IoT	Internet of Things
KF	Kalman filter
LRT	Likelihood-ratio test
MAP	Maximum a posteriori
MCMC	Markov Chain Monte Carlo
ML	Machine learning
MSE	Mean-squared error
NLL	Negative log-likelihood
PD	Proportional–derivative (controller)
RMSE	Root-mean-squared error
SLR	Stress likelihood ratio
EKF/UKF	Extended Kalman filter/unscented Kalman filter

References

  1. Obayemi, A.; Nguyen, K.A. Uncertainty Quantification of Multimodal Models. In Advances and Trends in Artificial Intelligence. Theory and Applications, Proceedings of the IEA/AIE 2025, Kitakyushu, Japan, 1–4 July 2025; Fujita, H., Watanobe, Y., Ali, M., Wang, Y., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2026; Volume 15706. [Google Scholar] [CrossRef]
  2. Schmerling, E. Multimodal Modeling and Uncertainty Quantification for Robot Planning and Decision Making. Doctoral Dissertation, Stanford University, Stanford, CA, USA, 2019. Available online: https://stacks.stanford.edu/file/druid:xx848nq8857/SchmerlingPhD-augmented.pdf (accessed on 8 December 2025).
  3. Knights, V.A.; Petrovska, O.; Kljusurić, J.G. Symmetry and Asymmetry in Dynamic Modeling and Nonlinear Control of a Mobile Robot. Symmetry 2025, 17, 1488. [Google Scholar] [CrossRef]
  4. Knights, V.A.; Petrovska, O.; Kljusurić, J.G. Nonlinear Dynamics and Machine Learning for Robotic Control Systems in IoT Applications. Future Internet 2024, 16, 435. [Google Scholar] [CrossRef]
  5. Knights, V.; Petrovska, O. Dynamic Modeling and Simulation of Mobile Robot Under Disturbances and Obstacles in an Environment. J. Appl. Math. Comput. 2024, 8, 59–67. [Google Scholar] [CrossRef]
  6. El Moselhy, T.A.; Marzouk, Y.M. Bayesian Inference with Optimal Maps. J. Comput. Phys. 2012, 231, 7815–7850. [Google Scholar] [CrossRef]
  7. Robert, C.P.; Casella, G. Monte Carlo Statistical Methods, 2nd ed.; Springer: Berlin, Germany, 2004. [Google Scholar] [CrossRef]
  8. Stuart, A.M. Inverse Problems: A Bayesian Perspective. Acta Numer. 2010, 19, 451–559. [Google Scholar] [CrossRef]
  9. Girolami, M.; Calderhead, B. Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods. J. R. Stat. Soc. B 2011, 73, 123–214. [Google Scholar] [CrossRef]
  10. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. ASME J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef]
  11. Anderson, B.D.O.; Moore, J.B. Optimal Filtering: In Prentice-Hall Information and System Science Series, 9th ed.; Kailath, T., Ed.; Prentice Hall: Englewood Cliffs, NJ, USA, 1979; Available online: http://www.dii.unimore.it/~lbiagiotti/MaterialeKalmanFilter/AndersonMoore2005.pdf (accessed on 8 December 2025).
  12. Uhlmann, J.K. Covariance Consistency Methods for Fault-Tolerant Distributed Data Fusion. Inf. Fusion 2003, 4, 201–215. [Google Scholar] [CrossRef]
  13. Julier, S.J.; Uhlmann, J.K. A Non-Divergent Estimation Algorithm in the Presence of Unknown Correlations. In Proceedings of the American Control Conference, Albuquerque, NM, USA, 4–6 June 1997; Volume 4, pp. 2369–2373. [Google Scholar] [CrossRef]
  14. Reinhardt, M.; Noack, B.; Arambel, P.O.; Hanebeck, U.D. Minimum Covariance Bounds for the Fusion under Unknown Correlations. IEEE Signal Process. Lett. 2015, 22, 1210–1214. [Google Scholar] [CrossRef]
  15. Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA, 2006. [Google Scholar]
  16. Dick, J.; Goda, T.; Murata, H. Toeplitz Monte Carlo. Stat. Comput. 2021, 31, 1121–1135. [Google Scholar] [CrossRef]
  17. Sivia, D.S.; Skilling, J. Data Analysis: A Bayesian Tutorial, 2nd ed.; Oxford University Press: Oxford, UK, 2006; Available online: https://global.oup.com/academic/product/data-analysis-9780198568322 (accessed on 11 December 2025).
  18. Goodman, J.; Weare, J. Ensemble Samplers with Affine Invariance. Commun. Appl. Math. Comput. Sci. 2010, 5, 65–80. [Google Scholar] [CrossRef]
  19. Foreman-Mackey, D.; Hogg, D.W.; Lang, D.; Goodman, J. Emcee: The MCMC Hammer. Publ. Astron. Soc. Pac. 2013, 125, 306–312. [Google Scholar] [CrossRef]
  20. Knights, V.; Petrovska, O.; Prchkowska, M. Mathematical Analysis of Stochastic Transition Matrices in Markov Models Using Monte Carlo and MCMC Simulation. J. Nat. Sci. Math. UT 2024, 10, 100–110. [Google Scholar] [CrossRef]
  21. Berger, J.O. Statistical Decision Theory and Bayesian Analysis, 2nd ed.; Springer: New York, NY, USA, 1985. [Google Scholar]
  22. Raiffa, H.; Schlaifer, R. Applied Statistical Decision Theory. Available online: https://gwern.net/doc/statistics/decision/1961-raiffa-appliedstatisticaldecisiontheory.pdf (accessed on 11 December 2025).
  23. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley: Hoboken, NJ, USA, 1991; Available online: https://cs-114.org/wp-content/uploads/2015/01/Elements_of_Information_Theory_Elements.pdf (accessed on 11 December 2025).
  24. Whittle, P. Risk-Sensitive Optimal Control; Wiley: Chichester, UK, 1990; Available online: https://scispace.com/pdf/risk-sensitive-optimal-control-3fpaelp8n4.pdf (accessed on 11 December 2025).
  25. Russell, J.; Bergmann, J.H.M.; Nagaraja, V.H. Towards Dynamic Multi-Modal Intent Sensing Using Probabilistic Sensor Networks. Sensors 2022, 22, 2603. [Google Scholar] [CrossRef] [PubMed]
  26. Piechocki, R.J.; Wang, X.; Bocus, M.J. Multimodal Sensor Fusion in the Latent Representation Space. Sci. Rep. 2023, 13, 2005. [Google Scholar] [CrossRef]
  27. Mondal, M.; Alam, T.; Ahmed, S.; Rahman, M. A Cross-Comparative Review of Machine Learning for Plant Disease Detection. Artif. Intell. Agric. 2024, 12, 127–151. [Google Scholar] [CrossRef]
  28. Lin, R.; Hu, H. Adapt and explore: Multimodal mixup for representation learning. Inf. Fusion 2024, 105, 102216. [Google Scholar] [CrossRef]
  29. Knights, V.; Petrovska, O.; Prchkovska, M. Machine Learning and Mathematical Modeling in Agricultural Development. In AI and Digital Transformation: Opportunities, Challenges, and Emerging Threats in Technology, Business, and Security, Proceedings of the ICITTBT 2025, Tirana, Albania, 29–30 May 2025; Dhoska, K., Spaho, E., Eds.; Communications in Computer and Information Science; Springer: Cham, Switzerland, 2026; Volume 2669. [Google Scholar] [CrossRef]
  30. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004; Available online: https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf (accessed on 2 January 2026).
  31. Julier, S.J.; Uhlmann, J.K. General decentralized data fusion with covariance intersection. In Handbook of Multisensor Data Fusion; Hall, D.L., Llinas, J., Eds.; CRC Press: Boca Raton, FL, USA, 2001; pp. 319–343. Available online: https://dsp-book.narod.ru/HMDF/2379ch12.pdf (accessed on 11 December 2025).
  32. Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation: Theory Algorithms and Software; John Wiley & Sons: New York, NY, USA, 2002. [Google Scholar] [CrossRef]
  33. Geyer, C.J. Practical Markov Chain Monte Carlo. Stat. Sci. 1992, 7, 473–511. [Google Scholar] [CrossRef]
  34. Gelman, A.; Rubin, D.B. Inference from Iterative Simulation Using Multiple Sequences. Stat. Sci. 1992, 7, 457–472. [Google Scholar] [CrossRef]
  35. Wang, X.; Sun, S.; Liu, Y.; Liu, Z. A Fast Covariance Union Algorithm for Inconsistent Sensor Data Fusion. IEEE Access 2021, 9, 143941–143949. [Google Scholar] [CrossRef]
  36. Mulla, D.J. Twenty Five Years of Remote Sensing in Precision Agriculture: Key Advances and Remaining Knowledge Gaps. Biosyst. Eng. 2013, 114, 358–371. [Google Scholar] [CrossRef]
  37. Liang, W.; Zhang, Y.; Chong, A.; Cochran Hameen, E.; Loftness, V. Exploring Gaussian Process Regression for Indoor Environmental Quality: Spatiotemporal Thermal and Air Quality Modeling with Mobile Sensing. Build. Environ. 2025, 281, 113143. [Google Scholar] [CrossRef]
  38. Dubois, D.; Fargier, H.; Prade, H. Multiple-Sources Information Fusion—A Practical Inconsistency-Tolerant Approach. In Proceedings of the 8th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU 2000), Madrid, Spain, 3–7 July 2000; pp. 1047–1054. Available online: https://hal.science/hal-03405306/document (accessed on 11 December 2025).
  39. Nandgude, N.; Singh, T.P.; Nandgude, S.; Tiwari, M. Drought Prediction: A Comprehensive Review of Different Drought Prediction Models and Adopted Technologies. Sustainability 2023, 15, 11684. [Google Scholar] [CrossRef]
  40. Wang, S.; Yu, Z.; Aorigele; Zhang, W. Study on the modeling method of sunflower seed particles based on the discrete element method. Comput. Electron. Agric. 2022, 198, 107012. [Google Scholar] [CrossRef]
  41. Kim, S.; Kim, S.; Hwang, S.; Lee, H.; Kwak, J.; Song, J.-H.; Jun, S.-M.; Kang, M.-S. Impact assessment of water-level management on water quality in an estuary reservoir using a watershed-reservoir linkage model. Agric. Water Manag. 2023, 280, 108234. [Google Scholar] [CrossRef]
  42. Antoska Knights, V.; Gacovski, Z. Methods for Detection and Prevention of Vulnerabilities in the IoT (Internet of Things) Systems [Internet]. Internet of Things—New Insights; IntechOpen: London, UK, 2024. [Google Scholar] [CrossRef]
  43. Knights, V.A.; Stankovski, M.; Nusev, S.; Temeljkovski, D.; Petrovska, O. Robots for Safety and Health at Work. Mech. Eng. Sci. J. 2015, 33, 275–279. Available online: https://www.mesj.ukim.edu.mk/journals/issue/download/17/9 (accessed on 15 November 2025).
  44. Antoska, V.; Jovanović, K.; Petrović, V.M.; Baščarević, N.; Stankovski, M. Balance Analysis of the Mobile Anthropomimetic Robot Under Disturbances—ZMP Approach. Int. J. Adv. Robot. Syst. 2013, 10, 206. [Google Scholar] [CrossRef]
  45. Shi, X.; Han, W.; Zhao, T.; Tang, J. Decision Support System for Variable Rate Irrigation Based on UAV Multispectral Remote Sensing. Sensors 2019, 19, 2880. [Google Scholar] [CrossRef]
Figure 1. Bayesian multimodal fusion, decision, and control framework integrating aCI, Kalman filtering, Bayesian likelihood-ratio testing, and PD feedback control for irrigation/fertilization.
Figure 2. Probabilistic framework for uncertainty-aware multimodal fusion, Bayesian regression, and decision control using Kalman filtering, adaptive covariance intersection, and heteroskedastic MCMC inference. τ∗ denotes a decision threshold used to trigger an alarm.
Figure 3. Posterior predictive calibration diagnostics: (a) PIT histogram for the normalized yield_index, showing near-uniform distribution with mild underdispersion, consistent with well-calibrated predictive uncertainty; (b) reliability diagram comparing predicted stress probabilities with empirical frequencies, demonstrating close adherence to the perfect-calibration diagonal.
Figure 4. Confusion matrix of stress classification based on posterior stress probabilities. Darker shades indicate higher sample counts, while lighter shades indicate lower counts. (a) Symmetric Bayes decision threshold τ = 0.5, yielding a balanced baseline classifier under equal error costs; (b) Safety- or F1-optimal threshold τ* ≈ 0.04, where τ denotes the threshold selected to optimize the F1 score, prioritizing stress detection and achieving high recall for the STRESS class at the expense of increased false alarms.
Figure 5. Posterior stress probability distributions for SAFE and STRESS samples. The dashed vertical line indicates the symmetric Bayes decision threshold τ = 0.5, while the dotted vertical line denotes the F1-optimal threshold τ = 0.04, where τ is selected to optimize the F1 score, prioritizing stress detection.
Figure 6. Accuracy and F1-score as a function of the decision threshold τ.
Figure 7. Relative control performance of Bayesian and rule-based irrigation strategies (mean ± 95% CI).
Table 1. Structural classification—Hybrid Bayesian inference and decision algorithm with MCMC sampling.
Layer | Algorithmic Principle | Role
Sensor fusion | Adaptive Covariance Intersection (aCI) | Robust fusion under unknown cross-correlations
State estimation | Bayesian/Kalman filter | Temporal tracking of agro-climatic state
Regression layer | Heteroskedastic Bayesian regression (inferred via emcee MCMC) | Models time-varying predictive uncertainty
Decision layer | Bayesian Likelihood-Ratio Test (LRT) | Converts posterior probabilities into early-warning alarms
Control layer | PD control/feedback law | Adaptive irrigation–fertilization control
Table 2. Summary of the posterior covariance trace and 95% credible-interval width obtained under both fusion strategies.
ρ | Mean Posterior Covariance Trace (KF) | Mean Posterior Covariance Trace (aCI) | Relative Change | 95% CI Width (KF) | 95% CI Width (aCI) | MSE (KF) | MSE (aCI)
0.0 | 0.0218 | 0.0341 | +56.4% | 0.290 | 0.362 | 0.0218 | 0.0232
0.3 | 0.0218 | 0.0341 | +56.4% | 0.290 | 0.362 | 0.0254 | 0.0261
0.6 | 0.0218 | 0.0341 | +56.4% | 0.290 | 0.362 | 0.0288 | 0.0287
0.9 | 0.0218 | 0.0341 | +56.4% | 0.290 | 0.362 | 0.0325 | 0.0316
Table 3. Convergence diagnostics for the heteroskedastic Bayesian regression posterior sampler.
Diagnostic | Value
Integrated autocorrelation time | 34–52
Gelman–Rubin R̂ | 1.02–1.05
Effective sample size | 950–1600
Table 4. Key posterior parameters (median [25%, 75%]).
Parameter | Median | 25% | 75%
β5 (temperature) | 0.0835 | 0.0582 | 0.1099
β6 (humidity) | −0.0320 | −0.0397 | −0.0241
β10 (CTI) | −15.365 | −19.23 | −11.35
γ2 (phosphorus) | −0.933 | −4.618 | −0.095
γ5 (temperature) | −1.682 | −6.564 | −0.021
Table 5. Posterior predictive performance metrics calculated from posterior draws.
Metric | Value
RMSE | 0.43
Brier score | 0.112
Negative log-likelihood (NLL) | 0.285
Expected calibration error (ECE) | 0.036
Table 6. Stress classification performance at representative decision thresholds.
Threshold Strategy | Threshold τ | Accuracy | F1-Score (STRESS) | Interpretation
Symmetric Bayes | 0.50 | moderate | moderate | Balanced theoretical baseline
Accuracy-optimal | ≈0.94 | ≈0.69 | 0.00 | Trivial all-SAFE classifier
F1-optimal | ≈0.04 | ≈0.31 | ≈0.47 | Safety-oriented, high recall
Table 7. Utility of heteroskedastic modeling.
Model | NLL ↓ | Brier ↓ | CI Width ↓
Homoskedastic | 0.517 | 0.201 | 2.41
Heteroskedastic | 0.285 | 0.112 | 1.76
↓ indicates that lower values correspond to better performance.
Table 8. Ablation study results showing the effect of removing key model components on uncertainty and decision quality.
Component Removed | ΔNLL ↑ | ΔBrier ↑
No aCI fusion | +0.19 | +0.13
No drone indices (NDVI, SAVI, CTI) | +0.25 | +0.21
No MCMC (MAP only) | +0.31 | +0.18
Fixed threshold π = 0.5 | 0.14 | +0.11
↑ indicates that higher values correspond to a larger degradation in performance.
Table 9. Three-fold Bayesian cross-validation.
Metric | Mean ± SD
RMSE | 15.08 ± 1.78
NLL | 3.01 ± 0.31
Brier score | 0.318 ± 0.035
Accuracy | 0.69 ± 0.065
F1 score | 0.333 ± 0.057
ECE | 0.254 ± 0.074
