Revisited Bayesian Sequential Indicator Simulation: Using a Log-Linear Pooling Approach

Abstract: It has been more than a decade since sequential indicator simulation was proposed to model geological features. Owing to its simplicity and ease of implementation, the algorithm attracted practitioners' attention and rapidly became available in commercial software programs for modeling mineral deposits, oil reservoirs, and groundwater resources. However, when the algorithm uses only hard conditioning data, its inability to model long-range geological features has long been debated in geostatistical contexts. To circumvent this difficulty, one or several pieces of soft information can be introduced into the simulation process to assist in reproducing such large-scale settings. In this work, an alternative form of Bayesian sequential indicator simulation is developed that integrates a log-linear pooling approach, aggregating the probabilities reported by two sources of information, hard and soft data. The novelty of this revisited Bayesian technique is that it allows several degrees of influence of the hard and soft data to be incorporated in the simulation process by assigning weights to their probabilities. In this procedure, the conditional probability of the soft data can be estimated directly from the hard conditioning data and then employed, with its corresponding weight of influence, to update the weighted conditional probability simulated from the same hard conditioning and previously simulated data in a sequential manner. To test the algorithm, a 2D synthetic case study is presented. The findings show that the resulting maps obtained from the proposed revisited Bayesian sequential indicator simulation outperform other techniques in the reproduction of long-range geological features while remaining consistent with the expected local and global statistical measures.


Introduction
High-quality probabilistic modeling of categorical variables (e.g., geological domains based on lithological descriptions) is a vital demand in many geoscience disciplines, as it provides the basis for modeling continuous variables (e.g., grades of elements and rock properties) throughout the domain of study. This modeling process has several applications in mineral resource estimation [1,2], static reservoir modeling [3,4], and modeling of aquifers in hydrogeology [5], to name a few. To model categorical variables, truncated Gaussian simulation [6,7], plurigaussian simulation [8,9], multiple-point statistics [5], and sequential indicator simulation [10][11][12], among others, have received significant attention. The latter, sequential indicator simulation, is the most popular owing to its simplicity and straightforwardness, and the algorithm itself is available in several commercial software programs. The resulting models are acceptable where there are no large-scale geological patterns. However, in the case of curvilinear or long-range geological features, the algorithm produces very patchy and unstructured results, which is a legitimate criticism of this method [13,14]. The reason is that sequential indicator simulation only takes into account the two-point statistical measures of the geological domains informed by hard data. To solve this issue, several variants of sequential indicator simulation have been developed that consider soft information as secondary data to inform the large-scale geological variability at the target simulation nodes, which substantially improves the applicability of the method. In this context, hard and soft data refer to values measured at boreholes via sampling points and to interpretive geological models at target grid nodes, respectively.
A possible solution for including secondary/soft information in the simulation process is to use the probability aggregation concept [15]. A thorough evaluation of different probability aggregation methods, with special attention to their applicability to geoscientific data, was conducted by Allard et al. [16]. They showed that log-linear pooling outperforms linear pooling when integrated with the truncated Gaussian and Boolean models in geostatistics. The integration of log-linear pooling into several geostatistical algorithms has already been developed for various applications: interpolation of satellite images [17], 3D multiple-point statistics using 2D training images [18], multiple-point geostatistical simulations using secondary exhaustive information [19], and Bayesian sequential Gaussian simulation for modeling continuous hydrogeophysical data [20].
Probability aggregation has also been employed specifically in sequential indicator simulation. Among others, Bayesian sequential indicator simulation [21] was introduced to integrate a simple Bayesian updating rule for constructing a local posterior distribution of the geological domains at the target grid node. This work was developed to model lithofacies in static reservoirs, where the soft data for the different lithofacies can be informed by seismic attributes. Another form of Bayes' law was introduced by Ortiz and Deutsch [22], where the indicator formalism in this sequential simulation technique is updated by using the multiple-point configuration of the soft data. That method was proposed to simulate continuous variables in the context of mineral resource estimation, where the soft conditional probability is informed by production data such as blastholes. However, in these methods, the hard and soft data use equal constant weights, implying identical influence on the final simulation results. To overcome this limitation, a revised version of Bayesian sequential indicator simulation, integrated with a log-linear pooling approach, is proposed in this study for modeling categorical variables (e.g., geological domains). The soft information in this enhanced simulation process is inferred from an interpolation technique using only the hard conditioning data at the sampling points.
The outline of the paper is as follows. In Section 2, the methodology is explained along with its main mathematical properties. Then, the proposed approach is tested and validated through a synthetic case study in Section 3. Finally, Section 4 provides the discussion and conclusions.

Conventional Sequential Indicator Simulation
Sequential indicator simulation [10][11][12] is a stochastic methodology for modeling M categories that are exhaustive and mutually exclusive at all data locations. This means that exactly one of the categories must prevail at each individual sample point.
To perform the sequential indicator simulation algorithm, first, the categorical variable T_h informed by the hard conditioning data h at sample locations χ is transformed into a set of M hard indicator variables, characterized as:

I(χ; µ) = 1 if T_h(χ) = µ, and 0 otherwise, for µ = 1, . . ., M. (1)

Since exactly one category prevails at each location χ, the indicators satisfy ∑_{µ=1}^{M} I(χ; µ) = 1. In the second step, a random path is defined that visits each node of the target grid only once. The next step is to estimate the corresponding category at the randomly selected target grid node χ', using the indicators obtained through Equation (1) at the hard conditioning data. Different geostatistical methodologies can be utilized for this purpose; among others, the simple kriging method provides promising results. This geostatistical interpolation paradigm is built on three constraints [23]: (a) the estimator is a weighted linear combination of the available hard data (linearity constraint), (b) the estimator is unbiased, that is, the expectation of the error is 0 (unbiasedness constraint), and (c) the error variance is minimum (optimality constraint). The first restriction necessitates writing the estimator as a linear combination (weighted average) of the neighboring hard data to estimate the variable of interest T_h(χ') at target grid node χ':

T_h^*(χ') = ∑_{β=1}^{υ} ω_β^SK(χ') T_h(χ_β) + [1 − ∑_{β=1}^{υ} ω_β^SK(χ')] m, (2)

where the υ data are the neighboring hard conditioning data, m is the global declustered mean of the variable T_h, and ω_β^SK(χ') is the weight assigned to the datum at location χ_β (β = 1, . . ., υ). The weights needed in Equation (2) are obtained by solving a covariance system for each χ':

∑_{β=1}^{υ} ω_β^SK(χ') C(χ_α − χ_β) = C(χ_α − χ'), α = 1, . . ., υ, (3)

where C is the covariance of variable T. Solving the kriging system in Equation (3) to obtain the weights ω_β^SK requires knowledge of the covariance of variable T.
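As an illustration, the simple kriging system of Equation (3) can be solved numerically. The following sketch computes the weights of Equation (2); the exponential covariance model and the data coordinates are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def exp_cov(h, sill=1.0, a=10.0):
    """Illustrative isotropic exponential covariance C(h) = sill*exp(-3h/a)."""
    return sill * np.exp(-3.0 * np.asarray(h) / a)

def sk_weights(data_xy, target_xy, cov=exp_cov):
    """Solve the simple kriging system C w = c0 (Equation (3)) for the weights."""
    d = np.linalg.norm(data_xy[:, None, :] - data_xy[None, :, :], axis=-1)
    C = cov(d)                                               # data-to-data covariances
    c0 = cov(np.linalg.norm(data_xy - target_xy, axis=-1))   # data-to-target covariances
    return np.linalg.solve(C, c0)

data = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
w = sk_weights(data, np.array([1.0, 1.0]))
# The SK estimate is then  sum(w * T(data)) + (1 - w.sum()) * m  (Equation (2)).
```

Note that the closest datum receives the largest weight, and the complement of the weight sum goes to the mean m, as the unbiasedness constraint requires.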
In this respect, the linear model of regionalization is widely used to fit such covariances, owing to its mathematical simplicity and tractability [9,23]. In this model, the covariance C is defined as a weighted sum of L basic covariances, also called basic nested structures:

C(h) = ∑_{l=1}^{L} b_l C_l(h), (4)

where, for each structure (l = 1, . . ., L), b_l is the positive sill of the basic permissible covariance model C_l(h).
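A minimal sketch of Equation (4), assuming two illustrative nested structures (spherical plus exponential); the sills and ranges are arbitrary choices for demonstration only.

```python
import numpy as np

def spherical(h, a):
    """Spherical basic structure with range a (0 beyond the range)."""
    h = np.minimum(np.asarray(h, dtype=float) / a, 1.0)
    return 1.0 - 1.5 * h + 0.5 * h ** 3

def exponential(h, a):
    """Exponential basic structure with practical range a."""
    return np.exp(-3.0 * np.asarray(h, dtype=float) / a)

def nested_cov(h, structures):
    """Linear model of regionalization: C(h) = sum_l b_l * C_l(h).

    structures: list of (sill b_l, basic model C_l, range a_l)."""
    return sum(b * model(h, a) for b, model, a in structures)

structures = [(0.7, spherical, 40.0), (0.3, exponential, 120.0)]
C0 = nested_cov(0.0, structures)   # at h = 0 this equals the total sill 0.7 + 0.3
```

At lag zero the nested covariance returns the total sill, and it decreases monotonically with distance, as a permissible covariance must.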
Therefore, one can use the simple kriging paradigm to estimate the corresponding category µ at the target grid node. Since the initial data are already converted to indicators, the aim of simple kriging in this step is to establish the conditional probability of occurrence of each indicator I(χ; µ) at the corresponding target grid node χ':

I^*(χ'; µ) = ∑_{β=1}^{υ} ω_β^SIK(χ'; µ) I(χ_β; µ) + [1 − ∑_{β=1}^{υ} ω_β^SIK(χ'; µ)] π_µ, (5)

where the υ data comprise the neighboring hard conditioning indicators and previously simulated indicator values; π_µ is the global declustered proportion of each category (i.e., the prior probability or prior proportion), which can equivalently be defined as E{I(χ; µ)}; and ω_β^SIK(χ'; µ) is the weight assigned to the indicator variable I(χ_β; µ) at location χ_β (β = 1, . . ., υ). The weights needed in Equation (5) are obtained by solving a covariance matrix for each χ' using Equation (3). The permissible model C(h) can be inferred from the direct calculation of the covariance of each indicator variable I(χ; µ), or it can be deduced from the calculation of the variogram of the indicator variable; in the latter case, the variogram is converted to a covariance before being embedded into Equation (3) for computing the corresponding weights ω_β^SIK(χ'; µ). Since each of the M categories is estimated independently, order relation deviations are expected in the estimated conditional probabilities. This means that the probabilities do not always sum to one, and negative probabilities can occur. The reason is that some of the weights obtained by solving the simple kriging system via Equation (3) take negative or high values, which produce estimated indicators that are negative or greater than 1 [24][25][26]. Then, in the fourth step, these order relation deviations must be rectified [26]. In this respect, one can apply a bounds correction, whereby all negative estimated values are set to 0 and all values greater than 1 are set to 1. A normalization of the probabilities is then needed to force the sum of the estimated conditional probabilities to 1 at every target grid node, ∑_{µ=1}^{M} I^*(χ'; µ) = 1 [23]. Once the probabilities of each indicator are estimated and their order relation problems are corrected, one can define any ordering of the M categories and build a cumulative distribution function (cdf-type function). Afterward, the fifth step consists of drawing a random number U from a uniform distribution in [0, 1] by Monte Carlo simulation; the simulated category T_h^(1) at location χ' is then obtained as the quantile of the cdf associated with that random number. In the sixth step, the simulated value is added to the hard conditioning data, and the algorithm proceeds to the next target node along the identified random path, repeating steps three to five. To generate another realization T_h^(i), i = 2, . . ., r, with r the total number of realizations, one repeats the entire algorithm with a different random path. These simulated values are obtained from hard conditioning data h only. However, this methodology performs poorly in reproducing the connectivity or large-scale geological features between sample points. In the following, we show how soft information can be incorporated to simulate the values at target nodes, enforcing the connectivity in the outcomes.
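The per-node steps above (probability estimation, order-relation correction, cdf draw, and conditioning on the simulated value) can be sketched as follows. For brevity, an inverse-distance weighting of the neighboring indicators stands in for the indicator kriging of Equation (5); the 1D grid and hard data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
M = 2                                   # number of categories
grid = np.arange(20, dtype=float)       # target nodes (1D coordinates)
cond = {2.0: 0, 15.0: 1}                # hard data: location -> category

path = rng.permutation(len(grid))       # step 2: random path over the grid
sim = {}
for idx in path:
    x = grid[idx]
    locs = np.array(list(cond) + list(sim))
    cats = np.array(list(cond.values()) + list(sim.values()))
    w = 1.0 / (np.abs(locs - x) + 1e-9)             # step 3: stand-in estimate
    p = np.array([(w * (cats == m)).sum() / w.sum() for m in range(M)])
    p = np.clip(p, 0.0, 1.0)                        # step 4: order-relation fix
    p /= p.sum()                                    # renormalize to sum to 1
    u = rng.random()                                # step 5: Monte Carlo draw
    sim[x] = min(int(np.searchsorted(np.cumsum(p), u)), M - 1)  # inverse cdf
    # step 6: sim now conditions the later nodes on this simulated value

realization = np.array([sim[x] for x in grid])
```

At the sampled locations the stand-in estimate collapses onto the conditioning category, so the realization honors the hard data, as sequential indicator simulation requires.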

Revisited Bayesian Sequential Indicator Simulation
If secondary evidence, known as soft information, is available at the target node χ', the conditional probability of occurrence of each category µ obtained from Equation (5) can be updated accordingly. Let us denote by Ψ the event that a given category occurs at the target grid node, and let ∆_i (i = 1, . . ., n) represent n pieces of information about that underlying category, inferred from the hard conditioning data, soft data, or any other source of information. Indeed, we aim at approximating the probability P(Ψ|∆_1, . . ., ∆_n) based upon the coexisting information of the n conditional probabilities. Probability aggregation [16] is used to construct an approximation of the true conditional probability by utilizing an aggregation operator Ω, also known as the pooling operator or pooling formula:

P(Ψ|∆_1, . . ., ∆_n) ≈ Ω(P(Ψ|∆_1), . . ., P(Ψ|∆_n)). (6)

In the case of having a prior probability P(Ψ|∆_0) for the category sought, Equation (6) can be generalized to:

P(Ψ|∆_0, ∆_1, . . ., ∆_n) ≈ Ω(P(Ψ|∆_0); P(Ψ|∆_1), . . ., P(Ψ|∆_n)). (7)

Among others, aggregation operators based on the multiplication of probabilities appear to be better suited to geoscience than those formulated with an addition operator [16]. These product-based pooling operators are linear operators of the logarithms of the probabilities:

log P(Ψ|∆_1, . . ., ∆_n) = log Z + ∑_{i=1}^{n} λ_i log P(Ψ|∆_i), (8)

where Z is a normalizing constant. This expression can equivalently be written as a multiplication of probabilities:

P(Ψ|∆_1, . . ., ∆_n) = (1/Z) ∏_{i=1}^{n} P(Ψ|∆_i)^{λ_i}, (9)

where the λ_i are positive weights; the restriction ∑_{i=1}^{n} λ_i = 1 guarantees external Bayesianity, although in Equation (9) there is no formal restriction on the weights.
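A sketch of the log-linear pooling of Equations (8) and (9), with renormalization over the categories playing the role of 1/Z; the example probabilities below are illustrative.

```python
import numpy as np

def log_linear_pool(probs, weights, prior=None, prior_weight=0.0):
    """Log-linear pooling: pooled P is proportional to prod_i P_i^lambda_i.

    probs: (n_sources, M) conditional probabilities P(category | source_i).
    weights: (n_sources,) pooling weights lambda_i."""
    log_p = np.asarray(weights) @ np.log(np.asarray(probs))
    if prior is not None:                        # optional prior term of Eq. (7)
        log_p += prior_weight * np.log(np.asarray(prior))
    p = np.exp(log_p - log_p.max())              # subtract max for stability
    return p / p.sum()                           # 1/Z normalization

# Two sources over M = 2 categories, both favoring category 0:
pooled = log_linear_pool([[0.8, 0.2], [0.6, 0.4]], [0.5, 0.5])
```

With equal weights of 0.5, the pooled probability is the normalized geometric mean of the two sources, which is the multiplicative counterpart of simple averaging.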
In the context of sequential indicator simulation using secondary data, with one piece of hard conditioning information ∆_h and one piece of soft information ∆_s, log-linear pooling simplifies to combining the information provided by these two sources:

P(Ψ|∆_h, ∆_s) = (1/Z) P(Ψ|∆_0)^{1−λ_h−λ_s} P(Ψ|∆_h)^{λ_h} P(Ψ|∆_s)^{λ_s}, (10)

where P(Ψ|∆_0) is the prior probability of each category, π_µ, and λ_h and λ_s are the weights that can be arbitrarily assigned to the hard and soft data; together with the prior weight 1 − λ_h − λ_s, the weights always sum to one. Therefore, the conditional probability of occurrence of each category µ obtained from Equation (5) can be updated by using Equation (10) as:

I_U^*(χ'; µ) = (1/Z) π_µ^{1−λ_h−λ_s} I_h^*(χ'; µ)^{λ_h} I_s^*(χ'; µ)^{λ_s}, (11)

where I_s^* is the soft information, whose conditional probability of occurrence of category µ can be reported from an interpretive geological block model at the target grid node. For instance, wireframing or hand contouring can provide such a geological block model [2]. However, it is somewhat tedious to produce such probabilistic information from a deterministic interpretive model of the geological domains in the region of study; an alternative way to convert this deterministic model to conditional probabilities is thoroughly explained in [27]. In this study, we propose to obtain the I_s^* values from simple kriging of the indicators at the hard conditioning data. These estimated values provide probabilistic information on the occurrence of category µ at the corresponding target grid node, I_s^*(χ'; µ). To obtain I_h^*(χ'; µ), in contrast, we take both the neighboring hard conditioning indicator data and the previously simulated indicator values, as is typical practice in conventional sequential indicator simulation (Section 2). In practice, the soft information I_s^* in this probability aggregation method helps the conventional sequential indicator simulation to better reproduce long-range geological structures. The most important issue in this approach is to properly assign the weights of the hard (λ_h) and soft (λ_s) data. In the case of λ_h = 1 and λ_s = 1, Equation (11) simplifies to the traditional Bayesian sequential indicator simulation as proposed in [21].
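The updating rule of Equation (11) can be sketched as a small function that combines the hard-data indicator probability, the soft probability, and the prior proportion with their respective weights and renormalizes over the categories; the input probabilities below are illustrative, not from the case study.

```python
import numpy as np

def updated_probability(p_hard, p_soft, prior, lam_h, lam_s):
    """Equation (11): pi^(1-lh-ls) * Ih*^lh * Is*^ls, renormalized (1/Z)."""
    p = (np.asarray(prior, dtype=float) ** (1.0 - lam_h - lam_s)
         * np.asarray(p_hard, dtype=float) ** lam_h
         * np.asarray(p_soft, dtype=float) ** lam_s)
    return p / p.sum()

prior = [0.5, 0.5]           # prior proportions pi_mu
p_hard = [0.7, 0.3]          # from indicator kriging of hard + simulated data
p_soft = [0.9, 0.1]          # from simple kriging of the hard data alone

# lam_h = 1, lam_s = 1 recovers the traditional Bayesian updating of [21];
# lam_h = 1, lam_s = 2 strengthens the pull of the soft data.
p_doyen = updated_probability(p_hard, p_soft, prior, 1.0, 1.0)
p_revis = updated_probability(p_hard, p_soft, prior, 1.0, 2.0)
```

Raising the soft weight above one sharpens the pooled probability toward the category favored by the soft map, which is exactly the mechanism used later to promote long-range connectivity.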

Optimal Evaluation of Weighting Mechanism
The proposed log-linear pooling approach in Equation (10) allows one to assign different or equal weights, for a given category, to the probabilities coming from the different sources of information: the prior proportion and the primary (hard) and secondary (soft) data. These weights dictate the intensity of each source's influence on the final probability aggregation obtained through Equation (11). The proposed methodology highly depends on the proper assignment of these weights, and an improper assignment yields completely different results. The weights can be assigned equally or ranked differently, for instance, based on expert opinion. Allard et al. [16] suggested using a log-likelihood approach to identify these weights optimally. Nussbaumer et al. [20] proposed a sophisticated multi-step approach that changes the weights multiple times during the simulation process following a Monte Carlo-type search. In this study, we used an empirical technique, gradually varying the weights in the interval [0, 2], which allows convenient testing of our algorithm. The algorithm for identifying the optimal weights is as follows:
1. Testing the algorithm with 24 different experiments of λ_h and λ_s as provided in Table 1.
2. Evaluating the performance of the experiments by calculating the error in the reproduction of the proportion of indicators.
3. Evaluating the performance of the experiments by calculating the error in the matching percentage with the reference map.
4. Evaluating the performance of the experiments by calculating the error in the reproduction of the connectivity measures of the categories.
5. Evaluating the performance of the experiments by calculating the error in the reproduction of the spatial continuity of the categories.
6. Fitting a polynomial function to interpolate the errors.
7. Obtaining the optimal weights based on the minimum error.
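The weight search above can be sketched as a simple grid search. The error function here is a synthetic stand-in (a quadratic bowl with a known minimum); in the paper, each (λ_h, λ_s) trial of Table 1 would instead be scored by the four reproduction errors, and the grids below are illustrative rather than the exact 24 trials.

```python
import numpy as np

lam_h_grid = np.arange(0.0, 2.01, 0.5)   # candidate hard-data weights
lam_s_grid = np.arange(0.5, 2.01, 0.5)   # candidate soft-data weights

def total_error(lam_h, lam_s):
    # Stand-in for steps 2-5: pretend the best trial is (1.0, 2.0).
    return (lam_h - 1.0) ** 2 + (lam_s - 2.0) ** 2

trials = [(lh, ls, total_error(lh, ls))
          for lh in lam_h_grid for ls in lam_s_grid]

# Steps 6-7: fit a smooth surface to the errors and take its minimum;
# with this synthetic stand-in the raw argmin over the trials suffices.
best = min(trials, key=lambda t: t[2])
```

The design choice is deliberate: an exhaustive grid over a small weight range is cheap relative to the simulations themselves, and fitting a polynomial to the trial errors lets the optimum fall between grid points.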
In the above criteria, reproduction of proportions refers to examining the reproduction of π_µ in the simulation results. The connectivity measure τ(d) is defined as the probability that two target grid nodes belonging to category µ are connected [28]:

τ(d) = P(χ ⇔ χ + d | χ, χ + d ∈ µ), (12)

where τ(d) is a non-increasing function as d increases. The connectivity function can be estimated by:

τ̂(d) = #N(χ ⇔ χ + d | χ, χ + d ∈ µ) / #N(χ, χ + d ∈ µ), (13)

where τ̂(d) is the estimate of the connectivity function for distance d, #N(χ ⇔ χ + d | χ, χ + d ∈ µ) is the number of pairs of target grid nodes, separated by a vector of distance d, that belong to category µ and are connected, and #N(χ, χ + d ∈ µ) is the number of pairs of target grid nodes, separated by a vector of distance d, that belong to category µ and that may or may not be connected.
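The estimator of Equation (13) can be sketched along one axis of a 2D categorical grid: among node pairs at lag d that both belong to category µ, count the fraction lying in the same connected component. SciPy's component labeling is used here as a convenience; the grid below is an illustrative example, not the case-study map.

```python
import numpy as np
from scipy.ndimage import label

def connectivity(grid, mu, d, axis=0):
    """Estimate tau(d) for category mu along the given axis (Equation (13))."""
    comp, _ = label(grid == mu)              # connected components of category mu
    a = comp.take(range(grid.shape[axis] - d), axis=axis)
    b = comp.take(range(d, grid.shape[axis]), axis=axis)
    both = (a > 0) & (b > 0)                 # pairs with both nodes in mu
    if both.sum() == 0:
        return 0.0
    # connected pairs share the same component label
    return float(((a == b) & both).sum() / both.sum())

# A band of category 1 fully connected along axis 1:
g = np.zeros((5, 10), dtype=int)
g[2, :] = 1
tau = connectivity(g, 1, d=4, axis=1)
```

For the fully connected band every qualifying pair shares one component, so the estimate reaches its upper bound of 1; breaking the band would drive it toward 0 at lags spanning the gap.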
Concerning the evaluation of spatial continuity, the variogram is a satisfactory measure. The experimental variogram γ(d) computes the average dissimilarity between data separated by a vector d; it is calculated as half the average squared difference between the components of every data pair [9,29]:

γ(d) = [1 / (2N(d))] ∑_{N(d)} [I(χ) − I(χ + d)]², (14)

where I(χ) − I(χ + d) is a d-increment of the indicator variable I and N(d) is the number of pairs separated by d. Through the synthetic case study, we show how the proposed method works and how these weights can be optimally identified.
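The experimental indicator variogram of Equation (14) reduces, in 1D, to half the mean squared d-increment over the N(d) pairs; the indicator sequence below is illustrative.

```python
import numpy as np

def indicator_variogram(ind, d):
    """Experimental variogram of a 1D indicator series at integer lag d."""
    inc = ind[:-d] - ind[d:]                     # d-increments I(x) - I(x + d)
    return 0.5 * float(np.mean(inc.astype(float) ** 2))

ind = np.array([1, 1, 1, 0, 0, 0, 1, 1])
gamma1 = indicator_variogram(ind, 1)   # 2 category transitions out of 7 pairs
```

For an indicator variable the squared increment is 1 exactly at category transitions, so the variogram at lag d is half the transition frequency, here 0.5 × 2/7.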

Synthetic Case Study
To test the proposed methodology and evaluate the optimal weighting parameters, a synthetic case study is considered. For this purpose, one categorical variable comprising two indicators/categories is simulated non-conditionally on a 2D 300 m × 300 m domain of 300 × 300 nodes by using plurigaussian simulation [8] with an anisotropic spatial continuity. Ten realizations are produced, and one (the reference map hereafter) is selected such that it shows long connectivity along the easting and relatively short connectivity along the elevation coordinates (Figure 1A). To mimic the vertical sampling pattern of actual case studies, 100 points are randomly sampled from the reference map along elevation to constitute four synthetic boreholes throughout the domain. To evaluate the algorithm properly, two target areas are of paramount importance; these are identified by two ellipses (I and II) in Figure 1A. In fact, we are interested in producing realizations that honor the connectivity of the categories through these two critical areas when there are few hard conditioning data points. Therefore, only one category was sampled in each of these two target areas (Figure 1B). The lack of data in these two areas provides an excellent signature for evaluating the proposed method. Consequently, this synthetic dataset is used in the proposed algorithm to compare the simulation results with the reference map. The connectivity on the selected map, to be considered the reference map, is also verified by computing the connectivity functions [23] over this realization along the specified coordinates (Equation (13)).
As can be observed from Figure 2, there is approximately a 50% and a 0% probability that categories A and B are connected from west to east and from south to north, respectively. This interpretation is obtained by looking at the connectivity measures at large lag distances, where they reach a steady range around a lag of 240. This is also compatible with the visual inspection of the map illustrated in Figure 1A. Therefore, this is an interesting reference map for further analysis in this study.
Once the synthetic sampling points are identified, the next step in the proposed algorithm is to produce a map that reports the soft data at the target nodes, to be used subsequently for log-linear pooling probability aggregation in the revisited Bayesian sequential indicator simulation through Equation (6). In this study, simple kriging is used to map the soft information for each indicator. Therefore, after variogram analysis, the categories/indicators are estimated on the target grid nodes using up to 40 conditioning data in a moving neighborhood configuration (Figure 3). Owing to the indicator values at the synthetic sample points, these maps are deemed the probability of occurrence of the corresponding category P(Ψ|∆_s) at the target node, and they can be used as the soft information I_s^* in Equation (7). One advantage of this map is that one can recognize the probability of connectivity between the boreholes, which is favorable information for modeling categories with long connectivity.
The next step in the revisited Bayesian sequential indicator simulation is to implement the simulation algorithm using the updating scheme of Equation (11) to update the probability of occurrence of the corresponding category, I_U^*(χ'; µ). As mentioned earlier, the updated probability I_U^*(χ'; µ) highly depends on the assignment of the weights λ_h and λ_s. A method is used to test different weighting schemes in Equation (11) to evaluate the optimum weighting values in the revisited Bayesian sequential indicator simulation. In this technique, λ_h varies in [0, 2] and λ_s in [0.5, 2], with the prior proportion weighted by 1 − λ_h − λ_s. These 24 trials are shown in Table 1.
In this examination, the trials with λ_h = 0 and λ_s ∈ [0.5, 2] signify that the revisited Bayesian sequential indicator simulation is based only on the soft information (pure soft simulation hereafter). The trials with λ_s = 0 and λ_h ∈ [0.5, 2] denote that the revisited Bayesian sequential indicator simulation relies solely on the hard conditioning data (pure hard simulation hereafter). The trial with λ_h = 1 and λ_s = 0 represents the conventional sequential indicator simulation. The trial with λ_h = 1 and λ_s = 1 presents the traditional Bayesian updating approach as proposed in Doyen et al. [21] (Doyen's simulation hereafter). There are some links between Doyen's simulation and sequential indicator simulation using collocated cokriging, through different implementation mechanisms [13]. All these subgroups are special cases of the revisited Bayesian sequential indicator simulation proposed in this study. To be clearer about these subgroups, another subgroup is defined, called revisited Bayesian simulation hereafter, for which λ_h and λ_s vary within [0, 2] and [0.5, 2], respectively, excluding the weights belonging to the pure soft, pure hard, traditional, and Doyen's simulations. A summary of these techniques is provided in Table 2, and a note in Table 1 links its trials to the subgroups in Table 2.
Table 2. Comparison of some key properties of the log-linear pooling aggregation approach in this study; weight of the prior proportion for the category sought is 1 − λ h − λ s .

Implementation of the simulation for all trials was conducted under a moving neighborhood scheme with up to 40 hard conditioning data and 40 previously simulated data. The sequential paradigm followed a random sequence while using a multiple-grid simulation procedure, and 100 realizations were generated. Some realizations from each subgroup (presented in Table 2) are selected for visualization (Figure 4); the realizations for the other trials are provided in Appendix A (Figure A1). As can be seen, the results obtained using only pure soft information dramatically fail to reproduce the shape of categories A and B and do not show the underlying structure. In pure hard simulation, when λ_h ≤ 0.5, the resulting maps are quite patchy (Figure 4B); however, by increasing the weight of the hard data, the shape of category B is regenerated to some extent. It can be seen that the reproduction of connectivity is highly controlled by the weight assigned to the soft data (Figure 4D,F). One particular case is λ_h = 1 and λ_s = 0 (Figure 4F), which shows the result of traditional sequential indicator simulation. The shape of category B on the left and right sides is more or less reproduced, but the continuity of category B is broken around area II, shown by a white ellipse in Figure 4A. This is a typical problem of traditional sequential indicator simulation, in which geological features with long connectivity cannot be reproduced properly. Doyen's simulation, which represents conventional Bayesian updating in Equation (11), reproduces the general shape of category B where enough conditioning data are available, but it again fails to produce the connectivity of interest in area II (Figure 4E). In these figures, the revisited Bayesian simulation with λ_h = 1 and λ_s = 2 appears to be the most promising.

Validation
However, the results depicted in Figure 4 are only single realizations of each trial, and one might be interested in examining the probability maps obtained from the 100 realizations, which assess the uncertainty in the categories at a local (node-by-node) scale for better validation. These maps are constructed by calculating, for each grid node, the frequency of occurrence of category B over the 100 conditional realizations (Figure 5). They complement the reference map insofar as they show the risk of finding a category different from the expected one. The sectors with little uncertainty are those associated with a high probability of the given category (painted in red in Figure 5), indicating little risk of not finding category B, or those associated with a very low probability (painted in dark blue in Figure 5), indicating that one can be quite sure of not finding this category, while the remaining sectors (painted in light blue, green, or yellow in Figure 5) are more uncertain. As can be observed, the revisited Bayesian simulation method with λ h = 1 and λ s = 2 provides the most prominent results, giving a high probability of connectivity close to areas I and II, as expected from the reference map. The rest of the probability maps are presented in Appendix B (Figure A2).
From the probability maps, one can build a categorical map by selecting, for each grid node, the most probable category. This model can then be compared to the reference map in order to identify the grid nodes for which the category matches the most probable one, as well as the grid nodes for which there is a mismatch (Figure 6). The latter grid nodes are mostly located near the border of the two categories. As can be observed, revisited Bayesian simulation (Figure 6C) and traditional simulation (Figure 6E) provide better results than the other experiments.

Assessment of Optimal Weights
It was shown by visual inspection that the revisited Bayesian simulation method with weights λ h = 1 and λ s = 2 delivers the most promising results compared to the reference dataset; this assessment was qualitative, relying only on the realizations and probability maps. To show this quantitatively as well, a validation step is performed to assess the accuracy of the obtained weights. To secure the optimum weights of λ h and λ s , four criteria were assessed by calculating the error deduced from each. In all these criteria, the underlying parameter is calculated over the individual realizations and then averaged. The average value is compared with the original parameter obtained from the reference map, and the difference (error) is taken as the main criterion for evaluating the optimal weights of λ h and λ s . For each criterion, 24 experiments are available, and their errors are calculated. Different polynomial functions are tested to interpolate the errors within the trials; for each criterion, the one that offers the highest R² is selected.
As can be seen from Figure 7, there are two optimal weight candidates worth discussing. When λ h is equal to 1 or 1.5 and λ s is equal to 2, the errors in variogram reproduction along Easting (Figure 7G), connectivity for both categories (Figure 7C,E), and, to a lesser extent, variogram reproduction along Northing (Figure 7H) are quite low and almost equivalent. However, this is not true when one considers the errors
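The weight-assessment procedure (fit polynomial functions to the per-trial errors and keep the fit with the highest R²) can be sketched as below. The error surface, the 4 × 6 grid of candidate weights (24 points, mirroring the 24 experiments), and the polynomial degrees are illustrative stand-ins, not the study's actual errors.

```python
import numpy as np

def fit_poly_surface(lam_h, lam_s, err, degree):
    """Least-squares polynomial surface err ~ f(lam_h, lam_s).

    Returns the coefficients and the R^2 of the fit.
    """
    terms = [lam_h**i * lam_s**j
             for i in range(degree + 1) for j in range(degree + 1 - i)]
    X = np.column_stack(terms)
    coef, *_ = np.linalg.lstsq(X, err, rcond=None)
    pred = X @ coef
    ss_res = np.sum((err - pred) ** 2)
    ss_tot = np.sum((err - err.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot

# Illustrative error values on a 4 x 6 grid of candidate weights.
lh, ls = np.meshgrid(np.linspace(0, 1.5, 4), np.linspace(0, 2, 6))
lh, ls = lh.ravel(), ls.ravel()
err = (lh - 1.0) ** 2 + 0.5 * (ls - 2.0) ** 2  # a toy error surface

# Try several degrees and keep the fit with the highest R^2.
best = max((fit_poly_surface(lh, ls, err, d) for d in (1, 2, 3)),
           key=lambda t: t[1])
print(f"best R^2: {best[1]:.3f}")
```

The minimizer of the selected surface would then indicate the optimal (λ h, λ s) pair for that criterion.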

Discussion and Conclusions
A methodology proposed in this study revisits traditional Bayesian sequential indicator simulation. This versatile model uses a log-linear pooling probability aggregation approach to integrate probabilities coming from different sources. The algorithm simplifies obtaining the soft information, since any available geostatistical interpolation technique can be used to inform the soft data at the target locations. Different weighting options were also tested, and a numerical examination revealed that different weights assigned to each source of information produce different results. Indeed, the incorporation of sources of information with different influences was overlooked in all previous Bayesian sequential indicator simulation approaches. The results of this study were compared with the traditional sequential indicator simulation algorithm, and it was shown that long-range structures can be better reproduced. Moreover, the proposed method is not restricted to only two sources of information: Equation (5) can be generalized to include more sources, and the weights can simply be tuned accordingly. However, the methodology needs some numerical experiments to assess the optimal weights, and further research could focus on developing a sophisticated technique to infer the weights automatically. The proposed algorithm is tested on a synthetic case study, but it can also be tested on an actual case study. To set the optimal weights, one solution is to use a production dataset as a benchmark. This information may only be available in part of a domain; the obtained weights can then be used for resource reconciliation in other parts of the domain, where only exploratory data are available and one does not have access to production data. Another avenue of research is to further develop the method to model non-stationary domains.
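The generalization to more than two sources mentioned above follows the same pooling rule. A hedged sketch (the function name and example weights are ours, not from the paper), where the prior again receives the complementary weight 1 minus the sum of the source weights:

```python
import numpy as np

def log_linear_pool_n(p_prior, sources, weights):
    """Log-linear pooling of n probability sources with a prior.

    The prior receives the complementary weight 1 - sum(weights);
    each source k contributes p_k ** weights[k]. Probabilities
    are assumed strictly positive.
    """
    p_prior = np.asarray(p_prior, dtype=float)
    w0 = 1.0 - sum(weights)
    log_p = w0 * np.log(p_prior)
    for p_k, w_k in zip(sources, weights):
        log_p = log_p + w_k * np.log(np.asarray(p_k, dtype=float))
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

# Three sources of information about two categories, each with its
# own tunable weight of influence.
pooled = log_linear_pool_n(
    [0.5, 0.5],
    sources=[[0.2, 0.8], [0.4, 0.6], [0.3, 0.7]],
    weights=[1.0, 0.5, 1.5],
)
```

As noted in the text, the weights for such an extension would still have to be set by numerical experimentation.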
of the indicator variable T, and N(d) is the number of pairs.

Figure 1 .
Figure 1.Reference dataset, (A) reference map, (B) synthetic sampled dataset; only one sample is preserved in the target areas I and II to better evaluate the revisited Bayesian sequential indicator simulation in this study.


Figure 2 .
Figure 2. Connectivity measures along Easting (black dashed line) and Elevation (green dashed line) for (A) category A, and (B) category B.

Figure 3 .
Figure 3. Probability of occurrence of (A) category A and (B) category B obtained by estimating the conditioning indicator data on the target grid nodes. These maps are used as soft information in the proposed algorithm.


Figure 4 .
Figure 4. Comparison of (A) reference map with resulting simulation maps obtained from (B) pure soft simulation, (C) revisited Bayesian simulation, (D) pure hard simulation, (E) Doyan's simulation, and (F) traditional simulation.


Figure 5 .
Figure 5. Comparison of (A) the reference map with probability maps of category B obtained from (B) pure soft simulation, (C) revisited Bayesian simulation, (D) pure hard simulation, (E) Doyan's simulation, and (F) traditional simulation.


Mathematics 2022, 22
Figure 6 .
Figure 6.The matches and mismatches with respect to (A) reference map for resulting simulation maps obtained from (B) pure soft simulation, (C) revisited Bayesian simulation, (D) pure hard simulation, (E) Doyan's simulation, and (F) traditional simulation.

(A) The difference between the reproduced proportion of indicators through the realizations and the proportion of categories/indicators in the reference map.
(B) The difference in the percentage of match [27] between the most probable categorical map and the reference map.
(C) The difference between the connectivity function of each indicator and the connectivity function in the reference map.
(D) The difference between the variogram reproduction of the indicators and the original variogram of the indicators in the reference map.
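Criterion (D) relies on experimental indicator variograms. A minimal sketch of the experimental indicator variogram along the Easting axis of a 2-D indicator grid, using γ(h) = (1/2N(h)) Σ [i(x) − i(x+h)]² with N(h) the number of pairs at lag h (the striped test grid and lags here are illustrative):

```python
import numpy as np

def indicator_variogram_easting(ind, max_lag):
    """Experimental indicator variogram along the Easting (column) axis.

    gamma(h) = 1 / (2 N(h)) * sum over pairs of (i(x) - i(x+h))^2,
    where N(h) is the number of pairs separated by h columns.
    """
    ind = np.asarray(ind, dtype=float)
    gam = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        diff = ind[:, h:] - ind[:, :-h]      # all pairs at column lag h
        gam[h - 1] = 0.5 * np.mean(diff ** 2)
    return gam

# Indicator map with vertical stripes of width 5: period 10 along Easting.
ind = np.tile((np.arange(40) // 5) % 2, (20, 1))
gam = indicator_variogram_easting(ind, 10)
print(gam[4], gam[9])  # lag 5 always crosses a stripe; lag 10 never does
```

The error for criterion (D) would then be the difference between such a curve, averaged over realizations, and the curve computed on the reference map.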


Figure A1 .
Figure A1. Realizations obtained from revisited Bayesian sequential indicator simulation with different weight assignments.


Figure A2 .
Figure A2. Probability maps obtained from revisited Bayesian sequential indicator simulation with different weight assignments.


Table 1 .
Twenty-four experiments based on different weighting schemes of λ h and λ s ; 1 − λ h − λ s is the weight of the prior proportion for the category sought.