Article

Contextual Peano Scan and Fast Image Segmentation Using Hidden and Evidential Markov Chains †

by Clément Fernandes 1,2 and Wojciech Pieczynski 2,*
1 Department Automobiles, Segula Matra Automotive, Zone d’Activité Pissaloup, 8 Av. Jean d’Alembert, 78190 Trappes, France
2 SAMOVAR, Télécom SudParis, Institut Polytechnique de Paris, 91120 Palaiseau, France
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper published in the Proceedings of the 29th IEEE European Signal Processing Conference (EUSIPCO), Dublin, Ireland, 23–27 August 2021; pp. 626–630.
Mathematics 2025, 13(10), 1589; https://doi.org/10.3390/math13101589
Submission received: 30 March 2025 / Revised: 2 May 2025 / Accepted: 9 May 2025 / Published: 12 May 2025
(This article belongs to the Special Issue Bayesian Statistics and Causal Inference)

Abstract

Transforming bi-dimensional sets of image pixels into mono-dimensional sequences with a Peano scan (PS) is an established technique enabling the use of hidden Markov chains (HMCs) for unsupervised image segmentation. The related Bayesian segmentation methods can compete with those based on hidden Markov fields (HMFs) while being much faster. The PS has recently been extended to the contextual PS, and initial experiments have shown the value of the associated HMC model, denoted HMC-CPS, in image segmentation. Moreover, HMCs have been extended to hidden evidential Markov chains (HEMCs), which can improve HMC-based Bayesian segmentation. In this study, we introduce a new HEMC-CPS model by simultaneously considering the contextual PS and evidential HMCs. We show its effectiveness for Bayesian maximum posterior mode (MPM) segmentation using synthetic and real images. Segmentation is performed in an unsupervised manner, with parameters being estimated using the stochastic expectation–maximization (SEM) method. The new HEMC-CPS model presents potential for the modeling and segmentation of more complex images, such as three-dimensional or multi-sensor multi-resolution images. Finally, the HMC-CPS and HEMC-CPS models are not limited to image segmentation and could be used for any kind of spatially correlated data.

1. Introduction

With the recent advancements in deep learning techniques, a considerable number of deep learning models have been designed and trained to perform unsupervised image segmentation. Indeed, from classical convolutional encoder–decoders [1] to recurrent networks [2], generative adversarial networks [3], and attention encoder–decoders [4], all these methods have produced spectacular results in various domains such as semantic segmentation, cell image segmentation, and super-resolution reconstruction, outperforming more traditional approaches. However, these models still require large training databases, even if the data are unlabeled. In this article, we focus on “agnostic” statistical methods, which only require the information present in the image for segmentation, and no additional sources. In this context, methods based on hidden Markov fields (HMFs) have been widely used since the pioneering research in [5,6,7]. HMFs provide satisfactory results in various situations, notably in image segmentation problems [8,9,10,11,12]. However, direct fast calculations are not tractable, and iterative methods such as Gibbs or Metropolis sampling are required. These calculations significantly increase the computation time and make the related methods hard to apply in some situations. Utilizing the well-known and widely used hidden Markov chains (HMCs) [13,14,15,16,17] instead of HMFs is feasible; however, transforming a bi-dimensional set of pixels into a mono-dimensional sequence is not straightforward. For example, proceeding “line by line” gives an HMC in which pixels close to each other in the image may be far apart in the sequence. Using the Peano scan partially overcomes these difficulties and brings the quality of the results obtained with chains closer to that obtained with fields [18,19]. The combination of the Peano scan and HMCs has been used to segment different types of images [19,20,21,22,23,24]. Peano scans have also been used in models more sophisticated than HMCs, for example, hidden fuzzy Markov chains [19,25,26] or pairwise Markov chains [26]. Even if data obtained from images via the Peano scan have a complex structure and are obviously not Markovian, the different methods mentioned can provide interesting results, demonstrating the extraordinary robustness of HMCs and their extensions.
All the studies mentioned above show the value of the Peano scan in problems where the computation time is important; indeed, thanks to direct, recursive, and exact computations, HMC-based unsupervised segmentations are incomparably faster than HMF-based ones.
We recently extended the Peano scan (PS) to a “contextual” Peano scan (CPS) [27], defined as follows. Let $s$ be a pixel in an image, and let $r$, $t$, $u$, $w$ be its four nearest neighbors, where $r$, $t$ are its neighbors in the Peano scan. In the extended model, the observations on the remaining neighbors $u$, $w$ are also taken into account: the image value observed on $s$ is completed with the two observations on $u$ and $w$. Initial experiments on synthetic images showed that using HMCs with such a “contextual” Peano scan allowed us to reduce the classification error by up to 17%. Our first contribution is to provide a simulation study comparing this recent HMC-CPS method to the classic PS-HMC, on the one hand, and to the classic HMF-based one, on the other hand. We consider several real images as well. Our second contribution is related to the use, in the context of the CPS, of extensions of HMCs called “hidden evidential Markov chains” (HEMCs) [28]. Different “evidential” Markov chains, calling on the Dempster–Shafer theory of evidence [29,30], have been proposed in [31,32,33,34]. Here, we introduce a new model associating the CPS with the HEMCs described in [28], which are particular triplet Markov chains [35,36]. Experiments indicate the existence of situations in which the new model competes with the classic HMF, while being much faster. All experiments are unsupervised, with parameters estimated using a stochastic version of the expectation–maximization (EM) method [37,38]. In the case of HMFs, we use the Gibbsian EM [39].
The organization of this article is as follows. In the next section, we present the contextual Peano scan and related HMC-CPS with Bayesian maximum posterior mode (MPM) segmentation and parameter estimation using SEM. We recall the classic HEMC model and extend it with a contextual scan in Section 3. Section 4 describes the experiments, and the last section contains the conclusions and perspectives.

2. Contextual Peano Scan and Hidden Markov Chains

2.1. Contextual Peano Scan

Let $S$ be the set of pixels of a digital image $y = (y_s)_{s \in S}$ of size $\mathrm{Card}(S) = 2^k \times 2^k$. In the statistical segmentation framework we adopt in this study, $y = (y_s)_{s \in S}$ is a realization of a random field $Y = (Y_s)_{s \in S}$. Segmenting $y = (y_s)_{s \in S}$ consists of searching for $x = (x_s)_{s \in S}$, with each $x_s$ in a finite set of classes $\Omega = \{1, \ldots, K\}$. Then, $x = (x_s)_{s \in S}$ is considered as a realization of a random field $X = (X_s)_{s \in S}$, and the segmentation problem is seen as a Bayesian problem of estimating $(x_s)_{s \in S}$ from $(y_s)_{s \in S}$. For a given distribution $p(x, y)$ of $(X, Y)$, the segmentation problem can be dealt with using a Bayesian classification, once the distribution $p(x, y)$ allows for the related computation. The very elegant hidden Markov fields provide $p(x, y)$ for which Bayesian solutions are computable; however, the computational time needed can be prohibitive for numerous applications. Hidden Markov chains present alternative fast methods; however, their use requires converting the bi-dimensional set of pixels into a mono-dimensional sequence. Here, we consider the conversion based on the Peano scan presented in Figure 1.
Let us recall the contextual scan introduced in [27]. Let $(1, \ldots, N)$ be the sequence of pixels obtained from $S$ with the Peano scan depicted in Figure 1. Let $X^N = (X_1, \ldots, X_N)$ be the related stochastic sequence of classes. The classic way of using the scan consists of setting $Y^N = (Y_1, \ldots, Y_N)$ and considering that $p(x^N, y^N)$ is a hidden Markov chain distribution. Then, both classical maximum posterior mode (MPM) segmentation and maximum a posteriori (MAP) segmentation can be computed in a fast unsupervised way (e.g., MAP can be computed with the well-known Viterbi algorithm). The contextual scan consists of the following. For each $n = 2, \ldots, N-1$, let us set $s$ as the corresponding pixel in $S$. The previous point $n-1$ will be called $r$, and the next point $n+1$ will be called $t$. Let us temporarily assume that $s$ is not on the border, so that it has four neighbors in $S$. Then, we associate with each $n = 2, \ldots, N-1$ two pixels, $v(n)$ and $w(n)$, which are the two neighbors of $n$ different from $r$ and $t$. Thus, each $n = s$ has four neighbors in $S$: two, $r = n-1$ and $t = n+1$, which belong to the Peano scan, and another two, $v(n)$ and $w(n)$, which do not. Then, with each $X_n$, we associate the triplet $\mathbf{Y}_n = (Y_{v(n)}, Y_n, Y_{w(n)})$. When $s$ is on the border but is not a corner, there is only one neighbor, $v(n)$, not lying on the Peano scan, so that $\mathbf{Y}_n = (Y_{v(n)}, Y_n)$. When $s$ is one of the four corners, there are two possibilities. If the scan begins or ends on the point, there is one neighbor in the scan and one neighbor (e.g., $v(n)$) outside it; thus, $\mathbf{Y}_n = (Y_{v(n)}, Y_n)$. This is the case for pixels 1 and 16 in Example 1 below. If the scan neither begins nor ends on the point, there are no points that are neighbors in $S$ without being neighbors in the scan, so that $\mathbf{Y}_n = (Y_n)$. This is the case for pixels 6 and 11 in Example 1 below.
Setting $\mathbf{Y}^N = (\mathbf{Y}_1, \ldots, \mathbf{Y}_N)$, we consider $(X^N, \mathbf{Y}^N)$. As we will see, $p(x^N \mid y^N)$ is then Markovian, which allows for the implementation of Bayesian segmentation methods.
Six possible spatial configurations of the added neighbors (green) relative to the neighbors on the scan (blue), for a pixel not on the border, are specified in Figure 2.
Example 1.
As an example, let us consider the image in Figure 1b, with the Peano scan beginning in the upper left corner. Points $n = 1, \ldots, 16$ are specified in Figure 3, and the added observations are specified in Figure 4.
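To make the construction concrete, here is a minimal Python sketch that generates a Peano-type scan of a $2^k \times 2^k$ image and, for each scan position $n$, the added neighbors $v(n), w(n)$. It assumes the scan of Figure 1 is the classic Hilbert–Peano curve (up to orientation), and all names are ours, not taken from the paper’s published code:

```python
def hilbert_d2xy(order: int, d: int) -> tuple[int, int]:
    """Map index d along a Hilbert curve filling a 2**order x 2**order grid to (x, y)."""
    x = y = 0
    t = d
    s = 1
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def contextual_peano_scan(k: int):
    """Scan order of a 2**k x 2**k image plus, for each scan position n, the
    scan indices of its grid neighbours that are not its scan neighbours
    (the v(n), w(n) of the 4NN-CPS; one on borders, none on two corners)."""
    side = 2 ** k
    scan = [hilbert_d2xy(k, d) for d in range(side * side)]
    pos = {p: n for n, p in enumerate(scan)}            # pixel -> scan index
    added = []
    for n, (x, y) in enumerate(scan):
        grid_nb = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if 0 <= x + dx < side and 0 <= y + dy < side]
        scan_nb = {scan[m] for m in (n - 1, n + 1) if 0 <= m < side * side}
        added.append(sorted(pos[p] for p in grid_nb if p not in scan_nb))
    return scan, added
```

With $k = 2$, every interior position gets two added neighbors, border positions get one, and the two corners where the scan neither begins nor ends get none, as in Example 1 (the exact numbering of Figure 3 depends on the curve’s orientation).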
Let $s, t$ be neighbors in $S$. If they are horizontal neighbors, let us set
$$p_h(y_t \mid x_s) = \sum_{x_t} p_h(x_t \mid x_s)\, p(y_t \mid x_t) \qquad (1)$$
Similarly, if they are vertical neighbors, we set
$$p_v(y_t \mid x_s) = \sum_{x_t} p_v(x_t \mid x_s)\, p(y_t \mid x_t) \qquad (2)$$
Thus, for $s$ in $S$ and for $v, w$ neighbors of $s$ in the set of pixels but not neighbors in the Peano scan, we have
$$p(y_v, y_s, y_w \mid x_s) = p(y_s \mid x_s)\, p_{a(v,s)}(y_v \mid x_s)\, p_{a(w,s)}(y_w \mid x_s) \qquad (3)$$
where $a(v,s) = h$ if $v, s$ are horizontal neighbors and $a(v,s) = v$ if $v, s$ are vertical neighbors, and the same applies for $a(w,s)$. For example, considering $(y_{14}, y_3, y_8)$ in Figure 4, we see that $3, 14$ are horizontal neighbors, while $3, 8$ are vertical neighbors. Then, we have
$$p(y_{14}, y_3, y_8 \mid x_3) = p(y_3 \mid x_3)\, p_h(y_{14} \mid x_3)\, p_v(y_8 \mid x_3) \qquad (4)$$
Finally, for a given Peano scan, we associate with each pixel $s$ the two neighbors $v(s), w(s)$ in the set of pixels which are not its neighbors in the Peano scan. Numbering the Peano scan points as $(1, 2, \ldots, N)$, the related contextual Peano scan is the sequence of triplets (couples for points on the borders, and singletons on the two corners where the scan neither begins nor ends):
$$\bigl(\{1, v(1), w(1)\}, \{2, v(2), w(2)\}, \ldots, \{N, v(N), w(N)\}\bigr) \qquad (5)$$
The classic hidden Markov chain associated with the Peano scan has the following distribution:
$$p(x^N, y^N) = p(x_1)\, p(x_2 \mid x_1) \cdots p(x_N \mid x_{N-1})\, p(y_1 \mid x_1) \cdots p(y_N \mid x_N) \qquad (6)$$
The new model we propose, called “hidden Markov chain for contextual Peano scan” (HMC-CPS), is defined as follows.
Let us consider
$$q(x^N, y^N) = p(x_1) \prod_{n=1}^{N-1} p(x_{n+1} \mid x_n) \prod_{n=1}^{N} p(y_n, y_{v(n)}, y_{w(n)} \mid x_n) \qquad (7)$$
It is to be noted that $q(x^N, y^N)$ is not the probability density of the pair $(X^N, Y^N)$. Nonetheless, considering the (unknown) normalizing constant
$$\kappa = \left( \sum_{x^N} \int q(x^N, y^N)\, dy^N \right)^{-1}$$
we can consider the distribution
$$p(x^N, y^N) = \kappa\, q(x^N, y^N) \qquad (8)$$
Definition 1.
Let $S$ be a square set of pixels of size $N = 2^k \times 2^k$. Let $(1, 2, \ldots, N)$ be a Peano scan (PS) of $S$, and let $(\{1, v(1), w(1)\}, \{2, v(2), w(2)\}, \ldots, \{N, v(N), w(N)\})$ be the four nearest neighbors’ contextual PS (4NN-CPS) associated with the PS. Then, the conditional probability distribution $p(x^N \mid y^N)$ given by the distribution $p(x^N, y^N) = \kappa\, q(x^N, y^N)$ defined with (3), (7), and (8) will be called the “hidden Markov chain for the contextual Peano scan” (HMC-CPS) distribution.
We note that, in the HMC-CPS $(X^N, Y^N)$, the chain $X^N$ is Markovian, and it is also Markovian conditionally on $Y^N$; however, $(X^N, Y^N)$ is not Markovian itself. As discussed in the next paragraph, $p(x_1 \mid y^N)$ and the transitions $p(x_{n+1} \mid x_n, y^N)$ are computable, which allows for Bayesian restorations. We note that $p(y^N \mid x^N)$, which is complicated to write, is not needed.
Remark 1.
It is possible to extend the 4NN-CPS with a richer neighborhood. Considering the eight nearest neighbors in the set of pixels would lead to a contextual PS with six additional observations on each point in the Peano scan, except for border pixels.

2.2. Bayesian MPM Segmentation with HMC-CPS

Let $1, \ldots, N$ be the points of a Peano scan, and let $(X^N, Y^N)$ be an HMC-CPS given by (7) and (8). We consider the Bayesian maximum posterior mode (MPM) estimate, defined by
$$\hat{s}_{\mathrm{MPM}}(y^N) = \hat{x}^N \text{ such that, for each } n, \; p(\hat{x}_n \mid y^N) = \max_{x_n \in \Omega} p(x_n \mid y^N) \qquad (9)$$
Thus, the problem lies in computing $p(x_n \mid y^N)$ for $n = 1, \ldots, N$. Let us recall the following general result.
Lemma 1.
Let $Z^N = (Z_1, \ldots, Z_N)$ be a stochastic chain taking its values in a finite set $\Omega$. Then,
(i) 
$Z^N$ is Markovian if and only if there exist $N-1$ functions $\varphi_1, \ldots, \varphi_{N-1}$ from $\Omega^2$ to $\mathbb{R}_+$ such that
$$p(z^N) \propto \prod_{n=1}^{N-1} \varphi_n(z_n, z_{n+1}) \qquad (10)$$
where $\propto$ means “proportional to”;
(ii) 
if (10) is verified, $p(z_1)$ and the transitions $p(z_{n+1} \mid z_n)$ are given by the functions $\varphi_1, \ldots, \varphi_{N-1}$ through
$$p(z_1) = \frac{\beta_1(z_1)}{\sum_{z_1} \beta_1(z_1)} \qquad (11)$$
$$\text{for } 1 \le n < N, \quad p(z_{n+1} \mid z_n) = \frac{\varphi_n(z_n, z_{n+1})\, \beta_{n+1}(z_{n+1})}{\beta_n(z_n)} \qquad (12)$$
where $\beta_1(z_1), \ldots, \beta_N(z_N)$ can be computed with the following backward recursion:
$$\beta_N(z_N) = 1; \quad \text{for } n = N-1, \ldots, 1, \quad \beta_n(z_n) = \sum_{z_{n+1}} \varphi_n(z_n, z_{n+1})\, \beta_{n+1}(z_{n+1}) \qquad (13)$$
• Once $p(z_1)$ and $p(z_{n+1} \mid z_n)$ are given, each $p(z_n)$ is computed with the forward recursion:
$$\text{for } n = 2, \ldots, N, \quad p(z_n) = \sum_{z_{n-1}} p(z_n \mid z_{n-1})\, p(z_{n-1})$$
Proof of Lemma 1.
• Let $Z^N$ be Markovian: $p(z_1, \ldots, z_N) = p(z_1)\, p(z_2 \mid z_1)\, p(z_3 \mid z_2) \cdots p(z_N \mid z_{N-1})$. Then (10) is satisfied by $\varphi_1(z_1, z_2) = p(z_1)\, p(z_2 \mid z_1)$, $\varphi_2(z_2, z_3) = p(z_3 \mid z_2)$, …, $\varphi_{N-1}(z_{N-1}, z_N) = p(z_N \mid z_{N-1})$.
• Conversely, let $p(z_1, \ldots, z_N)$ satisfy (10). Thus, $p(z_1, \ldots, z_N) = K \varphi_1(z_1, z_2) \cdots \varphi_{N-1}(z_{N-1}, z_N)$ with $K$ a constant, which implies that for each $n = 1, \ldots, N-1$, we have
$$p(z_{n+1} \mid z_1, \ldots, z_n) = \frac{p(z_1, \ldots, z_n, z_{n+1})}{p(z_1, \ldots, z_n)} = \frac{\sum_{z_{n+2}, \ldots, z_N} \varphi_1(z_1, z_2) \cdots \varphi_{N-1}(z_{N-1}, z_N)}{\sum_{z_{n+1}, \ldots, z_N} \varphi_1(z_1, z_2) \cdots \varphi_{N-1}(z_{N-1}, z_N)}$$
As the factors $\varphi_1(z_1, z_2) \cdots \varphi_{n-1}(z_{n-1}, z_n)$ cancel in this ratio, the quantity depends on $(z_n, z_{n+1})$ only, which shows that $p(z_1, \ldots, z_N)$ is Markovian.
Moreover, let us set
$$\text{for } n = 1, \ldots, N-1, \quad \beta_n(z_n) = \sum_{z_{n+1}, z_{n+2}, \ldots, z_N} \varphi_n(z_n, z_{n+1}) \cdots \varphi_{N-1}(z_{N-1}, z_N) \qquad (14)$$
On the one hand, we see that
$$\beta_n(z_n) = \sum_{z_{n+1}} \varphi_n(z_n, z_{n+1})\, \beta_{n+1}(z_{n+1})$$
On the other hand, according to (14), we have
$$p(z_{n+1} \mid z_n) = \frac{\varphi_n(z_n, z_{n+1})\, \beta_{n+1}(z_{n+1})}{\beta_n(z_n)}$$
As
$$p(z_1) = \frac{\beta_1(z_1)}{\sum_{z_1} \beta_1(z_1)}$$
(11) and (12) are verified, which ends the proof. □
To summarize, once the functions $\varphi_1, \ldots, \varphi_{N-1}$ satisfying (10) are given, the distributions $p(z_1)$, $p(z_{n+1} \mid z_n)$, and $p(z_n)$ of the related Markov chain $Z^N = (Z_1, \ldots, Z_N)$ are computable. Thus, it is sufficient to define $\varphi_1, \ldots, \varphi_{N-1}$ satisfying (10), which greatly simplifies the introduction of different models in practice. Let us return to the conditional hidden Markov chain for contextual Peano scan distribution $p(x^N \mid y^N)$ specified in Definition 1. We can say that $p(x^N \mid y^N) \propto q(x^N, y^N)$, and thus,
$$p(x^N \mid y^N) \propto \prod_{n=1}^{N-1} \varphi_n(x_n, x_{n+1}, y^N) \qquad (15)$$
with
$$\begin{aligned} \varphi_1(x_1, x_2, y^N) &= p(x_1, x_2)\, p(y_1, y_{v(1)}, y_{w(1)} \mid x_1)\, p(y_2, y_{v(2)}, y_{w(2)} \mid x_2) \\ \varphi_2(x_2, x_3, y^N) &= p(x_3 \mid x_2)\, p(y_3, y_{v(3)}, y_{w(3)} \mid x_3) \\ &\;\;\vdots \\ \varphi_{N-1}(x_{N-1}, x_N, y^N) &= p(x_N \mid x_{N-1})\, p(y_N, y_{v(N)}, y_{w(N)} \mid x_N) \end{aligned} \qquad (16)$$
Finally, the functions $\varphi_1, \ldots, \varphi_{N-1}$ satisfying (10) are of the form
$$\begin{aligned} \varphi_1(x_1, x_2, y^N) &= \varphi_1(x_1, x_2, y_1, y_{v(1)}, y_{w(1)}, y_2, y_{v(2)}, y_{w(2)}) \\ \varphi_2(x_2, x_3, y^N) &= \varphi_2(x_2, x_3, y_3, y_{v(3)}, y_{w(3)}) \\ &\;\;\vdots \\ \varphi_{N-1}(x_{N-1}, x_N, y^N) &= \varphi_{N-1}(x_{N-1}, x_N, y_N, y_{v(N)}, y_{w(N)}) \end{aligned} \qquad (17)$$
and thus, they are easy to compute. Then, $p(x_1 \mid y^N)$ and the transitions $p(x_{n+1} \mid x_n, y^N)$ are computable following Lemma 1, which gives the marginal distributions $p(x_n \mid y^N)$ of $p(x^N \mid y^N)$ and allows the use of MPM (9).
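As an illustration, the recursions of Lemma 1 fit in a few lines of NumPy. The sketch below (our naming, not the paper’s code) takes the potentials of (15)–(17), assumed already evaluated at the observed $y^N$, and returns the posterior marginals needed by MPM (9); a per-step rescaling is added against numerical underflow for large $N$:

```python
import numpy as np

def mpm_from_potentials(phi):
    """phi: (N-1, K, K) array; phi[n] is the potential linking positions n and
    n+1 (0-based), i.e., varphi of (15)-(17) evaluated at the observed y^N.
    Returns the (N, K) posterior marginals p(x_n | y^N) via Lemma 1."""
    N, K = phi.shape[0] + 1, phi.shape[1]
    beta = np.ones((N, K))
    for n in range(N - 2, -1, -1):          # backward recursion (13)
        beta[n] = phi[n] @ beta[n + 1]
        beta[n] /= beta[n].sum()            # rescaling; cancels in (11) and (12)
    marg = np.empty((N, K))
    marg[0] = beta[0] / beta[0].sum()       # p(x_1 | y^N), Equation (11)
    for n in range(N - 1):
        trans = phi[n] * beta[n + 1]        # transitions (12), row-normalized below
        trans /= trans.sum(axis=1, keepdims=True)
        marg[n + 1] = marg[n] @ trans       # forward recursion of Lemma 1
    return marg

# MPM segmentation (9): x_hat = mpm_from_potentials(phi).argmax(axis=1)  # 0-based classes
```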
Remark 2.
The pair $(X^N, Y^N)$ has a complex and only partially known structure. In particular, neither $p(x^N)$ nor $p(y^N \mid x^N)$ is Markovian in general. In addition, for a Gaussian $p(y_n \mid x_n)$, $p(y^N \mid x^N)$ is not Gaussian in general. However, this is of little importance because what matters is that $p(x^N \mid y^N)$ is Markovian and known up to a constant, which makes $p(x_n \mid y^N)$ computable.
Example 2.
Let us specify $\varphi_1, \varphi_2, \ldots, \varphi_{15}$ used in Example 1. According to Figure 3, (3), and (15), we have
$$\begin{aligned} \varphi_1(x_1, x_2, y^N) &= p(x_1, x_2)\, p(y_1 \mid x_1)\, p_v(y_4 \mid x_1)\, p(y_2 \mid x_2)\, p_h(y_{15} \mid x_2) \\ \varphi_2(x_2, x_3, y^N) &= p(x_3 \mid x_2)\, p(y_3 \mid x_3)\, p_v(y_8 \mid x_3)\, p_h(y_{14} \mid x_3) \\ \varphi_3(x_3, x_4, y^N) &= p(x_4 \mid x_3)\, p(y_4 \mid x_4)\, p_v(y_1 \mid x_4) \\ &\;\;\vdots \\ \varphi_{13}(x_{13}, x_{14}, y^N) &= p(x_{14} \mid x_{13})\, p(y_{14} \mid x_{14})\, p_v(y_9 \mid x_{14})\, p_h(y_3 \mid x_{14}) \\ \varphi_{14}(x_{14}, x_{15}, y^N) &= p(x_{15} \mid x_{14})\, p(y_{15} \mid x_{15})\, p_h(y_2 \mid x_{15}) \\ \varphi_{15}(x_{15}, x_{16}, y^N) &= p(x_{16} \mid x_{15})\, p(y_{16} \mid x_{16})\, p_v(y_{13} \mid x_{16}) \end{aligned} \qquad (18)$$

2.3. Parameter Estimation

Let us suppose that $p(x^N, y^N)$ is a classic hidden Markov chain (CHMC) distribution, with a Gaussian $p(y^N \mid x^N)$ and two different transitions depending on whether they apply to horizontal or vertical neighbors in the original image. For $K$ classes $\Omega = \{1, \ldots, K\}$, the parameters are as follows: $K^2$ probabilities $p^h = (p^h_{ij})_{1 \le i,j \le K}$, with $p^h_{ij} = p(x_{t+1} = j, x_t = i)$ for $t, t+1$ neighbors in the chain and horizontal neighbors in the image; $K^2$ probabilities $p^v = (p^v_{ij})_{1 \le i,j \le K}$, with $p^v_{ij} = p(x_{t+1} = j, x_t = i)$ for $t, t+1$ neighbors in the chain and vertical neighbors in the image; and $K$ means $m = (m_i)_{1 \le i \le K}$ and $K$ variances $\sigma^2 = (\sigma^2_i)_{1 \le i \le K}$ of the $K$ Gaussian distributions $(p(y_s \mid x_s = i))_{1 \le i \le K}$. Let us consider the HMC-CPS: by choosing the $p_h(x_t \mid x_s)$ and $p_v(x_t \mid x_s)$ intervening in $p(y_v, y_s, y_w \mid x_s)$ to be those related to $p^h_{ij}$ and $p^v_{ij}$, respectively, the newly proposed model uses exactly the same parameters as the CHMC. Thus, the problem is to estimate $\theta = (p^h, p^v, m, \sigma^2)$ from the observed image $Y^N = y^N$ alone. However, contrary to the CHMC model, the EM algorithm cannot be used for estimating the parameters. Indeed, according to (7) and (8), the joint likelihood $p(x^N, y^N; \theta)$ is only computable up to a constant $\kappa$, which depends on $\theta$. As such, it is not possible in general to compute $\mathrm{argmax}_\theta\, E[\log p(x^N, y^N; \theta) \mid y^N, \theta^q]$ at each iteration $q$.
Nonetheless, as $p(x^N \mid y^N, \theta^q)$ is computable and, being Markovian, can be sampled from, we can use the stochastic EM (SEM), which proceeds as follows.
• Initialize the parameters $\theta^0 = (p^{h,0}, p^{v,0}, m^0, \sigma^{2,0})$ with some simple method;
• Compute $\theta^{q+1} = (p^{h,q+1}, p^{v,q+1}, m^{q+1}, \sigma^{2,q+1})$ from the current $\theta^q = (p^{h,q}, p^{v,q}, m^q, \sigma^{2,q})$ and $y^N$ as follows:
- Sample $x^{N,q+1} = (x_1^{q+1}, \ldots, x_N^{q+1})$ according to the Markov distribution $p(x^N \mid y^N, \theta^q)$;
- Let $H^{q+1}$ be the set of couples $(n, n+1)$ with $n$ and $n+1$ horizontal neighbors in the set of pixels, and let $V^{q+1}$ be the set of couples $(n, n+1)$ with $n$ and $n+1$ vertical neighbors in the set of pixels. Let $S^{i,q+1}$ be the set of points $n$ such that $x_n^{q+1} = i$. Set
$$p^{h,q+1}_{ij} = \frac{\sum_{(n,n+1) \in H^{q+1}} \mathbb{1}[x_n^{q+1} = i,\ x_{n+1}^{q+1} = j]}{\mathrm{Card}(H^{q+1})}, \quad \text{and similarly for } p^{v,q+1}_{ij} \qquad (19)$$
$$m_i^{q+1} = \frac{\sum_{n \in S^{i,q+1}} y_n}{\mathrm{Card}(S^{i,q+1})}, \qquad \sigma^{2,q+1}_i = \frac{\sum_{n \in S^{i,q+1}} (y_n - m_i^{q+1})^2}{\mathrm{Card}(S^{i,q+1})} \qquad (20)$$
A stopping criterion is adapted to each case considered. Indeed, the Markov chain $(\theta^q)$ obtained in this way is very complex, and its stabilization depends on different factors, particularly on the noise level. To the best of our knowledge, as in the EM case, there are no general theoretical results specifying its asymptotic behavior.
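For concreteness, here is a sketch of one SEM iteration for the HMC-CPS (our naming and data layout; it assumes the potentials have been rebuilt from $\theta^q$ as in (16), and that every class is sampled at least once so that the empirical means and variances are defined):

```python
import numpy as np

rng = np.random.default_rng(0)

def sem_iteration(phi, y, is_horizontal, K):
    """One SEM step. phi: (N-1, K, K) potentials built from the current theta_q;
    y: (N,) observations along the scan; is_horizontal: (N-1,) booleans, True
    when scan points n, n+1 are horizontal neighbours in the image."""
    N = len(y)
    beta = np.ones((N, K))
    for n in range(N - 2, -1, -1):                      # backward recursion (13)
        beta[n] = phi[n] @ beta[n + 1]
        beta[n] /= beta[n].sum()
    x = np.empty(N, dtype=int)                          # sample p(x^N | y^N, theta_q)
    x[0] = rng.choice(K, p=beta[0] / beta[0].sum())
    for n in range(N - 1):
        t = phi[n, x[n]] * beta[n + 1]                  # unnormalized transition (12)
        x[n + 1] = rng.choice(K, p=t / t.sum())
    p_h = np.zeros((K, K))                              # pair frequencies, Equation (19)
    p_v = np.zeros((K, K))
    for n in range(N - 1):
        (p_h if is_horizontal[n] else p_v)[x[n], x[n + 1]] += 1
    p_h /= p_h.sum()
    p_v /= p_v.sum()
    m = np.array([y[x == i].mean() for i in range(K)])  # Equation (20)
    var = np.array([y[x == i].var() for i in range(K)])
    return x, p_h, p_v, m, var
```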
Remark 3.
One can use a simpler approximate SEM by treating $p(x^N, y^N)$ as a CHMC and sampling from its posterior law $p_{\mathrm{CHMC}}(x^N \mid y^N, \theta^q)$ instead of the true $p(x^N \mid y^N, \theta^q)$; we implemented this variant, and it produces slightly worse results.

3. Contextual Peano Scan and Hidden Evidential Markov Chains

3.1. MPM Restoration with Hidden Triplet Markov Chains

Let us briefly examine particular triplet Markov chains (TMCs), named hidden triplet Markov chains. Let $X^N = (X_1, \ldots, X_N)$ be the stochastic sequence of classes, $Y^N = (Y_1, \ldots, Y_N)$ the sequence of observations on the pixels, and $U^N = (U_1, \ldots, U_N)$ a third stochastic sequence, with each $U_n$ taking its values in $\Lambda = \{1, \ldots, L\}$. Let $T^N = (T_1, \ldots, T_N)$, with $T_n = (X_n, U_n, Y_n)$, be a triplet Markov chain. Then, $X^N$ can be estimated from $Y^N$ as in the classic HMC case. As $U^N$ is arbitrary, the family of TMCs is a very general one. As $p(t^N)$ is written $p(t^N) = p(t_1)\, p(t_2 \mid t_1) \cdots p(t_N \mid t_{N-1})$, it has the form
$$p(t^N) = \varphi_1(x_1, x_2, u_1, u_2, y_1, y_2) \cdots \varphi_{N-1}(x_{N-1}, x_N, u_{N-1}, u_N, y_{N-1}, y_N) \qquad (21)$$
with
$$\begin{aligned} \varphi_1(x_1, x_2, u_1, u_2, y_1, y_2) &= p(t_1, t_2) \\ \varphi_2(x_2, x_3, u_2, u_3, y_2, y_3) &= p(t_3 \mid t_2) \\ &\;\;\vdots \\ \varphi_{N-1}(x_{N-1}, x_N, u_{N-1}, u_N, y_{N-1}, y_N) &= p(t_N \mid t_{N-1}) \end{aligned} \qquad (22)$$
Then, applying Lemma 1 to $Z^N = (X^N, U^N)$ and $p(z^N \mid y^N)$, we can compute $p(z_n \mid y^N) = p(x_n, u_n \mid y^N)$ for each $n = 1, \ldots, N$, which finally gives
$$p(x_n \mid y^N) = \sum_{u_n \in \Lambda} p(x_n, u_n \mid y^N) \qquad (23)$$
allowing the application of Bayesian MPM segmentation (9). Throughout this article, we will consider a particular TMC satisfying
$$p(t_1) = p(x_1, u_1)\, p(y_1 \mid x_1), \qquad p(t_{n+1} \mid t_n) = p(x_{n+1}, u_{n+1} \mid x_n, u_n)\, p(y_{n+1} \mid x_{n+1}) \qquad (24)$$
Remark 4.
We may observe that (24) is a simplified TMC, obtained by assuming $p(x_{n+1}, u_{n+1} \mid x_n, u_n, y_n) = p(x_{n+1}, u_{n+1} \mid x_n, u_n)$ and $p(y_{n+1} \mid x_n, u_n, y_n, x_{n+1}, u_{n+1}) = p(y_{n+1} \mid x_{n+1})$. Setting $W_n = (X_n, U_n)$, we can see that $T^N = (W^N, Y^N)$ has the structure of a very classical HMC: $W^N$ is Markovian and $p(y_n \mid w^N) = p(y_n \mid w_n)$. In addition, it satisfies $p(y_n \mid w_n) = p(y_n \mid x_n, u_n) = p(y_n \mid x_n)$. This shows that such TMCs are not much different from classical HMCs, so that different computer programs related to HMCs can be adapted to these TMCs with slight modifications.
Remark 5.
Let us note the great generality of triplet Markov chains, even in their particular form (24). In particular, $(X^N, U^N)$ is Markovian, but $X^N$ is not necessarily Markovian. Moreover, in TMCs, $U^N$ is free, and the only condition is that each $U_n$ takes its values in a finite, not-too-large set. In particular, it can be multivariate: $U_n = (U_n^1, \ldots, U_n^I)$, with each $U_n^i$ modeling a particular aspect of the problem dealt with.

3.2. Models Combining Contextual Peano Scan with Triplet Markov Chains

Let $S$ be a square set of pixels of size $N = 2^k \times 2^k$. Let $(1, 2, \ldots, N)$ be a Peano scan (PS) of $S$, and let $(\{1, v(1), w(1)\}, \{2, v(2), w(2)\}, \ldots, \{N, v(N), w(N)\})$ be the four nearest neighbors’ contextual PS (4NN-CPS) associated with the PS. Let us consider a TMC of the form (21). We limit our presentation to particular TMCs $(X^N, U^N, Y^N)$ satisfying (24). Setting $Z_n = (X_n, U_n)$ for $n = 1, \ldots, N$, $T^N = (Z^N, Y^N)$ is a classic HMC, and thus, in a similar manner to Section 2.1, we can extend this model to the contextual Peano scan. We define the “conditional triplet Markov chain for contextual Peano scan” (CTMC-CPS) distribution by setting
$$p_h(y_t \mid z_s) = \sum_{x_t, u_t} p_h(x_t, u_t \mid z_s)\, p(y_t \mid x_t) \qquad (25)$$
if $s, t$ are horizontal neighbors. Furthermore,
$$p_v(y_t \mid z_s) = \sum_{x_t, u_t} p_v(x_t, u_t \mid z_s)\, p(y_t \mid x_t) \qquad (26)$$
if they are vertical neighbors. Then, following Definition 1, we obtain
$$p(y_v, y_s, y_w \mid z_s) = p(y_s \mid z_s)\, p_{a(v,s)}(y_v \mid z_s)\, p_{a(w,s)}(y_w \mid z_s) \qquad (27)$$
with $a(v,s) = h$ if $v, s$ are horizontal neighbors and $a(v,s) = v$ if $v, s$ are vertical neighbors, and finally, we obtain the CTMC-CPS distribution:
$$p(z^N, y^N) = \kappa\, p(z_1) \prod_{n=1}^{N-1} p(z_{n+1} \mid z_n) \prod_{n=1}^{N} p(y_n, y_{v(n)}, y_{w(n)} \mid z_n) = \kappa\, q(z^N, y^N) \qquad (28)$$
with $v(n), w(n)$ being the two neighbors of pixel $n$ in the set of pixels which are not its neighbors in the Peano scan.
Similarly to Section 2.1, $p(z^N, y^N)$ is not Markovian here; thus, $\kappa$ is not computable in general. However, the distribution $p(z^N \mid y^N) = p(x^N, u^N \mid y^N)$ being Markovian, $p(x_n, u_n \mid y^N)$ is computable, and thus $p(x_n \mid y^N)$ is also computable by applying (23). Thus, the problem is similar to that in Section 2.2; the only difference is that, here, one considers $(x_n, u_n)$ instead of $x_n$ and, once $p(x_n, u_n \mid y^N)$ is computed, one applies (23) to compute $p(x_n \mid y^N)$ and to apply MPM. For parameter estimation, the SEM from Section 2.3 can be used, with $p(z^N, y^N)$ instead of $p(x^N, y^N)$. For $\Omega = \{1, \ldots, K\}$ and $\Lambda = \{1, \ldots, L\}$, the parameters are as follows: $K^2 L^2$ probabilities $p^h = (p^h_{ijkl})$, with $p^h_{ijkl} = p(x_{t+1} = j, u_{t+1} = l, x_t = i, u_t = k)$ for $t, t+1$ neighbors in the chain and horizontal neighbors in the image; $K^2 L^2$ probabilities $p^v = (p^v_{ijkl})$, with $p^v_{ijkl} = p(x_{t+1} = j, u_{t+1} = l, x_t = i, u_t = k)$ for $t, t+1$ neighbors in the chain and vertical neighbors in the image; and $K$ means $m = (m_i)_{1 \le i \le K}$ and $K$ variances $\sigma^2 = (\sigma^2_i)_{1 \le i \le K}$ of the $K$ Gaussian distributions $(p(y_s \mid x_s = i))_{1 \le i \le K}$.

3.3. Hidden Evidential Markov Chain for Contextual Peano Scan

Hidden evidential Markov chains (HEMCs) are particular triplet Markov chains $(X^N, U^N, Y^N)$ satisfying (24), in which each $U_n$ takes its values in $2^\Omega$, the power set of $\Omega$. To define $p(t^N)$, we first define $m(u^N)$, a “basic belief assignment” (bba) defined on $(2^\Omega)^N$ by
$$m(u_1, \ldots, u_N) = m(u_1) \prod_{n=1}^{N-1} m(u_{n+1} \mid u_n) \qquad (29)$$
and null on $2^{(\Omega^N)} \setminus (2^\Omega)^N$. This simply means that, for each $n = 1, \ldots, N-1$ and $u_n \in 2^\Omega$, $u_{n+1} \mapsto m(u_{n+1} \mid u_n)$ is a bba on $2^\Omega$. To simplify, a bba $m$ on $2^\Omega$ will, in this article, be seen as a probability satisfying $m(\emptyset) = 0$.
Having (29), the evidential Markov chain (EMC) distribution is the Markovian distribution $p(x^N, u^N)$ obtained by normalizing, i.e., dividing by $\sum_{x^N, u^N} \varphi(x^N, u^N)$, the function $\varphi$ defined on $\Omega^N \times (2^\Omega)^N$ by
$$\varphi(x^N, u^N) = \varphi_1(x_1, u_1, x_2, u_2) \prod_{n=2}^{N-1} \varphi_n(u_n, x_{n+1}, u_{n+1}) \qquad (30)$$
where
$$\varphi_1(x_1, u_1, x_2, u_2) = \mathbb{1}[x_1 \in u_1,\ x_2 \in u_2]\, m(u_1)\, m(u_2 \mid u_1) \qquad (31)$$
$$\varphi_n(u_n, x_{n+1}, u_{n+1}) = \mathbb{1}[x_{n+1} \in u_{n+1}]\, m(u_{n+1} \mid u_n), \quad \text{for } n = 2, \ldots, N-1 \qquad (32)$$
Setting $\mathrm{Card}(u_n) = |u_n|$ and applying Lemma 1, we find, after calculation,
$$p(x_1, u_1) = \frac{\sum_{x_2, u_2} \varphi_1(x_1, u_1, x_2, u_2)\, \beta_2(u_2)}{\sum_{x_1, u_1} \sum_{x_2, u_2} \varphi_1(x_1, u_1, x_2, u_2)\, \beta_2(u_2)} = p(u_1)\, p(x_1 \mid u_1) \qquad (33)$$
$$p(x_{n+1}, u_{n+1} \mid x_n, u_n) = p(x_{n+1}, u_{n+1} \mid u_n) = p(u_{n+1} \mid u_n)\, p(x_{n+1} \mid u_{n+1}) \qquad (34)$$
with
$$p(x_n \mid u_n) = \frac{\mathbb{1}[x_n \in u_n]}{|u_n|}, \quad \text{for } n = 1, \ldots, N \qquad (35)$$
$$\text{for } n = 1, \ldots, N-1, \quad p(u_{n+1} \mid u_n) = \frac{|u_{n+1}|\, m(u_{n+1} \mid u_n)\, \beta_{n+1}(u_{n+1})}{\sum_{u_{n+1}} |u_{n+1}|\, m(u_{n+1} \mid u_n)\, \beta_{n+1}(u_{n+1})} \qquad (36)$$
where $\beta_N(u_N), \ldots, \beta_2(u_2)$ are computed with the following backward induction:
$$\beta_N(u_N) = 1; \quad \text{for } n = N-1, \ldots, 2, \quad \beta_n(u_n) = \sum_{x_{n+1}, u_{n+1}} \varphi_n(u_n, x_{n+1}, u_{n+1})\, \beta_{n+1}(u_{n+1})$$
Having the EMC $p(x^N, u^N)$, we define the hidden EMC (HEMC) $p(x^N, u^N, y^N)$ with
$$p(x^N, u^N, y^N) = p(u_1)\, p(x_1 \mid u_1)\, p(y_1 \mid x_1) \prod_{n=1}^{N-1} p(u_{n+1} \mid u_n)\, p(x_{n+1} \mid u_{n+1})\, p(y_{n+1} \mid x_{n+1}) \qquad (37)$$
with
$$p(x_n \mid u_n) = \frac{\mathbb{1}[x_n \in u_n]}{|u_n|}, \quad \text{for } n = 1, \ldots, N \qquad (38)$$
As HEMCs satisfy (24), to construct the hidden evidential Markov chain for the contextual Peano scan (HEMC-CPS), setting $Z_n = (X_n, U_n)$, we can use (25) and (26), which become
$$p_h(y_t \mid u_s) = \sum_{x_t, u_t} p_h(u_t \mid u_s)\, p(x_t \mid u_t)\, p(y_t \mid x_t) \qquad (39)$$
if $s, t$ are horizontal neighbors, and
$$p_v(y_t \mid u_s) = \sum_{x_t, u_t} p_v(u_t \mid u_s)\, p(x_t \mid u_t)\, p(y_t \mid x_t) \qquad (40)$$
if they are vertical neighbors. Then, following Definition 1, we obtain
$$p(y_v, y_s, y_w \mid z_s) = p(y_s \mid x_s)\, p_{a(v,s)}(y_v \mid u_s)\, p_{a(w,s)}(y_w \mid u_s) \qquad (41)$$
Finally, this leads to the HEMC-CPS distribution:
$$p(z^N, y^N) = \kappa\, p(u_1)\, p(x_1 \mid u_1) \prod_{n=1}^{N-1} p(u_{n+1} \mid u_n)\, p(x_{n+1} \mid u_{n+1}) \prod_{n=1}^{N} p(y_n, y_{v(n)}, y_{w(n)} \mid z_n) \qquad (42)$$
with $v(n), w(n)$ the two neighbors of pixel $n$ in the set of pixels which are not its neighbors in the Peano scan, and $p(x_n \mid u_n)$ satisfying (38) for $n = 1, \ldots, N$.
In practice, and in the experiments below, one often uses the following simple bba:
$$m(u_{n+1} \mid u_n) = 0 \quad \text{for } u_{n+1} \text{ outside of } \{\{1\}, \{2\}, \ldots, \{K\}, \{1, \ldots, K\}\} \qquad (43)$$
Thus, such a hidden evidential Markov chain is a light extension of the classic HMC, which we recover by taking $m(u_{n+1} = \{1, \ldots, K\} \mid u_n) = 0$ for each $n$ and each $u_n$. For $\Omega = \{1, \ldots, K\}$ and $m(u_{n+1} \mid u_n)$ satisfying (43), the parameters are as follows: $(K+1)^2$ probabilities $p^h = (p^h_{kl})_{1 \le k,l \le K+1}$, with $p^h_{kl} = p(u_{t+1} = l, u_t = k)$ for $t, t+1$ neighbors in the chain and horizontal neighbors in the image; $(K+1)^2$ probabilities $p^v = (p^v_{kl})_{1 \le k,l \le K+1}$, with $p^v_{kl} = p(u_{t+1} = l, u_t = k)$ for $t, t+1$ neighbors in the chain and vertical neighbors in the image; and $K$ means $m = (m_i)_{1 \le i \le K}$ and $K$ variances $\sigma^2 = (\sigma^2_i)_{1 \le i \le K}$ of the $K$ Gaussian distributions $(p(y_s \mid x_s = i))_{1 \le i \le K}$. Let us note that $p(x_n \mid u_n)$ is not a parameter, as it is not free and must satisfy (38). Finally, these parameters can be estimated using the SEM from Section 2.3 by replacing (19) with
$$p^{h,q+1}_{kl} = \frac{\sum_{(n,n+1) \in H^{q+1}} \sum_{i \in \Omega} \sum_{j \in \Omega} \mathbb{1}[u_n^{q+1} = k,\ x_n^{q+1} = i,\ x_{n+1}^{q+1} = j,\ u_{n+1}^{q+1} = l]}{\mathrm{Card}(H^{q+1})}, \quad \text{and similarly for } p^{v,q+1}_{kl} \qquad (44)$$
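To illustrate the evidential machinery, the sketch below (our naming; it reuses mpm_from_potentials from the Section 2.2 sketch) builds the Lemma 1 potentials of the $z = (x, u)$ chain restricted by the bba (43), i.e., the singletons plus $\Omega$, with $p(x_n \mid u_n)$ given by (38). The marginals $p(x_n \mid y^N)$ of (23) then follow, with the initial law and the data terms assumed given:

```python
import numpy as np

K = 2
U = [{1}, {2}, {1, 2}]                                    # focal sets kept by the bba (43)
Z = [(x, i) for i, u in enumerate(U) for x in sorted(u)]  # states z = (x, u) with x in u

def hemc_cps_potentials(p_u1, p_u, data_term):
    """Lemma 1 potentials for the z-chain of the HEMC-CPS.
    p_u1: (K+1,) initial law p(u_1); p_u: (K+1, K+1) transitions p(u_{n+1} | u_n)
    (obtained from the joint p_kl by row normalization); data_term: (N, len(Z))
    likelihoods p(y_n, y_v(n), y_w(n) | z_n), assumed evaluated as in (39)-(42)."""
    N = data_term.shape[0]
    phi = np.empty((N - 1, len(Z), len(Z)))
    for a, (_, ia) in enumerate(Z):
        for b, (_, ib) in enumerate(Z):
            # p(u'|u) p(x'|u') with p(x'|u') = 1/|u'|, Equations (34) and (38)
            phi[:, a, b] = p_u[ia, ib] / len(U[ib])
    init = np.array([p_u1[i] / len(U[i]) for (_, i) in Z])  # p(u_1) p(x_1 | u_1)
    phi[0] *= init[:, None]
    phi[0] *= data_term[0][:, None]       # phi_1 also carries the first data term
    phi *= data_term[1:][:, None, :]      # each phi_n carries the (n+1)-th data term
    return phi

# p(x_n | y^N) by (23): marginalize u out of the z-chain marginals
# marg_z = mpm_from_potentials(hemc_cps_potentials(p_u1, p_u, data_term))
# p_x = np.stack([marg_z[:, [a for a, (x, _) in enumerate(Z) if x == c]].sum(axis=1)
#                 for c in range(1, K + 1)], axis=1)
```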
Remark 6.
Let us briefly specify the motivation for introducing the theory of evidence in an image segmentation context. For the case of two classes $\{1, 2\}$, we have noticed (see [28,40] and the references therein) that, in small-sized areas, the a posteriori law of the evidential variable loads $p(u_{n+1} = \{1, 2\} \mid u_n, y^N)$ at the expense of $p(u_{n+1} = \{1\} \mid u_n, y^N)$ and $p(u_{n+1} = \{2\} \mid u_n, y^N)$, which has the effect of reducing the role of the context. This reduction appears to be beneficial for segmentation. Similarly, in large-sized areas, it loads $p(u_{n+1} = \{1\} \mid u_n, y^N)$ and $p(u_{n+1} = \{2\} \mid u_n, y^N)$ to the detriment of $p(u_{n+1} = \{1, 2\} \mid u_n, y^N)$, which also improves segmentation. In cases where both types of area exist in an image, this mechanism improves (sometimes significantly) the segmentation obtained with the classic Markovian model, without the third evidential variable, which treats all areas in the same way. It is worth noting that the value of Dempster–Shafer fusion in Markovian segmentation in a non-stationary data case has been found empirically, and we have no theoretical justification to propose at the moment. Why classical parameter estimation in evidential Markov models, which are classical triplet Markov models, provides parameters allowing for the double property mentioned above is not clear. Searching for theoretical justifications seems to be a challenging problem.

4. Experiments

4.1. Segmentation of Synthetic Images

This section shows some results of the unsupervised segmentation of hand-drawn noisy images based on the two models presented in this study, as well as on an HMC and an HMF for reference. We estimate the parameters using the SEM algorithms from Section 2.3 and Section 3.3, with initializations obtained from a k-means algorithm run on the observed data. HMF parameters are estimated with the Gibbsian EM algorithm, with its initialization also obtained from a k-means algorithm. According to the results presented in Figure 5, we can make the following observations.
The main conclusion is that, in the frame of classic hidden Markov chains, the contextual Peano scan-based method is systematically and significantly more efficient than the classic Peano scan method. The relative average gain is about 16%. Another important conclusion is that in some, though not all, cases, the efficiency of the CPS-based method is comparable to that of the hidden Markov field-based method. This is the case for the “Zebra”, “Zebra with target”, and “Spaghetti” images; in these three cases, the Markov field-based segmentation appears to be worse.
The value of extending hidden Markov chains to evidential Markov chains is less striking. However, it can significantly improve the results obtained with hidden Markov chains in the case of images simultaneously containing large areas and fine details. This means that extending HMC-CPS to HEMC-CPS should improve the model in the same fashion. This is confirmed through the experiments, particularly for “Tree” and “Zebra with letter”, for which HEMC-CPS does indeed improve HMC-CPS, while being inferior to HMF. The same occurs for “Digital” and “Nazca”, where HEMC-CPS outperforms both HMC-CPS and HMF. It should also be noted that HEMC-CPS outperforms the hidden evidential Markov field model (HEMF), presented in [40], for the “Nazca” image (HEMF scores 6.7% error, and HEMC-CPS 5.4%). Moreover, if we look at the “Spaghetti” and “Squares” images, we can see something interesting: when considering error rates, HMF is the best model, followed by HMC-CPS and then HEMC-CPS. However, some parts of the images (like the spaghetti shapes in “Spaghetti” and the three fine squares numbered 4, 5, 6 from the center of the image “Squares”) are segmented with more detail using HEMC-CPS, while HMC-CPS seems to miss some of the details and HMF misses even more. Finally, in theory, HEMC-CPS is strictly more general than HMC-CPS; however, in practice, it sometimes performs worse than HMC-CPS, as we can see for “Zebra” and “Zebra with target”. Of course, parameter estimation is more difficult in HEMC-CPS than in HMC-CPS, which is a possible explanation.

4.2. Segmentation of Real Images

We chose two real grayscale images and segmented them using the same four methods: HMC-PS, HMC-CPS, HEMC-CPS, and HMF. From the results presented in Figure 6, we can make several observations:
For “Fractured Bones”, while HMF gives a more satisfying segmentation of the bones, it is not able to segment the fractured zone, whereas this zone appears in the HMC-CPS segmentation. Both HEMC-CPS and HMC-PS perform quite badly on this image.
For “Tree in the field”, if we look at the segmentation of the fine tree branches, HMC-CPS segmentation seems the least detailed. HMC-PS seems more detailed than HMC-CPS, but less than HMF and HEMC-CPS. Moreover, it is very difficult to decide between the two latter models. In particular, both segmentations of fine branches seem almost identical. This is encouraging as it demonstrates two points: first, HEMC-CPS can be useful with respect to HMC-CPS in real situations; second, in the same kind of real situations, HEMC-CPS-based segmentation can be as efficient as HMF-based segmentation.
Finally, we will discuss the computational complexity of the different methods considered. MPM restoration in HMC and HEMC has a time complexity in $O(N)$, where $N$ is the length of the sequence, under the assumption that the number of hidden states is negligible compared to $N$. Under the same assumption, MPM in HMC-CPS and HEMC-CPS also has a time complexity in $O(N)$ if the neighborhood considered is not too large. For parameter estimation with SEM, the time complexity of one iteration of the algorithm is also $O(N)$ for HMC, HEMC, HMC-CPS, and HEMC-CPS. If we compare with HMF, the exact MPM time complexity is not calculable. However, one full iteration of Gibbs sampling has a time complexity of $O(N)$, and since approximate MPM needs several simulations, each obtained by running several iterations of Gibbs sampling, the total number of Gibbs iterations becomes non-negligible compared to $N$ for a good MPM approximation. The same can be said of the Gibbsian EM algorithm, in which each iteration needs several simulations, each of them requiring a number of Gibbs sampling iterations. In our experiments, this made HMF-based MPM about a hundred times slower than the HMC-CPS- and HEMC-CPS-based ones.
To conclude, we only identified one case where the extension from Markov to evidential Markov is relevant, which is the segmentation of images combining fine details (about one pixel wide) and low noise level, but finding real-world applications presenting these particular properties is challenging. Nonetheless, we showed that both HMC-CPS and HEMC-CPS can be useful as they both improve upon HMC, while being a faster and still-relevant alternative to HMF-based MPM, even possibly in cases of real images. Furthermore, other cases where HEMC-CPS significantly outperforms HMC-CPS may remain to be discovered.

5. Conclusions and Perspectives

Using hidden Markov chains to design fast unsupervised image segmentation methods via the Peano scan (PS) is feasible. In this article, we investigated how the introduction of the “contextual” Peano scan (CPS) allows for the improvement of classical methods. According to the detailed experiments, introducing the CPS can significantly improve Bayesian unsupervised segmentations, with the average improvement in the error ratio being about 15%. Furthermore, in some situations, the quality of CPS-based segmentations can approach that of classic hidden Markov field-based ones, while being much faster to obtain. Moreover, hidden Markov chains are generalizable to pairwise and triplet Markov chains, and classic PS-based methods are generalizable to more sophisticated models; the same holds for CPS-based models. We tested one of these models, based on the theory of evidence. General conclusions are more difficult to draw; however, its value becomes apparent in particular situations, such as when both large- and small-sized areas are present in the hidden class image. The efficiency of MPM based on such evidential models can even match that of MPM based on HMFs in the case of real images, while being much faster.
In this study, we considered non-causal inferences. For $1 < n < N$, we considered $X_n$ and $\mathbf{Y}_n = (Y_n, \tilde{Y}_n)$, with $\tilde{Y}_n = (Y_{v(n)}, Y_{w(n)})$, where $v(n), w(n)$ are pixel neighbors of $n$ lying outside of the scan. Stochastic inferences in such models are illustrated by the dependency graph shown in Figure 7a. However, similar models remain valid for causal inference; the corresponding graph is the directed graph shown in Figure 7b. Thus, the general structure of the family of models considered is suitable for both causal and non-causal situations.
In terms of research perspectives, let us note that the flexibility of the Peano scan allows for its use in 3D images [41] and sequences of images [18], which are significantly more complex than the fixed images considered in this study, and in which using HMF techniques is very time-consuming. Thus, the proposed CPS-based techniques are likely to be of interest in such complex situations.
Finally, the usefulness of the models presented here is not limited to image segmentation. They can be applied to the processing of any spatially correlated multi-dimensional data, once a scan linking the variables under consideration has been established. There are numerous “multi-dimensional space-filling curves”, which are scans mapping a multi-dimensional space to a one-dimensional domain [42]. For example, the models considered in this paper would be directly applicable to the multi-dimensional data classification problem, once such a curve and a neighborhood system are chosen.

Author Contributions

Conceptualization, C.F. and W.P.; Methodology, C.F. and W.P.; Software, C.F.; Validation, C.F.; Formal analysis, C.F. and W.P.; Investigation, C.F. and W.P.; Writing—original draft, C.F. and W.P.; Writing—review & editing, C.F.; Supervision, W.P.; Funding acquisition, W.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Association of Research and Technology (ANRT), grant number 2018/0776.

Data Availability Statement

The original data and code presented in the study are openly available at https://github.com/Ultimawashi/HMC-HEMC-CPS.git (accessed on 1 May 2025).

Acknowledgments

The authors thank Julien Fouth and Mokrane Abdiche for their involvement in the project, as well as Segula Matra Automotive and the National Association of Research and Technology (ANRT) for their financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848. [Google Scholar] [CrossRef] [PubMed]
  2. Donahue, J.; Hendricks, L.A.; Guadarrama, S.; Rohrbach, M.; Venugopalan, S.; Saenko, K.; Darrell, T. Long-term recurrent convolutional networks for visual recognition and description. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 677–691. [Google Scholar] [CrossRef] [PubMed]
  3. Majurski, M.; Manescu, P.; Padi, S.; Schaub, N.; Hotaling, N.; Simon, C., Jr.; Bajcsy, P. Cell image segmentation using generative adversarial networks, transfer learning, and augmentations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 15–20 June 2019; pp. 1114–1122. [Google Scholar]
  4. Rong, Y.; Jia, M.; Zhan, Y.; Zhou, L. SR-RDFAN-LOG: Arbitrary-scale logging image super-resolution reconstruction based on residual dense feature aggregation. Geoenergy Sci. Eng. 2024, 240, 213042. [Google Scholar] [CrossRef]
  5. Besag, J. On the statistical analysis of dirty pictures. J. R. Stat. Soc. Ser. B 1986, 48, 259–302. [Google Scholar] [CrossRef]
  6. Geman, S.; Geman, D. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 1984, 6, 721–741. [Google Scholar] [CrossRef]
  7. Marroquin, J.; Mitter, S.; Poggio, T. Probabilistic solution of ill-posed problems in computational vision. J. Am. Stat. Assoc. 1987, 82, 76–89. [Google Scholar] [CrossRef]
  8. Heitz, F.; Perez, P.; Bouthemy, P. Multiscale Minimization of Global Energy Functions in Some Visual Recovery Problems. CVGIP Image Underst. 1994, 59, 125–134. [Google Scholar] [CrossRef]
  9. Mignotte, M.; Collet, C.; Pérez, P.; Bouthemy, P. Three-Class Markovian Segmentation of High-Resolution Sonar Images. Comput. Vis. Image Underst. 1999, 76, 191–204. [Google Scholar] [CrossRef]
  10. Mignotte, M.; Collet, C.; Pérez, P.; Bouthemy, P. Markov Random Field and Fuzzy Logic Modeling in Sonar Imagery: Application to the Classification of Underwater Floor. Comput. Vis. Image Underst. 2000, 79, 4–24. [Google Scholar] [CrossRef]
  11. Ruan, S.; Moretti, B.; Fadili, J.; Bloyet, D. Fuzzy Markovian Segmentation in Application of Magnetic Resonance Images. Comput. Vis. Image Underst. 2002, 85, 54–69. [Google Scholar] [CrossRef]
  12. Antonelli, L.; Simone, V.D.; di Serafino, D. A view of computational models for image segmentation. Ann. Dell’Universita Ferrara 2022, 68, 277–294. [Google Scholar] [CrossRef]
  13. Bhar, R.; Hamori, S. Hidden Markov Models: Applications to Financial Economics; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2006. [Google Scholar]
  14. Cappé, O.; Moulines, E.; Ryden, T. Inference in Hidden Markov Models; Series in Statistics; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  15. Knerr, S.; Augustin, E.; Baret, O.; Price, D. Hidden Markov Model Based Word Recognition and Its Application to Legal Amount Reading on French Checks. Comput. Vis. Image Underst. 1998, 70, 404–419. [Google Scholar] [CrossRef]
  16. Li, J.; Gray, R.M. Image Segmentation and Compression Using Hidden Markov Models; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  17. Koski, T. Hidden Markov Models for Bioinformatics; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  18. Benmiloud, B.; Pieczynski, W. Estimation des paramètres dans les chaînes de Markov cachées et segmentation d’images. Trait. Signal 1995, 12, 433–454. [Google Scholar]
  19. Salzenstein, F.; Collet, C. Fuzzy Markov random fields versus chains for multispectral image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1753–1767. [Google Scholar] [CrossRef]
  20. Ameur, M.; Idrissi, N.; Daoudi, C. Triplet Markov chain in images segmentation. In Proceedings of the International Conference on Intelligent Systems and Computer Vision, ISCV, Fez, Morocco, 2–4 April 2018. [Google Scholar]
  21. Hafiane, A.; Chaudhuri, S. A modified FCM with optimal Peano Scan for image segmentation. In Proceedings of the IEEE International Conference on Image Processing, Genova, Italy, 14 September 2005; Volume 3, p. 840. [Google Scholar]
  22. Mari, J.-F.; Le Ber, F. Temporal and spatial data mining with second-order hidden Markov models. Soft Comput. 2006, 10, 406–414. [Google Scholar] [CrossRef]
  23. Paroli, R.; Spezia, L. Reversible jump Markov chain Monte Carlo method and segmentation algorithms in hidden Markov models. Aust. N. Z. J. Stat. 2010, 52, 151–166. [Google Scholar] [CrossRef]
  24. Provost, J.-N.; Collet, C.; Pérez, P.; Bouthemy, P. Unsupervised multispectral segmentation of SPOT images applied to nautical cartography. In Proceedings of the IEEE-EURASIP Workshop on Nonlinear Signal and Image Processing, Antalya, Turkey, 20–23 June 1999. [Google Scholar]
  25. Carrincotte, C.; Derrode, S.; Bourennane, S. Unsupervised change detection on SAR images using fuzzy hidden Markov chains. IEEE Trans. Geosci. Remote Sens. 2006, 44, 432–441. [Google Scholar] [CrossRef]
  26. Le Cam, S.; Salzenstein, F.; Collet, C. Fuzzy pairwise Markov chain to segment correlated noisy data. Signal Process. 2008, 88, 2526–2541. [Google Scholar] [CrossRef]
  27. Fernandes, C.; Monti, T.; Monfrini, E.; Pieczynski, W. Fast image segmentation with contextual scan and Markov chains. In Proceedings of the 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, 23–27 August 2021. [Google Scholar]
  28. Boudaren, M.Y.; Pieczynski, W. Dempster-Shafer fusion of evidential pairwise Markov chains. IEEE Trans. Fuzzy Syst. 2016, 24, 1598–1610. [Google Scholar] [CrossRef]
  29. Denoeux, T. 40 years of Dempster-Shafer theory. Int. J. Approx. Reason. 2016, 79, 1–6. [Google Scholar] [CrossRef]
  30. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976. [Google Scholar]
  31. Ramasso, E.; Denoeux, T. Making use of partial knowledge about hidden states in HMMs: An approach based on belief functions. IEEE Trans. Fuzzy Syst. 2017, 22, 1102–1114. [Google Scholar] [CrossRef]
  32. Ramasso, E. Inference and learning in evidential discrete latent Markov models. IEEE Trans. Fuzzy Syst. 2014, 25, 395–405. [Google Scholar] [CrossRef]
  33. Soubaras, H. On Evidential Markov Chains, Studies in Fuzziness and Soft Computing; Springer Nature: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  34. Soubaras, H.; Labreuche, C.; Saveant, P. Evidential Markov decision processes. Lect. Notes Comput. Sci. 2011, 6717, 338–349. [Google Scholar]
  35. Abbes, A.B.; Farah, M.; Farah, I.; Barra, V. A non-stationary NDVI time series modelling using triplet Markov chain. Int. J. Inf. Decis. Sci. 2019, 19, 163–179. [Google Scholar] [CrossRef]
  36. Chen, S.; Jiang, X. Modeling repayment behavior of consumer loan in portfolio across business cycle: A Triplet Markov Model approach. Complexity 2020, 2020, 5458941. [Google Scholar] [CrossRef]
  37. Baum, L.E.; Petrie, T.; Soules, G.; Weiss, N. A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of Markov Chains. Ann. Math. Stat. 1970, 41, 164–171. [Google Scholar] [CrossRef]
  38. McLachlan, G.J.; Krishnan, T. EM Algorithm and Extensions; Series in Probability and Statistics; Wiley: Hoboken, NJ, USA, 1997. [Google Scholar]
  39. Chalmond, B. An iterative Gibbsian technique for reconstruction of m-ary images. Pattern Recognit. 1989, 22, 747–761. [Google Scholar] [CrossRef]
  40. An, L.; Li, M.; Boudaren, M.E.Y.; Pieczynski, W. Unsupervised segmentation of hidden evidential Markov fields corrupted by correlated non-Gaussian noise. Int. J. Approx. Reason. 2018, 102, 41–59. [Google Scholar] [CrossRef]
  41. Bricq, S.; Collet, C.; Armspach, J.P. Unifying framework for multimodal brain MRI segmentation based on Hidden Markov Chains. Med. Image Anal. 2008, 12, 639–652. [Google Scholar] [CrossRef]
  42. Mokbel, M.; Aref, W.G. Irregularity in multi-dimensional space-filling curves with applications in multimedia databases. In Proceedings of the CIKM—ACM International Conference on Information and Knowledge Management, Atlanta, GA, USA, 5–10 November 2001. [Google Scholar]
Figure 1. Construction of the Peano scan. (a): image of four pixels; (b): image of sixteen pixels; (c): image of sixty-four pixels.
Figure 2. Six possible spatial configurations (a–f) of the added neighbors $v, w$ (green) relative to the neighbors $r, t$ on the scan, for a pixel $s$ not on the border (blue).
Figure 3. Pixel numbers for the image in Figure 1b, with the Peano scan beginning in the upper-left corner.
Figure 4. Observations associated with pixels 1, 2, 3, …, 16 in Figure 3, which are $(y_1, y_4)$, $(y_2, y_{15})$, $(y_{14}, y_3, y_8)$, $(y_1, y_4)$, $(y_5, y_8)$, $(y_6)$, $(y_7, y_{10})$, $(y_8, y_3, y_5)$, …, $(y_{16}, y_{13})$.
Figure 5. Unsupervised segmentation of seven images with Gaussian noise using four Bayesian MPM methods: classic hidden Markov chains and classic Peano scan (HMC-PS), classic HMC and contextual Peano scan (HMC-CPS), evidential HMC and contextual Peano scan (HEMC-CPS), and classic hidden Markov field (HMF).
Figure 6. Unsupervised segmentation of two real grayscale images using four Bayesian MPM methods: classic hidden Markov chains and classic Peano scan (HMC-PS), classic HMC and contextual Peano scan (HMC-CPS), evidential HMC and contextual Peano scan (HEMC-CPS), and classic hidden Markov field (HMF).
Figure 7. Undirected dependence graph (a) and directed one (b), related to non-causal and causal situations, respectively.
