# A Joint Land Cover Mapping and Image Registration Algorithm Based on a Markov Random Field Model


## Abstract


## 1. Introduction

## 2. Problem Statement

Let the LCM be denoted by X($\mathcal{S}$) ∈ Λ^{$\mathcal{S}$}, where Λ is the set of land cover class labels. The label of the LCM at pixel s is denoted by x_{s}, which can also be called the configuration of X($\mathcal{S}$) at the site s. Since land cover classes are more likely to occur in the LCM as connected patches than as isolated pixels, the LCM is assumed to satisfy the MRF properties with Gibbs potential V_{C}(X). Hence, the marginal probability density function (PDF) of an LCM can be written as

$$\Pr(X) = \frac{1}{Z_X} e^{-E(X)}$$

where Z_{X} is a normalizing constant, C is a clique, and $E(X) = \sum_{C \subset \mathcal{S}} V_C(X)$ is called the Gibbs energy function [14]. Cliques are singletons or groups of pixels such that any two pixels are mutually neighbors. Figure 1 shows all possible clique types for four- and eight-neighborhood systems. The value of the Gibbs potential function depends on the configurations of the entire LCM and the clique. Usually, low values of the potential function correspond to similar configurations, whereas high values correspond to dissimilar configurations of a clique. For instance, the Ising model [11,13] is given by

$$V_{\{s,r\}}(x_s, x_r) = \begin{cases} -\beta, & x_s = x_r \\ +\beta, & x_s \neq x_r \end{cases}, \qquad r \in N_s$$

where N_{s} is the set of neighboring pixels of s. We can extend the above model to our problem by letting x_{s} and x_{r} be the class labels of pixels s and r in $\mathcal{S}$, respectively. With this modification, the Ising model can be applied to describe the LCM because land cover class distributions are similar to the phenomenon described above (i.e., classes occupying neighboring pixels are likely to have the same label).
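To make the energy concrete, the following sketch (NumPy, not the authors' code) evaluates the Ising-type Gibbs energy of a candidate LCM, assuming a four-neighborhood and pairwise cliques only:

```python
import numpy as np

def ising_energy(lcm, beta):
    """Gibbs energy of a label map under the Ising-type pairwise model:
    each four-neighbor clique {s, r} contributes -beta if x_s == x_r,
    and +beta otherwise."""
    # Horizontal and vertical neighbor pairs
    h_agree = lcm[:, :-1] == lcm[:, 1:]
    v_agree = lcm[:-1, :] == lcm[1:, :]
    agree = h_agree.sum() + v_agree.sum()
    total = h_agree.size + v_agree.size
    # -beta per agreeing clique, +beta per disagreeing clique
    return beta * (total - 2 * agree)

# A uniform map has low energy; a checkerboard has high energy.
uniform = np.zeros((4, 4), dtype=int)
checker = np.indices((4, 4)).sum(axis=0) % 2
assert ising_energy(uniform, 0.75) < ising_energy(checker, 0.75)
```

Low energy corresponds to high prior probability, which is exactly why connected patches are favored over isolated pixels.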

Let Y_{n}($\mathcal{T}$_{n}) ∈ ℜ^{|$\mathcal{T}$_{n}| × B_{n}}, n = 1, 2, …, N, denote the n-th remotely sensed image, where B_{n} denotes the number of spectral bands, and $\mathcal{T}$_{n} is a map coordinate system to which the n-th remote sensing image is registered. Since all remotely sensed images and the LCM are from the same scene, the relationship between $\mathcal{S}$ and $\mathcal{T}$_{n} can be determined. Let us denote the coordinate of a pixel s in the LCM as s = (i, j), where i and j are the column and row of s. Similarly, we can write t_{n} = (u_{n}, v_{n}) ∈ $\mathcal{T}$_{n}, where u_{n} and v_{n} are the column and row of the pixel t_{n} in Y_{n}. In this paper, we employ the affine transformation, so the relationship between s and t_{n} can be written as

$$\begin{bmatrix} u_n \\ v_n \end{bmatrix} = \begin{bmatrix} m_{1,n} & m_{2,n} \\ m_{3,n} & m_{4,n} \end{bmatrix} \begin{bmatrix} i \\ j \end{bmatrix} + \begin{bmatrix} m_{5,n} \\ m_{6,n} \end{bmatrix}$$

where m_{1,n} and m_{4,n} are scale parameters, m_{2,n} and m_{3,n} are skew parameters, and m_{5,n} and m_{6,n} are displacement parameters in the column and row directions, respectively. We refer to M_{n} = [m_{1,n}, m_{2,n}, m_{3,n}, m_{4,n}, m_{5,n}, m_{6,n}] as the map parameter vector between the coordinate systems $\mathcal{S}$ and $\mathcal{T}$_{n}. Note here that our work can be applied to other types of image mapping as well.
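As an illustration of the mapping, the following minimal sketch applies a map parameter vector M_{n} to LCM coordinates; `map_coords` is a hypothetical helper assuming the (column, row) parameter arrangement described above, not code from the paper:

```python
import numpy as np

def map_coords(cols, rows, M):
    """Map LCM coordinates s = (i, j) into image coordinates t_n = (u_n, v_n)
    with M = [m1, m2, m3, m4, m5, m6]:
        u = m1*i + m2*j + m5
        v = m3*i + m4*j + m6
    (one common arrangement of the six affine parameters)."""
    m1, m2, m3, m4, m5, m6 = M
    u = m1 * cols + m2 * rows + m5
    v = m3 * cols + m4 * rows + m6
    return u, v

# The identity parameter vector [1, 0, 0, 1, 0, 0] leaves coordinates unchanged.
i = np.array([0.0, 3.0]); j = np.array([0.0, 2.0])
u, v = map_coords(i, j, [1, 0, 0, 1, 0, 0])
assert np.allclose(u, i) and np.allclose(v, j)
```

With this convention, unit scale means m1 = m4 = 1, zero skew means m2 = m3 = 0, and zero displacement means m5 = m6 = 0, matching M = [1,0,0,1,0,0].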

Let **M** = {M_{1}, …, M_{N}} and **Y** = {Y_{1}($\mathcal{T}$_{1}), …, Y_{N}($\mathcal{T}$_{N})} be the collections of the map parameters and the observed multispectral images. We observe that Equation (5) is similar to the hidden Markov model used in [25]. Moreover, the intensity vectors from different pixels in Z_{n}($\mathcal{S}$) are also assumed to be statistically independent when the LCM is given. Hence, the joint conditional PDF can be written as

$$\Pr(Y_n | X, M_n) = \prod_{s \in \mathcal{S}} p(\mathbf{z}_{n,s} | x_s)$$

where **z**_{n,s} ∈ ℝ^{B_{n}} denotes the intensity vector of the remapped image Z_{n}($\mathcal{S}$) at a pixel s. We acknowledge that the assumption given in Equation (6) may not hold in all cases, since some land cover classes have a textural structure. One could incorporate texture information into our image model, which might further increase the accuracy. This would, however, lead to an extremely complex problem, which may not be desirable in practice.

If we assume that the intensity vector of Z_{n}($\mathcal{S}$) given the class label x_{s} is a multivariate normal random vector with mean vector **μ**_{x_{s},n} and covariance matrix Σ_{x_{s},n}, Equation (6) can be rewritten as

$$\Pr(Y_n | X, M_n) = \prod_{s \in \mathcal{S}} \frac{1}{\sqrt{(2\pi)^{B_n} \left|\Sigma_{x_s,n}\right|}} \exp\!\left(-\frac{1}{2}\left(\mathbf{z}_{n,s} - \boldsymbol{\mu}_{x_s,n}\right)^{T} \Sigma_{x_s,n}^{-1} \left(\mathbf{z}_{n,s} - \boldsymbol{\mu}_{x_s,n}\right)\right)$$
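The class-conditional term can be sketched as a per-pixel multivariate normal log-density; this is a standard implementation of the Gaussian model above, not the authors' code:

```python
import numpy as np

def class_loglik(z, mu, cov):
    """Log-density of the multivariate normal class-conditional model
    p(z | x_s = k) with per-class mean `mu` and covariance `cov`."""
    B = len(mu)                                  # number of spectral bands
    d = np.asarray(z, dtype=float) - np.asarray(mu, dtype=float)
    _, logdet = np.linalg.slogdet(cov)           # log |Sigma|
    maha = d @ np.linalg.solve(cov, d)           # squared Mahalanobis distance
    return -0.5 * (B * np.log(2 * np.pi) + logdet + maha)
```

Per-pixel classification then amounts to comparing these log-likelihoods across classes, together with the MRF neighborhood term.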

Since Pr(**Y**, **M**) is independent of the choice of X, it can be treated as a constant. Hence, we have

$$\Pr(X | \mathbf{Y}, \mathbf{M}) = \frac{1}{Z'} e^{-E(X | \mathbf{Y}, \mathbf{M})}$$

where $Z' = \sum_{X \in \Lambda^{\mathcal{S}}} e^{-E(X|\mathbf{Y},\mathbf{M})}$ is a normalizing constant independent of the choice of X, E(X|**Y**, **M**) is the conditional Gibbs energy function, and N_{s} denotes the set of neighboring pixels of s. The normalizing constant Z′ cannot be computed in most practical scenarios due to the large number of possible configurations (e.g., there are more than 2^{4096} possible configurations for a binary LCM of size 64 × 64). As a result, we propose to use the mean field theorem [26,27] to remove the interaction between neighboring pixels defined in V_{C}(X). The mean field theorem approximates the conditional Gibbs energy function as

follows: each pairwise potential V_{{s,r}}(x_{s}, x_{r}) is replaced by E_{x_{r}}[V_{{s,r}}(X)], the expected value of the potential function with respect to the configuration of x_{r}. The expected value E_{x_{r}}[V_{{s,r}}(x_{s}, x_{r})] does not depend on x_{r}, and for the Ising model it is equal to

$$E_{x_r}\!\left[V_{\{s,r\}}(x_s, x_r)\right] = -\beta \, p(x_r = x_s | \mathbf{Y}, \mathbf{M}) + \beta \left(1 - p(x_r = x_s | \mathbf{Y}, \mathbf{M})\right)$$

It can be shown that, among all fully factorized distributions of the form ∏_{s∈$\mathcal{S}$} p(x_{s}|**Y**, **M**), the approximation in Equation (16) is closest to Pr(X|**Y**, **M**) when the Kullback–Leibler (KL) divergence [27,28] is used as the distance measure.
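A mean-field style sweep of this kind can be sketched as follows, assuming the Ising pairwise model above, a four-neighborhood, and illustrative array shapes; this is an approximation in the spirit of Equation (16), not the paper's exact update:

```python
import numpy as np

def mean_field_update(loglik, prob, beta, n_iter=10):
    """Illustrative mean-field sweep for the Ising pairwise model on a
    four-neighborhood. loglik: (H, W, K) per-pixel class log-likelihoods;
    prob: (H, W, K) current factorized marginals p(x_s = k)."""
    for _ in range(n_iter):
        # Expected neighbor support for each class via zero-padded shifts.
        p = np.pad(prob, ((1, 1), (1, 1), (0, 0)))
        nbr = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        # Per neighbor, E[V_{s,r}] = beta * (1 - 2 * p(x_r = k)); summing over
        # four neighbors gives beta * (4 - 2 * nbr). The constant 4*beta is
        # shared by all classes at a pixel, so it cancels after normalization.
        energy = -loglik + beta * (4.0 - 2.0 * nbr)
        prob = np.exp(-(energy - energy.min(axis=-1, keepdims=True)))
        prob /= prob.sum(axis=-1, keepdims=True)
    return prob
```

Each pixel's marginal is updated from its own likelihood and the current marginals of its neighbors only, which is what makes the factorized approximation tractable.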

## 3. Optimum Image Registration and Land Cover Mapping Criteria

#### 3.1. Optimum Image Registration

To obtain the maximum likelihood estimate (MLE) of the map parameters, the likelihood function Pr(Y_{1}, …, Y_{N}|M_{1}, …, M_{N}) must be calculated, and it is given by Equation (18), where Z_{n} is the remapped and resampled version of Y_{n}. Since Equation (18) is written as a multiplication of the terms ∑_{X∈Λ^{$\mathcal{S}$}} Pr(Z_{n}|X($\mathcal{S}$)) Pr(X($\mathcal{S}$)), the solution of Equation (17) can be obtained individually for each image, as in Equation (19).

There may be multiple values of M_{n} that maximize Equation (19). For instance, if M_{n} = [1,0,0,1,0,0] is the solution of Equation (19) for $\mathcal{S}$ = {(0,0), (0,1), (1,0), (1,1)}, we know that M′_{n} = [1,0,0,1,1,0] is also a solution of Equation (19) for $\mathcal{S}$′ = {(0, −1), (0,0), (1, −1), (1,0)}. As a result, it is imperative to limit the search space and the number of possible solutions. Furthermore, in most practical situations, we may wish to produce the LCM registered to one of the input remote sensing images. Without loss of generality, we assume that the LCM is registered to Y_{1}, i.e., we have M_{1} = ${M}_{1}^{*}$ = [1,0,0,1,0,0].

Even for a small binary LCM of size 100 × 100, there are 2^{10,000} ≈ 2 × 10^{3,010} possible binary LCMs. Therefore, the direct calculation of Equation (19) is an impossible task, and hence, the solution of the MLE cannot be obtained in reasonable time. As a result, the expectation-maximization (EM) algorithm [24] is employed instead. The EM algorithm is an iterative parameter estimator that produces a new estimate at every iteration. It has been shown in [24] that this new estimate always results in a higher, or at least the same, value of the likelihood function. In other words, if we let ${M}^{t}=\{{M}_{1}^{t},{M}_{2}^{t},\dots ,{M}_{N}^{t}\}$ be the collection of all estimated parameters at the t-th iteration of the EM algorithm, we have Pr(Y_{1}, …, Y_{N}|M^{t}) ≥ Pr(Y_{1}, …, Y_{N}|M^{t−1}), where M^{t−1} is the collection of estimated parameters at the (t−1)-th iteration. Here, and throughout the rest of the paper, we omit $\mathcal{S}$ and $\mathcal{T}$_{n} for the sake of brevity. In Section 4, we discuss the details of the EM algorithm employed in this work and how it can be combined with the land cover mapping process. However, before going into the details of the proposed algorithm, let us state the optimization criterion for the land cover mapping considered in this paper.

#### 3.2. Optimum Land Cover Map

The posterior Pr(X|**Y**, **M**) is a non-concave function and, therefore, conventional gradient-based optimization algorithms are not applicable to the solution of Equation (20). Furthermore, the number of possible solutions is very large, so a direct search for the solution of Equation (20) is too expensive to be practically implemented. As a result, we again use the mean field theorem [26,27] to remove the interaction between neighboring pixels defined in V_{C}(X). Hence, by substituting Equation (16) into Equation (20), the optimization problem factorizes over pixels. Since p^{MF}(x_{s}|**Y**, **M**) is a non-negative function, the optimum solution can be obtained from each individual function, i.e., for each s ∈ $\mathcal{S}$, we select the label x_{s} that maximizes p^{MF}(x_{s}|**Y**, **M**).

## 4. Joint Image Registration and Land Cover Mapping Algorithm

Here, **Y** = {Y_{1}, …, Y_{N}} is the set of all observed remotely sensed images, **M** = {M_{1}, …, M_{N}} is the set of all unknown map parameters, and ${\mathit{M}}^{t}=\{{M}_{1}^{t},\dots ,{M}_{N}^{t}\}$ is the set of all estimated parameters at the t-th iteration of the EM algorithm. Note here that ${M}_{1}^{t}={M}_{1}^{*}$. By substituting Equations (1) and (7) into Equation (24), the expected value becomes Equation (25). The terms |Σ_{x_{s},n}|, log(2π)^{B_{n}}, ∑_{C⊂$\mathcal{S}$} V_{C}(X), and Z_{X} in Equation (25) do not depend on **M**; hence, Equation (25) can be simplified accordingly. Since **z**_{n,s} depends only on M_{n}, and the right-hand side of Equation (30) is written as a summation of terms involving **z**_{n,s} from different images, the above optimization problem can be rearranged into the optimization of each individual mapping parameter vector, i.e., Equation (32).

Here, p^{MF}(x_{s}|**Y**, **M**^{t}) is approximated by recalculating h_{s}(x_{s}|**Y**, **M**^{t}). We follow the work of Zhang [28], which suggested that h_{s}(x_{s}|**Y**, **M**^{t}) can be obtained from the combination of two potential terms, where h_{obv}(x_{s}|**Z**) and h_{ng}(x_{s}|X_{NG}) are the potential functions depending upon the observation and the neighboring pixels, respectively.

Since h_{s}(x_{s}|**Y**, **M**^{t}) is recalculated at every iteration of the EM algorithm, we can choose the land cover class that minimizes h_{s}(x_{s}|**Y**, **M**^{t}) and thereby obtain the optimum LCM based on the criterion given in Equation (23). By combining the EM algorithm given in Figure 2 with the land cover mapping process that minimizes Equation (23), the joint image registration and land cover mapping algorithm is given as follows:

- (1) Initialize the map parameters, i.e., ${M}_{1}^{0}={M}_{1}^{*}$ and ${\mathit{M}}^{0}=\{{M}_{1}^{0},\dots ,{M}_{N}^{0}\}$; let t = 1; and assign p^{MF}(x_{s}|**Y,M**^{0}) based on some prior knowledge.
- (2) Compute ${Q}_{n}^{\mathit{MF}}\left({M}_{n}||{\mathit{M}}^{t-1}\right)$ for n = 2, …, N.
- (3) Obtain ${M}_{n}^{t}$ by solving Equation (32) for n = 2, …, N, and assign ${M}_{1}^{t}={M}_{1}^{*}$ and ${\mathit{M}}^{\mathit{t}}=\left[{M}_{1}^{t},\dots ,{M}_{N}^{t}\right]$.
- (4) Recalculate h_{s}(x_{s}|**Y**,**M**^{t}) for all s ∈ $\mathcal{S}$.
- (5) Find the new LCM that minimizes h_{s}(x_{s}|**Y**,**M**^{t}) for all s ∈ $\mathcal{S}$.
- (6) Let t = t + 1, and go to Step 2 if the convergence criterion is not satisfied.
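The control flow of steps (1)–(6) can be sketched as follows. The two inner updates are deliberately trivial placeholders standing in for the PSO solution of Equation (32) (M-step) and the recalculation of h_{s} (E-step), so only the loop structure is meaningful; none of the function names come from the paper:

```python
import numpy as np

def m_step_placeholder(image, probs, M_n):
    return M_n  # stand-in for maximizing Q_n^MF over M_n (Equation (32))

def e_step_placeholder(images, M, probs):
    return probs  # stand-in for recomputing the class marginals via h_s

def joint_register_and_map(images, init_probs, max_iter=50, p_min=1e-5):
    N = len(images)
    # Step (1): identity map parameters; image 1 is fixed to the LCM grid.
    M = [np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0]) for _ in range(N)]
    probs = init_probs
    lcm = probs.argmax(axis=-1)
    for t in range(1, max_iter + 1):
        for n in range(1, N):                        # steps (2)-(3): images 2..N
            M[n] = m_step_placeholder(images[n], probs, M[n])
        probs = e_step_placeholder(images, M, probs)  # step (4)
        new_lcm = probs.argmax(axis=-1)               # step (5)
        if np.mean(new_lcm != lcm) < p_min:           # step (6): label changes
            return new_lcm, M, t
        lcm = new_lcm
    return lcm, M, max_iter
```

The key structural point is that the first image's parameters are never re-estimated, which removes the translation ambiguity discussed in Section 3.1.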

We employ the particle swarm optimization (PSO) algorithm [32] to maximize Q^{MF}(M||M^{t−1}), since gradient-based techniques cannot be applied due to its non-convexity. The PSO exploits the cooperative behavior of groups of animals such as birds and insects. In the PSO, an individual animal is called a particle, while a group of animals is called a swarm. Initially, the particles are distributed throughout the search space and move around it. Based on social and cooperative criteria, the particles eventually cluster in the regions where the global optima can be found.

To maximize ${Q}_{n}^{\mathit{MF}}\left({M}_{n}||{\mathit{M}}^{t-1}\right)$, each particle represents a mapping parameter vector, and we denote the i-th particle as M_{n,i}. At each iteration, the i-th particle moves with a velocity V_{i}, which is a function of the best-known positions (mapping parameters) discovered by the i-th particle itself (P_{i}) and by the entire swarm (G), i.e.,

$$V_i \leftarrow \omega V_i + \varphi_1 u_1 \left(P_i - M_{n,i}^{r}\right) + \varphi_2 u_2 \left(G - M_{n,i}^{r}\right), \qquad M_{n,i}^{r+1} = M_{n,i}^{r} + V_i$$

where ω is the inertia weight, φ_{1} and φ_{2} are acceleration constants, and u_{1} and u_{2} are uniform random numbers between zero and one. The velocity is usually kept within the range [V_{min}, V_{max}] to ensure that ${M}_{n,i}^{r}$ stays in the valid region. Note here that the performance of the PSO depends on the selection of ω, φ_{1}, and φ_{2}, and on the number of iterations. In this paper, we set the number of particles to 80 and the maximum number of iterations to 200 as a suitable setup for our experiment. We acknowledge that different setups of these parameters may result in different convergence rates. However, the investigation of the optimum parameter selection of the PSO in terms of convergence rate is beyond the scope of this paper. We refer the reader to the study in [33] for further details.
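A generic PSO maximizer following the velocity rule above can be sketched as follows; the parameter values, clipping scheme, and function names are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def pso_maximize(f, lo, hi, n_particles=80, n_iter=200,
                 w=0.7, phi1=1.5, phi2=1.5, seed=0):
    """PSO sketch: V <- w*V + phi1*u1*(P_i - X) + phi2*u2*(G - X); X <- X + V,
    with velocities clipped so particles stay near the valid region."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(n_particles, dim))   # initial swarm
    V = np.zeros_like(X)
    vmax = 0.2 * (hi - lo)                             # velocity limits
    P = X.copy()                                       # per-particle best
    pbest = np.array([f(x) for x in X])
    G = P[pbest.argmax()].copy()                       # swarm-wide best
    for _ in range(n_iter):
        u1 = rng.random((n_particles, dim))
        u2 = rng.random((n_particles, dim))
        V = np.clip(w * V + phi1 * u1 * (P - X) + phi2 * u2 * (G - X),
                    -vmax, vmax)
        X = np.clip(X + V, lo, hi)
        fx = np.array([f(x) for x in X])
        better = fx > pbest
        P[better], pbest[better] = X[better], fx[better]
        G = P[pbest.argmax()].copy()
    return G, pbest.max()

# Usage: maximize a smooth objective over a 6-D box, mimicking the search
# for a map parameter vector; the swarm clusters near the optimum.
target = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])
best, val = pso_maximize(lambda m: -np.sum((m - target) ** 2),
                         lo=[-2] * 6, hi=[2] * 6,
                         n_particles=40, n_iter=150, seed=1)
```

In the paper's setting, `f` would be the per-image objective ${Q}_{n}^{\mathit{MF}}$ evaluated by remapping and resampling the image with the candidate parameters.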

## 5. Experiments

#### 5.1. Experiment 1

The true mapping parameters were set to M_{Perfect} = [1,0,0,1,0,0], which corresponds to unit scale, zero skew, and zero displacement. Next, since we wanted to examine the effects of the initial registration errors on the performance of our algorithm, we investigated different scenarios of initial registration errors by varying the initial mapping parameters between the observed images and the LCM over different values of the displacement, scale, and skew parameters. In particular, we investigated three scenarios with only displacement, only scale, and only skew errors, respectively. Table 1 shows the initial mapping parameters for all three scenarios. Here, δ, ρ, and η are the initial displacement, scale, and skew parameter errors, respectively. Note that the initial mapping parameter errors for Image 1 were zero in all scenarios, since we assumed that the first image is registered to the LCM, as mentioned in Section 3.1.

The convergence criterion was based on the percentage of changed labels, p_{changes}, and on the movement, d_{movement,n}, of the estimated map parameters M_{n} between two consecutive iterations. In this example, the algorithm terminated when p_{changes} was less than p_{min} = 10^{−5} and d_{movement,n} was less than 0.1 pixels for five consecutive iterations for n = 2, 3, 4. To create a benchmark for our proposed algorithm, we investigated two extreme cases, where the LCMs were derived directly from the unregistered image pairs and from perfectly registered image pairs. The LCMs in these extreme cases were classified using our proposed algorithm with M^{t} fixed at M^{*}. For perfect registration, we had M^{*} = M_{Perfect}, whereas, for the unregistered image pairs, we set M^{*} equal to the values given in Table 1 for the respective scenarios. The first extreme case can be considered a lower limit on the classification accuracy when the land cover mapping is performed without first aligning the images. The second case is an upper bound on the classification accuracy when a map is produced from a registered image pair. By setting up our experiment in this fashion, we could investigate how much improvement our algorithm gains by integrating the registration and classification together, and how far its performance is from the upper limit where all uncertainties in registration are removed. To ensure the statistical significance of our experiment, all experiments were repeated ten times.
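The two termination quantities can be sketched as follows; `d_movement` here measures the average displacement that a change in map parameters induces on a set of test coordinates, which is one plausible reading of the criterion, and the paper's exact Equations (38) and (39) may differ:

```python
import numpy as np

def p_changes(lcm_prev, lcm_curr):
    """Fraction of pixels whose labels changed between consecutive iterations."""
    return np.mean(lcm_prev != lcm_curr)

def d_movement(M_prev, M_curr, coords):
    """Average pixel displacement induced by the change in the affine map
    parameters [m1..m6] on test coordinates (cols, rows) -- a sketch only."""
    i, j = coords
    u0 = M_prev[0] * i + M_prev[1] * j + M_prev[4]
    v0 = M_prev[2] * i + M_prev[3] * j + M_prev[5]
    u1 = M_curr[0] * i + M_curr[1] * j + M_curr[4]
    v1 = M_curr[2] * i + M_curr[3] * j + M_curr[5]
    return np.mean(np.hypot(u1 - u0, v1 - v0))
```

Iteration stops once `p_changes` falls below 10^{−5} and `d_movement` stays below 0.1 pixels for five consecutive iterations, per the thresholds stated above.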

If our algorithm performed perfectly, the estimated parameters would converge to M_{n} = M_{Perfect}. Clearly, for β = 0.25, 0.5, and 0.75, our algorithm successfully registered all images with the LCMs. However, at β = 0, our algorithm could not align these images with the LCM. The results in Table 5 emphasize the importance of parameter selection. Note here that the RMSE of Image 1 is not shown in Table 5 since it is assumed to be perfectly aligned (zero registration error) with the LCM.

Next, we varied the noise variance σ^{2} from −30 dB to 0 dB; the resulting averaged RMSEs for β = 0.0 and 0.75 are given in Tables 6 and 7, respectively. We observed only slight performance differences in terms of the RMSEs for σ^{2} of −30, −20, and −10 dB for both β = 0.00 and 0.75. However, for a noise variance of 0 dB, our algorithm could correctly align Images 2–4 with the LCM only at β = 0.75. This result emphasizes the importance of the MRF model to the convergence of our algorithm.

#### 5.2. Experiment 2

The QUICKBIRD images used in this experiment were acquired over a part of Kasetsart University on 10 July 2008. Through visual interpretation, we classified the area into five classes, namely, water, shadow, vegetation, impervious type 1, and impervious type 2. The ground truth image is shown in Figure 10, where the blue, black, green, red, and white colors correspond to water, shadow, vegetation, impervious type 1, and impervious type 2, respectively. In this case, the impervious class was divided into two types due to the different roof and pavement colors in the scene. Using both the PAN and MI images, we randomly selected 1,000 samples for each land cover class.

The algorithm terminated when p_{changes} (see Equation (38)) was less than 10^{−5} and d_{movement,MI} (see Equation (39)) was less than 0.1 pixels for five consecutive iterations. Before examining the robustness of our algorithm, we determined the benchmark performance of the MRF-based land cover mapping when the MI and PAN images were perfectly registered. The resulting LCMs are shown in Figure 11. Again, as we progressed to greater values of β, more connected LCMs were obtained. The overall accuracy graph shown in Figure 12 agrees with the visual inspection: the classification performance increased as the value of β increased. In this example, we employed the overall accuracy rather than the percentage of misclassified pixels used in Example 1, since overall accuracy is a more widely used performance metric in remote sensing image classification.

The residual registration errors RMSE_{MI} (see Equation (41)) for all cases are given in Table 10. Again, if our algorithm performed perfectly, the estimated map parameters would converge back to M^{opt}; in other words, we would eventually end up with M^{t} = M^{opt}. Once the correct map parameter vector was obtained, the classification accuracies of the LCMs were expected to be equal to those of the perfect registration cases (Figure 11a–d). In this example, we again assigned ${p}^{\mathit{MF}}\left({x}_{s}|\mathit{Y},{\mathit{M}}^{0}\right)=\frac{1}{5}$, the most extreme case where no prior information is given.

## 6. Conclusion

## Acknowledgments

## Conflicts of Interest

## References

- Lombardo, P.; Oliver, C.J.; Pellizzeri, T.M.; Meloni, M. A new maximum-likelihood joint segmentation technique for multitemporal sar and multiband optical images. IEEE Trans. Geosci. Remote Sens
**2003**, 41, 2500–2518. [Google Scholar] - Skriver, H.; Mattia, F.; Satalino, D.; Balenzano, A.; Pauwells, V.R.N.; Verhoest, N.E.C.; Davison, M. Crop classification using short-revisit multitemporal SAR data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens
**2011**, 4, 423–431. [Google Scholar] - Baghdadi, N.; Todoroff, P.; Hjj, M.E.; Begue, A. Potential of SAR sensors TerraSAR-X, ASAR/ENVISAT and PALSAR/ALOS for monitoring sugarcane crops on Reunion Island. Remote Sens. Environ
**2009**, 113, 1724–1738. [Google Scholar] - Meroni, M.; Marinho, E.; Sghaier, N.; Verstrate, M.M.; Leo, O. Remote sensing based yield estimation in a stochastic framework—case study of durum wheat in Tunisia. Remote Sens
**2013**, 5, 539–557. [Google Scholar] - Khedam, R.; Belhadj-aissa, A. A General Multisource Contextual Classification Model of Remotely Sensed Imagery Based on MRF. Proceedings of Remote Sensing and Data Fusion over Urban Areas, IEEE/ISPRS Joint Workshop 2001,. Rome, Italy, 8–9 November 2001; pp. 231–235.
- Thoonen, G.; Mahmood, Z.; Peeters, S.; Scheunders, P. Multisource classification of color and hyperspectral images using color attribute profiles and composite decision fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens
**2012**, 5, 510–521. [Google Scholar] - Camps-Valls, G.; Gomez-Chova, L.; Munoz-Mari, J.; Rojo-Alvarez, J.L.; Martinez-Ramon, M. Kernel-Based framework for multitemporal and multisource remote sensing data classification and change detection. IEEE Trans. Geosci. Remote Sens
**2008**, 1822–1835. [Google Scholar] - Hasan, M.; Pickering, M.R.; Jia, X. Robust automatic registration of multi-modal satellite images using CCRE with partial volume interpolation. IEEE Trans. Geosci. Remote Sens
**2012**, 50, 4050–4061. [Google Scholar] - Brook, A.; Ben-Dor, E. Automatic registration of airborne and spaceborne images by topology map matching with surf processor algorithm. Remote Sens
**2011**, 3, 65–82. [Google Scholar] - Mahapatra, D.; Sun, Y. Integrating segmentation information for improved MRF-based elastic image registration. IEEE Trans. Image Process
**2012**, 21, 170–183. [Google Scholar] - Chen, S.; Guo, Q.; Leung, H.; Bosse, E. A maximum likelihood approach to joint image registration and fusion. IEEE Trans. Image Process
**2011**, 20, 1363–1372. [Google Scholar] - Winkle, G. Image Analysis Random Fields and Dynamic Monte Carlo Methods; Springer-Verlag: New York, NY, USA, 1995. [Google Scholar]
- Bremaud, P. Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues; Springer-Verlag: New York, NY, USA, 1999. [Google Scholar]
- Geman, S.; Geman, D. Stochastic relaxation, gibbs distributions, and the bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell
**1984**, 6, 721–741. [Google Scholar] - Solberg, A.H.S.; Taxt, T.; Jain, A.K. A Markov random field model for classification of multisource satellite imagery. IEEE Trans. Geosci. Remote Sens
**1996**, 34, 100–113. [Google Scholar] - Kasetkasem, T.; Varshney, P.K. An image change detection algorithm based on Markov random field models. IEEE Trans. Geosci. Remote Sens
**2002**, 40, 1815–1823. [Google Scholar] - Bruzzone, L.; Prieto, D.F. Automatic analysis of the difference image for unsupervised change detection. IEEE Trans. Geosci. Remote Sens
**2000**, 38, 1171–1182. [Google Scholar] - Xie, H.; Pierce, L.E.; Ulaby, F.T. SAR speckle reduction using wavelet denoising and Markov random field modeling. IEEE Trans. Geosci. Remote Sens
**2002**, 40, 2196–2212. [Google Scholar] - Peng, Y.; Chen, J.; Xu, X.; Pu, F. SAR images statistical modeling and classification based on the mixture of alpha-stable distributions. Remote Sens
**2013**, 5, 2145–2163. [Google Scholar] - Xu, M.; Chen, H.; Varshney, P. An image fusion approach based on Markov random fields. IEEE Trans. Geosci. Remote Sens
**2011**, 49, 5116–5127. [Google Scholar] - Wang, L.; Wang, Q. Subpixel mapping using Markov random field with multiple spectral constraints from subpixel shifted remote sensing images. IEEE Trans. Geosci. Remote Lett
**2013**, 10, 598–602. [Google Scholar] - Kasetkasem, T.; Arora, M.; Varshney, P.K. Super-resolution land cover mapping using a Markov random field based approach. Remote Sens. Environ
**2005**, 96, 302–314. [Google Scholar] - Moser, G.; Serpico, S.B.; Bennediktsson, J.A. Land cover mapping by Markov Modeling of spatial-contextual information in very high-resolution remote sensing images. Proc. IEEE
**2013**, 101, 631–651. [Google Scholar] - Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B
**1977**, 39, 1–38. [Google Scholar] - Celeux, G.; Forbes, F.; Peyrard, N. EM procedures using mean field-like approximations for Markov model-based image segmentation. Pattern Recognit
**2003**, 36, 131–144. [Google Scholar] - Tanaka, T. Mean field theory of boltzmann machine learning. Phys. Rev. E
**1998**, 58, 2302–2310. [Google Scholar] - Tanaka, T. A Theory of Mean Field Approximation. Proceedings of 12th Annual Conference on Neural Information Processing Systems (NIPS’98), Denver, CO, USA, 30 November–5 December 1998.
- Zhang, J. The mean field theory in EM procedures for Markov random fields. IEEE Trans. Signal Process
**1992**, 40, 2570–2583. [Google Scholar] - Bickel, P.J.; Doksum, K.A. Mathematical Statistics; Prentice Hall: Englewood Cliffs, NJ, USA, 1977. [Google Scholar]
- Van Trees, H.L. Detection, Estimation, and Modulation Theory; Wiley: New York, NY, USA, 1968. [Google Scholar]
- Varshney, P.K. Distributed Detection and Data Fusion; Springer: New York, NY, USA, 1997. [Google Scholar]
- Kennedy, J.; Eberhart, R. Particle Swarm Optimization. Proceedings of IEEE International Conference on Neural Networks IV, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
- Trelea, I.C. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Inf. Process. Lett
**2003**, 85, 317–325. [Google Scholar] - Zitova, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput
**2003**, 21, 977–1000. [Google Scholar]

**Figure 6.**Examples of the maximum likelihood classifier (MLC)-based land cover maps (LCMs) for (

**a**) Scenario I with δ = 12 and σ = 1; (

**b**) Scenario II with ρ = 0.05 and σ = 1; and (

**c**) Scenario III with η = 0.05 and σ = 1.

**Figure 7.**Examples of the resulting LCMs from our proposed algorithm (

**a**) Scenario I with δ = 12 and σ = 1; (

**b**) Scenario II with ρ = 0.05 and σ = 1; and (

**c**) Scenario III with η = 0.05 and σ = 1.

**Figure 8.**The averaged number of iterations required before the termination criteria were satisfied for different scenarios in Example 1.

**Figure 9.**QUICKBIRD dataset of a part of Kasetsart University (

**a**) False color composite multispectral image (MI); and (

**b**) panchromatic image (PAN).

**Figure 10.**Ground truth image for Example 2 (green, blue, black, red and white colors for vegetation, water, shadow, impervious type 1 and impervious type 2, respectively).

**Figure 11.**LCMs for the perfect registration case for (

**a**) β = 0; (

**b**) β = 0.25; (

**c**) β = 0.50; and (

**d**) β = 0.75.

**Figure 13.** The effect of the initial registration errors on the overall accuracies for (

**a**) in the x-direction; (

**b**) in the y-direction; (

**c**) in the scale; and (

**d**) in the rotation.

**Figure 14.** The effect of the initial registration errors on the number of iterations (

**a**) in the x-direction; (

**b**) in the y-direction; (

**c**) in the scale; and (

**d**) in the rotation.

**Figure 15.**The effect of the initial registration errors on the residual registration error of our proposed algorithm in Example 2.

**Table 1.** The initial mapping parameters for all three scenarios in Experiment 1.

Image | Mapping Parameters | ||||||
---|---|---|---|---|---|---|---|

m_{1} | m_{2} | m_{3} | m_{4} | m_{5} | m_{6} | ||

Scenario I: Displacement error (δ) | 1 | 1 | 0 | 0 | 1 | 0 | 0 |

2 | 1 | 0 | 0 | 1 | δ | 0 | |

3 | 1 | 0 | 0 | 1 | 0 | −δ | |

4 | 1 | 0 | 0 | 1 | −δ | δ | |

Scenario II: Scale error (ρ) | 1 | 1 | 0 | 0 | 1 | 0 | 0 |

2 | 1+ρ | 0 | 0 | 1 | 0 | 0 | |

3 | 1 | 0 | 0 | 1+ρ | 0 | 0 | |

4 | 1−ρ | 0 | 0 | 1−ρ | 0 | 0 | |

Scenario III: Skew error (η) | 1 | 1 | 0 | 0 | 1 | 0 | 0 |

2 | 1 | η | 0 | 1 | 0 | 0 | |

3 | 1 | 0 | η | 1 | 0 | 0 | |

4 | 1 | −η | −η | 1 | 0 | 0 |

**Table 2.**Comparison of the averaged percentages of misclassified pixels (PMP) between two extreme cases and our proposed algorithm.

β | No Registration Error | No Registration Error Correction | Proposed Algorithm with Initial Registration Errors | ||||
---|---|---|---|---|---|---|---|

Scenario I with δ = 12 | Scenario II with ρ = 0.05 | Scenario III with η = 0.05 | Scenario I with δ = 12 | Scenario II with ρ = 0.05 | Scenario III with η = 0.05 | ||

0.0 | 25.65% | 28.66% | 26.87% | 27.05% | 28.65% | 26.07% | 27.12% |

0.25 | 0.43% | 4.81% | 5.96% | 6.45% | 0.45% | 0.43% | 0.43% |

0.5 | 0.039% | 4.24% | 5.65% | 6.21% | 0.039% | 0.041% | 0.043% |

0.75 | 0.021% | 4.19% | 5.56% | 6.13% | 0.024% | 0.032% | 0.026% |

**Table 3.** The p-values of the pairwise t-test with unequal variances comparing our proposed algorithm, and the case of no registration error correction, to the perfect registration cases.

β | No Registration Error | No registration Error Correction | Proposed Algorithm with Initial Registration Errors | ||||
---|---|---|---|---|---|---|---|

Scenario I with δ = 12 | Scenario II with ρ = 0.05 | Scenario III with η = 0.05 | Scenario I with δ = 12 | Scenario II with ρ = 0.05 | Scenario III with η = 0.05 | ||

0.0 | 1 | 1.5 × 10^{−22} | 1.6 × 10^{−14} | 4.0 × 10^{−18} | 1.9 × 10^{−23} | 4.0 × 10^{−15} | 3.9 × 10^{−15} |

0.25 | 1 | 2.0 × 10^{−17} | 3.5 × 10^{−19} | 3.6 × 10^{−18} | 0.457 | 0.717 | 0.500 |

0.5 | 1 | 1.5 × 10^{−15} | 2.8 × 10^{−17} | 1.8 × 10^{−16} | 0.712 | 0.167 | 0.401 |

0.75 | 1 | 1.5 × 10^{−14} | 1.4 × 10^{−15} | 6.2 × 10^{−17} | 0.060 | 0.033 | 0.079 |

**Table 4.** The averaged percentages of misclassified pixels as a function of the initial registration error for all scenarios.

Scenario I | Scenario II | Scenario III | |||
---|---|---|---|---|---|

δ | PMP | ρ | PMP | η | PMP |

0 | 0.019% | −0.05 | 0.035% | −0.05 | 0.036% |

4 | 0.032% | −0.03 | 0.035% | −0.03 | 0.029% |

8 | 0.029% | −0.01 | 0.022% | −0.01 | 0.043% |

12 | 0.026% | 0.01 | 0.030% | 0.01 | 0.040% |

0.03 | 0.024% | 0.03 | 0.036% | ||

0.05 | 0.032% | 0.05 | 0.026% |

**Table 5.**The residual registration errors of our proposed algorithm for various scenarios and values of β.

Scenario | No Registration Error Correction | β = 0.0 | β = 0.25 | β = 0.50 | β = 0.75 | ||
---|---|---|---|---|---|---|---|

I (δ = 12) | Image 2 | Mean | 12 | 11.99 | 0.111 | 0.295 | 0.280 |

STD | - | 0.0015 | 0.259 | 0.139 | 0.100 | ||

Image 3 | Mean | 12 | 11.99 | 0.031 | 0.192 | 0.312 | |

STD | - | 0.0018 | 0.020 | 0.120 | 0.156 | ||

Image 4 | Mean | 16.97 | 16.96 | 0.213 | 0.338 | 0.212 | |

STD | - | 0.0017 | 0.566 | 0.088 | 0.136 | ||

II (ρ =0.05) | Image 2 | Mean | 14.06 | 13.56 | 0.028 | 0.281 | 0.327 |

STD | - | 0.072 | 0.010 | 0.130 | 0.113 | ||

Image 3 | Mean | 14.06 | 13.49 | 0.020 | 0.353 | 0.312 | |

STD | - | 0.032 | 0.080 | 0.102 | 0.106 | ||

Image 4 | Mean | 21.97 | 20.97 | 0.253 | 0.245 | 0.315 | |

STD | - | 0.095 | 0.636 | 0.120 | 0.082 | ||

III (η = 0.05) | Image 2 | Mean | 14.76 | 14.71 | 0.025 | 0.295 | 0.296 |

STD | - | 0.204 | 0.020 | 0.149 | 0.098 | ||

Image 3 | Mean | 14.76 | 14.73 | 0.017 | 0.415 | 0.350 | |

STD | - | 0.182 | 0.006 | 0.090 | 0.136 | ||

Image 4 | Mean | 21.72 | 22.04 | 0.350 | 0.312 | 0.371 | |

STD | - | 0.0325 | 0.983 | 0.155 | 0.088 |

Noise Variance (dB) | Average Root Mean Square Errors | ||||||||
---|---|---|---|---|---|---|---|---|---|

Scenario I, δ = 12 | Scenario II, ρ= 0.05 | Scenario III, η = 0.05 | |||||||

Image 2 | Image 3 | Image 4 | Image 2 | Image 3 | Image 4 | Image 2 | Image 3 | Image 4 | |

−30 | 0.007 | 0.011 | 0.009 | 0.006 | 0.010 | 0.019 | 0.012 | 0.019 | 0.013 |

−20 | 0.010 | 0.012 | 0.009 | 0.023 | 0.016 | 0.012 | 0.017 | 0.016 | 0.011 |

−10 | 0.036 | 0.035 | 0.037 | 0.028 | 0.018 | 0.029 | 0.028 | 0.030 | 0.022 |

0 | 0.244 | 0.280 | 0.185 | 0.119 | 0.138 | 0.071 | 0.078 | 0.053 | 0.200 |

Noise Variance (dB) | Average Root Mean Square Errors | ||||||||
---|---|---|---|---|---|---|---|---|---|

Scenario I, δ = 12 | Scenario II, ρ= 0.05 | Scenario III, η = 0.05 | |||||||

Image 2 | Image 3 | Image 4 | Image 2 | Image 3 | Image 4 | Image 2 | Image 3 | Image 4 | |

−30 | 0.016 | 0.08 | 0.010 | 0.015 | 0.007 | 0.019 | 0.009 | 0.011 | 0.019 |

−20 | 0.017 | 0.012 | 0.014 | 0.015 | 0.018 | 0.015 | 0.010 | 0.015 | 0.017 |

−10 | 0.014 | 0.018 | 0.015 | 0.018 | 0.018 | 0.023 | 0.019 | 0.016 | 0.014 |

0 | 11.99 | 11.99 | 16.97 | 11.91 | 11.89 | 20.28 | 12.75 | 12.79 | 20.61 |

**Table 8.**The residual registration errors using the minimum mean square error criteria for various noise variances.

Noise Variance (dB) | Image 2 | Image 3 | Image 4 | |||
---|---|---|---|---|---|---|

Mean | STD | Mean | STD | Mean | STD | |

−30 | 0.008 | 0.0029 | 0.007 | 0.0041 | 0.010 | 0.0054 |

−20 | 0.422 | 0.0040 | 0.425 | 0.0033 | 0.423 | 0.0049 |

−10 | 0.663 | 0.0037 | 0.665 | 0.0014 | 0.664 | 0.0017 |

0 | 0.875 | 0.516 | 1.637 | 1.441 | 1.352 | 0.9744 |

**Table 9.**The p-value from the pairwise t-test between the traditional registration method and our proposed algorithm for various scenarios at β = 0.75.

Noise Variance (dB) | p-Values of the Pairwise t-Test | ||||||||
---|---|---|---|---|---|---|---|---|---|

Scenario I, δ = 12 | Scenario II, ρ= 0.05 | Scenario III, η = 0.05 | |||||||

Image 2 | Image 3 | Image 4 | Image 2 | Image 3 | Image 4 | Image 2 | Image 3 | Image 4 | |

−30 | 0.829 | 0.402 | 0.883 | 0.413 | 0.413 | 0.201 | 0.507 | 0.092 | 0.407 |

−20 | 1 × 10^{−18} | 4 × 10^{−14} | 2 × 10^{−21} | 1 × 10^{−13} | 1 × 10^{−13} | 3 × 10^{−15} | 2 × 10^{−13} | 2 × 10^{−13} | 5 × 10^{−17} |

−10 | 3 × 10^{−14} | 2 × 10^{−14} | 3 × 10^{−14} | 3 × 10^{−15} | 3 × 10^{−15} | 5 × 10^{−16} | 2 × 10^{−23} | 1 × 10^{−14} | 7 × 10^{−17} |

0 | 0.004 | 0.016 | 0.004 | 0.001 | 0.001 | 0.003 | 0.0010 | 0.007 | 0.004 |

**Table 10.** The residual registration errors RMSE_{MI} for different initial registration errors in Example 2.

Error in x-Direction | Error in y-Direction | Error in Scale | Error in Rotation | ||||
---|---|---|---|---|---|---|---|

Δx | RMSE_{MI} | Δy | RMSE_{MI} | Δs | RMSE_{MI} | Δθ | RMSE_{MI} |

−5 | 12 (20) | −5 | 12 (20) | −5% | 21.3 (36) | −3 | 11.12 (19) |

−3 | 7.2 (12) | −3 | 7.2 (12) | −2.5% | 10.7 (18) | −2 | 7.45 (12) |

−1 | 2.4 (4) | −1 | 2.4 (4) | 0% | 0.0 (0) | −1 | 3.72 (6.2) |

1 | 2.4 (4) | 1 | 2.4 (4) | 2.5% | 10.7 (18) | 1 | 3.72 (6.2) |

3 | 7.2 (12) | 3 | 7.2 (12) | 5% | 21.3 (36) | 2 | 7.45 (12) |

5 | 12 (20) | 5 | 12 (20) | 3 | 11.12 (19) |

**Table 11.** Overall accuracies for different values of β in two extreme cases and our proposed algorithm for different initial displacement errors in the x-direction (Δx), where PA and NC denote the cases of the proposed algorithm and no registration error correction, respectively.

β | PR | PA (Δx=−5.0) | NC (Δx=−5.0) | PA (Δx=−3.0) | NC (Δx=−3.0) | PA (Δx=−1.0) | NC (Δx=−1.0) | PA (Δx=1.0) | NC (Δx=1.0) | PA (Δx=3.0) | NC (Δx=3.0) | PA (Δx=5.0) | NC (Δx=5.0)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
0.0 | 67.5 | 67.7 | 57.6 | 67.8 | 62.2 | 67.7 | 66.9 | 67.8 | 66.7 | 67.7 | 61.8 | 67.8 | 57.0
0.25 | 69.4 | 70.0 | 58.8 | 69.8 | 63.7 | 69.8 | 68.6 | 69.9 | 68.3 | 70.0 | 63.4 | 59.3 | 58.4
0.5 | 70.3 | 71.8 | 59.7 | 71.4 | 64.6 | 70.6 | 69.6 | 70.9 | 69.2 | 71.5 | 64.4 | 60.2 | 59.2
0.75 | 71.1 | 72.8 | 60.2 | 72.2 | 65.2 | 71.5 | 70.3 | 71.8 | 70.0 | 72.7 | 65.0 | 60.4 | 59.9
**Table 12.** Overall accuracies for different values of β in two extreme cases and our proposed algorithm for different initial displacement errors in the y-direction (Δy), where PA and NC denote the cases of the proposed algorithm and no registration error correction, respectively.

β | PR | PA (Δy=−5.0) | NC (Δy=−5.0) | PA (Δy=−3.0) | NC (Δy=−3.0) | PA (Δy=−1.0) | NC (Δy=−1.0) | PA (Δy=1.0) | NC (Δy=1.0) | PA (Δy=3.0) | NC (Δy=3.0) | PA (Δy=5.0) | NC (Δy=5.0)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
0.0 | 67.5 | 67.7 | 57.6 | 67.7 | 62.2 | 67.7 | 66.9 | 67.7 | 66.7 | 67.7 | 61.8 | 67.8 | 57.0
0.25 | 69.4 | 69.9 | 58.8 | 69.9 | 63.7 | 69.8 | 68.6 | 70.1 | 68.3 | 70.1 | 63.4 | 70.3 | 58.4
0.5 | 70.3 | 71.6 | 59.7 | 71.2 | 64.6 | 70.5 | 69.6 | 71.8 | 69.2 | 71.8 | 64.4 | 68.6 | 59.2
0.75 | 71.1 | 72.5 | 60.1 | 71.9 | 65.2 | 71.2 | 70.3 | 73.4 | 70.0 | 73.4 | 64.9 | 62.9 | 59.9

**Table 13.** Overall accuracies for different values of β in two extreme cases and our proposed algorithm for different initial scale errors (Δs), where PA and NC denote the cases of the proposed algorithm and no registration error correction, respectively.

β | PR | PA (Δs=−5%) | NC (Δs=−5%) | PA (Δs=−2.5%) | NC (Δs=−2.5%) | PA (Δs=0%) | NC (Δs=0%) | PA (Δs=2.5%) | NC (Δs=2.5%) | PA (Δs=5%) | NC (Δs=5%)
---|---|---|---|---|---|---|---|---|---|---|---
0.0 | 67.5 | 67.8 | 52.7 | 67.7 | 61.0 | 67.7 | 67.5 | 67.8 | 64.9 | 67.8 | 57.8
0.25 | 69.4 | 69.6 | 53.4 | 69.5 | 62.4 | 70.0 | 69.4 | 70.3 | 66.1 | 70.2 | 58.9
0.5 | 70.3 | 71.1 | 54.2 | 70.6 | 63.3 | 71.0 | 70.3 | 71.6 | 67.0 | 72.1 | 59.7
0.75 | 71.1 | 72.1 | 54.7 | 71.5 | 64.2 | 71.1 | 71.1 | 72.7 | 67.6 | 73.4 | 60.1

**Table 14.** Overall accuracies for different values of β in two extreme cases and our proposed algorithm for different rotation errors (Δθ), where PA and NC denote the cases of the proposed algorithm and no registration error correction, respectively.

β | PR | PA (Δθ=−3°) | NC (Δθ=−3°) | PA (Δθ=−2°) | NC (Δθ=−2°) | PA (Δθ=−1°) | NC (Δθ=−1°) | PA (Δθ=1°) | NC (Δθ=1°) | PA (Δθ=2°) | NC (Δθ=2°) | PA (Δθ=3°) | NC (Δθ=3°)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
0.0 | 67.5 | 67.6 | 57.3 | 67.6 | 60.8 | 67.6 | 65.3 | 67.7 | 64.8 | 67.7 | 59.8 | 67.8 | 55.5
0.25 | 69.4 | 69.9 | 58.5 | 69.8 | 62.2 | 69.7 | 66.9 | 69.9 | 66.5 | 69.7 | 61.1 | 69.8 | 56.6
0.5 | 70.3 | 71.6 | 59.3 | 71.4 | 63.0 | 71.0 | 67.8 | 71.1 | 67.4 | 71.4 | 62.0 | 71.5 | 57.4
0.75 | 71.1 | 73.0 | 59.7 | 72.3 | 63.6 | 71.9 | 68.4 | 71.9 | 68.1 | 72.5 | 62.6 | 72.9 | 58.0
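The overall accuracies reported in Tables 11–14 are the percentage of pixels whose assigned land cover class matches the reference map. A minimal sketch of that metric on tiny invented label maps:

```python
import numpy as np

# Hypothetical 3x3 label maps with three land cover classes (0, 1, 2);
# `truth` is the reference map and `predicted` is the classifier output.
truth = np.array([[0, 0, 1],
                  [1, 2, 2],
                  [0, 1, 2]])
predicted = np.array([[0, 1, 1],
                      [1, 2, 2],
                      [0, 1, 1]])

# Overall accuracy: fraction of pixels classified correctly, in percent.
overall_accuracy = (truth == predicted).mean() * 100
print(f"overall accuracy = {overall_accuracy:.1f}%")  # 7 of 9 pixels correct
```

In the paper's experiments this quantity is computed over the full simulated scene for each combination of β and initial registration error.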

© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

## Share and Cite

**MDPI and ACS Style**

Kasetkasem, T.; Rakwatin, P.; Sirisommai, R.; Eiumnoh, A.
A Joint Land Cover Mapping and Image Registration Algorithm Based on a Markov Random Field Model. *Remote Sens.* **2013**, *5*, 5089-5121.
https://doi.org/10.3390/rs5105089
