Article

Improved Splitting-Integrating Methods for Image Geometric Transformations: Error Analysis and Applications

by Hung-Tsai Huang 1, Zi-Cai Li 2, Yimin Wei 3 and Ching Yee Suen 4,*
1 Department of Data Science and Analytics, I-Shou University, Kaohsiung 84001, Taiwan
2 Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
3 Shanghai Key Laboratory of Contemporary Applied Mathematics, Fudan University, Shanghai 200433, China
4 Center for Pattern Recognition and Machine Intelligence, Concordia University, Montreal, QC H3G 1M8, Canada
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(11), 1773; https://doi.org/10.3390/math13111773
Submission received: 9 April 2025 / Revised: 17 May 2025 / Accepted: 22 May 2025 / Published: 26 May 2025

Abstract: Geometric image transformations are fundamental to image processing, computer vision and graphics, with critical applications to pattern recognition and facial identification. The splitting-integrating method (SIM) is well suited to the inverse transformation $T^{-1}$ of digital images and patterns, but it encounters difficulties in nonlinear solutions for the forward transformation $T$. We propose improved techniques that entirely bypass nonlinear solutions for $T$, simplify numerical algorithms and reduce computational costs. Another significant advantage is the greater flexibility for general and complicated transformations $T$. In this paper, we apply the improved techniques to the harmonic, Poisson and blending models, which transform the original shapes of images and patterns into arbitrary target shapes. These models are, essentially, the Dirichlet boundary value problems of elliptic equations. In this paper, we choose the simple finite difference method (FDM) to seek their approximate transformations. We focus significantly on analyzing errors of image greyness. Under the improved techniques, we derive the greyness errors of images under $T$. We obtain the optimal convergence rates $O(H^2) + O(H/N^2)$ for the piecewise bilinear interpolations ($\mu = 1$) and smooth images, where $H$ ($\ll 1$) denotes the mesh resolution of an optical scanner, and $N$ is the division number of a pixel split into $N^2$ sub-pixels. Beyond smooth images, we address practical challenges posed by discontinuous images. We also derive the error bounds $O(H^\beta) + O(H^\beta/N^2)$, $\beta \in (0,1)$, as $\mu = 1$. For piecewise continuous images with interior and exterior greyness jumps, we have $O(H) + O(H/N^2)$. Compared with the error analysis in our previous study, where the image greyness is often assumed to be smooth enough, this error analysis is significant for geometric image transformations. Hence, the improved algorithms supported by rigorous error analysis of image greyness may enhance their wide applications in pattern recognition, facial identification and artificial intelligence (AI).

1. Introduction

Graphics, pictures, computer images, computer vision and digital patterns are often distorted by linear or nonlinear transformations. Algorithms that correctly convert images and patterns under these transformations and restore their original images and patterns are essential for applications in image processing, pattern recognition and artificial intelligence (AI). We studied numerical algorithms for image geometric transformations and proposed basic algorithms in a book [1] in 1989, which were developed from [2,3,4,5]. Since then, we have focused on refining existing numerical algorithms, with significant advancements reported in [1,6,7]. Geometric transformations heavily depend on shape boundaries, which are particularly important in applications like face fusion [8]. This paper extends the splitting algorithms of [9], with a focus on rigorous error analysis and broader applications. AI relies on three elements: data, models and algorithms. The new trend in AI is to significantly improve algorithmic efficiency (see DeepSeek [10]); such a trend has guided our study for many years. A systematic summary of new numerical algorithms for image geometric transformations is provided in the new book [11].
Let us introduce the integral model of images and its basic algorithms (the model and algorithms in AI's terminology). A digital image in 2D is denoted by a matrix with non-negative entries representing the pixel greyness. Each pixel greyness can also be regarded as the mean of a greyness function over a small pixel region, leading to a 2D integral. Hence, the image transformation can be reduced to numerical integration. Based on this idea, we developed the splitting-integrating method (SIM) in [1], which is, indeed, the composite centroid rule in numerical analysis (see Atkinson [12] and Davis and Rabinowitz [13]). The advantage of SIM is the simplicity of its algorithms, in particular for the restoration of images, because nonlinear solutions are not required. Although the smoothness of the piecewise integrand differs across subregions, the same centroid rule is always chosen. In applications, we choose the simple piecewise constant interpolations for $\mu = 0$ and the piecewise bilinear interpolations for $\mu = 1$. Our goal is to achieve the optimal convergence rates $O(\frac{1}{N^{\mu+1}})$ of sequential image greyness errors, where $N$ is the division number of a pixel split into $N^2$ sub-pixels.
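To make the integral model concrete, the following minimal Python sketch evaluates one pixel greyness as the mean of a greyness function over the pixel region by the composite centroid rule on $N^2$ sub-pixels. The function name and the toy greyness function are illustrative assumptions, not the paper's code.

```python
import numpy as np

def pixel_greyness_centroid(phi, i, j, H, N):
    """Approximate Phi_ij = (1/H^2) * integral of phi over the pixel square
    centred at (i*H, j*H) by the composite centroid rule on N x N sub-pixels."""
    h = H / N
    ks = np.arange(1, N + 1)
    xi = (i - 0.5) * H + (ks - 0.5) * h    # centroids of the sub-pixels in xi
    eta = (j - 0.5) * H + (ks - 0.5) * h   # and in eta
    XI, ETA = np.meshgrid(xi, eta, indexing="ij")
    # mean of phi over the sub-pixel centroids equals (h^2/H^2) * sum phi
    return phi(XI, ETA).mean()

# toy greyness function; the error behaves like O(1/N^2) for smooth phi
phi = lambda x, y: 0.5 + 0.4 * np.sin(3 * x) * np.cos(2 * y)
print(pixel_greyness_centroid(phi, i=5, j=7, H=0.1, N=8))
```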
We may apply SIM for the forward transformation $T$, combine it with the SIM for $T^{-1}$, and develop a combination CIIM for a cycle transformation $T^{-1}T$. A drawback of SIM for $T$ is that nonlinear solutions may be involved. When the nonlinear transformations are not complicated, the Newton iteration method may be used to carry out the nonlinear solutions. In this case, the combination CIIM is studied in ([11], Chapters 4 and 5) to provide the optimal sequential convergence rates $O(\frac{1}{N^{\mu+1}})$, $\mu = 0,1,2,3$. The CIIM can also be applied to the n-dimensional image transformation reported in ([11], Chapter 2). However, for complicated nonlinear transformations $T$, solving the associated nonlinear equations becomes challenging, limiting the practicality of SIM and CIIM. For image conversion under a geometric transformation $T$, the splitting-shooting method (SSM) does not need nonlinear solutions. Hence, we may combine SSM for $T$ and SIM for $T^{-1}$ to develop a combination CSIM for $T^{-1}T$. A strict analysis of the CSIM was given in Li and Bai [7], revealing that only the low convergence rates $O(\frac{1}{N})$ of image greyness can be obtained for both $\mu = 0$ and $\mu = 1$. Also, by probability analysis, a higher convergence rate $O_p(\frac{1}{N^{1.5}})$ in probability can be reached in [7]. For 256 greyness-level images under a nonlinear transformation with a double enlargement, the division number $N$ should be chosen as large as $N = 32$ or 64 to provide satisfactory pictures (see [7]). For real images with 256 greyness levels, CSIM becomes impractical due to the prohibitively large number of pixels. In general, the original CSIM is well suited to images with a few ($\le 16$) greyness levels. A sophisticated partition technique was also invented for CSIM, leading to the advanced combinations $\mathrm{C\overline{S}IM}$ and $\mathrm{C\overline{S}\,\overline{I}M}$ in ([11], Chapter 7), to reduce the division number $N$ significantly.
Now, we recast CIIM and study new techniques embedded into the algorithm SIM for $T$ to reduce and even completely bypass the nonlinear solutions. This is important for the wide application of CIIM. An interpolation technique is given in [6] to reduce the number of solutions of nonlinear equations from $O(M^2N^2)$ down to $O(M^2)$, where $M^2$ is the total number of image pixels. The piecewise bilinear interpolations are chosen in [6] based on the exact values of $\xi$ and $\eta$ at the semi-points $((I+\frac12)H, (J+\frac12)H)$. In this paper, we also employ interpolation to reduce the iteration number, but based on the exact values of $\xi_{IJ} = \xi(IH, JH)$ and $\eta_{IJ} = \eta(IH, JH)$ at the pixel points $(IH, JH)$. The algorithms in [6] are denoted as $\mathrm{SI^{*}M}$ for $T$ and $\mathrm{CI^{*}IM}$ for $T^{-1}T$.
Moreover, since the values of $\xi_{IJ}$ and $\eta_{IJ}$ need not be exact, we may also use interpolation to obtain approximate values $\hat\xi_{IJ}$ and $\hat\eta_{IJ}$, which yields the improved algorithms $\mathrm{SI^{\#}M}$ for $T$ and $\mathrm{CI^{\#}IM}$ for $T^{-1}T$ without any nonlinear solutions. The improved algorithms were proposed in [9], but no error analysis exists so far. Thanks to the improved techniques, the combination $\mathrm{CI^{\#}IM}$ can be applied to images under complicated transformations, e.g., those governed by partial differential equations (PDEs). In this paper, we also discuss the transformations of the Poisson and blending models by using the simple finite difference method (FDM), and apply the $\mathrm{CI^{\#}IM}$ to image transformations under these PDE models.
From the analysis in Section 4, the optimal convergence rates of greyness errors, $O(H^2) + O(\frac{H}{N^2})$, can be maintained for continuous pictures by the combination $\mathrm{CI^{\#}IM}$ as well, where $H$ ($\ll 1$) is the small image resolution. Another aspect of the analysis in this paper is to deal with image discontinuity. Real image greyness often has discontinuity. In this paper, we employ Sobolev norms and give an error analysis of the boundary discontinuity of image greyness. Using Sobolev norms, we derive the absolute errors $O(H^\alpha)$, $0 < \alpha < 1$. This study is an important remedy for the error analysis made for continuous images [7]. Geometric image transformations play a vital role in image processing, computer vision, computer graphics and AI. For pattern recognition and facial identification, finding and removing the geometric transformations involved is often a critical step. Hence, the improved algorithms with error analysis can be applied to enhance performance in these areas.
The organization of this paper is as follows. In Section 2, the SIM is introduced for $T^{-1}$ and $T$, and in Section 3, the interpolation techniques are proposed to reduce and even to bypass the nonlinear solutions. In Section 4, error analysis is made for both continuity and discontinuity of image greyness, and error bounds are derived. In Section 5, the PDE models of complicated transformations are approximated by the FDM and then applied to image transformations by the new combination $\mathrm{CI^{\#}IM}$. In Section 6, some graphical and numerical experiments are provided to verify the theoretical analysis and to demonstrate the effectiveness of the proposed algorithms. Concluding remarks are given in Section 7, and a list of symbols used is provided in Appendix A.

2. The Splitting-Integrating Method and Its Combinations

2.1. Numerical Algorithms

Consider the cycle transformation $T^{-1}T: \hat W \xrightarrow{T} \hat Z \xrightarrow{T^{-1}} \hat W$, where $T$ is a nonlinear transformation defined by
$T: (\xi,\eta) \to (x,y), \quad x = x(\xi,\eta), \quad y = y(\xi,\eta).$
In this paper, we assume that only the functions $x(\xi,\eta)$ and $y(\xi,\eta)$ are given, either explicitly or implicitly governed by the PDE solutions. Denote by $\hat W = \{\hat W_{ij}\}$ and $\hat Z = \{\hat Z_{IJ}\}$ the original and distorted images, where $\hat W_{ij}$ and $\hat Z_{IJ}$ are the image pixels located at $(i,j)$ and $(I,J)$ in the two Cartesian coordinate systems $\xi o\eta$ and $XOY$, respectively:
$(i,j) = \{(\xi_i,\eta_j),\ \xi_i = iH,\ \eta_j = jH\}, \quad (I,J) = \{(x_I,y_J),\ x_I = IH,\ y_J = JH\},$
and $H$ is the mesh resolution of the optical scanner.
We employ the numerical algorithms shown in Figure 1, consisting of eight steps. Let $G_k$ ($k = 1,2,\ldots,q$) denote the $k$-th level in the $q$-level system, where $G_1$ and $G_q$ are the whiteness and darkness, respectively. In steps 1 and 5, the pixel greyness is converted by
$\Phi_{ij} = \frac{k-1}{q-1}\ \text{ if } \hat W_{ij} = *\ \text{with } G_k; \qquad B_{IJ} = \frac{k-1}{q-1}\ \text{ if } \hat Z_{IJ} = *\ \text{with } G_k.$
In steps 8 and 4, the conversions between the pixels and greyness in the $q$-level system are given by
$\hat W_{ij} = \begin{cases} *\ \text{with } G_q, & \text{if } \Phi_{ij} \ge 1 - \frac{1}{2(q-1)},\\ *\ \text{with } G_k, & \text{if } \frac{k-\frac32}{q-1} \le \Phi_{ij} < \frac{k-\frac12}{q-1},\ k = 2,3,\ldots,q-1,\\ \text{blank}, & \text{if } \Phi_{ij} < \frac{1}{2(q-1)},\end{cases}$
$\hat Z_{IJ} = \begin{cases} *\ \text{with } G_q, & \text{if } B_{IJ} \ge 1 - \frac{1}{2(q-1)},\\ *\ \text{with } G_k, & \text{if } \frac{k-\frac32}{q-1} \le B_{IJ} < \frac{k-\frac12}{q-1},\ k = 2,3,\ldots,q-1,\\ \text{blank}, & \text{if } B_{IJ} < \frac{1}{2(q-1)}.\end{cases}$
Case II consists of all eight steps in Figure 1; Case I consists of steps 1–3 and 6–8, where the distorted image $\hat Z$ is converted from $\hat B = \{B_{IJ}\}$ but with no feedback from $\hat Z$ to $\hat B$ in step 5.
In steps 2 and 6, we choose the simple piecewise constant ($\mu = 0$) and the bilinear ($\mu = 1$) interpolations, $\phi_\mu(\xi,\eta)$ and $b_\mu(x,y)$, defined by
$\phi_0 = \Phi_{ij} \text{ in } \square_{ij}, \ \text{or } \phi_1 \text{ in } \overline\square_{ij}; \qquad b_0 = B_{IJ} \text{ in } \square_{IJ}, \ \text{or } b_1 \text{ in } \overline\square_{IJ},$
where $\phi_1$ and $b_1$ are piecewise bilinear interpolations based on the $\Phi_{ij}$ and $B_{IJ}$, respectively. In (6), the small square domains are defined by
$\square_{ij} = \{(\xi,\eta) \mid (i-\tfrac12)H \le \xi < (i+\tfrac12)H,\ (j-\tfrac12)H \le \eta < (j+\tfrac12)H\},$
$\overline\square_{ij} = \{(\xi,\eta) \mid iH \le \xi < (i+1)H,\ jH \le \eta < (j+1)H\},$
$\square_{IJ} = \{(x,y) \mid (I-\tfrac12)H \le x < (I+\tfrac12)H,\ (J-\tfrac12)H \le y < (J+\tfrac12)H\},$
$\overline\square_{IJ} = \{(x,y) \mid IH \le x < (I+1)H,\ JH \le y < (J+1)H\}.$
Hence, the original and the distorted image domains are given by $\Omega = \bigcup_{ij}\square_{ij}$ and $S = \bigcup_{IJ}\square_{IJ}$.

2.2. A Splitting-Integrating Method for $T^{-1}$

The greyness $\Phi_{ij}$ and $B_{IJ}$ can be represented by the means of continuous (or piecewise continuous) greyness functions $\phi(\xi,\eta)$ and $b(x,y)$:
$\Phi_{ij} = \frac{1}{H^2}\iint_{\square_{ij}}\phi(\xi,\eta)\,d\xi\,d\eta, \qquad B_{IJ} = \frac{1}{H^2}\iint_{\square_{IJ}}b(x,y)\,dx\,dy,$
where $\phi(\xi,\eta) = b(x(\xi,\eta), y(\xi,\eta)) = b(x,y)$. The linkage of digital images and integrals in (7) enables us to develop numerical algorithms for image transformations under $T$ and $T^{-1}T$ easily. The greyness functions $\phi(\xi,\eta)$ and $b(x,y)$ can be formed by the cubic spline interpolations based on $\Phi_{ij}$ and $B_{IJ}$, respectively (see [11], Chapter 5). Note that $H$ ($\ll 1$) is small, but not infinitesimal. In our previous analysis, we always assumed that the piecewise cubic spline functions satisfy
$\phi(\xi,\eta) \in C^2(\Omega), \qquad b(x,y) \in C^2(S).$
In this paper, we assume
$\phi(\xi,\eta) \in H^2(\Omega), \qquad b(x,y) \in H^2(S),$
where $H^2(S)$ is the Sobolev space. Equation (7) is called the integral model of images, and numerical algorithms are developed based on (7) (see [1]).
First, consider the inverse transformation $T^{-1}: (x,y) \to (\xi,\eta)$ based on the known distorted image pixels $\{\hat Z_{IJ}\}$, where the approximate functions are $b(x,y) \approx b_\mu(x,y) = \phi_\mu(\xi(x,y), \eta(x,y))$. The composite centroid rule will be used to evaluate the integration values in (7). Let $\square_{ij}$ be split into $N\times N$ uniform squares $\square_{ij,k\ell}$, i.e., $\square_{ij} = \bigcup_{k,\ell}\square_{ij,k\ell}$, where
$\square_{ij,k\ell} = \big\{(\xi,\eta) \mid (i-\tfrac12)H + (k-1)h \le \xi < (i-\tfrac12)H + kh,\ (j-\tfrac12)H + (\ell-1)h \le \eta < (j-\tfrac12)H + \ell h\big\},$
and $h$ is the boundary length of $\square_{ij,k\ell}$, given by $h = \frac{H}{N}$. For the small sub-pixels $\square_{ij,k\ell}$, the coordinates of the center of gravity are given by
$\xi_{\dot G} = \xi_{\dot G}^{ij,k\ell} = (i-\tfrac12)H + (k-\tfrac12)h, \qquad \eta_{\dot G} = \eta_{\dot G}^{ij,k\ell} = (j-\tfrac12)H + (\ell-\tfrac12)h.$
By the composite centroid rule in Davis and Rabinowitz [13], we have
$\iint_{\square_{ij,k\ell}}\phi(\xi,\eta)\,d\xi\,d\eta = \iint_{\square_{ij,k\ell}}b(x(\xi,\eta), y(\xi,\eta))\,d\xi\,d\eta \approx h^2\, b\big(x(\xi_{\dot G}^{ij,k\ell}, \eta_{\dot G}^{ij,k\ell}),\ y(\xi_{\dot G}^{ij,k\ell}, \eta_{\dot G}^{ij,k\ell})\big).$
Hence, the normalized greyness of $\hat W_{ij}$ is obtained by
$\Phi_{ij} \approx \frac{h^2}{H^2}\sum_{k,\ell=1}^{N} b_\mu\big(x(\xi_{\dot G}^{ij,k\ell}, \eta_{\dot G}^{ij,k\ell}),\ y(\xi_{\dot G}^{ij,k\ell}, \eta_{\dot G}^{ij,k\ell})\big).$
Note that the computational algorithms (11) do not involve any nonlinear solutions, and the sequential errors are proven to be $O(N^{-2})$ for $\mu = 1$ (see [11], Chapter 4).
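A minimal sketch of formula (12) follows, assuming the forward map $x(\xi,\eta)$, $y(\xi,\eta)$ and the distorted-image interpolant $b_\mu(x,y)$ are available as Python callables; the names `sim_restore_pixel`, `x_map`, `y_map` and the toy functions are hypothetical.

```python
import numpy as np

def sim_restore_pixel(b_mu, x_map, y_map, i, j, H, N):
    """Splitting-integrating method for T^{-1}: restore Phi_ij from the
    distorted greyness interpolant b_mu(x, y) as in formula (12).
    x_map, y_map are the forward transformation x(xi, eta), y(xi, eta);
    no nonlinear solution is needed."""
    h = H / N
    ks = np.arange(1, N + 1)
    xi_g = (i - 0.5) * H + (ks - 0.5) * h     # sub-pixel centroids in xi
    eta_g = (j - 0.5) * H + (ks - 0.5) * h    # and in eta
    XI, ETA = np.meshgrid(xi_g, eta_g, indexing="ij")
    X, Y = x_map(XI, ETA), y_map(XI, ETA)     # push centroids forward by T
    return (h * h / (H * H)) * np.sum(b_mu(X, Y))

# toy example: T is a mild quadratic distortion, b_mu a smooth greyness surrogate
x_map = lambda xi, eta: xi + 0.05 * xi * eta
y_map = lambda xi, eta: eta + 0.05 * xi ** 2
b_mu = lambda x, y: np.clip(0.5 + 0.3 * np.cos(4 * x + y), 0.0, 1.0)
print(sim_restore_pixel(b_mu, x_map, y_map, i=3, j=4, H=0.1, N=8))
```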

2.3. A Splitting-Integrating Method for T

We now apply the splitting-integrating method to images under the forward transformation $T$ based on the given $\Phi_{ij}$. The piecewise interpolation functions $\phi_\mu(\xi,\eta)$ are chosen, and $b_\mu(x,y) = \phi_\mu(\xi(x,y), \eta(x,y))$. Let the pixel $\square_{IJ} = \bigcup_{k,\ell}\square_{IJ,k\ell}$ in $XOY$ be split into $N\times N$ small sub-pixels $\square_{IJ,k\ell}$ with the boundary length $h = H/N$, where
$\square_{IJ,k\ell} = \big\{(x,y) \mid (I-\tfrac12)H + (k-1)h \le x < (I-\tfrac12)H + kh,\ (J-\tfrac12)H + (\ell-1)h \le y < (J-\tfrac12)H + \ell h\big\}.$
Also denote the coordinates of the center of gravity of $\square_{IJ,k\ell}$ by
$x_{\dot g} = x_{\dot g}^{IJ,k\ell} = (I-\tfrac12)H + (k-\tfrac12)h, \qquad y_{\dot g} = y_{\dot g}^{IJ,k\ell} = (J-\tfrac12)H + (\ell-\tfrac12)h.$
Similarly, we may evaluate pixel greyness by the composite centroid rule:
$B_{IJ} = \frac{1}{H^2}\sum_{k,\ell=1}^{N}\iint_{\square_{IJ,k\ell}}b(x,y)\,dx\,dy \approx \Big(\frac{h}{H}\Big)^2\sum_{k,\ell=1}^{N}b_\mu(x_{\dot g}, y_{\dot g}) = B_{IJ}^{(N)} = \Big(\frac{h}{H}\Big)^2\sum_{k,\ell=1}^{N}\phi_\mu\big(\xi(x_{\dot g}, y_{\dot g}),\ \eta(x_{\dot g}, y_{\dot g})\big).$
In (15), we also need the inverse values $\xi_{\dot g} = \xi(x_{\dot g}, y_{\dot g})$ and $\eta_{\dot g} = \eta(x_{\dot g}, y_{\dot g})$, which are unknown. We then need to solve the following nonlinear equations:
$x = x(\xi,\eta) = f(\xi,\eta), \qquad y = y(\xi,\eta) = g(\xi,\eta).$
The Newton iteration method is suggested as
$\mathbf{z}^{(k+1)} = \mathbf{z}^{(k)} - D(\mathbf{z}^{(k)})^{-1}\mathbf{r}^{(k)}, \quad k = 0, 1, \ldots,$
where $\mathbf{z}^{(0)}$ is an initial approximation, and the vectors and the Jacobian matrix are given by
$\mathbf{z}^{(k)} = \begin{pmatrix}\xi^{(k)}\\ \eta^{(k)}\end{pmatrix}, \quad \mathbf{r}^{(k)} = \begin{pmatrix} f(\xi^{(k)},\eta^{(k)}) - x\\ g(\xi^{(k)},\eta^{(k)}) - y\end{pmatrix}, \quad D(\mathbf{z}) = \begin{pmatrix} x_\xi & x_\eta\\ y_\xi & y_\eta \end{pmatrix}.$
For the given quadratic models of $T$, five to seven iterations may reduce the initial errors by a factor of $\tfrac12\times10^{-6}$. Of course, other iteration methods may also be chosen (see Atkinson [12]). The combination CIIM refers to the use of the SIM for images under both $T$ and $T^{-1}$. Note that the nonlinear solutions are involved only for the forward transformation $T$.
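The following sketch illustrates the Newton iteration (17)–(18) for one target point, assuming $f$ and $g$ are given as callables; the finite-difference Jacobian and the toy quadratic model are illustrative choices, not part of the original algorithm description.

```python
import numpy as np

def invert_T_newton(f, g, x, y, z0, tol=1e-12, max_iter=20):
    """Newton iteration (17) for the nonlinear system x = f(xi, eta), y = g(xi, eta).
    The Jacobian D(z) is approximated here by central differences; an analytic
    Jacobian can be supplied instead when T is known in closed form."""
    z = np.asarray(z0, dtype=float)
    eps = 1e-7
    for _ in range(max_iter):
        xi, eta = z
        r = np.array([f(xi, eta) - x, g(xi, eta) - y])        # residual r^(k)
        if np.linalg.norm(r) < tol:
            break
        D = np.array([                                        # Jacobian D(z)
            [(f(xi + eps, eta) - f(xi - eps, eta)) / (2 * eps),
             (f(xi, eta + eps) - f(xi, eta - eps)) / (2 * eps)],
            [(g(xi + eps, eta) - g(xi - eps, eta)) / (2 * eps),
             (g(xi, eta + eps) - g(xi, eta - eps)) / (2 * eps)]])
        z = z - np.linalg.solve(D, r)                         # z^(k+1) = z^(k) - D^{-1} r
    return z

# usage: invert a toy quadratic model at one target point (x, y)
f = lambda xi, eta: xi + 0.05 * xi * eta
g = lambda xi, eta: eta + 0.05 * xi ** 2
print(invert_T_newton(f, g, x=0.42, y=0.31, z0=(0.4, 0.3)))
```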

3. Improved Algorithms of SIM for Images Under T

The nonlinear solutions for seeking ( ξ , η ) from ( x , y ) in (16) by the Newton iterations (17) may encounter some difficulties, e.g., choices of good initial values and multiple solutions. Below, we develop improved techniques to reduce and even to bypass the nonlinear solutions completely. Such improved techniques may also be incorporated with the PDE solutions in the harmonic, Poisson and blending models provided in Section 5.
Technique I: Reduction of nonlinear solutions.
In Li [6], the reduction technique was first used by evaluating $\xi$ and $\eta$ only at the centers of gravity of $\square_{IJ}$ in $XOY$, i.e., at the mid-points $(x_{I+\frac12}, y_{J+\frac12}) = ((I+\tfrac12)H, (J+\tfrac12)H)$. It is better to carry out (16) only at the pixel points $(x_I, y_J) = (IH, JH)$. The number of solutions of (16) is only $O(M^2)$, as in Li [6], where $M^2$ is the total pixel number of the image. The approximations of $\xi = \xi(x,y)$ and $\eta = \eta(x,y)$ in the quadrilateral $\bar A\bar B\bar C\bar D$ in $\xi o\eta$ may be obtained by the piecewise bilinear approximations (see Figure 2), where the gravity center $(x_{\dot g}, y_{\dot g}) \in \overline\square_{IJ}$, $IH \le x_{\dot g} < (I+1)H$ and $JH \le y_{\dot g} < (J+1)H$. Then, the values of $(\xi_{\dot g}, \eta_{\dot g})$ at $x = x_{\dot g}$ and $y = y_{\dot g}$ can be evaluated by the bilinear approximation:
$\xi(x,y) \approx \hat\xi(x,y) = \xi_{I+1,J+1}\frac{(x-IH)(y-JH)}{H^2} + \xi_{I,J+1}\frac{((I+1)H-x)(y-JH)}{H^2} + \xi_{I+1,J}\frac{(x-IH)((J+1)H-y)}{H^2} + \xi_{I,J}\frac{((I+1)H-x)((J+1)H-y)}{H^2}.$
Moreover, the following linear approximations may be used (see Figure 3). If $x+y \le (I+J+1)H$, we have
$\xi(x,y) \approx \hat\xi(x,y) = \xi_{I,J} + (\xi_{I+1,J}-\xi_{I,J})\frac{x-IH}{H} + (\xi_{I,J+1}-\xi_{I,J})\frac{y-JH}{H}.$
If $x+y > (I+J+1)H$,
$\xi(x,y) \approx \hat\xi(x,y) = \xi_{I+1,J+1} + (\xi_{I,J+1}-\xi_{I+1,J+1})\frac{(I+1)H-x}{H} + (\xi_{I+1,J}-\xi_{I+1,J+1})\frac{(J+1)H-y}{H}.$
The formula for $\hat\eta(x,y)$ is similar. The advantage of (19) in Li [6] is that the greyness errors in the computation are smaller. The algorithms in [6] are denoted by $\mathrm{CI^{*}IM}$ for $T^{-1}T$. Note that when $N = 1$, the $\mathrm{CI^{*}IM}$ using Technique I is just the original CIIM given in Section 2.3.
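A short sketch of the bilinear formula (19) in local coordinates is given below; the dictionary `xi_nodes` holding the exact nodal values $\xi_{I,J}$ is a hypothetical container, and the same routine would be reused for $\eta$.

```python
def bilinear_xi(xi_nodes, I, J, x, y, H):
    """Technique I, formula (19): bilinear interpolation of xi(x, y) on the cell
    [IH, (I+1)H] x [JH, (J+1)H] from the four exact nodal values xi_nodes[(I, J)]."""
    s, t = (x - I * H) / H, (y - J * H) / H      # local coordinates in [0, 1]
    return (xi_nodes[(I + 1, J + 1)] * s * t
            + xi_nodes[(I, J + 1)] * (1 - s) * t
            + xi_nodes[(I + 1, J)] * s * (1 - t)
            + xi_nodes[(I, J)] * (1 - s) * (1 - t))

# usage with hypothetical nodal values around the cell (I, J) = (3, 5), H = 0.1
xi_nodes = {(3, 5): 0.301, (4, 5): 0.405, (3, 6): 0.299, (4, 6): 0.402}
print(bilinear_xi(xi_nodes, I=3, J=5, x=0.37, y=0.55, H=0.1))
```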
Technique II: Bypassing nonlinear solutions.
The values $(\xi_{IJ}, \eta_{IJ})$ in (19)–(21) used in Technique I need not be exact, either, provided their approximate values $(\hat\xi_{IJ}, \hat\eta_{IJ})$ have small allowable errors. The nonlinear transformation $T$ may also be approximated by a piecewise linear transformation $\hat T$ on triangles $\Delta_{ij,\ell}$, where $\overline\square_{ij} = \Delta_{ij,1}\cup\Delta_{ij,2}$ in Figure 3. Hence, the values of $(\xi_{IJ}, \eta_{IJ}) = T^{-1}(IH, JH)$ can be evaluated approximately by $(\hat\xi_{IJ}, \hat\eta_{IJ}) = \hat T^{-1}(IH, JH)$. Note that the inverse transformation $\hat T^{-1}$ is just a linear transformation locally. This avoids the nonlinear solutions in Algorithm SIM for $T$. The values $\hat\xi_{IJ}$ and $\hat\eta_{IJ}$ can be found using the following four steps.
Step 1. 
Compute all $x_{ij}$ and $y_{ij}$ by $x_{ij} = x(\xi_i, \eta_j) = x(iH, jH)$ and $y_{ij} = y(\xi_i, \eta_j) = y(iH, jH)$.
Step 2. 
Find all potential $(I,J) \in \Omega_{ij}$ (where $\Omega_{ij} = T\,\overline\square_{ij}$) in $XOY$, which can be determined by $I_{\min} \le I \le I_{\max}$ and $J_{\min} \le J \le J_{\max}$, where
$I_{\max} = \max\Big\{\Big\lfloor\frac{x_{ij}}{H}\Big\rfloor, \Big\lfloor\frac{x_{i+1,j}}{H}\Big\rfloor, \Big\lfloor\frac{x_{i,j+1}}{H}\Big\rfloor, \Big\lfloor\frac{x_{i+1,j+1}}{H}\Big\rfloor\Big\}, \qquad J_{\max} = \max\Big\{\Big\lfloor\frac{y_{ij}}{H}\Big\rfloor, \Big\lfloor\frac{y_{i+1,j}}{H}\Big\rfloor, \Big\lfloor\frac{y_{i,j+1}}{H}\Big\rfloor, \Big\lfloor\frac{y_{i+1,j+1}}{H}\Big\rfloor\Big\},$
where $\lfloor x\rfloor$ is the largest integer $\le x$. The definitions of $I_{\min}$ and $J_{\min}$ are similar.
Step 3. 
Split $\overline\square_{ij}$ into two triangles $\Delta_{ij,1}$ and $\Delta_{ij,2}$ with $\Delta_{ij,\ell} \xrightarrow{\hat T} \hat\Omega_{ij,\ell}$, where the $\hat\Omega_{ij,\ell}$ are also triangles, and find all possible $(I,J)$ such that
$(IH, JH) \in \hat\Omega_{ij,\ell}.$
The area coordinates $\sigma_1$, $\sigma_2$ and $\sigma_3$ are obtained from the following algebraic system:
$x_I = \sigma_1 x_a + \sigma_2 x_b + \sigma_3 x_c, \qquad y_J = \sigma_1 y_a + \sigma_2 y_b + \sigma_3 y_c, \qquad 1 = \sigma_1 + \sigma_2 + \sigma_3,$
where $a$, $b$, $c$ are the vertices of the triangles $\hat\Omega_{ij,\ell}$, $\ell = 1, 2$ in Figure 3. The sufficient and necessary condition for (22) is that all area coordinates are nonnegative: $0 \le \sigma_i \le 1$, $i = 1, 2, 3$.
Step 4. 
Obtain the approximate values of $(\xi_{IJ}, \eta_{IJ})$ related to $(IH, JH)$ in (22) by
$\hat\xi_{IJ} = \sigma_1\xi_{\bar a} + \sigma_2\xi_{\bar b} + \sigma_3\xi_{\bar c}, \qquad \hat\eta_{IJ} = \sigma_1\eta_{\bar a} + \sigma_2\eta_{\bar b} + \sigma_3\eta_{\bar c},$
where $\bar a$, $\bar b$ and $\bar c$ are the vertices of $\Delta_{ij,\ell}$ shown in Figure 3; a small sketch of Steps 3 and 4 follows.
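A minimal sketch of Steps 3 and 4 is given below, assuming the triangle vertices in $XOY$ and in $\xi o\eta$ are available as lists of points; the function names are illustrative only.

```python
import numpy as np

def area_coordinates(p, a, b, c):
    """Solve system (23): barycentric (area) coordinates of point p in the
    triangle with vertices a, b, c (all 2D points in XOY)."""
    A = np.array([[a[0], b[0], c[0]],
                  [a[1], b[1], c[1]],
                  [1.0,  1.0,  1.0]])
    return np.linalg.solve(A, np.array([p[0], p[1], 1.0]))

def inverse_by_linear_T(pixel_point, tri_xy, tri_xieta):
    """Technique II, Steps 3-4: if the pixel point (IH, JH) lies in the transformed
    triangle tri_xy = T(tri_xieta), return (hat xi_IJ, hat eta_IJ) by formula (24);
    otherwise return None."""
    sigma = area_coordinates(pixel_point, *tri_xy)
    if np.all((sigma >= 0.0) & (sigma <= 1.0)):          # point inside the triangle
        verts = np.asarray(tri_xieta)                    # vertices in the xi-eta plane
        return tuple(sigma @ verts)                      # sigma1*v_a + sigma2*v_b + sigma3*v_c
    return None

# usage with one triangle of a split cell and its image under T
tri_xieta = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
tri_xy = [(0.0, 0.0), (0.11, 0.01), (0.005, 0.105)]
print(inverse_by_linear_T((0.05, 0.04), tri_xy, tri_xieta))
```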
Note that Steps 1–4 above are easy to carry out. The computational complexity is also only $O(M^2)$, where $M^2$ is the total pixel number. More importantly, the computation of the values $(\hat\xi_{IJ}, \hat\eta_{IJ}) = \hat T^{-1}(x_I, y_J)$ does not involve any nonlinear solutions. Based on the known $(\hat\xi_{IJ}, \hat\eta_{IJ})$, the bilinear approximations $(\hat\xi^*, \hat\eta^*)$ are formulated by
$\xi(x,y) \approx \hat\xi^*(x,y) = \hat\xi_{I+1,J+1}\frac{(x-IH)(y-JH)}{H^2} + \hat\xi_{I,J+1}\frac{((I+1)H-x)(y-JH)}{H^2} + \hat\xi_{I+1,J}\frac{(x-IH)((J+1)H-y)}{H^2} + \hat\xi_{I,J}\frac{((I+1)H-x)((J+1)H-y)}{H^2}.$
The improved SIM using Techniques I and II is given by
$B_{IJ} \approx (\hat B^{\#}_{IJ})^{(N)} = \frac{h^2}{H^2}\sum_{k,\ell=1}^{N}\hat\phi_\mu\big(\hat\xi^*(x_{\dot g}, y_{\dot g}),\ \hat\eta^*(x_{\dot g}, y_{\dot g})\big),$
where $\dot g = \dot g^{IJ,k\ell}$ is given in (14). The computational complexity of (25) and (26) is $O(M^2N^2)$, where $N = \frac{H}{h}$. Hence, the extra CPU time required by Techniques I and II is insignificant, comparing $O(M^2)$ with $O(M^2N^2)$. Note that no nonlinear solutions are needed in (26) either. To distinguish the algorithms using Techniques I and II from SIM and CIIM, we denote (26) by $\mathrm{SI^{\#}M}$ and the combination of (12) and (26) by $\mathrm{CI^{\#}IM}$. Technique I was proposed in [6] with a preliminary error analysis, but Technique II was proposed in [9] without error analysis. The error analysis is important and challenging for the improved $\mathrm{SI^{\#}M}$ for $T$ via both Techniques I and II.
Below, we will derive error bounds in the next section by Sobolev norms, which can be applied to discontinuous images.

4. Error Analysis

Algorithm analysis (such as error analysis) is essential to efficiency for numerical partial differential equations (PDEs). In this section, the new error analysis of image greyness by Techniques I and II is twofold.
I. Compared to the error analysis of the original CIIM in Li [6], the inverse coordinates $\xi = \xi(x,y)$ and $\eta = \eta(x,y)$ are not exact. First, the piecewise linear (or bilinear) interpolations $\hat\xi(x,y)$ and $\hat\eta(x,y)$ are used based on the exact values $\xi_{I,J}$ and $\eta_{I,J}$ in Technique I. We give a strict analysis (ref. [6], Section 6). Next, in Technique II, the inverse values $\xi_{I,J}$ and $\eta_{I,J}$ are approximated by $\hat\xi_{I,J}$ and $\hat\eta_{I,J}$, also by using piecewise linear interpolations. The linear (or bilinear) interpolations $\hat\xi$ and $\hat\eta$ on $\square_{IJ}$ yield the same order $O(H^2)$ of errors, and the approximations $\hat\xi_{I,J}$ and $\hat\eta_{I,J}$ on $\Delta_{ij,\ell}$ also yield the same error $O(H^2)$ for continuous images. The improvements in Section 3 will maintain the optimal convergence rates $O(H^2) + O(H/N^2)$ for $\mu = 1$, as in [11] (Chapter 4). Therefore, Algorithms $\mathrm{SI^{\#}M}$ and $\mathrm{CI^{\#}IM}$ circumvent the troublesome nonlinear solutions in the original CIIM in Section 2. For the rather complicated harmonic and blending models, the PDE solutions in Section 5 may be incorporated easily into the discrete algorithms proposed. Hence, the improved algorithms $\mathrm{SI^{\#}M}$ and $\mathrm{CI^{\#}IM}$ may be applied to a wide range of applications in digital images and patterns.
II. To estimate the errors of discontinuous images, we have to face the image discontinuity; this is a challenging task because discontinuity of image greyness always exists, and because severe discontinuity may render the continuous analysis of image transformations useless. In Section 4.2, we do not need the assumption (9) but still use the Sobolev norms for error analysis throughout. For the improved techniques in Section 3, the greyness errors $O(H^\beta) + O(\frac{H^\beta}{N^2})$ are given in Corollary 1, where $\beta \in (0,1)$ and $0 < H \ll 1$. This is a significant development in error analysis for image transformations, noting that highly smooth greyness of images was often required in our previous papers [6,7]. The new splitting techniques in this paper can also be applied to image processing, pattern recognition and AI under geometric transformations in [14,15,16,17].

4.1. Error Analysis for $\mathrm{SI^{\#}M}$

Since the detailed analysis of the combinations for images under $T^{-1}T$ is given in Li [6], we only derive the distinct analysis of $\mathrm{SI^{\#}M}$ for $T$ with $\mu = 1$. Let the division number $N = N_p = 2^p$, $p = 0, 1, 2, \ldots$, and define the global greyness errors:
$E_2^{(N)}(\hat B) = \Bigg\{\frac{\sum_{IJ}\big((\hat B^{\#}_{IJ})^{(N)} - B_{IJ}\big)^2}{I_{Num}(\hat Z)}\Bigg\}^{\frac12}, \qquad E^{(N)}(\hat B) = \frac{\sum_{IJ}\big|(\hat B^{\#}_{IJ})^{(N)} - B_{IJ}\big|}{I_{Num}(\hat Z)},$
where the $(\hat B^{\#}_{IJ})^{(N)}$ are given in (26), and the sequential errors
$\Delta E_2^{(N_p)}(\hat B) = \Bigg\{\frac{1}{I_{Num}(\hat Z)}\sum_{IJ}\big|(\hat B^{\#}_{IJ})^{(N_p)} - (\hat B^{\#}_{IJ})^{(N_{p-1})}\big|^2\Bigg\}^{\frac12},$
$\Delta E^{(N_p)}(\hat B) = \frac{1}{I_{Num}(\hat Z)}\sum_{IJ}\big|(\hat B^{\#}_{IJ})^{(N_p)} - (\hat B^{\#}_{IJ})^{(N_{p-1})}\big|,$
and the pixel number $I_{Num}(\hat Z) = M^2 \approx \frac{|S|}{H^2}$, $M \gg 1$. The Sobolev norms over $H^n(S)$ are defined by
$\|v\|_{n,S} = \Big\{\sum_{|\alpha|\le n}\iint_S |D^\alpha v|^2\,dx\,dy\Big\}^{1/2}, \qquad |v|_{n,S} = \Big\{\sum_{|\alpha|=n}\iint_S |D^\alpha v|^2\,dx\,dy\Big\}^{1/2}.$
We consider the forward transformation $T$, and construct a cubic spline function passing through $\phi(iH, jH) = \phi_{ij}$ such that $\phi(\xi,\eta)\in C^2(\Omega)$ and $b(x,y) = \phi(\xi(x,y), \eta(x,y)) \in C^2(S)$. Define
$B_{IJ} = \frac{1}{H^2}\iint_{\square_{IJ}}b(x,y)\,dx\,dy, \qquad \hat B_{IJ} = \frac{1}{H^2}\iint_{\square_{IJ}}\hat b(x,y)\,dx\,dy,$
where $\hat b(x,y) = \hat\phi_\mu(\xi(x,y), \eta(x,y))$. Here $\hat\phi_0$ and $\hat\phi_1$ denote the piecewise constant and bilinear functions, respectively. The transformation $T$ is said to be regular if $x(\xi,\eta)\in C^2(\Omega)$, $y(\xi,\eta)\in C^2(\Omega)$ and the Jacobian determinant $J$ satisfies $0 < c_0 \le J_0 \le J \le J_M$, where $J_0$ and $J_M$ are bounded constants. In this subsection, we assume that the image greyness is continuous, satisfying $\phi(\xi,\eta)\in H^2(\Omega)$ and $b(x,y)\in H^2(S)$ as in (9), where $H^2(S)$ is the Sobolev space. Such relaxed assumptions in the Sobolev space will be extended to discontinuous greyness functions $\phi(\xi,\eta)$ and $b(x,y)$ in the next subsection. We have the following lemma.
Lemma 1. 
Let $T$ be regular, let $b(x,y)\in H^2(S)$ hold, and let the interpolation functions with $\mu = 0, 1$ be used. There exists a bounded constant $C$, independent of $\mu$ and $h$, such that
$\big|\hat B_{IJ} - \hat B_{IJ}^{(N)}\big| \le C\Big\{\frac{h^2}{H}\|\hat b\|_{2,\square_{IJ}} + \frac{h^{\mu+\frac12}}{H^{\frac32}}\|\hat b\|_{\mu,\partial\square_{IJ}}\Big\},$
where $\partial\square_{IJ} = \bigcup_{\text{Case B}}\square_{IJ,k\ell}$, and Case B denotes those $\square_{IJ,k\ell}$ whose inverse transformed shapes under $T^{-1}$ fall across the boundary $\partial\square_{ij}$ of $\square_{ij}$ in $\xi o\eta$. For steps 2 and 3 in Figure 1, we have
$\Big\{\frac{1}{M^2}\sum_{I,J}\big|\hat B_{IJ} - \hat B_{IJ}^{(N)}\big|^2\Big\}^{\frac12} \le \frac{C}{|S|^{\frac12}}\Big\{h^2\|\hat b\|_{2,S} + \frac{h^{\mu+\frac12}}{H^{\frac12}}\|\hat b\|_{\mu,\tilde S}\Big\},$
where $S = \bigcup_{IJ}\square_{IJ}$ and $\tilde S = \bigcup_{IJ}\partial\square_{IJ}$.
Proof. 
The bounds in (30) are obtained similarly to [11] (Chapter 4), by replacing $\phi$ and $\Phi_{ij}$ with $b$ and $B_{IJ}$, respectively. Next, from (30), we have
$\Big\{\frac{1}{M^2}\sum_{I,J}\big|\hat B_{IJ} - \hat B_{IJ}^{(N)}\big|^2\Big\}^{\frac12} \le \frac{C}{M}\Bigg\{\sum_{I,J}\Big[\Big(\frac{h^2}{H}\Big)^2\|\hat b\|^2_{2,\square_{IJ}} + \Big(\frac{h^{\mu+\frac12}}{H^{\frac32}}\Big)^2\|\hat b\|^2_{\mu,\partial\square_{IJ}}\Big]\Bigg\}^{\frac12} \le \frac{C}{M}\Big\{\frac{h^2}{H}\|\hat b\|_{2,S} + \frac{h^{\mu+\frac12}}{H^{\frac32}}\|\hat b\|_{\mu,\tilde S}\Big\} = \frac{C}{|S|^{\frac12}}h^2\|\hat b\|_{2,S} + \frac{C\,h^{\mu+\frac12}}{H^{\frac12}}\|\hat b\|_{\mu,\tilde S} = O\Big(\frac{H^2}{N^2}\Big) + O\Big(\frac{H^\mu}{N^{\mu+1}}\Big),$
where $|S| \approx (MH)^2 = O(1)$ and $|\tilde S| \approx M^2 Hh = \frac{(MH)^2}{N} = O(\frac{1}{N})$. This completes the proof of Lemma 1. □
For simplicity, we consider only the case of $\mu = 1$ and achieve the optimal convergence rates $O(H^2) + O(H/N^2)$. For $\mu = 0$, the analysis may be carried out leading to $O(N^{-1})$ and $O_p(N^{-1.5})$ in probability, as in Li and Bai [7]. The difference in analysis between $\mathrm{SI^{\#}M}$ of Section 3 and SIM of Section 2 lies in the evaluation of the approximation errors resulting from $\hat\xi(x_{\dot g}, y_{\dot g})$, $\hat\eta(x_{\dot g}, y_{\dot g})$, $\hat\xi^*(x_{\dot g}, y_{\dot g})$ and $\hat\eta^*(x_{\dot g}, y_{\dot g})$ in (19) and (25), instead of the exact values $\xi(x_{\dot g}, y_{\dot g})$ and $\eta(x_{\dot g}, y_{\dot g})$. Now, we give the greyness errors from Technique I, following Strang and Fix [18].
Lemma 2. 
Let $\hat\xi(x,y)$ and $\hat\eta(x,y)$ in (19) be the piecewise bilinear interpolations of $\xi(x,y)$ and $\eta(x,y)$ on $\overline\square_{IJ}$, based on the exact values of $\xi_{IJ} = \xi(IH, JH)$ and $\eta_{IJ} = \eta(IH, JH)$ satisfying the two equations $IH = x(\xi_{IJ}, \eta_{IJ})$ and $JH = y(\xi_{IJ}, \eta_{IJ})$ exactly. Suppose that $\xi(x,y)\in H^2(S)$ and $\eta(x,y)\in H^2(S)$. There exists a bounded constant $C$, independent of $H$, such that
$\|\xi - \hat\xi\|_{0,S} + \|\eta - \hat\eta\|_{0,S} \le C H^2\big(|\xi|_{2,S} + |\eta|_{2,S}\big).$
In Technique II, we obtain the approximations $(\hat\xi^*_{IJ}, \hat\eta^*_{IJ})$ in (25) from the piecewise linear interpolations $\xi \approx \hat\xi(x,y)$ and $\eta \approx \hat\eta(x,y)$ (see Figure 3). Below, we prove a lemma.
Lemma 3. 
Let $\hat\xi^*(x,y)$ and $\hat\eta^*(x,y)$ be the piecewise linear interpolations based on the approximations $\xi^*_{IJ} = \hat\xi(IH, JH)$ and $\eta^*_{IJ} = \hat\eta(IH, JH)$, where $\hat\xi$ and $\hat\eta$ are the piecewise linear interpolations on $\bigcup_{ij,\ell}\Delta_{ij,\ell}$ in Lemma 2. Then, there exist the error bounds:
$\|\hat\xi^*(x,y) - \hat\xi(x,y)\|_{0,S} \le C\|\xi(x,y) - \hat\xi(x,y)\|_{0,S},$
$\|\hat\eta^*(x,y) - \hat\eta(x,y)\|_{0,S} \le C\|\eta(x,y) - \hat\eta(x,y)\|_{0,S}.$
Proof. 
Take the linear approximation (20) as an example. We have
$\hat\xi^*(x,y) - \hat\xi(x,y) = -\epsilon_{I,J}\frac{x+y-(I+J+1)H}{H} + \epsilon_{I+1,J}\frac{x-IH}{H} + \epsilon_{I,J+1}\frac{y-JH}{H},$
where $\epsilon(x,y) = \hat\xi^*(x,y) - \hat\xi(x,y)$, $\epsilon_{I,J} = \epsilon(IH, JH)$, and $(x,y)\in\overline\square_{IJ}$ with $x+y \le (I+J+1)H$. By using the affine transformation $r = \frac{x-IH}{H}$ and $t = \frac{y-JH}{H}$, the triangle $\Delta_{IJ,1}$ is transformed to a reference triangle $\Delta$ ($r \ge 0$, $t \ge 0$, $r+t \le 1$). Then, we have
$\iint_{\Delta_{IJ,1}}\big(\hat\xi^*(x,y)-\hat\xi(x,y)\big)^2dx\,dy = H^2\iint_{\Delta}\big((1-r-t)\epsilon_{IJ} + r\,\epsilon_{I+1,J} + t\,\epsilon_{I,J+1}\big)^2dr\,dt \le 3H^2\Big\{\epsilon_{IJ}^2\iint_\Delta(1-r-t)^2dr\,dt + \epsilon_{I+1,J}^2\iint_\Delta r^2dr\,dt + \epsilon_{I,J+1}^2\iint_\Delta t^2dr\,dt\Big\},$
where the integrals are evaluated by calculus:
$\iint_\Delta(1-r-t)^2dr\,dt = \int_0^1\!\!\int_0^{1-t}(1-r-t)^2dr\,dt = \frac{1}{12}, \qquad \iint_\Delta r^2dr\,dt = \iint_\Delta t^2dr\,dt = \frac{1}{12}.$
Then, it follows that
$\iint_{\Delta_{IJ,1}}\big(\hat\xi^*(x,y)-\hat\xi(x,y)\big)^2dx\,dy \le \frac{H^2}{4}\big(\epsilon_{IJ}^2 + \epsilon_{I+1,J}^2 + \epsilon_{I,J+1}^2\big).$
Since all norms in finite dimensions are equivalent to each other, by noting $\epsilon(x,y) = \hat\xi(x,y) - \xi(x,y) = \epsilon(r,t)$, we obtain
$\big(\epsilon_{IJ}^2 + \epsilon_{I+1,J}^2 + \epsilon_{I,J+1}^2\big) \le C\iint_\Delta\big(\epsilon(r,t)\big)^2dr\,dt \le \frac{C}{H^2}\iint_{\Delta_{IJ,1}}(\hat\xi-\xi)^2dx\,dy,$
where $C$ is a bounded constant independent of $H$. Therefore, we obtain from (35) and (36)
$\|\hat\xi^*-\hat\xi\|^2_{0,S} = \sum_{IJ,\ell}\iint_{\Delta_{IJ,\ell}}\big(\hat\xi^*(x,y)-\hat\xi(x,y)\big)^2dx\,dy \le C\sum_{IJ,\ell}\iint_{\Delta_{IJ,\ell}}\big(\hat\xi(x,y)-\xi(x,y)\big)^2dx\,dy = C\|\hat\xi-\xi\|^2_{0,S}.$
This is the desired result (32), and Equation (33) holds similarly. □
Lemma 3 implies that the error order resulting from Technique II is no worse than that from Technique I. Now let us consider $\mathrm{SI^{\#}M}$ using Techniques I and II, and define the absolute errors by
$\big(E_2(\hat B)\big)^2 = \frac{\sum_{IJ}\big((\hat B^{\#}_{IJ})^{(N)} - B_{IJ}\big)^2}{M^2},$
where $B_{IJ}$ and $(\hat B^{\#}_{IJ})^{(N)}$ are defined in (15) and (26), respectively. Denote
$\hat B^{\#}_{IJ} = \frac{1}{H^2}\iint_{\square_{IJ}}\hat\phi_1(\hat\xi^*,\hat\eta^*)\,dx\,dy, \qquad \hat B^{*}_{IJ} = \frac{1}{H^2}\iint_{\square_{IJ}}\hat\phi_1(\hat\xi,\hat\eta)\,dx\,dy, \qquad \hat B_{IJ} = \frac{1}{H^2}\iint_{\square_{IJ}}\hat\phi_1(\xi,\eta)\,dx\,dy.$
We have
$\big(E_2(\hat B)\big)^2 \le \frac{4}{M^2}\Big\{\sum_{IJ}(B_{IJ}-\hat B_{IJ})^2 + \sum_{IJ}(\hat B_{IJ}-\hat B^*_{IJ})^2 + \sum_{IJ}(\hat B^*_{IJ}-\hat B^{\#}_{IJ})^2 + \sum_{IJ}\big(\hat B^{\#}_{IJ}-(\hat B^{\#}_{IJ})^{(N)}\big)^2\Big\}.$
To provide the error bounds for (39), we first give two lemmas.
Lemma 4. 
Suppose that $\xi\in H^2(\Omega)$ and $\eta\in H^2(\Omega)$. There exist the error bounds
$\frac{1}{M^2}\sum_{IJ}(B_{IJ}-\hat B_{IJ})^2 \le \frac{1}{|S|}\|\phi-\hat\phi_1\|^2_{0,S},$
$\frac{1}{M^2}\sum_{IJ}(\hat B_{IJ}-\hat B^*_{IJ})^2 \le \frac{C}{|S|}|\hat\phi_1|^2_{1,S}\big(\|\xi-\hat\xi\|^2_{0,S}+\|\eta-\hat\eta\|^2_{0,S}\big),$
$\frac{1}{M^2}\sum_{IJ}(\hat B^*_{IJ}-\hat B^{\#}_{IJ})^2 \le \frac{C}{|S|}|\hat\phi_1|^2_{1,S}\big(\|\xi-\hat\xi\|^2_{0,S}+\|\eta-\hat\eta\|^2_{0,S}\big).$
Proof. 
We have from $|S| = M^2H^2$ and the Schwarz inequality
$\frac{1}{M^2}\sum_{IJ}(B_{IJ}-\hat B_{IJ})^2 = \frac{1}{M^2}\sum_{IJ}\Big\{\frac{1}{H^2}\iint_{\square_{IJ}}(\phi-\hat\phi_1)\,dx\,dy\Big\}^2 \le \frac{1}{M^2H^4}\sum_{IJ}\iint_{\square_{IJ}}(\phi-\hat\phi_1)^2dx\,dy \times \iint_{\square_{IJ}}dx\,dy = \frac{1}{M^2H^2}\sum_{IJ}\iint_{\square_{IJ}}(\phi-\hat\phi_1)^2dx\,dy = \frac{1}{|S|}\|\phi-\hat\phi_1\|^2_{0,S}.$
This is the first bound (40).
Next, we have from (43) and the Schwarz inequality
$\frac{1}{M^2}\sum_{IJ}(\hat B_{IJ}-\hat B^*_{IJ})^2 = \frac{1}{M^2}\sum_{IJ}\Big\{\frac{1}{H^2}\iint_{\square_{IJ}}\big(\hat\phi_1(\xi,\eta)-\hat\phi_1(\hat\xi,\hat\eta)\big)dx\,dy\Big\}^2 \le \frac{1}{|S|}\|\hat\phi_1(\xi,\eta)-\hat\phi_1(\hat\xi,\hat\eta)\|^2_{0,S} \le \frac{C}{|S|}|\hat\phi_1|^2_{1,S}\big(\|\xi-\hat\xi\|^2_{0,S}+\|\eta-\hat\eta\|^2_{0,S}\big).$
Third, from Lemma 3 we have similarly
$\frac{1}{M^2}\sum_{IJ}(\hat B^*_{IJ}-\hat B^{\#}_{IJ})^2 = \frac{1}{M^2}\sum_{IJ}\Big\{\frac{1}{H^2}\iint_{\square_{IJ}}\big(\hat\phi_1(\hat\xi,\hat\eta)-\hat\phi_1(\hat\xi^*,\hat\eta^*)\big)dx\,dy\Big\}^2 \le \frac{1}{|S|}\|\hat\phi_1(\hat\xi,\hat\eta)-\hat\phi_1(\hat\xi^*,\hat\eta^*)\|^2_{0,S} \le \frac{C}{|S|}\|\hat\phi_1(\xi,\eta)-\hat\phi_1(\hat\xi,\hat\eta)\|^2_{0,S} \le \frac{C}{|S|}|\hat\phi_1|^2_{1,S}\big(\|\xi-\hat\xi\|^2_{0,S}+\|\eta-\hat\eta\|^2_{0,S}\big).$
This completes the proof of Lemma 4. □
The Jacobian determinant $J = \det D(\mathbf{z}) = x_\xi y_\eta - x_\eta y_\xi$ is given from (18). The nonsingular Jacobian determinant satisfies $J_M \ge J \ge J_0 \ge c_0 > 0$. Denote by $\sigma_0$ ($\ge c_0 > 0$) the minimal singular value of the Jacobian matrix $D(\mathbf{z})$. Then, we have $\mathbf{z}^T D^T(\mathbf{z})D(\mathbf{z})\mathbf{z} \ge \sigma_0^2\,\mathbf{z}^T\mathbf{z}$, and give the following lemma.
Lemma 5. 
Let the transformation $T$ be regular. Then, under the inverse transformation $T^{-1}: (x,y)\to(\xi,\eta)$, there exist the bounds
$|\phi|_{k,S} \le \frac{J_M^{1/2}}{\sigma_0^{\,k}}\,|\phi|_{k,\Omega}, \quad k = 0, 1, 2,$
where $J_M$ is the maximal value of the Jacobian determinant, and $\sigma_0$ is the minimal singular value of the matrix $D(\mathbf{z})$ in (18).
Proof. 
We have
$\|\phi\|^2_{0,S} = \iint_S\phi^2\,dx\,dy = \iint_\Omega\phi^2 J\,d\xi\,d\eta \le J_M\iint_\Omega\phi^2\,d\xi\,d\eta = J_M\|\phi\|^2_{0,\Omega}.$
This is (45) for k = 0 .
Next, since
$\begin{pmatrix}\phi_\xi\\ \phi_\eta\end{pmatrix} = D^T(\mathbf{z})\begin{pmatrix}\phi_x\\ \phi_y\end{pmatrix},$
we have
$\phi_\xi^2 + \phi_\eta^2 = [\phi_x,\ \phi_y]\,D(\mathbf{z})D^T(\mathbf{z})\begin{pmatrix}\phi_x\\ \phi_y\end{pmatrix} \ge \sigma_0^2\big(\phi_x^2+\phi_y^2\big).$
From (47), we have $\phi_{\xi\xi}^2 + \phi_{\xi\eta}^2 \ge \sigma_0^2(\phi_{\xi x}^2 + \phi_{\xi y}^2)$ and $\phi_{\eta\xi}^2 + \phi_{\eta\eta}^2 \ge \sigma_0^2(\phi_{\eta x}^2 + \phi_{\eta y}^2)$. Hence, we obtain
$\phi_{\xi\xi}^2 + 2\phi_{\xi\eta}^2 + \phi_{\eta\eta}^2 \ge \sigma_0^2\big(\phi_{\xi x}^2 + \phi_{\xi y}^2 + \phi_{\eta x}^2 + \phi_{\eta y}^2\big) = \sigma_0^2\big\{(\phi_{x\xi}^2 + \phi_{x\eta}^2) + (\phi_{y\xi}^2 + \phi_{y\eta}^2)\big\} \ge \sigma_0^4\big\{(\phi_{xx}^2+\phi_{xy}^2)+(\phi_{yx}^2+\phi_{yy}^2)\big\} = \sigma_0^4\big(\phi_{xx}^2 + 2\phi_{xy}^2 + \phi_{yy}^2\big).$
Consequently, we have from (47) and (48)
$|\phi|^2_{1,S} = \iint_S\big(\phi_x^2+\phi_y^2\big)\,dx\,dy \le \frac{1}{\sigma_0^2}\iint_\Omega\big(\phi_\xi^2+\phi_\eta^2\big)J\,d\xi\,d\eta \le \frac{J_M}{\sigma_0^2}\iint_\Omega\big(\phi_\xi^2+\phi_\eta^2\big)\,d\xi\,d\eta = \frac{J_M}{\sigma_0^2}|\phi|^2_{1,\Omega},$
and
$|\phi|^2_{2,S} = \iint_S\big(\phi_{xx}^2+2\phi_{xy}^2+\phi_{yy}^2\big)\,dx\,dy \le \frac{1}{\sigma_0^4}\iint_\Omega\big(\phi_{\xi\xi}^2+2\phi_{\xi\eta}^2+\phi_{\eta\eta}^2\big)J\,d\xi\,d\eta \le \frac{J_M}{\sigma_0^4}|\phi|^2_{2,\Omega}.$
These are (45) for $k = 1, 2$, which completes the proof of Lemma 5. □
Now we prove a main theorem, which is not only important for error analysis of the improved algorithms in this paper, but also effective for error analysis of discontinuous greyness of images in Section 4.2.
Theorem 1. 
Let $x(\xi,\eta)\in H^2(\Omega)$ and $y(\xi,\eta)\in H^2(\Omega)$, and let the conditions in Lemma 5 hold. Then, there exists the error bound for $\mu = 1$:
$E_2(\hat B) \le \frac{C J_M^{1/2}}{|S|^{1/2}}\Big\{\frac{H^2}{\sigma_0}|\phi|_{1,\Omega}\big(|\xi|_{2,S}+|\eta|_{2,S}\big) + H^2\|\phi\|_{2,\Omega} + \frac{h^2}{\sigma_0^2}\|\phi\|_{2,\Omega} + \frac{h^2}{\sigma_0 H}\|\phi\|_{1,\Omega}\Big\}.$
Proof. 
From Lemmas 1 and 5 for $\mu = 1$,
$\Big\{\frac{1}{M^2}\sum_{IJ}\big(\hat B^{\#}_{IJ}-(\hat B^{\#}_{IJ})^{(N)}\big)^2\Big\}^{\frac12} \le \frac{C}{|S|^{\frac12}}\Big\{h^2\|\hat b_1^*\|_{2,S} + \frac{h^{\frac32}}{H^{\frac12}}\|\hat b_1^*\|_{1,\tilde S}\Big\} \le \frac{C J_M^{\frac12}}{|S|^{\frac12}}\Big\{\frac{h^2}{\sigma_0^2}\|\hat\phi_1^*\|_{2,\tilde\Omega} + \frac{h^{\frac32}}{\sigma_0 H^{\frac12}}\|\hat\phi_1^*\|_{1,\tilde\Omega}\Big\},$
where $\tilde\Omega$ denotes the original domain of $\tilde S$ under $T^{-1}$. Since $\phi$ can be regarded as a piecewise bi-cubic spline function, the following bounds hold due to the finite-dimensional functions:
$|\tilde\Omega| \approx \frac{1}{N}|\Omega| = C\frac{h}{H}|\Omega|, \qquad |\hat\phi_1|_{1,\tilde\Omega} \le C\Big(\frac{h}{H}\Big)^{\frac12}\|\phi\|_{1,\Omega}, \qquad \|\hat\phi_1^*\|_{2,\tilde\Omega} \le C\|\phi\|_{2,\Omega}.$
It follows that
$\Big\{\frac{1}{M^2}\sum_{IJ}\big(\hat B^{\#}_{IJ}-(\hat B^{\#}_{IJ})^{(N)}\big)^2\Big\}^{\frac12} \le \frac{C J_M^{\frac12}}{|S|^{\frac12}}\Big\{\frac{h^2}{\sigma_0^2}\|\phi\|_{2,\Omega} + \frac{h^2}{\sigma_0 H}\|\phi\|_{1,\Omega}\Big\}.$
Also, we have from Lemmas 4 and 5
$\frac{1}{M^2}\sum_{IJ}(B_{IJ}-\hat B_{IJ})^2 \le \frac{1}{|S|}\|\phi-\hat\phi_1\|^2_{0,S} \le \frac{C J_M}{|S|}\|\phi-\hat\phi_1\|^2_{0,\Omega} \le \frac{C J_M H^4}{|S|}\|\phi\|^2_{2,\Omega}.$
Moreover, we have $|\hat\phi_1|_{1,S} \le \frac{C J_M^{1/2}}{\sigma_0}|\phi|_{1,\Omega}$ from Lemma 5. Then, we obtain from Lemmas 4 and 2
$\frac{1}{M^2}\sum_{IJ}\big\{(\hat B_{IJ}-\hat B^*_{IJ})^2 + (\hat B^*_{IJ}-\hat B^{\#}_{IJ})^2\big\} \le \frac{C}{|S|}|\hat\phi_1|^2_{1,S}\big(\|\xi-\hat\xi\|^2_{0,S}+\|\eta-\hat\eta\|^2_{0,S}\big) \le \frac{C J_M H^4}{|S|\,\sigma_0^2}|\phi|^2_{1,\Omega}\big(|\xi|^2_{2,S}+|\eta|^2_{2,S}\big).$
Based on (39), we have from (52)–(54)
$E_2(\hat B) \le \frac{C J_M^{\frac12}H^2}{|S|^{\frac12}\sigma_0}|\phi|_{1,\Omega}\big(|\xi|_{2,S}+|\eta|_{2,S}\big) + \frac{C J_M^{\frac12}H^2}{|S|^{\frac12}}\|\phi\|_{2,\Omega} + \frac{C J_M^{\frac12}}{|S|^{\frac12}}\Big\{\frac{h^2}{\sigma_0^2}\|\phi\|_{2,\Omega} + \frac{h^2}{\sigma_0 H}\|\phi\|_{1,\Omega}\Big\},$
to give the desired results (49). This completes the proof of Theorem 1. □
Remark 1. 
For the regular transformation $T$, we have $\xi, \eta \in H^2(S)$, $\sigma_0 \ge c_0 > 0$ and $J_M < C$. Then, Equation (49) leads to
$E_2(\hat B) \le C\Big\{H^2 + h^2 + \frac{h^2}{H}\Big\} = O(H^2) + O\Big(\frac{H^2}{N^2}\Big) + O\Big(\frac{H^\mu}{N^2}\Big), \quad \mu = 1.$
The error bounds (56) are the optimal estimates of the SIM via the transformation $T^{-1}$ in [11] (Chapters 3 and 4). The optimal convergence rates (56) remain for the improved $\mathrm{SI^{\#}M}$ via $T$ without nonlinear solutions. Hence, the $\mathrm{SI^{\#}M}$ is beneficial for wide applications, as in Section 5. Moreover, the error bounds (49) in Sobolev norms can be extended to discontinuous images below.

4.2. Error Analysis for Image Discontinuity

In our past studies of image geometric transformations, the greyness functions $\phi(\xi,\eta)$ were assumed to be continuous, $\phi\in H^2(\Omega)$ or even $\phi\in C^2(\Omega)$. However, since $H \ll 1$, we may study the discontinuity of digital images and patterns. In this subsection, we introduce the discontinuity degree. First, let us consider binary images. If the entire image is either black or white, the discontinuity degree is zero. On the other hand, if the pixel is black when $i+j$ is even, and white when $i+j$ is odd, the discontinuity degree is one. Let $\Omega$ be split into a $2\times2$ partition of the $\square_{ij}$, denoted by $\square^{2\times2}_{ij}$. Denote by $\overline\square^{2\times2}_{ij}$ those $\square^{2\times2}_{ij}$ in which both blacks and whites exist at the nine pixel points (see Figure 4). Let $N_{dis} = Num(\overline\square^{2\times2}_{ij})$. Hence, the discontinuity degree is defined by
$\rho_{dis} = \frac{N_{dis}}{Num\ \text{of}\ \square^{2\times2}_{ij}} = \frac{4N_{dis}}{M^2} = \frac{4N_{dis}}{|\Omega|/H^2} = \frac{4H^2}{|\Omega|}N_{dis}.$
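A rough sketch of estimating $N_{dis}$ and $\rho_{dis}$ for a binary image stored as a 0/1 array is given below; the block-scanning loop is an illustrative reading of definition (57), not the paper's implementation.

```python
import numpy as np

def discontinuity_degree(img):
    """Estimate N_dis and rho_dis as in (57) for a binary image given as a 0/1
    array of pixel greyness values at the pixel points.  A 2x2 block of pixels
    is counted in N_dis when its 3x3 pixel-point neighbourhood contains both
    black and white values."""
    M = img.shape[0]
    n_dis = 0
    for i in range(0, M - 2, 2):
        for j in range(0, M - 2, 2):
            patch = img[i:i + 3, j:j + 3]          # nine pixel points of one block
            if patch.min() != patch.max():         # both colours present
                n_dis += 1
    rho_dis = 4.0 * n_dis / (M * M)
    return n_dis, rho_dis

# checkerboard image: almost every block is discontinuous, rho_dis close to 1
img = np.indices((64, 64)).sum(axis=0) % 2
print(discontinuity_degree(img))
```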
There are four cases of image discontinuity in 2D, with the discontinuity degree given by (57):
(1) There are only a few isolated pixels:
$N_{dis} \le C, \qquad \rho_{dis} = O(H^2).$
(2) Greyness discontinuity exists only along the interior and exterior image boundaries:
$N_{dis} = O(H^{-1}), \qquad \rho_{dis} = O(H).$
(3) Greyness discontinuity is minor:
$N_{dis} = O(H^{-\gamma}), \qquad \rho_{dis} = O(H^{2-\gamma}), \quad \gamma\in(0,2).$
(4) All $\square^{2\times2}_{ij}$ are $\overline\square^{2\times2}_{ij}$:
$N_{dis} = O(H^{-2}), \qquad \rho_{dis} = 1.$
The analysis below in this paper is well suited to Cases (1)–(3). For Case (4), only the large, useless errors $O(1)$ are obtained. Let us prove a new lemma.
Lemma 6. 
For nonuniform binary images in $\overline\square^{2\times2}_{ij}$, the error bounds of the piecewise cubic-spline interpolant functions $\phi(\xi,\eta)$ passing through all $\phi_{ij}$ are given by
$|\phi|_{\ell,\infty,\overline\square^{2\times2}_{ij}} \le C H^{-\ell},$
$|\phi|_{\ell,\Omega} \le C\big\{1 + N_{dis}^{1/2} H^{1-\ell}\big\}, \quad \ell = 0, 1, 2.$
Proof. 
By the affine transformation $T^*: (\xi,\eta)\to(\xi^*,\eta^*)$,
$\xi^* = \frac{\xi-\xi_{i-\frac12}}{2H}, \qquad \eta^* = \frac{\eta-\eta_{j-\frac12}}{2H},$
the region $\square^{2\times2}_{ij}$ is transformed to a unit square $\square^*$ in Figure 4, where $\square^* = \{(\xi^*,\eta^*)\mid 0\le\xi^*\le1,\ 0\le\eta^*\le1\}$. Then, we have $|\phi|_{\ell,\infty,\square^{2\times2}_{ij}} = (2H)^{-\ell}|\phi^*|_{\ell,\infty,\square^*}$. Note that the same binary values appear at the nine pixel points on $\square^*$ as in $\square^{2\times2}_{ij}$. Let $\hat\phi^*_2$ be the bi-quadratic Lagrange polynomial passing through all pixel points: $\phi^*_{ij} = \phi_{ij}$. Based on the equivalence of finite-dimensional norms, we can prove $|\phi^*|_{\ell,\infty,\square^*} \le C|\hat\phi^*_2|_{\ell,\infty,\square^*}$. Hence,
$|\phi|_{\ell,\infty,\square^{2\times2}_{ij}} \le C H^{-\ell}|\hat\phi^*_2|_{\ell,\infty,\square^*} \le C|\hat\phi_2|_{\ell,\infty,\square^{2\times2}_{ij}},$
where $\hat\phi_2$ is also a bi-quadratic Lagrange interpolant polynomial in $\xi o\eta$.
For the binary images, the bi-quadratic interpolant $\hat\phi_2$ corresponds to $\hat\phi^*_2$ under $T^*$, and $\hat\phi_2(iH, jH) = \phi_{ij}$. It is easy to see
$|\hat\phi_2|_{\ell,\infty,\square^{2\times2}_{ij}} \le C H^{-\ell}.$
The first desired result (58) follows from (61) and (62). Moreover,
$|\phi|^2_{\ell,\Omega} = \sum_{ij}|\phi|^2_{\ell,\square^{2\times2}_{ij}} = \sum_{\overline\square}|\phi|^2_{\ell,\overline\square^{2\times2}_{ij}} + \sum_{\text{remaining}}|\phi|^2_{\ell,\square^{2\times2}_{ij}} = C N_{dis}H^{2(1-\ell)} + O(1).$
This completes the proof of Lemma 6. □
For $q$-greyness-level images, we may define the greyness jumps of images as those with jump size
$\ge \frac{q-1}{p}, \quad \text{e.g., } p = 32 \text{ for } q = 256.$
The analysis of binary images can be extended to that of other multiple level images. From Theorem 1 and Lemma 6, we have the following theorem.
Theorem 2. 
Let the transformation $T$ be regular and all conditions in Lemma 6 hold. When $N_{dis} \ge 1$, there exist the bounds:
$E_2(\hat B) \le C\sqrt{N_{dis}}\Big\{H + \frac{H}{N^2}\Big\}.$
Proof. 
From Theorem 1 and Lemma 6, we have
$E_2(\hat B) \le C(N_{dis}J_M)^{\frac12}H\Big\{\frac{H}{|S|^{\frac12}\sigma_0}\big(|\xi|_{2,S}+|\eta|_{2,S}\big) + 1 + \frac{1}{N^2}\Big(\frac{1}{\sigma_0^2}+\frac{1}{\sigma_0}\Big)\Big\}.$
The regular transformation $T$ implies $\xi, \eta\in H^2(S)$, $\sigma_0 \ge c_0 > 0$ and $J_M < C$. We have
$E_2(\hat B) \le C\Big\{\sqrt{N_{dis}}\,H^2 + H + \frac{H}{N^2}\Big\} \le C\sqrt{N_{dis}}\Big\{H + \frac{H}{N^2}\Big\}.$
This is the desired result (64), which completes the proof of Theorem 2. □
From Theorem 2, we have the following corollary.
Corollary 1. 
Let all conditions in Theorem 1 hold. For finite greyness jumps, $N_{dis} \le C$, the absolute errors are
$E_2(\hat B) = O(H) + O(H/N^2).$
When the images have greyness jumps along the interior and exterior boundaries, $N_{dis} = O(H^{-1})$,
$E_2(\hat B) = O(H) + O(H/N^2).$
When $N_{dis} = O(H^{-\gamma})$ with $\gamma\in(0,2)$, denote $\beta = 1-\frac{\gamma}{2}\in(0,1)$. We have
$E_2 = E_2(\hat B) = O\big(H^{1-\frac{\gamma}{2}}\big) + O\Big(\frac{H^{1-\frac{\gamma}{2}}}{N^2}\Big) = O(H^\beta) + O\Big(\frac{H^\beta}{N^2}\Big).$
Remark 2. 
When the majority of the image greyness is continuous, we have $N_{dis} = O(H^{-\gamma})$ with small $\gamma$, i.e., $\beta$ close to 1. The errors $E_2$ are then small due to the small $H$ ($\ll 1$) in (65). This is an important development over [6,7], where the images are often assumed to be continuous. Note that the sequential errors $\Delta E_2 = O(N^{-2})$ for $\mu = 1$ still hold. However, for Case (4), fully discontinuous images, we have $E_2 = O(1)$ from (65) with $\beta = 0$ and $\gamma = 2$, which is meaningless. Another error analysis for greyness discontinuity is reported for other kinds of splitting algorithms in [11] (Chapter 8).

4.3. Summary of Six Combinations for the Cycle Transformation $T^{-1}T$

Here, let us give a summary of various combinations for the cycle transformation $T^{-1}T$. There are six combinations with $\mu = 0$ and $\mu = 1$ for 2D images under $T^{-1}T$: (1) CSIM, (2) $\mathrm{C\overline{S}IM}$ and $\mathrm{C\overline{S}\,\overline{I}M}$, (3) $\mathrm{C\overline{S}^{\#}\overline{I}^{\#}M}$, (4) CIIM, (5) $\mathrm{CI^{*}IM}$, and (6) $\mathrm{CI^{\#}IM}$ in this paper. We list their characteristics, accuracy and convergence in Table 1 for $\mu = 1$ only. The details of the algorithms and rigorous analysis are provided in a recent book [11]. Furthermore, a list of symbols used is given in Appendix A.
In summary, the combination CSIM in [1,7] is the simplest, but suffers from the low sequential errors $O(1/N)$ and $O_p(1/N^{1.5})$. The combination CIIM in [11] (Chapter 4) gains the better convergence rates $O(1/N^{\mu+1})$. For $\mu = 1$, using the piecewise bilinear interpolations $\hat\phi_1(\xi,\eta)$ and $\hat b_1(x,y)$, the sequential convergence rates $O(1/N^2)$ are optimal. For $\mu = 0$, using the piecewise constant interpolations $\hat\phi_0(\xi,\eta)$ and $\hat b_0(x,y)$, the performance of CIIM is still satisfactory. However, CIIM suffers from nonlinear solutions during the forward transformation $T$. Consequently, for the inverse transformation $T^{-1}$, CIIM is recommended if the nonlinear solutions are easy to obtain by iteration methods. The original CSIM is well suited to images with a low number of greyness levels, while CIIM is suited to images with both low and high numbers of greyness levels ($\ge 256$). The improved $\mathrm{CI^{\#}IM}$ in [9] may completely bypass the nonlinear solutions, and its rigorous analysis is explored in this paper. The improved $\mathrm{CI^{\#}IM}$ is particularly beneficial for harmonic models with the finite element method, finite difference method and finite volume method, because the approximate transformation $\hat T$ obtained is already piecewise linear (see [8,9]).
To promote accurate images with no need for nonlinear solutions, $\mathrm{C\overline{S}IM}$ for $\mu = 1$ and $\mathrm{C\overline{S}\,\overline{I}M}$ for $\mu = 0$ in [11] (Chapter 7) have been developed to attain the optimal convergence rates $O(1/N^2)$. The sophisticated combinations $\mathrm{C\overline{S}^{\#}\overline{I}^{\#}M}$ in [11] (Chapter 8) may be carried out easily without sequential errors when $N \ge N_0$. Moreover, an analysis in [11] (Chapter 8) reveals the better absolute error $O(H)$ for all kinds of image discontinuity.

5. Applications to Harmonic, Poisson and Blending Models

5.1. Descriptions of the Models

Let us consider arbitrary geometric shape transformations $T$ defined by the given boundaries of both the original and distorted images (see Figure 5). Assuming $x(\xi,\eta), y(\xi,\eta)\in C^2(\Omega)$, we may define the functions $x(\xi,\eta)$ and $y(\xi,\eta)$ as satisfying the Poisson equations
$\Delta x = \frac{\partial^2 x}{\partial\xi^2} + \frac{\partial^2 x}{\partial\eta^2} = f_1(\xi,\eta) \quad \text{in } \Omega,$
$\Delta y = \frac{\partial^2 y}{\partial\xi^2} + \frac{\partial^2 y}{\partial\eta^2} = f_2(\xi,\eta) \quad \text{in } \Omega,$
and the Dirichlet boundary conditions:
$x|_{\partial\Omega} = g_1(\xi,\eta), \qquad y|_{\partial\Omega} = g_2(\xi,\eta).$
The transformation of (70)–(72) is called the Poisson model. When $f_1(\xi,\eta) = f_2(\xi,\eta) = 0$, Equations (70) and (71) are reduced to the harmonic equations:
$\Delta x = \Delta y = 0, \quad (\xi,\eta)\in\Omega.$
The transformation of (72) and (73) is called the harmonic model. The harmonic model was proposed in Li, et al. [1] and then in [9,11] for the combinations CSIM and CIIM.
In this paper, we also propose blending models by assuming that the shapes can be blended as a thin elastic plate. This leads to the following two blending models.
  • Simply supported blending models:
    $\Delta^2 x = f_1(\xi,\eta), \qquad \Delta^2 y = f_2(\xi,\eta),$
    $x|_{\partial\Omega} = g_1(\xi,\eta), \qquad y|_{\partial\Omega} = g_2(\xi,\eta),$
    $\Delta x|_{\partial\Omega} = \Delta y|_{\partial\Omega} = 0.$
  • The clamped blending models: Equations (74), (75) and $x_\nu = y_\nu = 0$ on $\partial\Omega$, where $x_\nu = \frac{\partial x}{\partial\nu}$, and $\nu$ is the outward normal to $\partial\Omega$. In the blending models (74)–(76), we assume $x, y\in C^4(\Omega)$.
Since white pixels can be added to images without any changes, the original image domains may be assumed to be rectangular or even square. Rectangular and square images have many applications, e.g., pictures, TV, etc. Below, we will use the simple finite difference method (FDM) for the harmonic, Poisson and blending models. The practical approximations $\tilde x$ and $\tilde y$ can be obtained and easily embedded into $\mathrm{SI^{\#}M}$ and $\mathrm{CI^{\#}IM}$. The error analysis for the Poisson models in Section 5.3 shows that the optimal sequential errors will be maintained as well. Furthermore, the error analysis of the CSIM for harmonic models by the FEM is given in [11] (Chapter 10).

5.2. The Finite Difference Method

Let $\Omega$ be a unit square and the pixel points be chosen as the difference grid nodes. For the same partition $\Omega = \bigcup_{ij}\square_{ij}$, the standard five-node scheme of the FDM for (70)–(72) is given by
$4x_{i,j} - (x_{i+1,j} + x_{i-1,j} + x_{i,j+1} + x_{i,j-1}) = -H^2 f_{i,j}, \quad 1 \le i, j \le M-1,$
$x_{i,j} = g_1(iH, jH), \quad i = 0, M \ \text{or}\ j = 0, M,$
and $H = 1/M$. The equations for $y_{ij}$ are similar to (77) and (78). The successive over-relaxation (SOR) iteration may be chosen to solve (77) and (78) by
$x^{(k+1)}_{i,j} = x^{(k)}_{i,j} - \frac{w}{4}\Big\{4x^{(k)}_{i,j} - \big(x^{(k+1)}_{i-1,j} + x^{(k+1)}_{i,j-1} + x^{(k)}_{i+1,j} + x^{(k)}_{i,j+1}\big) + H^2 f_{i,j}\Big\},$
where $w$ is the relaxation parameter. The iteration (79) converges when $0 < w < 2$ and is optimal when (see Hageman and Young [19])
$w = w_{opt} = \frac{2}{1+\sqrt{1-\rho^2}} = \frac{2}{1+\sin(\pi H)},$
where $\rho$ is the spectral radius of the Jacobi iteration matrix. For the case of (77) and (78), $\rho = \cos(\pi H)$. Since the iteration number is $O(M)$, the total CPU time for the FDM solutions is $O(M^3)$. For Dirichlet problems of PDEs, the domain boundary $\partial\Omega$ must be given. The harmonic models using the FDM in (77)–(80) are used for face image resembling (face morphing) in a recent paper [8], where face boundary formulation is studied in detail.
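A compact sketch of the five-point FDM with the SOR iteration (79)–(80) on the unit square is given below; the solver name `sor_poisson` and the toy boundary data are assumptions for illustration.

```python
import numpy as np

def sor_poisson(f, g, M, tol=1e-8, max_sweeps=20000):
    """Five-point FDM (77)-(78) on the unit square with H = 1/M, solved by SOR (79)
    with the optimal relaxation parameter (80).  f(xi, eta) is the right-hand side,
    g(xi, eta) the Dirichlet boundary data; returns grid values x[i, j] ~ x(iH, jH)."""
    H = 1.0 / M
    w = 2.0 / (1.0 + np.sin(np.pi * H))            # w_opt in (80)
    grid = np.linspace(0.0, 1.0, M + 1)
    x = np.zeros((M + 1, M + 1))
    for k in range(M + 1):                         # Dirichlet boundary values
        x[0, k], x[M, k] = g(0.0, grid[k]), g(1.0, grid[k])
        x[k, 0], x[k, M] = g(grid[k], 0.0), g(grid[k], 1.0)
    for _ in range(max_sweeps):
        err = 0.0
        for i in range(1, M):
            for j in range(1, M):
                res = (4.0 * x[i, j]
                       - (x[i - 1, j] + x[i, j - 1] + x[i + 1, j] + x[i, j + 1])
                       + H * H * f(grid[i], grid[j]))        # residual of (77)
                x[i, j] -= 0.25 * w * res
                err = max(err, abs(res))
        if err < tol:
            break
    return x

# harmonic model: f = 0, with toy Dirichlet boundary data describing a target shape
x_h = sor_poisson(lambda s, t: 0.0, lambda s, t: s + 0.1 * s * t, M=32)
```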
Next, for the simply supported blending model (74)–(76), we may split it into two Poisson models. Let $v = \Delta x$ and $w = \Delta y$. We have two Poisson models.
(I)
Poisson Model A.
$\Delta v = f_1(\xi,\eta), \qquad \Delta w = f_2(\xi,\eta), \qquad v|_{\partial\Omega} = w|_{\partial\Omega} = 0.$
(II)
Poisson Model B.
$\Delta x = v, \qquad \Delta y = w, \qquad x|_{\partial\Omega} = g_1(\xi,\eta), \qquad y|_{\partial\Omega} = g_2(\xi,\eta).$
We may also apply the same SOR to the two models (81) and (82). As for the clamped blending models, we may obtain the FDM (see Figure 6a)
$20x_{i,j} - 8(x_{i+1,j}+x_{i-1,j}+x_{i,j+1}+x_{i,j-1}) + 2(x_{i+1,j+1}+x_{i+1,j-1}+x_{i-1,j+1}+x_{i-1,j-1}) + (x_{i+2,j}+x_{i-2,j}+x_{i,j+2}+x_{i,j-2}) = H^4 f_1(iH, jH),$
where the boundary conditions are given by
$x_{i,j} = g_1(iH, jH),\ i = 0, M \ \text{or}\ j = 0, M; \qquad x_{-1,j} = x_{1,j},\ x_{M+1,j} = x_{M-1,j},\ j = 0, \ldots, M; \qquad x_{i,-1} = x_{i,1},\ x_{i,M+1} = x_{i,M-1},\ i = 0, \ldots, M.$
The difference equations for $y_{i,j}$ are similar. The block SOR can be used for solving (83) and (84) (see Hageman and Young [19]).
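Assuming the `sor_poisson` sketch above is in scope, the simply supported blending model can be illustrated by two successive Poisson solves, following the split (81)–(82); the nearest-grid lookup used to feed $v_h$ back as a right-hand side is a simplification for illustration only.

```python
# Simply supported blending model (74)-(76), split into Poisson Models A and B
# as in (81)-(82); a minimal sketch reusing the sor_poisson solver above.
# Model A: Laplacian v = f1 with v = 0 on the boundary.
v_h = sor_poisson(lambda s, t: 1.0, lambda s, t: 0.0, M=32)

# Model B: Laplacian x = v with x = g1 on the boundary; interior values of v_h
# are fed back as the right-hand side through a nearest-grid lookup.
def v_rhs(s, t, v=v_h, M=32):
    return v[int(round(s * M)), int(round(t * M))]

x_blend = sor_poisson(v_rhs, lambda s, t: s + 0.1 * s * t, M=32)
```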
When Ω is an arbitrary bounded domain, the finite element method (FEM) may be chosen for the Poisson and the blending models. However, the programming of the FEM is more complicated than that of the FDM, in particular for the biharmonic equations in (74). Furthermore, the finite volume method (FVM) with Delaunay triangulation is used in [11] (Chapter 10). The FDM with SOR is easier to carry out for image transformations. It is worth pointing out that the FDM (77) is also a kind of FEM using linear elements of Figure 6b, or bilinear elements in Figure 6a.
Remark 3. 
The standard five-node difference equations (77) are valid for uniform meshes, as in digital images. Rigorous error analysis is given in Theorem 3 below for the improved $\mathrm{SI^{\#}M}$ by Techniques I and II, giving the error $E_2 = O(H^2) + O(h^2) = O(H^2) + O(H^2/N^2)$. The Laplace operator can be used as a contour filter to detect the edges of digital images; see Winnicki et al. [20]. The contour filter may be used as pre-processing for face boundary formulation in the harmonic/Poisson models in [8]. Various $3\times3$ difference schemes on nine nodes, $(i+k, j+\ell)$ with $k, \ell = 0, \pm1$, are discussed in [20]. Suppose the solutions satisfy $u\in C^6(\Omega)$. The best $3\times3$ difference scheme is found in [20] ((16)–(18)) (called the fourth-order filter), offering higher accuracy for the Laplace operator. The bounds of (87) can then be improved to $\|x-\hat x_h\|_{0,\Omega} + \|y-\hat y_h\|_{0,\Omega} = O(H^4)$. Such a better difference scheme may also be used for the harmonic/Poisson models. It is beneficial for the numerical algorithms using piecewise-spline interpolations with $\mu = 2, 3$ in [11] (Chapter 10) due to the high accuracy required.

5.3. Error Analysis for the FDM Solutions in $\mathrm{SI^{\#}M}$

The FDM was used for harmonic transformations in [8,9], but no error analysis exists so far. In this subsection, we provide a new error analysis. In Section 2, $x_{IJ} = x(IH, JH)$ and $y_{IJ} = y(IH, JH)$ are exact. Since the values $x(\xi,\eta)$ and $y(\xi,\eta)$ can also be obtained approximately by the FDM, $\hat\xi$ and $\hat\eta$ may be obtained approximately from $\tilde x_{IJ} = \hat x_h(IH, JH)$ and $\tilde y_{IJ} = \hat y_h(IH, JH)$, similarly to Section 3. To embed the FDM solutions $\hat x_h$ and $\hat y_h$ into SIM, we first give a lemma.
Lemma 7. 
Let $\hat x_h$ and $\hat y_h$ be the piecewise bilinear interpolants on $\bigcup_{ij,\ell}\Delta_{ij,\ell}$, and let $\hat\xi_h$ and $\hat\eta_h$ be their inverse bilinear functions on $\bigcup_{ij,\ell}\hat\Omega_{ij,\ell}$, where $\hat\Omega_{ij,\ell} \xrightarrow{\hat T^{-1}} \Delta_{ij,\ell}$ under $\hat T$. Then, there exist the bounds
$\|\xi - \hat\xi_h\|_{0,S} + \|\eta - \hat\eta_h\|_{0,S} \le \frac{J_M^{1/2}}{\sigma_0}\big\{\|x - \hat x_h\|_{0,\Omega} + \|y - \hat y_h\|_{0,\Omega}\big\},$
where $\sigma_0$ is the minimal singular value of the matrix $D(\mathbf{z})$ in (18), and the Jacobian determinant $J$ satisfies $0 < J_0 \le J \le J_M$.
Proof. 
From the Lagrange mean value theorem in calculus, we have
$\begin{pmatrix} x - \hat x_h\\ y - \hat y_h\end{pmatrix} = D(\tilde{\mathbf{z}})\begin{pmatrix}\xi - \hat\xi_h\\ \eta - \hat\eta_h\end{pmatrix},$
where $\tilde{\mathbf{z}}$ is a mean value. Then, we have $(\xi-\hat\xi_h)^2 + (\eta-\hat\eta_h)^2 \le \frac{1}{\sigma_0^2}\big\{(x-\hat x_h)^2 + (y-\hat y_h)^2\big\}$, which leads to
$\|\xi-\hat\xi_h\|^2_{0,S} + \|\eta-\hat\eta_h\|^2_{0,S} = \iint_S\big\{(\xi-\hat\xi_h)^2+(\eta-\hat\eta_h)^2\big\}dx\,dy \le \frac{1}{\sigma_0^2}\iint_\Omega\big\{(x-\hat x_h)^2+(y-\hat y_h)^2\big\}J\,d\xi\,d\eta \le \frac{J_M}{\sigma_0^2}\big\{\|x-\hat x_h\|^2_{0,\Omega}+\|y-\hat y_h\|^2_{0,\Omega}\big\}.$
This completes the proof of Lemma 7. □
Theorem 3. 
Let all conditions in Theorem 1 hold. Suppose that $\hat x_h$ and $\hat y_h$ are the approximate solutions of the Poisson models by the FDM (or the linear FEM). Then, there exist the error bounds of image greyness for $\mu = 1$:
$E_2(\hat B) \le \frac{C J_M^{1/2}}{|S|^{1/2}}\Big\{\frac{J_M^{1/2}H^2}{\sigma_0^2}|\phi|_{1,\Omega}\big(|x|_{2,\Omega}+|y|_{2,\Omega}\big) + H^2\|\phi\|_{2,\Omega} + \frac{h^2}{\sigma_0^2}\|\phi\|_{2,\Omega} + \frac{h^2}{\sigma_0 H}\|\phi\|_{1,\Omega}\Big\}.$
Proof. 
We evaluate the greyness errors of the FDM. The FDM can be regarded as the linear FEM on the regular triangulation (see Figure 6b). The FDM solutions $\hat x_h$ and $\hat y_h$ have the error bounds for Poisson's equation:
$\|x-\hat x_h\|_{0,\Omega} + \|y-\hat y_h\|_{0,\Omega} \le C H^2\big(|x|_{2,\Omega} + |y|_{2,\Omega}\big).$
From (55), (87), Lemma 7 and Theorem 1, we obtain
$E_2(\hat B) \le \frac{C}{|S|^{\frac12}}\frac{J_M}{\sigma_0^2}|\phi|_{1,\Omega}\big(\|x-\hat x_h\|_{0,\Omega}+\|y-\hat y_h\|_{0,\Omega}\big) + \frac{C J_M^{\frac12}}{|S|^{\frac12}}H^2\|\phi\|_{2,\Omega} + \frac{C J_M^{\frac12}}{|S|^{\frac12}}\Big\{\frac{h^2}{\sigma_0^2}\|\phi\|_{2,\Omega}+\frac{h^2}{\sigma_0 H}\|\phi\|_{1,\Omega}\Big\} \le \frac{C J_M^{\frac12}}{|S|^{\frac12}}\Big\{\frac{J_M^{\frac12}H^2}{\sigma_0^2}|\phi|_{1,\Omega}\big(|x|_{2,\Omega}+|y|_{2,\Omega}\big) + H^2\|\phi\|_{2,\Omega} + \frac{h^2}{\sigma_0^2}\|\phi\|_{2,\Omega} + \frac{h^2}{\sigma_0 H}\|\phi\|_{1,\Omega}\Big\}.$
This is the desired bound (86) and completes the proof of Theorem 3. □
Since Theorem 3 is similar to Theorem 1, all conclusions of the error analysis for discontinuous images in Section 4.2 remain valid.

6. Numerical and Graphical Experiments

6.1. Binary Images

Consider binary images with the conversion
$\hat W_{ij} = \begin{cases} *, & \text{if } \Phi_{ij} \ge \frac12,\\ +, & \text{if } \frac{1}{10} \le \Phi_{ij} < \frac12,\\ \cdot\,, & \text{if } 0 < \Phi_{ij} < \frac{1}{10},\\ \text{blank}, & \text{if } \Phi_{ij} = 0,\end{cases}$
where $+$ and $\cdot$ are used to display greyness errors less than $\frac12$. We carry out the combination $\mathrm{CI^{\#}IM}$ for $T^{-1}T$ without nonlinear solutions, choosing the simple bi-quadratic model in [1] (p. 127). The greyness errors are listed in Table A1 and Table A2, and the graphical images are given in Figure A1 and Figure A2. In Table A1 and Table A2, $E$, $E_2$, $\Delta E$ and $\Delta E_2$ are given in (27)–(29). Also, $\Delta I_1$, $\Delta I_2$ and $\Delta I_3$ denote the numbers of pixel errors for $*$, $+$ and $\cdot$, respectively.

6.2. Harmonic and Poisson’s Models

Face recognition is an active research area due to its wide applications, and we cite some recent reports [21,22,23,24,25,26]. Let us choose Lena's image in [7,9], which consists of $256\times256$ pixels with 256 greyness levels. Lena's image was a standard model for testing in engineering. However, Lena's image is no longer allowed in IEEE publications after 1 April 2024. Readers may refer to our original papers if necessary. We add a darkest square boundary for Lena's image as in Figure A1 and choose the transformed boundary as that of the quadratic model, as in Figure A2; see Figures 12–18 in [7]. First, we use the $\mathrm{CI^{\#}IM}$ for $T^{-1}T$ with the harmonic transformation $T$, with the same boundary as the bi-quadratic transformation in [1] (p. 127). Next, consider the Poisson model (70)–(72), where the functions are given by
$f_1 = f_2 = 1 \ \text{ in } \Omega_0 \subset \Omega, \qquad f_1 = f_2 = 0 \ \text{ in } \Omega\setminus\Omega_0,$
and $\Omega_0$ is a smaller rectangle, $\Omega_0 = \{(\xi,\eta)\mid \frac13\le\xi\le\frac23,\ \frac13\le\eta\le\frac23\}$. The exterior boundary is also chosen as that of the quadratic model. In this paper, we employ the FDM. The numerical results of greyness and pixel errors are listed in Table A3, Table A4, Table A5 and Table A6 for $\mu = 0$ and $\mu = 1$. In Table A3, Table A4, Table A5 and Table A6, P.E. denotes the greyness-level errors in the 256 greyness-level system. Also, Tot.Pix. denotes the total number of non-empty pixels of the distorted images under $T$. In Table A5 and Table A6, the absolute errors $E$ and $E_2$ under the transformation $T$ are computed from (27), and the sequential errors $\Delta E$ and $\Delta E_2$ are computed from (28) and (29), respectively. For the cycle transformation $T^{-1}T$, the sequential errors $\Delta E$ and $\Delta E_2$ are defined similarly. The curves of $\Delta E$ under $T^{-1}T$ are drawn in Figure A3 from Table A5 and Table A6. We can see from Figure A3,
$\Delta E = O(N^{-2}) \quad \text{as } \mu = 1,$
$\Delta E = O(N^{-1.5}) \quad \text{as } \mu = 0.$
Equation (88) is consistent with (68) and Theorem 3 with $\mu = 1$, noting $\Delta E \le C\,\Delta E_2$. Equation (89) is consistent with $\Delta E = O_p(N^{-1.5})$ in probability in [7]. Note that greyness discontinuity does occur, and the error analysis in Section 4.2 is valid and important in practical applications.
When the face boundary changes, the face images change accordingly [8]. Face transformations may also be applied to moving pictures, such as those generated by Sora. The harmonic models solved by the FDM in this paper may likewise be applied to face recognition and face morphing (see [8]). In addition, we cite other reports [21,22,23,24,25,26,27,28] on this subject.
To close this section, let us give a remark.
Remark 4. 
In this remark, we relate the numerical algorithms to deep learning in AI. The numerical algorithms in Table 1 are the core of Chapters 2–8 of the new book [11]. The focus of this paper is error analysis for geometric transformations using Techniques I and II. The link between numerical algorithms and deep learning-based transformations may also be found in [11]. In [29,30,31], camera photos and marine image videos are used; they may suffer from perspective distortions. The projective and perspective transformations are discussed in Chapter 1 of [11]. Moreover, 3D videos and 3D geometric morphology are discussed in [32,33,34], while efficient numerical algorithms in 3D are explored in Chapter 9 of [11]. Feature-based approaches are discussed in [35,36,37], where line segments are basic features, as in handwritten characters. Such images are often captured by cameras and X-ray devices. When geometric distortions and illumination effects are involved, the numerical algorithms in this paper and in [11] can be used to remove them, so that correct pattern recognition can be made. Moreover, in [8,9] and Chapter 10 of [11], we provide more applicable examples in AI, such as handwriting recognition, hiding secret images, face image fusion, and morphing attack detection.

7. Concluding Remarks

Compared with our previous studies of digital images under geometric transformations (see [1,6,7,9]), this paper offers several novelties.
1. For the images under a nonlinear transformation T, the original SIM requires nonlinear solutions, although the number of nonlinear equations may be reduced to $O(M^2)$ in Li [6]. In Li et al. [9], we proposed interpolation techniques that entirely bypass nonlinear solutions for geometric images. The new numerical algorithms are called the improved SI#M for T and the combination CI#IM for $T^{-1}T$. This paper provides error analysis for the improved SI#M by using Techniques I and II.
2. The error bounds are given in Theorem 1 based on the Sobolev norms. The optimal convergence rates $O(H^2) + O(H/N^2)$ are obtained for the piecewise bilinear interpolations (μ = 1) and smooth images, where $H\,(\ll 1)$ is the mesh resolution of an optical scanner, and N is the division number of a pixel split into $N^2$ sub-pixels.
3. For the images in which the portion of discontinuity is minor, the error bounds are given in Theorem 2. For general cases of discontinuous images, the error bounds are given in Corollary 1 as $O(H^\beta) + O(H^\beta/N^2)$, $\beta \in (0,1)$ when μ = 1.
4. The combination CI#IM can be applied to the harmonic, Poisson and blending models in Section 5, and their approximate solutions $\hat{x}_h$ and $\hat{y}_h$ may be obtained by the FDM using SOR iteration. The programming of the FDM is much easier than that of the FEM and the finite volume method (FVM), and the CPU time is greatly reduced.
5. Numerical and graphical experiments are carried out in Section 6. Real images of 256 × 256 (and 504 × 631) pixels with 256 greyness levels are used to show the importance of the improved algorithms. The numerical experiments in Section 6 also support the error analysis.
6. New numerical algorithms for image geometric transformations in this paper are beneficial to computer vision, image processing and pattern recognition. Applications of numerical algorithms particularly to deep learning in AI are also given in Remark 4. Modern AI systems (e.g., ChatGPT, DeepSeek, Manus, Sora) rely on three core elements: data, models and algorithms. As emphasized in [8,10,11], efficient algorithms are critical for optimizing computational resources. Six combinations of numerical algorithms for the images under T 1 T are summarized in Table 1.

Author Contributions

Conceptualization, Z.-C.L.; Methodology, H.-T.H., Y.W. and C.Y.S.; Writing—original draft, Z.-C.L.; Writing—review & editing, H.-T.H.; Funding acquisition, Y.W. and C.Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

We are grateful to the reviewers for valuable comments and suggestions. We also express our thanks to Yi-Chiung Lin for the numerical and graphical computations in this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Glossary of Symbols

  • T: $(\xi,\eta) \to (x,y)$; the nonlinear transformation.
  • $T^{-1}$: the inverse transformation of T.
  • $T^{-1}T$: the cycle transformation consisting of T followed by $T^{-1}$.
I. 
Abbreviations of Numerical Algorithms
(1) 
Single methods for T or  T 1 :
  • SSM: Splitting-shooting method for T.
  • SIM: Splitting-integrating method for T 1 .
  • S ¯ S ¯ M : Advanced SSM for T having O ( 1 / N 2 ) .
  • S ¯ S ¯ # M : Advanced SSM for T without infinitesimal N.
  • SI * M : Improved SIM reducing nonlinear equations for T.
  • SI # M : Improved SIM bypassing nonlinear equations for T.
(2) 
Combinations for  T 1 T  (see Table 1):
  • CIIM: Combination of splitting-integrating methods.
  • CI * IM : Combination of improved SIM reducing nonlinear solutions.
  • CI # IM : Combination of improved SIM bypassing nonlinear equations.
  • CSIM: Combination of splitting-shooting-integrating methods.
  • C S ¯ IM and C S ¯ I ¯ M : Advanced CSIM as μ = 0 , 1 having O ( N 2 ) sequential errors.
  • C S ¯ # I ¯ # M : CSIM without infinite N.
  • CSSM: Combination of splitting-shooting methods.
II. 
Regular Transformation Assumptions
  • T: $(\xi,\eta) \to (x,y)$ is regular if $x(\xi,\eta) \in C^2(\Omega)$, $y(\xi,\eta) \in C^2(\Omega)$ and $0 < J_0 \le J \le J_M$, where J is the Jacobian determinant of $x(\xi,\eta)$ and $y(\xi,\eta)$. These conditions may lead to quasiuniform $\square^*_{ij,k\ell}$, where $\square^*_{ij,k\ell} = T\,\square_{ij,k\ell}$.
III. 
The integration representations of pixel greyness
  • $B_{IJ}$ (or $B^{\mathrm{Int}}_{IJ}$, $B^{M}_{IJ}$) $= \frac{1}{H^2}\iint_{\square_{IJ}} b(x,y)\,dx\,dy$.      $\hat{B}_{IJ} = \frac{1}{H^2}\iint_{\square_{IJ}} \hat{b}^{\mu}(x,y)\,dx\,dy$.
  • $\Phi_{ij} = \frac{1}{H^2}\iint_{\square_{ij}} \phi(\xi,\eta)\,d\xi\,d\eta$.       $\hat{\Phi}_{ij} = \frac{1}{H^2}\iint_{\square_{ij}} \hat{\phi}^{\mu}(\xi,\eta)\,d\xi\,d\eta$.
  • $\hat{\phi}^{\mu}\,(\approx\phi)$ and $\hat{b}^{\mu}\,(\approx b)$: the piecewise interpolation polynomials of order μ. (A sub-pixel quadrature sketch for these integrals is given at the end of this glossary.)
IV. 
Greyness Errors
(1) 
The absolute greyness errors:
  • $E = \frac{1}{I_{\max}} \sum_{IJ} \big|\tilde{B}_{IJ}(N_p) - B_{IJ}\big|$.             $E_2 = \Big\{ \sum_{IJ} \frac{[\tilde{B}_{IJ}(N_p) - B_{IJ}]^2}{I_{\max}} \Big\}^{1/2}$.
  • $E_\alpha = \Big\{ \sum_{IJ} \frac{|\tilde{B}_{IJ}(N_p) - B_{IJ}|^\alpha}{I_{\max}} \Big\}^{1/\alpha}$,   $\alpha \ge 1$.
  • $\tilde{B}_{IJ}(N_p)$: the numerical solutions.
  • $I_{\max}$: the total number of nonempty pixels of $B_{IJ}$.
  • h: $h = H/N$, $N = N_p = 2^p$, $p = 0, 1, \ldots$
(2) 
The sequential greyness errors for T:
  • $\Delta E = \frac{1}{I_{\max}} \sum_{IJ} \big|\tilde{B}_{IJ}(N_p) - \tilde{B}_{IJ}(N_{p-1})\big|$.      $\Delta E_2 = \Big\{ \sum_{IJ} \frac{[\tilde{B}_{IJ}(N_p) - \tilde{B}_{IJ}(N_{p-1})]^2}{I_{\max}} \Big\}^{1/2}$.
  • $\Delta E_\alpha = \Big\{ \sum_{IJ} \frac{|\tilde{B}_{IJ}(N_p) - \tilde{B}_{IJ}(N_{p-1})|^\alpha}{I_{\max}} \Big\}^{1/\alpha}$, $\alpha \ge 1$.
(3) 
The absolute and sequential errors for  T 1 :
  • $E = \Big\{ \sum_{ij} \frac{|\tilde{\Phi}_{ij}(N_p) - \Phi_{ij}|^\alpha}{I_{\max}} \Big\}^{1/\alpha}$.           $\Delta E = \Big\{ \sum_{ij} \frac{|\tilde{\Phi}_{ij}(N_p) - \tilde{\Phi}_{ij}(N_{p-1})|^\alpha}{I_{\max}} \Big\}^{1/\alpha}$.
  • $\tilde{\Phi}_{ij}(N_p)$: the numerical solution.
  • α: $\alpha \ge 1$, in particular α = 1 and 2.
  • $I_{\max}$: the total number of nonempty pixels of $\phi_{ij}$.
  • $N_p$: $N_p = 2^p$, $p = 0, 1, \ldots$
V. 
Pixels, pixel regions and pixel errors
  • $(i, j) = (\xi, \eta)$, $\xi = iH$, $\eta = jH$.      $(I, J) = (x, y)$, $x = IH$, $y = JH$.
  • $\square_{ij} = \big\{ (\xi,\eta) \,\big|\, (i-\tfrac12)H \le \xi < (i+\tfrac12)H,\ (j-\tfrac12)H \le \eta < (j+\tfrac12)H \big\}$.
  • $\square_{IJ} = \big\{ (x,y) \,\big|\, (I-\tfrac12)H \le x < (I+\tfrac12)H,\ (J-\tfrac12)H \le y < (J+\tfrac12)H \big\}$.
  • i j = ( ξ , η ) | i H ξ < ( i + 1 ) H , j H η < ( j + 1 ) H .
  • I J = ( x , y ) | I H x < ( I + 1 ) H , J H y < ( J + 1 ) H .
  • $\square_{ij,k\ell} = \big\{ (\xi,\eta) \,\big|\, (i-\tfrac12)H + (k-1)h \le \xi < (i-\tfrac12)H + kh,\ (j-\tfrac12)H + (\ell-1)h \le \eta < (j-\tfrac12)H + \ell h \big\}$.
  • $\square_{IJ,k\ell} = \big\{ (x,y) \,\big|\, (I-\tfrac12)H + (k-1)h \le x < (I-\tfrac12)H + kh,\ (J-\tfrac12)H + (\ell-1)h \le y < (J-\tfrac12)H + \ell h \big\}$.
  • $N_d(Z_{IJ}, Z^*_{IJ}) = 1$ if $Z_{IJ} \ne Z^*_{IJ}$, and $0$ otherwise.      $N_f(Z_{IJ}) = 1$ if $Z_{IJ}$ is nonempty, and $0$ otherwise.
  • $\Delta I = \sum_{IJ} N_d\big(Z_{IJ}(N_p), Z_{IJ}(N_{p-1})\big)$,    or $\Delta I = \sum_{ij} N_d\big(W_{ij}(N_p), W_{ij}(N_{p-1})\big)$,
           or $\Delta I = \sum_{ij} N_d\big(W_{ij}(N_p), W_{ij}\big)$.
  • $I_{\max} = \sum_{IJ} N_f\big(Z_{IJ}(N_p)\big)$,    or $I_{\max} = \sum_{ij} N_f\big(W_{ij}(N_p)\big)$.
VI. 
Sobolev and other spaces with norms
  • $\|v\|_{k,p,\Omega} = \Big( \sum_{|\alpha| \le k} \int_\Omega |D^\alpha v|^p \, dx \Big)^{1/p}$.         $|v|_{k,p,\Omega} = \Big( \sum_{|\alpha| = k} \int_\Omega |D^\alpha v|^p \, dx \Big)^{1/p}$.
  • $\|v\|_{k,\Omega} = \|v\|_{k,2,\Omega}$.      $|v|_{k,\Omega} = |v|_{k,2,\Omega}$.
  • $|v|_{k,\infty,\Omega} = \max_{|\alpha| = k} \max_{\Omega} |D^\alpha v|$.
  • $H^k(\Omega)$: the Sobolev space of the functions with $\|v\|_{k,\Omega} \le C$.
  • $C^k(\Omega)$: the space of the functions having k-order continuous derivatives.
  • $D^k(\Omega)$: the space of the functions having k-order bounded derivatives on $\Omega_i$,
           where $\Omega = \bigcup_{i=1}^{n} \Omega_i$, and n is finite.
VII. 
Other Notations
  • The Jacobian matrix and determinant: $P = \begin{pmatrix} x_\xi & y_\xi \\ x_\eta & y_\eta \end{pmatrix}$,   $J = \det(P)$.
  • $O(N^{-\alpha})$, α > 0: the convergence rates.
  • $O_p(N^{-\alpha})$, α > 0: the convergence rates in probability.
  • H: the mesh resolution of an optical scanner.
  • h: the boundary length of $\square_{ij,k\ell}$ and $\square_{IJ,k\ell}$, and $h = H/N$.
  • $\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$: the Laplace operator.
  • $\lfloor x \rfloor$: the floor function of x, i.e., the largest integer $\le x$.
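The pixel-greyness integrals in item III are evaluated in the splitting-integrating framework by dividing a pixel into $N^2$ sub-pixels of side $h = H/N$ and applying centroid-type quadrature to the interpolant of order μ. The following single-cell fragment illustrates the μ = 1 (bilinear) case; the function name and the single-cell setting are ours, for illustration only.

```python
import numpy as np

def cell_mean_greyness_bilinear(g00, g10, g01, g11, N=4):
    """Centroid (midpoint) quadrature over the N*N sub-pixels of one pixel cell
    whose four corners carry grey values g00, g10, g01, g11, using the bilinear
    interpolant (mu = 1).  Returns the mean greyness (1/H^2) * integral of the
    interpolant over the cell, which is independent of the cell size H."""
    # Sub-pixel centroids in local coordinates (s, t) in (0, 1).
    c = (np.arange(N) + 0.5) / N
    s, t = np.meshgrid(c, c, indexing="ij")
    phi = ((1 - s) * (1 - t) * g00 + s * (1 - t) * g10
           + (1 - s) * t * g01 + s * t * g11)
    # Mean of the centroid values = (1/H^2) * sum(phi) * h^2 with h = H/N.
    return float(phi.mean())
```

Because the centroid rule integrates a bilinear function exactly on every sub-pixel, the returned value equals the average of the four corner greys for any N ≥ 1; for a general grey function the quadrature error contributes h²-type terms of the kind appearing in the bound of Theorem 3.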
Table A1. The greyness and pixel errors of binary images under T 1 T by CI # IM without nonlinear iterations for μ = 0 .
T
Sequential Errors
NTotal Δ I 1 Δ I 2 Δ I 3 Δ E Δ E 2
1687-----
282480570 0.1323 0.2420
486505650 0.4754 × 10 1 0.8996 × 10 1
886202521 0.1704 × 10 1 0.3329 × 10 1
168671511 0.5961 × 10 2 0.1243 × 10 1
32867022 0.2047 × 10 2 0.4979 × 10 2
64867010 0.6364 × 10 3 0.1602 × 10 2
T 1 T
Sequential ErrorsAbsolute Errors
N Δ I 1 Δ I 2 Δ I 3 Δ E Δ E 2 Δ I E E 2
1-----20.53471270.07312
2172820.18010.204600.18010.2046
4021130.10260.104000.20670.1896
80225 0.3856 × 10 1 0.4334 × 10 1 00.21420.1906
160012 0.1186 × 10 1 0.1445 × 10 1 00.21520.1906
32001 0.3932 × 10 2 0.5192 × 10 2 00.21560.1907
64000 0.1224 × 10 2 0.1723 × 10 2 00.21560.1907
Table A2. The greyness and pixel errors of binary images under T 1 T by CI # IM without nonlinear iterations for μ = 1 .
T
Sequential Errors
NTotal Δ I 1 Δ I 2 Δ I 3 Δ E Δ E 2
1903-----
295715540.2594(-1)0.4096(-1)
49611940.592(-2)0.8293(-2)
89610300.1440(-2)0.1816(-2)
169620010.3611(-3)0.4303(-3)
329620000.8800(-4)0.9834(-4)
649620000.2256(-4)0.2472(-4)
T 1 T
Sequential ErrorsAbsolute Errors
N Δ I 1 Δ I 2 Δ I 3 Δ E Δ E 2 Δ I E E 2
1-----00.21340.1956
2028660.8706(-1)0.9920(-1)00.29640.2492
40110.1603(-1)0.1473(-1)00.31210.2609
80100.3762(-2)0.3226(-2)00.31580.2637
160000.9333(-3)0.7585(-3)00.31680.2644
320000.2316(-3)0.1843(-3)00.31700.2646
640000.5829(-4)0.4552(-4)00.31700.2646
Table A3. The greyness and pixel errors of Lena images under T 1 T by CI # IM for μ = 0 , where T is the harmonic model.
N T T 1 T
Tol . Sequential ErrorsSequential ErrorsAbsolute Errors
Pix . P . E . Δ E Δ E 2 P . E . Δ E Δ E 2 P . E . E E 2
164,324//////0.580.2284(-2)0.4552(-1)
265,6416.010.2354(-1)0.7550(-1)6.200.2426(-1)0.6356(-1)5.940.2324(-1)0.5591(-1)
466,2642.340.8655(-1)0.2833(-1)3.160.1232(-1)0.2867(-1)6.350.2487(-1)0.5271(-1)
866,5120.950.3102(-2)0.1071(-1)1.180.4506(-2)0.1117(-1)6.470.2519(-1)0.5240(-1)
1666,5760.380.1116(-2)0.4413(-2)0.410.1538(-2)0.4310(-2)6.490.2529(-1)0.5243(-1)
Table A4. The greyness and pixel errors of Lena images under T 1 T by CI # IM for μ = 1 , where T is the harmonic model.
N T T 1 T
Tol . Sequential ErrorsSequential ErrorsAbsolute Errors
Pix . P . E . Δ E Δ E 2 P . E . Δ E Δ E 2 P . E . E E 2
166,786//////6.440.2508(-1)0.5398(-1)
267,8151.150.4474(-2)0.1561(-1)2.490.9690(-2)0.2499(-1)8.590.3349(-1)0.6628(-1)
468,1770.310.1090(-2)0.4358(-2)0.470.1847(-2)0.4183(-2)8.980.3504(-1)0.6907(-1)
868,2890.100.2377(-3)0.1805(-2)0.110.4428(-3)0.1223(-2)9.070.3539(-1)0.6969(-1)
Table A5. The greyness and pixel errors of Lena images under T 1 T by CI # IM for μ = 0 , where T is the Poisson model with f 1 = f 2 = 1 on Ω 0 .
N T T 1 T
Tol . Sequential ErrorsSequential ErrorsAbsolute Errors
Pix . P . E . Δ E Δ E 2 P . E . Δ E Δ E 2 P . E . E E 2
169,284/////////
270,5585.670.2222(-1)0.7206(-1)7.840.3068(-1)0.9083(-1)8.080.3163(-1)0.9722(-1)
471,1452.260.8255(-2)0.2750(-1)3.450.1344(-1)0.3228(-1)8.180.3205(-1)0.9328(-1)
871,3660.980.3096(-2)0.1100(-1)1.250.4786(-2)0.1217(-1)8.280.3229(-1)0.9273(-1)
Table A6. The greyness and pixel errors of Lena images under T 1 T by CI # IM for μ = 1 , where T is the Poisson model with f 1 = f 2 = 1 on Ω 0 .
N T T 1 T
Tol . Sequential ErrorsSequential ErrorsAbsolute Errors
Pix . P . E . Δ E Δ E 2 P . E . Δ E Δ E 2 P . E . E E 2
171,429//////8.270.3225(-1)0.9324(-1)
272,3561.150.4449(-2)0.1713(-1)2.790.1089(-1)0.2734(-1)10.330.4033(-1)0.9884(-1)
472,7750.280.9620(-3)0.3144(-2)0.480.1887(-2)0.4070(-2)10.730.4189(-1)0.1005
872,8800.100.2388(-3)0.7165(-3)0.110.4417(-3)0.8919(-3)10.830.4228(-1)0.1009
Figure A1. The original binary image.
Figure A2. The distorted and restored images under T 1 T by CI # IM for μ = 0 and N = 1 (left) and for μ = 1 and N = 4 (right).
Figure A3. Error curves Δ E from Table A5 and Table A6 for images under T 1 T by CI # IM of the Poisson model with f 1 = f 2 = 1 on Ω 0 .

References

  1. Li, Z.C.; Bui, T.D.; Tang, Y.Y.; Suen, C.Y. Computer Transformation of Digital Images and Patterns; World Scientific Publishing: Singapore; Hackensack, NJ, USA; London, UK, 1989. [Google Scholar]
  2. Farin, G. Curves and Surfaces for Computer-Aided Geometric Design: A Practical Guide, 3rd ed.; Academic Press: Boston, MA, USA, 1993. [Google Scholar]
  3. Foley, J.D.; van Dam, A.; Feiner, S.K.; Hughes, J.F. Computer Graphics: Principles and Practice, 2nd ed.; Addison-Wesley: Reading, MA, USA, 1990. [Google Scholar]
  4. Rogers, D.F.; Adams, J.A. Mathematical Elements for Computer Graphics, 2nd ed.; McGraw Hill: New York, NY, USA, 1989. [Google Scholar]
  5. Su, B.Q.; Liu, D.Y. Computational Geometry: Curve and Surface Modeling; Academic Press: Boston, MA, USA, 1989. [Google Scholar]
  6. Li, Z.C. Splitting-integrating method for the image transformations T and T−1T. Comput. Math. Applic. 1996, 32, 39–60. [Google Scholar] [CrossRef]
  7. Li, Z.C.; Bai, Z.D. Probabilistic analysis on the splitting-shooting method for image transformations. J. Comp. Appl. Math. 1998, 94, 69–121. [Google Scholar] [CrossRef]
  8. Huang, H.T.; Li, Z.C.; Wei, Y.; Suen, C.Y. Face boundary formulation for harmonic models: Face image resembling. J. Imaging 2025, 11, 14. [Google Scholar] [CrossRef]
  9. Li, Z.C.; Wang, H.; Liao, S. Numerical algorithms for image geometric transformation and applications. IEEE Trans. Sys. Man Cybern Part B-Cybern. 2004, 34, 132–149. [Google Scholar] [CrossRef] [PubMed]
  10. DeepSeek-AI Group. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. arXiv 2025, arXiv:2501.12948v1. [Google Scholar] [CrossRef]
  11. Li, Z.C.; Huang, H.T.; Wei, Y.; Suen, C.Y. Algorithms for Geometric Image Transformations. World Sci. 2025; in press. [Google Scholar]
  12. Atkinson, K.E. An Introduction to Numerical Analysis, 2nd ed.; John Wiley & Sons: New York, NY, USA, 1989. [Google Scholar]
  13. Davis, P.J.; Rabinowitz, P. Methods of Numerical Integration, 2nd ed.; Academic Press Inc.: San Diego, CA, USA; New York, NY, USA, 1984. [Google Scholar]
  14. Castleman, K.R. Geometric transformations. In Microscope Image Processing, 2nd ed.; Merchant, F.A., Castleman, K.R., Eds.; Academic Press: London, UK; Elsevier: London, UK, 2023; pp. 47–54. [Google Scholar]
  15. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 4th ed.; Pearson: New York, NY, USA, 2017. [Google Scholar]
  16. Lakemond, N.; Holmberg, G.; Pettersson, A. Digital transformation in complex systems. IEEE Trans. Eng. Manag. 2024, 71, 192–204. [Google Scholar] [CrossRef]
  17. Pang, Y.; Lin, J.; Qin, T.; Chen, Z. Image-to-image translation: Methods and applications. IEEE Trans. Multimed. 2021, 24, 3859–3881. [Google Scholar] [CrossRef]
  18. Strang, G.; Fix, G.J. An Analysis of the Finite Element Method; Prentice-Hall Inc.: Englewood Cliffs, NJ, USA, 1973. [Google Scholar]
  19. Hageman, L.A.; Young, D.M. Applied Iterative Methods; Academic Press: New York, NY, USA, 1981. [Google Scholar]
  20. Winnicki, I.; Jasinski, J.; Pietrek, S.; Kroszczynski, K. The mathematical characteristic of the Laplace contour filters used in digital image processing. The third order filters. Adv. Geod. Geoinf. 2022, 71, e23. [Google Scholar] [CrossRef]
  21. Guo, G.; Zhang, N. A survey on deep learning based face recognition. Comput. Vis. Image Underst. 2019, 189, 102805. [Google Scholar] [CrossRef]
  22. Indrawal, D.; Sharma, A. Multi-module convolutional neural network based optimal face recognition with minibatch optimization. Int. J. Image Graph. Signal Process. 2022, 3, 32–46. [Google Scholar] [CrossRef]
  23. Scherhag, U.; Rathgeb, C.; Merkle, J.; Busch, C. Deep face representations for differential morphing attack detection. IEEE Trans. Inf. Forensics Secur. 2020, 15, 3625–3639. [Google Scholar] [CrossRef]
  24. You, M.; Han, X.; Xu, Y.; Li, L. Systematic evaluation of deep face recognition methods. Neurocomputing 2020, 388, 144–156. [Google Scholar] [CrossRef]
  25. Tuncer, T.; Dogan, S.; Subasi, A. Automated facial expression recognition using novel textural transformation. J. Ambient Intell. Human Comput. 2023, 14, 9439–9449. [Google Scholar] [CrossRef]
  26. Venkatesh, S.; Ramachandra, R.; Raja, K.; Busch, C. Face morphing attack generation and detection: A comprehensive survey. IEEE Trans. Technol. Soc. 2021, 2, 128–145. [Google Scholar] [CrossRef]
  27. Aloraibi, A.Q. Image morphing techniques: A review. Technium 2023, 9, 41–53. [Google Scholar] [CrossRef]
  28. Li, J.; Zhou, S.K.; Chellappa, R. Appearance modeling using a geometric transform. IEEE Trans. Image Process. 2009, 18, 889–902. [Google Scholar] [PubMed]
  29. Cai, J.; Ding, S.; Zhang, Q.; Liu, R.; Zeng, D.; Zhou, L. Broken ice circumferential crack estimation via image techniques. Ocean Eng. 2022, 259, 111735. [Google Scholar] [CrossRef]
  30. Yao, F.; Zhang, H.; Gong, Y.; Zhang, Q.; Xiao, P. A study of enhanced visual perception of marine biology images based on diffusion-GAN. Complex Intell. Syst. 2025, 11, 1–20. [Google Scholar] [CrossRef]
  31. Zhou, L.; Cai, J.; Ding, S. The identification of ice floes and calculation of sea ice concentration based on a deep learning method. Remote. Sens. 2023, 15, 2663. [Google Scholar] [CrossRef]
  32. Wang, B.; Chen, W.; Qian, J.; Feng, S.; Chen, Q.; Zuo, C. Single-shot super-resolved fringe projection profilometry (SSSR-FPP): 100,000 frames-per-second 3D imaging with deep learning. Light. Sci. Appl. 2025, 14, 1–13. [Google Scholar] [CrossRef]
  33. Yu, Y.; Chen, F.; Gu, Y.; Zhang, Y.; Cui, C.; Liu, J.; Qiu, Z.; Wang, P. Optimization of 3D reconstruction of granular systems based on refractive index matching scanning. Opt. Laser Technol. 2025, 186, 112662. [Google Scholar] [CrossRef]
  34. Xu, X.; Fu, X.; Zhao, H.; Liu, M.; Xu, A.; Ma, Y. Three-dimensional reconstruction and geometric morphology analysis of lunar small craters within the patrol range of the Yutu-2 rover. Remote. Sens. 2023, 15, 4251. [Google Scholar] [CrossRef]
  35. Sun, Q.; Wang, H.; Liu, W.; Zou, J.; Ye, F.; Li, Y. An improved stereo visual-inertial SLAM algorithm based on point-and-line features for subterranean environments. IEEE Trans. Veh. Technol. 2025, 74, 3925–3940. [Google Scholar] [CrossRef]
  36. Zhao, D.; Zhou, H.; Chen, P.; Hu, Y.; Ge, W.; Dang, Y.; Liang, R. Design of forward-looking sonar system for real-time image segmentation with light multiscale attention net. IEEE Trans. Instrum. Meas. 2024, 73, 4501217. [Google Scholar] [CrossRef]
  37. Li, M.; Jia, T.; Wang, H.; Ma, B.; Lu, H.; Lin, S.; Cai, D.; Chen, D. AO-DETR: Anti-Overlapping DETR for X-Ray Prohibited Items Detection. IEEE Trans. Neural Netw. Learn. Syst. 2024; early access. [Google Scholar] [CrossRef]
Figure 1. Schematic steps of numerical methods for image transformations.
Figure 2. The inverse transformations at the vertices of I J .
Figure 3. The inverse transformation at ( I , J ) in X O Y .
Figure 4. The square * and i , j 2 × 2 in 3 × 3 nodes.
Figure 5. The transformation of arbitrary domains.
Figure 6. Two partitions in the FDM.
Table 1. Comparisons of six combinations for T 1 T with μ = 1 in their characteristics, accuracy and convergence.
Combinations | References | Solve Nonlinear Eqs. | Increasing N | Sequential Errors | Absolute Errors (Continuous Greyness) | Absolute Errors (Discontinuous Greyness)
CSIM | [1,7] | No | Yes | $O(1/N)$ and $O_p(1/N^{1.5})$ | $O(H^2)$ | /
CS̄IM | [11] (Chapter 7) | No | Yes | $O(1/N^2)$ | $O(H^2)$ | /
CS̄#Ī#M | [11] (Chapter 8) | No | Not but N 0 | / | $O(H)$ | $O(H)$
CIIM | [6] | Yes | Yes |  | $O(H^2)$ | /
CI*IM | [6] | Yes | Yes | $O(1/N^2)$ | $O(H^2)$ | /
CI#IM | [9] and this paper | No | Yes |  | $O(H^2)$ | $O(H^{\alpha})$, $0 < \alpha < 1$