Article

The Stability of Isometry by Singular Value Decomposition

1 Nano Convergence Technology Research Institute, School of Semiconductor & Display Technology, Hallym University, Chuncheon 24252, Republic of Korea
2 Ilsong Liberal Art Schools (Mathematics), Hallym University, Chuncheon 24252, Republic of Korea
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(15), 2500; https://doi.org/10.3390/math13152500
Submission received: 27 June 2025 / Revised: 30 July 2025 / Accepted: 1 August 2025 / Published: 3 August 2025

Abstract

Hyers and Ulam considered the problem of whether there is a true isometry that approximates an $\varepsilon$-isometry defined on a Hilbert space, with a stability constant $10\varepsilon$. Subsequently, Fickett considered the same question on a bounded subset of the $n$-dimensional Euclidean space $\mathbb{R}^n$, obtaining a stability constant of $27\varepsilon^{1/2^n}$, and Vestfrid gave a stability constant of $27n\varepsilon$ for bounded subsets. In this paper, by applying singular value decomposition, we improve the previous stability constants to $C\sqrt{n}\,\varepsilon$ for bounded subsets, where the constant $C$ depends on the approximate linearity parameter $K$, which is defined later.
MSC:
46C99; 39B82; 39B62; 46B04

1. Introduction

Suppose $(E, \langle \cdot, \cdot \rangle)$ and $(F, \langle \cdot, \cdot \rangle)$ are real (or complex) Hilbert spaces and $D$ is a nonempty subset of $E$. Given a constant $\varepsilon > 0$, a function $f : D \to F$ is said to be an $\varepsilon$-isometry if and only if $f$ satisfies the following inequality:
$$\bigl|\, \|f(x) - f(y)\| - \|x - y\| \,\bigr| \le \varepsilon \quad (1)$$
for all $x, y \in D$, where $\|\cdot\|$ is the norm generated by the inner product $\langle \cdot, \cdot \rangle$, i.e., $\|x\| = \sqrt{\langle x, x \rangle}$.
Hyers and Ulam [1] proved the stability of the surjective isometry defined on the 'whole' space by using properties of the inner product of Hilbert space:
For every surjective $\varepsilon$-isometry $f : E \to E$ that satisfies $f(0) = 0$, there exists a surjective isometry $I : E \to E$ satisfying
$$\|f(x) - I(x)\| \le 10\varepsilon \quad (2)$$
for all $x \in E$.
In 1978, Gruber [2] proved that if the Hyers–Ulam theorem holds for all surjective $\varepsilon$-isometries $f : E \to F$, where $E$ and $F$ are real Banach spaces, then $10\varepsilon$ can be replaced by $5\varepsilon$ in inequality (2) (see also Gevirtz [3]). Finally, Omladič and Šemrl [4] succeeded in replacing $10\varepsilon$ with $2\varepsilon$ in inequality (2) and showed that the resulting upper bound is sharp.
For the case when the domain of the relevant $\varepsilon$-isometries is bounded, Fickett [5] answered the question as to whether there exists a true isometry that approximates the $\varepsilon$-isometry defined on a bounded set. We now introduce Fickett's theorem:
Assume that $n \ge 2$ is an integer, $D$ is a bounded nonempty subset of $\mathbb{R}^n$, and $\varepsilon > 0$ is any constant. If $f : D \to \mathbb{R}^n$ is an $\varepsilon$-isometry, then there is an isometry $I : D \to \mathbb{R}^n$ that satisfies
$$\|f(x) - I(x)\| \le 27\varepsilon^{1/2^n} \quad (3)$$
for all $x \in D$.
Comparing (2) and (3), we see that as $\varepsilon$ approaches 0, the convergence rate on a bounded subset of a high-dimensional space is slower than the convergence rate on the (whole) Hilbert space. Vestfrid [6] proved that for every $\varepsilon$-isometry $f : B_1(0) \to \mathbb{R}^n$, there is an isometry $I : B_1(0) \to \mathbb{R}^n$ such that $\|f(x) - I(x)\| \le C_n \varepsilon$ for all $x \in B_1(0)$, where $C_n = 27n$. Very recently, Choi and Jung were able to improve the upper bound of inequality (3) to $C_n = 16n + 8\sqrt{n}$, where we set $d = 1$ (see [7], Theorem 3):
Given an integer $n \ge 4$, let $D$ be a bounded subset of the $n$-dimensional Euclidean space $\mathbb{R}^n$ such that $\{0, e_1, e_2, \ldots, e_n\} \subset D \subset B_d(0)$ for some $d \ge 1$. If $f : D \to \mathbb{R}^n$ is an $\varepsilon$-isometry with $f(0) = 0$, where $\varepsilon$ is some real number with $0 < \varepsilon < \frac{1}{8n}$, then there exists a linear isometry $I : \mathbb{R}^n \to \mathbb{R}^n$ such that
$$\|f(x) - I(x)\| \le \bigl( (10d + 6)n + 8d\sqrt{n} \bigr)\varepsilon$$
for all $x \in D$.
So, as $n \to \infty$, the $C_n$ of the results in [6,7] grows at the rate $O(n)$, while the $C_n$ of our result grows at the rate $O(\sqrt{n})$. As we can see, the rate at which the upper bound of the relevant inequality varies with the dimension $n$ of the space is as important as the speed with which the upper bound converges to 0 as $\varepsilon$ approaches 0.
In this paper, we focus on finding a sufficient condition to obtain a convergence rate proportional to $\sqrt{n}$ by applying singular value decomposition. Indeed, we prove the stability of isometry defined on the closed ball $B_1(0)$, where $C_n = (6K + 8)\sqrt{n}$, under an additional condition of approximate linearity with the constant $K$ (whose definition is given in Section 4). Throughout this paper, $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space, where $n$ is a fixed positive integer.

2. Change of Basis

Assume that $\mathcal{B} = \{b_1, b_2, \ldots, b_n\}$ is a basis for the $n$-dimensional Euclidean space $\mathbb{R}^n$ and $x \in \mathbb{R}^n$, where every $b_i$ and $x$ are written as column vectors. The $\mathcal{B}$-coordinates of $x$ are the weights $c_1, c_2, \ldots, c_n$ such that
$$x = c_1 b_1 + c_2 b_2 + \cdots + c_n b_n, \quad (4)$$
where $c_1, c_2, \ldots, c_n$ are uniquely determined real numbers that depend only on the choice of $x \in \mathbb{R}^n$. We use the symbol $[x]_{\mathcal{B}}$ to denote the $\mathcal{B}$-coordinates of $x$. More precisely, if $x$ is represented by (4), we have
$$[x]_{\mathcal{B}} = (c_1, c_2, \ldots, c_n)^{tr}, \quad (5)$$
where $(c_1, c_2, \ldots, c_n)^{tr}$ denotes the transpose of the row vector $(c_1, c_2, \ldots, c_n)$, i.e., $[x]_{\mathcal{B}}$ is a column vector. For simplicity, we write $x$ instead of $[x]_{\mathcal{E}}$, where $\mathcal{E} = \{e_1, e_2, \ldots, e_n\}$ denotes the standard basis for $\mathbb{R}^n$.
We now denote by $[\mathbb{R}^n]_{\mathcal{B}}$ the $n$-dimensional real vector space $\mathbb{R}^n$ in the $\mathcal{B}$-coordinate system. That is,
$$[\mathbb{R}^n]_{\mathcal{B}} = \bigl\{ [x]_{\mathcal{B}} : x \in \mathbb{R}^n \bigr\}.$$
Let us define the $n \times n$ matrix $C_{\mathcal{B}}$ by
$$C_{\mathcal{B}} = (b_1 \; b_2 \; \cdots \; b_n).$$
Then the vector equation, Equation (4), is equivalent to
$$x = C_{\mathcal{B}} [x]_{\mathcal{B}}. \quad (6)$$
The square matrix $C_{\mathcal{B}}$ is called the change-of-coordinates matrix from $\mathcal{B}$ to the standard basis $\mathcal{E}$. Since the columns of $C_{\mathcal{B}}$ form a basis for $\mathbb{R}^n$, $C_{\mathcal{B}}$ is invertible. Thus, Equation (6) is equivalent to
$$[x]_{\mathcal{B}} = C_{\mathcal{B}}^{-1} x, \quad (7)$$
where $C_{\mathcal{B}}^{-1}$ denotes the inverse of $C_{\mathcal{B}}$. In other words, $C_{\mathcal{B}}^{-1}$ converts the $\mathcal{E}$-coordinates of $x$ into its $\mathcal{B}$-coordinates.
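As a quick numerical illustration of the conversions (6) and (7) (not part of the paper's argument), the following Python sketch, with an arbitrarily chosen basis $\mathcal{B}$ of $\mathbb{R}^2$, converts a vector to $\mathcal{B}$-coordinates and back:

```python
import numpy as np

# A hypothetical basis B = {b1, b2} for R^2; any linearly independent pair works.
b1 = np.array([1.0, 1.0])
b2 = np.array([1.0, -1.0])
C_B = np.column_stack([b1, b2])     # change-of-coordinates matrix C_B = (b1 b2)

x = np.array([3.0, 1.0])
x_B = np.linalg.solve(C_B, x)       # [x]_B = C_B^{-1} x, Equation (7)

assert np.allclose(C_B @ x_B, x)    # Equation (6): x = C_B [x]_B
print(x_B)                          # [2. 1.], i.e., x = 2*b1 + 1*b2
```

Solving the linear system instead of forming $C_{\mathcal{B}}^{-1}$ explicitly is the numerically preferable way to apply (7).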

3. Singular Value Decomposition

A square matrix $A$ is said to be diagonalizable if it is similar to a diagonal matrix, that is, if there is an invertible matrix $P$ and a diagonal matrix $\Lambda$ such that $A = P \Lambda P^{-1}$. However, there are matrices that cannot be diagonalized; fortunately, a factorization $A = Q \Lambda P^{-1}$, with $\Lambda$ a (possibly rectangular) diagonal matrix, is possible for any $m \times n$ matrix $A$, where $P$ and $Q$ are suitable invertible matrices. A special factorization of this kind, the so-called singular value decomposition, is one of the most useful matrix factorizations in applied linear algebra.
Suppose $A$ is an arbitrary $m \times n$ matrix whose entries are all real numbers, i.e., $A \in \mathbb{R}^{m \times n}$. We use the symbol $A^{tr}$ to denote the transpose of the matrix $A$. We note that $A^{tr} A$ is symmetric and can therefore be orthogonally diagonalized. Let $\{v_1, v_2, \ldots, v_n\}$ be an orthonormal basis for $\mathbb{R}^n$ consisting of eigenvectors of $A^{tr} A$, and let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the corresponding eigenvalues of $A^{tr} A$. Then we have
$$\|A v_i\|^2 = (A v_i)^{tr} A v_i = v_i^{tr} (A^{tr} A v_i) = v_i^{tr} (\lambda_i v_i) = \lambda_i \quad (8)$$
for every $i \in \{1, 2, \ldots, n\}$, where $\|\cdot\|$ denotes the Euclidean norm on $\mathbb{R}^m$. Therefore, all eigenvalues of $A^{tr} A$ are nonnegative. By renumbering if necessary, we can assume that the eigenvalues are arranged in decreasing order:
$$\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n \ge 0.$$
The singular values of $A$ are the square roots of the eigenvalues of $A^{tr} A$, denoted by $\sigma_1, \sigma_2, \ldots, \sigma_n$, and they are arranged in decreasing order. That is,
$$\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0, \quad (9)$$
where $\sigma_i = \sqrt{\lambda_i}$ for every $i \in \{1, 2, \ldots, n\}$.
According to the singular value decomposition, for every $A \in \mathbb{R}^{n \times n}$, there is an $n \times n$ orthogonal matrix $U = (u_1 \; u_2 \; \cdots \; u_n)$, a diagonal matrix $\Sigma$ whose diagonal entries are the singular values of $A$, namely $\sigma_1, \sigma_2, \ldots, \sigma_n$, and an $n \times n$ orthogonal matrix $V = (v_1 \; v_2 \; \cdots \; v_n)$ such that
$$A = U \Sigma V^{tr}, \quad (10)$$
where $u_i$ and $v_i$ are column vectors with $n$ components each, and the column vectors $\{v_1, v_2, \ldots, v_n\}$ of $V$ form an orthonormal basis for $\mathbb{R}^n$ consisting of eigenvectors of $A^{tr} A$.
The SVD is a way of carrying out a generalized diagonalization, and the reason we use this method is that it dramatically reduces the number of variables we have to consider, which allows us to obtain good estimates.
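The factorization (10) and the relation $\sigma_i = \sqrt{\lambda_i}$ can be verified numerically. The following Python sketch (an illustration only, using NumPy's `linalg.svd` on a random matrix) checks both facts:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# NumPy returns A = U @ diag(s) @ Vt with the singular values s already
# arranged in decreasing order, as in (9).
U, s, Vt = np.linalg.svd(A)

assert np.allclose(U @ np.diag(s) @ Vt, A)        # the factorization (10)
assert np.all(s[:-1] >= s[1:])                    # sigma_1 >= ... >= sigma_n >= 0
lam = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]  # eigenvalues of A^tr A, decreasing
assert np.allclose(s**2, lam)                     # sigma_i = sqrt(lambda_i), as in (8)
```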

4. Stability of Isometries on a Bounded Domain

As before, let $\mathcal{E} = \{e_1, e_2, \ldots, e_n\}$ be the standard basis for the $n$-dimensional Euclidean space $\mathbb{R}^n$, where $n > 0$. Let $B_d(0)$ represent the closed ball in $\mathbb{R}^n$ with radius $d \ge 1$ and center at the origin of $\mathbb{R}^n$, i.e.,
$$B_d(0) = \bigl\{ x \in \mathbb{R}^n : \|x\| \le d \bigr\}, \quad (11)$$
where $\|\cdot\|$ is the Euclidean norm generated by the inner product $\langle \cdot, \cdot \rangle$ with $\langle x, y \rangle = \sum_{i=1}^n x_i y_i$ for $x = \sum_{i=1}^n x_i e_i$ and $y = \sum_{i=1}^n y_i e_i$, i.e., $\|x\| = \sqrt{\langle x, x \rangle}$.
In this section, let $f : B_d(0) \to \mathbb{R}^n$ be an $\varepsilon$-isometry that satisfies $f(0) = 0$. We define the $n \times n$ matrix $A$ by
$$A = \bigl( f(e_1) \; f(e_2) \; \cdots \; f(e_n) \bigr), \quad (12)$$
where every $f(e_i)$ is written as a column vector. We note that $f(e_i) = A e_i$ for every $i \in \{1, 2, \ldots, n\}$.
According to the singular value decomposition (10), there is an $n \times n$ orthogonal matrix $U = (u_1 \; u_2 \; \cdots \; u_n)$, a diagonal matrix $\Sigma$ whose diagonal entries are the singular values $\sigma_1, \sigma_2, \ldots, \sigma_n$ of $A$, and an $n \times n$ orthogonal matrix $V = (v_1 \; v_2 \; \cdots \; v_n)$ such that $A = U \Sigma V^{tr}$. Note that the sets of column vectors $\mathcal{U} := \{u_1, u_2, \ldots, u_n\}$ and $\mathcal{V} := \{v_1, v_2, \ldots, v_n\}$ each form an orthonormal basis for $\mathbb{R}^n$.
Gevirtz showed in [3] that every surjective $\varepsilon$-isometry $f : E \to F$ behaves asymptotically like a linear surjective isometry. When $D$ is a proper subset of a real Banach space $E$, this result does not imply that each $\varepsilon$-isometry $f : D \to F$ behaves asymptotically like a linear isometry. Nevertheless, this result of Gevirtz ensures the reasonableness of the following definition.
Definition 1. 
A function $f : B_d(0) \to \mathbb{R}^n$ will be called approximately linear with constant $K$ if and only if there is a constant $K \ge 0$ such that
$$\|f(x) - A x\| \le K\varepsilon$$
for all $x \in B_d(0)$, where $\varepsilon$ and $A$ are given in (1) and (12), respectively. In particular, a linear function $f : B_d(0) \to \mathbb{R}^n$ is approximately linear with the constant $K = 0$.
In the following two examples, we will construct functions that are approximately linear and functions that are not.
Example 1. 
We define a function $f : B_1(0) \to \mathbb{R}^2$ by
$$f(x) = \begin{pmatrix} x_1 - \varepsilon \\ x_2 + \varepsilon \end{pmatrix}$$
for all $x = (x_1, x_2)^{tr} \in B_1(0)$. Since
$$f(e_1) = \begin{pmatrix} 1 - \varepsilon \\ \varepsilon \end{pmatrix} \quad \text{and} \quad f(e_2) = \begin{pmatrix} -\varepsilon \\ 1 + \varepsilon \end{pmatrix},$$
we have
$$A = \bigl( f(e_1) \; f(e_2) \bigr) = \begin{pmatrix} 1 - \varepsilon & -\varepsilon \\ \varepsilon & 1 + \varepsilon \end{pmatrix}.$$
Moreover, we obtain
$$\|f(x) - A x\| = \left\| \begin{pmatrix} x_1 - \varepsilon \\ x_2 + \varepsilon \end{pmatrix} - \begin{pmatrix} x_1 - \varepsilon(x_1 + x_2) \\ x_2 + \varepsilon(x_1 + x_2) \end{pmatrix} \right\| = \sqrt{2}\,|1 - x_1 - x_2|\,\varepsilon \le (2 + \sqrt{2})\varepsilon$$
for all $x \in B_1(0)$. Therefore, we see that $f$ is approximately linear with the constant $K = 2 + \sqrt{2}$.
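The bound of Example 1 can be checked numerically. In the Python sketch below (an illustration only, with an arbitrarily chosen $\varepsilon$), we sample the closed unit ball and verify that $\|f(x) - Ax\| \le (2 + \sqrt{2})\varepsilon$:

```python
import numpy as np

eps = 1e-3
def f(x):                                    # Example 1: f(x) = (x1 - eps, x2 + eps)
    return np.array([x[0] - eps, x[1] + eps])

# A has columns f(e1) and f(e2), as in (12)
A = np.column_stack([f(np.array([1.0, 0.0])), f(np.array([0.0, 1.0]))])

rng = np.random.default_rng(1)
pts = rng.standard_normal((10_000, 2))
pts /= np.maximum(np.linalg.norm(pts, axis=1, keepdims=True), 1.0)  # map samples into B_1(0)
worst = max(np.linalg.norm(f(x) - A @ x) for x in pts)
assert worst <= (2 + np.sqrt(2)) * eps       # so K = 2 + sqrt(2) suffices
```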
Example 2. 
As a counterexample, we let $K = 1$ and define a function $f : B_1(0) \to \mathbb{R}^n$ by
$$f(x) = \begin{pmatrix} x_1 + \varepsilon \\ x_2 + \varepsilon \\ \vdots \\ x_n + \varepsilon \end{pmatrix}$$
for all $x = (x_1, x_2, \ldots, x_n)^{tr} \in B_1(0)$, where the closed unit ball $B_1(0)$ is defined by (11). Then we have
$$A = \bigl( f(e_1) \; f(e_2) \; \cdots \; f(e_n) \bigr) = \begin{pmatrix} 1 + \varepsilon & \varepsilon & \cdots & \varepsilon \\ \varepsilon & 1 + \varepsilon & \cdots & \varepsilon \\ \vdots & \vdots & \ddots & \vdots \\ \varepsilon & \varepsilon & \cdots & 1 + \varepsilon \end{pmatrix}.$$
Furthermore, we have
$$\|f(x) - A x\| = \left\| \begin{pmatrix} \varepsilon(1 - x_1 - x_2 - \cdots - x_n) \\ \varepsilon(1 - x_1 - x_2 - \cdots - x_n) \\ \vdots \\ \varepsilon(1 - x_1 - x_2 - \cdots - x_n) \end{pmatrix} \right\| = \sqrt{n}\,|1 - x_1 - x_2 - \cdots - x_n|\,\varepsilon$$
for all $x \in B_1(0)$, and the Lagrange multiplier method shows that the right-hand side attains its maximum $\sqrt{n}(1 + \sqrt{n})\varepsilon$ on $B_1(0)$ at $x = -\frac{1}{\sqrt{n}}(1, 1, \ldots, 1)^{tr}$. Therefore, there exists an $x = (x_1, x_2, \ldots, x_n)^{tr} \in B_1(0)$ that does not satisfy the condition for the approximate linearity of $f$ with the constant $K = 1$.
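A short numerical check of Example 2 (an illustration only; $n$ and $\varepsilon$ are chosen arbitrarily): at the maximizer $x^* = -\frac{1}{\sqrt{n}}(1, \ldots, 1)^{tr}$, the deviation $\|f(x) - Ax\|$ exceeds $\varepsilon$, so $K = 1$ fails:

```python
import numpy as np

n, eps = 3, 1e-3
def f(x):                                          # Example 2: f(x) = x + eps*(1, ..., 1)
    return x + eps * np.ones(n)

A = np.column_stack([f(e) for e in np.eye(n)])     # = I + eps*(all-ones matrix), as in (12)

# f(x) - A x = eps*(1 - x1 - ... - xn)*(1, ..., 1), so the norm is
# sqrt(n)*|1 - x1 - ... - xn|*eps, maximized on B_1(0) at x*:
x_star = -np.ones(n) / np.sqrt(n)
val = np.linalg.norm(f(x_star) - A @ x_star)
assert np.isclose(val, np.sqrt(n) * (1 + np.sqrt(n)) * eps)
assert val > 1 * eps                               # f is not approximately linear with K = 1
```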
Theorem 1. 
Assume that $n$ is a positive integer, $d$ is a real number that is not less than 1, and $K$ is a nonnegative real number. Suppose $f : B_d(0) \to \mathbb{R}^n$ is a function that satisfies the following three conditions:
(i)
$f(0) = 0$;
(ii)
$f$ is approximately linear with the constant $K$;
(iii)
$f$ is an $\varepsilon$-isometry; more precisely, it satisfies inequality (1) for all $x, y \in B_d(0)$ and for some constant $\varepsilon$ with $0 < \varepsilon < \frac{1}{K+1}$.
Then there exists an isometry $J : B_d(0) \to \mathbb{R}^n$ such that
$$\|f(x) - J(x)\| \le \bigl( (2d + 4)(K + 1) + d + 1 \bigr)\sqrt{n}\,\varepsilon$$
for all $x \in B_d(0)$.
Proof. 
Let $C_{\mathcal{U}}$ be the change-of-coordinates matrix from the basis (coordinate system) $\mathcal{U}$ to the standard basis $\mathcal{E}$. Similarly, let $C_{\mathcal{V}}$ be the change-of-coordinates matrix from $\mathcal{V}$ to $\mathcal{E}$. On account of (5), (6), (7), (10), and (12), it holds that $C_{\mathcal{U}} = U = (u_1 \; u_2 \; \cdots \; u_n)$, $C_{\mathcal{V}} = V = (v_1 \; v_2 \; \cdots \; v_n)$, and
$$[f(e_i)]_{\mathcal{U}} = U^{tr} f(e_i) = U^{tr} A e_i = U^{tr} A V [e_i]_{\mathcal{V}} = \Sigma [e_i]_{\mathcal{V}} \quad (13)$$
for any $i \in \{1, 2, \ldots, n\}$. Let $[\mathbb{R}^n]_{\mathcal{U}}$ be the set of $\mathcal{U}$-coordinates of all $x \in \mathbb{R}^n$, i.e.,
$$[\mathbb{R}^n]_{\mathcal{U}} = \bigl\{ [x]_{\mathcal{U}} : x \in \mathbb{R}^n \bigr\}.$$
From a set-theoretic perspective alone, we can see that $[\mathbb{R}^n]_{\mathcal{U}} = \mathbb{R}^n$. For simplicity, we write $\mathbb{R}^n$ instead of $[\mathbb{R}^n]_{\mathcal{E}}$.
Considering (13), we define the linear function $\tilde{f} : B_d(0) \to [\mathbb{R}^n]_{\mathcal{U}}$ by
$$\tilde{f}(x) = \Sigma [x]_{\mathcal{V}} \quad (14)$$
for any $x \in B_d(0)$. Therefore, we obtain
$$\tilde{f}(v_i) = \Sigma [v_i]_{\mathcal{V}} = (0, \ldots, 0, \underbrace{\sigma_i}_{i\text{th}}, 0, \ldots, 0)_{\mathcal{U}}, \quad (15)$$
i.e., the $\mathcal{U}$-coordinates of $\tilde{f}(v_i)$ are $(0, \ldots, 0, \sigma_i, 0, \ldots, 0)$ for all $i \in \{1, 2, \ldots, n\}$. (From now on, all vectors will be expressed as row vectors for convenience.) Also, comparing (13) and (14), we have
$$\tilde{f}(e_i) = [f(e_i)]_{\mathcal{U}}$$
for any $i \in \{1, 2, \ldots, n\}$.
Since the orthogonal matrix $U^{tr}$ preserves the Euclidean norm of each vector in $\mathbb{R}^n$, it follows from (7), (ii), and (14) that
$$\|\tilde{f}(x) - [f(x)]_{\mathcal{U}}\| = \|\Sigma V^{tr} x - U^{tr} f(x)\| = \|U^{tr}(U \Sigma V^{tr} x - f(x))\| = \|A x - f(x)\| \le K\varepsilon \quad (16)$$
for all $x \in B_d(0)$ and for the constant $K \ge 0$ given by (ii).
For any $x \in B_d(0)$ with $[x]_{\mathcal{V}} = (x_1, x_2, \ldots, x_n)$, let $[f(x)]_{\mathcal{U}} = (x_1', x_2', \ldots, x_n')$, where $[x]_{\mathcal{V}}$ and $[f(x)]_{\mathcal{U}}$ denote the $\mathcal{V}$-coordinates of $x$ and the $\mathcal{U}$-coordinates of $f(x)$, respectively. Since $x = C_{\mathcal{V}} [x]_{\mathcal{V}} = V [x]_{\mathcal{V}}$ and $V$ and $V^{tr}$ are orthogonal matrices (and since $f(x) = U [f(x)]_{\mathcal{U}}$ and $U$ and $U^{tr}$ are orthogonal), it is obvious that
$$\|[x]_{\mathcal{V}}\| = \|x\| = \Bigl( \sum_{i=1}^n x_i^2 \Bigr)^{1/2} \quad \text{and} \quad \|[f(x)]_{\mathcal{U}}\| = \|f(x)\| = \Bigl( \sum_{i=1}^n x_i'^2 \Bigr)^{1/2}. \quad (17)$$
Thus, from (1), (i), and (17), we have
$$\bigl| \|[f(x)]_{\mathcal{U}}\| - \|[x]_{\mathcal{V}}\| \bigr| \le \varepsilon \quad \text{and} \quad \bigl| \|[f(x)]_{\mathcal{U}} - [f(v_j)]_{\mathcal{U}}\| - \|[x]_{\mathcal{V}} - [v_j]_{\mathcal{V}}\| \bigr| \le \varepsilon \quad (18)$$
for all $x \in B_d(0)$ and $j \in \{1, 2, \ldots, n\}$. Moreover, since $\tilde{f}(v_j)$ is given in $\mathcal{U}$-coordinates, from the inverse triangle inequality and (16), we obtain
$$\bigl| \|\tilde{f}(v_j)\| - \|[f(v_j)]_{\mathcal{U}}\| \bigr| \le \|\tilde{f}(v_j) - [f(v_j)]_{\mathcal{U}}\| \le K\varepsilon. \quad (19)$$
Since $\sigma_j = \|\tilde{f}(v_j)\|$ from (15) and $\|v_j\| - \varepsilon \le \|[f(v_j)]_{\mathcal{U}}\| = \|f(v_j)\| \le \|v_j\| + \varepsilon$ from (17) and the first inequality of (18) with $x = v_j$, it follows from (19) that
$$1 - (K+1)\varepsilon \le \|[f(v_j)]_{\mathcal{U}}\| - K\varepsilon \le \sigma_j \le \|[f(v_j)]_{\mathcal{U}}\| + K\varepsilon \le 1 + (K+1)\varepsilon \quad (20)$$
for all $j \in \{1, 2, \ldots, n\}$.
Hence, using (17) and the first inequality of (18), we have
$$\Bigl| \Bigl( \sum_{i=1}^n x_i'^2 \Bigr)^{1/2} - \Bigl( \sum_{i=1}^n x_i^2 \Bigr)^{1/2} \Bigr| \le \varepsilon \quad (21)$$
and from (16) and the (inverse) triangle inequality, we note that
$$\|[f(x)]_{\mathcal{U}} - [f(v_j)]_{\mathcal{U}}\| - K\varepsilon \le \|[f(x)]_{\mathcal{U}} - [f(v_j)]_{\mathcal{U}}\| - \|\tilde{f}(v_j) - [f(v_j)]_{\mathcal{U}}\| \le \|[f(x)]_{\mathcal{U}} - \tilde{f}(v_j)\| \le \|[f(x)]_{\mathcal{U}} - [f(v_j)]_{\mathcal{U}}\| + \|\tilde{f}(v_j) - [f(v_j)]_{\mathcal{U}}\| \le \|[f(x)]_{\mathcal{U}} - [f(v_j)]_{\mathcal{U}}\| + K\varepsilon,$$
i.e.,
$$-K\varepsilon \le \|[f(x)]_{\mathcal{U}} - \tilde{f}(v_j)\| - \|[f(x)]_{\mathcal{U}} - [f(v_j)]_{\mathcal{U}}\| \le K\varepsilon \quad (22)$$
for every $j \in \{1, 2, \ldots, n\}$. Furthermore, using (15), the second inequality of (18), and (22), we obtain
$$\Bigl| \Bigl( \sum_{i \ne j} x_i'^2 + (x_j' - \sigma_j)^2 \Bigr)^{1/2} - \Bigl( \sum_{i \ne j} x_i^2 + (x_j - 1)^2 \Bigr)^{1/2} \Bigr| = \bigl| \|[f(x)]_{\mathcal{U}} - \tilde{f}(v_j)\| - \|[x]_{\mathcal{V}} - [v_j]_{\mathcal{V}}\| \bigr| \le \bigl| \|[f(x)]_{\mathcal{U}} - \tilde{f}(v_j)\| - \|[f(x)]_{\mathcal{U}} - [f(v_j)]_{\mathcal{U}}\| \bigr| + \bigl| \|[f(x)]_{\mathcal{U}} - [f(v_j)]_{\mathcal{U}}\| - \|[x]_{\mathcal{V}} - [v_j]_{\mathcal{V}}\| \bigr| \le (K+1)\varepsilon \quad (23)$$
for all $j \in \{1, 2, \ldots, n\}$.
It now follows from (21) that
$$\Bigl| \sum_{i=1}^n x_i'^2 - \sum_{i=1}^n x_i^2 \Bigr| = \Bigl| \Bigl( \sum_{i=1}^n x_i'^2 \Bigr)^{1/2} + \Bigl( \sum_{i=1}^n x_i^2 \Bigr)^{1/2} \Bigr| \cdot \Bigl| \Bigl( \sum_{i=1}^n x_i'^2 \Bigr)^{1/2} - \Bigl( \sum_{i=1}^n x_i^2 \Bigr)^{1/2} \Bigr| \le (2d+1)\varepsilon \quad (24)$$
since $\sqrt{x_1'^2 + \cdots + x_n'^2} = \|[f(x)]_{\mathcal{U}}\| = \|f(x)\| \le \|x\| + \varepsilon \le d + 1$, $\sqrt{x_1^2 + \cdots + x_n^2} = \|[x]_{\mathcal{V}}\| = \|x\| \le d$, and $0 < \varepsilon < \frac{1}{K+1} \le 1$ by (iii). Similarly, it follows from (23) that
$$\Bigl| \sum_{i \ne j} x_i'^2 + (x_j' - \sigma_j)^2 - \sum_{i \ne j} x_i^2 - (x_j - 1)^2 \Bigr| \le (2d+3)(K+1)\varepsilon \quad (25)$$
for all $j \in \{1, 2, \ldots, n\}$, since it holds from (15) and (22) that
$$\Bigl( \sum_{i \ne j} x_i'^2 + (x_j' - \sigma_j)^2 \Bigr)^{1/2} = \|[f(x)]_{\mathcal{U}} - \tilde{f}(v_j)\| \le \|[f(x)]_{\mathcal{U}} - [f(v_j)]_{\mathcal{U}}\| + K\varepsilon = \|f(x) - f(v_j)\| + K\varepsilon \le \|x - v_j\| + (K+1)\varepsilon \le d + 1 + (K+1)\varepsilon$$
and
$$\Bigl( \sum_{i \ne j} x_i^2 + (x_j - 1)^2 \Bigr)^{1/2} = \|[x]_{\mathcal{V}} - [v_j]_{\mathcal{V}}\| = \|x - v_j\| \le \|x\| + \|v_j\| \le d + 1.$$
We use (25) to obtain
$$-(2d+3)(K+1)\varepsilon - \sum_{i=1}^n x_i'^2 + \sum_{i=1}^n x_i^2 + 1 - \sigma_j^2 \le 2x_j - 2\sigma_j x_j' \le (2d+3)(K+1)\varepsilon - \sum_{i=1}^n x_i'^2 + \sum_{i=1}^n x_i^2 + 1 - \sigma_j^2 \quad (26)$$
for any $j \in \{1, 2, \ldots, n\}$. Therefore, it follows from (20), (24), and (26) that
$$-\bigl( (2d+3)(K+1) + (2d+1) + 3(K+1) \bigr)\varepsilon \le 2x_j - 2\sigma_j x_j' \le \bigl( (2d+3)(K+1) + (2d+1) + 2(K+1) \bigr)\varepsilon \quad (27)$$
for all $j \in \{1, 2, \ldots, n\}$.
We note that $|x_j'| \le \|[f(x)]_{\mathcal{U}}\| = \|f(x)\| \le \|x\| + \varepsilon \le d + 1$. Since $x_j - \sigma_j x_j' = (x_j - x_j') + (1 - \sigma_j) x_j'$, we can use (20) and (27) to obtain
$$|x_j - x_j'| \le |x_j - \sigma_j x_j'| + |1 - \sigma_j|\,|x_j'| \le \bigl( (2d+4)(K+1) + d + 1 \bigr)\varepsilon \quad (28)$$
for $j \in \{1, 2, \ldots, n\}$.
We now define an isometry $I : B_d(0) \to [\mathbb{R}^n]_{\mathcal{U}}$ by $I(x) = (x_1, x_2, \ldots, x_n)_{\mathcal{U}}$ for all points $x \in B_d(0)$ with $[x]_{\mathcal{V}} = (x_1, x_2, \ldots, x_n)$, i.e., $[I(x)]_{\mathcal{E}} = \sum_{i=1}^n x_i u_i$ for all $x \in B_d(0)$ with $[x]_{\mathcal{E}} = \sum_{i=1}^n x_i v_i$. It then follows from (28) that
$$\|[f(x)]_{\mathcal{U}} - I(x)\| = \|(x_1' - x_1, x_2' - x_2, \ldots, x_n' - x_n)\| = \Bigl( \sum_{j=1}^n |x_j' - x_j|^2 \Bigr)^{1/2} \le \bigl( (2d+4)(K+1) + d + 1 \bigr)\sqrt{n}\,\varepsilon$$
for all $x \in B_d(0)$ with $[x]_{\mathcal{V}} = (x_1, x_2, \ldots, x_n)$. We recall that $\|x\| = \|[x]_{\mathcal{V}}\| \le d$ for all $x \in B_d(0)$. Let $\mathbf{I}$ be the transformation matrix for the linear operator $I$. Then we have
$$\|f(x) - U \mathbf{I} x\| = \|U U^{tr} f(x) - U \mathbf{I} x\| = \|U^{tr} f(x) - \mathbf{I} x\| = \|[f(x)]_{\mathcal{U}} - I(x)\| \le \bigl( (2d+4)(K+1) + d + 1 \bigr)\sqrt{n}\,\varepsilon$$
for all $x \in B_d(0)$, where $J = U \mathbf{I}$ is the (orthogonal) transformation matrix for the desired isometry $J : B_d(0) \to \mathbb{R}^n$. □
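The construction in the proof is concrete: the resulting isometry maps $x = \sum_i x_i v_i$ to $\sum_i x_i u_i$, i.e., $Jx = UV^{tr}x$. The following Python sketch (a numerical illustration under assumed data, not part of the proof) applies it with $d = 1$ and $K = 0$ to a small linear perturbation of a random rotation:

```python
import numpy as np

rng = np.random.default_rng(3)
n, eps = 3, 1e-3

# An assumed test function: a random orthogonal Q plus a tiny linear error E,
# so f is linear, hence approximately linear with K = 0.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
E = eps * rng.uniform(-0.1, 0.1, (n, n))
def f(x):
    return Q @ x + E @ x

A = np.column_stack([f(e) for e in np.eye(n)])     # A = (f(e1) ... f(en)), as in (12)
U, s, Vt = np.linalg.svd(A)
J = U @ Vt                                         # the proof's isometry: sum x_i v_i -> sum x_i u_i

pts = rng.standard_normal((5_000, n))
pts /= np.maximum(np.linalg.norm(pts, axis=1, keepdims=True), 1.0)  # samples in B_1(0)
worst = max(np.linalg.norm(f(x) - J @ x) for x in pts)
assert worst <= (3 * 1 + 5) * np.sqrt(n) * eps     # bound of Corollary 1 with d = 1
```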
We note that a function $f : B_d(0) \to \mathbb{R}^n$ is linear if and only if it is approximately linear with the constant $K = 0$. Therefore, if we assume that $f$ is linear, we can easily verify that the following corollary is an immediate consequence of Theorem 1.
Corollary 1. 
Assume that $n$ is a positive integer, $d$ is a real number that is not less than 1, and $\varepsilon$ is a constant satisfying $0 < \varepsilon < 1$. If $f : B_d(0) \to \mathbb{R}^n$ is a linear $\varepsilon$-isometry, then there exists an isometry $J : B_d(0) \to \mathbb{R}^n$ such that
$$\|f(x) - J(x)\| \le (3d + 5)\sqrt{n}\,\varepsilon$$
for all $x \in B_d(0)$.
Example 3. 
Given $0 < \delta < 1$, we define a function $f : B_1(0) \to \mathbb{R}^2$ by
$$f(x) = \begin{pmatrix} x_1 + \delta x_2 \\ x_2 - \delta x_1 \end{pmatrix}$$
for all $x = (x_1, x_2)^{tr} \in B_1(0)$. Then we have
$$f(e_1) = \begin{pmatrix} 1 \\ -\delta \end{pmatrix}, \quad f(e_2) = \begin{pmatrix} \delta \\ 1 \end{pmatrix} \quad \text{and} \quad A = \begin{pmatrix} 1 & \delta \\ -\delta & 1 \end{pmatrix}.$$
Hence, it follows that
$$A x = \begin{pmatrix} 1 & \delta \\ -\delta & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 + \delta x_2 \\ x_2 - \delta x_1 \end{pmatrix} = f(x)$$
for all $x \in B_1(0)$, i.e., $f$ is linear (and therefore $K = 0$).
In addition, we obtain
$$\|f(x) - f(y)\|^2 = \left\| \begin{pmatrix} x_1 - y_1 + \delta(x_2 - y_2) \\ x_2 - y_2 - \delta(x_1 - y_1) \end{pmatrix} \right\|^2 = (1 + \delta^2)\bigl( (x_1 - y_1)^2 + (x_2 - y_2)^2 \bigr) = (1 + \varepsilon)\|x - y\|^2$$
for all $x, y \in B_1(0)$, where we set $\varepsilon = \delta^2$. That is, it holds that
$$0 \le \|f(x) - f(y)\| - \|x - y\| = \bigl( \sqrt{1 + \varepsilon} - 1 \bigr)\|x - y\| \le \varepsilon$$
for all $x, y \in B_1(0)$, which implies that $f$ is an $\varepsilon$-isometry. According to Corollary 1, there exists an isometry $J : B_1(0) \to \mathbb{R}^2$ such that
$$\|f(x) - J(x)\| \le 8\sqrt{2}\,\varepsilon$$
for all $x \in B_1(0)$.
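Example 3 can also be verified numerically. In the sketch below (an illustration only), we take $J$ to be the orthogonal factor $UV^{tr}$ of the SVD of $A$, which is one concrete isometry satisfying the bound of Corollary 1:

```python
import numpy as np

delta = 0.1
eps = delta**2
A = np.array([[1.0, delta], [-delta, 1.0]])        # f(x) = A x from Example 3

U, s, Vt = np.linalg.svd(A)
J = U @ Vt                                         # orthogonal, hence an isometry

rng = np.random.default_rng(2)
pts = rng.standard_normal((10_000, 2))
pts /= np.maximum(np.linalg.norm(pts, axis=1, keepdims=True), 1.0)  # samples in B_1(0)
worst = max(np.linalg.norm(A @ x - J @ x) for x in pts)
assert worst <= 8 * np.sqrt(2) * eps               # bound of Corollary 1 with d = 1
```

Here both singular values of $A$ equal $\sqrt{1 + \delta^2}$, so the deviation $\|Ax - Jx\|$ is of order $\varepsilon/2$, comfortably within the $8\sqrt{2}\,\varepsilon$ bound.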

5. Conclusions and Discussion

One of the characteristics of this paper is that it is the first to use the technique of singular value decomposition to study the stability of isometries. Although we additionally assumed approximate linearity, we were able to obtain a better result than previous results by other mathematicians by utilizing the SVD technique, as can be seen in Table 1 below.
We compare the result of this paper with previously published notable research results and present them in Table 1. More precisely, we compare the coefficients C n of the upper bounds C n ε for the difference between the given function f and the sought isometry I.
The values in the first row of Table 1 were obtained by substituting c = 1 into the formula presented in the proof of [8] (Theorem 4.1). The values in the second row of Table 1 result from the assumption that C = 27 and R = r = 1 in [6] (Theorem 1). The values of the third and fourth rows are due to the formulas presented in [9] and [7] with d = 1 , respectively. The last row gives approximate values obtained from Theorem 1 in this paper, assuming d = 1 and K = 1 . According to Table 1, the results of this paper are superior to those of [6] for all integers n 1 .
In an extension of this paper, we plan to continue to study the following topics:
( i )
Whether similar results can be achieved with norms other than the Euclidean norm will be investigated in more detail.
( i i )
We will investigate whether similar results to those in this paper can be obtained under conditions other than approximate linearity.
( i i i )
We will examine the relationship between the concept of ε -isometry and the concept of approximate linearity.
( i v )
We will study whether the results of this paper can be extended to infinite-dimensional real Hilbert spaces.

Author Contributions

Writing—review & editing, S.-M.J. and J.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C109489611).

Data Availability Statement

No data were gathered for this article.

Acknowledgments

We would like to thank the Reviewers for taking the time and effort necessary to review the manuscript. We sincerely appreciate all valuable comments and suggestions, which helped us to improve the quality of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hyers, D.H.; Ulam, S.M. On approximate isometries. Bull. Am. Math. Soc. 1945, 51, 288–292. [Google Scholar] [CrossRef]
  2. Gruber, P.M. Stability of isometries. Trans. Am. Math. Soc. 1978, 245, 263–277. [Google Scholar] [CrossRef]
  3. Gevirtz, J. Stability of isometries on Banach spaces. Proc. Am. Math. Soc. 1983, 89, 633–636. [Google Scholar] [CrossRef]
  4. Omladič, M.; Šemrl, P. On non linear perturbations of isometries. Math. Ann. 1995, 303, 617–628. [Google Scholar] [CrossRef]
  5. Fickett, J.W. Approximate isometries on bounded sets with an application to measure theory. Stud. Math. 1982, 72, 37–46. [Google Scholar] [CrossRef]
  6. Vestfrid, I.A. ε-Isometries in Euclidean spaces. Nonlinear Anal. 2005, 63, 1191–1198. [Google Scholar] [CrossRef]
  7. Choi, G.; Jung, S.-M. Hyers-Ulam stability of isometries on bounded domains–III. Mathematics 2024, 12, 1784. [Google Scholar] [CrossRef]
  8. Väisälä, J. Isometric approximation property in Euclidean spaces. Israel J. Math. 2002, 128, 1–27. [Google Scholar] [CrossRef]
  9. Jung, S.-M. Hyers-Ulam stability of isometries on bounded domains. Open Math. 2021, 19, 675–689. [Google Scholar] [CrossRef]
Table 1. Comparison of the results of this paper with those of existing papers.

                  R        R^2      R^3      R^4      R^5
in [8]            4        >79      >799     >7990    >79,900
in [6]            27       54       81       108      135
in [9]                              <84      128      <179
in [7]                                       80       98
in this paper     14       <20      <25      28       <32