Article

Iterative Matrix Techniques Based on Averages

by
María A. Navascués
Department of Applied Mathematics, Universidad de Zaragoza, 50018 Zaragoza, Spain
Algorithms 2025, 18(7), 439; https://doi.org/10.3390/a18070439
Submission received: 19 June 2025 / Revised: 10 July 2025 / Accepted: 14 July 2025 / Published: 17 July 2025

Abstract

Matrices have an important role in modern engineering problems like artificial intelligence, biomedicine, machine learning, etc. The present paper proposes new algorithms to solve linear problems involving finite matrices as well as operators in infinite dimensions. It is well known that the power method to find an eigenvalue and an eigenvector of a matrix requires the existence of a dominant eigenvalue. This article proposes an iterative method to find eigenvalues of matrices without a dominant eigenvalue. This algorithm is based on a procedure involving averages of the mapping and the independent variable. The second contribution is the computation of an eigenvector associated with a known eigenvalue of linear operators or matrices. Then, a novel numerical method for solving a linear system of equations is studied. The algorithm is especially suitable for cases where the iteration matrix has a norm equal to one or the standard iterative method based on fixed point approximation converges very slowly. These procedures are applied to the resolution of Fredholm integral equations of the first kind with an arbitrary kernel by means of orthogonal polynomials, and in a particular case where the kernel is separable. Regarding the latter case, this paper studies the properties of the associated Fredholm operator.
MSC:
15A18; 15A06; 65H17; 47A10; 45B05

1. Introduction

Matrices have an important role in current engineering problems like artificial intelligence [1], biomedicine [2], machine learning [3], neural networks [4], etc. It could be said that they are, in fact, an interface between mathematics and the real world. Spectral matrix theory is closely related to big data and statistics, graph theory, networking, cryptography, search engines, optimization, convexity and, of course, the resolution of all kinds of linear equations.
An open problem of spectral theory is the search for eigenvalues and eigenvectors of a matrix. For this purpose, iterative methods are especially appropriate when the matrices are large and sparse, because the iteration matrix does not change during the process.
The classical power method of R. von Mises and H. Pollaczek-Geiringer [5] assumes the existence of a dominant eigenvalue. One of the main contributions of this paper is the proposal of an algorithm to compute the eigenvalues of matrices and operators that do not have a dominant eigenvalue. The algorithm is based on the iterative averaged methods for fixed-point approximation proposed initially by Mann [6] and Krasnoselskii [7] and, subsequently, by Ishikawa [8], Noor [9], Sahu [10] and others. There are recent articles on this topic (references [11,12,13,14,15,16,17,18,19], for instance). In general, these papers deal with the problem of approximation of fixed points of non-expansive, asymptotically non-expansive and nearly asymptotically non-expansive mappings that are particular cases of Lipschitz and near-Lipschitz maps [20,21]. However, this article aims to prove that the usefulness of iterative averaged methods goes beyond fixed-point approximation.
In reference [22], the author proposed a new algorithm of this type called N-iteration. The method has proved useful for the iterative resolution of Fredholm integral equations of the second kind and of the Hammerstein type [23] (see, for instance, references [24,25]).
The above procedure is used in this article to compute eigenvalues and eigenvectors of matrices and linear operators (Section 2 and Section 3) and for the resolution of systems of linear equations (Section 4). All these techniques are applied to solving Fredholm integral equations [26,27,28,29,30] of the first kind by means of orthogonal polynomials (Section 5). In Section 6, we review some properties of compact and Fredholm integral operators, while Section 7 presents the solution of a Fredholm integral equation of the first kind in a particular case where the kernel of the operator is separable. Section 8 deals with a particular case of a Fredholm integral equation of the second kind. Several examples with tables and figures illustrate the performance and convergence of the algorithms proposed.

2. Averaged Power Method for Finding Eigenvalues and Eigenvectors of a Matrix

Given a matrix $A \in M(m \times m)$, where $M(m \times m)$ denotes the set of real or complex matrices of size $m \times m$, the power method is a well-known procedure for finding the dominant eigenvalue of $A$. Let $\sigma(A)$ represent the spectrum of $A$, that is to say, the set of eigenvalues of $A$.
Definition 1. 
A matrix $A$ has a dominant eigenvalue $\lambda_1 \in \sigma(A)$ if $|\lambda_1| > |\lambda_j|$ for any $\lambda_j \in \sigma(A)$, $j \neq 1$. If $v \in \mathbb{R}^m$ (or $v \in \mathbb{C}^m$) is non-null and $A v = \lambda_1 v$, then $v$ is a dominant eigenvector of $A$.
It is well known that if the matrix $A$ is diagonalizable and has a dominant eigenvalue, then the sequence defined by $X_0 \neq \bar{0}$ and the recurrence $X_{k+1} = A X_k$ for $k \geq 0$ converges for almost every starting point to a dominant eigenvector (namely, whenever the component of $X_0$ with respect to the dominant eigenvector is non-null). The dominant eigenvalue can be approximated by means of the so-called Rayleigh quotient:
$$\lambda_1 \approx \frac{\langle A X_k, X_k\rangle}{\langle X_k, X_k\rangle},$$
when $k$ is sufficiently large. The formula comes, obviously, from the definition of an eigenvector associated with the eigenvalue $\lambda_1$. We shall see, with an example, that if the matrix has no dominant eigenvalue then the power method may not converge.
Example 1. 
The matrix $A = \mathrm{diag}(1, -1)$ has the eigenvalue $\lambda_1 = 1$ with eigenvectors $(x, 0)$ and $\lambda_2 = -1$ with eigenvectors $(0, y)$. The power method does not work in general since
$$A^k(x, y) = (x, -y)$$
for $k$ odd and
$$A^k(x, y) = (x, y)$$
for $k$ even. The sequence $(A^k X_0)$ does not converge except for the starting points $X_0 = (x, 0)$.
The present section is devoted to proposing an algorithm to find eigenvalues and eigenvectors of matrices that do not have a dominant eigenvalue.
In reference [22], an algorithm was proposed for the approximation of a fixed point of $T : C \to C$, where $C$ is a nonempty, closed and convex subset of a normed space $E$. It is given by the following recurrence:
$$g_n = (1-\alpha_n) f_n + \alpha_n T f_n, \quad h_n = (1-\beta_n) f_n + \beta_n g_n, \quad f_{n+1} = (1-\gamma_n) h_n + \gamma_n T h_n,$$
for $\alpha_n, \beta_n, \gamma_n \in [0,1]$, $n \geq 0$ and $f_0 \in C$. This iterative scheme was called the N-algorithm. The procedure generalizes the Krasnoselskii–Mann method, defined in references [6,7], which is obtained in the case where $\alpha_n = 0$, $\beta_n = 1$. If, additionally, $\gamma_n = 1$, one has the usual Picard–Banach iteration.
Averaged power method
If the matrix $A \in M(m \times m)$ has no dominant eigenvalue, that is to say, $|\lambda_j| = k \neq 0$ for any $\lambda_j \in \sigma(A)$, the power method may not converge. The algorithm proposed here applies the N-iteration to the matrix $A$ instead of computing the powers of $A$. The sequence of vectors obtained by the power or the averaged power method may diverge (even though the iterates may be good approximations of an eigenvector). To avoid this, a scaling of the output vector is sometimes performed at every step of the algorithm. Thus, the procedure is described by the following recurrence:
$$V_n = (1-\alpha_n) X_n + \alpha_n A X_n, \quad W_n = (1-\beta_n) X_n + \beta_n V_n, \quad X_{n+1} = (1-\gamma_n) W_n + \gamma_n A W_n,$$
$$X_{n+1} := X_{n+1}/\|X_{n+1}\|, \qquad \lambda_{n+1} = \langle A X_{n+1}, X_{n+1}\rangle,$$
for $\alpha_n, \beta_n, \gamma_n \in [0,1]$ such that $0 < \inf \alpha_n \leq \sup \alpha_n < 1$, $0 < \inf \gamma_n \leq \sup \gamma_n < 1$, and $X_0 \neq \bar{0}$, $X_0 \in \mathbb{R}^m$. The notation $\|\cdot\|$ represents any norm in $\mathbb{R}^m$ and $\langle \cdot, \cdot\rangle$ is the inner product in the $m$-dimensional Euclidean space.
A useful criterion to stop the algorithm is to check that $\|A X_{n+1} - \lambda_{n+1} X_{n+1}\| < \varepsilon$ for some tolerance $\varepsilon$, or that $|\lambda_{n+1} - \lambda_n| < \varepsilon$.
The last value $\lambda_k$ is an approximation of an eigenvalue of $A$, and $X_k$ is the corresponding approximate eigenvector.
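A minimal NumPy sketch of this recurrence, with constant weights (an illustrative simplification; the paper's computations were performed in Mathematica, and the function name `averaged_power` is ours):

```python
import numpy as np

def averaged_power(A, x0, alpha=0.5, beta=0.5, gamma=0.5, tol=1e-6, max_iter=1000):
    """N-iterated (averaged) power method: returns (eigenvalue, unit eigenvector).

    Constant weights alpha, beta, gamma in (0, 1) are used for simplicity;
    the scheme allows variable sequences bounded away from 0 and 1.
    """
    x = np.asarray(x0, dtype=float)
    lam_old = np.inf
    for _ in range(max_iter):
        v = (1 - alpha) * x + alpha * (A @ x)   # first averaging with A
        w = (1 - beta) * x + beta * v           # second averaging
        x = (1 - gamma) * w + gamma * (A @ w)   # third averaging with A
        x = x / np.linalg.norm(x)               # Euclidean scaling
        lam = (A @ x) @ x                       # Rayleigh quotient (||x|| = 1)
        # stop when successive eigenvalue approximations are close
        # (one of the two criteria mentioned in the text)
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x
```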
In order to justify the suitability of the algorithm, we will consider the simplest case, the Krasnoselskii iteration:
$$X_{n+1} = (1-\alpha) X_n + \alpha A X_n = ((1-\alpha) I + \alpha A) X_n,$$
for $0 < \alpha < 1$. Let us denote $B := (1-\alpha) I + \alpha A$. It is easy to check that
$$\sigma(B) = \{(1-\alpha) + \lambda\alpha : \lambda \in \sigma(A)\},$$
and the eigenvectors of $B$ and $A$ agree, that is to say, $X$ is an eigenvector of $A$ with respect to an eigenvalue $\lambda$ if and only if $X$ is an eigenvector of $B$ with eigenvalue $(1-\alpha) + \lambda\alpha$.
Let us assume that $A$ has positive and negative eigenvalues with the same absolute value. Then, $|(1-\alpha) + \alpha\lambda| \leq (1-\alpha) + \alpha\lambda^*$ for the positive eigenvalue $\lambda^*$.
For a negative eigenvalue $\lambda$, $|(1-\alpha) + \alpha\lambda| < (1-\alpha) + \alpha\lambda^*$ and, consequently, $(1-\alpha) + \alpha\lambda^*$ is a dominant eigenvalue of $B$. Hence, the power method applied to the matrix $B$ will converge to an eigenvector of $B$ (and thus of $A$) if the component of $X_0$ with respect to this eigenvector is non-null.
The power method is based on the product of a matrix $A \in M(m \times m)$ and a vector. This operation has a complexity of $O(m^2)$. If the number of iterations is $n_{max}$, the complexity is $O(n_{max} m^2)$. For a large matrix, the averaging operations of the proposed N-algorithm are negligible, and the complexity is $O(2 n_{max} m^2)$. Though the operations involved are about twice as many as in the original power method, the type of convergence (polynomial of degree 2) is not modified. If one wishes to reduce the total number of operations, the use of constant weight coefficients is advisable, and the algorithm can be conveniently optimized.
The speed of convergence of the power method (and consequently the number of iterations required to give a good approximation) depends on the ratio $|\lambda_2|/|\lambda_1|$, where $\lambda_1, \lambda_2$ are the greatest eigenvalues in magnitude: the smaller this quotient, the faster the convergence. In the averaged Mann–Krasnoselskii method, this "spectral gap" is $|1-\alpha-\alpha\lambda^*|/(1-\alpha+\alpha\lambda^*)$, where $\lambda^*$ is the positive eigenvalue. For instance, if $\alpha = 1/2$ the quotient reduces to $|1-\lambda^*|/(1+\lambda^*)$, so the minimum quotients are reached for values of $\lambda^*$ close to $1$.
Example 2. 
Considering the matrix of Example 1,
$$A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$$
we obtain that for $X_0 = (x, y)$, $x \neq 0$, the Krasnoselskii method with $\alpha = 1/2$ gives as first approximation
$$X_1 = (X_0 + A X_0)/2 = (x, 0),$$
and the rest of the iterates agree with the eigenvector $(x, 0)$. The positive eigenvalue is computed as $\lambda_1 = \langle A X_k, X_k\rangle/\langle X_k, X_k\rangle = x^2/x^2 = 1$.
Example 3. 
The spectrum of the matrix
$$A = \begin{pmatrix} 0 & 2 \\ 2 & 0 \end{pmatrix}$$
is $\sigma(A) = \{2, -2\}$. The N-iteration with $\alpha_n = \beta_n = \gamma_n = 1/2$, starting at $X_0 = (2, -3)$, with Euclidean scaling, produces the sequence of approximations collected in Table 1.
The eigenvalue computed is $\lambda = \langle A X_7, X_7\rangle/\langle X_7, X_7\rangle = 2$. The stopping criterion was $\|X_n - X_{n-1}\| < 10^{-6}$.
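Reusing the `averaged_power` sketch given after the description of the algorithm, this example can be checked directly (expected values are those of Table 1):

```python
import numpy as np

A = np.array([[0.0, 2.0],
              [2.0, 0.0]])
lam, x = averaged_power(A, x0=[2.0, -3.0])   # sketch defined in Section 2
print(lam, x)  # expect lam close to 2 and x close to (-0.707107, -0.707107)
```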
Example 4. 
Let us consider the following matrix:
$$A = \begin{pmatrix} -2 & 4 & 1 \\ 0 & 3 & 0 \\ 5 & -4 & 2 \end{pmatrix}.$$
The set of eigenvalues of $A$ is $\sigma(A) = \{3, -3\}$. The algorithm described, with $\alpha_n = \beta_n = \gamma_n = 1/2$, starting at $X_0 = (2, -3, 2)$ and with Euclidean scaling, produces the sequence of approximations and distances collected in Table 2.
The eigenvalue computed is $\lambda = \langle A X_3, X_3\rangle/\langle X_3, X_3\rangle = 3$. The stopping criterion was $\|X_n - X_{n-1}\| < 10^{-6}$.
Remark 1. 
Let us notice that the matrices of these examples do not have a dominant eigenvalue but, nevertheless, the N-iterated power algorithm is convergent.

Averaged Power Method for Finding Eigenvalues and Eigenfunctions of a Linear Operator

The algorithm proposed in this section can be applied to find eigenvalues and eigenvectors/eigenfunctions of a linear operator $K : E \to E$ defined on an infinite-dimensional normed linear space, in case of existence (see, for instance, Theorem 2). Starting at $f_0 \in E$, we would follow the steps
$$g_n = (1-\alpha_n) f_n + \alpha_n K f_n, \quad h_n = (1-\beta_n) f_n + \beta_n g_n, \quad f_{n+1} = (1-\gamma_n) h_n + \gamma_n K h_n,$$
$$f_{n+1} := f_{n+1}/\|f_{n+1}\|, \qquad \lambda_{n+1} = \langle K f_{n+1}, f_{n+1}\rangle,$$
for $\alpha_n, \beta_n, \gamma_n \in [0,1]$ such that $0 < \inf \alpha_n \leq \sup \alpha_n < 1$, $0 < \inf \gamma_n \leq \sup \gamma_n < 1$, and $f_0 \neq \bar{0}$.
Let us note that a structure of inner product space on $E$ is required in order to compute the eigenvalue by means of the Rayleigh quotient. Otherwise, we should solve the equation $K f_n = \lambda_n f_n$.
The stopping criteria of the algorithm are similar to those given for matrices, using the norm of the space $E$, or the modulus in the case of employing the approximations of the eigenvalue.
Example 5. 
Let us compute an eigenvalue of a Fredholm integral operator on $L^2([0,1])$ defined, for $f \in L^2([0,1])$, as
$$K f(x) := \int_0^1 (2x^2 + 3y^3) f(y)\, dy.$$
For $f_0(x) = x$, the sequence of approximate dominant eigenvalues $(\lambda_n)$ generated by the N-algorithm with $\alpha_n = \beta_n = \gamma_n = 1/2$ is collected in Table 3. The norm $\|\cdot\|_2$ is computed as
$$\|f\|_2 = \Big(\int_0^1 |f(x)|^2\, dx\Big)^{1/2},$$
and the inner product is given by
$$\langle f, g\rangle = \int_0^1 f(x) g(x)\, dx.$$
Table 3 collects the number of iterations ($n$), the approximate eigenvalue ($\lambda_n$), the $L^2$-distance between $K f_n$ and $\lambda_n f_n$ (where $f_n$ is the $n$-th approximation of the eigenfunction) and the successive distances between the eigenvalue approximations ($|\lambda_n - \lambda_{n-1}|$). The tenth outcome of the eigenfunction is
$$f_{10}(x) = 0.575913 + 1.104832\, x^2,$$
with eigenvalue $\lambda = 1.709201$. The stopping criterion was $\|K f_n - \lambda_n f_n\| < 10^{-7}$. Figure 1 represents the number of iterations (x-axis) and the corresponding magnitude $\|K f_n - \lambda_n f_n\|$ (y-axis).
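One way to reproduce this computation numerically is to discretize $K$ by a quadrature rule and run the same averaged recurrence on the resulting matrix. The following NumPy sketch assumes trapezoidal quadrature on a uniform grid (an illustration; the paper's computations used Mathematica):

```python
import numpy as np

n = 400
x = np.linspace(0.0, 1.0, n)
wq = np.full(n, 1.0 / (n - 1)); wq[0] = wq[-1] = 0.5 / (n - 1)  # trapezoid weights
K = (2 * x[:, None]**2 + 3 * x[None, :]**3) * wq[None, :]       # K f  ~  K @ f

f = x.copy()                       # f_0(x) = x
lam_old = np.inf
for _ in range(100):
    v = 0.5 * f + 0.5 * (K @ f)
    h = 0.5 * f + 0.5 * v
    f = 0.5 * h + 0.5 * (K @ h)
    f = f / np.sqrt(wq @ f**2)     # L2([0,1]) normalization
    lam = wq @ (f * (K @ f))       # <K f, f> in L2
    if abs(lam - lam_old) < 1e-9:
        break
    lam_old = lam
print(lam)   # approx 1.7092, cf. Table 3
```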

3. Computation of an Eigenvector of a Known Eigenvalue of a Linear Operator

This section is devoted to studying the convenience of using the N-iteration to find an eigenvector/eigenfunction corresponding to a non-null known eigenvalue of a linear operator in finite or infinite dimensions.
Let $T : E \to E$ be a linear operator defined on a normed space $E$. If $\lambda \in \sigma(T)$, where $\sigma(T)$ is the point spectrum of $T$, the search for an eigenvector (or eigenfunction, if $E$ is a functional space) associated with $\lambda$ consists in finding a non-null element $v \in E$ such that $T v = \lambda v$. The solution of this equation can be reduced to a fixed-point problem, considering that this equality is equivalent to
$$\lambda^{-1} T v = v,$$
assuming that $\lambda$ is non-null. The following result gives sufficient conditions for the convergence of the N-iteration applied to $\lambda^{-1} T$ to find an eigenvector associated with a non-null eigenvalue.
The next result was given in reference [22].
Proposition 1. 
Let $E$ be a normed space and $C \subseteq E$ be nonempty, closed and convex. Let $T : C \to C$ be a nonexpansive operator whose set of fixed points is nonempty; then the sequence of N-iterates $(f_n)$ for $f_0 \in C$ is such that the sequence $(\|f_n - f^*\|)$ is convergent for any fixed point $f^* \in C$ of $T$. If $E$ is a uniformly convex Banach space and the scalars are chosen such that $0 < \inf \alpha_n \leq \sup \alpha_n < 1$ and $0 < \inf \gamma_n \leq \sup \gamma_n < 1$, then the sequence $(\|f_n - T f_n\|)$ tends to zero for any $f_0 \in C$.
Theorem 1. 
Let $E$ be a uniformly convex Banach space, and $K : E \to E$ be a linear and compact operator. If $\lambda \in \sigma(K)$ is non-null and such that $\|K\| \leq |\lambda|$, then the N-iteration with the scalars described in the previous proposition, applied to the map $\lambda^{-1} K$, converges strongly to an eigenvector associated with $\lambda$.
Proof. 
Since $\lambda \in \sigma(K)$, there exists $f^* \in E$, $f^* \neq 0$, such that $K f^* = \lambda f^*$. Then, $f^*$ is a fixed point of $\lambda^{-1} K$.
The latter operator is nonexpansive since
$$\|\lambda^{-1} K f - \lambda^{-1} K g\| \leq |\lambda^{-1}|\, \|K\|\, \|f - g\| \leq \|f - g\|,$$
for any $f, g \in E$, due to the hypotheses on $\lambda$. Regarding invariance, if $f \in \bar{B}(\bar{0}, R)$ then
$$\|\lambda^{-1} K f\| \leq \|f\| \leq R,$$
and $\lambda^{-1} K f \in \bar{B}(\bar{0}, R)$.
Let $f_0 \in \bar{B}(\bar{0}, R)$, and $(f_n) \subseteq \bar{B}(\bar{0}, R)$ be the sequence of N-iterates of $f_0$. Since $\lambda^{-1} K$ is compact, there exists a convergent subsequence $(\lambda^{-1} K f_{m_k})$. Let $f$ be the limit of this subsequence. Then,
$$f_{m_k} = (I - \lambda^{-1} K) f_{m_k} + \lambda^{-1} K f_{m_k} \to f,$$
because $(I - \lambda^{-1} K) f_{m_k} \to 0$ due to the properties of the N-iteration (Proposition 1). The continuity of $I - \lambda^{-1} K$ implies that $0 = (I - \lambda^{-1} K) f$ and $f \in Fix(\lambda^{-1} K)$. The fact that $\lim_n \|f_n - f\|$ exists (see Proposition 1) implies that the sequence of N-iterates $(f_n)$ converges strongly to the fixed point $f$ (that is to say, to an eigenvector of $K$ associated with $\lambda$). □
Corollary 1. 
Let $\lambda$ be a non-null eigenvalue of the real or complex matrix $A$. If, for some subordinate matrix norm, $\|A\| \leq |\lambda|$, then the N-iteration applied to $\lambda^{-1} A$ converges to an eigenvector associated with $\lambda$.
Proof. 
This is a trivial consequence of Theorem 1, considering that, in finite-dimensional spaces, all linear operators are compact. □
Example 6. 
The matrix
$$A = \begin{pmatrix} 4 & 5 \\ 6 & 5 \end{pmatrix}$$
has an eigenvalue equal to $10$ and $\|A\|_1 = 10$. The computation of an eigenvector associated with $\lambda = 10$ by means of the N-iteration with $\alpha_n = \beta_n = \gamma_n = 1/2$, starting at $u_0 = (1, 1)$, as described in Corollary 1, has produced the approximations collected in Table 4.
The seventh approximation of the estimated eigenvalue is
$$\lambda_7 = \langle A u_7, u_7\rangle/\langle u_7, u_7\rangle \approx 10.000035.$$
The stopping criterion was $\|A X_n - \lambda X_n\| < 10^{-3}$.
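The computation can be sketched in NumPy as follows (the map iterated is $\lambda^{-1} A$, nonexpansive here since $\|A\|_1 = |\lambda|$; weights fixed at $1/2$ as in the example):

```python
import numpy as np

A = np.array([[4.0, 5.0],
              [6.0, 5.0]])
lam = 10.0
T = A / lam                        # lambda^{-1} A, nonexpansive in the 1-norm

u = np.array([1.0, 1.0])           # u_0
for _ in range(50):
    v = 0.5 * u + 0.5 * (T @ u)
    h = 0.5 * u + 0.5 * v
    u = 0.5 * h + 0.5 * (T @ h)
    if np.linalg.norm(A @ u - lam * u) < 1e-3:
        break
print(u)   # approaches a multiple of (5, 6), cf. Table 4
```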

4. Solution of a Linear System of Equations by an Averaged Iterative Method

If we need to solve a linear system of equations such as
$$A X = b, \qquad (1)$$
where $A \in M(m \times m)$, $X \in M(m \times 1)$ and $b \in M(m \times 1)$, we can transform (1) into an equation of the following type:
$$T X = X, \qquad (2)$$
where $T X = A_1 X + b$ and $A_1 = I - A$. It is clear that $T$ is a Lipschitz operator:
$$\|T X_1 - T X_2\| \leq \|A_1\|\, \|X_1 - X_2\|,$$
for $X_1, X_2 \in \mathbb{R}^m$, where $\|A_1\|$ is a subordinate matrix norm of $A_1$. Let us note that we use the same notation for vector and subordinate matrix norms.
If $\|A_1\| \leq 1$ then $T$ is nonexpansive. Let us assume that the system (1) has some solution $X^*$. Then, for any $X \in \mathbb{R}^m$,
$$\|T X - X^*\| \leq \|X - X^*\|,$$
and any ball $\bar{B}(X^*, R)$ is invariant under $T$: if $X \in \bar{B}(X^*, R)$,
$$\|T X - X^*\| \leq \|X - X^*\| \leq R.$$
Then the restriction of $T$ to the ball $\bar{B}(X^*, R)$ is a continuous map on a compact set. The N-iteration is such that $\|X_n - T X_n\| \to 0$ (Proposition 1). At the same time, due to the compactness of $\bar{B}(X^*, R)$, for $X_0 \in \bar{B}(X^*, R)$ there exists a subsequence $(X_{n_k})$ such that $X_{n_k} \to \bar{X}$. By continuity, $(I - T) X_{n_k}$ tends to $(I - T)\bar{X} = \bar{0}$, and consequently $\bar{X}$ is a fixed point of $T$. The fact that $\|X_n - \bar{X}\|$ is convergent (Proposition 1) implies that the whole sequence $(X_n)$ converges to the fixed point $\bar{X}$, that is to say, it approximates a solution of the equation $A X = b$.
Remark 2. 
If $\|A_1\| < 1$, $T$ is contractive on a complete space, and there exists a unique solution, which can be approximated by means of the Picard iteration.
If $\|A_1\| = 1$, $T$ is nonexpansive, and the N-iteration with $0 < \inf \alpha_n \leq \sup \alpha_n < 1$ and $0 < \inf \gamma_n \leq \sup \gamma_n < 1$ can be used to find a solution, according to the arguments exposed above.
Remark 3. 
The complexity of this iterative algorithm is similar to that given for the averaged power method, since the core of the procedure is the product of a matrix (which remains constant over the process) and a vector.
Example 7. 
Let us consider the equation $A X = b$, where
$$A = \begin{pmatrix} 1 & -2/3 & 0 & -1/3 \\ -1/3 & 1 & -1/3 & -1/3 \\ -1/3 & -1/3 & 1 & -1/3 \\ -1/3 & -1/3 & -1/3 & 1 \end{pmatrix}$$
and $b = (-1, 0, 0, 1)^T$. The spectral radius of $A_1 = I - A$ is $1$, and the typical fixed-point method does not converge. The N-iteration has been applied to the map $T X = A_1 X + b$ with the values of the parameters equal to $1/2$ and starting point $X_0 = (0.5, 0.5, 0.5, 0.5)$. Table 5 collects ten approximations of the solution along with their relative errors, computed as $E_n = \|X_n - X_{n-1}\|/\|X_{n-1}\|$. The stopping criterion was $E_n < 10^{-6}$. Figure 2 represents the iteration number (x-axis) along with $E_n$ (y-axis).
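A NumPy sketch of this resolution, with the matrix and right-hand side as reconstructed above (the paper's computations were done in Mathematica):

```python
import numpy as np

A = np.array([[1, -2/3, 0, -1/3],
              [-1/3, 1, -1/3, -1/3],
              [-1/3, -1/3, 1, -1/3],
              [-1/3, -1/3, -1/3, 1]])
b = np.array([-1.0, 0.0, 0.0, 1.0])
A1 = np.eye(4) - A                       # iteration matrix, spectral radius 1
T = lambda X: A1 @ X + b

X = np.array([0.5, 0.5, 0.5, 0.5])       # X_0
for _ in range(200):
    X_old = X.copy()
    V = 0.5 * X + 0.5 * T(X)
    W = 0.5 * X + 0.5 * V
    X = 0.5 * W + 0.5 * T(W)
    if np.linalg.norm(X - X_old) / np.linalg.norm(X_old) < 1e-6:
        break
print(X)   # approx (-0.25, 0.5, 0.5, 1.25), cf. Table 5
```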

5. Solution of a Fredholm Integral Equation of the First Kind by Means of Orthogonal Polynomials

This section is devoted to solving integral equations of the Fredholm type of the first kind, given by the following expression:
$$\int_a^b k(x, y)\, v(y)\, dy = u(x), \qquad (3)$$
assuming that $u, v \in L^2([a,b])$, $k \in L^2([a,b] \times [a,b])$, and the function $u$ is known. The notation $L^2([a,b])$ means the space of square-integrable real functions defined on a compact real interval $[a,b]$.
We must notice that the solution of this equation is not unique, since if $v^*$ is a solution of (3) and $h$ belongs to the null space of the operator $K$ defined as
$$K f(x) := \int_a^b k(x, y) f(y)\, dy,$$
then $v^* + h$ is a solution as well.
The map k ( x , y ) is called the kernel of the Fredholm operator K. The following are interesting particular cases of k:
  • If $k(x,y) = e^{-ixy}$, $Kf$ is the Fourier transform of $f$.
  • For $k(x,y) = 1/(x-y)$, we have the Hilbert transform of $f$.
  • When $k(x,y) = e^{-xy}$, the Laplace transform of $f$ is obtained.
We consider in this section an arbitrary kernel $k \in L^2([a,b] \times [a,b])$.
Assuming that $[a,b] = [-1,1]$, we consider the expansions of $u$ and $v$ in terms of an orthogonal basis of the space $L^2$ in order to find a solution of (3). In this case we have chosen the Legendre polynomials, which compose an orthogonal basis of $L^2([-1,1])$. They are defined for $n = 0, 1, \dots$ as
$$P_n(x) = \frac{1}{2^n} \sum_{k=0}^{n} \binom{n}{k}^2 (x+1)^{n-k} (x-1)^k.$$
Legendre polynomials satisfy the following equalities:
$$\int_{-1}^{1} P_n(x) P_m(x)\, dx = \begin{cases} 0 & n \neq m, \\[4pt] \dfrac{2}{2n+1} & n = m. \end{cases}$$
We have considered here orthonormal Legendre polynomials $p_n$, that is to say,
$$p_n(x) := P_n(x)/\|P_n\|_2 = \sqrt{\frac{2n+1}{2}}\, P_n(x),$$
in order to facilitate the computation of the coefficients of a function with respect to the basis. In this way, for $f \in L^2([-1,1])$,
$$f(x) = \sum_{m=0}^{\infty} c_m p_m(x),$$
where
$$c_m = \langle f, p_m\rangle = \int_{-1}^{1} f(x) p_m(x)\, dx.$$
The convergence of the sum holds with respect to the norm $\|\cdot\|_2$ in $L^2([-1,1])$. If the domain of integration in Equation (3) is a different interval, a simple change of variable transforms $(P_m)$ into a system of orthogonal polynomials defined on it.
To solve Equation (3), let us consider Legendre expansions of the maps $u, v$ up to the $q$-th term:
$$u(x) \approx \sum_{m=0}^{q} u_m p_m(x), \qquad v(x) \approx \sum_{m=0}^{q} v_m p_m(x).$$
The discretization of the integral Equation (3) becomes
$$u(x) = \sum_{m=0}^{q} u_m p_m(x) = \int_a^b k(x,y) \sum_{j=0}^{q} v_j p_j(y)\, dy = \sum_{j=0}^{q} v_j \int_a^b k(x,y) p_j(y)\, dy.$$
Thus,
$$u_m = \langle u, p_m\rangle = \sum_{j=0}^{q} v_j \Big\langle \int_a^b k(x,y) p_j(y)\, dy,\; p_m \Big\rangle = \sum_{j=0}^{q} v_j \int_a^b \!\!\int_a^b k(x,y) p_m(x) p_j(y)\, dx\, dy.$$
Denoting $c_{mj} := \int_a^b \int_a^b k(x,y) p_m(x) p_j(y)\, dx\, dy$, we obtain a system of linear equations whose unknowns are the coefficients $(v_j)$ of the sought function $v$:
$$u_m = \sum_{j=0}^{q} c_{mj} v_j,$$
for $m = 0, 1, 2, \dots, q$. The linear system can be written as
$$U = C V,$$
where $C = (c_{mj})$, $U = (u_0, u_1, u_2, \dots, u_q)^T$ and $V = (v_0, v_1, v_2, \dots, v_q)^T$. The system can be transformed into the fixed-point problem
$$T V = (I - C) V + U = V,$$
as explained in the previous section. This equation can be solved using the described method whenever $\|I - C\| \leq 1$ for some subordinate norm of the matrix $I - C$.
Example 8. 
In order to solve the following Fredholm integral equation of the first kind,
$$\int_0^1 4xt\, v(t)\, dt = x,$$
we used the Legendre polynomials defined on the interval $[0,1]$:
$$p_m(x) = P_m(2x-1)/\|P_m(2x-1)\|_2.$$
We have considered an expansion with $q = 4$ and obtained a linear system of five equations with matrix $C \in M(5 \times 5)$,
$$c_{mj} = 4 \int_0^1 \!\!\int_0^1 x y\, p_m(x) p_j(y)\, dx\, dy.$$
The unknowns $V = (v_0, \dots, v_4)$ are the coefficients of $v$ with respect to $(p_m)$, and $U = (0.5, 0.288675, 0, 0, 0)$ is the vector of coefficients of $u$ with respect to the same basis. The system $U = C V$ has been transformed into $V = (I - C) V + U$.
Since $\rho(I - C) \approx 0.91$ (close to $1$), we have applied the N-algorithm with all the parameters equal to $1/2$. Starting at $V_0 = (0.5, 0.5, 0.5, 0.5, 0.5)$, we obtain at the 18th iteration the approximate solution
$$v(x) \approx 0.364601 + 1.15891\, x - 3.30005\, x^2 + 3.82515\, x^3 - 1.55503\, x^4,$$
with an error of
$$E_{18} = \Big\| \int_0^1 4xt\, v(t)\, dt - x \Big\|_2 \approx 0.001,$$
and stopping criterion $E_n < 10^{-3}$. Figure 3 displays the 6th, 12th and 18th approximations of the solution of the equation. For the 18th Picard iteration, $\bar{v}(x)$, the error is
$$\bar{E}_{18} = \Big\| \int_0^1 4xt\, \bar{v}(t)\, dt - x \Big\|_2 \approx 0.12.$$
Consequently, the N-iteration may perform better even when the Picard method can be applied.
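The following NumPy sketch assembles $C$ and $U$ with Gauss–Legendre quadrature and applies the N-iteration (an illustration; the quadrature order and tolerance are our choices). Since the solution of the equation is not unique, the limit depends on $V_0$ and need not coincide with the expansion above, so the final line checks the residual of $U = CV$ instead:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

q = 4
t, w = leggauss(20)
xg, wg = 0.5 * (t + 1.0), 0.5 * w                 # Gauss nodes/weights on [0, 1]

def p(m, s):
    """Orthonormal shifted Legendre polynomial p_m on [0, 1]."""
    c = np.zeros(m + 1); c[m] = 1.0
    return np.sqrt(2 * m + 1) * np.polynomial.legendre.legval(2 * s - 1.0, c)

a = np.array([wg @ (xg * p(m, xg)) for m in range(q + 1)])  # <x, p_m> on [0,1]
C = 4.0 * np.outer(a, a)           # c_mj = 4 <x, p_m> <y, p_j> for k(x,y) = 4xy
U = a.copy()                       # coefficients of u(x) = x in the same basis

T = lambda z: (np.eye(q + 1) - C) @ z + U
V = np.full(q + 1, 0.5)            # V_0
for _ in range(200):
    V_old = V.copy()
    g1 = 0.5 * V + 0.5 * T(V)
    h1 = 0.5 * V + 0.5 * g1
    V = 0.5 * h1 + 0.5 * T(h1)
    if np.linalg.norm(V - V_old) < 1e-10:
        break
print(np.linalg.norm(C @ V - U))   # residual of U = C V: should be near zero
```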

6. Spectrum of a Compact Operator and General Solution of Fredholm Integral Equations of the First Kind

In this section we explore the relationships between spectral theory and the resolution of a Fredholm integral equation of the first kind, given by the following expression:
$$K f = g,$$
where $f, g \in L^2([a,b])$ and $K : L^2([a,b]) \to L^2([a,b])$ is defined as
$$K f(x) := \int_a^b k(x,y) f(y)\, dy,$$
where $k \in L^2([a,b] \times [a,b])$. It is well known that $K$ is a linear and compact operator.
A linear map $K : E \to E$, where $E$ is a normed space, is self-adjoint if $K = K^*$, where $K^*$ is the adjoint of $K$. The set of eigenvalues of a linear operator is called the point spectrum of $K$, $\sigma(K)$, and $\lambda \in \sigma(K)$ if there exists $v \neq 0$ such that $K v = \lambda v$. In this case $v$ is an eigenvector of $K$ (or an eigenfunction, in the case where $E$ is a functional space). The set $E_\lambda = \{v \in E : K v = \lambda v\}$ is the eigenspace associated with $\lambda$.
The next theorems describe the spectrum of compact operators (see the references [26,27], for instance), and provide a kind of “diagonalization” of these maps.
Theorem 2. 
Let $K : E \to E$ be a linear, compact and self-adjoint operator. Then $K$ has an orthonormal basis of eigenvectors and
  • If the cardinal of $\sigma(K)$ is not finite, then the spectrum is countably infinite and the only cluster point of $\sigma(K)$ is $0$, that is to say, $\lim_{m\to\infty} \lambda_m = 0$, where $\lambda_m \in \sigma(K)$.
  • The eigenspaces $E_\lambda$ corresponding to different eigenvalues are mutually orthogonal.
  • For $\lambda \neq 0$, $E_\lambda$ has finite dimension.
According to this theorem, the eigenvalues of a compact self-adjoint operator can be ordered as
$$|\lambda_1| \geq |\lambda_2| \geq \dots \geq |\lambda_m| \geq \dots,$$
and the action of $K$ on every element $u \in E$ can be expressed as
$$K u = \sum_{m=1}^{\infty} \lambda_m \langle u, e_m\rangle e_m,$$
where $(e_m)$ is an orthonormal basis of eigenvectors of $K$. If $\sigma(K)$ is finite, then $K$ has a finite range. If additionally $K$ is positive, then the eigenvalues also are. If $K$ is strictly positive, then the inverse operator admits the expression
$$K^{-1} u = \sum_{m=1}^{\infty} \lambda_m^{-1} \langle u, e_m\rangle e_m,$$
and, in general, $K^{-1}$ is a densely defined unbounded operator, because the range of $K$ is dense in $E$.
For a compact operator, the map $K^* K$ is positive and self-adjoint, and the positive square roots of the eigenvalues of $K^* K$ are the singular values of $K$. For a not necessarily self-adjoint operator we have the following result, known as the Singular Value Decomposition of a compact operator $K$.
Theorem 3. 
Let $K : E \to E$ be a non-null linear and compact operator. Let $(\sigma_m)$ be the sequence of singular values of $K$, considered according to their algebraic multiplicities and in decreasing order. Then there exist orthonormal sequences $(v_m)$ and $(u_m)$ in $E$ such that, for $m \geq 1$,
$$K v_m = \sigma_m u_m \qquad (7)$$
and
$$K^* u_m = \sigma_m v_m. \qquad (8)$$
For every $u \in E$,
$$u = \sum_{m=1}^{\infty} \langle u, v_m\rangle v_m + w,$$
where $w \in Ker(K)$, and $Ker(K)$ denotes the null space of $K$. Moreover,
$$K u = \sum_{m=1}^{\infty} \sigma_m \langle u, v_m\rangle u_m.$$
Remark 4. 
According to the equalities (7) and (8), for $m \geq 1$,
$$(K^* K) v_m = \sigma_m^2 v_m$$
and
$$(K K^*) u_m = \sigma_m^2 u_m,$$
that is to say, $v_m$ is an eigenvector of $K^* K$ with respect to the eigenvalue $\lambda_m = \sigma_m^2$, and $u_m$ is an eigenvector of $K K^*$ with respect to the same eigenvalue.
Let us consider the equation quoted above,
$$K f(x) := \int_a^b k(x,y) f(y)\, dy = g(x),$$
for $x \in [a,b]$, where $g \in L^2([a,b])$ is the data function and $f \in L^2([a,b])$ is the unknown. We will assume that the kernel $k(x,y)$ of $K$ satisfies
$$\|k\|_2 = \Big(\int_a^b \!\!\int_a^b |k(x,y)|^2\, dx\, dy\Big)^{1/2} < \infty,$$
so that $K$ is linear and compact.
Let us assume that $g$ can be expanded in terms of the system $(u_m)$:
$$g = \sum_{m=1}^{\infty} \langle g, u_m\rangle u_m.$$
According to (8) and the definition of the adjoint operator,
$$g = \sum_{m=1}^{\infty} \langle K f, u_m\rangle u_m = \sum_{m=1}^{\infty} \langle f, K^* u_m\rangle u_m = \sum_{m=1}^{\infty} \langle f, \sigma_m v_m\rangle u_m,$$
and
$$g = \sum_{m=1}^{\infty} \sigma_m \langle f, v_m\rangle u_m. \qquad (11)$$
Let us assume also that
$$k(x, \cdot) = \sum_{m=1}^{\infty} \langle k(x,\cdot), v_m\rangle v_m. \qquad (12)$$
Then, using the equality (7),
$$\langle k(x,\cdot), v_m\rangle = \int_a^b k(x,y) v_m(y)\, dy = K v_m(x) = \sigma_m u_m(x),$$
and by (12),
$$k(x,y) = \sum_{m=1}^{\infty} \sigma_m u_m(x) v_m(y). \qquad (13)$$
If $f$ is expanded in terms of the system $(v_m)$,
$$f = \sum_{m=1}^{\infty} f_m v_m,$$
then
$$K f = \sum_{m=1}^{\infty} f_m K v_m = \sum_{m=1}^{\infty} f_m \sigma_m u_m = g,$$
and, by (11), $f_m \sigma_m = \langle g, u_m\rangle$, so that a solution of $K f = g$ is given by
$$f = \sum_{m=1}^{\infty} \sigma_m^{-1} \langle g, u_m\rangle v_m,$$
whenever this series converges. We can conclude that if $g$ admits an expansion in terms of the system $(u_m)$, the Fredholm integral equation of the first kind has a solution.
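The conclusion above can be turned into a numerical recipe: discretize the kernel with quadrature weights, compute a truncated singular value decomposition, and assemble $f = \sum_m \sigma_m^{-1} \langle g, u_m\rangle v_m$. The following is a minimal NumPy sketch under assumed trapezoidal quadrature and an ad hoc truncation threshold; it is an illustration, not the paper's procedure:

```python
import numpy as np

def solve_first_kind(k, g, a=0.0, b=1.0, n=400, tau=1e-8):
    """Truncated-SVD sketch for  int_a^b k(x,y) f(y) dy = g(x)."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] = w[-1] = 0.5 * (b - a) / (n - 1)           # trapezoid weights
    sw = np.sqrt(w)
    M = sw[:, None] * k(x[:, None], x[None, :]) * sw[None, :]  # weighted kernel
    U, s, Vt = np.linalg.svd(M)
    keep = s > tau * s[0]                            # drop negligible sigma_m
    coef = (U[:, keep].T @ (sw * g(x))) / s[keep]    # sigma_m^{-1} <g, u_m>
    return x, (Vt[keep].T @ coef) / sw               # f = sum_m coef_m v_m

# Kernel and data of the example in Section 7; the output is the minimal-norm
# solution (approximately 28/3 - (28/3) x^3), which differs from 10 - 9x^2
# by an element of the null space of K.
x, f = solve_first_kind(lambda x, y: 2 * x**2 + 3 * y**3,
                        lambda x: 3 + 14 * x**2)
```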

7. Solution of a Fredholm Integral Equation of the First Kind with Separable Kernel

We consider now the case where $K$ has a separable kernel $k$, that is to say, $k$ can be expressed as
$$k(x,y) = \sum_{j=1}^{r} \alpha_j(x) \beta_j(y),$$
for some $r \in \mathbb{N}$.
Spectrum of K:
As suggested by the equality (13), we can guess that $K$ has eigenfunctions of the type
$$\varphi = \sum_{j=1}^{r} c_j \alpha_j.$$
In order to find $\varphi$, we consider that
$$K\varphi(x) = \int_a^b \Big(\sum_{i=1}^{r} \alpha_i(x)\beta_i(y)\Big) \Big(\sum_{j=1}^{r} c_j \alpha_j(y)\Big)\, dy = \sum_{i,j=1}^{r} c_j\, \alpha_i(x) \int_a^b \beta_i(y) \alpha_j(y)\, dy,$$
that is,
$$K\varphi(x) = \sum_{i,j=1}^{r} A_{ij}\, c_j\, \alpha_i(x) = \lambda \varphi(x),$$
where
$$A_{ij} = \int_a^b \beta_i(y) \alpha_j(y)\, dy, \qquad (14)$$
and the problem is reduced to solving the eigenvalue problem
$$A c = \lambda c,$$
where $c = (c_1, c_2, \dots, c_r)^T$ is the vector of coefficients of $\varphi$ with respect to $(\alpha_j)$.
Null space of K:
Let us consider now the null space of the operator $K$. If $h \in Ker(K)$,
$$K h(x) = \int_a^b k(x,y) h(y)\, dy = \sum_{j=1}^{r} \alpha_j(x) \int_a^b \beta_j(y) h(y)\, dy = \sum_{j=1}^{r} \langle \beta_j, h\rangle\, \alpha_j(x) = 0.$$
If the maps $\alpha_j$ are linearly independent, a characterization of the functions belonging to the null space is
$$\langle h, \beta_j\rangle = 0,$$
for $j = 1, 2, \dots, r$. Let us consider an orthogonal system of functions $(p_j)$, $j = 1, 2, \dots, r$, defined by a Gram–Schmidt process from $(\beta_j)$. If $w \in L^2([a,b])$ is any function,
$$\bar{w} = w - \sum_{j=1}^{r} \frac{\langle w, p_j\rangle}{\langle p_j, p_j\rangle}\, p_j$$
belongs to the null space of $K$, since
$$\langle \bar{w}, p_i\rangle = 0$$
for $i = 1, 2, \dots, r$. In order to define an orthonormal basis of the kernel, let us consider a system of functions $(h_i)$ such that $\{p_1, p_2, \dots, p_r, h_1, h_2, \dots\}$ is linearly independent and complete. Then, we define
$$\bar{h}_i = h_i - \sum_{j=1}^{r} \frac{\langle h_i, p_j\rangle}{\langle p_j, p_j\rangle}\, p_j \in Ker(K).$$
A Gram–Schmidt process on $(\bar{h}_i)$ provides an orthonormal basis $(\psi_j)$ of the null space of $K$.
General solution of $K f = g$:
The general solution of the equation $K f = g$ has the following form:
$$f(x) = \sum_{j=1}^{r} a_j \varphi_j(x) + \sum_{j=1}^{\infty} b_j \psi_j(x),$$
where $\varphi_j$ are the eigenfunctions, the coefficients $a_j$ are to be computed, and the $b_j$ are arbitrary. To find the $a_j$, let us consider that
$$\langle g, \varphi_i\rangle = \langle K f, \varphi_i\rangle = \sum_{j=1}^{r} a_j \lambda_j \langle \varphi_j, \varphi_i\rangle, \qquad (15)$$
and we have a linear system of $r$ equations with unknowns $a_j$, $j = 1, 2, \dots, r$.

Example

Consider the following Fredholm integral equation of the first kind:
$$K f(x) := \int_0^1 (2x^2 + 3y^3) f(y)\, dy = 3 + 14 x^2.$$
Let us study the characteristics of the operator K.
Spectrum of K:
The first eigenvalue of the operator was computed in Example 5, obtaining a value of $\lambda_1 = 1.709201$. The second eigenvalue was found by applying the averaged power method, but subtracting at every step the component of the approximation with respect to the first eigenfunction $(\varphi_1)$. The value obtained is $\lambda_2 = -0.292534$. The maps of the kernel $k$ are
$$\alpha_1(x) = x^2, \quad \alpha_2(x) = 1, \quad \beta_1(y) = 2, \quad \beta_2(y) = 3y^3.$$
Let us define the matrix $A$ as in (14), $A_{ij} = \int_0^1 \beta_i(y) \alpha_j(y)\, dy$:
$$A = \begin{pmatrix} 2/3 & 2 \\ 1/2 & 3/4 \end{pmatrix}.$$
The spectrum of the matrix is $\sigma(A) = \{1.709201, -0.292534\}$. Two eigenfunctions associated with these eigenvalues were computed: $\varphi_1(x) = 0.575913 + 1.104832\, x^2$ and $\varphi_2(x) = 1 - 2.08508\, x^2$. Since the map $g \in span(\varphi_1, \varphi_2)$, the equation has a solution.
Null space of K:
As explained previously, the maps h belonging to the null space of K are characterized by the following equations:
$$\langle h, \beta_i\rangle = 0,$$
for $i = 1, 2$. An orthogonal system defined from $\beta_1, \beta_2$ is
$$p_1(x) = 1, \qquad p_2(x) = x^3 - 1/4.$$
In order to construct an orthonormal basis of the null space, we consider the functions $h_1(x) = x$, $h_2(x) = x^2$, $h_3(x) = x^4$, $h_4(x) = x^5, \dots$ First, define
$$\bar{h}_1 = x - \sum_{j=1}^{2} \frac{\langle x, p_j\rangle}{\langle p_j, p_j\rangle} p_j(x) = -\frac{4}{15} + x - \frac{14}{15} x^3,$$
$$\bar{h}_2 = x^2 - \sum_{j=1}^{2} \frac{\langle x^2, p_j\rangle}{\langle p_j, p_j\rangle} p_j(x) = -\frac{2}{27} + x^2 - \frac{28}{27} x^3,$$
$$\bar{h}_3 = x^4 - \sum_{j=1}^{2} \frac{\langle x^4, p_j\rangle}{\langle p_j, p_j\rangle} p_j(x) = \frac{1}{30} - \frac{14}{15} x^3 + x^4,$$
$$\bar{h}_4 = x^5 - \sum_{j=1}^{2} \frac{\langle x^5, p_j\rangle}{\langle p_j, p_j\rangle} p_j(x) = \frac{4}{81} - \frac{70}{81} x^3 + x^5.$$
Then, by means of a Gram–Schmidt process, we obtain an orthogonal basis of the null space:
$$\hat{h}_1 = -\frac{4}{15} + x - \frac{14}{15} x^3,$$
$$\hat{h}_2 = \bar{h}_2 - \frac{\langle \bar{h}_2, \hat{h}_1\rangle}{\langle \hat{h}_1, \hat{h}_1\rangle}\, \hat{h}_1 = \frac{1}{27} - \frac{5}{12} x + x^2 - \frac{35}{54} x^3,$$
$$\hat{h}_3 = \bar{h}_3 - \frac{\langle \bar{h}_3, \hat{h}_1\rangle}{\langle \hat{h}_1, \hat{h}_1\rangle}\, \hat{h}_1 - \frac{\langle \bar{h}_3, \hat{h}_2\rangle}{\langle \hat{h}_2, \hat{h}_2\rangle}\, \hat{h}_2 = \frac{1}{70} - \frac{2}{7} x + \frac{9}{7} x^2 - 2 x^3 + x^4,$$
$$\hat{h}_4 = \bar{h}_4 - \frac{\langle \bar{h}_4, \hat{h}_1\rangle}{\langle \hat{h}_1, \hat{h}_1\rangle}\, \hat{h}_1 - \frac{\langle \bar{h}_4, \hat{h}_2\rangle}{\langle \hat{h}_2, \hat{h}_2\rangle}\, \hat{h}_2 - \frac{\langle \bar{h}_4, \hat{h}_3\rangle}{\langle \hat{h}_3, \hat{h}_3\rangle}\, \hat{h}_3 = -\frac{1}{252} + \frac{5}{42} x - \frac{5}{6} x^2 + \frac{20}{9} x^3 - \frac{5}{2} x^4 + x^5, \quad \dots$$
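A compact way to verify these polynomials is to run the same two-stage Gram–Schmidt process with exact polynomial integration; the following NumPy sketch is our own verification aid, not part of the original computation:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def ip(f, g):                      # L2(0,1) inner product of polynomials (exact)
    h = (f * g).integ()
    return h(1.0) - h(0.0)

def project_out(f, basis):         # subtract projections onto given polynomials
    for p in basis:
        f = f - (ip(f, p) / ip(p, p)) * p
    return f

p1, p2 = P([1.0]), P([-0.25, 0, 0, 1.0])          # 1 and x^3 - 1/4
raw = [P.basis(d) for d in (1, 2, 4, 5)]          # x, x^2, x^4, x^5
hbar = [project_out(h, [p1, p2]) for h in raw]    # the h-bar_i, all in Ker(K)

hhat = []                                         # Gram-Schmidt on (h-bar_i)
for h in hbar:
    hhat.append(project_out(h, hhat))
print(hhat[1].coef)   # approx (1/27, -5/12, 1, -35/54), matching h-hat_2
```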
General solution of $K f = g$:
Let the solution of the equation be
$$f(x) = a_1 \varphi_1(x) + a_2 \varphi_2(x) + h,$$
where $\varphi_1, \varphi_2$ are the eigenfunctions of $K$ and $h \in Ker(K)$. Solving the system (15), we obtain the coefficients $a_1 = 5.139840$ and $a_2 = 7.039890$. Substituting in the previous equality, we find the approximate general solution of the equation:
$$f(x) = 9.999991 - 9.000074\, x^2 + \sum_{i=1}^{\infty} \mu_i \hat{h}_i,$$
where the $\mu_i$ are arbitrary coefficients. An exact solution is $f(x) = 10 - 9x^2$.

8. Solving the Equation Kf = λ f

Let us consider now the following equation:
$$K f(x) = \int_a^b k(x,y) f(y)\, dy = \lambda f(x),$$
where the kernel is separable: $k(x,y) = \sum_{j=1}^{r} \alpha_j(x) \beta_j(y)$. It is obvious that this equation has a nontrivial solution if $\lambda$ belongs to the point spectrum of $K$. In this case the solution is any eigenfunction $\varphi$ associated with $\lambda$. For instance, let us solve the following equation:
$$K f(x) = \int_0^1 (8xy - 10x^2y^2) f(y)\, dy = f(x),$$
where $k(x,y)$ is a separable kernel with $\alpha_1(x) = 8x$, $\alpha_2(x) = -10x^2$, $\beta_1(y) = y$, $\beta_2(y) = y^2$. In this case the eigenvalue is $\lambda = 1$. The matrix associated with the operator, $A_{ij} = \int_0^1 \beta_i(y) \alpha_j(y)\, dy$, is
$$A = \begin{pmatrix} 8/3 & -5/2 \\ 2 & -2 \end{pmatrix}.$$
We look for an eigenfunction of the form $\varphi = c_1 \alpha_1 + c_2 \alpha_2$. The system of equations for the coefficients is
$$A \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 8/3 & -5/2 \\ 2 & -2 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.$$
The spectral radius of $A$ is $\rho(A) = 1$. Consequently, the usual iterative methods to solve this system do not converge. We apply the N-iteration to the matrix $A$, starting at $C_0 = (1, 1)$, and we obtain a sequence of vectors for the approximate coefficients of $\varphi$ in terms of the maps $\alpha_i$. The results are collected in Table 6. The last outcome is $c_1 = 0.375075$, $c_2 = 0.250090$, providing the approximate solution
$$\varphi(x) = \sum_{j=1}^{2} c_j \alpha_j(x) = 3.0006\, x - 2.5009\, x^2,$$
with stopping criterion $\|C_n - C_{n-1}\| < 10^{-3}$, where $C_n$ is the vector of coefficients computed at the $n$-th iteration. An exact solution is $\varphi(x) = 3x - 2.5x^2$.
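A minimal NumPy sketch of this computation, reproducing Table 6 under the same weights and starting vector (the paper's results were produced in Mathematica):

```python
import numpy as np

A = np.array([[8/3, -5/2],
              [2.0, -2.0]])        # matrix of the separable kernel
c = np.array([1.0, 1.0])           # C_0
for _ in range(50):
    c_old = c.copy()
    v = 0.5 * c + 0.5 * (A @ c)
    h = 0.5 * c + 0.5 * v
    c = 0.5 * h + 0.5 * (A @ h)
    if np.linalg.norm(c - c_old) < 1e-3:
        break
print(c)   # approx (0.375, 0.25), i.e. phi(x) = 3x - 2.5x^2
```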
Remark 5. 
All the computations have been performed using Mathematica software.

9. Conclusions

This article presents a novel algorithm for the computation of eigenvalues and eigenvectors of matrices which do not have a dominant eigenvalue. The method is extended to find eigenvalues and eigenvectors/eigenfunctions of linear operators in infinite dimensions. The iterative scheme generalizes the classical power algorithm, which becomes a particular case of it. The procedure is based on iterative averaged methods, initiated by Krasnoselskii and Mann.
The article also provides numerical procedures to find eigenvectors for known eigenvalues. Convergence theorems give sufficient conditions for the suitability of the algorithm applied to matrices and linear operators. Another section proposes a method to solve a system of linear equations iteratively, based on similar principles.
Finally, the algorithms proposed have been applied to the solution of Fredholm integral equations of the first and second kind, with different kernels and procedures. The examples given illustrate the performance and convergence of the proposed methods.

Funding

This work has not received external funds.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Zhang, X.-D. A Matrix Algebra Approach to Artificial Intelligence; Springer: Singapore, 2020.
  2. Wang, H.; Wei, Y.; Cao, M.; Xu, M.; Wu, W.; Xing, E.P. Deep inductive matrix completion for biomedical interaction prediction. In Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine, San Diego, CA, USA, 18–21 November 2019; pp. 520–527.
  3. Xu, M.; Jin, R.; Zhou, Z.-H. Speedup matrix completion with side information: Application to multi-label learning. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–8 December 2013; pp. 2301–2309.
  4. Gusak, J.; Daulbaev, T.; Ponomarev, E.; Cichocki, A.; Oseledets, I. Reduced-order modeling of deep neural networks. Comput. Math. Math. Phys. 2021, 61, 774–785.
  5. Mises, R.; Pollaczek-Geiringer, H. Praktische Verfahren der Gleichungsauflösung. Z. Angew. Math. Mech. 1929, 9, 152–164.
  6. Mann, W.R. Mean value methods in iteration. Proc. Amer. Math. Soc. 1953, 4, 506–510.
  7. Krasnoselskii, M.A. Two remarks on the method of successive approximations. Uspehi Mat. Nauk 1955, 10, 123–127. (In Russian)
  8. Ishikawa, S. Fixed points by a new iteration method. Proc. Amer. Math. Soc. 1974, 44, 147–150.
  9. Noor, M.A. New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 2000, 251, 217–229.
  10. Sahu, D.R. Applications of the S-iteration process to constrained minimization problems and split feasibility problems. Fixed Point Theory 2011, 12, 187–204.
  11. Noor, M.A.; Noor, K.I. Some novel aspects and applications of Noor iterations and Noor orbits. J. Adv. Math. Stud. 2024, 17, 276–284.
  12. Chairatsiripong, C.; Thianwan, T. Novel Noor iteration technique for solving nonlinear equations. AIMS Math. 2022, 7, 10958–10976.
  13. Yadav, M.R.; Thakur, B.S.; Sharma, A.K. Two-step iteration scheme for nonexpansive mappings in uniformly convex Banach space. J. Ineq. Spec. Funct. 2012, 3, 85–92.
  14. Sahu, D.R. Fixed points of demicontinuous nearly Lipschitzian mappings in Banach spaces. Comment. Math. Univ. Carolin. 2005, 46, 652–666.
  15. Sahu, D.R.; Beg, I. Weak and strong convergence for fixed points of nearly asymptotically nonexpansive mappings. Int. J. Modern Math. 2008, 3, 135–151.
  16. Karakaya, V.; Atalan, Y.; Dogan, K.; Bouzara, N.E.H. Some fixed point results for a new three steps iteration process in Banach spaces. Fixed Point Theory Appl. 2017, 18, 625–640.
  17. Phuengrattana, W.; Suantai, S. On the rate of convergence of Mann, Ishikawa, Noor and SP iterations for continuous functions on an arbitrary interval. J. Comput. Appl. Math. 2011, 235, 3006–3014.
  18. Osilike, M.O.; Aniagbosor, S.C. Weak and strong convergence theorems for fixed points of asymptotically nonexpansive mappings. Math. Comp. Model. 2000, 32, 1181–1191.
  19. Combettes, P.L.; Pennanen, T. Generalized Mann iterates for constructing fixed points in Hilbert spaces. J. Math. Anal. Appl. 2002, 275, 521–536.
  20. Cobzas, S.; Miculescu, R.; Nicolae, A. Lipschitz Functions; Lecture Notes in Mathematics; Springer Nature: Cham, Switzerland, 2019.
  21. Agarwal, R.P.; O'Regan, D.; Sahu, D.R. Fixed Point Theory for Lipschitzian-Type Mappings with Applications; Springer: New York, NY, USA, 2009.
  22. Navascués, M.A. Approximation sequences for fixed points of non contractive operators. J. Nonl. Funct. Anal. 2024, 20, 1–13.
  23. Hammerstein, A. Nichtlineare Integralgleichungen nebst Anwendungen. Acta Math. 1930, 54, 117–176.
  24. Navascués, M.A. New iterative methods for fixed point approximation. Application to the numerical solution of Fredholm integral equations. Arabian J. Math. 2025, 1–14.
  25. Navascués, M.A. Hammerstein nonlinear integral equations and iterative methods for the computation of common fixed points. Axioms 2025, 14, 214.
  26. Naylor, A.W.; Sell, G. Linear Operator Theory in Engineering and Science; Springer: New York, NY, USA, 1982.
  27. Kress, R. Linear Integral Equations; Springer: New York, NY, USA, 2014; Volume 82.
  28. Jerri, A.J. Introduction to Integral Equations with Applications, 2nd ed.; Wiley & Sons: Hoboken, NJ, USA, 1999.
  29. Wazwaz, A.M. A First Course in Integral Equations; World Scientific Publishing Company: Singapore, 2015.
  30. Wazwaz, A.M. Linear and Nonlinear Integral Equations: Methods and Applications; Higher Education Press: Beijing, China; Springer: Berlin/Heidelberg, Germany, 2011.
Figure 1. Errors of approximation of the dominant eigenvalue of the operator K.
Figure 2. Successive distances of the approximate solutions of a system.
Figure 3. Sixth (blue), twelfth (yellow) and eighteenth (green) approximations of the solution of a Fredholm equation.
Table 1. Approximations of an eigenvector and successive Euclidean distances.
Iteration (n) | $X_n$ | $\|X_n - X_{n-1}\|$
1 | (−0.894427, −0.447214) | 3.85933
2 | (−0.691223, −0.722642) | 0.342276
3 | (−0.708154, −0.706058) | 0.023699
4 | (−0.707037, −0.707177) | 0.001580
5 | (−0.707111, −0.707102) | 0.000105
6 | (−0.707106, −0.707107) | 0.000007
7 | (−0.707107, −0.707107) | 0.000000
Table 2. Approximations of an eigenvector with successive distances.
Iteration (n) | $X_n$ | $\|X_n - X_{n-1}\|$
1 | (−0.212899, −0.479022, 0.851594) | 3.545573
2 | (−0.212899, −0.479022, 0.851594) | 0.000001
3 | (−0.212899, −0.479022, 0.851594) | 0.000000
Table 3. Approximations of an eigenvalue of the operator K.
Iteration (n) | $\lambda_n$ | $\|Kf_n - \lambda_n f_n\|_2$ | $|\lambda_n - \lambda_{n-1}|$
1 | 1.709247 | 0.074275 | -
2 | 1.709316 | 0.014805 | 0.000069
3 | 1.709179 | 0.003165 | 0.000137
4 | 1.709186 | 0.000707 | 0.000007
5 | 1.709196 | 0.000162 | 0.000010
6 | 1.709201 | 0.000038 | 0.000004
7 | 1.709201 | 0.000009 | 0.000001
8 | 1.709201 | 0.000002 | 0.000000
9 | 1.709201 | 0.000000 | 0.000000
10 | 1.709201 | 0.000000 | 0.000000
Table 4. Approximations of an eigenvector of $\lambda = 10$.
Iteration (n) | Approximation $X_n$ | $\|AX_n - \lambda X_n\|$
1 | (0.93875, 1.06125) | 0.46138
2 | (0.91877, 1.08123) | 0.15052
3 | (0.91225, 1.08775) | 0.04911
4 | (0.91012, 1.08988) | 0.01602
5 | (0.90943, 1.09057) | 0.00523
6 | (0.90920, 1.09080) | 0.00170
7 | (0.90913, 1.09087) | 0.00056
Table 5. Approximations of a solution of the system, along with relative errors.
Iteration (n) | Approx. $X_n$ | $\|X_n - X_{n-1}\|/\|X_{n-1}\|$
1 | (−0.08333, 0.5, 0.5, 1.08333) | 0.824958
2 | (−0.21296, 0.5, 0.5, 1.21296) | 0.141414
3 | (−0.24177, 0.5, 0.5, 1.24177) | 0.028687
4 | (−0.24817, 0.5, 0.5, 1.24817) | 0.006246
5 | (−0.24959, 0.5, 0.5, 1.24959) | 0.001382
6 | (−0.24991, 0.5, 0.5, 1.24991) | 0.000307
7 | (−0.24998, 0.5, 0.5, 1.24998) | 0.000068
8 | (−0.25000, 0.5, 0.5, 1.25000) | 0.000015
9 | (−0.25000, 0.5, 0.5, 1.25000) | 0.000003
10 | (−0.25000, 0.5, 0.5, 1.25000) | 0.000000
Table 6. Approximations of the vector of coefficients of the solution, along with successive distances between them.
Iteration (n) | Approximation $C_n$ | $E_n = \|C_n - C_{n-1}\|$
1 | (0.513889, 0.416667) | 0.759330
2 | (0.405864, 0.287037) | 0.168740
3 | (0.381859, 0.258230) | 0.037498
4 | (0.376524, 0.251829) | 0.008333
5 | (0.375339, 0.250406) | 0.001852
6 | (0.375075, 0.250090) | 0.000411
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
