Article

A Double Inertial Mann-Type Method for Two Nonexpansive Mappings with Application to Urinary Tract Infection Diagnosis

by Krittin Naravejsakul 1, Pasa Sukson 1, Waragunt Waratamrongpatai 1, Phatcharapon Udomluck 1, Mallika Khwanmuang 1, Watcharaporn Cholamjiak 2 and Watcharapon Yajai 2,*
1 School of Medicine, University of Phayao, Phayao 56000, Thailand
2 Department of Mathematics, School of Science, University of Phayao, Phayao 56000, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(15), 2352; https://doi.org/10.3390/math13152352
Submission received: 22 June 2025 / Revised: 17 July 2025 / Accepted: 18 July 2025 / Published: 23 July 2025
(This article belongs to the Special Issue Variational Analysis, Optimization, and Equilibrium Problems)

Abstract

This study proposes a double inertial technique integrated with the Mann algorithm to address the fixed-point problem. Our method is further employed to tackle the split-equilibrium problem and perform classification using a urinary tract infection dataset in practical scenarios. The Extreme Learning Machine (ELM) model is utilized to categorize urinary tract infection cases based on both clinical and demographic features. It exhibits excellent precision and efficiency in differentiating infected from non-infected individuals. The results validate that the ELM provides a rapid and reliable method for handling classification tasks related to urinary tract infections.

1. Introduction

In this research, we assume that H is a real Hilbert space. Let T : H → H be a mapping. The fixed-point problem is formulated as follows:
find χ ∈ H such that Tχ = χ.
We denote by F(T) the solution set of fixed points of T. For fixed-point techniques in diverse fields such as economics, biology, chemistry, engineering, game theory, computer science, physics, geometry, control theory, and image processing, see [1,2,3,4,5]. To find the fixed points of a nonexpansive mapping T, Mann [6] in 1953 introduced an iterative method aimed at approximating fixed points of nonexpansive operators within a Hilbert space H. Let C be a nonempty closed convex subset of a real Hilbert space H. This method constructs a sequence given by χ_0 ∈ C and
χ_{k+1} = (1 − α_k)χ_k + α_k Tχ_k,
for all k ∈ ℕ, where {α_k} is a sequence of real numbers in the interval (0, 1). Known as the Mann iteration, this scheme has attracted considerable attention in the study of fixed-point approximations for nonexpansive mappings (see, e.g., [7]).
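As a concrete illustration, the Mann scheme can be sketched in a few lines of Python; the mapping T(x) = x/2 and the constant α_k = 0.5 below are illustrative assumptions, not choices made in this paper.

```python
import numpy as np

def mann_iteration(T, x0, alpha=0.5, tol=1e-10, max_iter=1000):
    """Mann iteration: x_{k+1} = (1 - alpha) x_k + alpha T(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = (1 - alpha) * x + alpha * T(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Illustrative nonexpansive mapping (an assumption, not from the paper):
# T(x) = x/2, whose unique fixed point is the origin.
fixed_point = mann_iteration(lambda v: v / 2.0, x0=[1.0, -2.0])
```

For a contraction such as this, the iterates approach the unique fixed point; for a general nonexpansive map, only weak convergence is guaranteed by the theory cited above.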
The inertial technique, widely employed to accelerate algorithm convergence, was first presented through Polyak’s [8] heavy-ball method in 1964. This method generates a sequence starting from χ_0, χ_1 ∈ H, r_k > 0, via
χ_{k+1} = χ_k + θ_k(χ_k − χ_{k−1}) − r_k∇F(χ_k), k ∈ ℕ,
where {θ_k} ⊂ [0, 1) is the extrapolation coefficient of the inertial step θ_k(χ_k − χ_{k−1}), and F : H → ℝ is differentiable. Mainge [9], in 2008, initially proposed the inertial Mann algorithm for a countable set of nonexpansive mappings {T_k}, presenting a unified formulation as follows:
χ_0, χ_1 ∈ H,
y_{k+1} = χ_k + θ_k(χ_k − χ_{k−1}),
χ_{k+1} = α_k y_{k+1} + (1 − α_k)T_k y_{k+1},
where {α_k}, {θ_k} ⊂ [0, 1] and θ_k ∈ [0, θ] for some θ ∈ [0, 1). He showed that the iterative process {χ_k} weakly converges to a shared fixed point in ⋂_{k≥1} F(T_k). Recently, several authors have extended the inertial technique by incorporating double inertial terms to further enhance the convergence speed of iterative methods. Shehu and co-authors [10] proposed a two-step inertial proximal point algorithm, laying foundational work in this direction. Later, Yao et al. [11] introduced a subgradient extragradient algorithm with double inertial steps designed explicitly for variational inequalities. These methods demonstrated improved stability and faster convergence compared to traditional single-inertial algorithms. Furthermore, Yajai et al. [12] developed a double inertial embedded modified S-iteration algorithm tailored for fixed-point problems and applied it to lung cancer classification tasks. Despite these advances, existing approaches typically focus on specific monotone or variational settings. In contrast, our work introduces a double inertial structure into the classical Mann iteration and extends it to tackle both fixed-point and split-equilibrium problems, with applications in real-world medical data classification.
In 2022, Shehu [10] extended the approach by developing a double-inertial version of the single-inertial scheme. This algorithm is generated from χ_0, χ_1, η_1 ∈ H, λ > 0, as follows:
χ_{k+1} = J_λ^A η_k,
η_{k+1} = χ_{k+1} + θ(χ_{k+1} − χ_k) + δ(χ_k − χ_{k−1}),
where (δ, θ) satisfies 0 ≤ θ < 1/3 and (3θ − 1)/(3 + 4θ) < δ ≤ 0, and the operator J_λ^A := (I + λA)^{−1} is the resolvent operator.
We now turn our attention to the split-equilibrium problem (SEP), initially formulated by Moudafi [13], as described below:
find χ ∈ P such that p(χ, v) ≥ 0 for all v ∈ P,
and such that q = Aχ ∈ Q solves g(q, s) ≥ 0 for all s ∈ Q.
Here, A : H_1 → H_2 is a bounded linear operator, and p : P × P → ℝ and g : Q × Q → ℝ are bifunctions, where P and Q are nonempty closed convex subsets of the real Hilbert spaces H_1 and H_2, respectively. To solve the fixed-point problem and the split-equilibrium problem jointly, Witthayarat et al. [14] presented shrinking projection methods for split-equilibrium problems and fixed-point problems of asymptotically nonexpansive mappings; see also, for example, Ref. [15].
In 2016, Suantai [16] revised Mann’s iterative scheme [6] to address typical challenges of the split-equilibrium problem (SEP) and the fixed-point formulation involving nonspreading multivalued mappings T, as outlined below:
χ_1 ∈ P, k ≥ 1,
y_{k+1} = T_{r_k}^P(I − γA^T(I − T_{r_k}^Q)A)χ_k,
χ_{k+1} = α_k χ_k + (1 − α_k)T y_{k+1},
where γ ∈ (0, 1/L) such that L is the spectral radius of A^T A, A^T is the adjoint of A, {r_k} ⊂ (0, ∞), and {α_k} ⊂ (0, 1). In 2023, Wang et al.’s [17] work concerned split-equilibrium problems and fixed-point problems of nonexpansive mappings in Hilbert spaces. Several methods have been proposed and analyzed to solve the SEP together with the fixed-point problem in Hilbert spaces; see, for example, Refs. [16,18,19,20] and the references cited therein. Numerical iterative algorithms have also been proposed for finding a split solution lying in the set of solutions of equilibrium problems and the set of fixed points of nonexpansive operators; see, for example, Refs. [21,22,23,24,25] and the references therein.
In 2025, Yajai et al. [12] proposed a novel double inertial embedded modified S-iteration algorithm for solving fixed-point problems of nonexpansive mappings in Hilbert spaces. The algorithm was extended to handle split-equilibrium problems, with convergence theorems proven, and was applied to optimize machine learning models for lung cancer classification using a Kaggle dataset. This method is defined as follows:
χ_0, η_0, η_1 ∈ H, k ≥ 1,
η_{k+1} = (1 − α_k)χ_k + α_k T_r^P(I − γA^T(I − T_r^Q)A)χ_k,
y_{k+1} = η_{k+1} + θ_k(η_{k+1} − η_k) + δ_k(η_k − η_{k−1}),
χ_{k+1} = (1 − β_k)T_r^P(I − γA^T(I − T_r^Q)A)y_{k+1} + β_k P_E y_{k+1},
where E is a nonempty closed and convex subset of H, {α_k}, {β_k} ⊂ (0, 1), {θ_k}, {δ_k} ⊂ (−∞, ∞), and r ∈ (0, ∞). Yajai et al. [26] pioneered a double-inertial Mann strategy for single split-equilibrium problems and applied it to breast-cancer screening. The present study generalizes that framework in two orthogonal directions: (i) we treat the intersection of two nonexpansive fixed-point sets, and (ii) we introduce the projective split-equilibrium models I-II, for which a new algorithm and the corresponding convergence theorems are proved. Furthermore, we validate the methodology on UTI risk prediction, a clinical task whose data characteristics, outcome prevalence, and evaluation metrics differ significantly from those of mammographic screening.
Equilibrium problems and fixed-point problems are connected in such a way that, to solve a fixed-point problem, it is sufficient to solve the corresponding equilibrium problem, which motivates this paper. In Section 1, we introduce the background and motivation for our research, highlighting the significance of the fixed-point problem and the split-equilibrium problem in real-world applications. Section 2 provides the mathematical tools, definitions, and techniques proposed for solving fixed-point problems and split-equilibrium problems. Section 3 outlines our proposed method and proves our main results. Section 4 discusses the practical applicability of data classification and presents numerical experiments to validate the effectiveness of the algorithm and compare it with other existing approaches. Section 5 provides the conclusion of this paper.

2. Preliminaries

We first introduce the following definitions of operators, which will be used in our main results. A mapping T : H → H is called nonexpansive if ∥Tz − Tu∥ ≤ ∥z − u∥ for all z, u ∈ H.
For any x, y ∈ H,
∥x + y∥^2 ≤ ∥x∥^2 + 2⟨y, x + y⟩.
Assumption 1. 
[27] Let p : P × P → ℝ be a bifunction satisfying the following assumptions:
(A1) p(u, u) = 0 for all u ∈ P;
(A2) p is monotone, i.e., p(u, v) + p(v, u) ≤ 0 for all u, v ∈ P;
(A3) for each u, v, w ∈ P, lim sup_{c→0+} p(cw + (1 − c)u, v) ≤ p(u, v);
(A4) for each u ∈ P, v ↦ p(u, v) is convex and lower semi-continuous.
Lemma 1. 
[28] Let p : P × P → ℝ satisfy Assumption 1. Define a mapping T_r^p : H_1 → P for each r > 0 and e ∈ H_1, as follows:
T_r^p(e) = {z ∈ P : p(z, ν) + (1/r)⟨ν − z, z − e⟩ ≥ 0, ∀ν ∈ P}.
Then, the following hold:
(1) EP(p, P) = F(T_r^p);
(2) T_r^p is nonempty and single-valued;
(3) EP(p, P) is convex and closed;
(4) T_r^p is firmly nonexpansive, i.e., for any e, ν ∈ H_1,
∥T_r^p e − T_r^p ν∥^2 ≤ ⟨T_r^p e − T_r^p ν, e − ν⟩.
Next, let g : Q × Q → ℝ satisfy Assumption 1, and let Q be a nonempty closed convex subset of a Hilbert space H_2. Define a mapping T_s^g : H_2 → Q for each s > 0 and b ∈ H_2, as follows:
T_s^g(b) = {h ∈ Q : g(h, a) + (1/s)⟨a − h, h − b⟩ ≥ 0, ∀a ∈ Q}.
Then, we easily observe the following:
(1) F(T_s^g) = EP(g, Q);
(2) T_s^g is nonempty and single-valued;
(3) EP(g, Q) is closed and convex;
(4) T_s^g is firmly nonexpansive.
Lemma 2. 
[29] Let {a_k}, {b_k}, and {c_k} be sequences in [0, +∞) such that a_{k+1} ≤ a_k + c_k(a_k − a_{k−1}) + b_k for all k ≥ 1, ∑_{k=1}^∞ b_k < +∞, and there is c ∈ ℝ with 0 ≤ c_k ≤ c < 1 for all k ≥ 1.
  • Then, the following conditions are satisfied:
(i) There exists a ∈ [0, ∞) such that lim_{k→+∞} a_k = a;
(ii) ∑_{k=1}^∞ [a_k − a_{k−1}]_+ < +∞, where [R]_+ = max{R, 0}.
Lemma 3. 
[30] Let T : H → H be a nonexpansive mapping such that F(T) ≠ ∅. If there exists a sequence {a_k} in H such that a_k ⇀ a ∈ H and ∥a_k − Ta_k∥ → 0, then a ∈ F(T).
Lemma 4. 
[31] Let C be a nonempty set of H and {a_k} be a sequence in H. Assume that the following conditions hold.
(i) Every weak sequential cluster point of {a_k} belongs to C.
(ii) For every a ∈ C, the sequence {∥a_k − a∥} converges.
  • Then, {a_k} weakly converges to a point in C.

3. Main Results

For this section, assume T_1, T_2 : H → H to be nonexpansive mappings such that F(T_1) ∩ F(T_2) ≠ ∅. Let the following conditions hold:
(i) 0 < lim inf_{k→∞} α_k ≤ lim sup_{k→∞} α_k < 1;
(ii) 0 < lim inf_{k→∞} β_k ≤ lim sup_{k→∞} β_k < 1;
(iii) ∑_{k=1}^∞ |θ_k|∥x_{k+1} − x_k∥ < ∞ and ∑_{k=1}^∞ |δ_k|∥x_k − x_{k−1}∥ < ∞.
Theorem 1. 
Let the sequence { y k } be generated by Algorithm 1, and assume that the conditions (i)–(iii) hold. Then, the sequence { y k } weakly converges to an element of F ( T 1 ) F ( T 2 ) .
Algorithm 1: Double Inertial Mann-Type Method.
Initialization. Select y_0, x_0, x_1 ∈ H, {α_k}, {β_k} ⊂ (0, 1), {θ_k}, {δ_k} ⊂ (−∞, ∞), and k = 0.
Step 1. Compute
z_{k+1} = (1 − α_k)y_k + α_k T_1 y_k.
Step 2. Compute
x_{k+1} = (1 − β_k)z_{k+1} + β_k T_2 z_{k+1}.
Step 3. Compute
y_{k+1} = x_{k+1} + θ_k(x_{k+1} − x_k) + δ_k(x_k − x_{k−1}).
Replace k by  k + 1  and return to Step 1.
Proof. 
Let m ∈ F(T_1) ∩ F(T_2). By the nonexpansiveness of T_1 and T_2, we have
∥z_{k+1} − m∥ = ∥(1 − α_k)y_k + α_k T_1 y_k − m∥ ≤ (1 − α_k)∥y_k − m∥ + α_k∥T_1 y_k − m∥ ≤ ∥y_k − m∥   (1)
and
∥y_{k+1} − m∥ = ∥x_{k+1} + θ_k(x_{k+1} − x_k) + δ_k(x_k − x_{k−1}) − m∥
≤ ∥x_{k+1} − m∥ + |θ_k|∥x_{k+1} − x_k∥ + |δ_k|∥x_k − x_{k−1}∥
= ∥(1 − β_k)z_{k+1} + β_k T_2 z_{k+1} − m∥ + |θ_k|∥x_{k+1} − x_k∥ + |δ_k|∥x_k − x_{k−1}∥
≤ (1 − β_k)∥z_{k+1} − m∥ + β_k∥T_2 z_{k+1} − m∥ + |θ_k|∥x_{k+1} − x_k∥ + |δ_k|∥x_k − x_{k−1}∥
≤ ∥z_{k+1} − m∥ + |θ_k|∥x_{k+1} − x_k∥ + |δ_k|∥x_k − x_{k−1}∥.
From (1), we have
∥y_{k+1} − m∥ ≤ ∥y_k − m∥ + |θ_k|∥x_{k+1} − x_k∥ + |δ_k|∥x_k − x_{k−1}∥.   (2)
By our conditions, it follows from Lemma 2 that lim_{k→∞} ∥y_k − m∥ exists. This implies that {y_k} is bounded; consequently, {z_{k+1}} is bounded as well. On the other hand, we have
∥y_{k+1} − m∥^2 = ∥x_{k+1} + θ_k(x_{k+1} − x_k) + δ_k(x_k − x_{k−1}) − m∥^2
≤ ∥x_{k+1} − m∥^2 + 2⟨θ_k(x_{k+1} − x_k) + δ_k(x_k − x_{k−1}), y_{k+1} − m⟩
= ∥(1 − β_k)z_{k+1} + β_k T_2 z_{k+1} − m∥^2 + 2⟨θ_k(x_{k+1} − x_k) + δ_k(x_k − x_{k−1}), y_{k+1} − m⟩
= (1 − β_k)∥z_{k+1} − m∥^2 + β_k∥T_2 z_{k+1} − m∥^2 − (1 − β_k)β_k∥T_2 z_{k+1} − z_{k+1}∥^2 + 2⟨θ_k(x_{k+1} − x_k) + δ_k(x_k − x_{k−1}), y_{k+1} − m⟩
≤ ∥z_{k+1} − m∥^2 − (1 − β_k)β_k∥T_2 z_{k+1} − z_{k+1}∥^2 + 2⟨θ_k(x_{k+1} − x_k) + δ_k(x_k − x_{k−1}), y_{k+1} − m⟩
= ∥(1 − α_k)y_k + α_k T_1 y_k − m∥^2 − (1 − β_k)β_k∥T_2 z_{k+1} − z_{k+1}∥^2 + 2⟨θ_k(x_{k+1} − x_k) + δ_k(x_k − x_{k−1}), y_{k+1} − m⟩
= (1 − α_k)∥y_k − m∥^2 + α_k∥T_1 y_k − m∥^2 − (1 − α_k)α_k∥T_1 y_k − y_k∥^2 − (1 − β_k)β_k∥T_2 z_{k+1} − z_{k+1}∥^2 + 2⟨θ_k(x_{k+1} − x_k) + δ_k(x_k − x_{k−1}), y_{k+1} − m⟩
≤ ∥y_k − m∥^2 − (1 − α_k)α_k∥T_1 y_k − y_k∥^2 − (1 − β_k)β_k∥T_2 z_{k+1} − z_{k+1}∥^2 + 2⟨θ_k(x_{k+1} − x_k) + δ_k(x_k − x_{k−1}), y_{k+1} − m⟩.
Again, by our conditions and (2), we obtain
lim_{k→∞} ∥T_1 y_k − y_k∥ = lim_{k→∞} ∥T_2 z_{k+1} − z_{k+1}∥ = 0.   (3)
By the definition of {z_{k+1}}, we have
lim_{k→∞} ∥z_{k+1} − y_k∥ = lim_{k→∞} α_k∥T_1 y_k − y_k∥ = 0.   (4)
Since {y_k} is bounded, suppose that y is a weak sequential cluster point of {y_k}. It follows from (4) that y is also a weak sequential cluster point of {z_{k+1}}. By using Lemma 3 with (3), we obtain y ∈ F(T_1) ∩ F(T_2). By applying Opial’s Lemma 4, we conclude that {y_k} converges weakly to an element of F(T_1) ∩ F(T_2).    □
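For readers who wish to experiment, Algorithm 1 can be sketched directly in code. The mappings below (the scalings T_1 = (2/3)I and T_2 = (1/4)I, matching Example 1) and all parameter sequences are illustrative assumptions; any nonexpansive mappings with a common fixed point could be substituted.

```python
import numpy as np

def double_inertial_mann(T1, T2, y0, x0, x1, alpha, beta, theta, delta,
                         max_iter=500, tol=1e-8):
    # Algorithm 1:
    #   z_{k+1} = (1 - alpha_k) y_k + alpha_k T1(y_k)
    #   x_{k+1} = (1 - beta_k) z_{k+1} + beta_k T2(z_{k+1})
    #   y_{k+1} = x_{k+1} + theta_k (x_{k+1} - x_k) + delta_k (x_k - x_{k-1})
    y = np.asarray(y0, dtype=float)
    x_prev = np.asarray(x0, dtype=float)
    x = np.asarray(x1, dtype=float)
    for k in range(max_iter):
        z = (1 - alpha(k)) * y + alpha(k) * T1(y)
        x_next = (1 - beta(k)) * z + beta(k) * T2(z)
        y = x_next + theta(k) * (x_next - x) + delta(k) * (x - x_prev)
        if np.linalg.norm(x_next - x) < tol:   # Cauchy-type stopping rule
            break
        x_prev, x = x, x_next
    return y

# Illustrative setup (assumed): T1 = (2/3)I and T2 = (1/4)I have the common
# fixed-point set {0}, so the iterates should approach the origin.
y_limit = double_inertial_mann(
    T1=lambda v: (2 / 3) * v, T2=lambda v: (1 / 4) * v,
    y0=np.array([1.0, 1.0]), x0=np.array([0.5, -0.5]), x1=np.array([0.4, -0.4]),
    alpha=lambda k: 0.5, beta=lambda k: 0.5,
    theta=lambda k: 0.1 / (k + 1) ** 2, delta=lambda k: -0.05 / (k + 1) ** 2)
```

The summable decay of θ_k and δ_k here is one simple way to satisfy condition (iii) of Theorem 1.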
Assume that A : H_1 → H_2 is a bounded linear operator. Let E be a nonempty closed convex subset of a real Hilbert space H. To apply our Algorithm 1 to solve the split-equilibrium problem, we use the facts that P_E is nonexpansive, as discussed by Combettes [32], and that T_r^p(I − βA^T(I − T_r^g)A) is nonexpansive when β ∈ (0, 1/L], as demonstrated in the proof of Byrne [33]. Let g be upper semi-continuous in the first argument, and let p : P × P → ℝ, g : Q × Q → ℝ be two bifunctions satisfying Assumption 1. Let L be the spectral radius of A^T A, and let A^T be the adjoint of A, with ℏ := {χ ∈ EP(p) : Aχ ∈ EP(g)} ≠ ∅. Based on these parameters, we derive the following algorithms:
Theorem 2. 
Let the sequence { y k } be generated by Algorithms 2–4, and assume that the conditions (i)–(iii) in Theorem 1 hold. Then, the sequence { y k } weakly converges to an element in ℏ.
Algorithm 2: Double Inertial Mann-Type Method for Split-Equilibrium Problem.
Initialization. Select y_0, x_0, x_1 ∈ H, {α_k}, {β_k} ⊂ (0, 1), {θ_k}, {δ_k} ⊂ (−∞, ∞), and k = 0.
Step 1. Compute
z_{k+1} = (1 − α_k)y_k + α_k T_r^p(I − βA^T(I − T_r^g)A)y_k.
Step 2. Compute
x_{k+1} = (1 − β_k)z_{k+1} + β_k T_r^p(I − βA^T(I − T_r^g)A)z_{k+1}.
Step 3. Compute
y_{k+1} = x_{k+1} + θ_k(x_{k+1} − x_k) + δ_k(x_k − x_{k−1}).
Replace k by  k + 1  and return to Step 1.
Algorithm 3: Double Inertial Mann-Type Method for Projective Split-Equilibrium Problem I.
Initialization. Select y_0, x_0, x_1 ∈ H, {α_k}, {β_k} ⊂ (0, 1), {θ_k}, {δ_k} ⊂ (−∞, ∞), and k = 0.
Step 1. Compute
z_{k+1} = (1 − α_k)y_k + α_k T_r^p(I − βA^T(I − T_r^g)A)y_k.
Step 2. Compute
x_{k+1} = (1 − β_k)z_{k+1} + β_k P_E z_{k+1}.
Step 3. Compute
y_{k+1} = x_{k+1} + θ_k(x_{k+1} − x_k) + δ_k(x_k − x_{k−1}).
Replace k by k + 1  and return to Step 1.
Algorithm 4: Double Inertial Mann-Type Method for Projective Split-Equilibrium Problem II
Initialization. Select y_0, x_0, x_1 ∈ H, {α_k}, {β_k} ⊂ (0, 1), {θ_k}, {δ_k} ⊂ (−∞, ∞), and k = 0.
Step 1. Compute
z_{k+1} = (1 − α_k)y_k + α_k P_E y_k.
Step 2. Compute
x_{k+1} = (1 − β_k)z_{k+1} + β_k T_r^p(I − βA^T(I − T_r^g)A)z_{k+1}.
Step 3. Compute
y_{k+1} = x_{k+1} + θ_k(x_{k+1} − x_k) + δ_k(x_k − x_{k−1}).
Replace k by k + 1  and return to Step 1.
Proof. 
It suffices to show that if a ∈ F(T_r^p(I − βA^T(I − T_r^g)A)), then a ∈ ℏ. Let ρ ∈ ℏ and a = T_r^p(I − βA^T(I − T_r^g)A)a. Then, we have
∥a − ρ∥^2 = ∥T_r^p(I − βA^T(I − T_r^g)A)a − ρ∥^2
= ∥T_r^p(I − βA^T(I − T_r^g)A)a − T_r^p ρ∥^2
≤ ∥a − βA^T(I − T_r^g)Aa − ρ∥^2
= ∥a − ρ∥^2 + β^2∥A^T(I − T_r^g)Aa∥^2 + 2β⟨ρ − a, A^T(I − T_r^g)Aa⟩
= ∥a − ρ∥^2 + β^2⟨Aa − T_r^g Aa, AA^T(I − T_r^g)Aa⟩ + 2β⟨ρ − a, A^T(I − T_r^g)Aa⟩.   (5)
On the other hand, we have
β^2⟨Aa − T_r^g Aa, AA^T(I − T_r^g)Aa⟩ ≤ Lβ^2⟨Aa − T_r^g Aa, Aa − T_r^g Aa⟩ = Lβ^2∥Aa − T_r^g Aa∥^2   (6)
and
2β⟨ρ − a, A^T(I − T_r^g)Aa⟩ = 2β⟨A(ρ − a), Aa − T_r^g Aa⟩
= 2β⟨A(ρ − a) + (Aa − T_r^g Aa) − (Aa − T_r^g Aa), Aa − T_r^g Aa⟩
= 2β{⟨Aρ − T_r^g Aa, Aa − T_r^g Aa⟩ − ∥Aa − T_r^g Aa∥^2}
≤ 2β{(1/2)∥Aa − T_r^g Aa∥^2 − ∥Aa − T_r^g Aa∥^2}
= −β∥Aa − T_r^g Aa∥^2.   (7)
Using (5), (6), and (7), we have
∥a − ρ∥^2 ≤ ∥a − ρ∥^2 + Lβ^2∥Aa − T_r^g Aa∥^2 − β∥Aa − T_r^g Aa∥^2 = ∥a − ρ∥^2 + β(Lβ − 1)∥Aa − T_r^g Aa∥^2.   (8)
Since β ∈ (0, 1/L), it follows from (8) that ∥Aa − T_r^g Aa∥ = 0. This implies that Aa = T_r^g Aa. Therefore, a = T_r^p a = T_r^p(a − βA^T(0)) = T_r^p(I − βA^T(I − T_r^g)A)a, so a ∈ ℏ. □
In order to illustrate our main result, we provide an example within the infinite-dimensional Hilbert space L^2[0, 1], equipped with the norm ∥x∥ = (∫_0^1 |x(t)|^2 dt)^{1/2}, where x ∈ L^2[0, 1].
Example 1. 
Let H = L^2[0, 1] and let T_1, T_2 : H → H be defined by T_1 y(t) = (2/3)y(t) and T_2 y(t) = (1/4)y(t), for all y ∈ L^2[0, 1]. For setting the parameters θ_k and δ_k, we define:
θ_k = 2/(12∥x_{k+1} − x_k∥k^5) if x_{k+1} ≠ x_k and k > M, and θ_k = θ otherwise,   (9)
and
δ_k = 2/(15∥x_k − x_{k−1}∥k^2) if x_k ≠ x_{k−1} and k > M, and δ_k = δ otherwise,   (10)
where M is the iteration number at which we wish to stop. To perform a comparative analysis, we consider four different cases with varying choices of the operator T and initial values. The stopping criterion is based on the Cauchy error ∥x_k − x_{k−1}∥_2 < 10^{−3}.
  • Case 1: Define T_1 y(t) = (2/3)y(t), with initial settings y_0 = sin(t)/2, x_0 = t^2, and x_1 = t^2 − 1.
  • Case 2: Define T_2 y(t) = (1/4)y(t), with initial settings y_0 = sin(t)/2, x_0 = t^2, and x_1 = t^2 − 1.
  • Case 3: Define T_1 y(t) = (2/3)y(t), with initial settings y_0 = e^t, x_0 = t^2 − 1, and x_1 = t^2 + 1.
  • Case 4: Define T_2 y(t) = (1/4)y(t), with initial settings y_0 = e^t, x_0 = t^2 − 1, and x_1 = t^2 + 1.
All parameters for Algorithm 1, as well as those for the benchmark algorithm from the literature, are configured under three distinct settings, as presented in Table 1, Table 2 and Table 3, with their corresponding Cauchy plot comparisons shown in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6.
Remark 1. 
The numerical results in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6, corresponding to the parameter settings in Table 1, Table 2 and Table 3, consistently demonstrate that Algorithm 1 achieves faster convergence with fewer iterations than Mainge’s method across all four test cases.

4. Application to Data Classification Problem

Urinary tract infections (UTIs) represent one of the most prevalent infectious diseases globally, affecting approximately 150 million individuals annually and imposing substantial healthcare and economic burdens worldwide. These infections, characterized by microbial invasion and colonization of the normally sterile urinary tract, demonstrate remarkable clinical heterogeneity ranging from asymptomatic bacteriuria to life-threatening urosepsis. Escherichia coli remains the predominant causative pathogen, accounting for 75–85% of uncomplicated UTIs, followed by other Enterobacteriaceae, Enterococcus species, and Staphylococcus saprophyticus.
The pathophysiology of UTIs involves complex molecular mechanisms including bacterial adherence via specialized pili, the invasion of uroepithelial cells, the formation of intracellular bacterial communities, and the establishment of quiescent reservoirs that contribute to recurrence patterns. Host defense mechanisms encompass mechanical clearance through urine flow, antimicrobial peptide secretion, and innate immune responses, which successful uropathogens have evolved sophisticated strategies to circumvent.
Timely and accurate classification of UTIs into uncomplicated, complicated, recurrent, or healthcare-associated forms remains pivotal for guiding evidence-based diagnosis and management strategies. Uncomplicated UTIs occur in healthy individuals with normal urinary tract anatomy and function, while complicated UTIs involve patients with predisposing factors such as structural abnormalities, immunocompromise, or indwelling catheters. Recurrent UTIs, defined as three or more episodes within 12 months or two within 6 months, affect 20–30% of women following initial infection and significantly impact quality of life. Healthcare-associated UTIs, particularly catheter-associated infections, represent the most common nosocomial infection and carry substantial morbidity and mortality risks.
Such systematic stratification enables targeted antimicrobial therapy, reduces treatment failure rates, and mitigates the growing threat of antimicrobial resistance. Furthermore, standardized classification frameworks enhance research comparability, inform epidemiological surveillance, and support evidence-based clinical practice guideline development.
The integration of machine learning and artificial intelligence technologies into UTI prediction and management represents a transformative advancement in precision medicine approaches to infectious disease care. Predictive modeling systems utilizing comprehensive clinical datasets can significantly enhance diagnostic accuracy, optimize treatment selection, and improve patient outcomes through early intervention strategies.
Advanced data classification algorithms can analyze multidimensional patient data including demographic characteristics, clinical symptoms, laboratory parameters, imaging findings, and historical infection patterns to predict UTI occurrence with remarkable precision. These predictive models demonstrate particular value in identifying high-risk patients who would benefit from prophylactic interventions, enabling proactive rather than reactive healthcare approaches.
The clinical applications of UTI prediction models extend across multiple healthcare domains. In emergency departments, rapid risk stratification tools can expedite triage decisions and guide empirical antibiotic selection, potentially reducing diagnostic delays and improving patient flow efficiency. For primary care providers, predictive algorithms can identify patients requiring closer monitoring or preventive interventions, facilitating personalized care plans that address individual risk factors.
In hospital settings, predictive models can enhance antimicrobial stewardship programs by identifying patients likely to harbor resistant organisms, enabling targeted therapy selection and reducing unnecessary broad-spectrum antibiotic use. This approach not only improves individual patient outcomes but also contributes to institutional efforts to combat antimicrobial resistance.
The economic implications of accurate UTI prediction are substantial, with potential cost savings through reduced emergency department visits, decreased hospitalization rates, and prevention of complications requiring intensive interventions. Early identification of recurrence-prone patients enables implementation of cost-effective prevention strategies, including behavioral modifications, prophylactic antimicrobials, or alternative interventions such as cranberry supplementation or probiotics.
Furthermore, predictive modeling can support clinical decision-making in special populations including pregnant women, elderly patients, and immunocompromised individuals, where UTI complications carry particularly serious consequences. By identifying high-risk cases early, healthcare providers can implement intensive monitoring protocols and aggressive treatment strategies to prevent progression to severe complications such as pyelonephritis or urosepsis. The integration of real-time clinical data with predictive algorithms also enables dynamic risk assessment, allowing healthcare systems to adapt prevention and treatment strategies based on evolving patient conditions and local epidemiological patterns. This adaptive approach represents a significant advancement toward truly personalized medicine in infectious disease management, ultimately improving patient outcomes while optimizing healthcare resource utilization.
To apply machine learning, we employ the Extreme Learning Machine (ELM) framework, as introduced by Huang et al. [34]. The training dataset is defined as E := {(s_n, r_n) : s_n ∈ ℝ^N, r_n ∈ ℝ^M, n = 1, 2, …, P}, where s_n represents the input data and r_n denotes the corresponding target. In a single-hidden-layer feedforward neural network (SLFN) with R hidden nodes, the output function is given by
Q_n = ∑_{k=1}^R U_k V(c_k · s_n + d_k),
where V is the activation function, c_k is the input weight vector, d_k is the bias, and U_k is the output weight associated with the k-th hidden node. The hidden-layer matrix can be further defined as follows:
L = [ V(c_1 · s_1 + d_1) ⋯ V(c_R · s_1 + d_R) ; ⋮ ⋱ ⋮ ; V(c_1 · s_P + d_1) ⋯ V(c_R · s_P + d_R) ].
Solving the ELM means finding the optimal output weight U = [U_1, …, U_R]^T such that LU = T, where T = [r_1, …, r_P]^T is the training target. In certain cases, one finds U = L^†T, where L^† is the Moore–Penrose generalized inverse of L. When the computation of L^† is impractical, we can solve LU = T via the following least-squares problem:
min_{U ∈ ℝ^R} ∥LU − T∥_2^2.   (11)
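The ELM pipeline described above (a hidden layer with random frozen weights followed by a least-squares solve for U) can be sketched as follows; the sigmoid activation, the dimensions, and the toy data are illustrative assumptions, not the paper's clinical dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(S, T_targets, R=100):
    # Random input weights c and biases d are drawn once and frozen; only the
    # output weights U are fitted, via a least-squares solve of L U = T
    # (equivalently U = pinv(L) @ T, the Moore-Penrose solution).
    N = S.shape[1]
    c = rng.normal(size=(N, R))
    d = rng.normal(size=R)
    L = 1.0 / (1.0 + np.exp(-(S @ c + d)))   # sigmoid hidden-layer matrix, shape (P, R)
    U, *_ = np.linalg.lstsq(L, T_targets, rcond=None)
    return c, d, U

def elm_predict(S, c, d, U):
    L = 1.0 / (1.0 + np.exp(-(S @ c + d)))
    return L @ U

# Illustrative toy data (assumed): a simple binary target from random inputs.
S = rng.normal(size=(200, 5))
T_targets = (S[:, :1] > 0).astype(float)
c, d, U = elm_train(S, T_targets, R=100)
pred = elm_predict(S, c, d, U)
train_acc = float(np.mean((pred > 0.5) == (T_targets > 0.5)))
```

Because only a linear system is solved, training is fast; this is the speed advantage the abstract attributes to the ELM.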
To apply our algorithm to problem (11), we define the bifunctions p(u, v) and g(u, v) as follows:
p(u, v) = 0 if u, v ∈ P, and 1 otherwise;   g(u, v) = 0 if u, v ∈ Q, and 1 otherwise.
Under these definitions, the proposed method reduces to the split-feasibility problem (SFP), originally introduced by Censor and Elfving [35], which consists of finding a point u ∈ P such that
Au ∈ Q,   (12)
where P, Q are nonempty closed and convex subsets of ℝ^N and ℝ^M, respectively, and A is an M × N real matrix. We create a model for our algorithm of the split-feasibility problem (12) to obtain results in machine learning (11).
  • Least-squares model:
min_{U ∈ P} (1/2)∥P_Q(LU) − LU∥_2^2,
setting A = L, Q = {T}, and P = H_1.
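With the bifunctions chosen as above, T_r^p and T_r^g reduce to the metric projections onto P and Q; taking P = H_1 (no constraint) and Q = {T}, the operator T_r^p(I − βA^T(I − T_r^g)A) collapses to the gradient step x ↦ x − βA^T(Ax − T). The following is a minimal sketch of Algorithm 2 under these assumptions; the data and the inertial parameters are illustrative, not the paper's settings.

```python
import numpy as np

def algorithm2_least_squares(A, T_target, x1, beta_step, alpha=0.5, beta=0.5,
                             theta=0.1, delta=-0.05, max_iter=2000, tol=1e-10):
    # With P = H1 and Q = {T_target}, the inner operator of Algorithm 2 reduces
    # to the gradient step G(x) = x - beta_step * A^T (A x - T_target).
    def G(x):
        return x - beta_step * A.T @ (A @ x - T_target)

    y = x1.copy()
    x_prev, x = x1.copy(), x1.copy()
    for k in range(max_iter):
        z = (1 - alpha) * y + alpha * G(y)            # Step 1
        x_next = (1 - beta) * z + beta * G(z)         # Step 2
        y = (x_next
             + theta / (k + 1) ** 2 * (x_next - x)    # first inertial term
             + delta / (k + 1) ** 2 * (x - x_prev))   # second inertial term
        if np.linalg.norm(x_next - x) < tol:
            break
        x_prev, x = x, x_next
    return x_next

# Illustrative assumed data: a consistent overdetermined system, with
# beta_step < 1/L, where L is the spectral radius of A^T A.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
x_true = rng.normal(size=5)
T_target = A @ x_true
L_spec = np.linalg.norm(A, 2) ** 2
x_sol = algorithm2_least_squares(A, T_target, x1=np.zeros(5), beta_step=0.9 / L_spec)
```

In the classification experiments, the same iteration is applied with A = L (the ELM hidden-layer matrix) and T the target matrix.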
In this study, we employ the multi-class cross-entropy loss function H(y, p), where W denotes the total number of classes, y_k is the true-label indicator for class k, and p_k represents the predicted probability for class k. The loss function is defined as follows:
H(y, p) = −∑_{k=1}^W y_k · log(p_k).
In evaluating model performance, precision reflects the ability to correctly identify positive instances, recall indicates the capacity to retrieve all relevant instances, and accuracy measures the proportion of total correct predictions. The mathematical formulations of these metrics are as follows [36]:
Precision (Pre) = TP/(TP + FP) × 100%,
Recall (Rec) = TP/(TP + FN) × 100%,
Accuracy (Acc) = (TP + TN)/(TP + FP + TN + FN) × 100%.
Here, True Positives (TPs) refers to instances correctly predicted as positive; True Negatives (TNs) refers to instances correctly predicted as negative; False Positives (FPs) are incorrectly predicted as positive; and False Negatives (FNs) are incorrectly predicted as negative. Let P and N denote the number of actual positive and negative samples, respectively (e.g., infected and non-infected cases). Furthermore, the F1-score, defined as the harmonic mean of precision and recall, is given by
F1-Score = 2 × (Precision × Recall)/(Precision + Recall).
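The four metrics above can be computed directly from the confusion-matrix counts; the toy labels in this sketch are illustrative assumptions.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    # Confusion-matrix counts for binary labels (1 = positive, 0 = negative).
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = 100.0 * tp / (tp + fp)
    recall = 100.0 * tp / (tp + fn)
    accuracy = 100.0 * (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)   # harmonic mean, already in %
    return precision, recall, accuracy, f1

# Toy labels (assumed), giving tp = 2, fp = 1, fn = 1, tn = 2.
pre, rec, acc, f1 = classification_metrics([1, 1, 0, 0, 1, 0], [1, 1, 1, 0, 0, 0])
```

For the three-class UTI task, the reported scores are the analogous per-class quantities aggregated across classes.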
All results are processed with Python 3.12.10 in Visual Studio Code on a Lenovo laptop powered by an AMD Ryzen™ 5 5600H CPU, 16 GB of RAM, and a GeForce RTX™ 3050 Laptop GPU. The data were collected at the University of Phayao Hospital, and we used 90% of the data for training and 10% for testing. Clinical data were systematically extracted from the University of Phayao Hospital’s electronic health record system using comprehensive ICD-10 diagnostic codes. UTI cases were identified through specific codes including N39.0 (urinary tract infection, site unspecified), N30.0 (acute cystitis), N30.1-N30.9 (chronic and unspecified cystitis), N10 (acute pyelonephritis), N11.0-N11.9 (chronic pyelonephritis variants), and related complicated UTI codes (N12, N13.6, N15.1, and N34.0-N34.1). The Hospital Information System integration enabled automated extraction of patient demographics, clinical presentations, laboratory results, microbiological data, and treatment outcomes. Quality assurance measures included duplicate record identification, diagnostic code verification, and clinical correlation assessment to ensure data accuracy and research reproducibility.
Table 4 is a description of each feature. These data contained 332 patients: 155 from class 2, 85 from class 1, and 59 from class 0. We select the hidden node as 600. The parameters θ k and δ k are assigned according to (9) and (10), respectively.
The parameter selections for all methods are shown in Table 5.
The proposed Algorithm 2 outperforms the existing methods in all evaluation metrics, demonstrating both higher efficiency and accuracy, as shown in Table 6.
From Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11, it is evident that our Algorithm 2 achieves a well-fitted model, indicating that it effectively learns from the training data and generalizes well for classifying urinary tract infections.
The ROC curves in Figure 12 illustrate the classification performance across the three classes. Class 0 demonstrates the highest discriminative ability, with an AUC of 0.68, indicating moderate predictive accuracy. Class 1 and Class 2 yield lower AUC values of 0.58 and 0.55, respectively, suggesting limited separability from random guessing. Despite these limitations, the model shows potential in distinguishing the clinically relevant cases in Class 0, providing a promising foundation for future enhancements through targeted optimization and class-specific calibration.
The performance evaluation presented in Table 6 demonstrates significant improvements achieved by Algorithm 2 compared to the existing algorithms of Suantai [16] and Yajai [12]. Algorithm 2 achieved a superior accuracy of 78.4946% versus 72.0430% for the baseline algorithm, representing a clinically meaningful improvement of 6.45 percentage points. This enhancement translates to approximately 65 additional correct classifications per 1000 UTI cases, potentially preventing misdiagnoses that could lead to inappropriate treatment decisions or delayed interventions.
The precision metrics reveal Algorithm 2’s exceptional performance at 81.6667% compared to 52.6882% for the baseline, indicating substantially reduced false-positive rates. This 29-point improvement in precision is particularly crucial for UTI prediction as false positives can lead to unnecessary antibiotic prescriptions, contributing to antimicrobial resistance and patient exposure to potential adverse effects. The enhanced precision directly supports antimicrobial stewardship initiatives by ensuring that antibiotic interventions are recommended only for patients with genuine infection risk.
The recall performance shows Algorithm 2 achieving 77.7035% versus 66.6667% for the baseline algorithm, representing an 11-point improvement in sensitivity. This enhanced recall capability is clinically significant as it reduces false-negative rates, ensuring that fewer genuine UTI cases are missed. In clinical practice, missed UTI diagnoses can result in progression to serious complications including pyelonephritis, sepsis, and chronic kidney disease, particularly in vulnerable populations such as elderly patients or those with diabetes.
The F1-score improvement from 58.8044% to 78.3688% demonstrates Algorithm 2’s superior balanced performance between precision and recall, indicating the optimal trade-offs essential for clinical decision-support systems. This balanced optimization is crucial for UTI prediction models, as healthcare providers require algorithms that minimize both missed diagnoses and unnecessary treatments. Computational efficiency is equally important for real-world implementation. Algorithm 2 requires the same 85 iterations as the baseline but achieves convergence in 13.4534 s versus 11.2421 s. While slightly slower, the marginal time increase is clinically acceptable given the substantial accuracy improvements, particularly in non-emergency clinical settings where decision support can be integrated into routine workflow.
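The four metrics in Table 6 are all derived from confusion-matrix counts. A minimal sketch for the binary case (the paper's three-class scores are presumably averaged per class, which this toy helper does not attempt):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from binary confusion counts."""
    precision = tp / (tp + fp)           # fraction of positive calls that are correct
    recall = tp / (tp + fn)              # fraction of true positives recovered
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# e.g. 8 true positives, 2 false positives, 2 false negatives, 8 true negatives
p, r, f, a = classification_metrics(8, 2, 2, 8)
```

The F1-score being the harmonic mean is why Table 6 rewards Algorithm 2's simultaneous gains in precision and recall: the harmonic mean is dragged down sharply by whichever of the two is weaker.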
The accuracy improvements translate to tangible clinical benefits across multiple healthcare domains. In emergency departments, the enhanced precision reduces unnecessary antibiotic prescriptions and diagnostic uncertainty, potentially decreasing patient length of stay and improving resource utilization. For primary care settings, improved recall ensures that high-risk patients receive appropriate preventive interventions or closer monitoring, potentially preventing costly emergency department visits and hospitalizations. The algorithm’s performance characteristics support implementation in clinical decision support systems where balanced accuracy is essential. The 6.45-percentage-point accuracy improvement could prevent approximately 6450 misclassifications per 100,000 patients screened, representing substantial healthcare cost savings through reduced inappropriate treatments and prevented complications. Furthermore, the enhanced precision supports quality improvement initiatives by reducing false-positive alerts that can lead to alert fatigue among healthcare providers. These performance improvements demonstrate Algorithm 2’s potential for meaningful clinical impact, supporting evidence-based UTI prediction and management strategies that optimize patient outcomes while promoting responsible antimicrobial use in contemporary healthcare practice.

5. Conclusions

This study aimed to develop a double inertial Mann-type method for nonexpansive mappings to predict the risk of urinary tract infection (UTI) in individuals with vitamin D deficiency. Using a dataset composed of various clinical features, the Extreme Learning Machine (ELM) algorithm was employed for classification. Statistical analysis, including the Kruskal–Wallis and Mann–Whitney U tests, confirmed significant differences in serum vitamin D levels across different UTI groups. These findings suggest a potential link between vitamin D status and UTI risk. The results support the model’s applicability in identifying at-risk individuals and provide a basis for further investigation into AI-assisted UTI screening tools in clinical practice.
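The Kruskal–Wallis test mentioned above compares the ranks of serum vitamin D levels across UTI groups. A minimal pure-Python sketch of the H statistic (tie correction omitted for brevity; in practice one would use a library routine such as `scipy.stats.kruskal`):

```python
def kruskal_wallis_h(*groups):
    """H statistic of the Kruskal-Wallis test: rank the pooled data
    (average ranks for ties) and compare mean ranks across groups.
    The standard tie correction is omitted for simplicity."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    ranks = [0.0] * n
    i = 0
    while i < n:                      # assign average ranks over tie runs
        j = i
        while j + 1 < n and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1    # 1-based average rank of the run
        for k in range(i, j + 1):
            ranks[k] = avg_rank
        i = j + 1
    sums = [0.0] * len(groups)        # per-group rank sums and sizes
    counts = [0] * len(groups)
    for r, (_, gi) in zip(ranks, pooled):
        sums[gi] += r
        counts[gi] += 1
    return (12.0 / (n * (n + 1))
            * sum(s * s / c for s, c in zip(sums, counts))
            - 3 * (n + 1))
```

A large H (compared against a chi-squared distribution with groups − 1 degrees of freedom) indicates that at least one group's distribution differs, which is what justified the pairwise Mann–Whitney U follow-ups reported in the study.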

Author Contributions

Methodology, W.C.; Software, W.Y.; Writing—Original Draft, K.N.; Writing—Review and Editing, P.S., W.W., P.U. and M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the University of Phayao and Thailand Science Research and Innovation Fund (Fundamental Fund 2025, Grant No. 5013/2567).

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki, the Belmont report, the CIOMS Guidelines, and the International Conference on Harmonization in Good Clinical Practice or ICH-GCP and with approval from the Ethics Committee and Institutional Review Board of University of Phayao (Institutional Review Board (IRB) approval, IRB Number: HREC-UP-HSST 1.1/045/68).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Abdou, A.A.N. Fixed Point Theorems: Exploring Applications in Fractional Differential Equations for Economic Growth. Fractal Fract. 2024, 8, 243. [Google Scholar] [CrossRef]
  2. Batiha, I.; Aoua, L.; Oussaeif, T.; Ouannas, A.; lshorman, S.; Jebril, I.; Momani, S. Common fixed point theorem in non-Archimedean Menger PM-spaces using CLR property with application to functional equations. IAENG Int. J. Appl. Math. 2023, 53, 360–368. [Google Scholar]
  3. Chouhan, S.; Desai, B. Fixed-Point Theory and Its Some Real-Life Applications. In Research Highlights in Mathematics and Computer Science; Book Publisher International: Versailles, France, 2022; Volume 1, pp. 119–125. [Google Scholar] [CrossRef]
  4. Patriche, M. Bayesian abstract fuzzy economies, random quasi-variational inequalities with random fuzzy mappings and random fixed point theorems. Fuzzy Sets Syst. 2014, 245, 125–136. [Google Scholar] [CrossRef]
  5. Scarf, H. Fixed-point theorems and economic analysis. Am. Sci. 1983, 71, 289–296. [Google Scholar]
  6. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
  7. Reich, S. Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 1979, 67, 274–276. [Google Scholar] [CrossRef]
  8. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
  9. Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236. [Google Scholar] [CrossRef]
  10. Iyiola, O.S.; Shehu, Y. Convergence results of two-step inertial proximal point algorithm. Appl. Numer. Math. 2022, 182, 57–75. [Google Scholar] [CrossRef]
  11. Yao, Y.; Iyiola, O.S.; Shehu, Y. Subgradient extragradient method with double inertial steps for variational inequalities. J. Sci. Comput. 2022, 90, 71. [Google Scholar] [CrossRef]
  12. Yajai, W.; Kankam, K.; Yao, J.C.; Cholamjiak, W. A double inertial embedded modified S-iteration algorithm for nonexpansive mappings: A classification approach for lung cancer detection. Commun. Nonlinear Sci. Numer. Simul. 2025, 150, 108978. [Google Scholar] [CrossRef]
  13. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283. [Google Scholar] [CrossRef]
  14. Witthayarat, U.; Abdou, A.; Cho, Y. Shrinking projection methods for solving split equilibrium problems and fixed point problems for asymptotically nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2015, 2015, 200. [Google Scholar] [CrossRef]
  15. Gebrie, A.; Wangkeeree, R. Hybrid projected subgradient-proximal algorithms for solving split equilibrium problems and split common fixed point problems of nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2018, 2018, 5. [Google Scholar] [CrossRef]
  16. Suantai, S.; Cholamjiak, P.; Cho, Y.J.; Cholamjiak, W. On solving split equilibrium problems and fixed point problems of nonspreading multi-valued mappings in Hilbert spaces. Fixed Point Theory Appl. 2016, 2016, 35. [Google Scholar] [CrossRef]
  17. Li, Y.; Zhang, C.; Wang, Y. On solving split equilibrium problems and fixed point problems of α-nonexpansive multi-valued mappings in Hilbert spaces. Fixed Point Theory 2023, 24, 2. [Google Scholar]
  18. Bnouhachem, A. Strong convergence algorithm for split equilibrium problems and hierarchical fixed point problems. Sci. World J. 2014, 2014, 390956. [Google Scholar] [CrossRef]
  19. Kangtunyakarn, A. Hybrid algorithm for finding common elements of the set of generalized equilibrium problems and the set of fixed point problems of strictly pseudo-contractive mapping. Fixed Point Theory Appl. 2011, 2011, 274820. [Google Scholar] [CrossRef]
  20. Kazmi, K.R.; Rizvi, S.H. Iterative approximation of a common solution of a split equilibrium problem, a variational inequality problem and a fixed point problem. J. Egypt. Math. Soc. 2013, 21, 44–51. [Google Scholar] [CrossRef]
  21. Alansari, M.; Kazmi, K.R.; Ali, R. Hybrid iterative scheme for solving split equilibrium and hierarchical fixed point problems. Optim. Lett. 2020, 14, 2379–2394. [Google Scholar] [CrossRef]
  22. Harisa, S.A.; Khan, M.A.A.; Mumtaz, F. Shrinking Cesàro means method for the split equilibrium and fixed point problems in Hilbert spaces. Adv. Difference Equ. 2020, 2020, 345. [Google Scholar] [CrossRef]
  23. Inthakon, W.; Niyamosot, N. The split equilibrium problem and common fixed points of two relatively quasi-nonexpansive mappings in Banach spaces. J. Nonlinear Convex Anal. 2019, 20, 685–702. [Google Scholar]
  24. Jolaoso, L.O.; Karahan, I. A general alternative regularization method with line search technique for solving split equilibrium and fixed point problems in Hilbert spaces. J. Comput. Appl. Math. 2020, 391, 105150. [Google Scholar] [CrossRef]
  25. Petrot, N.; Rabbani, M.; Khonchaliew, M.; Dadashi, V. A new extragradient algorithm for split equilibrium problems and fixed point problems. J. Inequal. Appl. 2019, 2019, 137. [Google Scholar] [CrossRef]
  26. Yajai, W.; Nabheerong, P.; Cholamjiak, W. A double inertial Mann algorithm for split equilibrium problems application to breast cancer screening. J. Nonlinear Convex Anal. 2024, 25, 1697–1716. [Google Scholar]
  27. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145. [Google Scholar]
  28. Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136. [Google Scholar]
  29. Ofoedu, E.U. Strong convergence theorem for uniformly L-Lipschitzian asymptotically pseudocontractive mapping in real Banach space. J. Math. Anal. Appl. 2006, 321, 722–728. [Google Scholar] [CrossRef]
  30. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
  31. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Amer. Math. Soc. 1967, 73, 235–240. [Google Scholar] [CrossRef]
  32. Combettes, P.L. The convex feasibility problem in image recovery. Adv. Imaging Electron Phys. 1996, 95, 155–270. [Google Scholar]
  33. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. The split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775. [Google Scholar]
  34. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501. [Google Scholar] [CrossRef]
  35. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
  36. Thomas, T.; Pradhan, N.; Dhaka, V.S. Comparative analysis to predict breast cancer using machine learning algorithms: A survey. In Proceedings of the 2020 International Conference on Inventive Computation Technologies (ICICT), Pune, India, 26–28 February 2020; pp. 192–196. [Google Scholar]
Figure 1. Cauchy error plots comparing Algorithm 1 and Maingé’s method [9]. (a) Case 1: the blue line indicates Algorithm 1 and the red line indicates Maingé’s method; (b) Case 2: the purple line indicates Algorithm 1 and the red line indicates Maingé’s method.
Figure 2. Cauchy error plots comparing Algorithm 1 and Maingé’s method [9]. (a) Case 3: the green line indicates Algorithm 1 and the red line indicates Maingé’s method; (b) Case 4: the magenta line indicates Algorithm 1 and the red line indicates Maingé’s method.
Figure 3. Cauchy error plots comparing Algorithm 1 and Maingé’s method [9]. (a) Case 1: the blue line indicates Algorithm 1 and the red line indicates Maingé’s method; (b) Case 2: the orange line indicates Algorithm 1 and the red line indicates Maingé’s method.
Figure 4. Cauchy error plots comparing Algorithm 1 and Maingé’s method [9]. (a) Case 3: the green line indicates Algorithm 1 and the red line indicates Maingé’s method; (b) Case 4: the magenta line indicates Algorithm 1 and the red line indicates Maingé’s method.
Figure 5. Cauchy error plots comparing Algorithm 1 and Maingé’s method [9]. (a) Case 1: the blue line indicates Algorithm 1 and the red line indicates Maingé’s method; (b) Case 2: the orange line indicates Algorithm 1 and the red line indicates Maingé’s method.
Figure 6. Cauchy error plots comparing Algorithm 1 and Maingé’s method [9]. (a) Case 3: the green line indicates Algorithm 1 and the red line indicates Maingé’s method; (b) Case 4: the pink line indicates Algorithm 1 and the red line indicates Maingé’s method.
Figure 7. The left panel presents the training and validation accuracy of Algorithm 2—case 1, while the right panel shows its corresponding training and validation loss.
Figure 8. The left panel presents the training and validation accuracy of Algorithm 2—case 2, while the right panel shows its corresponding training and validation loss.
Figure 9. The left panel presents the training and validation accuracy of Algorithm 2—case 3, while the right panel shows its corresponding training and validation loss.
Figure 10. The left panel presents the training and validation accuracy of Suantai’s method [16], while the right panel shows its corresponding training and validation loss.
Figure 11. The left panel presents the training and validation accuracy of Yajai’s method [12], while the right panel shows its corresponding training and validation loss.
Figure 12. ROC curves for Algorithm 2 showing class-wise AUC performance after model training.
Table 1. Setting parameters.

                           α_k    β_k    θ      δ
Algorithm 1                0.9    0.5    0.9    0.01
Algorithm of Maingé [9]    0.9    -      0.9    -
Table 2. Setting parameters.

                           α_k    β_k    θ      δ
Algorithm 1                0.6    0.3    0.6    0.05
Algorithm of Maingé [9]    0.6    -      0.6    -
Table 3. Setting parameters.

                           α_k    β_k    θ      δ
Algorithm 1                0.3    0.7    0.3    0.005
Algorithm of Maingé [9]    0.3    -      0.3    -
Table 4. Data description.

                               Minimum    Maximum    Mean       Median     Mode       Standard Deviation
UTI Type                       0          2          1.3886     2          2          0.7712
Urinalysis Nitrite             0          1          0.1536     0          0          0.3611
Urinalysis Leukocyte Esterase  0          6          2.5663     3          3          1.0820
Age                            19         85         55.8223    66         21         25.9832
Weight                         8.5        90         51.9077    53         45         13.2993
BMI                            13.2810    33.2700    21.7309    21.8750    15.6210    4.1219
Serum Vit D level              4.9600     62.4000    24.8455    23.6000    20.5000    8.4848
CBC hct                        21         51         36.7169    37         36         5.1781
Height                         69         185        153.7289   155        150        13.1780
Wbc                            1          5          3.5904     4          3          1.0433
eGFR                           0          281        139.4970   140.5000   66         82.2671
DM                             0          1          0.3494     0          0          0.4775
Table 5. Setting parameters.

                            L                                  α      β      δ                                              θ
Algorithm 2—case 1          0.9999 / max(eigenvalue(AᵀA))      0.9    0.5    0.01                                           0.9
Algorithm 2—case 2          0.9999 / max(eigenvalue(AᵀA))      0.9    0.9    0.005                                          0.9
Algorithm 2—case 3          0.9999 / max(eigenvalue(AᵀA))      0.7    0.7    0.005                                          0.9
Algorithm of Suantai [16]   0.9999 / max(eigenvalue(AᵀA))      0.9    -      -                                              -
Algorithm of Yajai [12]     0.9999 / max(eigenvalue(AᵀA))      0.9    0.5    2^15 / (‖x_{k+1} − x_k‖^2 + k^2 + 2^15)        2^12 / (‖x_{k+1} − x_k‖^5 + k^5 + 2^12)
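The step size L in Table 5 is 0.9999 divided by the largest eigenvalue of AᵀA. A minimal pure-Python sketch of one way to estimate it (plain power iteration; the helper name and iteration count are illustrative, not the authors' code):

```python
def step_size_L(A, iters=200):
    """Estimate L = 0.9999 / lambda_max(A^T A) by power iteration on A^T A.
    A is a dense matrix given as a list of rows."""
    n = len(A[0])
    v = [1.0] * n                     # starting vector
    lam = 1.0
    for _ in range(iters):
        # One multiplication by A^T A, done as A^T (A v).
        Av = [sum(row[j] * v[j] for j in range(n)) for row in A]
        w = [sum(A[i][j] * Av[i] for i in range(len(A))) for j in range(n)]
        lam = max(abs(x) for x in w)  # sup-norm normalization -> lambda_max
        v = [x / lam for x in w]
    return 0.9999 / lam

print(step_size_L([[2.0, 0.0], [0.0, 1.0]]))  # λmax(AᵀA) = 4, → 0.249975
```

For the real feature matrix one would typically call a library eigensolver (e.g., `numpy.linalg.eigvalsh` on AᵀA) instead of hand-rolled power iteration.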
Table 6. Comparative efficiency of the proposed algorithm and existing methods.

                            Iterations    Computation Time (s)    Precision (%)    Recall (%)    F1-Score (%)    Accuracy (%)
Algorithm 2—case 1          85            13.4534                 81.6667          77.7035       78.3688         78.4946
Algorithm 2—case 2          85            19.2733                 79.6052          77.5362       78.1291         76.3441
Algorithm 2—case 3          85            20.4867                 81.6667          77.7035       78.3688         78.4946
Algorithm of Suantai [16]   85            11.2421                 52.6882          66.6667       58.8044         72.0430
Algorithm of Yajai [12]     85            46.9684                 76.3266          73.6901       73.8186         74.1935