Article

A Penalty Approach for Solving Generalized Absolute Value Equations

Fundamental and Numerical Mathematics Laboratory (MFNL), Department of Mathematics, Faculty of Sciences, Ferhat Abbas Setif-1 University, Setif 19000, Algeria
* Authors to whom correspondence should be addressed.
Axioms 2025, 14(7), 488; https://doi.org/10.3390/axioms14070488
Submission received: 17 April 2025 / Revised: 10 June 2025 / Accepted: 19 June 2025 / Published: 22 June 2025

Abstract

In this paper, we propose a penalty approach for solving generalized absolute value equations (GAVEs) of the type $Ax - B|x| = b$, where $A, B \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^n$. Firstly, we reformulate the GAVE as a variational inequality problem by passing through an equivalent horizontal linear complementarity problem. To approximate the resulting variational inequality, we then define a sequence of nonlinear equations containing a penalty term. Under a mild assumption, we show that the solution of this sequence converges to that of the GAVE as the penalty parameter tends to infinity. An algorithm is developed and its theoretical foundations are established. Finally, some numerical experiments are presented to show that our approach is quite effective.

1. Introduction

Consider generalized absolute value equations (GAVEs) of the following type:
Problem 1. 
$$Ax - B|x| = b, \qquad (1)$$
where $A, B \in \mathbb{R}^{n \times n}$ (with $B \neq 0$) are given matrices, $b \in \mathbb{R}^n$, and $|x|$ denotes the vector of the absolute values of the components of $x \in \mathbb{R}^n$.
For more details, the reader can refer to [1,2,3,4,5,6]. In particular, if B is the identity matrix, then the GAVE (1) reduces to the absolute value equation (AVE)
$$Ax - |x| = b. \qquad (2)$$
In recent years, the GAVE (1) and the AVE (2) have attracted considerable attention from many researchers because of their numerous applications, such as linear complementarity problems (LCPs), which include linear and convex quadratic programs [7,8], the horizontal linear complementarity problem [1], boundary value problems [9], interval linear systems [10], bimatrix games [6] and equilibrium problems [2].
Various methods have been proposed; for instance, projection methods [11], interior-point methods [1], semi-smooth Newton methods [12] and smoothing methods [13,14,15]. While effective for certain classes of problems, these methods face limitations such as sensitivity to initial points (e.g., Newton-type methods), complex implementation requirements (e.g., interior-point methods), potential difficulties with the non-convexity inherent in the absolute value terms, or the introduction of approximation errors (e.g., smoothing methods). However, it seems that there are no studies investigating penalty methods for GAVE.
Penalty methods have been widely used for linear, nonlinear, mixed nonlinear, and horizontal linear complementarity problems [16,17,18,19], as well as box-constrained variational inequalities [20,21], in both the finite-dimensional space $\mathbb{R}^n$ and infinite-dimensional function spaces. These methods are particularly attractive because they handle non-smoothness through reformulation, leverage well-established variational inequality theory, and generate simpler subproblems that are solvable using standard numerical techniques.
Inspired by the above works, we introduce a new penalty approach that approximates the GAVE through a sequence of subproblems containing a penalty term. The resulting algebraic equations are easily solvable using conventional numerical methods such as Newton-type schemes.
The outline of the paper is as follows. In Section 2, we reformulate the GAVE as a variational inequality and present the associated penalized problem. In Section 3, we analyze the convergence of the penalty approach. In Section 4, the corresponding algorithm is formulated and, under a mild assumption, its global convergence is established. Section 5 contains the numerical experiments and comments. Finally, Section 6 concludes the paper.
The notations used in this paper are as follows: $\mathbb{R}^n$ denotes the space of $n$-dimensional real vectors, with $\mathbb{R}^n_+ = \{x \in \mathbb{R}^n : x \geq 0\}$ and $\mathbb{R}^n_- = \{x \in \mathbb{R}^n : x \leq 0\}$ representing the nonnegative and nonpositive orthants, respectively. In addition, the Euclidean inner product and norm are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively.

2. GAVE and Its Penalty Formulation

In this section, we present the horizontal linear complementarity problem (HLCP) that is equivalent to the GAVE (1), following [1]. Recall that the GAVE stated in (1) is denoted by Problem 1. The equivalent HLCP reads as follows:
Problem 2. 
Find $(y, z) \in \mathbb{R}^n \times \mathbb{R}^n$ such that
$$y \geq 0, \quad z \geq 0, \quad Ny - Mz = q, \quad \langle y, z \rangle = 0, \qquad (3)$$
where
$$N = A - B, \quad M = A + B, \quad q = b,$$
$y = x^+$ and $z = x^-$, with $x = y - z$ and $|x| = y + z$. The vectors $x^+$ and $x^-$ are defined componentwise by $x_i^+ = \max(x_i, 0)$ and $x_i^- = \max(-x_i, 0)$, $i = 1, \ldots, n$. Indeed, substituting $x = y - z$ and $|x| = y + z$ into (1) gives $(A - B)y - (A + B)z = b$, which is exactly the linear equation in (3).
The following result shows the equivalence between the GAVE and the HLCP.
Proposition 1. 
The GAVE (1) is equivalent to the HLCP (3).
Proof. 
For a detailed proof, we refer to [1].    □
Now, let us define the cone
$$K = \{(y^T, z^T)^T : y \in \mathbb{R}^n,\ z \geq 0\},$$
and let
$$F(y, z) = \begin{pmatrix} Ny - Mz - b \\ (I - \beta N)y + \beta Mz + \beta b \end{pmatrix},$$
where $\beta$ is a constant, and define the following variational inequality problem VIP(F, K) corresponding to Problem 2.
Problem 3. 
Find $(y^T, z^T)^T \in K$ such that, for all $(u^T, v^T)^T \in K$:
$$\left\langle \begin{pmatrix} u - y \\ v - z \end{pmatrix}, \begin{pmatrix} Ny - Mz - b \\ (I - \beta N)y + \beta Mz + \beta b \end{pmatrix} \right\rangle \geq 0. \qquad (4)$$
Using a standard argument, one can easily show that Problem 2 is equivalent to Problem 3, in the sense that $(y^T, z^T)^T$ is a solution to Problem 3 if, and only if, it is a solution to Problem 2; for a detailed proof, we refer to [17]. Therefore, $x = y - z$ is a solution to Problem 1; see [1].
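To make the correspondence concrete, the following minimal MATLAB sketch (our illustration, not code from the paper; the data are arbitrary example values) builds a GAVE with known solution and checks numerically that $(x^+, x^-)$ satisfies the HLCP (3).

```matlab
% Sketch: numerical check of the GAVE <-> HLCP correspondence (Problem 2).
n = 5;
A = 10*eye(n) + randn(n);      % example data (chosen arbitrarily)
B = eye(n);
x = randn(n,1);
b = A*x - B*abs(x);            % by construction, x solves the GAVE (1)
y = max(x,0);  z = max(-x,0);  % y = x^+, z = x^-
N = A - B;  M = A + B;  q = b;
disp(norm(N*y - M*z - q))      % linear part N*y - M*z = q : ~0
disp(y'*z)                     % complementarity <y,z> = 0 : ~0
disp(min([y; z]))              % nonnegativity y, z >= 0 : >= 0
```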
Now, we present a penalty approach for solving Problem 1. To begin, we will establish a fundamental definition.
Definition 1 
([22]). An operator $B : \mathbb{R}^n \to \mathbb{R}^n$ is called a penalty operator relative to a convex set $C \subset \mathbb{R}^n$ if it satisfies the following conditions:
(i) 
$B$ is a continuous operator on $\mathbb{R}^n$.
(ii) 
For any $x \in C$,
$$\langle B(y), y - x \rangle = 0 \ \ \text{if}\ y \in C, \qquad \langle B(y), y - x \rangle > 0 \ \ \text{if}\ y \notin C.$$
In the literature, several penalty operators exist, including the following projection operator [22]:
$$B(x) = x - \mathrm{Pr}_C(x),$$
where $\mathrm{Pr}_C(x)$ is the Euclidean projection of $x$ onto $C$.
For the case $C = K$, the projection penalty operator is given by
$$B(y, z) = \begin{pmatrix} 0 \\ \mathrm{Pr}_{\mathbb{R}^n_-}(z) \end{pmatrix},$$
since $\mathrm{Pr}_K(y, z) = (y^T, (z^+)^T)^T$ and $z - z^+ = \mathrm{Pr}_{\mathbb{R}^n_-}(z)$.
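In MATLAB, this penalty operator and the projection onto K can each be written in one line; the following sketch (our notation, with hypothetical helper names) makes the operator explicit.

```matlab
% Sketch: penalty operator B and projection Pr_K for K = {(y,z) : z >= 0}.
% min(z,0) computes Pr_{R^n_-}(z) componentwise; only negative z is penalized.
penalty_op = @(y,z) [zeros(size(y)); min(z,0)];
proj_K     = @(y,z) [y; max(z,0)];
```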
Next, we consider the following penalty problem $\mathrm{VEP}_r(F, K)$.
Problem 4. 
Find $(y_r, z_r) \in \mathbb{R}^n \times \mathbb{R}^n$ such that
$$E(y_r, z_r) = F(y_r, z_r) + r B(y_r, z_r) = 0, \qquad (5)$$
where $r > 0$ is the penalty parameter, $B$ is the projection penalty operator above, and
$$F(y, z) = \begin{pmatrix} Ny - Mz - b \\ (I - \beta N)y + \beta Mz + \beta b \end{pmatrix}. \qquad (6)$$
Since the first block of (5) forces $Ny_r - Mz_r - b = 0$, the second block $(I - \beta N)y_r + \beta Mz_r + \beta b = y_r - \beta(Ny_r - Mz_r - b)$ reduces to $y_r$, so Equation (5) is equivalent to
$$\begin{pmatrix} Ny_r - Mz_r - b \\ y_r \end{pmatrix} + r \begin{pmatrix} 0 \\ \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \end{pmatrix} = 0. \qquad (7)$$
Equation (7) is an approximation of Problem 1: as $r \to +\infty$, we expect $x_r = y_r - z_r$, built from the solution $(y_r^T, z_r^T)^T$ of Problem 4, to converge to the solution of Problem 1. The convergence analysis is presented in the next section.
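For later use, the residual of the penalized system (7) can be coded directly; the following MATLAB function is a sketch of ours (the name penalized_residual is hypothetical), whose zero is the solution $(y_r, z_r)$ of Problem 4.

```matlab
% Sketch: residual of the penalized system (7); a zero is (y_r, z_r).
function res = penalized_residual(v, N, M, b, r)
    n = length(b);
    y = v(1:n);  z = v(n+1:2*n);
    res = [N*y - M*z - b;      % first block of (7)
           y + r*min(z,0)];    % second block: y + r * Pr_{R^n_-}(z)
end
```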

3. Theoretical Aspects of Problem 4

Firstly, we establish an upper bound for the distance between the solution of Problem 1 and $x_r = y_r - z_r$, where $(y_r^T, z_r^T)^T$ is the solution of Problem 4. To do this, we first make the following assumption on the coefficient matrix H.
Assumption 1. 
The constant $\beta$ is chosen such that the coefficient matrix
$$H = \begin{pmatrix} N & -M \\ I - \beta N & \beta M \end{pmatrix}$$
is positive definite, i.e., there exists $\alpha > 0$ such that
$$w^T H w \geq \alpha \|w\|^2$$
for any $w \in \mathbb{R}^{2n}$.
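Assumption 1 can be tested numerically for a given pair $(A, B)$ and a candidate $\beta$: a (possibly nonsymmetric) matrix H satisfies $w^T H w \geq \alpha \|w\|^2$ exactly when the smallest eigenvalue of its symmetric part is positive, and that eigenvalue can serve as $\alpha$. The sketch below is our illustration (the function name is hypothetical), with H assembled as reconstructed from (6).

```matlab
% Sketch: numerical check of Assumption 1 for a candidate beta.
function alpha = check_assumption1(A, B, beta)
    n = size(A,1);
    N = A - B;  M = A + B;  I = eye(n);
    H = [N, -M; I - beta*N, beta*M];   % coefficient matrix of Assumption 1
    alpha = min(eig((H + H')/2));      % alpha > 0  <=>  H positive definite
end
```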
Lemma 1 
([22]). If Assumption 1 is verified, then the operator F defined by (6) is strongly monotone.
Lemma 2 
([22]). Let F be a strongly monotone operator. Then F is strongly coercive, in the sense that there exists $((y^0)^T, (z^0)^T)^T \in K$ such that
$$\lim_{\left\| (y^T, z^T)^T \right\| \to \infty} \frac{\left\langle F(y, z),\ (y^T, z^T)^T - ((y^0)^T, (z^0)^T)^T \right\rangle}{\left\| (y^T, z^T)^T - ((y^0)^T, (z^0)^T)^T \right\|} = \infty.$$
Remark 1. 
Under Assumption 1, Problem 1 has a unique solution (see [22]). Further, since $F(\cdot) + rB(\cdot)$ is also strongly monotone for any $r > 0$, the problem VIP$(F + rB, \mathbb{R}^n \times \mathbb{R}^n)$ corresponding to Problem 4 has a unique solution.
We begin our convergence analysis with the following lemma.
Lemma 3. 
Let $(y_r^T, z_r^T)^T$ be the solution of Problem 4 for any $r > 0$ and let Assumption 1 be satisfied. Then there exists a positive constant M, independent of $(y_r^T, z_r^T)^T$, such that
$$\left\| (y_r^T, z_r^T)^T \right\| \leq M \quad \text{for all } r > 0.$$
Proof. 
For $r > 0$, let $(y_r^T, z_r^T)^T$ be the solution of Problem 4. Left-multiplying both sides of (5) by $(y_r^T, z_r^T)$, we obtain
$$(y_r^T, z_r^T)\, F(y_r, z_r) + r\, (y_r^T, z_r^T) \begin{pmatrix} 0 \\ \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \end{pmatrix} = 0.$$
Since $(0^T, 0^T)^T \in K$, property (ii) of Definition 1 gives
$$(y_r^T, z_r^T) \begin{pmatrix} 0 \\ \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \end{pmatrix} \geq 0,$$
which leads to
$$(y_r^T, z_r^T)\, F(y_r, z_r) = -r\, (y_r^T, z_r^T) \begin{pmatrix} 0 \\ \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \end{pmatrix} \leq 0, \quad \forall r > 0.$$
Moreover, we have
$$(y_r^T, z_r^T)\, F(y_r, z_r) = (y_r^T, z_r^T)\big(F(y_r, z_r) - F(0, 0)\big) + (y_r^T, z_r^T)\, F(0, 0),$$
which is equivalent to
$$(y_r^T, z_r^T)\big(F(y_r, z_r) - F(0, 0)\big) \leq -(y_r^T, z_r^T)\, F(0, 0).$$
From the last inequality, and according to the Cauchy–Schwarz inequality, we obtain
$$(y_r^T, z_r^T)\big(F(y_r, z_r) - F(0, 0)\big) \leq \left\| (y_r^T, z_r^T)^T \right\| \left\| F(0, 0) \right\|.$$
Thus, by the strong monotonicity of F (Lemma 1),
$$\alpha \left\| (y_r^T, z_r^T)^T \right\|^2 \leq \left\langle (y_r^T, z_r^T)^T,\ F(y_r, z_r) - F(0, 0) \right\rangle \leq \left\| (y_r^T, z_r^T)^T \right\| \left\| F(0, 0) \right\|,$$
and so,
$$\left\| (y_r^T, z_r^T)^T \right\| \leq M, \quad \text{with } M = \frac{\|F(0, 0)\|}{\alpha}.$$
This completes the proof.    □
Remark 2. 
Lemma 3 shows that, for any $r > 0$, the solution of (5) always belongs to the bounded closed set $D = \left\{ (y^T, z^T)^T \in \mathbb{R}^n \times \mathbb{R}^n : \left\| (y^T, z^T)^T \right\| \leq M \right\}$. As F is obviously continuous, there exists a positive constant L, independent of $(y_r^T, z_r^T)^T$ and r, such that
$$\left\| F(y_r, z_r) \right\| \leq L. \qquad (9)$$
Lemma 4. 
Let $(y_r^T, z_r^T)^T$ be the solution of Problem 4 and suppose that Assumption 1 is satisfied. Then, with L the constant of Remark 2 (independent of $(y_r^T, z_r^T)^T$ and r), we have
$$\left\| B(y_r, z_r) \right\| \leq \frac{\sqrt{2}\, L}{r} \qquad (10)$$
for all $r > 0$.
Proof. 
Let $(y_r^T, z_r^T)^T$ be the solution of Problem 4 for $r > 0$. We have
$$B(y_r, z_r) = \begin{pmatrix} 0 \\ \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \end{pmatrix},$$
so it suffices to bound $\left\| \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \right\|$.
Left-multiplying both sides of (7) by $B(y_r, z_r)^T$, i.e., by $\left( 0^T,\ \mathrm{Pr}_{\mathbb{R}^n_-}(z_r)^T \right)$, we obtain
$$\left( 0^T,\ \mathrm{Pr}_{\mathbb{R}^n_-}(z_r)^T \right) \begin{pmatrix} Ny_r - Mz_r - b \\ y_r \end{pmatrix} + r \left\| \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \right\|^2 = 0,$$
i.e.,
$$\mathrm{Pr}_{\mathbb{R}^n_-}(z_r)^T y_r + r \left\| \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \right\|^2 = 0. \qquad (11)$$
Adding $z_r^T z_r^-$ to both sides of (11) leads to the following:
$$\mathrm{Pr}_{\mathbb{R}^n_-}(z_r)^T y_r + z_r^T z_r^- + r \left\| \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \right\|^2 = z_r^T z_r^-. \qquad (12)$$
Since
$$z_r^T z_r^- = (z_r^+ - z_r^-)^T z_r^- = -(z_r^-)^T z_r^- \leq 0,$$
we obtain from (12), using $\mathrm{Pr}_{\mathbb{R}^n_-}(z_r) = -z_r^-$,
$$r \left\| \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \right\|^2 \leq -\mathrm{Pr}_{\mathbb{R}^n_-}(z_r)^T y_r - z_r^T z_r^- = \left\langle \begin{pmatrix} \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \\ \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \end{pmatrix}, \begin{pmatrix} -y_r \\ z_r \end{pmatrix} \right\rangle.$$
Next, through the Cauchy–Schwarz inequality, we obtain from the above
$$r \left\| \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \right\|^2 \leq \sqrt{2} \left\| \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \right\| \left\| F(y_r, z_r) \right\|,$$
and so,
$$\left\| \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \right\| \leq \frac{\sqrt{2} \left\| F(y_r, z_r) \right\|}{r}.$$
Finally, (10) holds due to the previous inequality and (9); we have
$$\left\| \mathrm{Pr}_{\mathbb{R}^n_-}(z_r) \right\| \leq \frac{\sqrt{2}\, L}{r}.$$
   □
Next, we prove our main convergence result.
Theorem 1. 
Let $(\bar{y}^T, \bar{z}^T)^T$ and $(y_r^T, z_r^T)^T$ be the solutions of Problems 3 and 4, respectively, and suppose that Assumption 1 is satisfied. Then there exists a constant $c_1 > 0$, independent of $(y_r^T, z_r^T)^T$ and r, such that:
$$\left\| \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} - \begin{pmatrix} y_r \\ z_r \end{pmatrix} \right\| \leq \frac{c_1}{\sqrt{r}}.$$
Proof. 
Let $(\bar{y}^T, \bar{z}^T)^T$ be the solution of Problem 3 and $(y_r^T, z_r^T)^T$ be the solution of Problem 4. We decompose the difference as follows:
$$\begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} - \begin{pmatrix} y_r \\ z_r \end{pmatrix} = \left[ \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} - \mathrm{Pr}_K \begin{pmatrix} y_r \\ z_r \end{pmatrix} \right] - \left[ \begin{pmatrix} y_r \\ z_r \end{pmatrix} - \mathrm{Pr}_K \begin{pmatrix} y_r \\ z_r \end{pmatrix} \right] = \begin{pmatrix} s_r \\ w_r \end{pmatrix} - \left[ \begin{pmatrix} y_r \\ z_r \end{pmatrix} - \mathrm{Pr}_K \begin{pmatrix} y_r \\ z_r \end{pmatrix} \right],$$
where
$$\begin{pmatrix} s_r \\ w_r \end{pmatrix} = \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} - \mathrm{Pr}_K \begin{pmatrix} y_r \\ z_r \end{pmatrix};$$
then,
$$\begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} - \begin{pmatrix} s_r \\ w_r \end{pmatrix} = \mathrm{Pr}_K \begin{pmatrix} y_r \\ z_r \end{pmatrix} \in K.$$
From (4), taking
$$\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} - \begin{pmatrix} s_r \\ w_r \end{pmatrix} \in K,$$
we obtain
$$\left\langle F(\bar{y}, \bar{z}),\ \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} - \begin{pmatrix} s_r \\ w_r \end{pmatrix} - \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} \right\rangle = -\left\langle F(\bar{y}, \bar{z}),\ \begin{pmatrix} s_r \\ w_r \end{pmatrix} \right\rangle \geq 0. \qquad (18)$$
Multiplying (5) by $(s_r^T, w_r^T)$, we have
$$\left\langle F(y_r, z_r),\ \begin{pmatrix} s_r \\ w_r \end{pmatrix} \right\rangle + r \left\langle B(y_r, z_r),\ \begin{pmatrix} s_r \\ w_r \end{pmatrix} \right\rangle = 0. \qquad (19)$$
Adding up (18) and (19), we deduce
$$\left\langle F(y_r, z_r) - F(\bar{y}, \bar{z}),\ \begin{pmatrix} s_r \\ w_r \end{pmatrix} \right\rangle + r \left\langle B(y_r, z_r),\ \begin{pmatrix} s_r \\ w_r \end{pmatrix} \right\rangle \geq 0. \qquad (20)$$
Note that, by the characterization of the Euclidean projection,
$$\left\langle B(y_r, z_r),\ \begin{pmatrix} s_r \\ w_r \end{pmatrix} \right\rangle = \left\langle \begin{pmatrix} y_r \\ z_r \end{pmatrix} - \mathrm{Pr}_K \begin{pmatrix} y_r \\ z_r \end{pmatrix},\ \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} - \mathrm{Pr}_K \begin{pmatrix} y_r \\ z_r \end{pmatrix} \right\rangle \leq 0.$$
Thus, (20) leads to
$$\left\langle F(\bar{y}, \bar{z}) - F(y_r, z_r),\ \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} - \mathrm{Pr}_K \begin{pmatrix} y_r \\ z_r \end{pmatrix} \right\rangle = \left\langle F(\bar{y}, \bar{z}) - F(y_r, z_r),\ \begin{pmatrix} s_r \\ w_r \end{pmatrix} \right\rangle \leq 0.$$
Then,
$$\left\langle F(\bar{y}, \bar{z}) - F(y_r, z_r),\ \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} - \begin{pmatrix} y_r \\ z_r \end{pmatrix} + \begin{pmatrix} y_r \\ z_r \end{pmatrix} - \mathrm{Pr}_K \begin{pmatrix} y_r \\ z_r \end{pmatrix} \right\rangle \leq 0,$$
which gives
$$\left\langle F(\bar{y}, \bar{z}) - F(y_r, z_r),\ \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} - \begin{pmatrix} y_r \\ z_r \end{pmatrix} \right\rangle \leq -\left\langle F(\bar{y}, \bar{z}) - F(y_r, z_r),\ \begin{pmatrix} y_r \\ z_r \end{pmatrix} - \mathrm{Pr}_K \begin{pmatrix} y_r \\ z_r \end{pmatrix} \right\rangle.$$
Using Assumption 1 (strong monotonicity), the Cauchy–Schwarz inequality, and the above inequality, it follows that
$$\alpha \left\| \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} - \begin{pmatrix} y_r \\ z_r \end{pmatrix} \right\|^2 \leq \left\langle F(\bar{y}, \bar{z}) - F(y_r, z_r),\ \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} - \begin{pmatrix} y_r \\ z_r \end{pmatrix} \right\rangle \leq \left\| F(\bar{y}, \bar{z}) - F(y_r, z_r) \right\| \left\| \begin{pmatrix} y_r \\ z_r \end{pmatrix} - \mathrm{Pr}_K \begin{pmatrix} y_r \\ z_r \end{pmatrix} \right\| = \left\| F(\bar{y}, \bar{z}) - F(y_r, z_r) \right\| \left\| B(y_r, z_r) \right\|.$$
Finally, from (9) and (10) we have $\left\| F(\bar{y}, \bar{z}) - F(y_r, z_r) \right\| \leq 2L$ and $\left\| B(y_r, z_r) \right\| \leq \sqrt{2}L/r$, so
$$\alpha \left\| \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} - \begin{pmatrix} y_r \\ z_r \end{pmatrix} \right\|^2 \leq \frac{2\sqrt{2}\, L^2}{r},$$
which implies
$$\left\| \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} - \begin{pmatrix} y_r \\ z_r \end{pmatrix} \right\| \leq \frac{c_1}{\sqrt{r}},$$
with $c_1 = L \sqrt{\dfrac{2\sqrt{2}}{\alpha}}$.    □

4. Algorithm and Its Convergence

In this section, we present Algorithm 1 for solving the GAVE.
Algorithm 1 Penalty algorithm for GAVE
Step 1. Let $\varepsilon > 0$ be a given precision and $\omega > 1$. Choose $((y^0)^T, (z^0)^T)^T \in \mathbb{R}^{2n}$, and set $r_0 = 1$ and $k = 0$.
Step 2. Find $((y^{k+1})^T, (z^{k+1})^T)^T$, the solution of (7) with $r = r_k$.
Step 3. If $\|\mathrm{Pr}_{\mathbb{R}^n_-}(z^{k+1})\| \leq \varepsilon$ or $\|(y^{k+1} - z^{k+1}) - (y^k - z^k)\| \leq \varepsilon$, then stop: $x^{k+1} = y^{k+1} - z^{k+1}$ is an approximate solution of the GAVE. Otherwise, set $r_{k+1} = \omega r_k$ and $((y^k)^T, (z^k)^T)^T = ((y^{k+1})^T, (z^{k+1})^T)^T$.
Step 4. Set $k := k + 1$ and go back to Step 2.

4.1. Comments and Remarks

  • The properties and analysis of the algorithm are closely related, in principle, to the study of Equation (7).
  • Equation (7) may be solved using one of the conventional methods (fixed point, Newton, etc.); a sketch of such an implementation is given after these remarks.
  • The choice of method is related to the properties of the matrix H appearing in Equation (7).
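To fix ideas, the following MATLAB sketch assembles the whole of Algorithm 1 with fsolve as the inner solver, as in Section 5. It is our illustration under stated assumptions (the function name, the outer iteration cap, and the modern optimoptions syntax are ours; the paper's experiments used MATLAB 7.9), not the authors' code.

```matlab
% Sketch of Algorithm 1 with fsolve (Levenberg-Marquardt) as inner solver.
function x = penalty_gave(A, B, b, omega, eps_tol)
    n = length(b);
    N = A - B;  M = A + B;
    v = zeros(2*n, 1);                          % initial point (y^0; z^0)
    r = 1;                                      % Step 1: r_0 = 1
    opts = optimoptions('fsolve', 'Algorithm', 'levenberg-marquardt', ...
                        'Display', 'off');
    for k = 1:100                               % safeguard on outer iterations
        E = @(w) [N*w(1:n) - M*w(n+1:end) - b;  % penalized system (7)
                  w(1:n) + r*min(w(n+1:end), 0)];
        v_new = fsolve(E, v, opts);             % Step 2: solve (7)
        % Step 3: stopping tests of Algorithm 1
        if norm(min(v_new(n+1:end), 0)) <= eps_tol || ...
           norm((v_new(1:n) - v_new(n+1:end)) ...
                - (v(1:n) - v(n+1:end))) <= eps_tol
            break;
        end
        r = omega*r;  v = v_new;                % r_{k+1} = omega * r_k
    end
    x = v_new(1:n) - v_new(n+1:end);            % output: x = y - z
end
```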

4.2. Convergence of Penalty Algorithm

In what follows, we present a theorem analogous to Theorem 4.4 in [22], and we establish the convergence result with a detailed proof.
Theorem 2. 
Under Assumption 1, if F is strongly coercive, then the sequence $x^k = y^k - z^k$ generated by Algorithm 1 converges to $\bar{x} = \bar{y} - \bar{z}$, its unique accumulation point, which is a solution of Problem 1.
Proof. 
The proof of this theorem is completed in three steps.
  • Existence of a solution of
$$E(y, z) = F(y, z) + r_k B(y, z) = 0.$$
The operator $E = F + r_k B$ is strongly coercive on $\mathbb{R}^n \times \mathbb{R}^n$, since F is strongly coercive; therefore, for every $k \in \mathbb{N}$ there exists $\begin{pmatrix} y^{k+1} \\ z^{k+1} \end{pmatrix}$ such that $E(y^{k+1}, z^{k+1}) = 0$ in Step 2.
  • Is the sequence $\begin{pmatrix} y^{k+1} \\ z^{k+1} \end{pmatrix}$ bounded?
We suppose the contrary, i.e., that along a subsequence indexed by $k_0$,
$$\left\| \begin{pmatrix} y^{k_0} \\ z^{k_0} \end{pmatrix} \right\| \to +\infty.$$
Set $d^{k_0} = \begin{pmatrix} y^{k_0} \\ z^{k_0} \end{pmatrix} - \begin{pmatrix} y^0 \\ z^0 \end{pmatrix}$. From Step 2 of Algorithm 1, we have
$$\left\langle E(y^{k_0}, z^{k_0}),\ \frac{d^{k_0}}{\|d^{k_0}\|} \right\rangle = \left\langle F(y^{k_0}, z^{k_0}),\ \frac{d^{k_0}}{\|d^{k_0}\|} \right\rangle + r_{k_0} \left\langle B(y^{k_0}, z^{k_0}),\ \frac{d^{k_0}}{\|d^{k_0}\|} \right\rangle = 0,$$
which implies that
$$\left\langle F(y^{k_0}, z^{k_0}),\ \frac{d^{k_0}}{\|d^{k_0}\|} \right\rangle = -r_{k_0} \left\langle B(y^{k_0}, z^{k_0}),\ \frac{d^{k_0}}{\|d^{k_0}\|} \right\rangle.$$
According to property (ii) in Definition 1 of B, we have
$$\left\langle F(y^{k_0}, z^{k_0}),\ \frac{d^{k_0}}{\|d^{k_0}\|} \right\rangle \leq 0.$$
On the other hand, the strong coercivity of F leads to the following:
$$\lim_{\left\| (y^T, z^T)^T \right\| \to \infty} \left\langle F(y, z),\ \frac{d}{\|d\|} \right\rangle = +\infty > 0, \quad \text{where } d = \begin{pmatrix} y \\ z \end{pmatrix} - \begin{pmatrix} y^0 \\ z^0 \end{pmatrix},$$
which is a contradiction. Consequently, $\begin{pmatrix} y^k \\ z^k \end{pmatrix}$ is bounded.
  • Since the sequence $\left( \begin{pmatrix} y^k \\ z^k \end{pmatrix} \right)_{k \in \mathbb{N}}$ is bounded, we can extract a subsequence converging to an accumulation point $\begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix}$. Note that, for all $\begin{pmatrix} y \\ z \end{pmatrix} \in K$,
$$\liminf_{k \to \infty}\ r_k \left\langle B(y^k, z^k),\ \begin{pmatrix} y^k \\ z^k \end{pmatrix} - \begin{pmatrix} y \\ z \end{pmatrix} \right\rangle \geq 0.$$
Therefore, since $E(y^k, z^k) = 0$,
$$\liminf_{k \to \infty} \left\langle F(y^k, z^k),\ \begin{pmatrix} y \\ z \end{pmatrix} - \begin{pmatrix} y^k \\ z^k \end{pmatrix} \right\rangle = \liminf_{k \to \infty}\ r_k \left\langle B(y^k, z^k),\ \begin{pmatrix} y^k \\ z^k \end{pmatrix} - \begin{pmatrix} y \\ z \end{pmatrix} \right\rangle \geq 0.$$
F is continuous over $\mathbb{R}^n \times \mathbb{R}^n$; then, along the subsequence,
$$\liminf_{k \to \infty} \left\langle F(y^k, z^k),\ \begin{pmatrix} y \\ z \end{pmatrix} - \begin{pmatrix} y^k \\ z^k \end{pmatrix} \right\rangle = \left\langle F(\bar{y}, \bar{z}),\ \begin{pmatrix} y \\ z \end{pmatrix} - \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} \right\rangle \geq 0.$$
We deduce that
$$\begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} \in K \quad \text{and} \quad \left\langle F(\bar{y}, \bar{z}),\ \begin{pmatrix} y \\ z \end{pmatrix} - \begin{pmatrix} \bar{y} \\ \bar{z} \end{pmatrix} \right\rangle \geq 0, \quad \forall \begin{pmatrix} y \\ z \end{pmatrix} \in K.$$
This shows that $\bar{x} = \bar{y} - \bar{z}$ is a solution of Problem 1.
This completes the proof. □

5. Computational Experiments

To provide some insight into the behavior of our penalty algorithm, we implemented it in MATLAB and ran it on a set of problems taken from the literature. The implementation was carried out in MATLAB 7.9 on a personal computer. We use $x$ and $v^0 = ((y^0)^T, (z^0)^T)^T$ to denote the solution and the initial point, respectively.
The examples were tested for different values of $\omega > 1$, with tolerance $\varepsilon = 10^{-6}$. 'Iter' denotes the number of iterations and 'CPU (s)' the computation time in seconds. The symbol // indicates that the algorithm fails to return a solution when $\omega$ takes a value greater than or equal to the last value displayed; in this case, we consider that the method does not converge.
We mention here that the nonlinear system (7) is solved with the fsolve routine from the Optimization Toolbox. More precisely, we used the quasi-Newton Levenberg–Marquardt method, which is known for its main algorithmic properties: global convergence, robustness, and efficiency.
Example 1 
([23]). The data $(A, B, b)$ of the GAVE are given by
$$A = (a_{ij}) = \begin{cases} 10 & \text{if } i = j, \\ 1 & \text{if } |i - j| = 1, \\ 0 & \text{otherwise}, \end{cases} \qquad B = (b_{ij}) = \begin{cases} 5 & \text{if } i = j, \\ 1 & \text{if } |i - j| = 1, \\ 0 & \text{otherwise}, \end{cases} \qquad i, j = 1, \ldots, n,$$
and $b = (A - B)e$, where $e$ is the vector of all ones.
The exact solution is $x^* = e$.
Using the starting point $v^0 = ((y^0)^T, (z^0)^T)^T = ((1, 2, \ldots, n)^T, (1, 2, \ldots, n)^T)^T$, the computational results are summarized in Table 1 for different sizes.
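For reproducibility, the tridiagonal data of Example 1 can be assembled as follows; this is a sketch of ours (variable names arbitrary), not the authors' script.

```matlab
% Sketch: building the data of Example 1.
n = 500;
T = diag(ones(n-1,1), 1) + diag(ones(n-1,1), -1);  % ones on |i - j| = 1
A = 10*eye(n) + T;
B = 5*eye(n) + T;
e = ones(n,1);
b = (A - B)*e;                 % so that x* = e solves A*x - B*|x| = b
v0 = [(1:n)'; (1:n)'];         % starting point used for Table 1
```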
Example 2 
([10]). Let the symmetric matrix A, the matrix B, and the vector b be given by
$$A = (a_{ij}) = \begin{cases} 4n & \text{if } i = j, \\ n & \text{if } |i - j| = 1, \\ 0.5 & \text{otherwise}, \end{cases} \qquad B = (b_{ij}) = \begin{cases} n & \text{if } i = j, \\ \frac{1}{n} & \text{if } |i - j| = 1, \\ 0.125 & \text{otherwise}, \end{cases}$$
$$b = (548, 647.5, \ldots, 647.5, 548)^T.$$
The exact solution is $x^* = \left( \frac{4}{3}, \frac{4}{3}, \ldots, \frac{4}{3} \right)^T$.
Using the starting point $v^0 = ((y^0)^T, (z^0)^T)^T = ((1, \ldots, 1)^T, (1, \ldots, 1)^T)^T$, the numerical results are summarized in Table 2 for different sizes.
Example 3 
([24]). The data $(A, B, b)$ of the GAVE are generated for various sizes n. We choose a random matrix A from a uniform distribution on $[-10, 10]$, then choose a random vector x from a uniform distribution on $[-1, 1]$, and finally compute $b = Ax - |x|$. In MATLAB notation: A = 10*(rand(n,n) - rand(n,n)); x = rand(n,1) - rand(n,1); b = A*x - abs(x); B = eye(n). The numerical results are summarized in Table 3 for these random GAVEs.

Remarks

Based on the numerical tests conducted on examples of various dimensions, we note that the number of iterations of the algorithm and the execution time depend on the value of the parameter $\omega$. Specifically, as $\omega$ increases, both the number of iterations and the computational time decrease. However, the quality of the solution risks deteriorating when $\omega$ is too large. This behavior can be explained by the fact that large values of $\omega$ make the penalized Equation (7) ill-conditioned, which can cause numerical instability in the solver and an amplification of rounding errors. It is also noticeable that the algorithm can handle GAVEs of large dimensions.

6. Conclusions

In this study, we proposed and analyzed a penalty method for solving generalized absolute value equations. The method is formulated as a sequence of penalized nonlinear equations. This alternative principle allowed us to obtain systems of equations that are easy to treat under mild hypotheses. To illustrate our contribution, we presented numerical simulations on different examples. These simulations clearly demonstrate that the results obtained are promising and consolidate our theoretical findings.
In future work, we aim to address the following points:
  • The need to further relax the convergence assumptions.
  • The search for an efficient strategy for choosing the penalty parameter.
  • The need to conduct comparative numerical studies with other recent methods.

Author Contributions

Methodology, Z.K., H.G. and M.A.; Formal analysis, Z.K., H.G. and M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Achache, M.; Hazzam, N. Solving absolute value equations via linear complementarity and interior-point methods. J. Nonl. Funct. Anal. 2018, 2018, 39. [Google Scholar] [CrossRef]
  2. Lehmann, L.; Radons, M.; Rump, S.M.; Strohm, C. Sign controlled solvers for the absolute value equation with an application to support vector machines. arXiv 2017, arXiv:1707.09174. [Google Scholar]
  3. Mangasarian, O.L.; Meyer, R.R. Absolute value equations. Linear Algebra Appl. 2006, 419, 359–367. [Google Scholar] [CrossRef]
  4. Migot, T.; Abdallah, L.; Haddou, M. Solving absolute value equation using complementarity and smoothing functions. J. Comput. Appl. Math. 2018, 327, 196–207. [Google Scholar]
  5. Rohn, J. An algorithm for computing all solutions of an absolute value equation. Optimiz. Lett. 2012, 6, 851–856. [Google Scholar] [CrossRef]
  6. Zhang, Y.; Yu, D.; Yuan, Y. On the Alternative SOR-like Iteration Method for Solving Absolute Value Equations. Symmetry 2023, 15, 589. [Google Scholar] [CrossRef]
  7. Achache, M.; Anane, N. On the Unique Solvability and Picard’s Iterative Method for Absolute Value Equations; Bulletin of Transilvania, Series III: Mathematics and Computer Sciences; Transilvania University Press: Braşov, Romania, 2021; Volume 1, pp. 13–26. [Google Scholar]
  8. Prokopyev, O. On Equivalent Reformulations for Absolute Value Equations. Comput. Optim. Appl. 2009, 44, 363–372. [Google Scholar] [CrossRef]
  9. Noor, M.A.; Iqbal, J.; Al-Said, E. Residual iterative method for solving absolute value equations. Abst. Appl. Anal. 2012, 2012, 406232. [Google Scholar] [CrossRef]
  10. Anane, N.; Achache, M. Preconditioned conjugate gradient methods for absolute value equations. J. Numer. Anal. Approx. Theory 2020, 48, 3–14. [Google Scholar] [CrossRef]
  11. Alcantara, J.H.; Chen, J.-S.; Tam, M.K. Method of alternating projections for the general absolute value equation. J. Fixed Point Theory Appl. 2023, 25, 39. [Google Scholar] [CrossRef]
  12. Bello, J.Y.; Cruz, O.P.; Prudente, L.F. On the global convergence of the inexact semi-smooth Newton method for absolute value equation. Comput. Optim. Appl. 2016, 65, 93–108. [Google Scholar] [CrossRef]
  13. Caccetta, L.; Qu, B.; Zhou, G. A globally and quadratically convergent method for absolute value equations. Comput. Optim. Appl. 2011, 48, 45–58. [Google Scholar] [CrossRef]
  14. Mangasarian, O.L. A generalized Newton method for absolute value equations. Optimiz. Lett. 2009, 3, 101–108. [Google Scholar] [CrossRef]
  15. Yong, L.Q. A smoothing Newton method for absolute value equation. Int. J. Control. Autom. Syst. 2016, 9, 119–132. [Google Scholar] [CrossRef]
  16. Huang, C.; Wang, S. A penalty method for a mixed nonlinear complementarity problem. Nonlinear Anal. 2012, 75, 588–597. [Google Scholar] [CrossRef]
  17. Hu, X.; Huang, C.; Luo, A.; Chen, H. A Power Penalty Approach to a Horizontal Linear Complementarity Problem. Res. J. Appl. Sci. Eng. Technol. 2013, 5, 1830–1835. [Google Scholar] [CrossRef]
  18. Huang, C.; Wang, S. A power Penalty approach to a nonlinear complementarity problem. Oper. Res. Lett. 2010, 38, 72–76. [Google Scholar] [CrossRef]
  19. Wang, S.; Yang, X.Q. A power penalty method for linear complementarity problems. Oper. Res. Lett. 2008, 36, 211–214. [Google Scholar] [CrossRef]
  20. Kebaili, Z.; Benterki, D. A penalty approach for a box constrained variational inequality problem. Appl. Math. 2018, 63, 439–454. [Google Scholar] [CrossRef]
  21. Chen, M.; Huang, C. A power penalty method for a class of linearly constrained variational inequality. J. Ind. Manag. Optim. 2018, 14, 1381–1396. [Google Scholar] [CrossRef]
  22. Auslender, A. Optimization: Méthodes Numériques; Masson: Paris, France, 1976. (In French) [Google Scholar]
  23. Achache, M. On the unique solvability and numerical study of absolute value equations. J. Numer. Anal. Approx. Theory 2019, 48, 112–121. [Google Scholar]
  24. Mangasarian, O.L. Absolute value equation solution via concave minimization. Optim. Lett. 2007, 1, 3–8. [Google Scholar] [CrossRef]
Table 1. Example 1.

ω      |  10               |  10^2             |  5 × 10^2         |  10^3
Size n |  Iter   CPU (s)   |  Iter   CPU (s)   |  Iter   CPU (s)   |  Iter   CPU (s)
20     |  8      0.6194    |  5      0.5213    |  4      0.2653    |  //     //
500    |  9      60.2795   |  5      46.8847   |  4      45.9938   |  //     //
1000   |  9      120.2547  |  5      94.5488   |  4      89.0125   |  //     //
1500   |  9      178.2166  |  5      138.1186  |  4      129.9565  |  //     //
Table 2. Example 2.

ω      |  10               |  10^2             |  10^3             |  10^4
Size n |  Iter   CPU (s)   |  Iter   CPU (s)   |  Iter   CPU (s)   |  Iter   CPU (s)
100    |  8      6.7529    |  5      5.1120    |  4      4.9464    |  //     //
200    |  9      49.2267   |  5      45.6289   |  4      29.5840   |  //     //
1500   |  9      115.2275  |  5      67.6533   |  4      53.9052   |  //     //
3000   |  9      145.9822  |  5      139.1002  |  4      120.6812  |  //     //
Table 3. Example 3.

ω      |  10               |  10^2             |  5 × 10^2         |  10^4
Size n |  Iter   CPU (s)   |  Iter   CPU (s)   |  Iter   CPU (s)   |  Iter   CPU (s)
32     |  1      0.1068    |  1      0.0832    |  1      0.0418    |  //     //
64     |  1      0.1510    |  1      0.1236    |  1      0.0749    |  //     //
128    |  1      0.7348    |  1      0.6432    |  1      0.4250    |  //     //
256    |  1      3.3251    |  1      3.3038    |  1      3.2497    |  //     //
528    |  1      31.7037   |  1      31.3189   |  1      31.2446   |  //     //
1024   |  1      460.8303  |  1      459.8112  |  1      452.9188  |  //     //