Article

Adaptive Douglas–Rachford Algorithms for Biconvex Optimization Problem in the Finite Dimensional Real Hilbert Spaces

1 Department of Risk Management and Insurance, Feng Chia University, Taichung 407102, Taiwan
2 Department of Applied Mathematics, National Chiayi University, Chiayi 600355, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(23), 3785; https://doi.org/10.3390/math12233785
Submission received: 15 October 2024 / Revised: 3 November 2024 / Accepted: 27 November 2024 / Published: 29 November 2024
(This article belongs to the Section E: Applied Mathematics)

Abstract

In this paper, we delve into the realm of biconvex optimization problems, introducing an adaptive Douglas–Rachford algorithm and presenting related convergence theorems in the setting of finite-dimensional real Hilbert spaces. It is worth noting that our approach to proving the convergence theorem differs significantly from those in the literature.

1. Introduction

In science and engineering, convex optimization has been extensively studied and applied. For convex optimization problems, any local minimum is also a global minimum, simplifying the search for optimal solutions. In parameter estimation, particularly in the context of system identification, convexity ensures the convergence of estimates to the true parameters. However, some identification problems cannot always be formulated as convex optimization problems. For instance, the identification of block-oriented nonlinear systems often leads to a biconvex optimization problem rather than a convex one. Unlike convex optimization, biconvex optimization may have numerous local minima. Nevertheless, it exhibits convex substructures, as a biconvex optimization problem can be divided into multiple convex optimization subproblems. These substructures can be effectively utilized to solve the entire biconvex optimization problem.
From the literature [1], we observe that many optimization problems are multi-convex programs, and many published studies on practical multi-convex programming focus on special practical models, such as [2,3,4,5,6,7,8,9,10,11,12,13]. In particular, Wen, Yin and Zhang [12] pointed out that multi-convex programming is an NP-hard problem.
Let $H_1$ and $H_2$ be two real Hilbert spaces, and let $f : H_1 \times H_2 \to \mathbb{R} \cup \{+\infty\}$ be an extended real-valued function. Then $f$ is called a biconvex function if $x \mapsto f(x, y)$ is convex for each $y \in H_2$, and $y \mapsto f(x, y)$ is convex for each $x \in H_1$. Thus, every convex function is also biconvex, but a biconvex function may still be nonconvex; for example, $(x, y) \mapsto xy$ is biconvex on $\mathbb{R} \times \mathbb{R}$ but not convex.
The following is a type of biconvex optimization problem, also referred to as a block optimization problem.
$$(\mathrm{BOP})\qquad \operatorname*{argmin}_{x \in H_1,\, y \in H_2} J(x, y) = f(x) + g(y) + h(x, y),$$
where $H_1$ and $H_2$ are real Hilbert spaces, $h : H_1 \times H_2 \to \mathbb{R}$ is a block biconvex function, and $f : H_1 \to \mathbb{R}$ and $g : H_2 \to \mathbb{R}$ are convex functions. Here, $f$ and $g$ are called the regularization functions of (BOP). In general, $f$ and $g$ may be assumed to be Fréchet differentiable. It is important to note that the function $(x, y) \mapsto f(x) + g(y) + h(x, y)$ can be nonconvex, even if $f$ and $g$ are convex functions.
The standard approach to solving the biconvex optimization problem is the so-called Gauss–Seidel iteration scheme, popularized in the modern era under the name alternating minimization. This method is also known as the block coordinate descent (BCD) algorithm.
$$(\mathrm{BCD})\qquad \begin{cases} x_1 \in H_1 \text{ and } y_1 \in H_2 \text{ are given arbitrarily},\\ x_{k+1} \in \operatorname*{argmin}_{x \in H_1} f(x) + g(y_k) + h(x, y_k),\\ y_{k+1} \in \operatorname*{argmin}_{y \in H_2} f(x_{k+1}) + g(y) + h(x_{k+1}, y), \end{cases}\quad k \in \mathbb{N}.$$
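To make the (BCD) scheme concrete, the following is a minimal sketch on a toy biconvex instance, namely regularized rank-one factorization; the objective, variable names, and closed-form block updates are illustrative assumptions made here and are not taken from the paper.

```python
# Block coordinate descent (BCD) on a toy biconvex problem (an illustrative assumption):
#   J(x, y) = 0.5*||A - x y^T||_F^2 + 0.5*lam*(||x||^2 + ||y||^2),
# which is convex in x for fixed y and convex in y for fixed x.
import numpy as np

def bcd_rank_one(A, lam=0.1, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x, y = rng.standard_normal(m), rng.standard_normal(n)
    for _ in range(iters):
        # x-update: argmin_x J(x, y_k) is a ridge least-squares problem with a closed form.
        x = A @ y / (y @ y + lam)
        # y-update: argmin_y J(x_{k+1}, y).
        y = A.T @ x / (x @ x + lam)
    return x, y

A = np.outer([1.0, 2.0, 3.0], [1.0, 0.5, -1.0])
x, y = bcd_rank_one(A)
print(np.linalg.norm(A - np.outer(x, y)))  # small residual: BCD finds a good factorization
```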
In 1992, the proximal BCD algorithm was proposed by Auslender [14] to relax the requirements of the (BCD) convergence theorem.
$$(\text{proximal BCD})\qquad \begin{cases} \text{choose } \tau > 0 \text{ and } \sigma > 0,\\ x_1 \in H_1 \text{ and } y_1 \in H_2 \text{ are given arbitrarily},\\ x_{k+1} \in \operatorname*{argmin}_{x \in H_1} f(x) + g(y_k) + h(x, y_k) + \frac{1}{2\tau}\|x - x_k\|^2,\\ y_{k+1} \in \operatorname*{argmin}_{y \in H_2} f(x_{k+1}) + g(y) + h(x_{k+1}, y) + \frac{1}{2\sigma}\|y - y_k\|^2, \end{cases}\quad k \in \mathbb{N}.$$
In 2013, Xu and Yin [13] gave the proximal linearized BCD.
$$\begin{cases} \text{choose } \tau_k > 0 \text{ and } \sigma_k > 0,\\ x_1 \in H_1 \text{ and } y_1 \in H_2 \text{ are given arbitrarily},\\ x_{k+1} \in \operatorname*{argmin}_{x \in H_1} f(x) + \frac{1}{2\tau_k}\|x - x_k\|^2 + \langle x, \nabla_x h(x_k, y_k)\rangle,\\ y_{k+1} \in \operatorname*{argmin}_{y \in H_2} g(y) + \frac{1}{2\sigma_k}\|y - y_k\|^2 + \langle y, \nabla_y h(x_{k+1}, y_k)\rangle, \end{cases}\quad k \in \mathbb{N}.$$
In 2014, Bolte et al. [3] gave the proximal alternating linearized minimization.
$$(\mathrm{PALM})\qquad \begin{cases} r_1 > 1,\ r_2 > 1,\ x_1 \in H_1 \text{ and } y_1 \in H_2 \text{ are given arbitrarily},\\ c_k = r_1 L_1(y_k),\quad x_{k+1} \in \operatorname{prox}_{c_k f}\bigl(x_k - \tfrac{1}{c_k}\nabla_x h(x_k, y_k)\bigr),\\ d_k = r_2 L_2(x_{k+1}),\quad y_{k+1} \in \operatorname{prox}_{d_k g}\bigl(y_k - \tfrac{1}{d_k}\nabla_y h(x_{k+1}, y_k)\bigr), \end{cases}\quad k \in \mathbb{N}.$$
In 2019, Nikolova and Tan [10] gave the alternating structure-adapted proximal gradient descent algorithm.
$$(\mathrm{ASAP})\qquad \begin{cases} \text{choose } \tau \in \Bigl(0, \dfrac{2}{\operatorname{Lip}(\nabla f)}\Bigr),\ \sigma \in \Bigl(0, \dfrac{2}{\operatorname{Lip}(\nabla g)}\Bigr),\\ x_1 \in H_1 \text{ and } y_1 \in H_2 \text{ are given arbitrarily},\\ x_{k+1} \in \operatorname*{argmin}_{x \in H_1} H(x, y_k) + \frac{1}{2\tau}\|x - x_k\|^2 + \langle x, \nabla f(x_k)\rangle,\\ y_{k+1} \in \operatorname*{argmin}_{y \in H_2} H(x_{k+1}, y) + \frac{1}{2\sigma}\|y - y_k\|^2 + \langle y, \nabla g(y_k)\rangle, \end{cases}\quad k \in \mathbb{N}.$$
In fact, the algorithm of ASAP is equivalent to the following.
$$\begin{cases} \text{choose } \tau \in \Bigl(0, \dfrac{2}{\operatorname{Lip}(\nabla f)}\Bigr),\ \sigma \in \Bigl(0, \dfrac{2}{\operatorname{Lip}(\nabla g)}\Bigr),\\ x_1 \in H_1 \text{ and } y_1 \in H_2 \text{ are given arbitrarily},\\ x_{k+1} = \operatorname*{argmin}_{x \in H_1} H(x, y_k) + \frac{1}{2\tau}\|x - x_k\|^2 + \langle x - x_k, \nabla f(x_k)\rangle + f(x_k),\\ y_{k+1} = \operatorname*{argmin}_{y \in H_2} H(x_{k+1}, y) + \frac{1}{2\sigma}\|y - y_k\|^2 + \langle y - y_k, \nabla g(y_k)\rangle + g(y_k), \end{cases}\quad k \in \mathbb{N}.$$
Therefore, the algorithm of ASAP can be written in the following form.
$$\begin{cases} \text{choose } \tau \in \Bigl(0, \dfrac{2}{\operatorname{Lip}(\nabla f)}\Bigr),\ \sigma \in \Bigl(0, \dfrac{2}{\operatorname{Lip}(\nabla g)}\Bigr),\\ x_{k+1} = \operatorname{prox}_{\tau H(\cdot, y_k)}\bigl(x_k - \tau \nabla f(x_k)\bigr),\\ y_{k+1} = \operatorname{prox}_{\sigma H(x_{k+1}, \cdot)}\bigl(y_k - \sigma \nabla g(y_k)\bigr), \end{cases}\quad k \in \mathbb{N}.$$
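As a concrete illustration of the proximal form of ASAP above, the following sketch uses quadratic smooth parts and a quadratic coupling term whose partial proximal operators have closed forms; this instance (the choices of $f$, $g$, and $H$, and all names) is an assumption made here for illustration only.

```python
# ASAP in proximal form (a minimal sketch under illustrative assumptions):
#   f(x) = 0.5*||x - a||^2, g(y) = 0.5*||y - b||^2  (so Lip(grad f) = Lip(grad g) = 1),
#   H(x, y) = 0.5*||x - y||^2, whose partial prox has the closed form used below.
import numpy as np

def asap(a, b, tau=0.5, sigma=0.5, iters=200):
    x, y = np.zeros_like(a), np.zeros_like(b)
    for _ in range(iters):
        # x-step: prox_{tau*H(., y)}(x - tau*grad f(x)) = (z + tau*y) / (1 + tau)
        z = x - tau * (x - a)
        x = (z + tau * y) / (1.0 + tau)
        # y-step: prox_{sigma*H(x, .)}(y - sigma*grad g(y)) = (w + sigma*x) / (1 + sigma)
        w = y - sigma * (y - b)
        y = (w + sigma * x) / (1.0 + sigma)
    return x, y

x, y = asap(np.array([1.0, 2.0]), np.array([3.0, 0.0]))
print(x, y)  # x and y settle between a and b, balancing the coupling term against f and g
```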
On the other hand, the following is a well-known generalized convex optimization problem, together with the related Douglas–Rachford algorithm:
$$\operatorname*{argmin}\{\varphi(x) + g(x) : x \in H\},$$
where $H$ is a real Hilbert space, and $\varphi : H \to (-\infty, \infty]$ and $g : H \to (-\infty, \infty]$ are proper, lower semicontinuous, and convex functions (see Algorithm 1).
Algorithm 1: ([15], Corollary 27.4)
Let $\{x_n\}_{n\in\mathbb{N}}$ be generated by the following:
$$\begin{cases} x_1 \in H \text{ is chosen arbitrarily},\\ y_n := \operatorname{prox}_{\beta, \varphi}(x_n),\\ z_n := \operatorname{prox}_{\beta, g}(2y_n - x_n),\\ x_{n+1} := x_n + k_n(z_n - y_n), \end{cases}\quad n \in \mathbb{N},$$
where $\varphi, g : H \to (-\infty, \infty]$ are proper, lower semicontinuous, and convex functions, $\beta$ is a positive real number, $\{k_n\}_{n\in\mathbb{N}}$ is a sequence of positive real numbers, and $\operatorname{prox}_{\beta, \varphi}$ and $\operatorname{prox}_{\beta, g}$ are the proximal operators of $\varphi$ and $g$ with $\beta$, respectively.
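For reference, here is a minimal sketch of the iteration in Algorithm 1 for a simple instance: $\varphi$ the $\ell_1$ norm (whose prox is soft-thresholding) and $g$ a quadratic centered at $c$. The instance, the choice $\beta = k_n = 1$, and the helper names are assumptions made here for illustration.

```python
# Douglas–Rachford splitting (Algorithm 1) for argmin_x phi(x) + g(x),
# with phi = ||.||_1 and g(x) = 0.5*||x - c||^2 (illustrative assumptions).
import numpy as np

def prox_l1(v, beta):
    # prox of beta*||.||_1: componentwise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - beta, 0.0)

def prox_quad(v, c, beta):
    # prox of 0.5*||. - c||^2 with parameter beta
    return (v + beta * c) / (1.0 + beta)

def douglas_rachford(c, beta=1.0, k=1.0, iters=200):
    x = np.zeros_like(c)
    for _ in range(iters):
        y = prox_l1(x, beta)                 # y_n := prox_{beta, phi}(x_n)
        z = prox_quad(2 * y - x, c, beta)    # z_n := prox_{beta, g}(2 y_n - x_n)
        x = x + k * (z - y)                  # x_{n+1} := x_n + k_n (z_n - y_n)
    return prox_l1(x, beta)                  # the shadow sequence y_n approaches a minimizer

print(douglas_rachford(np.array([3.0, 0.5, -2.0])))  # approx. soft-threshold of c by 1: [2, 0, -1]
```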
Indeed, it is well-known that the Douglas–Rachford algorithm is widely used for convex optimization problems and those involving convexity assumptions. Additionally, despite the lack of theoretical justification, the literature shows that the algorithm has been successfully applied to various practical non-convex problems [16,17]. For further details on the Douglas–Rachford and Peaceman–Rachford algorithms, please refer to [18,19,20,21,22] and related references.
Inspired by the above work, we propose an adaptive Douglas–Rachford algorithm to study the biconvex optimization problem in finite-dimensional real Hilbert spaces.
Remark 1.
In Algorithm 2, if $\{l_n\}_{n\in\mathbb{N}}$ is always given by Step 3-1, then this algorithm could be called the Douglas–Rachford algorithm. However, when $\{l_n\}_{n\in\mathbb{N}}$ may be given by Step 3-2, we call this algorithm the adaptive Douglas–Rachford algorithm.
In this paper, we study the biconvex optimization problem and give an adaptive Douglas–Rachford algorithm and related convergence theorems in the setting of finite dimensional real Hilbert spaces. It is worth noting that our approach to proving the convergence theorem differs significantly from those in the literature.
Algorithm 2: Adaptive Douglas–Rachford Algorithm
  • Let $\tau > 0$, let $\{l_n\}_{n\in\mathbb{N}}$ be a sequence in $(0, 2)$, let $[a, b] \subseteq (0, 2)$ with $a + b < 2$, let $x_1$ and $y_1$ be given, and let $\{x_n\}_{n\in\mathbb{N}}$, $\{y_n\}_{n\in\mathbb{N}}$, $\{u_n\}_{n\in\mathbb{N}}$, and $\{v_n\}_{n\in\mathbb{N}}$ be generated as follows.
  • Step 1. $u_n = \operatorname{prox}_{\tau f}(x_n)$ and $v_n = \operatorname{prox}_{\tau g}(y_n)$.
  • Step 2. $s_n = \operatorname{prox}_{\tau h(\cdot, v_n)}(2u_n - x_n)$ and $t_n = \operatorname{prox}_{\tau h(s_n, \cdot)}(2v_n - y_n)$.
  • Step 3. $l_n$ is selected through the following steps.
  •     Step 3-1. If $f(u_n) - f(s_n) + g(v_n) - g(t_n) \geq 0$, then we choose $l_n \in (a, b)$ arbitrarily.
  •     Step 3-2. If $f(u_n) - f(s_n) + g(v_n) - g(t_n) < 0$, then we set $p_n$ as
    $$p_n = \frac{a^2}{2\tau} \cdot \frac{\|s_n - u_n\|^2 + \|t_n - v_n\|^2}{f(s_n) - f(u_n) + g(t_n) - g(v_n)}.$$
  •     (i) If $p_n \geq b$, then we choose $l_n \in (a, b)$ arbitrarily.
  •     (ii) If $a \leq p_n < b$, then we set $l_n = p_n$.
  •     (iii) If $p_n < a$ and $\|s_n - u_n\|^2 + \|t_n - v_n\|^2 \geq 2\tau \cdot [f(s_n) - f(u_n) + g(t_n) - g(v_n)]$, then we set $l_n = 0.5$.
  •     (iv) If $p_n < a$ and $\|s_n - u_n\|^2 + \|t_n - v_n\|^2 < 2\tau \cdot [f(s_n) - f(u_n) + g(t_n) - g(v_n)]$, then we set $l_n$ as
    $$l_n = \min\left\{ b,\; \frac{2\tau \cdot [f(s_n) - f(u_n) + g(t_n) - g(v_n)]}{\|s_n - u_n\|^2 + \|t_n - v_n\|^2} - \frac{a}{2} \right\}.$$
  • Step 4. $x_{n+1} = x_n + l_n(s_n - u_n)$ and $y_{n+1} = y_n + l_n(t_n - v_n)$. Set $n = n + 1$ and go to Step 1.
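The following is a minimal sketch of Algorithm 2 as stated above, written against generic proximal-operator oracles. The toy instance at the bottom (quadratic $f$ and $g$, and a separable coupling $h(x, y) = \|x\|_1 + \|y\|_1$) and all helper names are illustrative assumptions, not part of the paper.

```python
# A minimal sketch of Algorithm 2 (adaptive Douglas–Rachford) with user-supplied prox oracles.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def adaptive_dr(f, g, prox_f, prox_g, prox_hx, prox_hy, x, y,
                tau=1.0, a=0.5, b=1.0, iters=300):
    for _ in range(iters):
        u, v = prox_f(x, tau), prox_g(y, tau)                  # Step 1
        s = prox_hx(2 * u - x, v, tau)                         # Step 2
        t = prox_hy(s, 2 * v - y, tau)
        drop = f(u) - f(s) + g(v) - g(t)                       # Step 3
        sq = np.sum((s - u) ** 2) + np.sum((t - v) ** 2)
        if drop >= 0:
            l = 0.5 * (a + b)                                  # Step 3-1: any l in (a, b)
        else:
            p = (a ** 2 / (2 * tau)) * sq / (-drop)            # Step 3-2
            if p >= b:
                l = 0.5 * (a + b)
            elif p >= a:
                l = p
            elif sq >= 2 * tau * (-drop):
                l = 0.5
            else:
                l = min(b, 2 * tau * (-drop) / sq - a / 2)
        x, y = x + l * (s - u), y + l * (t - v)                # Step 4
    return u, v

# Toy run: f(x) = 0.5*||x - c1||^2, g(y) = 0.5*||y - c2||^2, h(x, y) = ||x||_1 + ||y||_1.
c1, c2 = np.array([2.0, -0.3]), np.array([-1.5, 0.2])
u, v = adaptive_dr(lambda x: 0.5 * np.sum((x - c1) ** 2), lambda y: 0.5 * np.sum((y - c2) ** 2),
                   lambda x, t: (x + t * c1) / (1 + t), lambda y, t: (y + t * c2) / (1 + t),
                   lambda w, v_, t: soft(w, t), lambda s_, w, t: soft(w, t),
                   np.zeros(2), np.zeros(2))
print(u, v)  # approximately soft-thresholded versions of c1 and c2
```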

2. Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot\rangle$ and norm $\|\cdot\|$. We denote the strong and weak convergence of $\{x_n\}_{n\in\mathbb{N}}$ to $x \in H$ by $x_n \to x$ and $x_n \rightharpoonup x$, respectively. For each $x, y, u, v \in H$ and $\lambda \in \mathbb{R}$, we have
$$\|x + y\|^2 = \|x\|^2 + 2\langle x, y\rangle + \|y\|^2,$$
$$\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2,$$
$$2\langle x - y, u - v\rangle = \|x - v\|^2 + \|y - u\|^2 - \|x - u\|^2 - \|y - v\|^2.$$
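These three identities are used repeatedly in the proofs below; the following throwaway check (an illustration added here, not part of the paper) confirms them numerically on random vectors.

```python
# Numerical check of the three Hilbert-space identities above on random vectors in R^5.
import numpy as np

rng = np.random.default_rng(1)
x, y, u, v = rng.standard_normal((4, 5))
lam = rng.uniform()
sq = lambda w: float(np.dot(w, w))  # squared norm

assert np.isclose(sq(x + y), sq(x) + 2 * np.dot(x, y) + sq(y))
assert np.isclose(sq(lam * x + (1 - lam) * y),
                  lam * sq(x) + (1 - lam) * sq(y) - lam * (1 - lam) * sq(x - y))
assert np.isclose(2 * np.dot(x - y, u - v),
                  sq(x - v) + sq(y - u) - sq(x - u) - sq(y - v))
print("identities verified")
```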
Definition 1.
Let $H$ be a real Hilbert space, $B : H \to H$ be a mapping, and $\rho > 0$. Then:
 (i)
$B$ is monotone if $\langle x - y, Bx - By\rangle \geq 0$ for all $x, y \in H$.
 (ii)
$B$ is $\rho$-strongly monotone if $\langle x - y, Bx - By\rangle \geq \rho\|x - y\|^2$ for all $x, y \in H$.
Definition 2.
Let $H$ be a real Hilbert space, and $B : H \rightrightarrows H$ be a set-valued mapping with domain $D(B) := \{x \in H : B(x) \neq \varnothing\}$. Then:
 (i)
$B$ is monotone if $\langle u - v, x - y\rangle \geq 0$ for any $u \in B(x)$ and $v \in B(y)$.
 (ii)
$B$ is maximal monotone if its graph $\{(x, y) : x \in D(B),\ y \in B(x)\}$ is not properly contained in the graph of any other monotone mapping.
 (iii)
$B$ is $\rho$-strongly monotone ($\rho > 0$) if $\langle x - y, u - v\rangle \geq \rho\|x - y\|^2$ for all $x, y \in H$, and all $u \in B(x)$ and $v \in B(y)$.
Definition 3.
Let $H$ be a real Hilbert space, and $f : H \to \mathbb{R}$ be a function. Then:
 (i)
$f$ is proper if $\{x \in H : f(x) < \infty\} \neq \varnothing$.
 (ii)
$f$ is lower semicontinuous if $\{x \in H : f(x) \leq r\}$ is closed for each $r \in \mathbb{R}$.
 (iii)
$f$ is convex if $f(tx + (1 - t)y) \leq t f(x) + (1 - t) f(y)$ for every $x, y \in H$ and $t \in [0, 1]$.
 (iv)
$f$ is $\rho$-strongly convex ($\rho > 0$) if
$$f(tx + (1 - t)y) + \frac{\rho}{2} \cdot t(1 - t)\|x - y\|^2 \leq t f(x) + (1 - t) f(y)$$
for all $x, y \in H$ and $t \in (0, 1)$.
 (v)
$f$ is Gâteaux differentiable at $x \in H$ if there is $\nabla f(x) \in H$ such that
$$\lim_{t \to 0} \frac{f(x + ty) - f(x)}{t} = \langle y, \nabla f(x)\rangle$$
for each $y \in H$.
 (vi)
$f$ is Fréchet differentiable at $x$ if there is $\nabla f(x)$ such that
$$\lim_{y \to 0} \frac{f(x + y) - f(x) - \langle \nabla f(x), y\rangle}{\|y\|} = 0.$$
Remark 2.
Let $H$ be a real Hilbert space, and $f : H \to \mathbb{R}$ be a function. Then $f$ is a convex function if and only if $g(x) = f(x) + \frac{\rho}{2}\|x\|^2$ is a $\rho$-strongly convex function ([15], Proposition 10.6). Hence, it is easy to establish the relation between convex functions and strongly convex functions.
Definition 4.
Let $f : H \to (-\infty, \infty]$ be a proper, lower semicontinuous, and convex function. Then the subdifferential $\partial f$ of $f$ is defined by
$$\partial f(x) := \{x^* \in H : f(x) + \langle y - x, x^*\rangle \leq f(y) \text{ for each } y \in H\}$$
for each $x \in H$.
Lemma 1
([15,23]). Let $f : H \to (-\infty, \infty]$ be a proper, lower semicontinuous, and convex function. Then the following are satisfied.
 (i)
$\partial f$ is a set-valued maximal monotone mapping.
 (ii)
$f$ is Gâteaux differentiable at $x \in \operatorname{int}(D)$ if and only if $\partial f(x)$ consists of a single element; that is, $\partial f(x) = \{\nabla f(x)\}$.
 (iii)
Suppose that $f$ is Fréchet differentiable. Then $f$ is convex if and only if $\nabla f$ is a monotone mapping.
Lemma 2
([15], Example 22.3(iv)). Let $\rho > 0$, $H$ be a real Hilbert space, and $f : H \to \mathbb{R}$ be a proper, lower semicontinuous, and convex function. If $f$ is $\rho$-strongly convex, then $\partial f$ is $\rho$-strongly monotone.
Lemma 3
([15], Proposition 16.26). Let $H$ be a real Hilbert space, and $f : H \to (-\infty, \infty]$ be a proper, lower semicontinuous, and convex function. If $\{u_n\}_{n\in\mathbb{N}}$ and $\{x_n\}_{n\in\mathbb{N}}$ are sequences in $H$ with $u_n \in \partial f(x_n)$ for all $n \in \mathbb{N}$, $x_n \to x$, and $u_n \to u$, then $u \in \partial f(x)$.
Lemma 4
([24]). Let $H$ be a real Hilbert space, $B : H \rightrightarrows H$ be a set-valued maximal monotone mapping, $\beta > 0$, and $J_{\beta}^{B}$ be defined by $J_{\beta}^{B}(x) := (I + \beta B)^{-1}(x)$ for each $x \in H$. Then $J_{\beta}^{B}$ is a single-valued mapping.
Definition 5.
Let $\tau > 0$, $H$ be a real Hilbert space, and $g : H \to \mathbb{R}$ be a proper, lower semicontinuous, and convex function. Then the proximal operator of $g$ with $\tau$ is defined by
$$\operatorname{prox}_{\tau g}(x) := \operatorname*{argmin}_{v \in H}\left\{ g(v) + \frac{1}{2\tau}\|v - x\|^2 \right\}$$
for each $x \in H$.
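For instance (an illustration added here, not part of the paper), the proximal operator of the absolute value is soft-thresholding; the snippet below checks the closed form against a direct numerical minimization of the defining objective.

```python
# The prox of g = |.| with parameter tau is soft-thresholding:
#   prox_{tau g}(x) = sign(x) * max(|x| - tau, 0).
# We verify it against a brute-force minimization of v -> |v| + (1/(2*tau))*(v - x)^2.
import numpy as np

def prox_abs(x, tau):
    return np.sign(x) * max(abs(x) - tau, 0.0)

x, tau = 1.7, 0.5
grid = np.linspace(-5, 5, 200001)
brute = grid[np.argmin(np.abs(grid) + (grid - x) ** 2 / (2 * tau))]
print(prox_abs(x, tau), brute)  # both approximately 1.2
```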
Lemma 5.
Let $f : H \to \mathbb{R}$ be a proper, lower semicontinuous, and convex function. Assume that $\tau > 0$ and $\bar{x} = \operatorname{prox}_{\tau f}(x)$. Then we have
$$f(u) - f(\bar{x}) \geq \frac{1}{\tau} \cdot \langle u - \bar{x},\, x - \bar{x}\rangle$$
for each $u \in H$.

3. Main Results

We are interested in solving nonconvex minimization problems of the form
$$(\mathrm{BOP})\qquad \operatorname*{argmin}_{x \in H_1,\, y \in H_2} J(x, y) = f(x) + g(y) + h(x, y),$$
where $H_1$ and $H_2$ are two real Hilbert spaces, $J : H_1 \times H_2 \to \mathbb{R}$, $f : H_1 \to \mathbb{R}$, $g : H_2 \to \mathbb{R}$, and $h : H_1 \times H_2 \to \mathbb{R}$.
Here, if $(\bar{x}, \bar{y}) \in H_1 \times H_2$ is a solution of the problem (BOP), then
$$f(\bar{x}) + g(\bar{y}) + h(\bar{x}, \bar{y}) \leq f(x) + g(y) + h(x, y) \quad \text{for all } (x, y) \in H_1 \times H_2,$$
and this implies that
$$f(\bar{x}) + g(\bar{y}) + h(\bar{x}, \bar{y}) \leq f(x) + g(\bar{y}) + h(x, \bar{y}),$$
and
$$f(\bar{x}) + g(\bar{y}) + h(\bar{x}, \bar{y}) \leq f(\bar{x}) + g(y) + h(\bar{x}, y)$$
for all $(x, y) \in H_1 \times H_2$. That is,
$$\forall (x, y) \in H_1 \times H_2: \quad f(\bar{x}) + h(\bar{x}, \bar{y}) \leq f(x) + h(x, \bar{y}) \ \ \&\ \ g(\bar{y}) + h(\bar{x}, \bar{y}) \leq g(y) + h(\bar{x}, y).$$
This implies that
$$\forall (x, y) \in H_1 \times H_2: \quad f(\bar{x}) + g(\bar{y}) + 2h(\bar{x}, \bar{y}) \leq f(x) + g(y) + h(x, \bar{y}) + h(\bar{x}, y).$$
So, it is natural to give a condition:
$$(\mathrm{BOP})_C \qquad h(x, v) + h(u, y) = h(x, y) + h(u, v)$$
for all $x, u \in H_1$ and $y, v \in H_2$. Indeed, if this condition $(\mathrm{BOP})_C$ holds, then (4), (5), and (6) are equivalent.
Example 1.
Let $h : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ be defined as
$$h(x, y) = \begin{cases} \log|x| + \log|y| & \text{if } |x| > 1 \text{ and } |y| > 1,\\ \log|x| & \text{if } |x| > 1 \text{ and } |y| \leq 1,\\ \log|y| & \text{if } |x| \leq 1 \text{ and } |y| > 1,\\ 0 & \text{if } |x| \leq 1 \text{ and } |y| \leq 1, \end{cases}$$
for all $(x, y) \in \mathbb{R} \times \mathbb{R}$. Then $h$ satisfies the condition $(\mathrm{BOP})_C$.
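Since the $h$ of Example 1 is additively separable (it equals $\varphi(x) + \varphi(y)$ with $\varphi(t) = \log|t|$ for $|t| > 1$ and $\varphi(t) = 0$ otherwise), the identity in $(\mathrm{BOP})_C$ holds for all points. The following quick numerical check (an illustration added here, not part of the paper) confirms it on random samples.

```python
# Numerical sanity check that the h of Example 1 satisfies condition (BOP)_C:
#   h(x, v) + h(u, y) = h(x, y) + h(u, v).
import math
import random

def h(x, y):
    phi = lambda t: math.log(abs(t)) if abs(t) > 1 else 0.0
    return phi(x) + phi(y)  # Example 1 written in its separable form

random.seed(0)
for _ in range(10_000):
    x, u, y, v = (random.uniform(-5, 5) for _ in range(4))
    assert abs(h(x, v) + h(u, y) - h(x, y) - h(u, v)) < 1e-12
print("condition (BOP)_C verified on random samples")
```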
Assumption 1.
Assume that:
 (i)
$f$ and $g$ are proper, lower semicontinuous, and convex functions;
 (ii)
$h$ is continuous, and $x \mapsto h(x, y)$ and $y \mapsto h(x, y)$ are proper and convex functions;
 (iii)
$J$ is lower bounded;
 (iv)
$h(x, v) + h(u, y) = h(x, y) + h(u, v)$ for all $x, u \in H_1$ and $y, v \in H_2$.
For a fixed $y$, the partial subdifferential of $J(\cdot, y)$ at $x$ is denoted by $\partial_x J(x, y)$. For a fixed $x$, the partial subdifferential of $J(x, \cdot)$ at $y$ is denoted by $\partial_y J(x, y)$. Hence, for $J(x, y) = f(x) + g(y) + h(x, y)$, we have
$$\partial J(x, y) = \partial_x J(x, y) \times \partial_y J(x, y) = \bigl(\partial f(x) + \partial_x h(x, y)\bigr) \times \bigl(\partial g(y) + \partial_y h(x, y)\bigr).$$
Fermat’s rule, extended to nonconvex and nonsmooth functions, is given next.
Proposition 1
([25], Theorem 10.1). Let $f : \mathbb{R}^m \to \mathbb{R} \cup \{+\infty\}$ be a proper function. If $f$ has a local minimum at $\bar{x}$, then $0 \in \partial f(\bar{x})$.
Definition 6.
We say that $(\bar{x}, \bar{y})$ is a critical point of $J$ if $(0, 0) \in \partial J(\bar{x}, \bar{y})$. For simplicity, the set of the critical points of $J$ is denoted by $\operatorname{crit}(J)$.
Remark 3.
We know
$$\begin{aligned} (0, 0) \in \partial J(\bar{x}, \bar{y}) &\Longleftrightarrow (0, 0) \in \partial_x J(\bar{x}, \bar{y}) \times \partial_y J(\bar{x}, \bar{y})\\ &\Longleftrightarrow 0 \in \partial f(\bar{x}) + \partial_x h(\bar{x}, \bar{y}) \ \ \&\ \ 0 \in \partial g(\bar{y}) + \partial_y h(\bar{x}, \bar{y})\\ &\Longleftrightarrow \forall (x, y) \in H_1 \times H_2:\ f(\bar{x}) + h(\bar{x}, \bar{y}) \leq f(x) + h(x, \bar{y}) \ \ \&\ \ g(\bar{y}) + h(\bar{x}, \bar{y}) \leq g(y) + h(\bar{x}, y). \end{aligned}$$
Hence, if the condition $(\mathrm{BOP})_C$ is satisfied, then $(0, 0) \in \partial J(\bar{x}, \bar{y})$ if and only if $(\bar{x}, \bar{y})$ is a solution of the problem (BOP).
Here, we consider the first part of Algorithm 2.
Remark 4.
In Algorithm 3, for each $x \in H_1$ and $y \in H_2$, we set $u = \operatorname{prox}_{\tau f}(x)$ and $v = \operatorname{prox}_{\tau g}(y)$. Then $\|u_n - u\|^2 \leq \langle x_n - x, u_n - u\rangle$ and $\|v_n - v\|^2 \leq \langle y_n - y, v_n - v\rangle$. Further, if $\{x_n\}_{n\in\mathbb{N}}$ and $\{y_n\}_{n\in\mathbb{N}}$ are bounded, then $\{u_n\}_{n\in\mathbb{N}}$ and $\{v_n\}_{n\in\mathbb{N}}$ are bounded.
Algorithm 3: Adaptive Douglas–Rachford Algorithm (Part 1)
Let $\tau > 0$, let $\{l_n\}_{n\in\mathbb{N}}$ be a sequence in $(0, 2)$, let $[a, b] \subseteq (0, 2)$ with $a + b < 2$, let $x_1$ and $y_1$ be given, and let $\{x_n\}_{n\in\mathbb{N}}$, $\{y_n\}_{n\in\mathbb{N}}$, $\{u_n\}_{n\in\mathbb{N}}$, and $\{v_n\}_{n\in\mathbb{N}}$ be generated as
$$\begin{cases} u_n = \operatorname{prox}_{\tau f}(x_n),\quad v_n = \operatorname{prox}_{\tau g}(y_n),\\ s_n = \operatorname{prox}_{\tau h(\cdot, v_n)}(2u_n - x_n),\quad t_n = \operatorname{prox}_{\tau h(s_n, \cdot)}(2v_n - y_n),\\ x_{n+1} = x_n + l_n(s_n - u_n),\quad y_{n+1} = y_n + l_n(t_n - v_n), \end{cases}\quad n \in \mathbb{N}.$$
Proof. 
Since $u_n = \operatorname{prox}_{\tau f}(x_n) = (I + \tau \partial f)^{-1}(x_n)$ and $u = \operatorname{prox}_{\tau f}(x)$, we have
$$\frac{1}{\tau}(x_n - u_n) \in \partial f(u_n) \quad\text{and}\quad \frac{1}{\tau}(x - u) \in \partial f(u).$$
Since $\partial f$ is maximal monotone, we know
$$\langle (x_n - u_n) - (x - u), u_n - u\rangle \geq 0,$$
and this implies that
$$\|u_n - u\|^2 \leq \langle x_n - x, u_n - u\rangle \leq \|u_n - u\| \cdot \|x_n - x\|.$$
Similarly, we have
$$\frac{1}{\tau}(y_n - v_n) \in \partial g(v_n) \quad\text{and}\quad \frac{1}{\tau}(y - v) \in \partial g(v),$$
and
$$\|v_n - v\|^2 \leq \langle y_n - y, v_n - v\rangle \leq \|v_n - v\| \cdot \|y_n - y\|.$$
Further, it is easy to see that $\{u_n\}_{n\in\mathbb{N}}$ and $\{v_n\}_{n\in\mathbb{N}}$ are bounded when $\{x_n\}_{n\in\mathbb{N}}$ and $\{y_n\}_{n\in\mathbb{N}}$ are bounded.    □
Lemma 6.
Let $\{x_n\}_{n\in\mathbb{N}}$, $\{y_n\}_{n\in\mathbb{N}}$, $\{u_n\}_{n\in\mathbb{N}}$, and $\{v_n\}_{n\in\mathbb{N}}$ be generated from Algorithm 3. Then, for each $x \in H_1$, $y \in H_2$, and $n \in \mathbb{N}$, we have
$$\|x_{n+1} - x\|^2 \leq \|x_n - x\|^2 - l_n(2 - l_n)\|s_n - u_n\|^2 - 2l_n\tau\,[f(u_n) - f(x) + h(s_n, v_n) - h(x, v_n)],$$
and
$$\|y_{n+1} - y\|^2 \leq \|y_n - y\|^2 - l_n(2 - l_n)\|t_n - v_n\|^2 - 2l_n\tau\,[g(v_n) - g(y) + h(s_n, t_n) - h(s_n, y)].$$
Proof. 
Take any $x \in H_1$ and $y \in H_2$, and let $x, y$ be fixed. First, it follows from $u_n = \operatorname{prox}_{\tau f}(x_n)$ and Lemma 5 that
$$f(x) - f(u_n) \geq \frac{1}{\tau} \cdot \langle x - u_n,\, x_n - u_n\rangle,$$
and this implies that
$$\|u_n - x\|^2 + 2\tau\,[f(u_n) - f(x)] \leq \|x_n - x\|^2 - \|x_n - u_n\|^2.$$
Similar to (15), we have
$$\|v_n - y\|^2 + 2\tau\,[g(v_n) - g(y)] \leq \|y_n - y\|^2 - \|y_n - v_n\|^2.$$
Next, it follows from $s_n = \operatorname{prox}_{\tau h(\cdot, v_n)}(2u_n - x_n)$ and Lemma 5 that
$$h(x, v_n) - h(s_n, v_n) \geq \frac{1}{\tau} \cdot \langle x - s_n,\, (2u_n - x_n) - s_n\rangle,$$
and this implies that
$$2\tau\,[h(s_n, v_n) - h(x, v_n)] \leq 2\langle x - s_n, s_n - (2u_n - x_n)\rangle.$$
Here, we know
$$\begin{aligned} 2\langle x - s_n, s_n - (2u_n - x_n)\rangle &= \|2u_n - x_n - x\|^2 - \|s_n - x\|^2 - \|2u_n - x_n - s_n\|^2\\ &= \|(u_n - x_n) + (u_n - x)\|^2 - \|s_n - x\|^2 - \|(u_n - x_n) + (u_n - s_n)\|^2\\ &= \|u_n - x_n\|^2 + \|u_n - x\|^2 + 2\langle u_n - x_n, u_n - x\rangle - \|s_n - x\|^2 - \|u_n - x_n\|^2 - \|u_n - s_n\|^2 - 2\langle u_n - x_n, u_n - s_n\rangle\\ &= \|u_n - x\|^2 - \|s_n - x\|^2 - \|u_n - s_n\|^2 + 2\langle u_n - x_n, s_n - x\rangle\\ &= \|u_n - x\|^2 - \|s_n - x\|^2 - \|u_n - s_n\|^2 + \|u_n - x\|^2 + \|s_n - x_n\|^2 - \|u_n - s_n\|^2 - \|x_n - x\|^2\\ &= 2\|u_n - x\|^2 - \|s_n - x\|^2 - 2\|u_n - s_n\|^2 + \|s_n - x_n\|^2 - \|x_n - x\|^2. \end{aligned}$$
By (18) and (19), we have
$$2\tau\,[h(s_n, v_n) - h(x, v_n)] \leq 2\|u_n - x\|^2 - \|s_n - x\|^2 - 2\|u_n - s_n\|^2 + \|s_n - x_n\|^2 - \|x_n - x\|^2.$$
Similar to (18), we have
$$2\tau\,[h(s_n, t_n) - h(s_n, y)] \leq 2\langle y - t_n, t_n - (2v_n - y_n)\rangle.$$
Hence, we know
$$\begin{aligned} 2\langle y - t_n, t_n - (2v_n - y_n)\rangle &= \|2v_n - y_n - y\|^2 - \|t_n - y\|^2 - \|2v_n - y_n - t_n\|^2\\ &= \|(v_n - y_n) + (v_n - y)\|^2 - \|t_n - y\|^2 - \|(v_n - y_n) + (v_n - t_n)\|^2\\ &= \|v_n - y\|^2 - \|t_n - y\|^2 - \|v_n - t_n\|^2 + 2\langle v_n - y_n, t_n - y\rangle\\ &= 2\|v_n - y\|^2 - \|t_n - y\|^2 - 2\|v_n - t_n\|^2 + \|y_n - t_n\|^2 - \|y_n - y\|^2. \end{aligned}$$
By (21) and (22), we have
$$2\tau\,[h(s_n, t_n) - h(s_n, y)] \leq 2\|v_n - y\|^2 - \|t_n - y\|^2 - 2\|v_n - t_n\|^2 + \|y_n - t_n\|^2 - \|y_n - y\|^2.$$
Next, we have
$$\begin{aligned} \|x_{n+1} - x\|^2 &= \|x_n + l_n(s_n - u_n) - x\|^2 = \Bigl\|\Bigl(1 - \frac{l_n}{2}\Bigr)(x_n - x) + \frac{l_n}{2}(x_n + 2s_n - 2u_n - x)\Bigr\|^2\\ &= \Bigl(1 - \frac{l_n}{2}\Bigr)\|x_n - x\|^2 + \frac{l_n}{2}\|x_n + 2s_n - 2u_n - x\|^2 - \frac{l_n}{2}\Bigl(1 - \frac{l_n}{2}\Bigr)\|2s_n - 2u_n\|^2\\ &= \Bigl(1 - \frac{l_n}{2}\Bigr)\|x_n - x\|^2 - l_n(2 - l_n)\|s_n - u_n\|^2 + \frac{l_n}{2}\Bigl(\|x_n - x\|^2 + 4\|s_n - u_n\|^2 + 4\langle x_n - x, s_n - u_n\rangle\Bigr)\\ &= \|x_n - x\|^2 - l_n(2 - l_n)\|s_n - u_n\|^2 + 2l_n\|s_n - u_n\|^2 + l_n\|x_n - u_n\|^2 + l_n\|s_n - x\|^2 - l_n\|x_n - s_n\|^2 - l_n\|u_n - x\|^2. \end{aligned}$$
By (15), (20), and (24), we have
$$\begin{aligned} \|x_{n+1} - x\|^2 &= \|x_n - x\|^2 - l_n(2 - l_n)\|s_n - u_n\|^2 + 2l_n\|s_n - u_n\|^2 + l_n\|x_n - u_n\|^2 + l_n\|s_n - x\|^2 - l_n\|x_n - s_n\|^2 - l_n\|u_n - x\|^2\\ &\leq \|x_n - x\|^2 - l_n(2 - l_n)\|s_n - u_n\|^2 + 2l_n\|s_n - u_n\|^2 + l_n\bigl(\|x_n - x\|^2 - \|u_n - x\|^2 - 2\tau[f(u_n) - f(x)]\bigr)\\ &\quad + l_n\bigl(2\|u_n - x\|^2 - 2\|u_n - s_n\|^2 + \|s_n - x_n\|^2 - \|x_n - x\|^2 - 2\tau[h(s_n, v_n) - h(x, v_n)]\bigr) - l_n\|x_n - s_n\|^2 - l_n\|u_n - x\|^2\\ &= \|x_n - x\|^2 - l_n(2 - l_n)\|s_n - u_n\|^2 - 2l_n\tau\,[f(u_n) - f(x) + h(s_n, v_n) - h(x, v_n)]. \end{aligned}$$
Similar to (24), we have
$$\|y_{n+1} - y\|^2 = \|y_n - y\|^2 - l_n(2 - l_n)\|t_n - v_n\|^2 + 2l_n\|t_n - v_n\|^2 + l_n\|y_n - v_n\|^2 + l_n\|t_n - y\|^2 - l_n\|y_n - t_n\|^2 - l_n\|v_n - y\|^2.$$
By (16), (23), and (26),
$$\|y_{n+1} - y\|^2 \leq \|y_n - y\|^2 - l_n(2 - l_n)\|t_n - v_n\|^2 - 2l_n\tau\,[g(v_n) - g(y) + h(s_n, t_n) - h(s_n, y)].$$
So, we obtain the conclusion of Lemma 6.    □
The following is the second part of Algorithm 2.
Remark 5.
In Algorithm 4, we know that the sequence $\{l_n\}_{n\in\mathbb{N}}$ is chosen from the interval $(0, 2)$ with
$$\frac{a}{2} = \min\left\{\frac{a}{2},\ \frac{1}{2}\right\} \leq l_n \leq b.$$
Algorithm 4: Adaptive Douglas–Rachford Algorithm (Part 2)
  • In Algorithm 3, let $l_n$ be selected through the following steps.
  • Case 1. If $f(u_n) - f(s_n) + g(v_n) - g(t_n) \geq 0$, then we choose $l_n \in (a, b)$ arbitrarily.
  • Case 2. If $f(u_n) - f(s_n) + g(v_n) - g(t_n) < 0$, then we set $p_n$ as
    $$p_n = \frac{a^2}{2\tau} \cdot \frac{\|s_n - u_n\|^2 + \|t_n - v_n\|^2}{f(s_n) - f(u_n) + g(t_n) - g(v_n)}.$$
  •   (i) If $p_n \geq b$, then we choose $l_n \in (a, b)$ arbitrarily.
  •   (ii) If $a \leq p_n < b$, then we set $l_n = p_n$.
  •   (iii) If $p_n < a$ and $\|s_n - u_n\|^2 + \|t_n - v_n\|^2 \geq 2\tau \cdot [f(s_n) - f(u_n) + g(t_n) - g(v_n)]$, then we set $l_n = 0.5$.
  •   (iv) If $p_n < a$ and $\|s_n - u_n\|^2 + \|t_n - v_n\|^2 < 2\tau \cdot [f(s_n) - f(u_n) + g(t_n) - g(v_n)]$, then we set $l_n$ as
    $$l_n = \min\left\{ b,\; \frac{2\tau \cdot [f(s_n) - f(u_n) + g(t_n) - g(v_n)]}{\|s_n - u_n\|^2 + \|t_n - v_n\|^2} - \frac{a}{2} \right\}.$$
Theorem 1.
Let $\{x_n\}_{n\in\mathbb{N}}$, $\{y_n\}_{n\in\mathbb{N}}$, $\{u_n\}_{n\in\mathbb{N}}$, and $\{v_n\}_{n\in\mathbb{N}}$ be generated from Algorithms 3 and 4, assume that the solution set of the problem (BOP) is nonempty, and assume that $H_1$ and $H_2$ are finite-dimensional. Then there exist $\bar{x}, \bar{u} \in H_1$, $\bar{y}, \bar{v} \in H_2$, and a subsequence $\{(x_{n_k}, y_{n_k}, u_{n_k}, v_{n_k})\}_{k\in\mathbb{N}}$ of $\{(x_n, y_n, u_n, v_n)\}_{n\in\mathbb{N}}$ such that $x_{n_k} \to \bar{x}$, $y_{n_k} \to \bar{y}$, $u_{n_k} \to \bar{u}$, $v_{n_k} \to \bar{v}$, and $(\bar{u}, \bar{v})$ is a solution of the problem (BOP).
Proof. 
Let $(\hat{u}, \hat{v}) \in H_1 \times H_2$ be any solution of problem (BOP). By (26), (27), and the assumption, we have
$$\begin{aligned} \|x_{n+1} - \hat{u}\|^2 + \|y_{n+1} - \hat{v}\|^2 &\leq \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - l_n(2 - l_n)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr)\\ &\quad - 2l_n\tau\,[f(u_n) - f(\hat{u}) + h(s_n, v_n) - h(\hat{u}, v_n)] - 2l_n\tau\,[g(v_n) - g(\hat{v}) + h(s_n, t_n) - h(s_n, \hat{v})]\\ &\leq \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - l_n(2 - l_n)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr)\\ &\quad - 2l_n\tau\,[f(u_n) + g(v_n) + h(s_n, t_n) - f(\hat{u}) - g(\hat{v}) - h(\hat{u}, \hat{v})]\\ &= \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - l_n(2 - l_n)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr)\\ &\quad - 2l_n\tau\,[f(s_n) + g(t_n) + h(s_n, t_n) - f(\hat{u}) - g(\hat{v}) - h(\hat{u}, \hat{v})] - 2l_n\tau\,[f(u_n) - f(s_n) + g(v_n) - g(t_n)]\\ &\leq \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - l_n(2 - l_n)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr) - 2l_n\tau\,[f(u_n) - f(s_n) + g(v_n) - g(t_n)]. \end{aligned}$$
Next, we consider the following cases.
Case 1: If $f(u_n) - f(s_n) + g(v_n) - g(t_n) \geq 0$, then we choose $l_n \in (a, b) \subseteq (0, 2)$. Hence,
$$\begin{aligned} \|x_{n+1} - \hat{u}\|^2 + \|y_{n+1} - \hat{v}\|^2 &\leq \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - l_n(2 - l_n)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr) - 2l_n\tau\,[f(u_n) - f(s_n) + g(v_n) - g(t_n)]\\ &\leq \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - a(2 - b)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr). \end{aligned}$$
Case 2: If $f(u_n) - f(s_n) + g(v_n) - g(t_n) < 0$, then we set $p_n$ as
$$p_n = \frac{a^2}{2\tau} \cdot \frac{\|s_n - u_n\|^2 + \|t_n - v_n\|^2}{f(s_n) - f(u_n) + g(t_n) - g(v_n)}.$$
Case 2 (i): If $p_n \geq b$, then we choose $l_n \in (a, b) \subseteq (0, 2)$. Hence,
$$\begin{aligned} \|x_{n+1} - \hat{u}\|^2 + \|y_{n+1} - \hat{v}\|^2 &\leq \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - l_n(2 - l_n)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr) + 2l_n\tau\,[f(s_n) - f(u_n) + g(t_n) - g(v_n)]\\ &\leq \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - a(2 - b)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr) + a^2\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr)\\ &= \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - a(2 - a - b)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr). \end{aligned}$$
Case 2 (ii): If $a \leq p_n < b$, then we set $l_n = p_n$. Hence,
$$\begin{aligned} \|x_{n+1} - \hat{u}\|^2 + \|y_{n+1} - \hat{v}\|^2 &\leq \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - a(2 - b)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr) + 2l_n\tau\,[f(s_n) - f(u_n) + g(t_n) - g(v_n)]\\ &= \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - a(2 - a - b)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr). \end{aligned}$$
Case 2 (iii): If $p_n < a$ and $\|s_n - u_n\|^2 + \|t_n - v_n\|^2 \geq 2\tau \cdot [f(s_n) - f(u_n) + g(t_n) - g(v_n)]$, then we set $l_n = 0.5$ and have the following.
$$\begin{aligned} \|x_{n+1} - \hat{u}\|^2 + \|y_{n+1} - \hat{v}\|^2 &\leq \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - l_n(2 - l_n)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr) + 2l_n\tau\,[f(s_n) - f(u_n) + g(t_n) - g(v_n)]\\ &\leq \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - l_n(2 - l_n)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr) + l_n\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr)\\ &= \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - l_n(1 - l_n)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr)\\ &= \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - \frac{1}{4}\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr). \end{aligned}$$
Case 2 (iv): If $p_n < a$ and $\|s_n - u_n\|^2 + \|t_n - v_n\|^2 < 2\tau \cdot [f(s_n) - f(u_n) + g(t_n) - g(v_n)]$, then we set $l_n$ as
$$l_n = \min\left\{ b,\; \frac{2\tau \cdot [f(s_n) - f(u_n) + g(t_n) - g(v_n)]}{\|s_n - u_n\|^2 + \|t_n - v_n\|^2} - \frac{a}{2} \right\}.$$
Thus, we have the following.
$$\frac{a}{2} \leq l_n \leq b \quad\text{and}\quad 2 - l_n \geq 2 + \frac{a}{2} - \frac{2\tau \cdot [f(s_n) - f(u_n) + g(t_n) - g(v_n)]}{\|s_n - u_n\|^2 + \|t_n - v_n\|^2}.$$
So,
$$\begin{aligned} (2 - l_n)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr) - 2\tau \cdot [f(s_n) - f(u_n) + g(t_n) - g(v_n)] &\geq \Bigl(2 + \frac{a}{2}\Bigr)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr) - 4\tau \cdot [f(s_n) - f(u_n) + g(t_n) - g(v_n)]\\ &\geq \Bigl(2 + \frac{a}{2}\Bigr)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr) - 2\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr)\\ &= \frac{a}{2}\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr), \end{aligned}$$
and this implies that
$$\begin{aligned} \|x_{n+1} - \hat{u}\|^2 + \|y_{n+1} - \hat{v}\|^2 &\leq \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - l_n\Bigl[(2 - l_n)\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr) - 2\tau\,[f(s_n) - f(u_n) + g(t_n) - g(v_n)]\Bigr]\\ &\leq \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - \Bigl(\frac{a}{2}\Bigr)^2\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr). \end{aligned}$$
Set r n as
$$r_n = \min\left\{ a(2 - b),\ a(2 - a - b),\ \frac{1}{4},\ \Bigl(\frac{a}{2}\Bigr)^2 \right\} = \min\left\{ a(2 - a - b),\ \Bigl(\frac{a}{2}\Bigr)^2 \right\}.$$
Hence, we obtain the following from (30), (31), (32), (34), and (35).
$$\|x_{n+1} - \hat{u}\|^2 + \|y_{n+1} - \hat{v}\|^2 \leq \|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2 - r_n\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr).$$
Therefore, $\{\|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2\}_{n\in\mathbb{N}}$ is nonincreasing, so $\lim_{n\to\infty}\bigl(\|x_n - \hat{u}\|^2 + \|y_n - \hat{v}\|^2\bigr)$ exists; this implies that $\{x_n\}_{n\in\mathbb{N}}$, $\{y_n\}_{n\in\mathbb{N}}$, $\{u_n\}_{n\in\mathbb{N}}$, and $\{v_n\}_{n\in\mathbb{N}}$ are bounded, and
$$\sum_{n=1}^{\infty}\bigl(\|s_n - u_n\|^2 + \|t_n - v_n\|^2\bigr) < \infty.$$
So,
$$\sum_{n=1}^{\infty}\|x_{n+1} - x_n\|^2 = \sum_{n=1}^{\infty} l_n^2\,\|s_n - u_n\|^2 \leq 4\sum_{n=1}^{\infty}\|s_n - u_n\|^2 < \infty$$
and
$$\sum_{n=1}^{\infty}\|y_{n+1} - y_n\|^2 = \sum_{n=1}^{\infty} l_n^2\,\|t_n - v_n\|^2 \leq 4\sum_{n=1}^{\infty}\|t_n - v_n\|^2 < \infty.$$
Since $\{x_n\}_{n\in\mathbb{N}}$, $\{y_n\}_{n\in\mathbb{N}}$, $\{u_n\}_{n\in\mathbb{N}}$, and $\{v_n\}_{n\in\mathbb{N}}$ are bounded, there exist $\bar{x}, \bar{u} \in H_1$, $\bar{y}, \bar{v} \in H_2$, a subsequence $\{x_{n_k}\}_{k\in\mathbb{N}}$ of $\{x_n\}_{n\in\mathbb{N}}$, a subsequence $\{y_{n_k}\}_{k\in\mathbb{N}}$ of $\{y_n\}_{n\in\mathbb{N}}$, a subsequence $\{u_{n_k}\}_{k\in\mathbb{N}}$ of $\{u_n\}_{n\in\mathbb{N}}$, and a subsequence $\{v_{n_k}\}_{k\in\mathbb{N}}$ of $\{v_n\}_{n\in\mathbb{N}}$ such that $x_{n_k} \to \bar{x}$, $y_{n_k} \to \bar{y}$, $u_{n_k} \to \bar{u}$, and $v_{n_k} \to \bar{v}$.
Next, by (7) and (10), we know
$$x_{n_k} - u_{n_k} \in \tau\,\partial f(u_{n_k}) \quad\text{and}\quad y_{n_k} - v_{n_k} \in \tau\,\partial g(v_{n_k}),$$
and this implies that
$$\bar{x} - \bar{u} \in \tau\,\partial f(\bar{u}) \quad\text{and}\quad \bar{y} - \bar{v} \in \tau\,\partial g(\bar{v}).$$
Since $s_n = \operatorname{prox}_{\tau h(\cdot, v_n)}(2u_n - x_n)$ and $t_n = \operatorname{prox}_{\tau h(s_n, \cdot)}(2v_n - y_n)$, we know
$$u_{n_k} - s_{n_k} + u_{n_k} - x_{n_k} \in \tau\,\partial_x h(s_{n_k}, v_{n_k}),$$
and
$$v_{n_k} - t_{n_k} + v_{n_k} - y_{n_k} \in \tau\,\partial_y h(s_{n_k}, t_{n_k}).$$
So, we have
$$\bar{u} - \bar{x} \in \tau\,\partial_x h(\bar{u}, \bar{v}) \quad\text{and}\quad \bar{v} - \bar{y} \in \tau\,\partial_y h(\bar{u}, \bar{v}).$$
This implies that
$$0 \in \partial f(\bar{u}) + \partial_x h(\bar{u}, \bar{v}) \quad\text{and}\quad 0 \in \partial g(\bar{v}) + \partial_y h(\bar{u}, \bar{v}).$$
So, $(\bar{u}, \bar{v})$ is a critical point of $J$, and this implies that $(\bar{u}, \bar{v})$ is a solution of the problem (BOP). □
Theorem 2.
In Theorem 1, if $\rho > 0$ and we further assume that $f$ and $g$ are $\rho$-strongly convex, then there exist $\bar{u} \in H_1$ and $\bar{v} \in H_2$ such that $u_n \to \bar{u}$ and $v_n \to \bar{v}$, where $(\bar{u}, \bar{v})$ is a solution of the problem (BOP).
Proof. 
By Theorem 1, there exist $\bar{x}, \bar{u} \in H_1$, $\bar{y}, \bar{v} \in H_2$, and a subsequence $\{(x_{n_k}, y_{n_k}, u_{n_k}, v_{n_k})\}_{k\in\mathbb{N}}$ of $\{(x_n, y_n, u_n, v_n)\}_{n\in\mathbb{N}}$ such that $x_{n_k} \to \bar{x}$, $y_{n_k} \to \bar{y}$, $u_{n_k} \to \bar{u}$, $v_{n_k} \to \bar{v}$, and $(\bar{u}, \bar{v})$ is a solution of the problem (BOP). Further, we have
$$\bar{x} - \bar{u} \in \tau\,\partial f(\bar{u}) \quad\text{and}\quad \bar{y} - \bar{v} \in \tau\,\partial g(\bar{v}),$$
and
$$\bar{u} - \bar{x} \in \tau\,\partial_x h(\bar{u}, \bar{v}) \quad\text{and}\quad \bar{v} - \bar{y} \in \tau\,\partial_y h(\bar{u}, \bar{v}).$$
Next, let $\{(\hat{u}_{n_k}, \hat{v}_{n_k})\}_{k\in\mathbb{N}}$ be any subsequence of $\{(u_n, v_n)\}_{n\in\mathbb{N}}$ such that $\hat{u}_{n_k} \to \hat{u}$ and $\hat{v}_{n_k} \to \hat{v}$. Clearly, the corresponding subsequence $\{(\hat{x}_{n_k}, \hat{y}_{n_k})\}_{k\in\mathbb{N}}$ of $\{(x_n, y_n)\}_{n\in\mathbb{N}}$ is bounded. So, without loss of generality, we may assume that $\hat{x}_{n_k} \to \hat{x}$ and $\hat{y}_{n_k} \to \hat{y}$. Then, it follows from the proof of Theorem 1 that $(\hat{u}, \hat{v})$ is a solution of the problem (BOP), and
$$\hat{x} - \hat{u} \in \tau\,\partial f(\hat{u}) \quad\text{and}\quad \hat{y} - \hat{v} \in \tau\,\partial g(\hat{v}),$$
and
$$\hat{u} - \hat{x} \in \tau\,\partial_x h(\hat{u}, \hat{v}) \quad\text{and}\quad \hat{v} - \hat{y} \in \tau\,\partial_y h(\hat{u}, \hat{v}).$$
By (45) and (47), and since $f$ is $\rho$-strongly convex, we have
$$\tau\rho\,\|\bar{u} - \hat{u}\|^2 \leq \langle (\hat{x} - \hat{u}) - (\bar{x} - \bar{u}), \hat{u} - \bar{u}\rangle,$$
and this implies that
$$\tau\rho\,\|\bar{u} - \hat{u}\|^2 \leq \langle \hat{x} - \bar{x}, \hat{u} - \bar{u}\rangle - \|\bar{u} - \hat{u}\|^2.$$
Since $x \mapsto h(x, y)$ is convex, we have
$$0 \leq \langle (\hat{u} - \hat{x}) - (\bar{u} - \bar{x}), \hat{u} - \bar{u}\rangle = \langle \bar{x} - \hat{x}, \hat{u} - \bar{u}\rangle + \|\hat{u} - \bar{u}\|^2.$$
By (50) and (51), we know $\bar{u} = \hat{u}$. Similarly, we have $\bar{v} = \hat{v}$. Therefore, we know $u_n \to \bar{u}$ and $v_n \to \bar{v}$, and the proof is completed. □

Author Contributions

Conceptualization, M.-S.L. and C.-S.C.; methodology, M.-S.L.; formal analysis, C.-S.C.; resources, M.-S.L.; writing—original draft preparation, C.-S.C.; writing—review and editing, M.-S.L. All authors have read and agreed to the published version of the manuscript.

Funding

Chih-Sheng Chuang was supported by the National Science and Technology Council (NSTC 112-2115-M-415-001).

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BOP    Biconvex Optimization Problem (or Block Optimization Problem)
BCD    Block Coordinate Descent algorithm
PALM    Proximal Alternating Linearized Minimization
ASAP    Alternating Structure-Adapted Proximal gradient descent algorithm

References

  1. Grant, M.; Boyd, S.; Ye, Y. Disciplined convex programming. In Global Optimization: From Theory to Implementation, Nonconvex Optimization and Its Applications; Liberti, L., Maculan, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 155–210. [Google Scholar]
  2. Al-Shatri, H.; Li, X.; Ganesan, R.S.; Klein, A.; Weber, T. Maximizing the sum rate in cellular networks using multiconvex optimization. IEEE Trans. Wirel. Commun. 2016, 15, 3199–3211. [Google Scholar] [CrossRef]
  3. Bolte, J.; Sabach, S.; Teboulle, M. Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. Ser. A 2014, 146, 459–494. [Google Scholar]
  4. Che, H.; Wang, J. A Two-Timescale Duplex Neurodynamic Approach to Biconvex Optimization. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2503–2514. [Google Scholar] [CrossRef] [PubMed]
  5. Chiu, W.Y. Method of reduction of variables for bilinear matrix inequality problems in system and control designs. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 1241–1256. [Google Scholar] [CrossRef]
  6. Fu, X.; Huang, K.; Sidiropoulos, N.D. On identifiability of nonnegative matrix factorization. IEEE Signal Process. Lett. 2018, 25, 328–332. [Google Scholar] [CrossRef]
  7. Gorski, J.; Pfeuffer, F.; Klamroth, K. Biconvex sets and optimization with biconvex functions: A survey and extensions. Math. Methods Oper. Res. 2007, 66, 373–407. [Google Scholar] [CrossRef]
  8. Hours, J.H.; Jones, C.N. A parametric multiconvex splitting technique with application to real-time NMPC. In Proceedings of the 53rd IEEE Conference on Decision and Control, Los Angeles, CA, USA, 15–17 December 2014; pp. 5052–5057. [Google Scholar]
  9. Li, G.; Wen, C.; Zheng, W.X.; Zhao, G. Iterative identification of block-oriented nonlinear systems based on biconvex optimization. Syst. Control Lett. 2015, 79, 68–75. [Google Scholar] [CrossRef]
  10. Nikolova, M.; Tan, P. Alternating structure-adapted proximal gradient descent for nonconvex nonsmooth block-regularized problems. SIAM J. Optim. 2019, 29, 2053–2078. [Google Scholar] [CrossRef]
  11. Shah, S.; Yadav, A.K.; Castillo, C.D.; Jacobs, D.W.; Studer, C.; Goldstein, T. Biconvex Relaxation for Semidefinite Programming in Computer Vision. In Computer Vision—ECCV 2016; Proceedings of the 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer: Cham, Switzerland, 2016. [Google Scholar]
  12. Wen, Z.; Yin, W.; Zhang, Y. Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Math. Program. Comput. 2012, 4, 333–361. [Google Scholar] [CrossRef]
  13. Xu, Y.; Yin, W. A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion. SIAM J. Imaging Sci. 2013, 6, 1758–1789. [Google Scholar] [CrossRef]
  14. Auslender, A. Asymptotic properties of the Fenchel dual functional and applications to decomposition problems. J. Optim. Theory Appl. 1992, 73, 427–449. [Google Scholar] [CrossRef]
  15. Bauschke, H.H.; Combettes, P.L. Convex Functions: Variants. In Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin, Germany, 2011; pp. 143–153. [Google Scholar]
  16. Elser, V.; Rankenburg, I.; Thibault, P. Searching with iterated maps. Proc. Natl. Acad. Sci. USA 2007, 104, 418–423. [Google Scholar] [CrossRef] [PubMed]
  17. Gravel, S.; Elser, V. Divide and concur: A general approach to constraint satisfaction. Phys. Rev. E 2008, 78, 036706. [Google Scholar] [CrossRef] [PubMed]
  18. Aragón Artacho, F.J.; Borwein, J.M. Global convergence of a non-convex Douglas–Rachford iteration. J. Glob. Optim. 2013, 57, 753–769. [Google Scholar] [CrossRef]
  19. Aragón Artacho, F.J.; Campoy, R. A new projection method for finding the closest point in the intersection of convex sets. Comput. Optim. Appl. 2018, 69, 99–132. [Google Scholar] [CrossRef]
  20. Bauschke, H.H.; Moursi, W.M. On the Douglas–Rachford algorithm. Math. Program. 2017, 164, 263–284. [Google Scholar] [CrossRef]
  21. Borwein, J.M.; Sims, B. The Douglas–Rachford Algorithm in the Absence of Convexity. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering; Springer Optimization and Its Applications; Springer: New York, NY, USA, 2011; Volume 49, pp. 93–109. [Google Scholar]
  22. Eckstein, J.; Bertsekas, D.P. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318. [Google Scholar] [CrossRef]
  23. Butnariu, D.; Iusem, A.N. Totally Convex Functions. In Totally Convex Functions for Fixed Points Computation and Infinite Dimensional Optimization; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2000; pp. 2–45. [Google Scholar]
  24. Marino, G.; Xu, H.K. Convergence of generalized proximal point algorithm. Comm. Pure Appl. Anal. 2004, 3, 791–808. [Google Scholar] [CrossRef]
  25. Rockafellar, R.T.; Wets, R.J.-B. Variational Analysis; Springer: New York, NY, USA, 1998. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
