Article

Dynamic Stepsize Techniques in DR-Submodular Maximization †

School of Mathematics and Statistics, Shandong Normal University, Jinan 250014, China
* Author to whom correspondence should be addressed.
Parts of Section 3 and Section 5 in this paper appeared in Proceedings of the 28th International Computing and Combinatorics Conference (COCOON 2022), Shenzhen, China, 22–24 October 2022.
Mathematics 2025, 13(9), 1447; https://doi.org/10.3390/math13091447
Submission received: 13 March 2025 / Revised: 18 April 2025 / Accepted: 24 April 2025 / Published: 28 April 2025
(This article belongs to the Special Issue Optimization Theory, Method and Application, 2nd Edition)

Abstract

The Diminishing-Return (DR)-submodular function maximization problem has garnered significant attention across various domains in recent years. Classic methods often employ continuous greedy or Frank–Wolfe approaches to tackle this problem; however, high iteration and subproblem solver complexity are typically required to control the approximation ratio effectively. In this paper, we introduce a strategy that employs a binary search to find a dynamic stepsize, integrating it into traditional algorithm frameworks to address problems with different constraint types. We demonstrate that algorithms using this dynamic stepsize strategy can achieve approximation ratios comparable to those obtained with a fixed stepsize strategy. In the monotone case, the iteration complexity is $O\big(\|\nabla F(\mathbf{0})\|_1\,\epsilon^{-1}\big)$, while in the non-monotone scenario it is $O\big((n+\|\nabla F(\mathbf{0})\|_1)\,\epsilon^{-1}\big)$, where $F$ denotes the objective function. We then apply this strategy to stochastic DR-submodular function maximization problems, obtaining corresponding iteration complexity results in a high-probability form. Furthermore, theoretical examples as well as numerical experiments validate that this stepsize selection strategy outperforms the fixed stepsize strategy.

1. Introduction

The problem of maximizing DR-submodular functions, which generalizes set submodular functions to more general domains such as integer lattices and box regions in Euclidean spaces, has emerged as a prominent research topic in optimization. Set submodular functions inherently capture the diminishing returns property, where the marginal gain of adding an element to a set decreases as the set expands. DR-submodular functions extend this fundamental property to continuous and mixed-integer domains, enabling broader applications in machine learning, graph theory, economics, and operations research [1,2,3,4]. This extension not only addresses practical problems with continuous variables but also provides a unified framework for solving set submodular maximization through continuous relaxation techniques [5,6].
The problem of deterministic DR-submodular function maximization considered in this paper can be formally written as follows:
$\max_{x\in\mathcal{X}}\;F(x),\qquad(1)$
where the feasible set $\mathcal{X}\subseteq[0,1]^n$ is compact and convex, and $F:[0,1]^n\to\mathbb{R}_+$ is a differentiable DR-submodular function. While this problem is generally NP-hard, under certain structural assumptions, approximation algorithms with constant approximation ratios can be developed. For unconstrained scenarios, Niazadeh et al. [7] established a tight $\frac{1}{2}$-approximation algorithm, aligning with classical results for unconstrained submodular maximization. In constrained settings, monotonicity plays a critical role: convex-constrained monotone DR-submodular maximization admits a $\big(1-\frac{1}{e}\big)$-approximation [1], whereas non-monotone cases under down-closed constraints achieve a $\frac{1}{e}$-approximation [1], with recent improvements to $0.401$ [8]. For general convex constraints containing the origin, a $\frac{1-\min_{x\in\mathcal{X}}\|x\|_\infty}{4}$-approximation guarantee is attainable [9,10,11].
While deterministic DR-submodular maximization has been extensively studied, practical scenarios often involve uncertainties where the objective function can only be accessed through stochastic evaluations. This motivates the investigation of stochastic DR-submodular maximization problems, which are typically formulated as follows:
$\max_{x\in\mathcal{X}}\;F(x):=\mathbb{E}_{\xi\sim P}\big[f(x,\xi)\big],\qquad(2)$
where the DR-submodular function $F:[0,1]^n\to\mathbb{R}_+$ is defined as the expectation of the stochastic functions $f:[0,1]^n\times\Xi\to\mathbb{R}_+$ with $\xi\sim P$. Building upon the Lyapunov framework established for deterministic problems [10], recent works like [12,13] have developed stochastic variants of continuous greedy algorithms. Specifically, Lian et al. [14] proposed SPIDER-based methods that reduce the gradient evaluation complexity from $O(\epsilon^{-3})$ in earlier works [12] to $O(\epsilon^{-2})$ through variance reduction techniques.
The Lyapunov framework proposed by Du et al. [9,10] provides a unified perspective for analyzing DR-submodular maximization algorithms. By modeling algorithms as discretizations of ordinary differential equations (ODEs) over the time domain $[0,1]$, this approach establishes a direct connection between continuous-time dynamics and discrete-time implementations. Specifically, the approximation ratio of discrete algorithms differs from their continuous counterparts by a residual term that diminishes as the stepsize approaches zero. However, the use of constant stepsizes in this framework imposes a fundamental limitation: the required number of iterations grows inversely with the stepsize, leading to a high computational cost in both theory and practical implementations.
To address the limitations of fixed stepsize strategies, recent advances have explored dynamic stepsize adaptation for submodular optimization. For box-constrained DR-submodular maximization, Chen et al. [15] developed a $\frac{1}{2}$-approximation algorithm with $O(1/\epsilon)$ adaptive rounds, where stepsizes are selected through enumeration over a candidate set of size $O(\log(1/\epsilon))$. Furthermore, Ene et al. [16] achieved a $\frac{1}{e}$-approximation for non-monotone cases using $O\big(\frac{\log n}{\epsilon}\log\frac{1}{\epsilon}\log^2\frac{n+m}{\epsilon}\big)$ parallel rounds, and a $\big(1-\frac{1}{e}\big)$-approximation for monotone cases with $O\big(\frac{\log n}{\epsilon}\log^2\frac{m+n}{\epsilon}\big)$ rounds. These works inspire our development of binary search-based dynamic stepsizes that achieve comparable approximation guarantees while reducing computational complexity.
The dynamic stepsize strategy in this paper leverages binary search to approximate solutions to univariate equations by selecting intervals based on midpoint function value signs, ensuring convergence via monotonicity and continuity. While stochastic methods like simulated annealing [17] address non-convex/stochastic problems via probabilistic criteria, their guarantees depend on cooling schedules and lack deterministic convergence. In contrast, our binary search framework exploits the monotonicity/continuity of the stepsize equation (Equation (38)), achieving sufficiently precise solutions with guaranteed efficiency, avoiding cooling parameter dependencies and focusing on theoretical foundations for DR-submodular structures.
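The interval-halving principle just described can be sketched in a few lines of Python (a generic illustration with hypothetical names, not the paper's implementation): for a continuous, monotonically increasing function $\phi$ that brackets a sign change, each midpoint evaluation discards half of the interval.

```python
def bisect_root(phi, lo, hi, steps):
    """Approximate the root of a continuous, monotonically increasing
    function phi on [lo, hi], assuming phi(lo) <= 0 <= phi(hi).
    After `steps` halvings the bracket has length (hi - lo) / 2**steps."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if phi(mid) <= 0:
            lo = mid   # the sign flip lies in the right half
        else:
            hi = mid   # the sign flip lies in the left half
    return hi          # right endpoint of the final bracket

# Example: phi(d) = 2*d - 1 has its root at d = 0.5.
root = bisect_root(lambda d: 2.0 * d - 1.0, 0.0, 1.0, steps=30)
```

Because only the sign of the midpoint value is needed, the number of function evaluations is logarithmic in the target precision, which is what drives the logarithmic factors in the complexity bounds below.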

1.1. Contributions

This paper introduces a novel dynamic stepsize strategy for DR-submodular maximization problems, offering significant improvements over traditional fixed stepsize methods. Our approach achieves state-of-the-art approximation guarantees while reducing computational complexity. Notably, the iteration complexity of our algorithms is independent of the smoothness parameter L. In the monotone case, it is also independent of the variable dimension n. Furthermore, both the gradient evaluation complexity and function evaluation complexity exhibit only a logarithmic dependence on the problem dimension n and the smoothness parameter L. Below, we summarize the key contributions:
  • Deterministic DR-Submodular Maximization: For deterministic settings, our dynamic stepsize strategy achieves the following complexity bounds:
    In the monotone case, the iteration complexity is $O\big(\frac{\|\nabla F(\mathbf{0})\|_1}{\epsilon}\big)$, where $\|\nabla F(\mathbf{0})\|_1$ reflects the gradient norm at the origin, and $\epsilon$ denotes the discretization error.
    For non-monotone functions, the iteration complexity increases to $O\big(\frac{n+\|\nabla F(\mathbf{0})\|_1}{\epsilon}\big)$, accounting for the added challenge posed by non-monotonicity.
    To determine the stepsize dynamically, we employ a binary search procedure, introducing an additional factor of $O\big(\log\frac{nL}{\epsilon}\big)$ to the evaluation complexity.
  • Stochastic DR-Submodular Maximization: Extending our approach to stochastic settings, we achieve comparable complexity results with high probability:
    For monotone objective functions, the iteration complexity remains $O\big(\frac{\|\nabla F(\mathbf{0})\|_1}{\epsilon}\big)$.
    In the non-monotone case, the complexity is $O\big(\frac{n+\|\nabla F(\mathbf{0})\|_1}{\epsilon}\big)$.
    These results demonstrate that our method maintains efficiency regardless of the smoothness parameter L, making it particularly suitable for large-scale stochastic optimization problems.
  • Empirical Validation: We validate the effectiveness of our dynamic stepsize strategy through three examples: multilinear extensions of set submodular functions, DR-submodular quadratic functions, and softmax extensions for determinantal point processes (DPPs). The results confirm that our approach outperforms fixed stepsize strategies in terms of both iteration complexity and practical performance.
Table 1 provides a unified overview of our algorithms’ theoretical guarantees and computational complexities (iteration and gradient evaluation bounds) under diverse problem settings, enabling readers to rapidly grasp the efficiency and adaptability of our dynamic stepsize framework.

1.2. Organization

The organization of the rest of this manuscript is as follows. Section 2 introduces the fundamental concepts and key results that form the basis of our work. In Section 3, we outline the design principles of our dynamic stepsize strategy and establish theoretical guarantees for both monotone and non-monotone deterministic objective functions. Section 4 extends our approach to stochastic settings, presenting algorithms and analyses tailored for monotone and non-monotone DR-submodular functions under uncertainty. In Section 5, we evaluate the computational efficiency of our strategy through its application to three canonical DR-submodular functions, with comprehensive numerical experiments validating the efficacy of the dynamic stepsize approach. Section 6 summarizes our key findings while discussing both limitations and promising future research directions.

2. Preliminaries

We begin by introducing the formal definition of a non-negative DR-submodular function defined on the continuous domain $[0,1]^n$, along with some fundamental properties.
Definition 1.
A function $F:[0,1]^n\to\mathbb{R}_+$ is said to be DR-submodular if for any two vectors $x,y\in[0,1]^n$ satisfying $x\le y$ (coordinate-wise) and any scalar $a\ge0$ such that $x+ae_i,\ y+ae_i\in[0,1]^n$, the following inequality holds:
$F(x+ae_i)-F(x)\ge F(y+ae_i)-F(y),$
for all $i=1,\dots,n$. Here, $e_i$ represents the $i$-th standard basis vector in $\mathbb{R}^n$.
This property reflects the diminishing returns behavior of F along each coordinate direction. Specifically, the marginal gain of increasing a single coordinate diminishes as the input vector grows larger.
To facilitate further discussions, we introduce additional notation. Throughout this paper, the inequality $x\le y$ for two vectors means that $x_i\le y_i$ holds for all $i\in[n]$. Additionally, the operation $x\vee y$ is defined by $(x\vee y)_i=\max\{x_i,y_i\}$, and $x\wedge y$ by $(x\wedge y)_i=\min\{x_i,y_i\}$.
An important result is that, in the differentiable case, DR-submodularity is equivalent to the monotone decrease of the gradient. Specifically, when $F$ is differentiable, $F$ is DR-submodular if and only if [2]
$\nabla F(x)\ge\nabla F(y),\qquad\forall\, x\le y\in[0,1]^n.$
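As a numerical spot-check of this gradient characterization, the following sketch (a toy instance we introduce for illustration, not from the paper) verifies the entry-wise decrease of $\nabla F$ for a quadratic $F(x)=h^\top x-\frac{1}{2}x^\top Ax$ with $A$ entry-wise non-negative:

```python
import numpy as np

# Toy quadratic F(x) = h^T x - 0.5 * x^T A x with A entry-wise
# non-negative (a hypothetical instance): grad F(x) = h - A x, so
# x <= y coordinate-wise implies grad F(x) >= grad F(y) entry-wise.
A = np.array([[1.0, 0.3], [0.3, 1.0]])
h = np.array([1.0, 1.0])
grad = lambda x: h - A @ x

x = np.array([0.2, 0.1])
y = np.array([0.6, 0.5])                # x <= y coordinate-wise
ok = bool(np.all(grad(x) >= grad(y)))   # gradient decreases entry-wise
```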
Another essential property for differentiable DR-submodular functions that will be used in this paper is derived from the concavity-like behavior in non-negative directions, as stated in the following proposition.
Proposition 1
([18]). When $F$ is differentiable and DR-submodular, then
$\langle\nabla F(x),\,y-x\rangle\ge F(x\vee y)+F(x\wedge y)-2F(x),\qquad\forall\, x,y\in[0,1]^n.$
In this paper, we also require the function $F$ to be $L$-smooth, meaning that for any $x,y\in\mathcal{X}$, there holds
$\|\nabla F(x)-\nabla F(y)\|\le L\,\|x-y\|,$
where $\|\cdot\|$ denotes the Euclidean norm unless otherwise specified. An important property of $L$-smooth functions is that they satisfy the following necessary (but not sufficient) condition:
$F(y)\ge F(x)+\langle\nabla F(x),\,y-x\rangle-\frac{L}{2}\,\|y-x\|^2.$
For the stochastic DR-submodular maximization problem (2), we introduce additional notations to describe the stochastic approximation of the objective function’s full gradient.
- At each iteration $j$, let $M_j$ denote a random subset of samples drawn from $P$, with $m$ representing the size of $M_j$.
- The stochastic gradient at $x$ is computed as
$\nabla f(x,M_j):=\frac{1}{|M_j|}\sum_{\xi\in M_j}\nabla f(x,\xi).$
- We use an unbiased estimator $h_j$ to approximate the true gradient $\nabla F(x_j)$.
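A minimal sketch of the mini-batch estimator (on a hypothetical one-dimensional stochastic objective of our own choosing) illustrates how averaging per-sample gradients approximates $\nabla F$:

```python
import random

# Mini-batch estimator grad f(x, M_j) = (1/|M_j|) * sum over xi in M_j
# of grad f(x, xi). The 1-D objective f(x, xi) = xi*x - x**2/2 is a toy
# of our own choosing: grad f(x, xi) = xi - x, so grad F(x) = E[xi] - x.

def stoch_grad(x, xi):
    return xi - x

def minibatch_grad(x, batch):
    return sum(stoch_grad(x, xi) for xi in batch) / len(batch)

random.seed(0)
batch = [random.uniform(0.0, 2.0) for _ in range(10000)]  # E[xi] = 1
g = minibatch_grad(0.5, batch)   # concentrates near grad F(0.5) = 0.5
```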
All algorithms and theoretical analyses in this paper for problems (1) and (2) rely on the following foundational assumption:
Assumption 1.
The problems under consideration satisfy these conditions:
1. $F:[0,1]^n\to\mathbb{R}_+$ is DR-submodular and $L$-smooth.
2. $F(\mathbf{0})=0$ and $\mathbf{0}\in\mathcal{X}$.
3. A Linear-Objective Optimization (LOO) oracle is available, providing solutions to
$\max_{x\in\mathcal{X}}\;c^\top x,\qquad c\in\mathbb{R}^n.$
The following assumption is essential for the stochastic problem (2).
Assumption 2.
The stochastic gradient is unbiased, i.e.,
$\mathbb{E}_{\xi}\big[\nabla f(x,\xi)\big]=\nabla F(x).$
This assumption ensures that the mini-batch gradient estimator $\nabla f(x,M_j)$ satisfies
$\mathbb{E}_{M_j}\big[\nabla f(x,M_j)\big]=\nabla F(x),$
where M j denotes the random mini-batch sampled at iteration j. This property is critical for deriving high-probability guarantees in stochastic optimization.

Lyapunov Method for DR-Submodular Maximization

As discussed in [10], Lyapunov functions play a crucial role in the analysis of algorithms. Depending on the specific problem, the Lyapunov function can take various parametric forms. Taking the monotone DR-submodular maximization problem for example, the ideal algorithm can be designed as follows:
$v(x(t))\in\arg\max_{v\in\mathcal{X}}\,\langle\nabla F(x(t)),v\rangle,\qquad\dot{x}(t)=v(x(t)).$
A unified parameterized form of the Lyapunov function is given by:
$\mathcal{L}(x(t))=a_t\,F(x(t))-b_t\,F(x^*),\qquad t\in[0,T],$
where a t and b t are time-dependent parameters.
The monotonicity of the Lyapunov function is closely tied to the approximation ratio of the algorithm, as demonstrated by the following inequality:
$F(x(T))\ge\frac{b_T-b_0}{a_T}\,F(x^*),$
where $x^*$ represents the optimal solution. The specific values of $a_T$, $b_T$, and $T$ depend on the problem under consideration and are chosen accordingly to achieve the desired theoretical guarantees. In this problem, letting $a_t=b_t=e^t$ and $T=1$ guarantees the monotonicity of $\mathcal{L}(x(t))$, and then the approximation ratio for monotone DR-submodular functions is:
$F(x(1))\ge\Big(1-\frac{1}{e}\Big)F(x^*).$
For maximizing non-monotone DR-submodular functions with down-closed constraints, the ideal algorithm can be designed as follows:
$v(x(t))\in\arg\max_{v\in\mathcal{X}\cap\{v\,:\,v\le\mathbf{1}-x(t)\}}\,\langle\nabla F(x(t)),v\rangle,$
$\dot{x}(t)=\alpha_t\,v(x(t)).$
In this problem, let $a_t=e^t$, $b_t=t$, and $T=1$. Then the best approximation ratio for non-monotone DR-submodular functions is:
$F(x(1))\ge\frac{1}{e}\,F(x^*),$
where $\mathcal{X}$ is down-closed.
For maximizing non-monotone DR-submodular functions with general convex constraints, the ideal algorithm can be designed as follows:
$v(x(t))\in\arg\max_{v\in\mathcal{X}}\,\langle\nabla F(x(t)),v\rangle,$
$\dot{x}(t)=\alpha_t\big[v(x(t))-x(t)\big].$
In this problem, let $a_t=(t+1)^2$, $b_t=t$, and $T=1$. Then the best approximation ratio for non-monotone DR-submodular functions is:
$F(x(1))\ge\frac{1}{4}\,F(x^*),$
where $\mathcal{X}$ is only convex.
In this paper, we focus on the same algorithmic ODE forms as those discussed above. However, our key improvement lies in the discretization process. Specifically, we aim to enhance the iteration complexity by employing a dynamic stepsize strategy. This approach allows for more efficient approximations while maintaining the desired theoretical guarantees, thereby advancing the state-of-the-art in DR-submodular maximization algorithms.
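To make the discretization of these ODEs concrete, the following sketch implements a fixed-stepsize discretization of the monotone continuous-greedy dynamics on the box $[0,1]^2$ (a toy setup of our own choosing, not the paper's general LOO setting; over a box, the linear maximizer simply activates the coordinates with positive gradient):

```python
import numpy as np

# Fixed-stepsize discretization of the monotone continuous-greedy ODE
# x'(t) = v(x(t)) on the box [0,1]^2 (a hypothetical toy instance).
A = np.array([[1.0, 0.5], [0.5, 1.0]])  # entry-wise non-negative
h = np.array([2.0, 2.0])
F = lambda x: h @ x - 0.5 * x @ A @ x   # monotone DR-submodular on [0,1]^2
grad = lambda x: h - A @ x

K = 100                                  # number of iterations
delta = 1.0 / K                          # fixed stepsize; steps sum to 1
x = np.zeros(2)
for _ in range(K):
    v = (grad(x) > 0).astype(float)      # LOO solution over the box
    x = x + delta * v
# The iterates drift to the corner (1, 1), where F(1, 1) = 2.5.
```

The dynamic stepsize strategy developed in the next section replaces the fixed `delta = 1/K` above by a per-iteration value found through binary search.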

3. Deterministic Scenarios

In this section, we discuss the dynamic stepsize algorithms for maximizing deterministic DR-submodular functions, considering two cases: monotone and non-monotone. The fixed stepsize versions of the algorithms discussed in this section already exist in the literature, as documented in [6,9,10].

3.1. Monotone Case

In this subsection, we discuss a dynamic stepsize algorithm designed to maximize a monotone DR-submodular function while ensuring an approximation guarantee. To better illustrate the strategy for selecting the stepsize, we first introduce an idealized algorithm that relies on an oracle capable of solving a univariate continuous monotone equation. Subsequently, we propose a practical and implementable version of the algorithm.

3.1.1. An Idealized Algorithm

The ideal version referred to above is depicted as Algorithm 1. Unlike the fixed stepsize approach used in [10], where $\delta_j\equiv\frac{1}{K}$, our algorithm determines the stepsize by solving Equation (22). In brief, the stepsize is selected to ensure that the directional derivatives along $v_j$ at successive iterates differ by exactly $\epsilon$.
Algorithm 1: Ideal CG
Before analyzing the computational complexity and approximation guarantees of Algorithm 1, we must verify the feasibility of its output.
Lemma 1.
Algorithm 1 outputs a solution satisfying $x\in\mathcal{X}$.
Proof. 
Note that
$x(1)=x(0)+\sum_j\delta_j v_j=\sum_j\delta_j v_j,$
and $\sum_j\delta_j=\sum_j(t_{j+1}-t_j)=1$. By the feasibility of $v_j$ and the convexity of $\mathcal{X}$, we can prove the conclusion. □
The iteration complexity bound is established as follows.
Lemma 2.
The iteration number K of Algorithm 1 satisfies
$K\le\frac{\min\{\|\nabla F(\mathbf{0})\|_1,\;nL\}}{\epsilon}+2.\qquad(24)$
Proof. 
For $j=0,\dots,K-2$, we have
$\langle\nabla F(x_j)-\nabla F(x_{j+1}),\mathbf{1}\rangle\ge\langle\nabla F(x_j)-\nabla F(x_{j+1}),v_j\rangle=\epsilon.$
Summing up this inequality from $j=0$ to $K-3$ yields
$(K-2)\,\epsilon\le\sum_{j=0}^{K-3}\langle\nabla F(x_j)-\nabla F(x_{j+1}),\mathbf{1}\rangle=\langle\nabla F(x_0)-\nabla F(x_{K-2}),\mathbf{1}\rangle.$
Noting that $\|\nabla F(\mathbf{0})\|_1\ge\langle\nabla F(x_0)-\nabla F(x_{K-2}),\mathbf{1}\rangle$ by the monotonicity of $F$, we can conclude that $K\le\frac{\|\nabla F(\mathbf{0})\|_1}{\epsilon}+2$.
By the fact that $F$ is $L$-smooth and DR-submodular, there holds
$\langle\nabla F(x_0)-\nabla F(x_{K-2}),\mathbf{1}\rangle\le nL.$
Combining these bounds completes the proof.    □
As outlined in [10], the iteration complexity of the Frank–Wolfe algorithm for DR-submodular maximization is $O\big(\frac{nL}{\epsilon}\big)$. We now present the approximation guarantee and complexity results for Algorithm 1.
Theorem 1.
Assume that F is monotone. Then Algorithm 1 returns a solution x satisfying
$F(x)\ge(1-e^{-1})\,F(x^*)-\epsilon,$
where x * denotes the optimal solution, with iteration complexity given by Equation (24).
Proof. 
Define the potential function $\mathcal{L}(j):\{0,\dots,K-1\}\to\mathbb{R}$ as
$\mathcal{L}(j)=e^{t_j}\big(F(x_j)-F(x^*)\big).\qquad(34)$
For $j=0,\dots,K-1$, there holds
$\mathcal{L}(j+1)-\mathcal{L}(j)=e^{t_{j+1}}\big(F(x_{j+1})-F(x^*)\big)-e^{t_j}\big(F(x_j)-F(x^*)\big)$
$=e^{t_{j+1}}\big(F(x_{j+1})-F(x_j)\big)+\big(e^{t_{j+1}}-e^{t_j}\big)\big(F(x_j)-F(x^*)\big)$
$=e^{t_{j+1}}\int_0^{\delta_j}\langle\nabla F(x_j+t\,v_j),v_j\rangle\,\mathrm{d}t+\big(e^{t_{j+1}}-e^{t_j}\big)\big(F(x_j)-F(x^*)\big)$
$\ge e^{t_{j+1}}\delta_j\big(\langle\nabla F(x_j),v_j\rangle-\epsilon\big)+\big(e^{t_{j+1}}-e^{t_j}\big)\big(F(x_j)-F(x^*)\big)$
$\ge e^{t_{j+1}}\delta_j\big(\langle\nabla F(x_j),x^*\rangle-\epsilon\big)+\big(e^{t_{j+1}}-e^{t_j}\big)\big(F(x_j)-F(x^*)\big)$
$\ge e^{t_{j+1}}\delta_j\big(\langle\nabla F(x_j),x_j\vee x^*-x_j\rangle-\epsilon\big)+\big(e^{t_{j+1}}-e^{t_j}\big)\big(F(x_j)-F(x^*)\big)$
$\ge e^{t_{j+1}}\delta_j\big(F(x_j\vee x^*)-F(x_j)-\epsilon\big)+\big(e^{t_{j+1}}-e^{t_j}\big)\big(F(x_j)-F(x^*)\big)$
$\ge e^{t_{j+1}}\delta_j\big(F(x^*)-F(x_j)-\epsilon\big)+\big(e^{t_{j+1}}-e^{t_j}\big)\big(F(x_j)-F(x^*)\big)$
$=-e^{t_{j+1}}\delta_j\epsilon+\big(e^{t_{j+1}}\delta_j-e^{t_{j+1}}+e^{t_j}\big)\big(F(x^*)-F(x_j)\big)$
$\ge-e^{t_{j+1}}\delta_j\epsilon,$
where the first inequality holds by the monotone decrease of $\nabla F$ together with the stepsize rule (22), the second by the definition of $v_j$, the third and fifth by the monotonicity of $F$, the fourth by Proposition 1, and the last since $e^{t_{j+1}}\delta_j\ge e^{t_{j+1}}-e^{t_j}$ and $F(x^*)\ge F(x_j)$. Thus,
$\mathcal{L}(K)-\mathcal{L}(0)=\sum_{j=0}^{K-1}\big(\mathcal{L}(j+1)-\mathcal{L}(j)\big)\ge-\sum_{j=0}^{K-1}e^{t_{j+1}}\delta_j\epsilon\ge-e\epsilon\sum_{j=0}^{K-1}\delta_j=-e\epsilon.$
Simultaneously, there holds
$\mathcal{L}(K)-\mathcal{L}(0)=e\big(F(x)-F(x^*)\big)-\big(F(x_0)-F(x^*)\big).$
Thus, we finally have
$F(x)\ge(1-e^{-1})\,F(x^*)-\epsilon.$
The proof is completed.    □

3.1.2. Algorithm with Binary Search

An oracle for solving Equation (22) exactly is not always feasible in general cases. To address this, we propose employing a binary search technique to compute an approximate solution. This approach preserves the approximation ratio achieved by Algorithm 1, leading to the development of Algorithm 2. The key distinction between these two algorithms lies in the determination of the stepsize δ j at each iteration.
In Algorithm 2, we utilize the bisection method to compute a stepsize $\delta_j$ that satisfies condition (38). The implementation begins by initializing the search interval as $[0,t_j]$. At each iteration, we evaluate the left-hand side of (38) at the midpoint of the current interval and determine whether the sought solution lies in the right sub-interval. Depending on this evaluation, we systematically discard either the left or the right half of the interval and repeat the process. The well-definedness of this procedure is guaranteed by the monotonicity and continuity of the left-hand side expression with respect to $\delta_j$.
Algorithm 2: Bisection continuous-greedy
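On a toy DR-submodular quadratic (an illustrative instance of our own choosing, not the paper's experiments), the bisection step of Algorithm 2 can be sketched as follows; here $\phi(\delta)=\langle\nabla F(x_j),v_j\rangle-\langle\nabla F(x_j+\delta v_j),v_j\rangle-\epsilon/2$ is increasing and continuous in $\delta$, so interval halving applies directly.

```python
import numpy as np

# Toy quadratic F(x) = h^T x - 0.5 * x^T A x with A >= 0 entry-wise,
# so grad F(x) = h - A x is antitone (hypothetical instance).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
h = np.array([3.0, 3.0])
grad = lambda x: h - A @ x

def stepsize_by_bisection(x, v, eps, hi, steps):
    phi = lambda d: grad(x) @ v - grad(x + d * v) @ v - eps / 2.0
    lo = 0.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if phi(mid) <= 0.0:
            lo = mid
        else:
            hi = mid
    return hi                    # right endpoint of the final bracket

x = np.zeros(2)
v = np.ones(2)                   # a feasible ascent direction
eps = 0.2
delta = stepsize_by_bisection(x, v, eps, hi=1.0, steps=40)
# For this quadratic, phi(d) = d * (v^T A v) - eps/2, so the exact
# stepsize is (eps/2) / (v^T A v) = 0.1 / 6.
```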
Following a similar analysis to that of Algorithm 1, the number of iterations K in the “while” loop of Algorithm 2 can also be bounded.
Corollary 1.
For Algorithm 2, the iteration number satisfies
$K\le\frac{2\min\{\|\nabla F(\mathbf{0})\|_1,\;nL\}}{\epsilon}+2.$
To analyze the gradient evaluation complexity of F, it is necessary to examine the number of binary search steps required in each iteration to determine the stepsize.
Lemma 3.
For each iteration $j\in\{0,\dots,K-1\}$, the stepsize $\delta_j$ can be determined within at most $M=1+\big\lceil\log_2\frac{nL}{\epsilon}\big\rceil$ binary search steps.
Proof. 
Let $\delta^*$ represent the exact solution to the equation
$\phi(\delta_j):=\langle\nabla F(x_j),v_j\rangle-\langle\nabla F(x_j+\delta_j v_j),v_j\rangle-\frac{\epsilon}{2}=0.$
Due to the monotonicity and continuity of the univariate function $\phi(\delta_j)$, the binary search process can identify an interval of length $2^{-M}$ that contains $\delta^*$. Let $\delta_j$ denote the right endpoint of this interval. Consequently, we have
$0\le\delta_j-\delta^*\le 2^{-M}\le\frac{\epsilon}{2nL}.$
By the $L$-smoothness property of $F$, it follows that
$\langle\nabla F(x_j+\delta^* v_j),v_j\rangle-\langle\nabla F(x_j+\delta_j v_j),v_j\rangle=\langle\nabla F(x_j+\delta^* v_j)-\nabla F(x_j+\delta_j v_j),v_j\rangle\le L\,(\delta_j-\delta^*)\,\|v_j\|^2\le L\cdot\frac{\epsilon}{2nL}\cdot n=\frac{\epsilon}{2}.$
Thus, we obtain
$\langle\nabla F(x_j),v_j\rangle-\langle\nabla F(x_j+\delta_j v_j),v_j\rangle=\frac{\epsilon}{2}+\langle\nabla F(x_j+\delta^* v_j),v_j\rangle-\langle\nabla F(x_j+\delta_j v_j),v_j\rangle\le\epsilon.$
Additionally, by the monotone decrease of $\nabla F$, we know
$\langle\nabla F(x_j+\delta^* v_j),v_j\rangle\ge\langle\nabla F(x_j+\delta_j v_j),v_j\rangle.$
Combining these results with the definition of δ * , the proof is completed.    □
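A quick arithmetic check of the step count in Lemma 3 (a worked instance with illustrative parameter values of our own choosing): after $M=1+\lceil\log_2\frac{nL}{\epsilon}\rceil$ halvings of an interval of length at most $1$, the bracket is no longer than $\frac{\epsilon}{2nL}$.

```python
import math

# M = 1 + ceil(log2(n * L / eps)) halvings shrink a unit-length
# bracket below eps / (2 * n * L), as used in Lemma 3.
def bisection_steps(n, L, eps):
    return 1 + math.ceil(math.log2(n * L / eps))

n, L, eps = 1000, 10.0, 0.01             # illustrative values
M = bisection_steps(n, L, eps)           # = 1 + ceil(log2(1e6)) = 21
width = 2.0 ** (-M)                      # bracket length after M halvings
# width <= eps / (2 * n * L) holds by construction
```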
The approximation guarantee and oracle complexity of Algorithm 2 can now be derived.
Theorem 2.
Assume F is monotone. Then, Algorithm 2 outputs a solution x satisfying
$F(x)\ge(1-e^{-1})\,F(x^*)-\epsilon.$
The LOO oracle complexity is at most $O\big(\frac{\|\nabla F(\mathbf{0})\|_1}{\epsilon}\big)$, and the gradient evaluation complexity is at most $O\big(\frac{\|\nabla F(\mathbf{0})\|_1}{\epsilon}\log\frac{nL}{\epsilon}\big)$.
Proof. 
The complexities of the LOO oracle and gradient evaluations can be established using Corollary 1 and Lemma 3. We now focus on deriving the approximation ratio.
Recall the function $\mathcal{L}(j)$ defined in (34). For $j=0,\dots,K-1$, we have
$\mathcal{L}(j+1)-\mathcal{L}(j)=e^{t_{j+1}}\int_0^{\delta_j}\langle\nabla F(x_j+t\,v_j),v_j\rangle\,\mathrm{d}t+\big(e^{t_{j+1}}-e^{t_j}\big)\big(F(x_j)-F(x^*)\big).$
From the structure of Algorithm 2, the DR-submodularity, and the monotonicity of $F$, it follows that
$\int_0^{\delta_j}\langle\nabla F(x_j+t\,v_j),v_j\rangle\,\mathrm{d}t\ge\int_0^{\delta_j}\langle\nabla F(x_j+\delta_j v_j),v_j\rangle\,\mathrm{d}t$
$\ge\delta_j\big(\langle\nabla F(x_j),v_j\rangle-\epsilon\big)$
$\ge\delta_j\big(\langle\nabla F(x_j),x^*\rangle-\epsilon\big)$
$\ge\delta_j\big(\langle\nabla F(x_j),x_j\vee x^*-x_j\rangle-\epsilon\big)$
$\ge\delta_j\big(F(x_j\vee x^*)-F(x_j)-\epsilon\big)$
$\ge\delta_j\big(F(x^*)-F(x_j)-\epsilon\big).$
Thus, we obtain
$\mathcal{L}(j+1)-\mathcal{L}(j)\ge e^{t_{j+1}}\delta_j\big(F(x^*)-F(x_j)-\epsilon\big)+\big(e^{t_{j+1}}-e^{t_j}\big)\big(F(x_j)-F(x^*)\big)$
$=-e^{t_{j+1}}\delta_j\epsilon+\big(e^{t_{j+1}}\delta_j-e^{t_{j+1}}+e^{t_j}\big)\big(F(x^*)-F(x_j)\big)$
$\ge-e^{t_{j+1}}\delta_j\epsilon.$
Summing up these inequalities for $j=0$ to $K-1$, we obtain
$\mathcal{L}(K)-\mathcal{L}(0)=\sum_{j=0}^{K-1}\big(\mathcal{L}(j+1)-\mathcal{L}(j)\big)\ge-\sum_{j=0}^{K-1}e^{t_{j+1}}\delta_j\epsilon\ge-e\epsilon.$
On the other hand, by the definition of $\mathcal{L}(j)$, we have
$\mathcal{L}(K)-\mathcal{L}(0)=e\big(F(x)-F(x^*)\big)-\big(F(\mathbf{0})-F(x^*)\big)=e\,F(x)+(1-e)\,F(x^*).$
Combining these results yields the approximation guarantee stated in the theorem.    □

3.2. Non-Monotone Case

In this subsection, we discuss the dynamic stepsize strategy for maximizing DR-submodular functions without the monotonicity assumption, along with its theoretical guarantees. Unlike the monotone case, here we categorize the constraints into two types: down-closed and general convex.

3.2.1. Down-Closed Constraint

Algorithm 3 is proposed for scenarios involving down-closed constraints, meaning that if $x\in\mathcal{X}$ and $\mathbf{0}\le y\le x$, then $y\in\mathcal{X}$. The fundamental framework is inspired by the measured continuous greedy (MCG) algorithm introduced in [6], originally proposed for maximizing the multilinear extension relaxation of submodular set functions. In [10], MCG was shown to require $O\big(\frac{nL}{\epsilon}\big)$ iterations to ensure an approximation loss of $\epsilon$.
Algorithm 3: Bisection MCG
The feasibility of the output produced by Algorithm 3 is ensured by the down-closed property of X .
Lemma 4.
The solution $x$ generated by Algorithm 3 satisfies $x\in\mathcal{X}$.
The proof is presented in Appendix A.
The absence of the monotonicity assumption for F necessitates a distinct analysis of the iteration complexity for Algorithm 3 compared to Algorithm 2.
Lemma 5.
For Algorithm 3, the number of iterations K satisfies
$K\le\min\Big\{n+\frac{2\,\|\nabla F(\mathbf{0})\|_1}{\epsilon},\;\frac{2nL}{\epsilon}\Big\}+2.$
Proof. 
The upper bound $K\le\frac{2nL}{\epsilon}+2$ follows from reasoning analogous to Corollary 1.
Note that at the $(K-1)$-th iteration of the algorithm, we have $\nabla F(x_{K-2})\not\le\mathbf{0}$, i.e., there exists at least one $i\in[n]$ such that $\frac{\partial F}{\partial x_i}\big|_{x=x_{K-2}}>0$, or else we could let $v_j=\mathbf{0}$ and terminate the algorithm.
Now, consider how the signs of the entries of $\nabla F(x_j)$ change for $j=0,\dots,K-2$.
Case I. $\nabla F(x_{K-2})\ge\mathbf{0}$. In this case, the analysis is analogous to that in Lemma 2 and we have
$K\le\frac{\|\nabla F(\mathbf{0})\|_1}{\epsilon}+2=O\Big(\frac{\|\nabla F(\mathbf{0})\|_1}{\epsilon}\Big).$
Case II. $\nabla F(x_{K-2})\not\ge\mathbf{0}$. First, we define the iteration index set $S\subseteq[K-1]$ as follows:
$S=\Big\{\,j\in[K-1]\;:\;\exists\,i\in[n]\ \text{such that}\ \frac{\partial F}{\partial x_i}\Big|_{x=x_j}>0\ \text{and}\ \frac{\partial F}{\partial x_i}\Big|_{x=x_{j+1}}<0\Big\}.$
It is obvious that $|S|\le n$.
For $j\notin S$, we have
$\big\langle[\nabla F(x_j)]_+,\mathbf{1}\big\rangle-\big\langle[\nabla F(x_{j+1})]_+,\mathbf{1}\big\rangle=\big\langle[\nabla F(x_j)]_+-[\nabla F(x_{j+1})]_+,\mathbf{1}\big\rangle\ge\big\langle\nabla F(x_j)-\nabla F(x_{j+1}),v_j\big\rangle\ge\frac{\epsilon}{2},$
since $v_j$, $\nabla F(x_j)$, and $\nabla F(x_{j+1})$ are entry-wise of the same sign. By the monotone decrease of $\nabla F(x)$, the number of these iterations can be bounded as
$\|\nabla F(\mathbf{0})\|_1\ge\big\langle[\nabla F(x_0)]_+,\mathbf{1}\big\rangle-\big\langle[\nabla F(x_{K-2})]_+,\mathbf{1}\big\rangle=\sum_{j=0}^{K-3}\Big(\big\langle[\nabla F(x_j)]_+,\mathbf{1}\big\rangle-\big\langle[\nabla F(x_{j+1})]_+,\mathbf{1}\big\rangle\Big)\ge\sum_{j\in[K-2]\setminus S}\Big(\big\langle[\nabla F(x_j)]_+,\mathbf{1}\big\rangle-\big\langle[\nabla F(x_{j+1})]_+,\mathbf{1}\big\rangle\Big)\ge\big|[K-2]\setminus S\big|\cdot\frac{\epsilon}{2}.$
The proof is completed.    □
Building upon the preceding analysis, we establish the following approximation guarantee and complexity results for Algorithm 3:
Theorem 3.
For any down-closed feasible set $\mathcal{X}\subseteq[0,1]^n$, Algorithm 3 produces a solution $x$ satisfying $F(x)\ge e^{-1}F(x^*)-\epsilon$ with LOO oracle calls bounded by
$O\big(n+\epsilon^{-1}\|\nabla F(\mathbf{0})\|_1\big),$
and gradient evaluations at most
$O\Big(\big(n+\epsilon^{-1}\|\nabla F(\mathbf{0})\|_1\big)\log\frac{nL}{\epsilon}\Big).$
Proof. 
Redefine the potential function as $\mathcal{L}(j)=e^{t_j}F(x_j)-t_jF(x^*)$. Then, for $j=0,\dots,K-1$, there holds
$\mathcal{L}(j+1)-\mathcal{L}(j)=e^{t_{j+1}}F(x_{j+1})-t_{j+1}F(x^*)-e^{t_j}F(x_j)+t_jF(x^*)$
$=e^{t_{j+1}}\big(F(x_{j+1})-F(x_j)\big)+\big(e^{t_{j+1}}-e^{t_j}\big)F(x_j)-(t_{j+1}-t_j)\,F(x^*)$
$=e^{t_{j+1}}\int_0^{\delta_je^{-\delta_j}}\big\langle\nabla F(x_j+t\,v_j),v_j\big\rangle\,\mathrm{d}t+\big(e^{t_{j+1}}-e^{t_j}\big)F(x_j)-(t_{j+1}-t_j)\,F(x^*).$
Similar to the proof of Theorem 2, we need a lower bound on the difference of function values between two adjacent iteration points when $F$ is non-monotone:
$\int_0^{\delta_je^{-\delta_j}}\big\langle\nabla F(x_j+t\,v_j),v_j\big\rangle\,\mathrm{d}t\ge\delta_je^{-\delta_j}\big(\langle\nabla F(x_j),v_j\rangle-\epsilon\big)\ge\delta_je^{-\delta_j}\big(\langle\nabla F(x_j),x_j\vee x^*-x_j\rangle-\epsilon\big)\ge\delta_je^{-\delta_j}\big(F(x_j\vee x^*)-F(x_j)-\epsilon\big)\ge\delta_je^{-\delta_j}\big((1-\|x_j\|_\infty)F(x^*)-F(x_j)-\epsilon\big),$
where the third inequality is due to Proposition 1 and the fourth is by Lemma 3 in [1], which implies that
$F(x\vee x^*)\ge\big(1-\|x\|_\infty\big)F(x^*)=\Big(1-\max_{i\in[n]}x_i\Big)F(x^*),$
for $x\in\mathcal{X}$. Additionally, for the upper bound on the $\ell_\infty$-norm of $x_j$, we have the following claim.
Claim 1.
For all the iteration points $x_j$, $j=0,\dots,K$, of Algorithm 3, we have $\|x_j\|_\infty\le 1-e^{-t_j}$.
The claim can be proved by induction. First, note that $x_0=\mathbf{0}$ satisfies $\|x_0\|_\infty\le 1-e^{-0}$. Assume that $\|x_j\|_\infty\le 1-e^{-t_j}$ for some $j$; then the proof can be finished by showing that
$(x_{j+1})_i\le\delta_je^{-\delta_j}+\big(1-\delta_je^{-\delta_j}\big)(x_j)_i\le\delta_je^{-\delta_j}+\big(1-\delta_je^{-\delta_j}\big)\big(1-e^{-t_j}\big)\le 1-e^{-t_j-\delta_j}.$
So the above formula yields
$\mathcal{L}(j+1)-\mathcal{L}(j)\ge e^{t_{j+1}}\delta_je^{-\delta_j}\big(e^{-t_j}F(x^*)-F(x_j)-\epsilon\big)+\big(e^{t_{j+1}}-e^{t_j}\big)F(x_j)-(t_{j+1}-t_j)\,F(x^*)$
$=-e^{t_{j+1}}\delta_je^{-\delta_j}\epsilon+\big(e^{t_{j+1}-t_j}\delta_je^{-\delta_j}-\delta_j\big)F(x^*)+\big(e^{t_{j+1}}-e^{t_j}-e^{t_{j+1}}\delta_je^{-\delta_j}\big)F(x_j)$
$=-e^{t_j}\delta_j\epsilon+\big(e^{t_{j+1}}-e^{t_j}-e^{t_j}\delta_j\big)F(x_j)$
$\ge-e^{t_j}\delta_j\epsilon,\qquad(75)$
where we used $t_{j+1}-t_j=\delta_j$ (so that $e^{t_{j+1}-t_j}\delta_je^{-\delta_j}-\delta_j=0$), $e^{t_j}(e^{\delta_j}-1-\delta_j)\ge0$, and $F(x_j)\ge0$.
By summing Equation (75) over $j$ from $0$ to $K-1$, we obtain
$\mathcal{L}(K)-\mathcal{L}(0)\ge-\sum_{j=0}^{K-1}e^{t_j}\delta_j\epsilon\ge-e\epsilon\sum_{j=0}^{K-1}\delta_j=-e\epsilon.$
Together with the fact that $\mathcal{L}(K)-\mathcal{L}(0)=e\,F(x)-F(x^*)$, we obtain the theorem. □

3.2.2. General Convex Constraint

This section presents a Frank–Wolfe variant designed for maximizing non-monotone DR-submodular functions under general convex constraints, where stepsizes are determined through binary search operations on Equation (38). A key distinction from prior methods in [10] and earlier approaches lies in our iterative tracking protocol. Specifically, the method requires maintaining records of both parameter vectors and their corresponding function evaluations at each iteration. Upon completing the iteration sequence, the procedure outputs the stored point achieving maximum functional value, contrasting with traditional implementations that directly return the final computed iterate.
The feasibility and approximation characteristics of solution x generated by Algorithm 4 are formally established through the following analytical results.    
Algorithm 4: Bisection Frank-Wolfe
Lemma 6.
Algorithm 4 produces a feasible solution satisfying $x\in\mathcal{X}$.
The proof is presented in Appendix B.
Theorem 4.
The solution of Algorithm 4 satisfies
$F(x)\ge\frac{1}{4}\,F(x^*)-\frac{\epsilon}{2},$
with computational complexity characterized by $O\big(n+\epsilon^{-1}\|\nabla F(\mathbf{0})\|_1\big)$ LOO oracle calls and $O\big(\big(n+\epsilon^{-1}\|\nabla F(\mathbf{0})\|_1\big)\log\frac{nL}{\epsilon}\big)$ gradient evaluations.
The proof is presented in Appendix C.

4. Stochastic DR-Submodular Function Maximization

This section investigates stochastic maximization of DR-submodular functions under two distinct settings. Section 4.1 focuses on the monotone case, establishing theoretical guarantees for constrained optimization. Building upon this foundation, Section 4.2 extends the analysis to non-monotone scenarios, addressing both down-closed constraints and generalized convex constraints. For the fixed stepsize implementations of stochastic DR-submodular maximization algorithms, we refer readers to [12,14].

4.1. Stochastic Monotone DR-Submodular Maximization

Algorithm 5 implements a SPIDER-CG framework for continuous monotone DR-submodular optimization, integrating binary search for adaptive stepsize selection. Our approach builds on the recursive gradient estimator from [14], where Lian et al. construct a gradient approximation $h_j$ of $\nabla F(x_j)$ by adding an unbiased estimator of $\nabla F(x_j)-\nabla F(x_{j-1})$ to $h_{j-1}$, and $h_0$ is given as an unbiased estimator of $\nabla F(x_0)$. Building upon this variance-reduced foundation, we adopt the binary search method to find a proper dynamic stepsize.
Algorithm 5: Bisection Stochastic CG
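The recursive estimator underlying Algorithm 5 can be sketched on a toy one-dimensional objective (a hypothetical instance of our own choosing; the variance-reduction benefit shows up because both terms of the correction are evaluated on the same mini-batch):

```python
import random

# SPIDER-style recursive estimator described above:
#   h_j = h_{j-1} + grad_f(x_j, M_j) - grad_f(x_{j-1}, M_j).
# Toy 1-D objective f(x, xi) = xi*x - x**2/2 with grad_f(x, xi) = xi - x
# and E[xi] = 1, so the true gradient is grad F(x) = 1 - x.

def stoch_grad(x, xi):
    return xi - x

def batch_grad(x, batch):
    return sum(stoch_grad(x, xi) for xi in batch) / len(batch)

random.seed(1)
def sample(m):
    return [random.uniform(0.0, 2.0) for _ in range(m)]

xs = [0.0, 0.1, 0.2, 0.3]                 # a short trajectory of iterates
h = batch_grad(xs[0], sample(5000))       # unbiased estimate of grad F(x_0)
for j in range(1, len(xs)):
    M = sample(5000)                      # one fresh mini-batch per step
    # Both correction terms use the SAME mini-batch M, so their common
    # noise cancels and the estimator's variance stays controlled.
    h = h + batch_grad(xs[j], M) - batch_grad(xs[j - 1], M)
# h now estimates grad F(x_3) = 1 - 0.3 = 0.7
```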
We have the following theorem on the theoretical results of Algorithm 5.
Theorem 5.
Assume that F is monotone and set $m = O\big(\epsilon^{-3}\log\frac{\|\nabla F(0)\|_1}{\epsilon\sigma}\big)$. Under Assumption 2, Algorithm 5 outputs a solution x satisfying
$\mathbb{E}[F(x)] \ge \Big(1 - \frac{1}{e}\Big)F(x^*) - \frac{\epsilon}{1-\epsilon}.$
With probability $1-\sigma$, the LOO oracle complexity is bounded by $O\big(\frac{\|\nabla F(0)\|_1}{\epsilon}\big)$ and the gradient evaluation complexity is bounded by
$O\Big(\frac{\|\nabla F(0)\|_1}{\epsilon^4}\cdot\log\frac{nL}{\epsilon^2}\cdot\log\frac{\|\nabla F(0)\|_1}{\epsilon\sigma}\Big).$
Proof. 
First, we prove the complexity bound. For $j = 0, \dots, K-2$, as in the deterministic case, we have
$\mathbb{E}\langle \nabla f(x_j, M_j) - \nabla f(x_{j+1}, M_j), \mathbf{1}\rangle = \langle \nabla F(x_j) - \nabla F(x_{j+1}), \mathbf{1}\rangle \ge \langle \nabla F(x_j) - \nabla F(x_{j+1}), v_j\rangle = \mathbb{E}\langle \nabla f(x_j, M_j) - \nabla f(x_{j+1}, M_j), v_j\rangle.$
According to the Chernoff bounds, let $X_i = \langle \nabla f(x_j, M_j) - \nabla f(x_{j+1}, M_j), v_j\rangle$ for the i-th sample, $X = \frac{1}{m}\sum_{i=1}^{m} X_i$, and $\mu = \langle \nabla F(x_j) - \nabla F(x_{j+1}), v_j\rangle$, satisfying
$\mathbb{P}\big(X \le (1+\epsilon)\mu\big) \ge 1 - e^{-\epsilon^2 m \mu / 3} \ge 1 - e^{-\frac{\epsilon^3 m}{6(1+\epsilon)}}.$
For each iteration, the probability of $\mu \ge \frac{1}{1+\epsilon}\cdot\frac{\epsilon}{2}$ is at least $1 - e^{-\frac{\epsilon^3 m}{6(1+\epsilon)}}$. For all j, the following inequality holds with high probability $\big(1 - e^{-\frac{\epsilon^3 m}{6(1+\epsilon)}}\big)^K$:
$\mathbb{E}\langle \nabla f(x_j, M_j) - \nabla f(x_{j+1}, M_j), v_j\rangle \ge \frac{1}{1+\epsilon}\cdot\frac{\epsilon}{2}.$
Then
$\langle \nabla F(x_0) - \nabla F(x_{K-2}), \mathbf{1}\rangle = \sum_{j=0}^{K-3}\langle \nabla F(x_j) - \nabla F(x_{j+1}), \mathbf{1}\rangle \ge \sum_{j=0}^{K-3}\mathbb{E}\langle \nabla f(x_j, M_j) - \nabla f(x_{j+1}, M_j), v_j\rangle \ge (K-2)\cdot\frac{1}{1+\epsilon}\cdot\frac{\epsilon}{2}.$
It is noteworthy that
$\|\nabla F(x_0)\|_1 = \big\|\mathbb{E}[\nabla f(x_0, M_j)]\big\|_1 \ge \mathbb{E}\langle \nabla f(x_0, M_j) - \nabla f(x_{K-2}, M_j), \mathbf{1}\rangle.$
Then we can obtain
$\big\|\mathbb{E}[\nabla f(x_0, M_j)]\big\|_1 \ge (K-2)\cdot\frac{1}{1+\epsilon}\cdot\frac{\epsilon}{2},$
$K \le \frac{2\|\nabla F(x_0)\|_1(1+\epsilon)}{\epsilon} + 2.$
Then we can obtain the complexity $O\big(\frac{\|\nabla F(x_0)\|_1}{\epsilon}\big)$ with probability $\big(1 - e^{-\frac{\epsilon^3 m}{6(1+\epsilon)}}\big)^K$. According to the Taylor expansion, we have
$\Big(1 - e^{-\frac{\epsilon^3 m}{6(1+\epsilon)}}\Big)^K \ge 1 - K\cdot e^{-\frac{\epsilon^3 m}{6(1+\epsilon)}}.$
Denote $\sigma := K\cdot e^{-\frac{\epsilon^3 m}{6(1+\epsilon)}}$. The number m of samples in each set $M_j$ is then at most
$\frac{6(1+\epsilon)}{\epsilon^3}\log\frac{2\epsilon^{-1}\|\nabla F(x_0)\|_1(1+\epsilon)+2}{\sigma}.$
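As a quick sanity check of this batch-size calculation, the following hypothetical helper computes the smallest m (up to rounding) for which $K e^{-\epsilon^3 m/(6(1+\epsilon))} \le \sigma$, using the iteration bound on K derived above; the function name and interface are ours.

```python
import math

def sample_batch_size(eps, sigma, grad0_l1):
    """Smallest m (up to rounding) with K * exp(-eps^3 m / (6(1+eps))) <= sigma,
    where K <= 2(1+eps)*grad0_l1/eps + 2 bounds the iteration count."""
    K = 2 * (1 + eps) * grad0_l1 / eps + 2
    return math.ceil(6 * (1 + eps) / eps ** 3 * math.log(K / sigma))
```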
Let
$\phi(\delta_j) := \langle \nabla f(x_j, M_j), v_j\rangle - \langle \nabla f(x_j + \delta_j v_j, M_j), v_j\rangle - \frac{\epsilon}{2},$
and let $\delta^*$ be the stepsize at which
$\mathbb{E}\big[\langle \nabla f(x_j, M_j), v_j\rangle - \langle \nabla f(x_j + \delta^* v_j, M_j), v_j\rangle\big] = \frac{\epsilon}{2}.$
From the deterministic case, letting the right endpoint of the bisection interval be $\delta_j$ yields
$0 \le \delta_j - \delta^* \le 2^{-M_1} \le \Big(1 - \frac{1}{1+\epsilon}\Big)\frac{\epsilon}{2nL}.$
From the above, for all j, the following inequality holds with high probability:
$\mathbb{E}\langle \nabla f(x_j, M_j), v_j\rangle - \mathbb{E}\langle \nabla f(x_j + \delta_j v_j, M_j), v_j\rangle \ge \frac{1}{1+\epsilon}\cdot\frac{\epsilon}{2}.$
Then
$\mathbb{E}\langle \nabla f(x_j + \delta_j v_j, M_j), v_j\rangle - \mathbb{E}\langle \nabla f(x_j + \delta^* v_j, M_j), v_j\rangle = \mathbb{E}\langle \nabla f(x_j + \delta_j v_j, M_j) - \nabla f(x_j + \delta^* v_j, M_j), v_j\rangle = \langle \nabla F(x_j + \delta_j v_j) - \nabla F(x_j + \delta^* v_j), v_j\rangle \ge -L(\delta_j - \delta^*)\|v_j\|^2 \ge -\Big(1 - \frac{1}{1+\epsilon}\Big)\cdot\frac{\epsilon}{2}.$
Now, we focus on the approximation ratio. Define the potential function $\mathcal{L}(j)$ as
$\mathcal{L}(j) := e^{t_j}\big(\mathbb{E}[F(x_j)] - F(x^*)\big).$
For $j = 0, \dots, K-1$, there holds
$\mathcal{L}(j+1) - \mathcal{L}(j) = e^{t_{j+1}}\,\mathbb{E}\Big[\int_0^{\delta_j}\big\langle \nabla F(x_j + t v_j), v_j\big\rangle\,dt\Big] + \big(e^{t_{j+1}} - e^{t_j}\big)\big(\mathbb{E}[F(x_j)] - F(x^*)\big).$
By the form of Algorithm 5 and the DR-submodularity and monotonicity of F, we have
$\mathbb{E}\Big[\int_0^{\delta_j}\langle \nabla F(x_j + t v_j), v_j\rangle\,dt\Big] \ge \mathbb{E}\Big[\int_0^{\delta_j}\langle \nabla F(x_j + \delta_j v_j), v_j\rangle\,dt\Big] = \delta_j\,\mathbb{E}\langle \nabla F(x_j + \delta_j v_j), v_j\rangle = \delta_j\,\mathbb{E}\langle h_{j+1}, v_j\rangle = \delta_j\,\mathbb{E}\langle h_j + \nabla f(x_{j+1}, M_{j+1}) - \nabla f(x_j, M_{j+1}), v_j\rangle = \delta_j\big(\mathbb{E}\langle h_j, v_j\rangle + \mathbb{E}\langle \nabla f(x_{j+1}, M_{j+1}) - \nabla f(x_j, M_{j+1}), v_j\rangle\big) \ge \delta_j\big(\mathbb{E}\langle h_j, (x_j \vee x^*) - x_j\rangle + \mathbb{E}\langle \nabla f(x_{j+1}, M_{j+1}) - \nabla f(x_j, M_{j+1}), v_j\rangle\big) = \delta_j\big(\mathbb{E}\langle \nabla F(x_j), (x_j \vee x^*) - x_j\rangle + \mathbb{E}\langle \nabla f(x_{j+1}, M_{j+1}) - \nabla f(x_j, M_{j+1}), v_j\rangle\big) \ge \delta_j\big(\mathbb{E}[F(x_j \vee x^*) - F(x_j)] + \mathbb{E}\langle \nabla f(x_{j+1}, M_{j+1}) - \nabla f(x_j, M_{j+1}), v_j\rangle\big) \ge \delta_j\big(\mathbb{E}[F(x^*) - F(x_j)] + \mathbb{E}\langle \nabla f(x_{j+1}, M_{j+1}) - \nabla f(x_j, M_{j+1}), v_j\rangle\big).$
Thus, we have
$\mathcal{L}(j+1) - \mathcal{L}(j) \ge \delta_j e^{t_{j+1}}\big(\mathbb{E}[F(x^*) - F(x_j)] + \mathbb{E}\langle \nabla f(x_{j+1}, M_{j+1}) - \nabla f(x_j, M_{j+1}), v_j\rangle\big) + \big(e^{t_{j+1}} - e^{t_j}\big)\big(\mathbb{E}[F(x_j)] - F(x^*)\big) \ge \delta_j e^{t_{j+1}}\,\mathbb{E}[F(x^*) - F(x_j)] - \delta_j e^{t_{j+1}}\frac{\epsilon}{1-\epsilon} + \big(e^{t_{j+1}} - e^{t_j}\big)\big(\mathbb{E}[F(x_j)] - F(x^*)\big) = \big(\delta_j e^{t_{j+1}} - e^{t_{j+1}} + e^{t_j}\big)\big(F(x^*) - \mathbb{E}[F(x_j)]\big) - \delta_j e^{t_{j+1}}\frac{\epsilon}{1-\epsilon} \ge -\delta_j e^{t_{j+1}}\frac{\epsilon}{1-\epsilon}.$
Summing up all the above inequalities from j = 0 to K 1 yields
$\mathcal{L}(K) - \mathcal{L}(0) = \sum_{j=0}^{K-1}\big(\mathcal{L}(j+1) - \mathcal{L}(j)\big) \ge -\sum_{j=0}^{K-1}\delta_j e^{t_{j+1}}\frac{\epsilon}{1-\epsilon} \ge -e\,\frac{\epsilon}{1-\epsilon}.$
On the other hand, by the definition of function L, we have
$\mathcal{L}(K) - \mathcal{L}(0) = e\big(\mathbb{E}[F(x_K)] - F(x^*)\big) - \big(F(0) - F(x^*)\big) \le e\,\mathbb{E}[F(x_K)] + (1 - e)F(x^*).$
Then
$\mathbb{E}[F(x_K)] \ge \Big(1 - \frac{1}{e}\Big)F(x^*) - \frac{\epsilon}{1-\epsilon}.$
   □

4.2. Stochastic Non-Monotone DR-Submodular Maximization

This subsection investigates the stochastic maximization of non-monotone DR-submodular functions under two constraint classes: down-closed convex sets and general convex domains.

4.2.1. Down-Closed Constraint

Algorithm 6 is designed for a stochastic non-monotone DR-submodular function with a down-closed constraint.    
Algorithm 6: Bisection Stochastic MCG
Theorem 6.
Assume that $X \subseteq [0,1]^n$ is down-closed and set $m = O\big(\epsilon^{-3}\log\big(\frac{\|\nabla F(0)\|_1}{\epsilon\sigma} + \frac{n}{\sigma}\big)\big)$. Under Assumption 2, Algorithm 6 outputs a solution x satisfying
$\mathbb{E}[F(x)] \ge e^{-1}F(x^*) - \epsilon.$
With probability $1-\sigma$, the LOO oracle complexity is bounded by $O\big(\frac{\|\nabla F(0)\|_1}{\epsilon} + n\big)$ and the gradient evaluation complexity is bounded by
$O\Big(\epsilon^{-3}\Big(\frac{\|\nabla F(0)\|_1}{\epsilon} + n\Big)\cdot\log\frac{nL}{\epsilon^2}\cdot\log\frac{\|\nabla F(0)\|_1}{\epsilon\sigma}\Big).$
The proof is presented in Appendix D.

4.2.2. General Convex Constraint

In this subsection, we present the dynamic stepsize algorithm for solving the maximization of stochastic non-monotone DR-submodular functions with general convex constraints.
Theorem 7.
Algorithm 7 outputs a solution x satisfying
$\mathbb{E}[F(x)] \ge \frac{1}{4}F(x^*) - \frac{\epsilon}{2}.$
With probability $1-\sigma$, the LOO oracle complexity is bounded by $O\big(\frac{\|\nabla F(0)\|_1}{\epsilon} + n\big)$ and the gradient evaluation complexity is bounded by $O\Big(\epsilon^{-3}\Big(\frac{\|\nabla F(0)\|_1}{\epsilon} + n\Big)\cdot\log\frac{nL}{\epsilon^2}\cdot\log\frac{\|\nabla F(x_0)\|_1}{\epsilon\sigma}\Big)$.
The proof is presented in Appendix E.
Algorithm 7: Bisection Stochastic Frank–Wolfe

5. Examples

To explore the potential acceleration offered by a dynamic stepsize strategy, we present three illustrative examples in this section.
Multilinear Relaxation for Submodular Maximization. Let V be a finite ground set, and let f : 2 V R be a function. The multilinear extension of f is defined as
$F(x) = \sum_{S\subseteq V} f(S)\prod_{i\in S} x_i \prod_{i\notin S}(1 - x_i),$
where $x \in [0,1]^{|V|}$. It is well known that the function f is submodular (i.e., $f(S\cup\{e\}) - f(S) \ge f(T\cup\{e\}) - f(T)$ for any $S \subseteq T \subseteq V$ and $e \in V\setminus T$) if and only if F is DR-submodular. Therefore, maximizing a submodular function f can be achieved by first solving the maximization of its multilinear extension and then obtaining a feasible solution to the original problem through a rounding method. Such algorithms are known to provide strong approximation guarantees [5,19].
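For small ground sets, the multilinear extension can be evaluated exactly by enumerating all subsets, which is handy for checking rounding-based pipelines on toy instances. The sketch below is illustrative only (exponential in |V|, with our own naming); in practice F(x) is estimated by sampling.

```python
import itertools

def multilinear_extension(f, n, x):
    """Exact F(x) = sum_S f(S) * prod_{i in S} x_i * prod_{i not in S} (1 - x_i)
    over ground set {0, ..., n-1}. Exponential in n; for illustration only."""
    total = 0.0
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            S = set(S)
            p = 1.0
            for i in range(n):
                p *= x[i] if i in S else 1.0 - x[i]
            total += f(S) * p
    return total
```

Equivalently, F(x) is the expected value of f on the random set that includes each element i independently with probability $x_i$.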
Let e i denote the vector whose i-th entry is 1 and all other entries are 0. The upper bound of F ( 0 ) 1 can be derived as follows.
Lemma 7.
Let F denote the multilinear extension of a submodular set function f. Suppose the feasible set X satisfies $X \supseteq \{e_i\}_{i=1}^n$; then
$\|\nabla F(0)\|_1 \le n\,F(x^*).$
For the multilinear extension of a submodular set function, the Lipschitz constant satisfies $L = O(n^2)\,F(x^*)$ [6].
Softmax Relaxation for DPP MAP Problem. Determinantal point processes (DPPs) are probabilistic models that emphasize diversity by capturing repulsive interactions, making them highly valuable in machine learning for tasks requiring varied selections. Let H denote the positive semi-definite kernel matrix associated with a DPP. The softmax extension of the DPP maximum a posteriori (MAP) problem is expressed as
$F(x) = \log\det\big(\mathrm{diag}(x)(H - I) + I\big), \quad x\in[0,1]^n,$
where I represents the identity matrix. Based on Corollary 2 in [4], the gradient of the softmax extension F ( x ) can be written as follows:
$\nabla_i F(x) = \Big[\big(\mathrm{diag}(x)(H - I) + I\big)^{-1}(H - I)\Big]_{ii}, \quad i\in[n].$
Consequently, the $\ell_1$-norm of the gradient at 0 is given by
$\|\nabla F(0)\|_1 = \sum_{i=1}^n |H_{ii} - 1|.$
In practical scenarios involving DPPs, the matrix H is often a Gram matrix, where the diagonal elements H i i are universally bounded. This implies that the asymptotic growth of F ( 0 ) 1 is upper-bounded by O ( n ) .
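The softmax extension and its gradient formula above translate directly into a few lines of linear algebra. The following sketch (our own naming, assuming a positive semi-definite kernel H) evaluates F(x) via a log-determinant and the gradient via Corollary 2 in [4]; at x = 0 it reproduces $\|\nabla F(0)\|_1 = \sum_i |H_{ii}-1|$.

```python
import numpy as np

def softmax_value_and_grad(H, x):
    """Softmax extension F(x) = log det(diag(x)(H - I) + I) and its gradient,
    grad_i F(x) = [(diag(x)(H - I) + I)^{-1} (H - I)]_{ii}."""
    n = H.shape[0]
    M = np.diag(x) @ (H - np.eye(n)) + np.eye(n)
    _, logdet = np.linalg.slogdet(M)             # stable log-determinant
    grad = np.diag(np.linalg.solve(M, H - np.eye(n)))
    return logdet, grad
```

Using `slogdet` and `solve` avoids forming an explicit determinant or inverse, which would be numerically fragile for larger kernels.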
DR-Submodular Quadratic Functions. Consider a quadratic function F ( x ) of the form
$F(x) = \frac{1}{2}x^{T}Ax + a^{T}x + c,$
where $A\in\mathbb{R}^{n\times n}$ has non-positive entries. In this case, F(x) is DR-submodular. It is straightforward to verify that $\|\nabla F(0)\|_1 = \|a\|_1$, and the gradient Lipschitz constant is given by $L = \|A\|_2$.
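These quantities are easy to verify numerically for a random instance; the snippet below (illustrative, with our own variable names) draws a quadratic with non-positive A and checks that $\nabla F(0) = a$ and that the smoothness constant is the spectral norm of A.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = -np.abs(rng.normal(size=(n, n)))   # non-positive entries => DR-submodular quadratic
A = (A + A.T) / 2                      # symmetrize so that grad F(x) = A x + a
a = np.abs(rng.normal(size=n))

def grad_F(x):
    return A @ x + a                   # grad F(0) = a, hence ||grad F(0)||_1 = ||a||_1

L = np.linalg.norm(A, 2)               # gradient Lipschitz constant ||A||_2
```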
The computational complexities of the algorithms designed to address the three constrained DR-submodular function maximization problems outlined earlier are compiled in Table 2. The results in the table reveal that the dynamic stepsize strategy introduced in this work offers a significant advantage in complexity over the constant stepsize approach for both the MLE and softmax relaxation problems. In the quadratic case, however, a definitive comparison cannot be made: the complexities are governed by the $\ell_1$-norm of the linear term vector and the spectral norm of the quadratic term matrix, respectively, and there is no inherent relationship between the magnitudes of these two quantities.

Numerical Experiments

We conduct numerical experiments to evaluate different stepsize selection strategies for solving DR-submodular maximization problems. Our investigation focuses on two fundamental classes of objective functions: quadratic DR-submodular functions and softmax extension functions. The experimental framework builds upon established methodologies from [9,14], with necessary adaptations for our specific analysis.
Our experimental evaluation considers two problem classes: softmax extension problems and quadratic DR-submodular problems with linear constraints. Since neither problem class inherently satisfies monotonicity, we augment both functions with an additional b T x term, where b is a positive vector with components in appropriate ranges. This modification enables the verification of Algorithms 2 and 5 by ensuring monotonicity preservation.
We evaluate Algorithms 2–4 on the softmax extension problems, while testing the stochastic algorithms (Algorithms 5–7) on quadratic DR-submodular problems with incorporated random variables. The randomization methodology follows the principled approach outlined in [14].
For each problem class, we consider decision space dimensions n { 40 , 100 } , with the number of constraints m set as 0.5 n for each dimension. The approximation parameter ϵ is fixed at 0.1, and the constant stepsize strategy employs 100 iterations. Each configuration is executed with five independent trials, with averaged results reported.
Figure 1 presents the performance comparison for softmax extension problems, while Figure 2 displays the results for stochastic quadratic DR-submodular problems. Both figures demonstrate the evolution of achieved function values across different stepsize strategies, providing empirical insights into algorithmic efficiency.
From the numerical results, we observe that for both deterministic and stochastic problems, dynamic stepsizes generally lead to lower iteration complexity compared to constant stepsizes, especially for larger problem dimensions. This finding highlights the advantages of using dynamic stepsizes in solving DR-submodular maximization problems.

6. Conclusions

This paper introduces a dynamic stepsize strategy for DR-submodular maximization, achieving iteration complexities independent of the smoothness parameter L. In deterministic settings, monotone cases attain a $(1-1/e)$-approximation with $O\big(\frac{\|\nabla F(0)\|_1}{\epsilon}\big)$ iterations, while non-monotone problems under down-closed or general convex constraints achieve $1/e$- and $1/4$-approximations with $O\big(n + \frac{\|\nabla F(0)\|_1}{\epsilon}\big)$ iterations. For stochastic optimization, variance reduction techniques (e.g., SPIDER) further reduce gradient evaluation complexities while maintaining high-probability guarantees. Empirical results on multilinear extensions, DPP softmax relaxations, and DR-submodular quadratics validate the practical efficiency of our methods compared to fixed stepsize baselines.
Our work has three key limitations. First, while our dynamic strategy matches the iteration complexity of fixed stepsize methods, it is not guaranteed to be superior for all L-smooth DR-submodular functions. Second, Algorithm 1 avoids the L-smoothness assumption but requires a univariate equation oracle to solve Equation (22); we have not identified real-world examples for which such an oracle is available, which limits its practical applicability. Third, our stepsize mechanism heavily relies on the DR-submodularity property, limiting its applicability to non-DR-submodular functions or mixed-integer domains. These limitations highlight opportunities for future research to extend our framework to broader function classes and practical scenarios.

Author Contributions

Conceptualization, Y.Z.; Methodology, Q.L. and Y.Z.; Validation, M.L.; Formal analysis, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

The author Yang Zhou was supported by the National Natural Science Foundation of China (No. 12371099).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare that they have no known competing financial or non-financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A. Proof of Lemma 4

Proof. 
By the algorithm, we can first obtain the following relations:
$x_K = x_0 + \sum_{j=0}^{K-1}\delta_j e^{-\delta_j} v_j \le \sum_{j=0}^{K-1}\delta_j v_j \in P.$
The conclusion then follows from the down-closedness of P. □

Appendix B. Proof of Lemma 6

Proof. 
The conclusion can be derived from the fact that $x_j \in P$ for $j = 0,\dots,K$, which can be proved by induction. Note that $x_0 \in P$ and $x_{j+1} = 2^{-\delta_j}x_j + (1 - 2^{-\delta_j})v_j$ is a convex combination of $x_j$ and $v_j \in P$. Thus, $x_{j+1}\in P$ whenever $x_j\in P$, and the proof is completed. □

Appendix C. Proof of Theorem 4

Proof. 
For Algorithm 4, we have the claim that for $j = 0,\dots,K$, there holds
$\|x_j\|_\infty \le 1 - 2^{-t_j}.$
We first prove the claim by induction. Note that x 0 = 0 satisfies the inequality. Assume that for some j = 0 , , K 1 , the inequality holds; then for i [ n ] , we have
$x_{j+1}^i = x_j^i + (1 - 2^{-\delta_j})(v_j^i - x_j^i) \le 2^{-\delta_j}x_j^i + 1 - 2^{-\delta_j} \le 2^{-\delta_j}(1 - 2^{-t_j}) + 1 - 2^{-\delta_j} = 1 - 2^{-t_j-\delta_j} = 1 - 2^{-t_{j+1}}.$
Thus, inequality (A3) is proved. Redefine the potential function $\mathcal{L}(j) = 4^{t_j}F(x_j) - 2^{t_j}F(x^*)$; then
$\mathcal{L}(j+1) - \mathcal{L}(j) = 4^{t_{j+1}}F(x_{j+1}) - 2^{t_{j+1}}F(x^*) - 4^{t_j}F(x_j) + 2^{t_j}F(x^*) = 4^{t_{j+1}}\big(F(x_{j+1}) - F(x_j)\big) + \big(4^{t_{j+1}} - 4^{t_j}\big)F(x_j) - \big(2^{t_{j+1}} - 2^{t_j}\big)F(x^*) = 4^{t_{j+1}}\int_0^{1-2^{-\delta_j}}\big\langle \nabla F(x_j + t(v_j - x_j)), v_j - x_j\big\rangle\,dt + \big(4^{t_{j+1}} - 4^{t_j}\big)F(x_j) - \big(2^{t_{j+1}} - 2^{t_j}\big)F(x^*).$
Together with inequalities (73) and (A3), and
$\int_0^{1-2^{-\delta_j}}\big\langle \nabla F(x_j + t(v_j - x_j)), v_j - x_j\big\rangle\,dt \ge \big(1 - 2^{-\delta_j}\big)\big\langle \nabla F(x_j + (1 - 2^{-\delta_j})(v_j - x_j)), v_j - x_j\big\rangle \ge \big(1 - 2^{-\delta_j}\big)\big[\langle \nabla F(x_j), v_j - x_j\rangle - \epsilon\big] \ge \big(1 - 2^{-\delta_j}\big)\big[\langle \nabla F(x_j), x^* - x_j\rangle - \epsilon\big] \ge \big(1 - 2^{-\delta_j}\big)\big[F(x_j \vee x^*) - 2F(x_j) - \epsilon\big],$
we obtain
$\mathcal{L}(j+1) - \mathcal{L}(j) \ge 4^{t_{j+1}}\big(1 - 2^{-\delta_j}\big)\big[F(x_j \vee x^*) - 2F(x_j) - \epsilon\big] + \big(4^{t_{j+1}} - 4^{t_j}\big)F(x_j) - \big(2^{t_{j+1}} - 2^{t_j}\big)F(x^*) \ge 4^{t_{j+1}}\big(1 - 2^{-\delta_j}\big)\big[2^{-t_j}F(x^*) - 2F(x_j) - \epsilon\big] + \big(4^{t_{j+1}} - 4^{t_j}\big)F(x_j) - \big(2^{t_{j+1}} - 2^{t_j}\big)F(x^*) = 2^{t_j}\big(2^{\delta_j} - 1\big)^2\big(F(x^*) - 2^{t_j}F(x_j)\big) - 4^{t_{j+1}}\big(1 - 2^{-\delta_j}\big)\epsilon.$
Note that if there exists a $j\in\{0,\dots,K-1\}$ such that $F(x^*) - 2^{t_j}F(x_j) \le 0$, then by the form of the output x in Algorithm 4, we have $F(x) \ge F(x_j) \ge 2^{-t_j}F(x^*) \ge \frac{1}{2}F(x^*)$, which satisfies (103). Otherwise, we have
$\mathcal{L}(j+1) - \mathcal{L}(j) \ge -4^{t_{j+1}}\big(1 - 2^{-\delta_j}\big)\epsilon,$
and
$\mathcal{L}(K) - \mathcal{L}(0) \ge -\sum_{j=0}^{K-1} 4^{t_{j+1}}\big(1 - 2^{-\delta_j}\big)\epsilon = -\sum_{j=0}^{K-1} 2^{t_{j+1}}\big(2^{t_{j+1}} - 2^{t_j}\big)\epsilon \ge -2\sum_{j=0}^{K-1}\big(2^{t_{j+1}} - 2^{t_j}\big)\epsilon = -2\epsilon.$
Together with the fact that
$\mathcal{L}(K) - \mathcal{L}(0) \le 4F(x) - F(x^*),$
we complete the proof. □

Appendix D. Proof of Theorem 6

Proof. 
Similar to Lemma 5, the difference in the analysis is as follows.
For $j \in S$, by the DR-submodularity of F, the following bound holds with high probability $\big(1 - e^{-\frac{\epsilon^3 m}{6(1+\epsilon)}}\big)^K$, as in Theorem 5:
$\mathbb{E}\langle \nabla f(x_j, M_j)_+ - \nabla f(x_{j+1}, M_j)_+, \mathbf{1}\rangle = \langle \nabla F(x_j)_+ - \nabla F(x_{j+1})_+, \mathbf{1}\rangle \ge \langle \nabla F(x_j)_+ - \nabla F(x_{j+1})_+, v_j\rangle = \mathbb{E}\langle \nabla f(x_j, M_j)_+ - \nabla f(x_{j+1}, M_j)_+, v_j\rangle \ge \frac{1}{1+\epsilon}\cdot\frac{\epsilon}{2},$
since $v_j$, $\nabla F(x_j)$, and $\nabla F(x_{j+1})$ are entrywise of the same sign. Since $\|\nabla F(\cdot)_+\|_1$ is nonincreasing along the iterates, the number of these iterations can be bounded as
$\|\nabla F(0)\|_1 \ge \|\nabla F(0)_+\|_1 - \|\nabla F(x_{K-2})_+\|_1 = \sum_{j=0}^{K-3}\big(\|\nabla F(x_j)_+\|_1 - \|\nabla F(x_{j+1})_+\|_1\big) \ge \sum_{j=0}^{K-3}\mathbb{E}\langle \nabla f(x_j, M_j)_+ - \nabla f(x_{j+1}, M_j)_+, v_j\rangle \ge \sum_{j\in S}\mathbb{E}\langle \nabla f(x_j, M_j)_+ - \nabla f(x_{j+1}, M_j)_+, v_j\rangle \ge \big|[K-2]\cap S\big|\cdot\frac{1}{1+\epsilon}\cdot\frac{\epsilon}{2}.$
From Theorem 5, the number m of samples in each set $M_j$ is at most $\frac{6(1+\epsilon)}{\epsilon^3}\ln\Big(\frac{2(1+\epsilon)\|\nabla F(0)\|_1/\epsilon + n + 2}{\sigma}\Big)$.
Now, we present the proof of the approximation ratio. Redefine the potential function as $\mathcal{L}(j) = e^{t_j}\mathbb{E}[F(x_j)] - t_j F(x^*)$. Then for $j = 0,\dots,K-1$, the following holds:
$\mathcal{L}(j+1) - \mathcal{L}(j) = e^{t_{j+1}}\mathbb{E}[F(x_{j+1})] - t_{j+1}F(x^*) - e^{t_j}\mathbb{E}[F(x_j)] + t_j F(x^*) = e^{t_{j+1}}\big(\mathbb{E}[F(x_{j+1})] - \mathbb{E}[F(x_j)]\big) + \big(e^{t_{j+1}} - e^{t_j}\big)\mathbb{E}[F(x_j)] - \big(t_{j+1} - t_j\big)F(x^*) = e^{t_{j+1}}\,\mathbb{E}\Big[\int_0^{\delta_j e^{-\delta_j}}\langle \nabla F(x_j + t v_j), v_j\rangle\,dt\Big] + \big(e^{t_{j+1}} - e^{t_j}\big)\mathbb{E}[F(x_j)] - \big(t_{j+1} - t_j\big)F(x^*).$
We need to prove a lower bound on the difference of function values between two adjacent iteration points when F is non-monotone:
$\int_0^{\delta_j e^{-\delta_j}}\langle \nabla F(x_j + t v_j), v_j\rangle\,dt \ge \int_0^{\delta_j e^{-\delta_j}}\langle \nabla F(x_j + \delta_j e^{-\delta_j} v_j), v_j\rangle\,dt = \delta_j e^{-\delta_j}\langle \nabla F(x_j + \delta_j e^{-\delta_j} v_j), v_j\rangle = \delta_j e^{-\delta_j}\langle \mathbb{E}[h_{j+1}], v_j\rangle = \delta_j e^{-\delta_j}\langle \mathbb{E}[h_j + \nabla f(x_{j+1}, M_{j+1}) - \nabla f(x_j, M_{j+1})], v_j\rangle \ge \delta_j e^{-\delta_j}\big[\langle \mathbb{E}[h_j], (x_j \vee x^*) - x_j\rangle + \langle \mathbb{E}[\nabla f(x_{j+1}, M_{j+1}) - \nabla f(x_j, M_{j+1})], v_j\rangle\big] \ge \delta_j e^{-\delta_j}\big[F(x_j \vee x^*) - F(x_j) + \langle \mathbb{E}[\nabla f(x_{j+1}, M_{j+1}) - \nabla f(x_j, M_{j+1})], v_j\rangle\big] \ge \delta_j e^{-\delta_j}\Big[\big(1 - \|x_j\|_\infty\big)F(x^*) - F(x_j) - \frac{\epsilon}{1-\epsilon}\Big],$
where the fourth inequality is by Lemma 3 in [1], which implies that for $x \in P$,
$F(x \vee x^*) \ge \big(1 - \|x\|_\infty\big)F(x^*) = \Big(1 - \max_{i\in[n]} x_i\Big)F(x^*).$
From the deterministic case, we have the following claim on the upper bound of the $\ell_\infty$-norm of $x_j$.
Claim 2.
For all the iteration points $x_j$, $j = 0,\dots,K$, of Algorithm 3, we have $\|x_j\|_\infty \le 1 - e^{-t_j}$.
So, the above formula yields
$\mathcal{L}(j+1) - \mathcal{L}(j) \ge e^{t_{j+1}}\delta_j e^{-\delta_j}\Big[\big(1 - \|x_j\|_\infty\big)F(x^*) - F(x_j) - \frac{\epsilon}{1-\epsilon}\Big] + \big(e^{t_{j+1}} - e^{t_j}\big)\mathbb{E}[F(x_j)] - \big(t_{j+1} - t_j\big)F(x^*) \ge -e^{t_{j+1}}\delta_j e^{-\delta_j}\frac{\epsilon}{1-\epsilon} + \big(e^{t_{j+1}-t_j-\delta_j}\delta_j - \delta_j\big)F(x^*) + \big(e^{t_{j+1}} - e^{t_j} - e^{t_{j+1}}\delta_j e^{-\delta_j}\big)\mathbb{E}[F(x_j)] = -e^{t_j}\delta_j\frac{\epsilon}{1-\epsilon} + \big(e^{t_{j+1}} - e^{t_j} - e^{t_{j+1}}\delta_j e^{-\delta_j}\big)\mathbb{E}[F(x_j)] \ge -e^{t_{j+1}}\delta_j e^{-\delta_j}\frac{\epsilon}{1-\epsilon}.$
By summing Equation (A16) over j from 0 to K−1, we obtain
$\mathcal{L}(K) - \mathcal{L}(0) \ge -\sum_{j=0}^{K-1} e^{t_j}\delta_j\frac{\epsilon}{1-\epsilon} \ge -e\,\frac{\epsilon}{1-\epsilon}\sum_{j=0}^{K-1}\delta_j = -e\,\frac{\epsilon}{1-\epsilon}.$
Together with the fact that $\mathcal{L}(K) - \mathcal{L}(0) \le e\,\mathbb{E}[F(x)] - F(x^*)$, we obtain the theorem. □

Appendix E. Proof of Theorem 7

Proof. 
For Algorithm 7, we have the claim that for $j = 0,\dots,K$, there holds
$\|x_j\|_\infty \le 1 - 2^{-t_j}.$
Redefine the potential function $\mathcal{L}(j) = 4^{t_j}\mathbb{E}[F(x_j)] - 2^{t_j}F(x^*)$; then
$\mathcal{L}(j+1) - \mathcal{L}(j) = 4^{t_{j+1}}\mathbb{E}[F(x_{j+1})] - 2^{t_{j+1}}F(x^*) - 4^{t_j}\mathbb{E}[F(x_j)] + 2^{t_j}F(x^*) = 4^{t_{j+1}}\big(\mathbb{E}[F(x_{j+1})] - \mathbb{E}[F(x_j)]\big) + \big(4^{t_{j+1}} - 4^{t_j}\big)\mathbb{E}[F(x_j)] - \big(2^{t_{j+1}} - 2^{t_j}\big)F(x^*) = 4^{t_{j+1}}\,\mathbb{E}\Big[\int_0^{1-2^{-\delta_j}}\big\langle \nabla F(x_j + t(v_j - x_j)), v_j - x_j\big\rangle\,dt\Big] + \big(4^{t_{j+1}} - 4^{t_j}\big)\mathbb{E}[F(x_j)] - \big(2^{t_{j+1}} - 2^{t_j}\big)F(x^*).$
Together with inequalities (73) and (A3), and
$\int_0^{1-2^{-\delta_j}}\big\langle \nabla F(x_j + t(v_j - x_j)), v_j - x_j\big\rangle\,dt \ge \big(1 - 2^{-\delta_j}\big)\big\langle \nabla F(x_j + (1 - 2^{-\delta_j})(v_j - x_j)), v_j - x_j\big\rangle = \big(1 - 2^{-\delta_j}\big)\big\langle \mathbb{E}[h_{j+1}], v_j - x_j\big\rangle = \big(1 - 2^{-\delta_j}\big)\big\langle \mathbb{E}[h_j + \nabla f(x_{j+1}, M_{j+1}) - \nabla f(x_j, M_{j+1})], v_j - x_j\big\rangle = \big(1 - 2^{-\delta_j}\big)\big[\langle \mathbb{E}[h_j], v_j - x_j\rangle + \langle \mathbb{E}[\nabla f(x_{j+1}, M_{j+1}) - \nabla f(x_j, M_{j+1})], v_j - x_j\rangle\big] \ge \big(1 - 2^{-\delta_j}\big)\big[\langle \mathbb{E}[h_j], x^* - x_j\rangle + \langle \mathbb{E}[\nabla f(x_{j+1}, M_{j+1}) - \nabla f(x_j, M_{j+1})], v_j - x_j\rangle\big] \ge \big(1 - 2^{-\delta_j}\big)\big[F(x_j \vee x^*) - 2F(x_j) + \langle \mathbb{E}[\nabla f(x_{j+1}, M_{j+1}) - \nabla f(x_j, M_{j+1})], v_j - x_j\rangle\big] \ge \big(1 - 2^{-\delta_j}\big)\Big[F(x_j \vee x^*) - 2F(x_j) - \frac{\epsilon}{1-\epsilon}\Big],$
we obtain
$\mathcal{L}(j+1) - \mathcal{L}(j) \ge 4^{t_{j+1}}\big(1 - 2^{-\delta_j}\big)\Big[\mathbb{E}\big[F(x_j \vee x^*) - 2F(x_j)\big] - \frac{\epsilon}{1-\epsilon}\Big] + \big(4^{t_{j+1}} - 4^{t_j}\big)\mathbb{E}[F(x_j)] - \big(2^{t_{j+1}} - 2^{t_j}\big)F(x^*) \ge 4^{t_{j+1}}\big(1 - 2^{-\delta_j}\big)\Big[2^{-t_j}F(x^*) - 2\,\mathbb{E}[F(x_j)] - \frac{\epsilon}{1-\epsilon}\Big] + \big(4^{t_{j+1}} - 4^{t_j}\big)\mathbb{E}[F(x_j)] - \big(2^{t_{j+1}} - 2^{t_j}\big)F(x^*) = 2^{t_j}\big(2^{\delta_j} - 1\big)^2\big(F(x^*) - 2^{t_j}\mathbb{E}[F(x_j)]\big) - 4^{t_{j+1}}\big(1 - 2^{-\delta_j}\big)\frac{\epsilon}{1-\epsilon}.$
Note that if there exists a $j\in\{0,\dots,K-1\}$ such that $F(x^*) - 2^{t_j}\mathbb{E}[F(x_j)] \le 0$, then by the form of the output x in Algorithm 7, we have $\mathbb{E}[F(x)] \ge \mathbb{E}[F(x_j)] \ge 2^{-t_j}F(x^*) \ge \frac{1}{2}F(x^*)$, which satisfies (103). Otherwise, we have
$\mathcal{L}(j+1) - \mathcal{L}(j) \ge -4^{t_{j+1}}\big(1 - 2^{-\delta_j}\big)\frac{\epsilon}{1-\epsilon},$
and
$\mathcal{L}(K) - \mathcal{L}(0) \ge -\sum_{j=0}^{K-1} 4^{t_{j+1}}\big(1 - 2^{-\delta_j}\big)\frac{\epsilon}{1-\epsilon} = -\sum_{j=0}^{K-1} 2^{t_{j+1}}\big(2^{t_{j+1}} - 2^{t_j}\big)\frac{\epsilon}{1-\epsilon} \ge -2\sum_{j=0}^{K-1}\big(2^{t_{j+1}} - 2^{t_j}\big)\frac{\epsilon}{1-\epsilon} = -\frac{2\epsilon}{1-\epsilon}.$
Together with the fact that
$\mathcal{L}(K) - \mathcal{L}(0) \le 4\,\mathbb{E}[F(x)] - F(x^*),$
we complete the proof. □

References

  1. Bian, A.; Levy, K.; Krause, A.; Buhmann, J.M. Continuous DR-submodular maximization: Structure and algorithms. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 486–496. [Google Scholar]
  2. Bian, A.A.; Mirzasoleiman, B.; Buhmann, J.; Krause, A. Guaranteed non-convex optimization: Submodular maximization over continuous domains. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 2017, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 111–120. [Google Scholar]
  3. Bian, Y.; Buhmann, J.M.; Krause, A. Continuous submodular function maximization. arXiv 2020, arXiv:2006.13474. [Google Scholar]
  4. Gillenwater, J.; Kulesza, A.; Taskar, B. Near-Optimal MAP Inference for Determinantal Point Processes. In Proceedings of the 25th International Conference on Neural Information Processing Systems, NIPS’12, Lake Tahoe, NV, USA, 3–6 December 2012; Curran Associates Inc.: Red Hook, NY, USA, 2012; Volume 2, pp. 2735–2743. [Google Scholar]
  5. Calinescu, G.; Chekuri, C.; Pál, M.; Vondrák, J. Maximizing a monotone submodular function subject to a matroid constraint. SIAM J. Comput. 2011, 40, 1740–1766. [Google Scholar] [CrossRef]
  6. Feldman, M.; Naor, J.; Schwartz, R. A unified continuous greedy algorithm for submodular maximization. In Proceedings of the 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS ’11, Palm Springs, CA, USA, 22–25 October 2011; IEEE Computer Society: Washington, DC, USA, 2011; pp. 570–579. [Google Scholar]
  7. Niazadeh, R.; Roughgarden, T.; Wang, J.R. Optimal Algorithms for Continuous Non-Monotone Submodular and DR-Submodular Maximization. J. Mach. Learn. Res. 2020, 21, 1–31. [Google Scholar]
  8. Buchbinder, N.; Feldman, M. Constrained submodular maximization via new bounds for dr-submodular functions. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, Vancouver, BC, Canada, 24–28 June 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 1820–1831. [Google Scholar]
  9. Du, D.; Liu, Z.; Wu, C.; Xu, D.; Zhou, Y. An improved approximation algorithm for maximizing a DR-submodular function over a convex set. arXiv 2022, arXiv:2203.14740. [Google Scholar]
  10. Du, D. Lyapunov function approach for approximation algorithm design and analysis: With applications in submodular maximization. arXiv 2022, arXiv:2205.12442. [Google Scholar]
  11. Mualem, L.; Feldman, M. Resolving the Approximability of Offline and Online Non-Monotone DR-Submodular Maximization over General Convex Sets. In Proceedings of the International Conference on Artificial Intelligence and Statistics, PMLR, Valencia, Spain, 25–27 April 2023; pp. 2542–2564. [Google Scholar]
  12. Mokhtari, A.; Hassani, H.; Karbasi, A. Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization. arXiv 2018, arXiv:1804.09554. [Google Scholar]
  13. Hassani, H.; Karbasi, A.; Mokhtari, A.; Shen, Z. Stochastic Conditional Gradient++: (Non)Convex Minimization and Continuous Submodular Maximization. SIAM J. Optim. 2020, 30, 3315–3344. [Google Scholar] [CrossRef]
  14. Lian, Y.; Xu, D.; Du, D.; Zhou, Y. A Stochastic Non-Monotone DR-Submodular Maximization Problem over a Convex Set. In Proceedings of the Computing and Combinatorics: 28th International Conference, COCOON 2022, Shenzhen, China, 22–24 October 2022; Springer Nature: Berlin/Heidelberg, Germany, 2023; Volume 13595, pp. 1–11. [Google Scholar]
  15. Chen, L.; Feldman, M.; Karbasi, A. Unconstrained submodular maximization with constant adaptive complexity. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, Phoenix, AZ, USA, 23–26 June 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 102–113. [Google Scholar]
  16. Ene, A.; Nguyen, H. Parallel algorithm for non-monotone DR-submodular maximization. In Proceedings of the International Conference on Machine Learning, PMLR, Virtual, 13–18 July 2020; pp. 2902–2911. [Google Scholar]
  17. Delahaye, D.; Chaimatanan, S.; Mongeau, M. Simulated annealing: From basics to applications. In Handbook of Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2018; pp. 1–35. [Google Scholar]
  18. Hassani, H.; Soltanolkotabi, M.; Karbasi, A. Gradient Methods for Submodular Maximization. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30, pp. 5843–5853. [Google Scholar]
  19. Chekuri, C.; Vondrák, J.; Zenklusen, R. Submodular Function Maximization via the Multilinear Relaxation and Contention Resolution Schemes. SIAM J. Comput. 2014, 43, 1831–1879. [Google Scholar] [CrossRef]
Figure 1. Numerical results for softmax problems.
Figure 2. Numerical results for stochastic quadratic DR-submodular problems.
Table 1. Algorithms and theoretical guarantees in this paper. D = Deterministic; S = Stochastic; M = Monotone; NM = Non-Monotone; DC = Down-closed; GC = General convex; Grad Eval = Gradient evaluation complexity; S-Grad Eval = Single (per-sample) gradient evaluation complexity.
Algorithm | Setting | Complexity | Approximation Ratio
Algorithm 2 | D-M/GC | Iter: $O\big(\frac{\|\nabla F(0)\|_1}{\epsilon}\big)$; Grad Eval: $O\big(\frac{\|\nabla F(0)\|_1}{\epsilon}\log\frac{nL}{\epsilon}\big)$ | $1-1/e$ (Theorem 2)
Algorithm 3 | D-NM/DC | Iter: $O\big(n+\frac{\|\nabla F(0)\|_1}{\epsilon}\big)$; Grad Eval: $O\big(\big(n+\frac{\|\nabla F(0)\|_1}{\epsilon}\big)\log\frac{nL}{\epsilon}\big)$ | $\frac{1}{e}$ (Theorem 3)
Algorithm 4 | D-NM/GC | Iter: $O\big(n+\frac{\|\nabla F(0)\|_1}{\epsilon}\big)$; Grad Eval: $O\big(\big(n+\frac{\|\nabla F(0)\|_1}{\epsilon}\big)\log\frac{nL}{\epsilon}\big)$ | $\frac{1}{4}$ (Theorem 4)
Algorithm 5 | S-M/GC | Oracle: $O\big(\frac{\|\nabla F(0)\|_1}{\epsilon}\big)$; S-Grad Eval: $O\big(\frac{\|\nabla F(0)\|_1}{\epsilon^4}\cdot\log\frac{nL}{\epsilon^2}\cdot\log\frac{\|\nabla F(0)\|_1}{\epsilon\sigma}\big)$ | $1-1/e$ (Theorem 5)
Algorithm 6 | S-NM/DC | Oracle: $O\big(n+\frac{\|\nabla F(0)\|_1}{\epsilon}\big)$; S-Grad Eval: $O\big(\epsilon^{-3}\big(\frac{\|\nabla F(0)\|_1}{\epsilon}+n\big)\cdot\log\frac{nL}{\epsilon^2}\cdot\log\frac{\|\nabla F(0)\|_1}{\epsilon\sigma}\big)$ | $\frac{1}{e}$ (Theorem 6)
Algorithm 7 | S-NM/GC | Oracle: $O\big(n+\frac{\|\nabla F(0)\|_1}{\epsilon}\big)$; S-Grad Eval: $O\big(\epsilon^{-3}\big(\frac{\|\nabla F(0)\|_1}{\epsilon}+n\big)\cdot\log\frac{nL}{\epsilon^2}\cdot\log\frac{\|\nabla F(0)\|_1}{\epsilon\sigma}\big)$ | $\frac{1}{4}$ (Theorem 7)
Table 2. Comparison of complexities between dynamic and constant stepsizes for three examples (grad. eval: complexity of gradient evaluation).
Examples | Dynamic Stepsize: LOO | Dynamic Stepsize: grad. eval | Constant Stepsize: LOO (grad. eval)
MLE | $O\big(\frac{n}{\epsilon}\big)$ | $O\big(\frac{n}{\epsilon}\log\frac{n}{\epsilon}\big)$ | $O\big(\frac{n^3}{\epsilon}\big)$
Softmax | $O\big(\frac{n}{\epsilon}\big)$ | $O\big(\frac{n}{\epsilon}\log\frac{nL}{\epsilon}\big)$ | $O\big(\frac{nL}{\epsilon}\big)$
Quadratic | $O\big(\frac{\|a\|_1}{\epsilon}\big)$ | $O\big(\frac{\|a\|_1}{\epsilon}\log\frac{n\|A\|_2}{\epsilon}\big)$ | $O\big(\frac{n\|A\|_2}{\epsilon}\big)$
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Li, Y.; Li, M.; Liu, Q.; Zhou, Y. Dynamic Stepsize Techniques in DR-Submodular Maximization. Mathematics 2025, 13, 1447. https://doi.org/10.3390/math13091447



