Article

An Iterative Approach for Finding Minimal-Optimal Solutions of the Min-Max Programming Problem with Addition-Overlap Functions

1 Shaoxing Key Laboratory for Smart Society Monitoring, Prevention & Control, School of International Business, Zhejiang Yuexiu University, Shaoxing 312000, China
2 Graduate Institute of Business and Management, College of Management, Chang Gung University, Taoyuan 33302, Taiwan
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(10), 1712; https://doi.org/10.3390/sym17101712
Submission received: 2 September 2025 / Revised: 1 October 2025 / Accepted: 9 October 2025 / Published: 12 October 2025
(This article belongs to the Section Mathematics)

Abstract

Finding an optimal solution of the min-max programming problem with addition-overlap function constraints has been studied in the literature. Since the definition of overlap functions is very general and admits no explicit formulation, a bisection method was proposed to yield a uniform-optimal solution for such optimization problems with a general overlap function in the constraint part. The uniform-optimal solution can be improved when the system manager desires additional properties, such as lower cost. To meet this need, the minimal-optimal solution has been proposed as an alternative to the uniform-optimal solution. In this paper, we propose an iterative method that yields at least one minimal-optimal solution. Our method starts from the uniform-optimal solution and systematically reduces the values of some non-binding variables while preserving feasibility and optimality. To rigorously establish our method, we replace the general overlap function of the min-max programming problem with an explicit, nontrivial "min-shifted" overlap function. A proof that our algorithm finds a minimal-optimal solution is given. By shifting the search sequence of decision variables, our algorithm may find other minimal-optimal solutions, which provides more flexibility for the system manager. Numerical examples are provided to illustrate the procedures of the algorithm.

1. Introduction

Fuzzy preference modeling has increasingly relied on relational approaches to systematically capture preference relations between alternatives [1]. Within this framework, t-norm-based aggregation functions have emerged as fundamental tools, particularly in the study of fuzzy relational equations that represent preference structures in uncertain environments [2,3]. These t-norms have proven particularly valuable because they satisfy critical properties like associativity and monotonicity, making them suitable for multi-stage decision processes [4,5].
However, as noted by Fodor and Roubens [6], the associativity condition required by t-norms is not essential in many real-world preference scenarios. This theoretical insight led to the development of overlap functions—a more flexible class of aggregation operators that preserves continuity and boundary conditions while relaxing the need for associativity [7]. In particular, the seminal work by Dimuro et al. [8] introduced the concept of additive generator pairs, providing a systematic framework for constructing overlap functions with desirable theoretical properties.
Beyond their theoretical appeal, overlap functions have demonstrated considerable practical value in various application domains. For instance, they have been used to improve image segmentation via advanced thresholding techniques [9] and to facilitate more sophisticated three-way decision-making models in interval-valued fuzzy information systems [10]. Their ability to model situations of indifference, where no clear preference exists between alternatives, has also made them a key component in advanced fuzzy decision-making frameworks [11,12].
Building on this foundation, recent studies have explored the use of overlap functions in optimization frameworks, particularly within the context of min-max programming. The min-max optimization problem has been extensively applied across various domains, including optimal control systems [13], robust machine learning [14], and fuzzy optimization [15].
In fuzzy optimization, significant attention has focused on min-max programming problems with addition-min fuzzy relational inequalities (FRIs), particularly in modeling resource allocation for BitTorrent-like peer-to-peer (BT-P2P) file-sharing systems [16,17]. The general model can be expressed as follows:
$$\begin{array}{ll} \text{Minimize} & Z(x) = \max\{x_1, \ldots, x_n\} \\ \text{subject to} & g_i(x) = \sum_{j=1}^{n} \min\{a_{ij}, x_j\} \geq b_i, \quad i \in I, \\ & x_j \in [0,1], \quad j \in J, \end{array} \qquad (1)$$
where the parameters satisfy $a_{ij} \in [0,1]$ and $b_i > 0$ for any $i \in I = \{1, 2, \ldots, m\}$, $j \in J = \{1, 2, \ldots, n\}$.
Yang et al. [18] demonstrated that the solution set of addition-min FRIs is determined by a unique maximal solution and a convex set of minimal solutions. Building on this structural insight, Yang et al. [19] first proposed the uniform-optimal solution for model (1), where all variables share an identical optimal value. Later, Chiu et al. [20] simplified the solution process by reformulating model (1) as a single-variable optimization problem, thereby enabling the derivation of the uniform-optimal solution through both an analytical and an iterative method. Advancements in solving FRIs—a key tool for optimization under uncertainty—have been comprehensively reviewed by Deo et al. [21]. Specifically, they systematically categorized solution algorithms, ranging from traditional linear programming to modern metaheuristics such as Particle Swarm Optimization, while also exploring advanced topics, including bipolar fuzzy relations and hybrid models. Furthermore, recent advances in max-min optimization models have been developed to evaluate the stability of the widest-interval solution for systems of max-min FRIs [22,23] and addition-min FRIs [24].
While mathematically convenient, the uniform-optimal solution of model (1) may prove suboptimal for practical applications like BT-P2P data transmission. Yang [25] addressed this limitation through an optimal-vector-based (OVB) algorithm to compute so-called minimal-optimal solutions. Guu et al. [26] also proposed a two-phase method optimizing transmission costs under congestion constraints to obtain another minimal-optimal solution, which is strictly better than, or at least no worse than, the uniform-optimal solution in terms of cost. Subsequent studies expanded on this concept, including the exploration of leximax minimal solutions [27], active-set-based methods [28], and iterative approaches for generalized min-max problems [29].
These developments underscore the practical importance of identifying not only optimal but also structurally minimal solutions in min-max optimization models. Motivated by this, the present study extends the concept to a broader class of problems constrained by addition-overlap functions. Specifically, we consider the following min-max programming model:
$$\begin{array}{ll} \text{Minimize} & Z(x) = \max\{x_1, \ldots, x_n\} \\ \text{subject to} & g_i(x) = \sum_{j=1}^{n} O(a_{ij}, x_j) \geq b_i, \quad i \in I, \\ & x_j \in [0,1], \quad j \in J, \end{array} \qquad (2)$$
where the parameters satisfy $a_{ij} \in [0,1]$ and $b_i > 0$ for any $i \in I = \{1, 2, \ldots, m\}$, $j \in J = \{1, 2, \ldots, n\}$. Here $O(a_{ij}, x_j)$ is the overlap function, which will be defined later in Definition 1.
When the constraint system in problem (2) is feasible, Wu et al. [30] demonstrated that a uniform-optimal solution is guaranteed to exist. Notably, this result remains valid for the optimization problem (2) regardless of the specific operator chosen from the class of overlap functions. Their approach involves transforming the original problem into an equivalent single-variable optimization problem, which can be efficiently solved using the classical bisection method.
Moreover, Wu et al. [30] extended the min-max optimization problem (2) by incorporating a novel overlap function operator (4), termed the "min-shifted" overlap function. With this operator, the min-max model becomes
$$\begin{array}{ll} \text{Minimize} & Z(x) = \max\{x_1, \ldots, x_n\} \\ \text{subject to} & g_i(x) = \sum_{j=1}^{n} O(a_{ij}, x_j) \geq b_i, \quad i \in I, \\ & x_j \in [0,1], \quad j \in J, \end{array} \qquad (3)$$
where the overlap function $O(a_{ij}, x_j)$ is defined as
$$O(a_{ij}, x_j) = \frac{\min\{a_{ij}, x_j\}}{\min\{a_{ij}, x_j\} + (1 - \max\{0, x_j + a_{ij} - 1\})}. \qquad (4)$$
Although the classical bisection method can be applied for general overlap functions, including the min-shifted overlap function in (4), Wu et al. [30] adopted a distinct approach for solving model (3) and then derived the uniform-optimal solution using a tailored algorithm.
However, to the best of our knowledge, no existing method provides an effective way to compute minimal-optimal solutions for problem (3). This paper fills this gap by proposing an iterative refinement strategy that starts from the uniform-optimal solution and systematically reduces adjustable variables while preserving both feasibility and optimality. The resulting minimal-optimal solution not only retains global optimality but also satisfies a strong minimality condition, ensuring that no component can be further decreased without violating the problem’s constraints.
The contribution of our approach is demonstrated by its ability to successfully solve a problem that was previously intractable. The primary evidence for its effectiveness lies in the following aspects:
(a).
We have rigorously proven that our method always produces a feasible and minimal-optimal solution (as established in Section 3 and Section 4).
(b).
The numerical examples in Section 5 confirm that the algorithm functions as intended, correctly identifying diverse minimal-optimal solutions for various problem instances.
(c).
By finding minimal-optimal solutions, our method provides decision-makers with superior, structurally efficient alternatives compared to the uniform-optimal solution, which is a significant practical advancement.
The remainder of this paper is organized as follows: Section 2 reviews fundamental concepts of overlap functions and their properties. Section 3 formulates the min-max optimization problem with addition-overlap constraints. Section 4 presents our iterative algorithm for finding minimal-optimal solutions along with its theoretical analysis. Section 5 provides numerical examples demonstrating the method’s effectiveness, and Section 6 concludes with directions for future research.

2. Some Properties of the Min-Shifted Overlap Function

In this section, we review the definition of overlap functions and explore the relevant properties of the min-shifted overlap function defined in (4). This foundational knowledge will aid us in finding the minimal-optimal solution for problem (3).
Definition 1.
(See [7,8]). A bivariate function $O: [0,1]^2 \to [0,1]$ is said to be an overlap function if it satisfies the following conditions:
(O1) $O(x,y) = O(y,x)$ for all $x, y \in [0,1]$;
(O2) $O(x,y) = 0$ if and only if $xy = 0$;
(O3) $O(x,y) = 1$ if and only if $x = y = 1$;
(O4) $O$ is non-decreasing;
(O5) $O$ is continuous.
For convenience, the min-shifted overlap function in (4) is written as
$$O(a, x) = \frac{\min\{a, x\}}{\min\{a, x\} + (1 - \max\{0, x + a - 1\})}, \quad \text{for all } a, x \in [0,1]. \qquad (5)$$
Let $D = \min\{a, x\} + (1 - \max\{0, x + a - 1\})$ be the denominator of (5). Analyzing the value of the overlap function $O(a,x)$ in (5) yields the following results.
Lemma 1
([30], Lemma 1). For (5), the following hold:
(i). $O(a, 0) = 0$ and $O(0, x) = 0$;
(ii). $O(a, 1) = a$ and $O(1, x) = x$;
(iii). $D \geq 1$; moreover, if $ax \neq 0$ and $ax \neq 1$, then $D > 1$;
(iv). $O(a, x) \leq \min\{a, x\}$;
(v). If $0 < a$ and $0 < x_1 \leq x_2 \leq 1$, then $O(a, x_2) - O(a, x_1) \leq x_2 - x_1$.
Lemma 2
([30], Lemma 2). For (5), the following results hold:
(i). If $x \leq a$ and $x + a \leq 1$, then $O(a, x) = \frac{x}{x+1}$.
(ii). If $x \leq a$ and $x + a \geq 1$, then $O(a, x) = \frac{x}{2-a}$.
(iii). If $x \geq a$ and $x + a \leq 1$, then $O(a, x) = \frac{a}{a+1}$.
(iv). If $x \geq a$ and $x + a \geq 1$, then $O(a, x) = \frac{a}{2-x}$.
Treating $a \in [0,1]$ as a given parameter, with $a = 0.5$ as the dividing point, the value of the min-shifted overlap function $O(a,x)$ in (5) can be derived; the two sets of results are roughly illustrated in Figure 1 and Figure 2.
(1) When $a \leq 0.5$, the value of $O(a,x)$ in (5) is determined by three cases, as detailed below:
  • Case 1: $x \leq a$
We have $x \leq a \leq 0.5$ and $x + a \leq 1$. It follows that $\min\{a,x\} = x$ and $\max\{0, x+a-1\} = 0$. Substituting these into $O(a,x)$ yields
$$O(a,x) = \frac{x}{x+1}.$$
  • Case 2: $a \leq x \leq 1-a$
Here, $a \leq x$ and $x + a \leq 1$. These imply $\min\{a,x\} = a$ and $\max\{0, x+a-1\} = 0$. Substituting into $O(a,x)$ gives
$$O(a,x) = \frac{a}{a+1}.$$
  • Case 3: $x \geq 1-a$
Since $a \leq 0.5$, we have $x \geq 0.5 \geq a$ and $x + a \geq 1$. Thus, $\min\{a,x\} = a$ and $\max\{0, x+a-1\} = x+a-1$. Substituting these into $O(a,x)$ results in
$$O(a,x) = \frac{a}{a + [1 - (x+a-1)]} = \frac{a}{2-x}.$$
To summarize, when $a \leq 0.5$, $O(a,x)$ is given by the piecewise function:
$$O(a,x) = \begin{cases} \dfrac{x}{x+1} & \text{if } x \leq a, \\[4pt] \dfrac{a}{a+1} & \text{if } a \leq x \leq 1-a, \\[4pt] \dfrac{a}{2-x} & \text{if } x \geq 1-a. \end{cases} \qquad (6)$$
Figure 1 (where $a \leq 0.5$) depicts a piecewise function with a flat segment along line AB, where $x \in [a, 1-a]$, indicating that the overlap function $O(a,x)$ is continuous and non-decreasing in this case.
(2) When $a \geq 0.5$, the value of $O(a,x)$ in (5) is determined by three cases, as detailed below:
  • Case 1: $x \leq 1-a$
We have $x \leq 0.5 \leq a$ and $x + a \leq 1$. It follows that $\min\{a,x\} = x$ and $\max\{0, x+a-1\} = 0$. Substituting these into $O(a,x)$ yields
$$O(a,x) = \frac{x}{x+1}.$$
  • Case 2: $1-a \leq x \leq a$
Here, $x \leq a$ and $x + a \geq 1$. These imply $\min\{a,x\} = x$ and $\max\{0, x+a-1\} = x+a-1$. Substituting into $O(a,x)$ gives
$$O(a,x) = \frac{x}{x + [1 - (x+a-1)]} = \frac{x}{2-a}.$$
  • Case 3: $a \leq x \leq 1$
Since $a \geq 0.5$, we have $0.5 \leq a \leq x$ and $x + a \geq 1$. Thus, $\min\{a,x\} = a$ and $\max\{0, x+a-1\} = x+a-1$. Substituting these into $O(a,x)$ results in
$$O(a,x) = \frac{a}{a + [1 - (x+a-1)]} = \frac{a}{2-x}.$$
To summarize, when $a \geq 0.5$, $O(a,x)$ is given by the piecewise function:
$$O(a,x) = \begin{cases} \dfrac{x}{x+1} & \text{if } x \leq 1-a, \\[4pt] \dfrac{x}{2-a} & \text{if } 1-a \leq x \leq a, \\[4pt] \dfrac{a}{2-x} & \text{if } a \leq x \leq 1. \end{cases} \qquad (7)$$
Figure 2 (where $a \geq 0.5$) shows a piecewise function without any flat segment, meaning that $O(a,x)$ is continuous and strictly increasing for all $0 \leq x \leq 1$.
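To make the two case analyses concrete, the following Python sketch implements $O(a,x)$ directly from (5) and checks it against the piecewise forms (6) and (7) on a grid. It is a minimal illustration; the helper names are ours, not notation from [30].

```python
def overlap_min_shifted(a: float, x: float) -> float:
    """Min-shifted overlap function O(a, x) of Eq. (5).  The denominator
    D = min{a,x} + (1 - max{0, x+a-1}) satisfies D >= 1 (Lemma 1:(iii)),
    so there is no division by zero."""
    num = min(a, x)
    return num / (num + (1.0 - max(0.0, x + a - 1.0)))

def overlap_piecewise(a: float, x: float) -> float:
    """Piecewise forms: Eq. (6) for a <= 0.5 and Eq. (7) for a >= 0.5."""
    if a <= 0.5:
        if x <= a:
            return x / (x + 1.0)
        if x <= 1.0 - a:
            return a / (a + 1.0)      # flat segment on [a, 1-a]
        return a / (2.0 - x)
    if x <= 1.0 - a:
        return x / (x + 1.0)
    if x <= a:
        return x / (2.0 - a)
    return a / (2.0 - x)

# Spot-check that (5) and the piecewise forms agree on a grid.
for a in (0.2, 0.45, 0.5, 0.55, 0.8):
    for k in range(101):
        x = k / 100.0
        assert abs(overlap_min_shifted(a, x) - overlap_piecewise(a, x)) < 1e-12
```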
Theorem 1.
For the overlap function $O(a,x)$ defined in (5), when $0 \leq a \leq 0.5$, the following properties hold:
(i). Strict monotonicity for small x: if $0 \leq x \leq a$, then $O(a,x)$ is strictly increasing.
(ii). Symmetry: if $a \leq x \leq 1-a$, then $O(a,x) = O(a, 1-x)$.
(iii). Strict monotonicity for large x: if $1-a \leq x \leq 1$, then $O(a,x)$ is strictly increasing.
Proof. 
(i). Strict monotonicity for small x:
Given $0 \leq a \leq 0.5$ and $0 \leq x \leq a$, consider two values $0 \leq x_1 < x_2 \leq a$; we show that $O(a, x_2) - O(a, x_1) > 0$ for $a > 0$.
From (6), we have $O(a, x_1) = \frac{x_1}{x_1+1}$ and $O(a, x_2) = \frac{x_2}{x_2+1}$. Thus,
$$O(a, x_2) - O(a, x_1) = \frac{x_2}{x_2+1} - \frac{x_1}{x_1+1} = \frac{x_2 - x_1}{(x_1+1)(x_2+1)} > 0.$$
Since the denominator is always positive and the numerator $x_2 - x_1$ is positive, the expression remains positive. Hence, $O(a,x)$ is strictly increasing on this interval.
(ii). Symmetry:
Given $0 \leq a \leq 0.5$ and $a \leq x \leq 1-a$, we evaluate $O(a,x)$ and $O(a, 1-x)$ as follows:
  • Evaluating $O(a,x)$: Since $a \leq x \leq 1-a$, we have $a \leq x$ and $a + x \leq 1$, which implies $\min\{a,x\} = a$ and $\max\{0, x+a-1\} = 0$, leading to
$$O(a,x) = \frac{\min\{a,x\}}{\min\{a,x\} + (1 - \max\{0, x+a-1\})} = \frac{a}{a + (1-0)} = \frac{a}{a+1}.$$
  • Evaluating $O(a, 1-x)$: Since $a \leq x \leq 1-a$, we have $a \leq 1-x \leq 1-a$ and $(1-x) + a \leq 1$, which implies $\min\{a, 1-x\} = a$ and $\max\{0, (1-x)+a-1\} = 0$, leading to
$$O(a, 1-x) = \frac{\min\{a, 1-x\}}{\min\{a, 1-x\} + (1 - \max\{0, (1-x)+a-1\})} = \frac{a}{a + (1-0)} = \frac{a}{a+1}.$$
Since both expressions are identical for all $a \leq x \leq 1-a$, it follows that $O(a,x) = O(a, 1-x)$.
(iii). Strict monotonicity for large x:
Given $0 \leq a \leq 0.5$ and $1-a \leq x \leq 1$, consider two values $1-a \leq x_1 < x_2 \leq 1$; we show that $O(a, x_2) - O(a, x_1) > 0$ for $a > 0$.
From (6), we have $O(a, x_1) = \frac{a}{2 - x_1}$ and $O(a, x_2) = \frac{a}{2 - x_2}$. Thus,
$$O(a, x_2) - O(a, x_1) = \frac{a}{2 - x_2} - \frac{a}{2 - x_1} = \frac{a(x_2 - x_1)}{(2 - x_1)(2 - x_2)} > 0.$$
Again, since the denominator is positive and $a(x_2 - x_1) > 0$, we conclude that $O(a,x)$ is strictly increasing on this interval. □
Theorem 1 states that for $0 \leq a \leq 0.5$, the min-shifted overlap function $O(a,x)$ is strictly increasing on the intervals $0 \leq x \leq a$ and $1-a \leq x \leq 1$, while remaining constant and symmetric on the middle interval $a \leq x \leq 1-a$.
For example, when $a = 0.45$, the overlap function $O(0.45, x)$ is strictly increasing for $0 \leq x \leq 0.45$ and $0.55 \leq x \leq 1$. However, by Theorem 1, it remains constant at $O(0.45, x) = 0.45/1.45 \approx 0.310345$ on the middle interval $0.45 \leq x \leq 0.55$.
Theorem 2.
For the overlap function $O(a,x)$ defined in (5), if $0.5 \leq a \leq 1$, then $O(a,x)$ is strictly increasing for all $0 \leq x \leq 1$.
Proof. 
Given $0.5 \leq a \leq 1$ and $0 \leq x \leq 1$, we analyze the following three cases and show that $O(a, x_2) - O(a, x_1) > 0$ whenever $x_1 < x_2$.
  • Case 1: $0 \leq x_1 < x_2 \leq 1-a$
From (7), we have $O(a, x_1) = \frac{x_1}{x_1+1}$ and $O(a, x_2) = \frac{x_2}{x_2+1}$. Thus, the difference is
$$O(a, x_2) - O(a, x_1) = \frac{x_2}{x_2+1} - \frac{x_1}{x_1+1} = \frac{x_2 - x_1}{(x_1+1)(x_2+1)} > 0.$$
Since the denominator $(x_1+1)(x_2+1)$ is always positive and the numerator $x_2 - x_1$ is positive, $O(a,x)$ is strictly increasing on this interval.
  • Case 2: $1-a \leq x_1 < x_2 \leq a$
From (7), we have $O(a, x_1) = \frac{x_1}{2-a}$ and $O(a, x_2) = \frac{x_2}{2-a}$. Thus, the difference is
$$O(a, x_2) - O(a, x_1) = \frac{x_2}{2-a} - \frac{x_1}{2-a} = \frac{x_2 - x_1}{2-a} > 0.$$
Since the denominator $2-a$ is always positive and the numerator $x_2 - x_1$ is positive, $O(a,x)$ is strictly increasing on this interval.
  • Case 3: $a \leq x_1 < x_2 \leq 1$
From (7), we have $O(a, x_1) = \frac{a}{2-x_1}$ and $O(a, x_2) = \frac{a}{2-x_2}$. Thus, the difference is
$$O(a, x_2) - O(a, x_1) = \frac{a}{2-x_2} - \frac{a}{2-x_1} = \frac{a(x_2 - x_1)}{(2-x_1)(2-x_2)} > 0.$$
Since $a > 0$, the denominator is always positive and the numerator $a(x_2 - x_1)$ is positive; hence, $O(a,x)$ is strictly increasing on this interval.
Since $O(a,x)$ is strictly increasing in all three cases, we conclude that it is strictly increasing for all $0 \leq x \leq 1$. □
Theorem 2 presents a different result from Theorem 1. It states that for $0.5 \leq a \leq 1$, the min-shifted overlap function $O(a,x)$, as defined in (5), is strictly increasing over the entire interval $0 \leq x \leq 1$. For example, when $a = 0.55$, the overlap function $O(0.55, x)$ is strictly increasing throughout $0 \leq x \leq 1$; in other words, $O(0.55, x)$ takes a distinct value at each distinct $x$ in this interval.
Proposition 1.
For the min-shifted overlap function $O(a,x)$ defined in (5), the following strict quasi-monotonicity property holds:
$$O(a, x_1) < O(a, x_2) \implies x_1 < x_2, \quad \forall\, a, x_1, x_2 \in [0,1].$$
Proof. 
By Definition 1, the function $O(a,x)$ is non-decreasing and continuous in $x \in [0,1]$ for any fixed $a \in [0,1]$. Suppose, for contradiction, that $O(a, x_1) < O(a, x_2)$ but $x_1 \geq x_2$. Since $O(a,x)$ is non-decreasing in $x$, it follows that
$$O(a, x_1) \geq O(a, x_2),$$
which contradicts the initial assumption that $O(a, x_1) < O(a, x_2)$. Therefore, the assumption $x_1 \geq x_2$ is false, and it must hold that $x_1 < x_2$. □
To provide an intuitive understanding of Proposition 1, we examine the graph of the min-shifted overlap function $O(a,x)$ over the domain $x \in [0,1]$ for fixed values of $a \in (0,1)$. As illustrated in Figure 3, for any fixed $a$ the function $O(a,x)$ is non-decreasing in $x$ (with a flat middle segment when $a < 0.5$), so a strict increase in function value forces a strict increase in $x$.
For example, consider the case $a = 0.4$. If $O(0.4, x_1) = 0.285714 < O(0.4, x_2) = 0.3$, it follows that $x_1 \in [0.4, 0.6]$ and $x_1 < x_2 = 2/3$, which is consistent with the strict quasi-monotonicity of the function.
Proposition 2.
For a given $O(a, x_1) < O(a, x_2)$, the following properties hold:
(i). If $a \leq 0.5$ and $x_1 \in [0, a] \cup [1-a, 1]$, or if $0.5 \leq a \leq 1$ and $x_1 \in [0, a]$, then $x_1 < x_2 \leq 1$.
(ii). If $a \leq 0.5$ and $x_1 \in [a, 1-a]$, then $x_1 \leq 1-a < x_2 \leq 1$.
Proof. 
We prove each case separately based on the structure of the min-shifted overlap function $O(a,x)$.
(i). For $x_1 \in [0, a]$ or $x_1 \in [1-a, 1]$ when $a \leq 0.5$, Theorem 1:(i) and (iii) ensure that $O(a,x)$ is strictly increasing on these intervals. Similarly, when $a \geq 0.5$ and $x_1 \in [0, a]$, Theorem 2 ensures that $O(a,x)$ is strictly increasing on the entire interval $[0,1]$, which includes $[0, a]$. Thus, in both scenarios, $O(a, x_1) < O(a, x_2)$ implies $x_1 < x_2 \leq 1$.
(ii). If $a \leq 0.5$ and $x_1 \in [a, 1-a]$, then by Theorem 1:(ii) the function $O(a,x)$ is symmetric on this interval: $O(a,x) = O(a, 1-x)$. This symmetry implies that $O(a,x)$ is not strictly increasing on $[a, 1-a]$, so it is possible to have $O(a, x_1) = O(a, x_2)$ even if $x_1 \neq x_2$ in this range. Since we are given $O(a, x_1) < O(a, x_2)$, $x_2$ must lie outside this non-strict region. Because the function resumes strict monotonicity on $(1-a, 1]$, it follows that $x_2 \in (1-a, 1]$. Therefore, $x_1 \leq 1-a < x_2 \leq 1$. □
Moreover, based on (6) and (7), the value of $x$ in the equation $O(a,x) = \gamma$ can be obtained via the inverse function method as follows:
(1) If $a \leq 0.5$ and $O(a,x) = \gamma$, then
$$x = \begin{cases} \dfrac{\gamma}{1-\gamma} & \text{if } \gamma < \dfrac{a}{a+1}, \\[4pt] \text{any value in } [a, 1-a] & \text{if } \gamma = \dfrac{a}{a+1}, \\[4pt] \dfrac{2\gamma - a}{\gamma} & \text{if } \dfrac{a}{a+1} < \gamma \leq a. \end{cases} \qquad (8)$$
(2) If $0.5 \leq a \leq 1$ and $O(a,x) = \gamma$, then
$$x = \begin{cases} \dfrac{\gamma}{1-\gamma} & \text{if } \gamma \leq \dfrac{1-a}{2-a}, \\[4pt] \gamma(2-a) & \text{if } \dfrac{1-a}{2-a} \leq \gamma \leq \dfrac{a}{2-a}, \\[4pt] \dfrac{2\gamma - a}{\gamma} & \text{if } \dfrac{a}{2-a} \leq \gamma \leq a. \end{cases} \qquad (9)$$
Note that the value of the variable $x$ does not exist if $O(a,x) = \gamma > a$, since $O(a,x) \leq \min\{a,x\}$ according to Lemma 1:(iv).
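The inverse computation above can be sketched in Python as follows, building on overlap_min_shifted from the earlier sketch; the small tolerance eps is our own guard for the floating-point comparison with the plateau value $\frac{a}{a+1}$. It returns the smallest root, which is the value needed in the iterative approach of Section 4, and None when $\gamma > a$:

```python
def inverse_overlap(a: float, gamma: float, eps: float = 1e-9):
    """Smallest x in [0,1] with O(a, x) = gamma, following Eqs. (8)-(9).
    Returns None when gamma > a, where no root exists by Lemma 1:(iv)."""
    if gamma > a + eps:
        return None
    if gamma <= 0.0:
        return 0.0
    if a <= 0.5:
        plateau = a / (a + 1.0)            # value of O(a, x) on [a, 1-a]
        if gamma < plateau - eps:
            return gamma / (1.0 - gamma)   # branch x <= a of (6)
        if gamma <= plateau + eps:
            return a                       # every x in [a, 1-a] is a root; take the smallest
        return (2.0 * gamma - a) / gamma   # branch x >= 1-a of (6)
    lo = (1.0 - a) / (2.0 - a)             # O(a, 1-a)
    hi = a / (2.0 - a)                     # O(a, a)
    if gamma <= lo:
        return gamma / (1.0 - gamma)
    if gamma <= hi:
        return gamma * (2.0 - a)
    return (2.0 * gamma - a) / gamma
```

For instance, inverse_overlap(0.25, 0.2) returns 0.25 (the plateau case) and inverse_overlap(0.35, 0.1903) returns about 0.23503; both values reappear in Example 3.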
Why study the min-shifted overlap function? From the above analysis, adjusting the parameter $a$ produces figures of different shapes, illustrating that the min-shifted overlap function has a very rich capacity to model different scenarios. Recall that an overlap function is a mathematical tool used to quantify the degree of "overlap", or shared characteristics, between two or more objects or concepts. Take Figure 3, for example: if $O(a,x)$ is used to model the "indifference" between $a$ and $x$, then selecting $a = 0.2$ gives a wide range $[0.2, 0.8]$ of $x$ over which the indifference stays at the same low level. As $a$ increases toward $0.5$, this range shrinks. Another feature is the piecewise property: the min-shifted overlap function offers three piecewise segments, which can capture asymmetric indifference between two objects.

3. A Single-Variable Optimization Problem for the Min-Max Programming Problem (2)

In this section, we review how to find the uniform-optimal solution to the min-max programming problem (2). Some definitions and relevant properties are given.
Definition 2.
Let $x = (x_j)_{j \in J}$ be a vector in $[0,1]^n$. The solution set of problems (2) and (3) is denoted by $X(A, b) := \{x \in [0,1]^n \mid g_i(x) = \sum_{j=1}^{n} O(a_{ij}, x_j) \geq b_i, \ i \in I\}$.
Definition 3.
Problem (2) is said to be consistent if $X(A,b) \neq \emptyset$. A solution $x^* \in X(A,b)$ is called optimal for problem (2) if $Z(x^*) \leq Z(x)$ for all $x \in X(A,b)$.
Definition 4.
Let $x^1 = (x_j^1)_{j\in J}$ and $x^2 = (x_j^2)_{j\in J}$ be two vectors. We write $x^1 \leq x^2$ if and only if $x_j^1 \leq x_j^2$ for all $j \in J$. A vector $\check{x} \in X(A,b)$ is said to be minimal if, for any $x \in X(A,b)$, $x \leq \check{x}$ implies $x = \check{x}$.
Proposition 3.
Problem (2) is consistent, i.e., $X(A,b) \neq \emptyset$, if and only if $\sum_{j=1}^{n} a_{ij} \geq b_i$ holds for all $i \in I$.
Proof. 
  • (⇒) Since $X(A,b) \neq \emptyset$, take any $x \in X(A,b)$, so that $g_i(x) \geq b_i$ for all $i \in I$. Because $O(a_{ij}, x_j) \leq \min\{a_{ij}, x_j\}$ for all $i \in I$, $j \in J$ according to Lemma 1:(iv), we have
$$\sum_{j=1}^{n} a_{ij} \geq \sum_{j=1}^{n} \min\{a_{ij}, x_j\} \geq \sum_{j=1}^{n} O(a_{ij}, x_j) = g_i(x) \geq b_i, \quad \text{for all } i \in I.$$
  • (⇐) Since $\sum_{j=1}^{n} a_{ij} \geq b_i$ for all $i \in I$ is given, and Lemma 1:(ii) states $O(a_{ij}, 1) = a_{ij}$, it follows that $x = (x_j)_{j\in J}$ with $x_j = 1$ for all $j \in J$ is a feasible solution to problem (2). Specifically,
$$g_i(x) = \sum_{j=1}^{n} O(a_{ij}, 1) = \sum_{j=1}^{n} a_{ij} \geq b_i, \quad \text{for all } i \in I. \qquad \Box$$
Proposition 3 shows that the necessary and sufficient condition for $X(A,b) \neq \emptyset$ in (2) is $\sum_{j=1}^{n} a_{ij} \geq b_i$ for all $i \in I$. This condition can be used to check the consistency of problem (2) before searching for an optimal solution.
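In code, this consistency test is a one-liner; a sketch with our own helper name:

```python
def is_consistent(A, b) -> bool:
    """Proposition 3: problem (2) is consistent iff sum_j a_ij >= b_i for every row i."""
    return all(sum(row) >= b_i for row, b_i in zip(A, b))
```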
As previously mentioned, to find a uniform-optimal solution of problem (2), Wu et al. [30] showed that it can be transformed into the following single-variable optimization problem:
$$\begin{array}{ll} \text{Minimize} & Z(y) = y \\ \text{subject to} & G_i(y) = \sum_{j=1}^{n} O(a_{ij}, y) \geq b_i, \quad i \in I, \\ & y \in [0,1], \end{array} \qquad (10)$$
where the parameters satisfy $a_{ij} \in [0,1]$ and $b_i > 0$ for all $i \in I = \{1, 2, \ldots, m\}$, $j \in J = \{1, 2, \ldots, n\}$.
Theorem 3
([30], Theorem 2). Suppose $y^*$ is the optimal solution to problem (10), and let $x^* = (x_1^*, \ldots, x_n^*)$ be an optimal solution to problem (2) with optimal value $Z(x^*) = \max\{x_1^*, \ldots, x_n^*\}$. Then $\max\{x_1^*, \ldots, x_n^*\} = y^*$ holds.
Theorem 3 demonstrates that the min-max programming problem (2), when constrained by a solvable general addition-overlap function, can be reformulated as a single-variable optimization problem (10). This transformation enables the use of the bisection method to determine the optimal value $y^*$. This methodology also applies to problem (3), which is subject to the min-shifted overlap function. Consequently, a uniform-optimal solution $x^* = (x_j^*)_{j\in J}$ for problem (3), where all variables take the same value $x_j^* = y^*$ for all $j \in J$, can be expressed as
$$x^* = (x_1^*, x_2^*, \ldots, x_n^*) = (y^*, y^*, \ldots, y^*).$$
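Since each $G_i(y)$ is continuous and non-decreasing in $y$, feasibility in (10) is monotone, so the smallest feasible $y$ can be located by bisection. The sketch below is one generic way to compute $y^*$ (it reuses overlap_min_shifted; the tolerance is arbitrary); it is not the tailored algorithm of [30]:

```python
def solve_uniform_optimal(A, b, tol: float = 1e-9):
    """Bisection for problem (10): smallest y in [0,1] such that
    sum_j O(a_ij, y) >= b_i for all i.  Returns None if inconsistent."""
    def feasible(y: float) -> bool:
        return all(sum(overlap_min_shifted(a, y) for a in row) >= b_i
                   for row, b_i in zip(A, b))
    if not feasible(1.0):          # same test as Proposition 3, since O(a, 1) = a
        return None
    lo, hi = 0.0, 1.0              # invariant: hi is always feasible
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi
```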
Example 1.
Find the optimal solution to the following min-max programming problem, which is subject to a system of addition-overlap functions as defined in problem (3).
$$\begin{array}{ll} \text{Minimize} & Z(x) = \max\{x_1, x_2, x_3, x_4, x_5, x_6\} \\ \text{subject to} & g_1(x) \geq 1.8, \quad g_2(x) \geq 0.6, \quad g_3(x) \geq 2.0, \quad g_4(x) \geq 1.8, \\ & x_j \in [0,1], \quad j \in J = \{1, 2, 3, 4, 5, 6\}, \end{array} \qquad (11)$$
where
$$\begin{aligned} g_1(x) &= O(0.35, x_1) + O(0.6, x_2) + O(0.3, x_3) + O(0.4, x_4) + O(0.42, x_5) + O(0.45, x_6), \\ g_2(x) &= O(0.2, x_1) + O(0.3, x_2) + O(0.68, x_3) + O(0.4, x_4) + O(0.8, x_5) + O(0.6, x_6), \\ g_3(x) &= O(0.25, x_1) + O(0.4, x_2) + O(0.32, x_3) + O(0.3, x_4) + O(0.7, x_5) + O(0.8, x_6), \\ g_4(x) &= O(0.26, x_1) + O(0.3, x_2) + O(0.45, x_3) + O(0.6, x_4) + O(0.42, x_5) + O(0.9, x_6), \end{aligned}$$
and the operator $O(a_{ij}, x_j)$ is the min-shifted overlap function
$$O(a_{ij}, x_j) = \frac{\min\{a_{ij}, x_j\}}{\min\{a_{ij}, x_j\} + (1 - \max\{0, x_j + a_{ij} - 1\})}.$$
According to Theorem 3, problem (11) can be transformed into the following single-variable problem:
$$\begin{array}{ll} \text{Minimize} & Z(y) = y \\ \text{subject to} & G_1(y) \geq 1.8, \quad G_2(y) \geq 0.6, \quad G_3(y) \geq 2.0, \quad G_4(y) \geq 1.8, \\ & y \in [0,1], \end{array} \qquad (12)$$
where
$$\begin{aligned} G_1(y) &= O(0.35, y) + O(0.6, y) + O(0.3, y) + O(0.4, y) + O(0.42, y) + O(0.45, y), \\ G_2(y) &= O(0.2, y) + O(0.3, y) + O(0.68, y) + O(0.4, y) + O(0.8, y) + O(0.6, y), \\ G_3(y) &= O(0.25, y) + O(0.4, y) + O(0.32, y) + O(0.3, y) + O(0.7, y) + O(0.8, y), \\ G_4(y) &= O(0.26, y) + O(0.3, y) + O(0.45, y) + O(0.6, y) + O(0.42, y) + O(0.9, y). \end{aligned}$$
By applying either the method proposed in [30] or the classical bisection method to problem (12), we obtain the optimal value $y^* = 0.643875$. According to Theorem 3, the corresponding uniform-optimal solution $x^* = (x_j^*)_{j\in J}$ for problem (11) is
$$x^* = (x_1^*, \ldots, x_6^*) = (0.643875, \ldots, 0.643875).$$
In Example 1, the uniform-optimal solution $x^* = (x_j^*)_{j\in J}$, where $x_j^* = y^* = 0.643875$ for all $j \in J = \{1, 2, \ldots, 6\}$, is substituted into each constraint of problem (11). The corresponding values of the constraint functions are
$$g_1(x^*) = 1.869 > 1.8, \quad g_2(x^*) = 2.159 > 0.6, \quad g_3(x^*) = 2.000 = 2.0, \quad g_4(x^*) = 2.106 > 1.8.$$
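These values can be reproduced with the sketches above (the data are those of problem (11); the printed numbers should match up to the bisection tolerance):

```python
A = [[0.35, 0.60, 0.30, 0.40, 0.42, 0.45],
     [0.20, 0.30, 0.68, 0.40, 0.80, 0.60],
     [0.25, 0.40, 0.32, 0.30, 0.70, 0.80],
     [0.26, 0.30, 0.45, 0.60, 0.42, 0.90]]
b = [1.8, 0.6, 2.0, 1.8]

y_star = solve_uniform_optimal(A, b)   # ~0.643875
for i, (row, b_i) in enumerate(zip(A, b), start=1):
    g_i = sum(overlap_min_shifted(a, y_star) for a in row)
    print(f"g_{i}(x*) = {g_i:.3f}  (b_{i} = {b_i})")
```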

4. Solving the Minimal-Optimal Solution for Problem (3)

As mentioned above, the bisection method can be used to solve the min-max programming problem with a general addition-overlap function, including problem (3), to obtain the uniform-optimal solution $x^* = (x_j^*)_{j\in J}$. However, no efficient method is currently available for directly computing a minimal-optimal solution to problem (3).
Although the uniform-optimal solution can be obtained for problem (3), it is not necessarily a “minimal solution” in terms of constraint satisfaction. Nevertheless, it serves as an upper bound among all optimal solutions. Therefore, identifying a minimal-optimal solution to the problem (3) involves determining which components of the uniform-optimal solution can be reduced without violating feasibility or compromising optimality. In other words, we need to distinguish which components are fixed and unchangeable, and which ones can be adjusted.

4.1. The Critical Constraints

Definition 5.
Let $x^* = (x_j^*)_{j\in J}$ with $x_j^* = y^*$ for all $j \in J$ be the uniform-optimal solution of problem (3). Define the following:
(i). The critical set is $I(x^*) = \{i \in I \mid g_i(x^*) = b_i\}$.
(ii). For $i \in I(x^*)$, the constraint $g_i(x^*) = b_i$ is called a critical constraint.
(iii). For $i \in I(x^*)$, define the binding set as $J_i(x^*) = J_i^{*1} \cup J_i^{*2} \cup J_i^{*3}$, and a variable $x_j$ is called a binding variable if $j \in J_i(x^*)$, where
$$\begin{aligned} J_i^{*1} &= \{j \in J \mid 0 \leq a_{ij} < 0.5 \text{ and } 0 \leq x_j^* \leq a_{ij}\}, \\ J_i^{*2} &= \{j \in J \mid 0 \leq a_{ij} < 0.5 \text{ and } 1 - a_{ij} \leq x_j^* \leq 1\}, \\ J_i^{*3} &= \{j \in J \mid 0.5 \leq a_{ij} \leq 1 \text{ and } 0 \leq x_j^* \leq 1\}. \end{aligned}$$
Lemma 3.
Let $x^* = (x_j^*)_{j\in J}$, with $x_j^* = y^*$ for all $j \in J$, be the uniform-optimal solution of problem (3). Then the critical set is non-empty, that is, $I(x^*) = \{i \in I \mid g_i(x^*) = b_i\} \neq \emptyset$.
Proof. 
By Theorem 3, the uniform solution $x^* = (y^*, \ldots, y^*)$ is optimal for problem (3). Suppose, for contradiction, that $I(x^*) = \emptyset$. Then, for every $i \in I$, we have
$$\sum_{j=1}^{n} O(a_{ij}, y^*) > b_i.$$
Since the overlap function $O(a,x)$ is continuous and non-decreasing in $x$, we can slightly decrease $y^*$ to a value $y' < y^*$ such that
$$\sum_{j=1}^{n} O(a_{ij}, y') \geq b_i, \quad i \in I.$$
This implies that $y'$ is still feasible for (3) but strictly smaller than $y^*$, which contradicts the optimality of $y^*$. Therefore, $I(x^*) \neq \emptyset$ must hold. □
Lemma 3 shows that once the optimal value $y^*$ of problem (3) is determined, there must exist at least one critical constraint:
$$g_i(x^*) = \sum_{j=1}^{n} O(a_{ij}, x_j^*) = \sum_{j=1}^{n} O(a_{ij}, y^*) = b_i, \quad \text{for } i \in I(x^*). \qquad (13)$$
Such behavior can be explicitly observed in Example 1 through the equality $g_3(x^*) = b_3$. Moreover, the critical constraints $g_i(x^*) = b_i$ for $i \in I(x^*)$ in (13) are essential for characterizing the minimal-optimal solution to problem (3). The theoretical foundation established in Theorems 1 and 2 rigorously supports this relationship. In particular:
  • Theorem 1:(i) shows that for $0 \leq a \leq 0.5$, the overlap function $O(a,x)$ is strictly increasing on the interval $[0, a]$.
  • Theorem 1:(iii) ensures that $O(a,x)$ is strictly increasing on the interval $[1-a, 1]$ if $0 \leq a \leq 0.5$.
  • Theorem 2 guarantees that $O(a,x)$ is strictly increasing on the entire interval $[0,1]$ when $0.5 \leq a \leq 1$.
Lemma 4
(Binding variables). Let $x^* = (x_j^*)_{j\in J}$, with $x_j^* = y^*$ for all $j \in J$, be the uniform-optimal solution to problem (3). Let $x' = (x_j')_{j\in J}$ be any other optimal solution of the same problem. Then
(i). For $i \in I(x^*)$ and $j \in J_i^{*1}$ such that $0 \leq x_j^* \leq a_{ij}$, it must hold that $x_j' = x_j^*$;
(ii). For $i \in I(x^*)$ and $j \in J_i^{*2}$ such that $1 - a_{ij} \leq x_j^* \leq 1$, it also holds that $x_j' = x_j^*$;
(iii). For $i \in I(x^*)$ and $j \in J_i^{*3}$ such that $0 \leq x_j^* \leq 1$, it holds that $x_j' = x_j^*$.
Proof. 
(i). Suppose, for contradiction, that $x' = (x_j')_{j\in J} \neq x^* = (x_j^*)_{j\in J}$ on $J_i^{*1}$. Without loss of generality, assume $x_k' = x_k^*$ for all $k \in J$, $k \neq r$, and $x_r' < x_r^*$ for some $r \in J_i^{*1}$.
Since $0 \leq a_{ir} < 0.5$ and $0 \leq x_r^* \leq a_{ir}$, by Theorem 1:(i) the overlap function $O(a_{ir}, x)$ is strictly increasing on the interval $[0, a_{ir}]$. Therefore,
$$O(a_{ir}, x_r') < O(a_{ir}, x_r^*).$$
For the critical constraint corresponding to $i \in I(x^*)$, we have
$$g_i(x') = \sum_{j \in J, j \neq r} O(a_{ij}, x_j') + O(a_{ir}, x_r') < \sum_{j \in J, j \neq r} O(a_{ij}, x_j^*) + O(a_{ir}, x_r^*) = b_i,$$
which contradicts the feasibility of $x'$, as the $i$-th critical constraint is violated.
Alternatively, suppose $x_j' \geq x_j^*$ for all $j \in J$, with $x_k' = x_k^* = y^*$ for all $k \in J$, $k \neq r$, and $x_r' > x_r^* = y^*$. Then the objective value of $x'$ is
$$Z(x') = \max_{j \in J}\{x_j'\} = x_r' > Z(x^*) = \max_{j \in J}\{x_j^*\} = y^*,$$
which contradicts the optimality of $x'$.
Hence, in either case, for $i \in I(x^*)$, we conclude that $x_j' = x_j^*$ for all $j \in J_i^{*1}$.
(ii). The proof follows a similar argument. Since $j \in J_i^{*2}$ and $1 - a_{ij} \leq x_j^* \leq 1$, the overlap function $O(a_{ij}, x_j)$ is strictly increasing on the interval $[1 - a_{ij}, 1]$ by Theorem 1:(iii). For the critical constraint $g_i(x^*) = b_i$, $i \in I(x^*)$, reducing $x_j$ would decrease $O(a_{ij}, x_j)$, thereby reducing $g_i(x)$ below $b_i$ and violating feasibility. Increasing $x_j$ would lead to $x_j > y^*$, which contradicts optimality. Thus, it must be that $x_j' = x_j^*$ for $j \in J_i^{*2}$.
(iii). For any $j \in J_i^{*3}$, Theorem 2 ensures that $O(a_{ij}, x_j)$ is strictly increasing on the entire interval $[0,1]$. Therefore, for the critical constraint $g_i(x^*) = b_i$, $i \in I(x^*)$, reducing $x_j$ would decrease $O(a_{ij}, x_j)$ and consequently make $g_i(x) < b_i$, violating feasibility. Increasing $x_j$ would result in $x_j > x_j^* = y^*$, again contradicting optimality. Hence, it must be that $x_j' = x_j^*$ for $j \in J_i^{*3}$. □
The results established in Lemma 4 reveal an essential structural property of the minimal-optimal solution to problem (3): the binding variables $x_j$, for $j \in J_i(x^*)$ with respect to the critical constraints $g_i(x^*) = b_i$, where $i \in I(x^*)$, must remain fixed at the uniform-optimal value $y^*$ in any optimal solution.
Although the uniform-optimal solution $x^* = (y^*, \ldots, y^*)$ is usually not minimal-optimal, it serves as an upper bound for each component of any minimal-optimal solution. In particular, Lemma 4 confirms that the binding variables $x_j = x_j^* = y^*$, for all $j \in J_i(x^*) = J_i^{*1} \cup J_i^{*2} \cup J_i^{*3}$ and $i \in I(x^*)$, are fixed and cannot be decreased without violating feasibility or optimality. In contrast, the remaining variables $x_j = x_j^*$, where $j \in J \setminus J_i(x^*)$ for $i \in I(x^*)$, may be further reduced while maintaining feasibility. This observation provides a solid foundation for constructing a minimal-optimal solution of problem (3).
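As a sketch, the critical set and binding sets of Definition 5 can be read off the uniform-optimal solution directly (the tolerance guards the floating-point equality test $g_i(x^*) = b_i$; indices are 0-based and the helper name is ours):

```python
def critical_and_binding(A, b, y_star: float, tol: float = 1e-6):
    """Critical set I(x*) and binding sets J_i(x*) of Definition 5.
    A variable j is binding for a critical row i when a_ij >= 0.5 (J_i*3),
    y* <= a_ij (J_i*1), or y* >= 1 - a_ij (J_i*2)."""
    critical, binding = [], {}
    for i, (row, b_i) in enumerate(zip(A, b)):
        g_i = sum(overlap_min_shifted(a, y_star) for a in row)
        if abs(g_i - b_i) <= tol:
            critical.append(i)
            binding[i] = [j for j, a in enumerate(row)
                          if a >= 0.5 or y_star <= a or y_star >= 1.0 - a]
    return critical, binding
```

On the data of Example 1, this returns the critical row 2 (i.e., $i = 3$ in the paper's 1-based indexing) with binding variables $\{1, 4, 5\}$, matching the binding set $J_3(x^*) = \{2, 5, 6\}$ derived in Example 2 below.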
Example 2.
Apply the optimal value $y^*$ obtained in Example 1 to identify the critical set, critical constraint, and binding variables, according to Lemma 4.
From Example 1, we have the optimal value $y^* = 0.643875$ and the uniform-optimal solution $x^* = (x_j^*)_{1\times 6}$, where $x_j^* = y^*$ for all $j \in J = \{1, \ldots, 6\}$. Substituting $x^*$ into each constraint of problem (11) yields
$$g_1(x^*) = 1.869 > 1.8, \quad g_2(x^*) = 2.159 > 0.6, \quad g_3(x^*) = 2.000 = 2.0, \quad g_4(x^*) = 2.106 > 1.8.$$
By Definition 5, the critical set is
$$I(x^*) = \{3\}, \quad \text{and the critical constraint is } g_3(x^*) = b_3.$$
In particular, we have
$$g_3(x^*) = O(0.25, x_1^*) + O(0.4, x_2^*) + O(0.32, x_3^*) + O(0.3, x_4^*) + O(0.7, x_5^*) + O(0.8, x_6^*) = 2.0 = b_3.$$
Analyzing the overlap function components of the critical constraint:
  • $0 \leq a_{31} = 0.25,\ a_{33} = 0.32,\ a_{34} = 0.3 < 0.5$, but $x_j^* \notin [0, a_{3j}]$ for $j = 1, 3, 4$ $\Rightarrow$ $J_3^{*1} = \emptyset$;
  • $0 \leq a_{32} = 0.4 < 0.5$ and $1 - a_{32} = 0.6 \leq x_2^* \leq 1$ $\Rightarrow$ $J_3^{*2} = \{2\}$;
  • $0.5 \leq a_{35} = 0.7,\ a_{36} = 0.8 \leq 1$ $\Rightarrow$ $J_3^{*3} = \{5, 6\}$.
Combining the three subsets, the binding set is
$$J_3(x^*) = J_3^{*1} \cup J_3^{*2} \cup J_3^{*3} = \{2, 5, 6\}.$$
Therefore, the binding variables $x_2 = x_5 = x_6 = y^*$ must remain fixed and cannot be reduced, while the remaining variables $x_1$, $x_3$, and $x_4$ may be adjusted in the process of identifying a minimal-optimal solution.

4.2. Alternative Optimal Solutions of Problem (3)

Lemma 4 identifies the binding and adjustable variables that determine the structure of a minimal-optimal solution to problem (3). The remaining challenge is to reduce the values of the adjustable variables while preserving optimality. Fortunately, since the uniform-optimal solution provides an upper bound for all optimal solutions, reducing the value of any one adjustable variable may still yield an alternative optimal solution.
Let $x^* = (x_j^*)_{j\in J}$ be the uniform-optimal solution to problem (3), where $x_j^* = y^*$ for all $j \in J$. Define
$$x_{(0)}^* = (x_j^*)_{j\in J} = (y^*, y^*, \ldots, y^*) \quad \text{and} \quad \underline{x}^{(j)} = (y^*, \ldots, y^*, \underline{x}_j^*, y^*, \ldots, y^*), \quad j \in J, \qquad (14)$$
and
$$\underline{x}_j^* = \max_{i \in I}\{x_j^{(i)}\}, \qquad x_j^{(i)} = \begin{cases} \min\{s_{ij} \mid O(a_{ij}, s_{ij}) = b_i - \tilde{b}_i\}, & \text{if } b_i - \tilde{b}_i > 0, \\ 0, & \text{if } b_i - \tilde{b}_i \leq 0, \end{cases} \qquad (15)$$
with
$$\tilde{b}_i = \sum_{k \in J, k \neq j} O(a_{ik}, y^*) = g_i(x_{(0)}^*) - O(a_{ij}, y^*).$$
By the definition in (14), the vector $x_{(0)}^*$ coincides with the uniform-optimal solution of problem (3). In the vector $\underline{x}^{(j)}$, all components equal $y^*$ except the $j$-th component, which is $\underline{x}_j^*$. The following theorem demonstrates that $\underline{x}^{(j)}$ is also an optimal solution.
Theorem 4.
Let the vectors $x_{(0)}^*$ and $\underline{x}^{(j)}$ be as in (14). Then
(i). $y^* \geq \underline{x}_j^*$ holds;
(ii). $\underline{x}^{(j)}$ is an optimal solution of problem (3).
Proof. 
(i). From (14) and (15), we know that $g_i(x_{(0)}^*) \geq b_i$ for all $i \in I$. Consider two cases:
  • Case 1: If $b_i - \tilde{b}_i \leq 0$ for all $i \in I$, then $x_j^{(i)} = 0$, so that $\underline{x}_j^* = 0$ and hence $y^* \geq \underline{x}_j^*$.
  • Case 2: If $b_i - \tilde{b}_i > 0$ for some $i \in I$, then
$$g_i(x_{(0)}^*) = \tilde{b}_i + O(a_{ij}, y^*) \geq b_i \implies O(a_{ij}, y^*) \geq b_i - \tilde{b}_i = O(a_{ij}, x_j^{(i)}),$$
where $x_j^{(i)}$ is defined as the smallest value such that $O(a_{ij}, x_j^{(i)}) = b_i - \tilde{b}_i$. Since $O(a_{ij}, x)$ is non-decreasing in $x$, it follows that $y^* \geq x_j^{(i)}$. Thus,
$$y^* \geq \underline{x}_j^* = \max_{i \in I}\{x_j^{(i)}\}.$$
Combining both cases, we conclude that $y^* \geq \underline{x}_j^*$ holds.
(ii). For the vector $\underline{x}^{(j)}$, from (14) and (15), we have
$$g_i(\underline{x}^{(j)}) = \tilde{b}_i + O(a_{ij}, \underline{x}_j^*).$$
To show that $\underline{x}^{(j)}$ is an optimal solution to problem (3), we check feasibility and optimality:
  • Feasibility:
Case 1: If $\underline{x}_j^* = 0$, then $b_i - \tilde{b}_i \leq 0$, i.e., $b_i \leq \tilde{b}_i$, for all $i \in I$ and $O(a_{ij}, \underline{x}_j^*) = 0$. Therefore,
$$g_i(\underline{x}^{(j)}) = \tilde{b}_i + O(a_{ij}, \underline{x}_j^*) = \tilde{b}_i + 0 = \tilde{b}_i \geq b_i.$$
Case 2: If $b_i - \tilde{b}_i > 0$ for some $i \in I$, then by the definition of $x_j^{(i)} = \min\{s_{ij} \mid O(a_{ij}, s_{ij}) = b_i - \tilde{b}_i\}$ and $\underline{x}_j^* = \max_{i \in I}\{x_j^{(i)}\}$, we have
$$g_i(\underline{x}^{(j)}) = \tilde{b}_i + O(a_{ij}, \underline{x}_j^*) \geq \tilde{b}_i + O(a_{ij}, x_j^{(i)}) = \tilde{b}_i + (b_i - \tilde{b}_i) = b_i.$$
In both cases, $g_i(\underline{x}^{(j)}) \geq b_i$ for all $i \in I$; therefore, $\underline{x}^{(j)}$ is feasible for problem (3).
  • Optimality: From part (i), we know that $y^* \geq \underline{x}_j^*$, so
$$Z(\underline{x}^{(j)}) = \max\{y^*, \ldots, y^*, \underline{x}_j^*, y^*, \ldots, y^*\} = y^* = Z(x^*).$$
Since $\underline{x}^{(j)}$ is feasible and achieves the same objective value, it is an optimal solution to problem (3). □
Theorem 4 provides valuable insight: for each $j \in J = \{1, 2, \ldots, n\}$, the vector $\underline{x}^{(j)}$ is an optimal solution to problem (3). These alternative optimal solutions offer flexibility and can be utilized to identify a minimal-optimal solution. Furthermore, by using different combinations of such optimal solutions, it is possible to obtain different minimal-optimal solutions to problem (3).
Theorem 5.
Let the vectors $x_{(0)}^*$ and $\underline{x}^{(j)}$ be as in (14). Define the index set of maximizers
$$I_{(j)}^{\#} = \{i \in I \mid x_j^{(i)} = \max_{i' \in I}\{x_j^{(i')}\}\}. \qquad (16)$$
If $b_i - \tilde{b}_i > 0$ holds for some $i \in I$, then for $i \in I_{(j)}^{\#}$ it follows that
$$O(a_{ij}, \underline{x}_j^*) = b_i - \tilde{b}_i \quad \text{and} \quad g_i(\underline{x}^{(j)}) = b_i.$$
Proof. 
Since $b_i - \tilde{b}_i > 0$ holds for some $i \in I$, the index set $I_{(j)}^{\#}$ in (16) is non-empty. By the definition of $\underline{x}_j^*$, we have
$$\underline{x}_j^* = \max_{i' \in I}\{x_j^{(i')}\} = x_j^{(i)}, \quad i \in I_{(j)}^{\#}.$$
Moreover, from Theorem 4, we know that $\underline{x}^{(j)} \in X(A,b)$, i.e., it is a feasible solution to problem (3). Therefore, for all $i \in I$, it holds that
$$g_i(\underline{x}^{(j)}) = O(a_{ij}, \underline{x}_j^*) + \tilde{b}_i \geq b_i,$$
which implies
$$O(a_{ij}, \underline{x}_j^*) \geq b_i - \tilde{b}_i, \quad \text{for all } i \in I.$$
Now, using (16), for $i \in I_{(j)}^{\#}$, we have
$$O(a_{ij}, \underline{x}_j^*) = O(a_{ij}, \max_{i' \in I}\{x_j^{(i')}\}) = O(a_{ij}, x_j^{(i)}) = b_i - \tilde{b}_i.$$
Therefore,
$$g_i(\underline{x}^{(j)}) = O(a_{ij}, \underline{x}_j^*) + \tilde{b}_i = (b_i - \tilde{b}_i) + \tilde{b}_i = b_i, \quad i \in I_{(j)}^{\#}. \qquad \Box$$

4.3. Iterative Approach for Finding the Minimal-Optimal Solution

Based on the optimal value $y^*$ and the uniform-optimal solution $x_{(0)}^* = (x_j^*)_{j\in J}$, where $x_j^* = y^*$ for all $j \in J$, together with the properties established in the previous sections, we propose the following iterative approach to find a minimal-optimal solution to problem (3).
  • The iterative approach:
  • Step 1: Perform the following process for iterations $k = 1, \ldots, n$:
At the $k$-th iteration, compute the vector
$$x_{(k)}^* = (\underline{x}_1^*, \underline{x}_2^*, \ldots, \underline{x}_k^*, x_{k+1}^{(0)}, \ldots, x_n^{(0)}),$$
where
$$x_{k+1}^{(0)} = \cdots = x_n^{(0)} = y^*, \quad \text{and} \quad \underline{x}_k^* = \max_{i \in I}\{x_k^{(i)}\}.$$
For each component, $x_k^{(i)}$ is defined as
$$x_k^{(i)} = \begin{cases} \min\{s_{ik} \mid O(a_{ik}, s_{ik}) = b_i - \tilde{b}_i^{(k)}\}, & \text{if } b_i - \tilde{b}_i^{(k)} > 0, \\ 0, & \text{if } b_i - \tilde{b}_i^{(k)} \leq 0, \end{cases} \qquad (17)$$
and
$$\tilde{b}_i^{(k)} = g_i(x_{(k-1)}^*) - O(a_{ik}, y^*). \qquad (18)$$
  • Step 2: After the $n$-th iteration, the vector
$$x_{(n)}^* = (\underline{x}_1^*, \underline{x}_2^*, \ldots, \underline{x}_n^*)$$
is produced as the output. This vector is the candidate minimal-optimal solution.
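A compact Python sketch of this iterative approach is given below, reusing overlap_min_shifted and inverse_overlap from the earlier sketches. For brevity it omits the binding-variable shortcut of Lemma 4: for a binding index k, the maximum in (17) simply reproduces $y^*$ (up to floating-point error), so the output is unchanged.

```python
def minimal_optimal(A, b, y_star: float, tol: float = 1e-12):
    """Iterative approach of Section 4.3: starting from the uniform-optimal
    solution (y*, ..., y*), lower each component in turn to the smallest
    value that keeps every constraint of problem (3) satisfied."""
    n = len(A[0])
    x = [y_star] * n                       # x_(0)*
    for k in range(n):                     # the k-th iteration updates x[k]
        candidates = []
        for row, b_i in zip(A, b):
            g_i = sum(overlap_min_shifted(a, xj) for a, xj in zip(row, x))
            b_tilde = g_i - overlap_min_shifted(row[k], x[k])    # Eq. (18)
            residual = b_i - b_tilde
            if residual > tol:                                   # Eq. (17)
                # residual <= a_ik by feasibility, so a root always exists
                candidates.append(inverse_overlap(row[k], residual))
            else:
                candidates.append(0.0)
        x[k] = max(candidates)             # the k-th component of x_(k)*
    return x
```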
To prove that the vector $x_{(n)}^*$ obtained through this iterative procedure is indeed a minimal-optimal solution to problem (3), we first establish the following properties.
Theorem 6.
Let the vector $x_{(k)}^* = (\underline{x}_1^*, \underline{x}_2^*, \ldots, \underline{x}_k^*, x_{k+1}^{(0)}, \ldots, x_n^{(0)})$, for $k = 1, 2, \ldots, n$, be obtained by the iterative approach described above. Then, the following properties hold:
(i). For every $k \in J$, the vector $x_{(k)}^*$ is a feasible solution to problem (3).
(ii). Each component satisfies $\underline{x}_k^* \leq x_k^{(0)} = y^*$ for all $k \in J$.
(iii). For every $k \in J$, the vector $x_{(k)}^*$ is an optimal solution to problem (3).
(iv). The inequality $g_i(x_{(k-1)}^*) - g_i(x_{(k)}^*) \geq 0$ holds for all $i \in I$ and $k = 1, 2, \ldots, n$.
Proof. 
(i). According to Theorem 4 and the definition
$$\underline{x}^{(j)} = (y^*, \ldots, y^*, \underline{x}_j^*, y^*, \ldots, y^*), \quad j \in J,$$
we know that the vector
$$x_{(1)}^* = \underline{x}^{(1)} = (\underline{x}_1^*, x_2^{(0)}, \ldots, x_n^{(0)}), \quad \text{where } \underline{x}_1^* = \max_{i \in I}\{x_1^{(i)}\},$$
is an optimal solution of problem (3). Therefore, for $k = 1$, we have
$$g_i(x_{(1)}^*) = O(a_{i1}, \underline{x}_1^*) + \sum_{j=2}^{n} O(a_{ij}, x_j^{(0)}) = O(a_{i1}, \underline{x}_1^*) + \tilde{b}_i^{(1)} \geq b_i, \quad i \in I.$$
For $k = 2$, consider the vector
$$x_{(2)}^* = (\underline{x}_1^*, \underline{x}_2^*, x_3^{(0)}, \ldots, x_n^{(0)}),$$
and observe that
$$g_i(x_{(2)}^*) = O(a_{i1}, \underline{x}_1^*) + O(a_{i2}, \underline{x}_2^*) + \sum_{j=3}^{n} O(a_{ij}, x_j^{(0)}) = g_i(x_{(1)}^*) + O(a_{i2}, \underline{x}_2^*) - O(a_{i2}, x_2^{(0)}). \qquad (19)$$
We consider two cases to verify the feasibility of $x_{(2)}^*$.
  • Case 1: If $b_i - \tilde{b}_i^{(2)} \leq 0$ for all $i \in I$, i.e., $\tilde{b}_i^{(2)} \geq b_i$, then $x_2^{(i)} = 0$ and $\underline{x}_2^* = 0$. Substituting into (19), we get
$$g_i(x_{(2)}^*) = g_i(x_{(1)}^*) + 0 - O(a_{i2}, y^*) = \tilde{b}_i^{(2)} \geq b_i.$$
  • Case 2: If $b_i - \tilde{b}_i^{(2)} > 0$ for some $i \in I$, then by construction, $x_2^{(i)} = \min\{s_{i2} \mid O(a_{i2}, s_{i2}) = b_i - \tilde{b}_i^{(2)}\}$ and $\underline{x}_2^* = \max_{i \in I}\{x_2^{(i)}\}$, which implies
$$\begin{aligned} g_i(x_{(2)}^*) &= g_i(x_{(1)}^*) + O(a_{i2}, \underline{x}_2^*) - O(a_{i2}, x_2^{(0)}) \geq g_i(x_{(1)}^*) + O(a_{i2}, x_2^{(i)}) - O(a_{i2}, y^*) \\ &= g_i(x_{(1)}^*) + [b_i - \tilde{b}_i^{(2)}] - O(a_{i2}, y^*) = g_i(x_{(1)}^*) + [b_i - (g_i(x_{(1)}^*) - O(a_{i2}, y^*))] - O(a_{i2}, y^*) = b_i. \end{aligned}$$
  • Combining both cases, we conclude that $x_{(2)}^*$ is feasible for problem (3), i.e.,
$$g_i(x_{(2)}^*) = O(a_{i1}, \underline{x}_1^*) + O(a_{i2}, \underline{x}_2^*) + \sum_{j=3}^{n} O(a_{ij}, x_j^{(0)}) \geq b_i, \quad i \in I.$$
The argument extends similarly to $k = 3, \ldots, n$; in particular,
$$g_i(x_{(k)}^*) = \sum_{j=1}^{k} O(a_{ij}, \underline{x}_j^*) + \sum_{j=k+1}^{n} O(a_{ij}, x_j^{(0)}) \geq b_i, \quad i \in I.$$
For $k = n$,
$$g_i(x_{(n)}^*) = \sum_{j=1}^{n} O(a_{ij}, \underline{x}_j^*) \geq b_i, \quad i \in I.$$
(ii). Since both $x_{(0)}^*$ and $x_{(k)}^* \in X(A,b)$ for all $k \in J$, part (i) gives
$$g_i(x_{(k-1)}^*) \geq b_i, \quad i \in I.$$
Moreover, according to (17) and (18), we have
$$x_k^{(0)} = y^*, \quad \underline{x}_k^* = \max_{i \in I}\{x_k^{(i)}\}, \quad \text{and} \quad \tilde{b}_i^{(k)} = g_i(x_{(k-1)}^*) - O(a_{ik}, x_k^{(0)}).$$
  • If $b_i - \tilde{b}_i^{(k)} \leq 0$ for all $i \in I$, then $x_k^{(i)} = 0$ and $\underline{x}_k^* = 0$, so $\underline{x}_k^* = 0 \leq x_k^{(0)} = y^*$.
  • If $b_i - \tilde{b}_i^{(k)} > 0$ for some $i \in I$, then
$$g_i(x_{(k-1)}^*) = \tilde{b}_i^{(k)} + O(a_{ik}, x_k^{(0)}) \geq b_i \implies O(a_{ik}, x_k^{(0)}) \geq b_i - \tilde{b}_i^{(k)} = O(a_{ik}, x_k^{(i)}),$$
where $x_k^{(i)}$ is defined as the smallest value such that $O(a_{ik}, x_k^{(i)}) = b_i - \tilde{b}_i^{(k)}$. Since $O(a_{ik}, x)$ is non-decreasing in $x$, it follows that $x_k^{(0)} \geq x_k^{(i)}$. Thus,
$$x_k^{(0)} \geq \underline{x}_k^* = \max_{i \in I}\{x_k^{(i)}\}.$$
Thus, $\underline{x}_k^* \leq x_k^{(0)} = y^*$ holds for all $k \in J$.
(iii). By (i) and (ii) above, for all $k \in J$, we have $x_{(k)}^* \in X(A,b)$ and $\underline{x}_k^* \leq x_k^{(0)} = y^*$.
For $k = 1, 2, \ldots, n-1$, we have
$$x_{(k)}^* = (\underline{x}_1^*, \underline{x}_2^*, \ldots, \underline{x}_k^*, x_{k+1}^{(0)}, \ldots, x_n^{(0)}) = (\underline{x}_1^*, \underline{x}_2^*, \ldots, \underline{x}_k^*, y^*, \ldots, y^*).$$
Thus, $Z(x_{(k)}^*) = \max\{\underline{x}_1^*, \underline{x}_2^*, \ldots, \underline{x}_k^*, y^*, \ldots, y^*\} = y^*$ for all $k < n$.
Now, assume $\underline{x}_k^* < y^*$ for all $k \in J$. Then
$$Z(x_{(n)}^*) = \max\{\underline{x}_1^*, \underline{x}_2^*, \ldots, \underline{x}_n^*\} < y^*,$$
which contradicts the fact that $y^*$ is the optimal value of problem (3). Therefore, $Z(x_{(n)}^*) = y^*$, and all $x_{(k)}^*$ are optimal.
(iv). From (ii), we have $x_k^{(0)} \geq \underline{x}_k^*$; hence,
$$O(a_{ik}, x_k^{(0)}) \geq O(a_{ik}, \underline{x}_k^*), \quad i \in I.$$
It follows that
$$g_i(x_{(k-1)}^*) - g_i(x_{(k)}^*) = O(a_{ik}, x_k^{(0)}) - O(a_{ik}, \underline{x}_k^*) \geq 0, \quad i \in I, \ k = 1, 2, \ldots, n. \qquad \Box$$
Theorem 6 shows that, for all $k \in J$, the vector $x_{(k)}^*$ produced at each step of the iterative procedure remains an optimal solution to problem (3). In particular, this implies that the final vector $x_{(n)}^*$, computed at the last iteration, is also optimal. Furthermore, by Definition 4 and the property of Theorem 6:(ii), namely $\underline{x}_k^* \leq x_k^{(0)} = y^*$ for all $k \in J$, the sequence of iterates satisfies the component-wise inequality
$$x_{(n)}^* \leq x_{(n-1)}^* \leq \cdots \leq x_{(1)}^* \leq x_{(0)}^*.$$
These properties are crucial in establishing that the final output $x_{(n)}^*$ is a minimal-optimal solution of problem (3).
Theorem 7.
Let the vector $x_{(k)}^*$, for $k \in J$, be obtained from the iterative approach, and let $I(x^*) = \{i \in I \mid g_i(x^*) = b_i\}$ denote the critical set. Define the index set of maximizers
$$I_{(k)}^* = \{i \in I \mid x_k^{(i)} = \max_{i' \in I}\{x_k^{(i')}\}\}, \quad k \in J. \qquad (20)$$
If $b_i - \tilde{b}_i^{(k)} > 0$ for some $i \in I$ and $k \in J$, then
(i). $I(x_{(0)}^*) \subseteq I(x_{(1)}^*)$ and $I(x_{(k)}^*) \subseteq I(x_{(k+1)}^*)$ for all $k \in J \setminus \{n\}$;
(ii). $g_i(x_{(k)}^*) = b_i$ for all $i \in I_{(k)}^*$.
Proof. 
(i). By Lemma 3, $I(x_{(0)}^*) = \{i \in I \mid g_i(x_{(0)}^*) = b_i\} \neq \emptyset$. Since $b_i - \tilde{b}_i^{(1)} = b_i - \tilde{b}_i > 0$ for some $i \in I$, Theorem 5 provides that $g_i(x_{(1)}^*) = b_i$ for $i \in I_{(1)}^* = I_{(1)}^{\#}$.
Assume by contradiction that there exists $i \in I(x_{(0)}^*)$ such that $i \notin I(x_{(1)}^*)$. Then,
$$g_i(x_{(0)}^*) = b_i, \quad \text{but} \quad g_i(x_{(1)}^*) > b_i,$$
which implies $g_i(x_{(0)}^*) - g_i(x_{(1)}^*) < 0$, contradicting Theorem 6:(iv), which states that $g_i(x_{(0)}^*) - g_i(x_{(1)}^*) \geq 0$ for all $i \in I$. Hence, we have
$$I(x_{(0)}^*) \subseteq I(x_{(1)}^*).$$
The same argument applies inductively to show
$$I(x_{(k)}^*) \subseteq I(x_{(k+1)}^*), \quad \text{for all } k \in J \setminus \{n\}.$$
(ii). Since $b_i - \tilde{b}_i^{(k)} > 0$ for some $i \in I$, the index set $I_{(k)}^*$ in (20) is non-empty. By the definition of $\underline{x}_k^*$, we have
$$\underline{x}_k^* = \max_{i' \in I}\{x_k^{(i')}\} = x_k^{(i)}, \quad i \in I_{(k)}^*.$$
From Theorem 6:(i), we know that $x_{(k)}^* \in X(A,b)$. It follows that
$$g_i(x_{(k)}^*) = O(a_{ik}, \underline{x}_k^*) + \tilde{b}_i^{(k)} \geq b_i,$$
which implies
$$O(a_{ik}, \underline{x}_k^*) \geq b_i - \tilde{b}_i^{(k)}, \quad \text{for all } i \in I.$$
Now, using (20), for $i \in I_{(k)}^*$, we have
$$O(a_{ik}, \underline{x}_k^*) = O(a_{ik}, \max_{i' \in I}\{x_k^{(i')}\}) = O(a_{ik}, x_k^{(i)}) = b_i - \tilde{b}_i^{(k)},$$
so that
$$g_i(x_{(k)}^*) = O(a_{ik}, \underline{x}_k^*) + \tilde{b}_i^{(k)} = (b_i - \tilde{b}_i^{(k)}) + \tilde{b}_i^{(k)} = b_i, \quad i \in I_{(k)}^*. \qquad \Box$$
Theorem 7 identifies that for every $i \in I_{(k)}^*$, the function value satisfies $g_i(x_{(k)}^*) = b_i$, and the corresponding component $\underline{x}_k^*$ satisfies $O(a_{ik}, \underline{x}_k^*) = b_i - \tilde{b}_i^{(k)}$. This confirms that such constraints are tightly satisfied by $x_{(k)}^*$ and, importantly, that any further decrease in the component $\underline{x}_k^*$ would violate feasibility for some critical constraint. Hence, Theorem 7 provides the structural basis and contradiction mechanism required to prove the minimal-optimality of $x_{(n)}^*$.
Theorem 8.
The vector $x_{(n)}^* = (\underline{x}_1^*, \underline{x}_2^*, \ldots, \underline{x}_n^*)$ obtained from the iterative approach is a minimal-optimal solution to problem (3).
Proof. 
By Theorem 6:(iii), $x_{(n)}^*$ is an optimal solution to problem (3), i.e.,
$$g_i(x_{(n)}^*) = O(a_{i1}, \underline{x}_1^*) + \cdots + O(a_{in}, \underline{x}_n^*) \geq b_i, \quad i \in I.$$
Suppose, for contradiction, that there exists another optimal solution $\hat{x} = (\hat{x}_j)_{j\in J} \in X(A,b)$ such that $\hat{x} \leq x_{(n)}^*$ and $\hat{x} \neq x_{(n)}^*$, where $\hat{x}_j = \underline{x}_j^*$ for $j \in J$, $j \neq k$, and $\hat{x}_k < \underline{x}_k^*$.
From Theorem 7, we know
$$I(x_{(k)}^*) \subseteq I(x_{(n)}^*), \quad \text{and} \quad g_i(x_{(k)}^*) = b_i, \quad \text{for } i \in I_{(k)}^*.$$
Hence,
$$g_i(x_{(k)}^*) = \sum_{j=1}^{k-1} O(a_{ij}, \underline{x}_j^*) + O(a_{ik}, \underline{x}_k^*) + \sum_{j=k+1}^{n} O(a_{ij}, x_j^{(0)}) = b_i, \quad i \in I_{(k)}^*,$$
and
$$g_i(x_{(n)}^*) = \sum_{j=1}^{k-1} O(a_{ij}, \underline{x}_j^*) + O(a_{ik}, \underline{x}_k^*) + \sum_{j=k+1}^{n} O(a_{ij}, \underline{x}_j^*) = b_i, \quad i \in I_{(k)}^*.$$
Furthermore, from the definition of $x_k^{(i)}$ in (17), we have
$$\underline{x}_k^* = x_k^{(i)} = \min\{s_{ik} \mid O(a_{ik}, s_{ik}) = b_i - \tilde{b}_i^{(k)}\}, \quad \text{for } i \in I_{(k)}^*.$$
Since $\hat{x}_k < \underline{x}_k^*$ and $\underline{x}_k^*$ is the smallest value satisfying $O(a_{ik}, s) = b_i - \tilde{b}_i^{(k)}$ (with $O(a_{ik}, x)$ non-decreasing, and strictly increasing off the flat segment by Theorems 1 and 2), we have
$$O(a_{ik}, \hat{x}_k) < O(a_{ik}, \underline{x}_k^*),$$
which implies
$$g_i(\hat{x}) = \sum_{j=1}^{k-1} O(a_{ij}, \underline{x}_j^*) + O(a_{ik}, \hat{x}_k) + \sum_{j=k+1}^{n} O(a_{ij}, \underline{x}_j^*) < \sum_{j=1}^{k-1} O(a_{ij}, \underline{x}_j^*) + O(a_{ik}, \underline{x}_k^*) + \sum_{j=k+1}^{n} O(a_{ij}, \underline{x}_j^*) = g_i(x_{(n)}^*) = b_i, \quad i \in I_{(k)}^*.$$
This contradicts the assumption that $\hat{x} \in X(A,b)$. Therefore, no such $\hat{x}$ exists, and $x_{(n)}^*$ is indeed a minimal-optimal solution to problem (3). □

5. Solution Procedure and a Numerical Example

Based on Theorem 8, starting from the uniform-optimal solution $x_{(0)}^* = (x_j^{(0)})_{j\in J}$ with $x_j^{(0)} = y^*$ for all $j \in J$, the proposed iterative approach can be used to obtain a minimal-optimal solution $x_{(n)}^*$ to problem (3). Furthermore, according to Lemma 4, for the initial optimal solution $x_{(0)}^*$, the binding variables $x_j = x_j^{(0)} = y^*$, for all $j \in J_i(x_{(0)}^*)$ and $i \in I(x_{(0)}^*)$, must remain fixed and cannot be reduced. In contrast, the remaining variables $x_j$, for $j \in J \setminus J_i(x_{(0)}^*)$ and $i \in I(x_{(0)}^*)$, may potentially be reduced.
By incorporating these properties, we propose a systematic solution procedure for identifying a minimal-optimal solution to the min-max programming problem involving addition-overlap function composition.
Step 1. Check the consistency of problem (3) by verifying whether $\sum_{j=1}^{n} a_{ij} \geq b_i$ holds for all $i \in I$, according to Proposition 3. Terminate the procedure if the problem is inconsistent.
Step 2. Transform problem (3) into the single-variable optimization problem (10), and solve it using the method proposed in [30] or the bisection method to obtain the optimal value $y^*$. The corresponding uniform-optimal solution is $x^* = (x_j^*)_{j\in J} = (y^*, \ldots, y^*)$.
Step 3. Determine the critical set $I(x^*) = \{i \in I \mid g_i(x^*) = b_i\}$ and, for each $i \in I(x^*)$, compute the corresponding binding set $J_i(x^*) = J_i^{*1} \cup J_i^{*2} \cup J_i^{*3}$ according to Definition 5.
Step 4. Iteratively construct the sequence of vectors $x_{(k)}^* = (\underline{x}_1^*, \underline{x}_2^*, \ldots, \underline{x}_k^*, x_{k+1}^{(0)}, \ldots, x_n^{(0)})$ for $k = 1, 2, \ldots, n$, where $x_j^{(0)} = y^*$ for $j = k+1, \ldots, n$.
  • If $k \in J_i(x^*)$ for any $i \in I(x^*)$, set $\underline{x}_k^* = y^*$ by Lemma 4.
  • Otherwise, compute
$$\underline{x}_k^* = \max_{i \in I}\{x_k^{(i)}\},$$
where
$$x_k^{(i)} = \begin{cases} \min\{s_{ik} \mid O(a_{ik}, s_{ik}) = b_i - \tilde{b}_i^{(k)}\}, & \text{if } b_i - \tilde{b}_i^{(k)} > 0, \\ 0, & \text{if } b_i - \tilde{b}_i^{(k)} \leq 0, \end{cases}$$
and
$$\tilde{b}_i^{(k)} = g_i(x_{(k-1)}^*) - O(a_{ik}, y^*).$$
Step 5. Return the final vector $\underline{x}^{**} = x_{(n)}^* = (\underline{x}_1^*, \underline{x}_2^*, \ldots, \underline{x}_n^*)$, which is a minimal-optimal solution to problem (3).
  • Computational Complexity and Convergence: Problem (3) involves $m$ constraints, each containing $n$ variables. The overall computational complexity and convergence behavior of the proposed solution procedure are primarily determined by Steps 2 and 4.
In Step 2, the optimal value $y^*$ is obtained by solving a single-variable optimization problem using the method proposed in [30], which requires $O(mn)$ operations. The convergence of this method has been formally established in [30].
In Step 4, each of the $n$ variables is processed once. For each variable, evaluating all constraints involves up to $(8n + 13)m + (m - 1)$ operations. Thus, the total computational complexity of Step 4 is $O(mn^2)$.
Overall, the total computational complexity of the entire solution procedure is $O(mn^2)$.
Moreover, since each variable is examined exactly once, and each update either retains or reduces its value while preserving feasibility and optimality, the algorithm completes in exactly n iterations. Finally, by Theorems 6 and 8, the final solution is guaranteed to be both optimal and minimal in the sense of component-wise ordering among feasible solutions. Therefore, the proposed algorithm is guaranteed to converge to a minimal-optimal solution in a finite number of steps.
The proposed procedure provides a systematic method for constructing a minimal-optimal solution to problem (3). To demonstrate its effectiveness and practical applicability, we now apply it step-by-step to the specific instance described in Example 1.
Example 3.
Apply the proposed solution procedure to determine a minimal-optimal solution to Example 1.
Based on the data given in Example 1, we carry out the solution procedure step-by-step as follows:
  • Step 1. Check the consistency of the problem by Proposition 3:
$$\sum_{j=1}^{6} a_{1j} = 2.52 > 1.8 = b_1, \quad \sum_{j=1}^{6} a_{2j} = 2.98 > 0.6 = b_2,$$
$$\sum_{j=1}^{6} a_{3j} = 2.77 > 2.0 = b_3, \quad \sum_{j=1}^{6} a_{4j} = 2.93 > 1.8 = b_4.$$
Since the condition $\sum_{j=1}^{6} a_{ij} \geq b_i$ holds for all $i \in I = \{1, 2, 3, 4\}$, problem (11) in Example 1 is consistent.
  • Step 2. Solve for the optimal value $y^*$, yielding the uniform-optimal solution $x^*$.
From Example 1, we have
$$y^* = 0.643875, \quad \text{and} \quad x^* = (x_1^*, \ldots, x_6^*) = (0.643875, \ldots, 0.643875).$$
  • Step 3. Determine the critical set $I(x^*)$ and the binding set $J_i(x^*)$ for each $i \in I(x^*)$, as defined in Definition 5.
From Example 2, we have
$$I(x^*) = \{3\}, \quad \text{and} \quad J_3(x^*) = \{2, 5, 6\}.$$
  • Step 4. Iteratively construct the sequence of vectors x ( k ) * = ( x ̲ 1 * , , x ̲ k * , x k + 1 ( 0 ) , , x n ( 0 ) ) for k = 1 , 2 , , 6 , where x j ( 0 ) = y * for j = k + 1 , , 6 .
  • For k = 1 , since 1 J 3 ( x * ) = { 2 , 5 , 6 } , compute
x ̲ 1 * = max i I { x 1 ( i ) } ,
where
x 1 ( i ) = min { s i 1 | O ( a i 1 , s i 1 ) = b i b i ( 1 ) } , if b i b i ( 1 ) > 0 , 0 , if b i b i ( 1 ) 0 ,
with
b i ( 1 ) = g i ( x ( 0 ) * ) O ( a i 1 , y * ) , for i I = { 1 , 2 , 3 , 4 } .
Calculations then yield
b 1 ( 1 ) = g 1 ( x ( 0 ) * ) O ( a 11 , y * ) = 1.86896 0.25926 = 1.60970 , b 2 ( 1 ) = g 2 ( x ( 0 ) * ) O ( a 21 , y * ) = 2.15918 0.16667 = 1.99251 , b 3 ( 1 ) = g 3 ( x ( 0 ) * ) O ( a 31 , y * ) = 2.00000 0.20000 = 1.80000 , b 4 ( 1 ) = g 4 ( x ( 0 ) * ) O ( a 41 , y * ) = 2.10643 0.20635 = 1.90008 ,
b 1 b 1 ( 1 ) = 1.8 1.60970 = 0.19030 , b 2 b 2 ( 1 ) = 0.6 1.99251 = 1.39251 , b 3 b 3 ( 1 ) = 2.0 1.80000 = 0.20000 , b 4 b 4 ( 1 ) = 1.8 1.90008 = 0.10008 .
Using the inverse function method specified in (8) and (9), we compute
$x_1(1) = \min\{s_{11} \mid O(0.35, s_{11}) = 0.19030\} = 0.23503$, $\quad x_1(2) = 0$,
$x_1(3) = \min\{s_{31} \mid O(0.25, s_{31}) = 0.20000\} = 0.25$, $\quad x_1(4) = 0$.
Obtain
$\underline{x}_1^{*} = \max_{i \in I} \{x_1(i)\} = \max\{0.23503, 0, 0.25, 0\} = 0.25$,
and
$x_{(1)}^{*} = (0.25, 0.643875, 0.643875, 0.643875, 0.643875, 0.643875)$.
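As a sanity check of the generic bisection step sketched earlier, one can run `inverse_overlap` on the product overlap $O(a, x) = ax$, a standard overlap function used here purely as a stand-in (the paper's min-shifted $O$ is inverted in closed form via (8) and (9)):

```python
# Stand-in demonstration only: product overlap instead of the min-shifted O.
O_prod = lambda a, x: a * x
print(round(inverse_overlap(O_prod, 0.25, 0.2), 6))   # 0.8, since 0.25 * 0.8 = 0.2
```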
  • For $k = 2$, since $2 \in J_3(x^{*}) = \{2, 5, 6\}$, set
$\underline{x}_2^{*} = y^{*} = 0.643875$.
Obtain the vector
$x_{(2)}^{*} = (0.25, 0.643875, 0.643875, 0.643875, 0.643875, 0.643875)$.
  • For $k = 3$, since $3 \notin J_3(x^{*}) = \{2, 5, 6\}$, compute
$\underline{x}_3^{*} = \max_{i \in I} \{x_3(i)\}$,
$b_i(3) = g_i(x_{(2)}^{*}) - O(a_{i3}, y^{*})$, and $b_i - b_i(3)$, for $i \in I = \{1, 2, 3, 4\}$.
Calculations then yield
$b_1 - b_1(3) = 1.8 - 1.57893 = 0.22107$, $\quad b_2 - b_2(3) = 0.6 - 1.67139 = -1.07139$,
$b_3 - b_3(3) = 2.0 - 1.75758 = 0.24242$, $\quad b_4 - b_4(3) = 1.8 - 1.76825 = 0.03175$,
and
$x_3(1) = 0.28381$, $\quad x_3(2) = 0$, $\quad x_3(3) = 0.32$, $\quad x_3(4) = 0.03279$.
Obtain
$\underline{x}_3^{*} = \max_{i \in I} \{x_3(i)\} = \max\{0.28381, 0, 0.32, 0.03279\} = 0.32$,
and
$x_{(3)}^{*} = (0.25, 0.643875, 0.32, 0.643875, 0.643875, 0.643875)$.
  • For $k = 4$, since $4 \notin J_3(x^{*}) = \{2, 5, 6\}$, compute
$\underline{x}_4^{*} = \max_{i \in I} \{x_4(i)\}$,
$b_i(4) = g_i(x_{(3)}^{*}) - O(a_{i4}, y^{*})$, and $b_i - b_i(4)$, for $i \in I = \{1, 2, 3, 4\}$.
Calculations then yield
$b_1 - b_1(4) = 1.8 - 1.51474 = 0.28526$, $\quad b_2 - b_2(4) = 0.6 - 1.61886 = -1.01886$,
$b_3 - b_3(4) = 2.0 - 1.76923 = 0.23077$, $\quad b_4 - b_4(4) = 1.8 - 1.56824 = 0.23176$,
and
$x_4(1) = 0.39911$, $\quad x_4(2) = 0$, $\quad x_4(3) = 0.3$, $\quad x_4(4) = 0.30168$.
Obtain
$\underline{x}_4^{*} = \max_{i \in I} \{x_4(i)\} = \max\{0.39911, 0, 0.3, 0.30168\} = 0.39911$,
and
$x_{(4)}^{*} = (0.25, 0.643875, 0.32, 0.39911, 0.643875, 0.643875)$.
  • For $k = 5$ and $k = 6$, since both $5, 6 \in J_3(x^{*}) = \{2, 5, 6\}$, set
$\underline{x}_5^{*} = \underline{x}_6^{*} = y^{*} = 0.643875$,
and obtain the final vector
$x_{(6)}^{*} = (0.25, 0.643875, 0.32, 0.39911, 0.643875, 0.643875)$.
  • Step 5. Return the final vector
$\underline{x}^{**} = x_{(6)}^{*} = (0.25, 0.643875, 0.32, 0.39911, 0.643875, 0.643875)$,
which is a minimal-optimal solution to Example 1.
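Given an implementation of the min-shifted overlap function (not restated here), the returned vector can be double-checked directly against problem (3); a minimal, hedged sketch:

```python
# Hedged check; `O` stands for an implementation of the paper's overlap function.
def verify(x, A, b, O, y_star, eps=1e-6):
    """Feasibility: g_i(x) >= b_i for all i; optimality: max_j x_j = y*."""
    n = len(x)
    feasible = all(sum(O(A[i][j], x[j]) for j in range(n)) >= b[i] - eps
                   for i in range(len(A)))
    return feasible and abs(max(x) - y_star) < eps
```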
It is worth noting that the proposed solution procedure can be applied to obtain other minimal-optimal solutions. In principle, the six variables in Example 3 can be processed in $6! = 720$ different sequences. However, since the critical set is $I(x^{*}) = \{3\}$ with binding set $J_3(x^{*}) = \{2, 5, 6\}$, the values $\underline{x}_2^{*} = \underline{x}_5^{*} = \underline{x}_6^{*} = y^{*} = 0.643875$ must remain unchanged, in accordance with Lemma 4. As a result, only the remaining three variables $x_1$, $x_3$, and $x_4$ can be adjusted, yielding $3! = 6$ possible permutations. Besides the original sequence $x_1, x_2, \ldots, x_6$ used in Example 3, the following five sequences can be considered to generate other minimal-optimal solutions:
Seq 1: $x_1, x_2, x_4, x_3, x_5, x_6$; $\quad$ Seq 2: $x_3, x_2, x_1, x_4, x_5, x_6$; $\quad$ Seq 3: $x_3, x_2, x_4, x_1, x_5, x_6$;
Seq 4: $x_4, x_2, x_1, x_3, x_5, x_6$; and Seq 5: $x_4, x_2, x_3, x_1, x_5, x_6$.
Accordingly, these five sequences yield only one further distinct minimal-optimal solution to Example 3 under our solution procedure, denoted $\underline{x}^{**2}$:
$\underline{x}^{**2} = (0.341365, 0.643875, 0.32, 0.3, 0.643875, 0.643875)$.
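Enumerating the candidate processing sequences is straightforward; a short sketch (0-based indices, assuming the `minimal_optimal` routine above is generalized to accept a processing order; binding variables may be placed anywhere in the order since they are never adjusted):

```python
from itertools import permutations

free = [0, 2, 3]      # x1, x3, x4: the only adjustable variables
fixed = [1, 4, 5]     # x2, x5, x6 remain at y* = 0.643875 (Lemma 4)
orders = [list(p) + fixed for p in permutations(free)]
print(len(orders))    # 6 candidate sequences, matching 3! = 6 above
```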

6. Conclusions

In this study, we investigated a min-max optimization problem constrained by a system of addition-overlap functions. Classical methods such as the bisection approach are effective in finding a uniform-optimal solution for such a problem, where all decision variables share the same optimal value. However, these methods are insufficient for identifying minimal-optimal solutions, which are often more informative in practical applications.
To address this gap, we developed an iterative procedure that gradually reduces adjustable variables based on overlap function properties while maintaining feasibility and optimality. This approach relies on a careful identification of critical constraints and binding variables, which must remain fixed, allowing non-binding components to be selectively minimized. We formally proved that the resulting solution satisfies the condition of minimal-optimality. Through illustrative examples, we further demonstrated that the proposed method can yield multiple distinct minimal-optimal solutions, depending on the order in which variables are processed.
Our results contribute both theoretically and algorithmically: the framework deepens the understanding of solution structure in addition-overlap constrained problems and offers a practical tool for constructing minimal-optimal solutions. Future work may pursue sufficient conditions, numerical subroutines, and error analyses for broader classes of overlap functions, as well as applications in practical optimization scenarios.

Author Contributions

Conceptualization, S.-M.G.; Methodology, Y.-K.W. and S.-M.G.; Supervision, S.-M.G.; Validation, Y.-C.C.; Writing—original draft, Y.-K.W.; Writing—review and editing, Y.-K.W. and Y.-C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by MOST 106-2221-E-182-038-MY2 and BMRPD17.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Sy-Ming Guu was employed by the Graduate Institute of Business and Management, College of Management, Chang Gung University and Department of Neurology, Chang Gung Memorial Hospital LinKou. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The authors declare that this study received funding from the National Science and Technology Council, Taiwan (Grant No. MOST 106-2221-E-182-038-MY2) and from Chang Gung Memorial Hospital, Linkou (Grant No. BMRPD17). The funders had no role in the study design, data collection and analysis, interpretation of data, writing of the manuscript, or the decision to submit it for publication.

References

1. Beliakov, G.; Pradera, A.; Calvo, T. Aggregation Functions: A Guide for Practitioners; Springer: Berlin/Heidelberg, Germany, 2007.
2. Di Nola, A.; Pedrycz, W.; Sessa, S.; Sanchez, E. Fuzzy relation equations theory as a basis of fuzzy modelling: An overview. Fuzzy Sets Syst. 1991, 40, 415–429.
3. Pedrycz, W. On generalized fuzzy relational equations and their applications. J. Math. Anal. Appl. 1985, 107, 520–536.
4. Gupta, M.M.; Qi, J. Design of fuzzy logic controllers based on generalized T-operators. Fuzzy Sets Syst. 1991, 40, 473–489.
5. Gupta, M.M.; Qi, J. Theory of t-norms and fuzzy inference. Fuzzy Sets Syst. 1991, 40, 431–450.
6. Fodor, J.; Roubens, M. Fuzzy Preference Modelling and Multicriteria Decision Support; Theory and Decision Library; Springer: New York, NY, USA, 1994.
7. Bustince, H.; Fernandez, J.; Mesiar, R.; Montero, J.; Orduna, R. Overlap functions. Nonlinear Anal. Theory Methods Appl. 2010, 72, 1488–1499.
8. Dimuro, G.P.; Bedregal, B.; Bustince, H.; Asiain, M.J.; Mesiar, R. On additive generators of overlap functions. Fuzzy Sets Syst. 2016, 287, 76–96.
9. Bustince, H.; Barrenechea, E.; Pagola, M. Image thresholding using restricted equivalent functions and maximizing the measures of similarity. Fuzzy Sets Syst. 2007, 158, 496–516.
10. Wang, J.J.; Li, X.N. An overlap function-based three-way intelligent decision model under interval-valued fuzzy information systems. Expert Syst. Appl. 2024, 238, 122036.
11. Bustince, H.; Pagola, M.; Mesiar, R.; Hullermeier, E.; Herrera, F. Grouping, overlap, and generalized bientropic functions for fuzzy modeling of pairwise comparisons. IEEE Trans. Fuzzy Syst. 2012, 20, 405–415.
12. Sanz, J.A.; Fernandez, A.; Bustince, H.; Herrera, F. Improving the performance of fuzzy rule-based classification systems with interval-valued fuzzy sets and genetic amplitude tuning. Inf. Sci. 2010, 180, 3674–3685.
13. Vinter, R.B. Minimax optimal control. SIAM J. Control Optim. 2005, 44, 939–968.
14. Cipriani, C.; Scagliotti, A.; Wöhrer, T. A minimax optimal control approach for robust neural ODEs. In Proceedings of the 2024 European Control Conference (ECC), Stockholm, Sweden, 25–28 June 2024; pp. 58–64.
15. Zhang, S.; Hou, Y.; Zhang, S.; Zhang, M. Fuzzy control model and simulation for nonlinear supply chain system with lead times. Complexity 2017, 2017, 2017634.
16. Li, J.X.; Yang, S.J. Fuzzy relation inequalities about the data transmission mechanism in BitTorrent-like Peer-to-Peer file sharing systems. In Proceedings of the 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Chongqing, China, 29–31 May 2012; pp. 452–456.
17. Qiu, J.J.; Yang, X.P. Min-max programming problem with constraints of addition-min-product fuzzy relation inequalities. Fuzzy Optim. Decis. Mak. 2022, 21, 291–317.
18. Yang, X.P.; Lin, H.T.; Zhou, X.G.; Cao, B.Y. Addition-min fuzzy relation inequalities with application in BitTorrent-like peer-to-peer file sharing system. Fuzzy Sets Syst. 2018, 343, 126–140.
19. Yang, X.P.; Zhou, X.G.; Cao, B.Y. Min-max programming problem subject to addition-min fuzzy relation inequalities. IEEE Trans. Fuzzy Syst. 2016, 24, 111–119.
20. Chiu, Y.L.; Guu, S.M.; Yu, J.; Wu, Y.K. A single-variable method for solving min-max programming problem with addition-min fuzzy relational inequalities. Fuzzy Optim. Decis. Mak. 2019, 18, 433–449.
21. Deo, U.; Jain, J.K.; Beg, M.M.S. Advances in fuzzy relation inequalities and optimization techniques. In Proceedings of the 2025 2nd International Conference on Computational Intelligence, Communication Technology and Networking (CICTN), Ghaziabad, India, 6–7 February 2025; pp. 833–838.
22. Zhang, L. Optimal symmetric interval solution of fuzzy relation inequality considering the stability in P2P educational information resources sharing system. Fuzzy Sets Syst. 2024, 478, 108835.
23. Chen, M.O.; Zhu, G.C.; Islam, S.; Yang, X.P. Centralized solution in max-min fuzzy relation inequalities. AIMS Math. 2025, 10, 7864–7890.
24. Yang, X.P.; Hao, Z.F.; Shu, Q.Y. Stability of the solution in P2P network system based on two-sided fuzzy relation inequality. IEEE Trans. Fuzzy Syst. 2025, 33, 2526–2538.
25. Yang, X.P. Optimal-vector-based algorithm for solving min-max programming subject to addition-min fuzzy relation inequality. IEEE Trans. Fuzzy Syst. 2017, 25, 1127–1140.
26. Guu, S.M.; Yu, J.; Wu, Y.K. A two-phase approach to finding a better managerial solution for systems with addition-min fuzzy relational inequalities. IEEE Trans. Fuzzy Syst. 2018, 26, 2251–2260.
27. Yang, X.P. Leximax minimum solution of addition-min fuzzy relation inequalities. Inf. Sci. 2020, 524, 184–198.
28. Wu, Y.K.; Guu, S.M. An active-set approach to finding a minimal-optimal solution to the min-max programming problem with addition-min fuzzy relational inequalities. Fuzzy Sets Syst. 2022, 447, 39–53.
29. Wu, Y.K.; Guu, S.M. Solving minimal-optimal solutions for the generalized min-max programming problem with addition-min composition. Fuzzy Sets Syst. 2024, 477, 108825.
30. Wu, Y.K.; Guu, S.M.; Chang, Y.C. A single-variable method for solving the min-max programming problem with addition-overlap function composition. AIMS Math. 2024, 12, 3183.
Figure 1. The value of $O(a, x)$ with $a \leq 0.5$.
Figure 2. The value of $O(a, x)$ with $a > 0.5$.
Figure 3. Plot of $O(a, x)$ for different values of $a$ over $x \in [0, 1]$.