Article

Numerical Method for Solving the Robust Continuous-Time Linear Programming Problems

Department of Mathematics, National Kaohsiung Normal University, Kaohsiung 802, Taiwan
Mathematics 2019, 7(5), 435; https://doi.org/10.3390/math7050435
Submission received: 12 April 2019 / Revised: 9 May 2019 / Accepted: 10 May 2019 / Published: 16 May 2019
(This article belongs to the Special Issue Numerical Optimization and Applications)

Abstract

A robust continuous-time linear programming problem is formulated and solved numerically in this paper. The data occurring in the continuous-time linear programming problem are assumed to be uncertain. The uncertainty is treated by following the concept of robust optimization, which has been studied extensively in recent years. We introduce the robust counterpart of the continuous-time linear programming problem. In order to solve this robust counterpart, a discretization problem is formulated and solved to obtain an ϵ-optimal solution. An important contribution of this paper is the derivation of an error bound between the optimal solution and the ϵ-optimal solution.

1. Introduction

The theory of continuous-time linear programming has received considerable attention for a long time. Tyndall [1,2] treated rigorously a continuous-time linear programming problem with constant matrices, which originated from the “bottleneck problem” proposed by Bellman [3]. Levinson [4] generalized the results of Tyndall by considering time-dependent matrices, in which the functions appearing in the objective and constraints were assumed to be continuous on the time interval [0,T]. In this paper, we consider the continuous-time linear programming problem in which the data are assumed to be uncertain, where the data are real-valued functions or real numbers. Based on this assumption of uncertainty, we propose and solve the so-called robust continuous-time linear programming problem.
Meidan and Perold [5], Papageorgiou [6] and Schechter [7] have obtained many interesting results for the continuous-time linear programming problem. Anderson et al. [8,9,10], Fleischer and Sethuraman [11] and Pullan [12,13,14,15,16] investigated a subclass of continuous-time linear programming problems, called separated continuous-time linear programming problems, which can be used to model job-shop scheduling problems. Weiss [17] proposed a simplex-like algorithm to solve the separated continuous-time linear programming problem, and Shindin and Weiss [17,18] studied a more general separated continuous-time linear programming problem. However, error estimates were not studied in these articles. One of the contributions of this paper is an error bound between the optimal solution and the numerical optimal solution.
On the other hand, nonlinear types of continuous-time optimization problems were also studied by Farr and Hanson [19,20], Grinold [21,22], Hanson and Mond [23], Reiland [24,25], Reiland and Hanson [26] and Singh [27]. Nonsmooth continuous-time optimization problems were studied by Rojas-Medar et al. [28] and Singh and Farr [29], and nonsmooth continuous-time multiobjective programming problems by Nobakhtian and Pouryayevali [30,31]. Zalmai [32,33,34,35] investigated continuous-time fractional programming problems. However, numerical methods were not developed in these articles.
Wen and Wu [36,37,38] developed different numerical methods for solving continuous-time linear fractional programming problems. In order to solve continuous-time problems, discretized problems are considered by dividing the time interval [0,T] into many subintervals. Since the functions considered in Wen and Wu [36,37,38] are assumed to be continuous on [0,T], the time interval [0,T] can be divided equally, so that each subinterval has the same length. In Wu [39], the functions are assumed to be piecewise continuous on the time interval [0,T]. In this case, the time interval cannot be divided equally, because the numerical technique requires the functions to be continuous on each subinterval. Therefore, a different methodology, in which the time interval is not partitioned equally, was proposed in Wu [39]. In this paper, we solve a more general model that allows uncertain data in the continuous-time linear programming problem; in other words, we solve the robust counterpart of the continuous-time linear programming problem. We still consider piecewise continuous functions on the time interval [0,T], so the time interval cannot be divided equally.
Addressing uncertain data in optimization problems has become an attractive research topic. As early as the mid-1950s, Dantzig [40] introduced stochastic optimization as an approach to modeling uncertain data by assuming scenarios for the data occurring with different probabilities. One of the difficulties is specifying the exact distribution of the data. Therefore, so-called robust optimization may be another choice for modeling optimization problems with uncertain data. The basic idea of robust optimization is to assume that each uncertain datum falls within a prescribed set; for example, real-valued data can be assumed to fall within a closed interval for convenience. In order to address optimization problems whose uncertain data fall within uncertainty sets, Ben-Tal and Nemirovski [41,42] and, independently, El Ghaoui [43,44] proposed to solve so-called robust optimization problems. This topic has since received increasing attention; for example, the research articles by Averbakh and Zhao [45], Ben-Tal et al. [46], Bertsimas et al. [47,48,49], Chen et al. [50], Erdoğan and Iyengar [51], Zhang [52] and the references therein represent the main stream of this topic. In this paper, we propose the robust counterpart of the continuous-time linear programming problem and design a practical algorithm to solve it.
This paper is organized as follows. In Section 2, a robust continuous-time linear programming problem is formulated. After some algebraic manipulation, it can be transformed into a traditional form of the continuous-time linear programming problem. In Section 3, we introduce the discretization problem of the transformed problem. Based on the solutions obtained from the discretization problem, we can construct feasible solutions of the transformed problem, and under these settings the error bound is derived. In order to obtain approximate solutions, we also introduce the concept of ϵ-optimal solutions. In Section 4, we study the convergence of the approximate solutions. In Section 5, based on the obtained results, a computational procedure is proposed and a numerical example is provided to demonstrate the usefulness of this practical algorithm.

2. Robust Continuous-Time Linear Programming Problems

In this section, a robust continuous-time linear programming problem will be formulated. We shall use some algebraic manipulation to transform it into a traditional form of the continuous-time linear programming problem. Now, we consider the following continuous-time linear programming problem:
$$(\mathrm{CLP})\quad\begin{aligned}\max\ &\sum_{j=1}^{q}\int_{0}^{T}a_{j}(t)\cdot z_{j}(t)\,dt\\ \text{subject to}\ &\sum_{j=1}^{q}B_{ij}\cdot z_{j}(t)\le c_{i}(t)+\sum_{j=1}^{q}\int_{0}^{t}K_{ij}\cdot z_{j}(s)\,ds\ \text{ for all }t\in[0,T]\text{ and }i=1,\dots,p;\\ &z_{j}\in L^{2}[0,T]\text{ and }z_{j}(t)\ge 0\ \text{ for all }j=1,\dots,q\text{ and }t\in[0,T],\end{aligned}$$
where $B_{ij}$ and $K_{ij}$ are nonnegative real numbers for $i=1,\dots,p$ and $j=1,\dots,q$, and $a_j$ and $c_i$ are real-valued functions. It is obvious that if the real-valued functions $c_i$ are assumed to be nonnegative on $[0,T]$ for $i=1,\dots,p$, then the primal problem (CLP) is feasible with the trivial feasible solution $z_j(t)=0$ for all $j=1,\dots,q$.
Suppose that some of the data $B_{ij}$ and $K_{ij}$ are uncertain, so that they fall into the uncertainty sets $\mathcal U_{B_{ij}}$ and $\mathcal U_{K_{ij}}$, respectively. Given any fixed $i\in\{1,2,\dots,p\}$, we denote by $I_i(B)$ and $I_i(K)$ the sets of indices for which $B_{ij}$ and $K_{ij}$, respectively, are assumed to be uncertain. In other words, if $j\in I_i(B)$, then $B_{ij}$ is uncertain, and if $j\in I_i(K)$, then $K_{ij}$ is uncertain; therefore, $I_i(B)$ and $I_i(K)$ are subsets of $\{1,2,\dots,q\}$. It is clear that if the datum $B_{ij}$ is certain, then the uncertainty set $\mathcal U_{B_{ij}}=\{B_{ij}\}$ is a singleton. We also assume that the functions $a_j$ and $c_i$ are pointwise-uncertain in the sense that, for each $t\in[0,T]$, the uncertain data $a_j(t)$ and $c_i(t)$ fall into the uncertainty sets $\mathcal V_{a_j}(t)$ and $\mathcal V_{c_i}(t)$, respectively. If the function $a_j$ or $c_i$ is assumed to be certain, then each function value $a_j(t)$ or $c_i(t)$ is assumed to be certain for $t\in[0,T]$; if the function $a_j$ or $c_i$ is assumed to be uncertain, then the function value $a_j(t)$ or $c_i(t)$ may still be certain for some $t\in[0,T]$. We denote by $I(a)$ and $I(c)$ the sets of indices for which the functions $a_j$ and $c_i$, respectively, are assumed to be uncertain. In other words, if $j\in I(a)$, then the function $a_j$ is uncertain, and if $i\in I(c)$, then the function $c_i$ is uncertain.
The robust counterpart of problem (CLP) requires the constraints to hold for every choice of data in the corresponding uncertainty sets, and it is formulated as follows:
$$(\mathrm{RCLP})\quad\begin{aligned}\max\ &\sum_{j=1}^{q}\int_{0}^{T}a_{j}(t)\cdot z_{j}(t)\,dt\\ \text{subject to}\ &\sum_{j=1}^{q}B_{ij}\cdot z_{j}(t)\le c_{i}(t)+\sum_{j=1}^{q}\int_{0}^{t}K_{ij}\cdot z_{j}(s)\,ds\ \text{ for all }t\in[0,T]\text{ and }i=1,\dots,p;\\ &z_{j}\in L^{2}[0,T]\text{ and }z_{j}(t)\ge 0\ \text{ for all }j=1,\dots,q\text{ and }t\in[0,T];\\ &\text{for all }B_{ij}\in\mathcal U_{B_{ij}}\ \text{ for all }i=1,\dots,p\text{ and }j=1,\dots,q;\\ &\text{for all }K_{ij}\in\mathcal U_{K_{ij}}\ \text{ for all }i=1,\dots,p\text{ and }j=1,\dots,q;\\ &\text{for all }a_{j}(t)\in\mathcal V_{a_{j}}(t)\ \text{ for all }t\in[0,T]\text{ and }j=1,\dots,q;\\ &\text{for all }c_{i}(t)\in\mathcal V_{c_{i}}(t)\ \text{ for all }t\in[0,T]\text{ and }i=1,\dots,p.\end{aligned}$$
We can see that the robust counterpart (RCLP) is a continuous-time programming problem with infinitely many constraints; therefore, it is difficult to solve. However, if we can determine suitable uncertainty sets $\mathcal U_{B_{ij}}$, $\mathcal U_{K_{ij}}$, $\mathcal V_{a_j}(t)$ and $\mathcal V_{c_i}(t)$, then this semi-infinite problem can be transformed into a conventional continuous-time linear programming problem.
We assume that all the uncertain data fall into closed intervals, as described below.
  • For $B_{ij}$ with $j\in I_i(B)$ and $K_{ij}$ with $j\in I_i(K)$, we assume that the uncertain data $B_{ij}$ and $K_{ij}$ fall into the closed intervals
    $$\mathcal U_{B_{ij}}=\left[B_{ij}^{(0)}-\widehat B_{ij},\;B_{ij}^{(0)}+\widehat B_{ij}\right]$$
    and
    $$\mathcal U_{K_{ij}}=\left[K_{ij}^{(0)}-\widehat K_{ij},\;K_{ij}^{(0)}+\widehat K_{ij}\right],$$
    respectively, where $B_{ij}^{(0)}\ge 0$ and $K_{ij}^{(0)}\ge 0$ are the known nominal data of $B_{ij}$ and $K_{ij}$, respectively, and $\widehat B_{ij}\ge 0$ and $\widehat K_{ij}\ge 0$ are the uncertainties, which satisfy
    $$B_{ij}^{(0)}-\widehat B_{ij}\ge 0\quad\text{and}\quad K_{ij}^{(0)}-\widehat K_{ij}\ge 0.$$
    For $j\notin I_i(B)$, we use the notation $B_{ij}^{(0)}$ to denote the certain datum, with uncertainty $\widehat B_{ij}=0$. Similarly, for $j\notin I_i(K)$, we use the notation $K_{ij}^{(0)}$ to denote the certain datum, with uncertainty $\widehat K_{ij}=0$.
  • For $a_j$ with $j\in I(a)$ and $c_i$ with $i\in I(c)$, we assume that
    $$\mathcal V_{a_j}(t)=\left[a_j^{(0)}(t)-\widehat a_j(t),\;a_j^{(0)}(t)+\widehat a_j(t)\right]\quad\text{and}\quad \mathcal V_{c_i}(t)=\left[c_i^{(0)}(t)-\widehat c_i(t),\;c_i^{(0)}(t)+\widehat c_i(t)\right],$$
    where $a_j^{(0)}(t)$ and $c_i^{(0)}(t)$ are the known nominal data of $a_j(t)$ and $c_i(t)$, respectively, and $\widehat a_j(t)\ge 0$ and $\widehat c_i(t)\ge 0$ are the uncertainties of $a_j(t)$ and $c_i(t)$, respectively. For $j\notin I(a)$, we use the notation $a_j^{(0)}(t)$ to denote the certain function, with uncertainties $\widehat a_j(t)=0$ for all $t\in[0,T]$. Similarly, we use the notation $c_i^{(0)}(t)$ to denote the certain function, with uncertainties $\widehat c_i(t)=0$ for $i\notin I(c)$ and $t\in[0,T]$.
    In this case, the robust counterpart ( RCLP ) is written as follows:
    $$(\mathrm{RCLP})\quad\begin{aligned}\max\ &\sum_{j\notin I(a)}\int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt+\sum_{j\in I(a)}\int_0^T a_j(t)\cdot z_j(t)\,dt\\ \text{subject to}\ &\sum_{\{j:j\notin I_i(B)\}}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}B_{ij}\cdot z_j(t)\\ &\quad\le c_i^{(0)}(t)+\sum_{\{j:j\notin I_i(K)\}}\int_0^t K_{ij}^{(0)}\cdot z_j(s)\,ds+\sum_{\{j:j\in I_i(K)\}}\int_0^t K_{ij}\cdot z_j(s)\,ds\\ &\qquad\text{for }i\notin I(c)\text{ and for all }t\in[0,T];\\ &\sum_{\{j:j\notin I_i(B)\}}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}B_{ij}\cdot z_j(t)\\ &\quad\le c_i(t)+\sum_{\{j:j\notin I_i(K)\}}\int_0^t K_{ij}^{(0)}\cdot z_j(s)\,ds+\sum_{\{j:j\in I_i(K)\}}\int_0^t K_{ij}\cdot z_j(s)\,ds\\ &\qquad\text{for }i\in I(c)\text{ and for all }t\in[0,T];\\ &z_j\in L^2[0,T]\text{ and }z_j(t)\ge 0\ \text{ for all }j=1,\dots,q\text{ and }t\in[0,T];\\ &\text{for all }B_{ij}\in\mathcal U_{B_{ij}}\text{ with }j\in I_i(B);\quad\text{for all }K_{ij}\in\mathcal U_{K_{ij}}\text{ with }j\in I_i(K);\\ &\text{for all }a_j(t)\in\mathcal V_{a_j}(t)\text{ with }j\in I(a)\text{ for all }t\in[0,T];\\ &\text{for all }c_i(t)\in\mathcal V_{c_i}(t)\text{ with }i\in I(c)\text{ for all }t\in[0,T].\end{aligned}$$
Next, we are going to convert the above semi-infinite problem ( RCLP ) into a conventional continuous-time linear programming problem.
First of all, the problem ( RCLP ) can be rewritten as the following equivalent form
$$(\mathrm{RCLP}1)\quad\begin{aligned}\max\ &\phi\\ \text{subject to}\ &\phi\le\sum_{j\notin I(a)}\int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt+\sum_{j\in I(a)}\int_0^T a_j(t)\cdot z_j(t)\,dt;\\ &\sum_{\{j:j\notin I_i(B)\}}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}B_{ij}\cdot z_j(t)\\ &\quad\le c_i^{(0)}(t)+\sum_{\{j:j\notin I_i(K)\}}\int_0^t K_{ij}^{(0)}\cdot z_j(s)\,ds+\sum_{\{j:j\in I_i(K)\}}\int_0^t K_{ij}\cdot z_j(s)\,ds\\ &\qquad\text{for }i\notin I(c)\text{ and for all }t\in[0,T];\\ &\sum_{\{j:j\notin I_i(B)\}}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}B_{ij}\cdot z_j(t)\\ &\quad\le c_i(t)+\sum_{\{j:j\notin I_i(K)\}}\int_0^t K_{ij}^{(0)}\cdot z_j(s)\,ds+\sum_{\{j:j\in I_i(K)\}}\int_0^t K_{ij}\cdot z_j(s)\,ds\\ &\qquad\text{for }i\in I(c)\text{ and for all }t\in[0,T];\\ &\phi\in\mathbb R;\ z_j\in L^2[0,T]\text{ and }z_j(t)\ge 0\ \text{ for all }j=1,\dots,q\text{ and }t\in[0,T];\\ &\text{for all }B_{ij}\in\mathcal U_{B_{ij}}\text{ with }j\in I_i(B);\quad\text{for all }K_{ij}\in\mathcal U_{K_{ij}}\text{ with }j\in I_i(K);\\ &\text{for all }a_j(t)\in\mathcal V_{a_j}(t)\text{ with }j\in I(a)\text{ for all }t\in[0,T];\\ &\text{for all }c_i(t)\in\mathcal V_{c_i}(t)\text{ with }i\in I(c)\text{ for all }t\in[0,T].\end{aligned}$$
Given any fixed $i\in\{1,\dots,p\}$, for $j\in I_i(B)$, since $z_j(t)\ge 0$ and $B_{ij}\le B_{ij}^{(0)}+\widehat B_{ij}$, we have
$$\sum_{\{j:j\notin I_i(B)\}}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}B_{ij}\cdot z_j(t)\le\sum_{j=1}^{q}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}\widehat B_{ij}\cdot z_j(t).\tag{1}$$
Similarly, for $j\in I_i(K)$, since $z_j(s)\ge 0$ and $K_{ij}^{(0)}-\widehat K_{ij}\le K_{ij}$, we also have
$$\sum_{\{j:j\notin I_i(K)\}}\int_0^t K_{ij}^{(0)}\cdot z_j(s)\,ds+\sum_{\{j:j\in I_i(K)\}}\int_0^t K_{ij}\cdot z_j(s)\,ds\ge\sum_{j=1}^{q}\int_0^t K_{ij}^{(0)}\cdot z_j(s)\,ds-\sum_{\{j:j\in I_i(K)\}}\widehat K_{ij}\cdot\int_0^t z_j(s)\,ds.\tag{2}$$
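The two estimates above reduce to an elementary fact: for a nonnegative variable, the worst case of an interval-uncertain coefficient is attained at an endpoint of the interval. A minimal numeric sketch (all values hypothetical):

```python
# Worst case of an interval-uncertain coefficient times a nonnegative variable.
# For z >= 0 and B in [B0 - Bh, B0 + Bh]:  max_B B*z = (B0 + Bh)*z,
# and for K in [K0 - Kh, K0 + Kh]:         min_K K*z = (K0 - Kh)*z.
import numpy as np

B0, Bh = 2.0, 0.5   # hypothetical nominal value and uncertainty
K0, Kh = 1.0, 0.3
z = 1.7             # any nonnegative value

Bs = np.linspace(B0 - Bh, B0 + Bh, 1001)  # sample the uncertainty interval
Ks = np.linspace(K0 - Kh, K0 + Kh, 1001)

assert np.isclose(max(B * z for B in Bs), (B0 + Bh) * z)
assert np.isclose(min(K * z for K in Ks), (K0 - Kh) * z)
```

This is exactly why the semi-infinite constraints below collapse to a single worst-case constraint per index.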
Using (1) and (2), we consider the following cases.
  • For $i\in I(c)$, since $c_i^{(0)}(t)-\widehat c_i(t)\le c_i(t)$ for all $t\in[0,T]$, we obtain
    $$\begin{aligned}&\sum_{\{j:j\notin I_i(B)\}}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}B_{ij}\cdot z_j(t)-\sum_{\{j:j\notin I_i(K)\}}\int_0^t K_{ij}^{(0)}\cdot z_j(s)\,ds-\sum_{\{j:j\in I_i(K)\}}\int_0^t K_{ij}\cdot z_j(s)\,ds-c_i(t)\\ &\quad\le\sum_{j=1}^{q}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}\widehat B_{ij}\cdot z_j(t)-\sum_{j=1}^{q}K_{ij}^{(0)}\cdot\int_0^t z_j(s)\,ds+\sum_{\{j:j\in I_i(K)\}}\widehat K_{ij}\cdot\int_0^t z_j(s)\,ds-c_i^{(0)}(t)+\widehat c_i(t)\end{aligned}$$
    for all $B_{ij}\in\mathcal U_{B_{ij}}$, $K_{ij}\in\mathcal U_{K_{ij}}$ and $c_i(t)\in\mathcal V_{c_i}(t)$, which implies
    $$\begin{aligned}&\max_{B_{ij},K_{ij},c_i}\left\{\sum_{\{j:j\notin I_i(B)\}}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}B_{ij}\cdot z_j(t)-\sum_{\{j:j\notin I_i(K)\}}\int_0^t K_{ij}^{(0)}\cdot z_j(s)\,ds-\sum_{\{j:j\in I_i(K)\}}\int_0^t K_{ij}\cdot z_j(s)\,ds-c_i(t)\right\}\\ &\quad=\sum_{j=1}^{q}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}\widehat B_{ij}\cdot z_j(t)-\sum_{j=1}^{q}\int_0^t K_{ij}^{(0)}\cdot z_j(s)\,ds+\sum_{\{j:j\in I_i(K)\}}\widehat K_{ij}\cdot\int_0^t z_j(s)\,ds-c_i^{(0)}(t)+\widehat c_i(t),\end{aligned}\tag{3}$$
    where the equality can be attained.
  • For $i\notin I(c)$, we obtain
    $$\begin{aligned}&\sum_{\{j:j\notin I_i(B)\}}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}B_{ij}\cdot z_j(t)-\sum_{\{j:j\notin I_i(K)\}}\int_0^t K_{ij}^{(0)}\cdot z_j(s)\,ds-\sum_{\{j:j\in I_i(K)\}}\int_0^t K_{ij}\cdot z_j(s)\,ds-c_i^{(0)}(t)\\ &\quad\le\sum_{j=1}^{q}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}\widehat B_{ij}\cdot z_j(t)-\sum_{j=1}^{q}K_{ij}^{(0)}\cdot\int_0^t z_j(s)\,ds+\sum_{\{j:j\in I_i(K)\}}\widehat K_{ij}\cdot\int_0^t z_j(s)\,ds-c_i^{(0)}(t),\end{aligned}$$
    which implies
    $$\begin{aligned}&\max_{B_{ij},K_{ij}}\left\{\sum_{\{j:j\notin I_i(B)\}}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}B_{ij}\cdot z_j(t)-\sum_{\{j:j\notin I_i(K)\}}\int_0^t K_{ij}^{(0)}\cdot z_j(s)\,ds-\sum_{\{j:j\in I_i(K)\}}\int_0^t K_{ij}\cdot z_j(s)\,ds-c_i^{(0)}(t)\right\}\\ &\quad=\sum_{j=1}^{q}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}\widehat B_{ij}\cdot z_j(t)-\sum_{j=1}^{q}\int_0^t K_{ij}^{(0)}\cdot z_j(s)\,ds+\sum_{\{j:j\in I_i(K)\}}\widehat K_{ij}\cdot\int_0^t z_j(s)\,ds-c_i^{(0)}(t),\end{aligned}\tag{4}$$
    where the equality can be attained. On the other hand, since $a_j^{(0)}(t)-\widehat a_j(t)\le a_j(t)$ for $j\in I(a)$ and $t\in[0,T]$, we have
    $$\sum_{j=1}^{q}\int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt-\sum_{j\in I(a)}\int_0^T\widehat a_j(t)\cdot z_j(t)\,dt\le\sum_{j\notin I(a)}\int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt+\sum_{j\in I(a)}\int_0^T a_j(t)\cdot z_j(t)\,dt,$$
    which implies
    $$\min_{a_j}\left\{\sum_{j\notin I(a)}\int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt+\sum_{j\in I(a)}\int_0^T a_j(t)\cdot z_j(t)\,dt\right\}=\sum_{j=1}^{q}\int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt-\sum_{j\in I(a)}\int_0^T\widehat a_j(t)\cdot z_j(t)\,dt,\tag{5}$$
    where the equality can also be attained. Therefore, from (3), (4) and (5), we conclude that $(\phi,z(t))=(\phi,z_1(t),\dots,z_q(t))$ is a feasible solution of problem $(\mathrm{RCLP}1)$ if and only if it satisfies the following inequalities:
    $$\sum_{j=1}^{q}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}\widehat B_{ij}\cdot z_j(t)\le\sum_{j=1}^{q}\int_0^t K_{ij}^{(0)}\cdot z_j(s)\,ds-\sum_{\{j:j\in I_i(K)\}}\widehat K_{ij}\cdot\int_0^t z_j(s)\,ds+c_i^{(0)}(t)-\widehat c_i(t)\quad\text{for }i\in I(c)$$
    and
    $$\sum_{j=1}^{q}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}\widehat B_{ij}\cdot z_j(t)\le\sum_{j=1}^{q}\int_0^t K_{ij}^{(0)}\cdot z_j(s)\,ds-\sum_{\{j:j\in I_i(K)\}}\widehat K_{ij}\cdot\int_0^t z_j(s)\,ds+c_i^{(0)}(t)\quad\text{for }i\notin I(c)$$
    and
    $$\phi\le\sum_{j=1}^{q}\int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt-\sum_{j\in I(a)}\int_0^T\widehat a_j(t)\cdot z_j(t)\,dt.$$
This shows that problem ( RCLP 1 ) is equivalent to the following problem
$$(\mathrm{RCLP}2)\quad\begin{aligned}\max\ &\phi\\ \text{subject to}\ &\phi\le\sum_{j=1}^{q}\int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt-\sum_{j\in I(a)}\int_0^T\widehat a_j(t)\cdot z_j(t)\,dt;\\ &\sum_{j=1}^{q}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}\widehat B_{ij}\cdot z_j(t)\le c_i^{(0)}(t)-\widehat c_i(t)+\sum_{j=1}^{q}K_{ij}^{(0)}\cdot\int_0^t z_j(s)\,ds-\sum_{\{j:j\in I_i(K)\}}\widehat K_{ij}\cdot\int_0^t z_j(s)\,ds\\ &\qquad\text{for all }t\in[0,T]\text{ and }i\in I(c);\\ &\sum_{j=1}^{q}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}\widehat B_{ij}\cdot z_j(t)\le c_i^{(0)}(t)+\sum_{j=1}^{q}K_{ij}^{(0)}\cdot\int_0^t z_j(s)\,ds-\sum_{\{j:j\in I_i(K)\}}\widehat K_{ij}\cdot\int_0^t z_j(s)\,ds\\ &\qquad\text{for all }t\in[0,T]\text{ and }i\notin I(c);\\ &\phi\in\mathbb R;\ z_j\in L^2[0,T]\text{ and }z_j(t)\ge 0\ \text{ for all }j=1,\dots,q\text{ and }t\in[0,T],\end{aligned}$$
which can also be rewritten as the following continuous-time linear programming problem:
$$(\mathrm{RCLP}3)\quad\begin{aligned}\max\ &\sum_{j=1}^{q}\int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt-\sum_{j\in I(a)}\int_0^T\widehat a_j(t)\cdot z_j(t)\,dt\\ \text{subject to}\ &\sum_{j=1}^{q}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}\widehat B_{ij}\cdot z_j(t)\le c_i^{(0)}(t)-\widehat c_i(t)+\sum_{j=1}^{q}K_{ij}^{(0)}\cdot\int_0^t z_j(s)\,ds-\sum_{\{j:j\in I_i(K)\}}\widehat K_{ij}\cdot\int_0^t z_j(s)\,ds\\ &\qquad\text{for all }t\in[0,T]\text{ and }i\in I(c);\\ &\sum_{j=1}^{q}B_{ij}^{(0)}\cdot z_j(t)+\sum_{\{j:j\in I_i(B)\}}\widehat B_{ij}\cdot z_j(t)\le c_i^{(0)}(t)+\sum_{j=1}^{q}K_{ij}^{(0)}\cdot\int_0^t z_j(s)\,ds-\sum_{\{j:j\in I_i(K)\}}\widehat K_{ij}\cdot\int_0^t z_j(s)\,ds\\ &\qquad\text{for all }t\in[0,T]\text{ and }i\notin I(c);\\ &z_j\in L^2[0,T]\text{ and }z_j(t)\ge 0\ \text{ for all }j=1,\dots,q\text{ and }t\in[0,T].\end{aligned}$$
According to the duality theory of continuous-time linear programming, the dual problem of (RCLP3) can be formulated as follows:
$$(\mathrm{DRCLP}3)\quad\begin{aligned}\min\ &\sum_{i=1}^{p}\int_0^T c_i^{(0)}(t)\cdot w_i(t)\,dt-\sum_{i\in I(c)}\int_0^T\widehat c_i(t)\cdot w_i(t)\,dt\\ \text{subject to}\ &\sum_{i=1}^{p}B_{ij}^{(0)}\cdot w_i(t)+\sum_{\{i:j\in I_i(B)\}}\widehat B_{ij}\cdot w_i(t)\ge a_j^{(0)}(t)-\widehat a_j(t)+\sum_{i=1}^{p}K_{ij}^{(0)}\cdot\int_t^T w_i(s)\,ds-\sum_{\{i:j\in I_i(K)\}}\widehat K_{ij}\cdot\int_t^T w_i(s)\,ds\\ &\qquad\text{for all }t\in[0,T]\text{ and }j\in I(a);\\ &\sum_{i=1}^{p}B_{ij}^{(0)}\cdot w_i(t)+\sum_{\{i:j\in I_i(B)\}}\widehat B_{ij}\cdot w_i(t)\ge a_j^{(0)}(t)+\sum_{i=1}^{p}K_{ij}^{(0)}\cdot\int_t^T w_i(s)\,ds-\sum_{\{i:j\in I_i(K)\}}\widehat K_{ij}\cdot\int_t^T w_i(s)\,ds\\ &\qquad\text{for all }t\in[0,T]\text{ and }j\notin I(a);\\ &w_i\in L^2[0,T]\text{ and }w_i(t)\ge 0\ \text{ for all }i=1,\dots,p\text{ and }t\in[0,T].\end{aligned}$$
In the sequel, we are going to design a computational procedure to numerically solve the robust counterpart (RCLP3).

3. Discretization

In this section, we shall introduce the discretization problem. Based on the solutions obtained from the discretization problem, we can construct the feasible solutions of the transformed problem. Under these settings, the error bound can be derived. On the other hand, in order to obtain the approximate solution, we also introduce the concept of ϵ -optimal solutions.
In order to develop the efficient numerical method, we assume that the following conditions are satisfied.
  • For $i=1,\dots,p$ and $j=1,\dots,q$, the real numbers satisfy $B_{ij}^{(0)}\ge 0$ and $K_{ij}^{(0)}\ge 0$; moreover, $B_{ij}^{(0)}-\widehat B_{ij}\ge 0$ for $j\in I_i(B)$ and $K_{ij}^{(0)}-\widehat K_{ij}\ge 0$ for $j\in I_i(K)$.
  • For $i=1,\dots,p$ and $j=1,\dots,q$, the functions $a_j^{(0)}$ and $c_i^{(0)}$ are piecewise continuous on $[0,T]$. For $j\in I(a)$ and $i\in I(c)$, the functions $\widehat a_j$ and $\widehat c_i$ are also piecewise continuous on $[0,T]$, which says that the functions $a_j^{(0)}-\widehat a_j$ and $c_i^{(0)}-\widehat c_i$ are piecewise continuous on $[0,T]$ for $j\in I(a)$ and $i\in I(c)$.
  • For each j = 1 , , q , the following inequality is satisfied:
    $$\sum_{i=1}^{p}B_{ij}^{(0)}+\sum_{\{i:j\in I_i(B)\}}\widehat B_{ij}>0.\tag{6}$$
  • For each i = 1 , , p and for
    $$\bar B_i\equiv\min_{j\in I_i(B)}\left\{B_{ij}^{(0)}+\widehat B_{ij}:B_{ij}^{(0)}+\widehat B_{ij}>0\right\}\quad\text{and}\quad\tilde B_i\equiv\min_{j\notin I_i(B)}\left\{B_{ij}^{(0)}:B_{ij}^{(0)}>0\right\},$$
    the following inequality is satisfied:
    $$\min_{i=1,\dots,p}\min\left\{\bar B_i,\tilde B_i\right\}=\sigma>0.\tag{7}$$
    In other words,
    $$\sigma\le\begin{cases}B_{ij}^{(0)}+\widehat B_{ij}&\text{if }B_{ij}^{(0)}+\widehat B_{ij}\ne 0\text{ for }j\in I_i(B)\\[2pt]B_{ij}^{(0)}&\text{if }B_{ij}^{(0)}\ne 0\text{ for }j\notin I_i(B).\end{cases}$$
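The constant $\sigma$ can be computed directly from the data. A small sketch under hypothetical matrices $B^{(0)}$ and $\widehat B$ (with $\widehat B_{ij}=0$ for certain entries, so $B_{ij}^{(0)}+\widehat B_{ij}$ covers both cases of the definition):

```python
import numpy as np

# Hypothetical nominal data B0 and uncertainties Bh; Bh[i, j] = 0 when the
# entry (i, j) is certain, so B0 + Bh equals B0 for certain entries.
B0 = np.array([[2.0, 0.0],
               [1.0, 3.0]])
Bh = np.array([[0.5, 0.0],
               [0.0, 1.0]])

sigma = np.inf
for i in range(B0.shape[0]):
    vals = B0[i] + Bh[i]      # B_ij^(0) + B̂_ij (uncertain j) or B_ij^(0) (certain j)
    pos = vals[vals > 0]      # only strictly positive entries enter the minimum
    if pos.size:
        sigma = min(sigma, pos.min())
print(sigma)
```

For this made-up data, row 1 contributes $\min\{2.5\}=2.5$ and row 2 contributes $\min\{1,4\}=1$, so $\sigma=1$.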
Let $A_j$ denote the set of discontinuities of the function $a_j^{(0)}-\widehat a_j$ for $j\in I(a)$, or of $a_j^{(0)}$ for $j\notin I(a)$. Let $C_i$ denote the set of discontinuities of the function $c_i^{(0)}-\widehat c_i$ for $i\in I(c)$, or of $c_i^{(0)}$ for $i\notin I(c)$. Then, $A_j$ and $C_i$ are finite subsets of $[0,T]$. In order to determine the partition of the time interval $[0,T]$, we consider the following set:
$$D=\left(\bigcup_{j=1}^{q}A_j\right)\cup\left(\bigcup_{i=1}^{p}C_i\right)\cup\{0,T\}.$$
Then, D is a finite subset of [ 0 , T ] written by
$$D=\left\{d_0,d_1,d_2,\dots,d_r\right\},$$
where, for convenience, we set $d_0=0$ and $d_r=T$; the endpoints $d_0$ and $d_r$ need not be points of discontinuity of the functions. Let $P_n$ be a partition of $[0,T]$ such that $D\subseteq P_n$, which means that each closed interval $[d_{v-1},d_v]$ is further divided into closed subintervals. In this case, the time interval $[0,T]$ is not necessarily divided equally into $n$ closed subintervals. Let
$$P_n=\left\{e_0^{(n)},e_1^{(n)},\dots,e_n^{(n)}\right\},$$
where e 0 ( n ) = 0 and e n ( n ) = T . Then, the n closed subintervals are denoted by
$$\bar E_l^{(n)}=\left[e_{l-1}^{(n)},e_l^{(n)}\right]\quad\text{for }l=1,\dots,n.$$
We also write
$$E_l^{(n)}=\left(e_{l-1}^{(n)},e_l^{(n)}\right)\quad\text{and}\quad F_l^{(n)}=\left[e_{l-1}^{(n)},e_l^{(n)}\right).$$
Let d l ( n ) denote the length of closed interval E ¯ l ( n ) for l = 1 , , n , and let
$$\|P_n\|=\max_{l=1,\dots,n}d_l^{(n)}.$$
In the limiting case, we shall assume that
$$\|P_n\|\to 0\ \text{ as }\ n\to\infty.$$
In this paper, we assume that there exist $n_*,n^*\in\mathbb N$ such that
$$n_*\cdot r\le n\le n^*\cdot r\quad\text{and}\quad\|P_n\|\le\frac{T}{n^*}.\tag{9}$$
Therefore, in the limiting case, we assume that $n_*\to\infty$, which implies $n\to\infty$. In the sequel, when we say that $n\to\infty$, it implicitly means that $n_*\to\infty$.
For example, suppose that the length of the closed interval $[d_{v-1},d_v]$ is $l_v$ for $v=1,\dots,r$, and that each closed interval $[d_{v-1},d_v]$ is divided equally into $n_*$ subintervals. In this case, the total number of subintervals is $n=n_*\cdot r$. We also see that $n^*=n_*$ and
$$\|P_n\|=\frac{1}{n_*}\cdot\max_{v=1,\dots,r}l_v\le\frac{T}{n_*},\quad\text{and}\quad n\to\infty\ \text{ if and only if }\ n_*\to\infty.$$
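The construction of such a partition is mechanical: merge the discontinuity points into a uniform refinement of each piece. A minimal sketch with hypothetical discontinuity points:

```python
import numpy as np

# Hypothetical discontinuity points of the piecewise continuous data on [0, T].
T = 2.0
D = [0.0, 0.8, 1.5, T]      # d_0 = 0, ..., d_r = T  (here r = 3)
n_star = 4                  # each [d_{v-1}, d_v] is split into n* equal pieces

P_n = np.unique(np.concatenate(
    [np.linspace(D[v], D[v + 1], n_star + 1) for v in range(len(D) - 1)]))
lengths = np.diff(P_n)      # the subinterval lengths d_l^(n)
mesh = lengths.max()        # the mesh size ||P_n||
n = len(P_n) - 1            # total number of subintervals: n = n* · r

print(n, mesh)
```

Here $n = 4\cdot 3 = 12$ and $\|P_n\| = 0.8/4 = 0.2 \le T/n_* = 0.5$, so every discontinuity point is a partition point and the mesh bound of the example above holds.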
Under the above construction of the partition $P_n$, the functions $a_j^{(0)}-\widehat a_j$ for $j\in I(a)$, $a_j^{(0)}$ for $j\notin I(a)$, $c_i^{(0)}-\widehat c_i$ for $i\in I(c)$ and $c_i^{(0)}$ for $i\notin I(c)$ are continuous on the open interval $E_l^{(n)}$ for $l=1,\dots,n$. Now, we define
$$a_{lj}^{(n)}=\begin{cases}\displaystyle\inf_{t\in\bar E_l^{(n)}}\left(a_j^{(0)}(t)-\widehat a_j(t)\right)&\text{for }j\in I(a)\\[4pt]\displaystyle\inf_{t\in\bar E_l^{(n)}}a_j^{(0)}(t)&\text{for }j\notin I(a)\end{cases}$$
and
$$c_{li}^{(n)}=\begin{cases}\displaystyle\inf_{t\in\bar E_l^{(n)}}\left(c_i^{(0)}(t)-\widehat c_i(t)\right)&\text{for }i\in I(c)\\[4pt]\displaystyle\inf_{t\in\bar E_l^{(n)}}c_i^{(0)}(t)&\text{for }i\notin I(c)\end{cases}$$
and the vectors
$$a_l^{(n)}=\left(a_{l1}^{(n)},a_{l2}^{(n)},\dots,a_{lq}^{(n)}\right)\in\mathbb R^q\quad\text{and}\quad c_l^{(n)}=\left(c_{l1}^{(n)},c_{l2}^{(n)},\dots,c_{lp}^{(n)}\right)\in\mathbb R^p.$$
Then, we see that
$$a_{lj}^{(n)}\le\begin{cases}a_j^{(0)}(t)-\widehat a_j(t)&\text{for }j\in I(a)\\a_j^{(0)}(t)&\text{for }j\notin I(a)\end{cases}\quad\text{and}\quad c_{li}^{(n)}\le\begin{cases}c_i^{(0)}(t)-\widehat c_i(t)&\text{for }i\in I(c)\\c_i^{(0)}(t)&\text{for }i\notin I(c)\end{cases}$$
for all $t\in\bar E_l^{(n)}$ and $l=1,\dots,n$.
For each $n\in\mathbb N$, we define the following linear programming problem:
$$(\mathrm P_n)\quad\begin{aligned}\max\ &\sum_{l=1}^{n}\sum_{j=1}^{q}d_l^{(n)}\cdot a_{lj}^{(n)}\cdot z_{lj}\\ \text{subject to}\ &\sum_{j=1}^{q}B_{ij}^{(0)}\cdot z_{1j}+\sum_{\{j:j\in I_i(B)\}}\widehat B_{ij}\cdot z_{1j}\le c_{1i}^{(n)}\ \text{ for }i=1,\dots,p;\\ &\sum_{j=1}^{q}B_{ij}^{(0)}\cdot z_{lj}+\sum_{\{j:j\in I_i(B)\}}\widehat B_{ij}\cdot z_{lj}\le c_{li}^{(n)}+\sum_{k=1}^{l-1}\sum_{j=1}^{q}d_k^{(n)}\cdot K_{ij}^{(0)}\cdot z_{kj}-\sum_{k=1}^{l-1}\sum_{\{j:j\in I_i(K)\}}d_k^{(n)}\cdot\widehat K_{ij}\cdot z_{kj}\\ &\qquad\text{for }l=2,\dots,n\text{ and }i=1,\dots,p;\\ &z_{lj}\ge 0\ \text{ for }l=1,\dots,n\text{ and }j=1,\dots,q.\end{aligned}$$
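Since $(\mathrm P_n)$ is a finite linear program, it can be handed to any LP solver. The following sketch solves a tiny hypothetical instance ($p=q=1$, three equal subintervals, made-up data) with `scipy.optimize.linprog`; the worst-case coefficients $B^{(0)}+\widehat B$ and $K^{(0)}-\widehat K$ enter exactly as in the constraints above:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical one-dimensional instance (p = q = 1) on [0, T] with n equal pieces.
T, n = 1.0, 3
d = np.full(n, T / n)              # subinterval lengths d_l^(n)
B0, Bh = 1.0, 0.0                  # B certain: worst case uses B0 + Bh
K0, Kh = 1.0, 0.2                  # K uncertain: worst case uses K0 - Kh
a = np.ones(n)                     # a_lj^(n)
c = np.ones(n)                     # c_li^(n)

# Decision vector x = (z_1, ..., z_n); constraints in A_ub @ x <= b_ub form:
#   (B0 + Bh) z_l - sum_{k < l} d_k (K0 - Kh) z_k <= c_l
A = np.zeros((n, n))
for l in range(n):
    A[l, l] = B0 + Bh
    A[l, :l] = -d[:l] * (K0 - Kh)  # accumulated terms moved to the left-hand side

res = linprog(c=-(d * a), A_ub=A, b_ub=c, bounds=[(0, None)] * n)
print(res.x, -res.fun)
```

Because increasing an early $z_k$ both improves the objective and relaxes the later constraints, the optimum makes every constraint tight; here $z=(1,\,19/15,\,361/225)$ with optimal value $871/675\approx 1.2904$.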
According to the duality theory of linear programming, the dual problem of ( P n ) is given by
$$(\widehat{\mathrm D}_n)\quad\begin{aligned}\min\ &\sum_{l=1}^{n}\sum_{i=1}^{p}c_{li}^{(n)}\cdot\widehat w_{li}\\ \text{subject to}\ &\sum_{i=1}^{p}B_{ij}^{(0)}\cdot\widehat w_{li}+\sum_{\{i:j\in I_i(B)\}}\widehat B_{ij}\cdot\widehat w_{li}\ge d_l^{(n)}a_{lj}^{(n)}+d_l^{(n)}\cdot\sum_{k=l+1}^{n}\sum_{i=1}^{p}K_{ij}^{(0)}\cdot\widehat w_{ki}-d_l^{(n)}\cdot\sum_{k=l+1}^{n}\sum_{\{i:j\in I_i(K)\}}\widehat K_{ij}\cdot\widehat w_{ki}\\ &\qquad\text{for }l=1,\dots,n-1\text{ and }j=1,\dots,q;\\ &\sum_{i=1}^{p}B_{ij}^{(0)}\cdot\widehat w_{ni}+\sum_{\{i:j\in I_i(B)\}}\widehat B_{ij}\cdot\widehat w_{ni}\ge d_n^{(n)}a_{nj}^{(n)}\ \text{ for }j=1,\dots,q;\\ &\widehat w_{li}\ge 0\ \text{ for }l=1,\dots,n\text{ and }i=1,\dots,p.\end{aligned}$$
Now, let
$$w_{li}=\frac{\widehat w_{li}}{d_l^{(n)}}.$$
Then, dividing both sides of the constraints by $d_l^{(n)}$, the dual problem $(\widehat{\mathrm D}_n)$ can be equivalently written as
$$(\mathrm D_n)\quad\begin{aligned}\min\ &\sum_{l=1}^{n}\sum_{i=1}^{p}d_l^{(n)}\cdot c_{li}^{(n)}\cdot w_{li}\\ \text{subject to}\ &\sum_{i=1}^{p}B_{ij}^{(0)}\cdot w_{li}+\sum_{\{i:j\in I_i(B)\}}\widehat B_{ij}\cdot w_{li}\ge a_{lj}^{(n)}+\sum_{k=l+1}^{n}\sum_{i=1}^{p}d_k^{(n)}\cdot K_{ij}^{(0)}\cdot w_{ki}-\sum_{k=l+1}^{n}\sum_{\{i:j\in I_i(K)\}}d_k^{(n)}\cdot\widehat K_{ij}\cdot w_{ki}\\ &\qquad\text{for }l=1,\dots,n-1\text{ and }j=1,\dots,q;\\ &\sum_{i=1}^{p}B_{ij}^{(0)}\cdot w_{ni}+\sum_{\{i:j\in I_i(B)\}}\widehat B_{ij}\cdot w_{ni}\ge a_{nj}^{(n)}\ \text{ for }j=1,\dots,q;\\ &w_{li}\ge 0\ \text{ for }l=1,\dots,n\text{ and }i=1,\dots,p.\end{aligned}$$
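Strong duality between $(\mathrm P_n)$ and $(\mathrm D_n)$ can be checked numerically. The following sketch solves both problems for a tiny hypothetical instance ($p=q=1$, $n=3$ equal subintervals, made-up data) and compares the optimal values:

```python
import numpy as np
from scipy.optimize import linprog

T, n = 1.0, 3
d = np.full(n, T / n)
B_eff = 1.0                 # B0 + B̂ (here B̂ = 0)
K_eff = 0.8                 # K0 - K̂ with K0 = 1 and K̂ = 0.2
a = np.ones(n)              # a_lj^(n)
c = np.ones(n)              # c_li^(n)

# Primal (P_n): max (d*a)·z  s.t.  B_eff z_l - sum_{k<l} d_k K_eff z_k <= c_l.
A_p = B_eff * np.eye(n)
for l in range(n):
    A_p[l, :l] = -d[:l] * K_eff
primal = linprog(-(d * a), A_ub=A_p, b_ub=c, bounds=[(0, None)] * n)

# Dual (D_n): min (d*c)·w  s.t.  B_eff w_l >= a_l + sum_{k>l} d_k K_eff w_k,
# rewritten as A_d @ w <= -a for linprog's "<=" form.
A_d = -B_eff * np.eye(n)
for l in range(n):
    A_d[l, l + 1:] = d[l + 1:] * K_eff
dual = linprog(d * c, A_ub=A_d, b_ub=-a, bounds=[(0, None)] * n)

print(-primal.fun, dual.fun)
```

For this data both optimal values equal $871/675\approx 1.2904$, illustrating that no duality gap arises at the discretized level.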
Recall that d l ( n ) denotes the length of closed interval E ¯ l ( n ) . We also define
$$s_l^{(n)}=\max_{k=l,\dots,n}d_k^{(n)}.$$
Then, we have
$$s_l^{(n)}=\max\left\{d_l^{(n)},d_{l+1}^{(n)},\dots,d_n^{(n)}\right\}=\max\left\{d_l^{(n)},s_{l+1}^{(n)}\right\},$$
which says that
$$s_l^{(n)}\ge d_l^{(n)}\quad\text{and}\quad\|P_n\|\ge s_l^{(n)}\ge s_{l+1}^{(n)}\tag{13}$$
for l = 1 , , n 1 . For further discussion, we adopt the following notations:
$$\bar\tau_l^{(n)}=\max_{j=1,\dots,q}a_{lj}^{(n)}\quad\text{and}\quad\tau_l^{(n)}=\max_{k=l,\dots,n}\bar\tau_k^{(n)}\tag{14}$$
$$\nu=\max_{j=1,\dots,q}\left(\sum_{i=1}^{p}K_{ij}^{(0)}-\sum_{\{i:j\in I_i(K)\}}\widehat K_{ij}\right)\tag{15}$$
$$\phi=\max_{i=1,\dots,p}\left(\sum_{j=1}^{q}K_{ij}^{(0)}-\sum_{\{j:j\in I_i(K)\}}\widehat K_{ij}\right),\qquad\tau=\max_{j=1,\dots,q}\tau_j,\ \text{ where }\tau_j=\begin{cases}\displaystyle\sup_{t\in[0,T]}\left(a_j^{(0)}(t)-\widehat a_j(t)\right)&\text{for }j\in I(a)\\[4pt]\displaystyle\sup_{t\in[0,T]}a_j^{(0)}(t)&\text{for }j\notin I(a)\end{cases}$$
$$\zeta=\max_{i=1,\dots,p}\zeta_i,\ \text{ where }\zeta_i=\begin{cases}\displaystyle\sup_{t\in[0,T]}\left(c_i^{(0)}(t)-\widehat c_i(t)\right)&\text{for }i\in I(c)\\[4pt]\displaystyle\sup_{t\in[0,T]}c_i^{(0)}(t)&\text{for }i\notin I(c).\end{cases}$$
Now, we also have
$$\tau_l^{(n)}=\max\left\{\bar\tau_l^{(n)},\bar\tau_{l+1}^{(n)},\dots,\bar\tau_n^{(n)}\right\}=\max\left\{\bar\tau_l^{(n)},\tau_{l+1}^{(n)}\right\},$$
which says that
$$\tau_l^{(n)}\ge\tau_{l+1}^{(n)}.\tag{16}$$
It is obvious that
$$\bar\tau_l^{(n)}\le\tau_l^{(n)}\le\tau\tag{17}$$
for any n N and l = 1 , , n .
Proposition 1.
The following statements hold true.
(i)
Let
$$w_l^{(n)}=\frac{\tau_l^{(n)}}{\sigma}\cdot\left(1+\frac{s_l^{(n)}\cdot\nu}{\sigma}\right)^{n-l}\ge 0,\tag{18}$$
where σ is given in (7). We define w ˘ l i ( n ) = w l ( n ) for i = 1 , , p and l = 1 , , n , and consider the following vector
$$\breve w^{(n)}=\left(\breve w_1^{(n)},\breve w_2^{(n)},\dots,\breve w_n^{(n)}\right)\quad\text{with}\quad\breve w_l^{(n)}=\left(\breve w_{l1}^{(n)},\breve w_{l2}^{(n)},\dots,\breve w_{lp}^{(n)}\right).$$
Then w ˘ ( n ) is a feasible solution of problem ( D n ) . Moreover, we have
$$\breve w_{li}^{(n)}\le\frac{\tau}{\sigma}\cdot\exp\left(\frac{r\cdot T\cdot\nu}{\sigma}\right)\tag{19}$$
for all n N , i = 1 , , p and l = 1 , , n .
(ii)
Given a feasible solution w ( n ) of problem ( D n ) , we define
$$\bar w_{li}^{(n)}=\min\left\{w_{li}^{(n)},\,w_l^{(n)}\right\}$$
for i = 1 , , p and l = 1 , , n , and consider the following vector
$$\bar w^{(n)}=\left(\bar w_1^{(n)},\bar w_2^{(n)},\dots,\bar w_n^{(n)}\right)\quad\text{with}\quad\bar w_l^{(n)}=\left(\bar w_{l1}^{(n)},\bar w_{l2}^{(n)},\dots,\bar w_{lp}^{(n)}\right).$$
Then, w ¯ ( n ) is a feasible solution of problem ( D n ) satisfying the following inequalities
$$\bar w_{li}^{(n)}\le w_l^{(n)}\le\frac{\tau}{\sigma}\cdot\exp\left(\frac{r\cdot T\cdot\nu}{\sigma}\right)$$
for all n N , i = 1 , , p and l = 1 , , n . Moreover, if each c l i ( n ) is non-negative and w ( n ) is an optimal solution of problem ( D n ) , then w ¯ ( n ) is also an optimal solution of problem ( D n ) .
Proof. 
To prove part (i), by (6), for each $j$ there exists $i_j\in\{1,2,\dots,p\}$ such that $B_{i_jj}^{(0)}>0$ for $j\notin I_{i_j}(B)$ or $B_{i_jj}^{(0)}+\widehat B_{i_jj}>0$ for $j\in I_{i_j}(B)$. Since $0<\sigma\le B_{i_jj}^{(0)}$ for $j\notin I_{i_j}(B)$ or $0<\sigma\le B_{i_jj}^{(0)}+\widehat B_{i_jj}$ for $j\in I_{i_j}(B)$ by (7), we have
$$\begin{aligned}\sum_{i=1}^{p}B_{ij}^{(0)}\cdot\breve w_{li}^{(n)}+\sum_{\{i:j\in I_i(B)\}}\widehat B_{ij}\cdot\breve w_{li}^{(n)}&\ge\begin{cases}B_{i_jj}^{(0)}\cdot\breve w_{li_j}^{(n)}&\text{for }j\notin I_{i_j}(B)\\\left(B_{i_jj}^{(0)}+\widehat B_{i_jj}\right)\cdot\breve w_{li_j}^{(n)}&\text{for }j\in I_{i_j}(B)\end{cases}\\&=\begin{cases}B_{i_jj}^{(0)}\cdot\dfrac{\tau_l^{(n)}}{\sigma}\left(1+\dfrac{s_l^{(n)}\nu}{\sigma}\right)^{n-l}&\text{for }j\notin I_{i_j}(B)\\\left(B_{i_jj}^{(0)}+\widehat B_{i_jj}\right)\cdot\dfrac{\tau_l^{(n)}}{\sigma}\left(1+\dfrac{s_l^{(n)}\nu}{\sigma}\right)^{n-l}&\text{for }j\in I_{i_j}(B)\end{cases}\\&\ge\tau_l^{(n)}\cdot\left(1+\frac{s_l^{(n)}\nu}{\sigma}\right)^{n-l}.\end{aligned}\tag{20}$$
Since
$$\begin{aligned}&\sum_{i=1}^{p}d_k^{(n)}\cdot K_{ij}^{(0)}\cdot\breve w_{ki}^{(n)}-\sum_{\{i:j\in I_i(K)\}}d_k^{(n)}\cdot\widehat K_{ij}\cdot\breve w_{ki}^{(n)}\\&\quad\le\left(\sum_{i=1}^{p}K_{ij}^{(0)}-\sum_{\{i:j\in I_i(K)\}}\widehat K_{ij}\right)\cdot s_k^{(n)}\cdot\frac{\tau_k^{(n)}}{\sigma}\left(1+\frac{s_k^{(n)}\nu}{\sigma}\right)^{n-k}\quad(\text{by }(13))\\&\quad\le\frac{s_k^{(n)}\cdot\nu\cdot\tau_k^{(n)}}{\sigma}\left(1+\frac{s_k^{(n)}\nu}{\sigma}\right)^{n-k}\quad(\text{by }(15)),\end{aligned}\tag{21}$$
it follows that, for l = 1 , , n 1 ,
$$\begin{aligned}&a_{lj}^{(n)}+\sum_{k=l+1}^{n}\sum_{i=1}^{p}d_k^{(n)}\cdot K_{ij}^{(0)}\cdot\breve w_{ki}^{(n)}-\sum_{k=l+1}^{n}\sum_{\{i:j\in I_i(K)\}}d_k^{(n)}\cdot\widehat K_{ij}\cdot\breve w_{ki}^{(n)}\\&\quad\le\tau_l^{(n)}+\sum_{k=l+1}^{n}\frac{s_k^{(n)}\cdot\nu\cdot\tau_k^{(n)}}{\sigma}\left(1+\frac{s_k^{(n)}\nu}{\sigma}\right)^{n-k}\quad(\text{by }(14)\text{ and }(21))\\&\quad\le\tau_l^{(n)}+\sum_{k=l+1}^{n}\frac{s_l^{(n)}\cdot\nu\cdot\tau_l^{(n)}}{\sigma}\left(1+\frac{s_l^{(n)}\nu}{\sigma}\right)^{n-k}\quad(\text{by }(13)\text{ and }(16))\\&\quad=\tau_l^{(n)}\cdot\left(1+\sum_{k=l+1}^{n}\frac{s_l^{(n)}\nu}{\sigma}\left(1+\frac{s_l^{(n)}\nu}{\sigma}\right)^{n-k}\right)=\tau_l^{(n)}\cdot\left(1+\frac{s_l^{(n)}\nu}{\sigma}\right)^{n-l}.\end{aligned}\tag{22}$$
Therefore, from (20), we obtain
$$\sum_{i=1}^{p}B_{ij}^{(0)}\cdot\breve w_{li}^{(n)}+\sum_{\{i:j\in I_i(B)\}}\widehat B_{ij}\cdot\breve w_{li}^{(n)}\ge a_{lj}^{(n)}+\sum_{k=l+1}^{n}\sum_{i=1}^{p}d_k^{(n)}\cdot K_{ij}^{(0)}\cdot\breve w_{ki}^{(n)}-\sum_{k=l+1}^{n}\sum_{\{i:j\in I_i(K)\}}d_k^{(n)}\cdot\widehat K_{ij}\cdot\breve w_{ki}^{(n)}$$
for l = 1 , , n 1 and
$$\sum_{i=1}^{p}B_{ij}^{(0)}\cdot\breve w_{ni}^{(n)}+\sum_{\{i:j\in I_i(B)\}}\widehat B_{ij}\cdot\breve w_{ni}^{(n)}\ge\tau_n^{(n)}\ge a_{nj}^{(n)},$$
which shows that $\breve w^{(n)}$ is indeed a feasible solution of problem $(\mathrm D_n)$. On the other hand, for $l=1,\dots,n$, from (9), (13) and (17), we have
$$\frac{\tau_l^{(n)}}{\sigma}\left(1+\frac{s_l^{(n)}\nu}{\sigma}\right)^{n-l}\le\frac{\tau}{\sigma}\left(1+\frac{\|P_n\|\,\nu}{\sigma}\right)^{n}\le\frac{\tau}{\sigma}\left(1+\frac{T\,\nu}{\sigma\,n^*}\right)^{n}\le\frac{\tau}{\sigma}\left(1+\frac{r\,T\,\nu}{\sigma\,n}\right)^{n}.$$
Since
$$\left(1+\frac{r\,T\,\nu}{\sigma\,n}\right)^{n}\le\exp\left(\frac{r\,T\,\nu}{\sigma}\right)\ \text{ for all }n,\ \text{with convergence as }n_*\to\infty\ \text{(i.e., }n\to\infty\text{)},$$
this proves (19).
To prove part (ii), for each j = 1 , , q and l = 1 , , n , we define the index set
$$I_{lj}=\left\{i:B_{ij}^{(0)}+\widehat B_{ij}>0\ \text{and}\ j\in I_i(B)\right\}\cup\left\{i:B_{ij}^{(0)}>0\ \text{and}\ j\notin I_i(B)\right\}.$$
Then, $I_{lj}\ne\emptyset$ by (6). For each $l=1,\dots,n$, we see that
$$\sum_{i=1}^{p}B_{ij}^{(0)}\cdot\bar w_{li}^{(n)}+\sum_{\{i:j\in I_i(B)\}}\widehat B_{ij}\cdot\bar w_{li}^{(n)}=\sum_{i\in I_{lj}}B_{ij}^{(0)}\cdot\bar w_{li}^{(n)}+\sum_{\{i\in I_{lj}:j\in I_i(B)\}}\widehat B_{ij}\cdot\bar w_{li}^{(n)}.\tag{23}$$
For each fixed l = 1 , , n , we also define the index set
$$\bar I_{lj}=\left\{i\in I_{lj}:\bar w_{li}^{(n)}=w_l^{(n)}\right\}.$$
For each fixed j = 1 , , q and l = 1 , , n , we consider the following two cases.
  • Suppose that I ¯ l j ≠ ∅ , i.e., there exists i l j such that B i l j j ( 0 ) > 0 for j I i l j ( B ) or B i l j j ( 0 ) + B ^ i l j j > 0 for j I i l j ( B ) , and w ¯ l i l j ( n ) = w l ( n ) = w ˘ l i l j ( n ) . In this case, by (20), for l = 1 , , n , we have
    i = 1 p B i j ( 0 ) · w ¯ l i ( n ) + { i : j I i ( B ) } B ^ i j · w ¯ l i ( n ) B i l j j ( 0 ) · w ¯ l i l j ( n ) for j I i l j ( B ) B i l j j ( 0 ) + B ^ i l j j · w ¯ l i l j ( n ) for j I i l j ( B ) = B i l j j ( 0 ) · w ˘ l i l j ( n ) for j I i l j ( B ) B i l j j ( 0 ) + B ^ i l j j · w ˘ l i l j ( n ) for j I i l j ( B ) τ l ( n ) · 1 + s l ( n ) · ν σ n l .
    Since w ¯ k i ( n ) w l ( n ) = w ˘ k i ( n ) for each k, by referring to (22), for l = 1 , , n 1 , we also have
    a l j ( n ) + k = l + 1 n i = 1 p d k ( n ) · K i j ( 0 ) · w ¯ k i ( n ) k = l + 1 n { i : j I i ( K ) } d k ( n ) · K ^ i j · w ¯ k i ( n ) k = l + 1 n i = 1 p d k ( n ) · K i j ( 0 ) · w ˘ k i ( n ) k = l + 1 n { i : j I i ( K ) } d k ( n ) · K ^ i j · w ˘ k i ( n ) τ l ( n ) · 1 + s l ( n ) · ν σ n l ,
    which implies, by (25),
    a l j ( n ) + k = l + 1 n i = 1 p d k ( n ) · K i j ( 0 ) · w ¯ k i ( n ) k = l + 1 n { i : j I i ( K ) } d k ( n ) · K ^ i j · w ¯ k i ( n ) i = 1 p B i j ( 0 ) · w ¯ l i ( n ) + { i : j I i ( B ) } B ^ i j · w ¯ l i ( n ) .
    For l = n , from (25), we also have
    i = 1 p B i j ( 0 ) · w ¯ n i ( n ) + { i : j I i ( B ) } B ^ i j · w ¯ n i ( n ) τ n ( n ) a n j ( n ) .
  • Suppose that I ¯ l j = ∅ , i.e., if i I l j then w ¯ l i ( n ) = w l i ( n ) for l = 1 , , n . In this case, for l = 1 , , n 1 , we have
    i = 1 p B i j ( 0 ) · w ¯ l i ( n ) + { i : j I i ( B ) } B ^ i j · w ¯ l i ( n ) k = l + 1 n i = 1 p d k ( n ) · K i j ( 0 ) · w ¯ k i ( n ) k = l + 1 n { i : j I i ( K ) } d k ( n ) · K ^ i j · w ¯ k i ( n )
    = i I l j B i j ( 0 ) · w ¯ l i ( n ) + { i I l j : j I i ( B ) } B ^ i j ( 0 ) · w ¯ l i ( n ) k = l + 1 n i = 1 p d k ( n ) · K i j ( 0 ) · w ¯ k i ( n ) k = l + 1 n { i : j I i ( K ) } d k ( n ) · K ^ i j · w ¯ k i ( n ) ( by   ( 23 ) )
    = i I l j B i j ( 0 ) · w l i ( n ) + { i I l j : j I i ( B ) } B ^ i j ( 0 ) · w l i ( n ) k = l + 1 n i = 1 p d k ( n ) · K i j ( 0 ) · w ¯ k i ( n ) k = l + 1 n { i : j I i ( K ) } d k ( n ) · K ^ i j · w ¯ k i ( n )
    = i = 1 p B i j ( 0 ) · w l i ( n ) + { i : j I i ( B ) } B ^ i j · w l i ( n ) k = l + 1 n i = 1 p d k ( n ) · K i j ( 0 ) · w ¯ k i ( n ) k = l + 1 n { i : j I i ( K ) } d k ( n ) · K ^ i j · w ¯ k i ( n )
    i = 1 p B i j ( 0 ) · w l i ( n ) + { i : j I i ( B ) } B ^ i j · w l i ( n ) k = l + 1 n i = 1 p d k ( n ) · K i j ( 0 ) · w k i ( n ) k = l + 1 n { i : j I i ( K ) } d k ( n ) · K ^ i j · w k i ( n ) ( since w k i ( n ) w ¯ k i ( n ) and d k ( n ) · K i j 0 ) a l j ( n ) ( since w ( n ) is a feasible solution of problem ( D n ) ) .
    For l = n , by (23) again, we also have
    i = 1 p B i j ( 0 ) · w ¯ n i ( n ) + { i : j I i ( B ) } B ^ i j · w ¯ n i ( n ) = i I n j B i j ( 0 ) · w ¯ n i ( n ) + { i I n j : j I i ( B ) } B ^ i j ( 0 ) · w n i ( n ) = i I n j B i j ( 0 ) · w ¯ n i ( n ) + { i I n j : j I i ( B ) } B ^ i j ( 0 ) · w n i ( n ) = i = 1 p B i j ( 0 ) · w n i ( n ) + { i : j I i ( B ) } B ^ i j · w n i ( n ) a n j ( n ) ( by the feasibility of w ( n ) ) .
Therefore, from the above two cases, we conclude that w ¯ ( n ) is indeed a feasible solution of problem ( D n ) . Since problem ( D n ) is a minimization problem and
l = 1 n i = 1 p d l ( n ) · c l i ( n ) · w ¯ l i ( n ) l = 1 n i = 1 p d l ( n ) · c l i ( n ) · w l i ( n ) ,
it follows that if w ( n ) is an optimal solution of problem ( D n ) , then w ¯ ( n ) is also an optimal solution of problem ( D n ) . This completes the proof. □
Proposition 2.
Suppose that the primal problem ( P n ) is feasible with a feasible solution z ( n ) = ( z 1 ( n ) , z 2 ( n ) , , z n ( n ) ) , where z l ( n ) = ( z l 1 ( n ) , z l 2 ( n ) , , z l q ( n ) ) for l = 1 , , n . Then
0 z l j ( n ) ζ σ · 1 + P n · ϕ σ l 1
and
0 z l j ( n ) ζ σ · exp r · T · ϕ σ
for all j = 1 , , q , l = 1 , , n and n N .
Proof. 
By (6), for each j, there exists i j { 1 , 2 , , p } such that B i j j ( 0 ) > 0 for j I i j ( B ) or B i j j ( 0 ) + B ^ i j j > 0 for j I i j ( B ) ; that is, 0 < σ ≤ B i j j ( 0 ) for j I i j ( B ) or 0 < σ ≤ B i j j ( 0 ) + B ^ i j j for j I i j ( B ) . If ϕ = 0 , then K is a zero matrix. In this case, using the feasibility of z ( n ) , we have
0 σ · z l j ( n ) B i j j ( 0 ) · z l j ( n ) for j I i j ( B ) B i j j ( 0 ) + B ^ i j j · z l j ( n ) for j I i j ( B ) s = 1 q B i j s ( 0 ) · z l s ( n ) + { s : s I i ( B ) } B ^ i j s · z l s ( n ) c l i j ( n ) ζ
which implies
0 z l j ( n ) ζ σ .
For the case of ϕ 0 , we want to show that
z l j ( n ) ζ σ · 1 + P n · ϕ σ l 1
for all j = 1 , , q and l = 1 , , n . We shall prove it by induction on l. Since z ( n ) is a feasible solution of problem ( P n ) , for l = 1 , we have
B i j j ( 0 ) · z 1 j ( n ) for j I i j ( B ) B i j j ( 0 ) + B ^ i j j · z 1 j ( n ) for j I i j ( B ) s = 1 q B i j s ( 0 ) · z 1 s ( n ) + { s : s I i ( B ) } B ^ i j s · z 1 s ( n ) c 1 i j ( n ) ,
which implies that, for each j , we obtain
z 1 j ( n ) c 1 i j ( n ) B i j j ( 0 ) for j I i j ( B ) c 1 i j ( n ) B i j j ( 0 ) + B ^ i j j for j I i j ( B ) ζ σ .
Suppose that
z l j ( n ) ζ σ · 1 + P n · ϕ σ l 1
for l = 1 , 2 , , r 1 . Then, for each j, we have
l = 1 r 1 z l j ( n ) l = 1 r 1 ζ σ · 1 + P n · ϕ σ l 1 = ζ ϕ · P n · 1 + P n · ϕ σ r 1 1 .
By the feasibility of z ( n ) , we have
B i j j ( 0 ) · z r j ( n ) for j I i j ( B ) B i j j ( 0 ) + B ^ i j j · z r j ( n ) for j I i j ( B ) s = 1 q B i j s ( 0 ) · z r s ( n ) + { s : s I i ( B ) } B ^ i j s · z r s ( n )
c r i j ( n ) + k = 1 r 1 s = 1 q d k ( n ) · K i j s ( 0 ) · z k s ( n ) k = 1 r 1 { s : s I i ( K ) } d k ( n ) · K ^ i j s · z k s ( n )
c r i j ( n ) + P n · s = 1 q K i j s { s : s I i ( K ) } K ^ i j s · ζ ϕ · P n · 1 + P n · ϕ σ r 1 1 ( by   ( 27 ) )
ζ + ζ · 1 + P n · ϕ σ r 1 1 ,
which implies
z r j ( n ) ζ B i j j ( 0 ) · 1 + P n · ϕ σ r 1 for j I i j ( B ) ζ B i j j ( 0 ) + B ^ i j j · 1 + P n · ϕ σ r 1 for j I i j ( B ) ζ σ · 1 + P n · ϕ σ r 1 .
Therefore, by induction, we obtain
z l j ( n ) ζ σ · 1 + P n · ϕ σ l 1 ζ σ · 1 + P n · ϕ σ n
for all j = 1 , , q and l = 1 , , n . From (9), since
ζ σ · 1 + P n · ϕ σ n ζ σ · 1 + T n * · ϕ σ n ζ σ · 1 + r · T n · ϕ σ n
and
ζ σ · 1 + r · T n · ϕ σ n → ζ σ · exp r · T · ϕ σ as n → ∞ ,
using (28), we complete the proof. □
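The induction bound above and the limit in (28) can be checked numerically: when the mesh size satisfies P n ≤ r T / n, the discrete bound ( ζ / σ ) ( 1 + P n · ϕ / σ ) n increases toward the exponential bound ( ζ / σ ) exp ( r T ϕ / σ ). The constants in the following sketch are illustrative placeholders, not data from the paper.

```python
import math

# Illustrative constants (placeholders, not from the paper's problem data)
zeta, sigma, phi, r, T = 2.0, 0.5, 0.3, 1.0, 4.0

def discrete_bound(n):
    # (zeta/sigma) * (1 + ||P_n|| * phi / sigma)^n with ||P_n|| = r*T/n,
    # the largest mesh size allowed by (9)
    P_norm = r * T / n
    return (zeta / sigma) * (1.0 + P_norm * phi / sigma) ** n

# the limiting exponential bound used in Proposition 2
limit = (zeta / sigma) * math.exp(r * T * phi / sigma)

for n in (10, 100, 1000, 10000):
    assert discrete_bound(n) <= limit  # (1 + x/n)^n increases to exp(x)
```

Refining the partition therefore never breaks the uniform bound, which is what makes the step-function constructions below well behaved as n grows.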
Let z ¯ ( n ) = ( z ¯ 1 ( n ) , z ¯ 2 ( n ) , , z ¯ n ( n ) ) with z ¯ l ( n ) = ( z ¯ l 1 ( n ) , z ¯ l 2 ( n ) , , z ¯ l q ( n ) ) be a feasible solution of problem ( P n ) . For each j = 1 , , q , we construct the step function z ^ j ( n ) : [ 0 , T ] R as follows:
z ^ j ( n ) ( t ) = z ¯ l j ( n ) if t F l ( n ) for l = 1 , , n z ¯ n j ( n ) if t = T .
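In finite dimensions, the construction (29) amounts to piecewise-constant interpolation of the discrete solution. The sketch below assumes a uniform partition for simplicity; the partitions P n in the paper need not be uniform.

```python
import numpy as np

def make_step_function(z_bar, T):
    """Sketch of (29): z_bar is an (n, q) array of values z_bar[l][j];
    the returned z_hat(t, j) is constant on each half-open subinterval
    F_l^(n) and takes the last value at t = T."""
    n, q = z_bar.shape
    def z_hat(t, j):
        if t >= T:
            return z_bar[n - 1, j]
        l = int(n * t / T)   # subinterval index for a uniform partition
        return z_bar[l, j]
    return z_hat

# a small example with n = 3 subintervals and q = 2 components
z_bar = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
z_hat = make_step_function(z_bar, T=3.0)
```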
Then we have the following feasibility.
Proposition 3.
Let z ¯ ( n ) = ( z ¯ 1 ( n ) , z ¯ 2 ( n ) , , z ¯ n ( n ) ) be a feasible solution of the primal problem ( P n ) , where z ¯ l ( n ) = ( z ¯ l 1 ( n ) , z ¯ l 2 ( n ) , , z ¯ l q ( n ) ) for l = 1 , , n . Then the vector-valued step function ( z ^ 1 ( n ) , , z ^ q ( n ) ) defined in ( 29 ) is a feasible solution of problem ( RCLP 3 ) .
Proof. 
Since z ¯ ( n ) is a feasible solution of problem ( P n ) , it follows that
j = 1 q B i j ( 0 ) · z ¯ 1 j + { j : j I i ( B ) } B ^ i j · z ¯ 1 j c 1 i ( n ) for i = 1 , , p
and
j = 1 q B i j ( 0 ) · z ¯ l j + { j : j I i ( B ) } B ^ i j · z ¯ l j c l i ( n ) + k = 1 l 1 j = 1 q d k ( n ) · K i j ( 0 ) · z ¯ k j k = 1 l 1 { j : j I i ( K ) } d k ( n ) · K ^ i j · z ¯ k j for l = 2 , , n and i = 1 , , p
Now, we consider the following two cases.
  • Suppose that t F l ( n ) for l = 2 , , n . Recall that e l 1 ( n ) is the left endpoint of the closed interval E ¯ l ( n ) . Then we have
    j = 1 q B i j ( 0 ) · z ^ j ( n ) ( t ) + { j : j I i ( B ) } B ^ i j · z ^ j ( n ) ( t ) j = 1 q K i j ( 0 ) · 0 t z ^ j ( n ) ( s ) d s + { j : j I i ( K ) } K ^ i j · 0 t z ^ j ( n ) ( s ) d s
    = j = 1 q B i j ( 0 ) · z ^ j ( n ) ( t ) + { j : j I i ( B ) } B ^ i j · z ^ j ( n ) ( t ) j = 1 q k = 1 l 1 E ¯ k ( n ) K i j ( 0 ) · z ^ j ( n ) ( s ) d s j = 1 q e l 1 ( n ) t K i j ( 0 ) · z ^ j ( n ) ( s ) d s + { j : j I i ( K ) } k = 1 l 1 E ¯ k ( n ) K ^ i j · z ^ j ( n ) ( s ) d s + { j : j I i ( K ) } e l 1 ( n ) t K ^ i j · z ^ j ( n ) ( s ) d s
    j = 1 q B i j ( 0 ) · z ^ j ( n ) ( t ) + { j : j I i ( B ) } B ^ i j · z ^ j ( n ) ( t ) j = 1 q k = 1 l 1 E ¯ k ( n ) K i j ( 0 ) · z ^ j ( n ) ( s ) d s + { j : j I i ( K ) } k = 1 l 1 E ¯ k ( n ) K ^ i j · z ^ j ( n ) ( s ) d s ( since K i j ( 0 ) 0 for all i and j , and K i j ( 0 ) K ^ i j 0 for j I i ( K ) )
    = j = 1 q B i j ( 0 ) · z ¯ l j + { j : j I i ( B ) } B ^ i j · z ¯ l j k = 1 l 1 j = 1 q d k ( n ) · K i j ( 0 ) · z ¯ k j + k = 1 l 1 { j : j I i ( K ) } d k ( n ) · K ^ i j · z ¯ k j c l i ( n ) ( by   ( 31 ) ) c i ( 0 ) ( t ) c ^ i ( t ) for i I ( c ) c i ( 0 ) ( t ) for i I ( c ) ( by   ( 11 ) )
    For l = 1 , the desired inequality can be similarly obtained by (30).
  • Suppose that t = T . Then, we can similarly show that
    j = 1 q B i j ( 0 ) · z ^ j ( n ) ( T ) + { j : j I i ( B ) } B ^ i j · z ^ j ( n ) ( T ) j = 1 q K i j ( 0 ) · 0 T z ^ j ( n ) ( s ) d s + { j : j I i ( K ) } K ^ i j · 0 T z ^ j ( n ) ( s ) d s c n i ( n ) c i ( 0 ) ( T ) c ^ i ( T ) for i I ( c ) c i ( 0 ) ( T ) for i I ( c ) ( by   ( 11 ) )
Therefore, we conclude that ( z ^ 1 ( n ) , , z ^ q ( n ) ) is indeed a feasible solution of problem (RCLP3). This completes the proof. □
Given any optimization problem (P), we denote by V ( P ) the optimal objective value of problem (P). For example, the optimal objective value of problem (RCLP3) is denoted by V ( RCLP 3 ) . Now we assume that z ¯ ( n ) is an optimal solution of problem ( P n ) . Then we can also construct a feasible solution ( z ^ 1 ( n ) , , z ^ q ( n ) ) of problem ( RCLP 3 ) according to (29). Then, using (11), we have
j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j ( t ) d t = j = 1 q l = 1 n E ¯ l ( n ) a j ( 0 ) ( t ) · z ¯ l j ( n ) d t j I ( a ) l = 1 n E ¯ l ( n ) a ^ j ( t ) · z ¯ l j ( n ) d t j = 1 q l = 1 n E ¯ l ( n ) a l j ( n ) · z ¯ l j ( n ) d t = j = 1 q l = 1 n d l ( n ) · a l j ( n ) · z ¯ l j ( n ) = V ( P n ) .
Therefore, we have
V ( RCLP 3 ) j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j ( t ) d t ( by Proposition 3 ) V ( P n ) ( by   ( 32 ) ) .
Using the weak duality theorem for the primal and dual pair of problems (DRCLP3) and (RCLP3), we see that
V ( DRCLP 3 ) V ( RCLP 3 ) V ( P n ) = V ( D n ) .
Next, we want to show that
lim n V ( D n ) = V ( DRCLP 3 ) .
Let w ( n ) = ( w 1 ( n ) , w 2 ( n ) , , w n ( n ) ) with w l ( n ) = ( w l 1 ( n ) , w l 2 ( n ) , , w l p ( n ) ) be an optimal solution of problem ( D n ) . We define
w ¯ l i ( n ) = min w l i ( n ) , w l ( n )
for i = 1 , , p and l = 1 , , n , where w l ( n ) is defined in (18), and consider the following vector
w ¯ ( n ) = w ¯ 1 ( n ) , w ¯ 2 ( n ) , , w ¯ n ( n ) with w ¯ l ( n ) = w ¯ l 1 ( n ) , w ¯ l 2 ( n ) , , w ¯ l p ( n )
Then, according to part (ii) of Proposition 1, we see that w ¯ ( n ) is an optimal solution of problem ( D n ) satisfying the following inequalities
w ¯ l i ( n ) w l ( n ) τ σ · exp r · T · ν σ
for all n N , i = 1 , , p and l = 1 , , n .
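The truncation (34) is a componentwise clipping of the discrete optimal solution at the levels w l ( n ); by part (ii) of Proposition 1, the clipped vector remains optimal. A minimal sketch:

```python
import numpy as np

def truncate_dual(w, levels):
    """Sketch of (34): w is an (n, p) array of the optimal values
    w_l_i^(n); levels is a length-n array of the bounds w_l(n); each
    row l is clipped at levels[l]."""
    return np.minimum(np.asarray(w), np.asarray(levels)[:, None])

w_clipped = truncate_dual([[1.0, 5.0], [3.0, 2.0]], [2.0, 4.0])
```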
For each l = 1 , , n and j = 1 , , q , we define a real-valued function h ¯ l j ( n ) on the half-open interval F l ( n ) by
h ¯ l j ( n ) ( t ) = e l ( n ) t · i = 1 p K i j ( 0 ) · w ¯ l i ( n ) { i : j I i ( K ) } K ^ i j · w ¯ l i ( n ) ,
where e l ( n ) is defined in (8). For l = 1 , , n , let
π ¯ l ( n ) = max max j I ( a ) sup t E l ( n ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a l j ( n ) , max j I ( a ) sup t E l ( n ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n )
and
π l ( n ) = max k = l , , n π ¯ k ( n ) .
Then, we have
π l ( n ) = max π ¯ l ( n ) , π ¯ l + 1 ( n ) , , π ¯ n ( n ) = max π ¯ l ( n ) , π l + 1 ( n )
which says that
π l ( n ) π l + 1 ( n )
and
π l ( n ) π ¯ l ( n ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a l j ( n ) for j I ( a ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a ^ j a l j ( n ) for j I ( a )
for l = 1 , , n 1 . We want to prove
lim n π ¯ l ( n ) = 0 = lim n π l ( n ) .
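The quantities π l ( n ) in (38) are suffix maxima of the π ¯ k ( n ), so the recursion π l ( n ) = max ( π ¯ l ( n ) , π l + 1 ( n ) ) computes all of them in one backward pass. A short sketch, with pibar standing for the values π ¯ 1 ( n ) , … , π ¯ n ( n ):

```python
def suffix_max(pibar):
    """Compute pi_l = max(pibar_l, ..., pibar_n) for all l via the
    backward recursion pi_l = max(pibar_l, pi_{l+1})."""
    pi = list(pibar)
    for l in range(len(pi) - 2, -1, -1):
        pi[l] = max(pi[l], pi[l + 1])
    return pi
```

By construction the output is nonincreasing in l, which is exactly the monotonicity property noted above.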
We first provide some useful lemmas.
Lemma 1.
For i = 1 , , p , j = 1 , , q and l = 1 , , n , we have
sup t E l ( n ) a j ( 0 ) ( t ) a l j ( n ) 0 for j I ( a ) sup t E l ( n ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) 0 for j I ( a ) as n
and
sup t E l ( n ) c i ( 0 ) ( t ) c l i ( n ) 0 for i I ( c ) sup t E l ( n ) c i ( 0 ) ( t ) c ^ i ( t ) c l i ( n ) 0 for i I ( c ) as n .
Proof. 
According to the construction of partition P n , we see that a j ( 0 ) for j I ( a ) , a j ( 0 ) a ^ j for j I ( a ) , c i ( 0 ) for i I ( c ) and c i ( 0 ) c ^ i for i I ( c ) are continuous on the open interval E l ( n ) = ( e l 1 ( n ) , e l ( n ) ) . It is not hard to obtain the desired results. □
Lemma 2.
For each l = 1 , , n , we have
lim n π ¯ l ( n ) = 0 = lim n π l ( n ) .
Proof. 
It suffices to prove
sup t E l ( n ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a l j ( n ) 0 for j I ( a ) sup t E l ( n ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) 0 for j I ( a ) as n .
From (9), since
d l ( n ) P n r · T n → 0 as n → ∞
and w ¯ l i ( n ) is bounded according to (35), it follows that, given any fixed j = 1 , , q ,
d l ( n ) · K i j ( 0 ) · w ¯ l i ( n ) 0 for j I i ( K ) d l ( n ) · K i j ( 0 ) K ^ i j · w ¯ l i ( n ) 0 for j I i ( K ) as n .
Now, for j I ( a ) , we have
0 sup t E l ( n ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a l j ( n ) sup t E l ( n ) h ¯ l j ( n ) ( t ) + sup t E l ( n ) a j ( 0 ) ( t ) a l j ( n ) d l ( n ) · i = 1 p K i j ( 0 ) · w ¯ l i ( n ) { i : j I i ( K ) } K ^ i j · w ¯ l i ( n ) + sup t E l ( n ) a j ( 0 ) ( t ) a l j ( n )
and, for j I ( a ) , we have
0 sup t E l ( n ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) sup t E l ( n ) h ¯ l j ( n ) ( t ) + sup t E l ( n ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) d l ( n ) · i = 1 p K i j ( 0 ) · w ¯ l i ( n ) { i : j I i ( K ) } K ^ i j · w ¯ l i ( n ) + sup t E l ( n ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) .
Using Lemma 1, we complete the proof. □
We define
σ * = min j = 1 , , q i = 1 p B i j ( 0 ) + { i : j I i ( B ) } B ^ i j .
From (7), we see that 0 < σ σ * . Also, from (35), we see that the sequence { h ¯ l j ( n ) } n = 1 of functions is uniformly bounded, which also says that { π l ( n ) } n = 1 is uniformly bounded. Therefore there exists a constant x such that π l ( n ) x for all n N and l = 1 , , n . Now, we define a real-valued function p ( n ) on [ 0 , T ] by
p ( n ) ( t ) = x , if t = e l 1 ( n ) for l = 1 , , n π l ( n ) , if t E l ( n ) for l = 1 , , n max max j I ( a ) a j ( 0 ) ( T ) a n j ( n ) , max j I ( a ) a j ( 0 ) ( T ) a ^ j ( T ) a n j ( n ) , if t = e n ( n ) = T ,
and a real-valued function f ( n ) : [ 0 , T ] R + by
f ( n ) ( t ) = p ( n ) ( t ) σ * · exp ν · ( T t ) σ * .
The following lemma will be used for further discussion.
Lemma 3.
We have
f ( n ) ( T ) · i = 1 p B i j ( 0 ) + { i : j I i ( B ) } B ^ i j a j ( T ) a n j ( n ) for j I ( a ) a j ( T ) a ^ j ( T ) a n j ( n ) for j I ( a )
and, for t F l ( n ) and l = 1 , , n ,
f ( n ) ( t ) · i = 1 p B i j ( 0 ) + { i : j I i ( B ) } B ^ i j h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a l j ( n ) + t T f ( n ) ( s ) · i = 1 p K i j ( 0 ) { i : j I i ( K ) } K ^ i j d s for j I ( a ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) + t T f ( n ) ( s ) · i = 1 p K i j ( 0 ) { i : j I i ( K ) } K ^ i j d s for j I ( a ) .
Moreover, the sequence of real-valued functions { f ( n ) } n = 1 is uniformly bounded.
Proof. 
For t F l ( n ) , from (41), we have
t T f ( n ) ( s ) d s = t T p ( n ) ( s ) σ * · exp ν · ( T s ) σ * d s = t e l ( n ) π l ( n ) σ * · exp ν · ( T s ) σ * d s + k = l + 1 n E ¯ k ( n ) π k ( n ) σ * · exp ν · ( T s ) σ * d s t e l ( n ) π l ( n ) σ * · exp ν · ( T s ) σ * d s + k = l + 1 n E ¯ k ( n ) π l ( n ) σ * · exp ν · ( T s ) σ * d s ( by   ( 39 ) ) = t T π l ( n ) σ * · exp ν · ( T s ) σ * d s = π l ( n ) ν · exp ν · ( T t ) σ * 1
Since
σ * · f ( n ) ( t ) = x · exp ν · T e l 1 ( n ) σ * for t = e l 1 ( n ) for l = 1 , , n π l ( n ) · exp ν · ( T t ) σ * for t E l ( n ) for l = 1 , , n ,
using (42), it follows that, for t F l ( n ) ,
σ * · f ( n ) ( t ) x · 1 + ν π l ( n ) · e l 1 ( n ) T f ( n ) ( s ) d s for t = e l 1 ( n ) for l = 1 , , n π l ( n ) + ν · t T f ( n ) ( s ) d s for t E l ( n ) for l = 1 , , n π l ( n ) + ν · t T f ( n ) ( s ) d s ( since π l ( n ) x )
For t = e n ( n ) = T , we also have,
σ * · f ( n ) ( T ) = max max j I ( a ) a j ( 0 ) ( T ) a n j ( n ) , max j I ( a ) a j ( 0 ) ( T ) a ^ j ( T ) a n j ( n ) .
For each j = 1 , , q and l = 1 , , n , we consider the following cases.
  • For t = e n ( n ) = T , from (44), we have
    f ( n ) ( T ) · i = 1 p B i j ( 0 ) + { i : j I i ( B ) } B ^ i j σ * · f ( n ) ( T ) a j ( 0 ) ( T ) a n j ( n ) for j I ( a ) a j ( 0 ) ( T ) a ^ j ( T ) a n j ( n ) for j I ( a ) .
  • For t F l ( n ) , by (43) and (40), we have
    f ( n ) ( t ) · i = 1 p B i j ( 0 ) + { i : j I i ( B ) } B ^ i j σ * · f ( n ) ( t ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a l j ( n ) + t T i = 1 p f ( n ) ( s ) · i = 1 p K i j ( 0 ) { i : j I i ( K ) } K ^ i j d s for j I ( a ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) + t T i = 1 p f ( n ) ( s ) · i = 1 p K i j ( 0 ) { i : j I i ( K ) } K ^ i j d s for j I ( a ) .
Finally, it is obvious that the sequence of real-valued functions { f ( n ) } n = 1 is uniformly bounded. This completes the proof. □
For each i = 1 , , p , we define the step function w ^ i ( n ) ( t ) : [ 0 , T ] R by
w ^ i ( n ) ( t ) = w ¯ l i ( n ) + f ( n ) ( t ) if t F l ( n ) for l = 1 , , n w ¯ n i ( n ) + f ( n ) ( T ) if t = T .
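The perturbed dual step function (45) adds f ( n ) ( t ) from (41) to the piecewise-constant values w ¯ l i ( n ). The sketch below assumes a uniform partition and a constant placeholder for p ( n ); both are illustrative assumptions, not the paper's data.

```python
import math

sigma_star, nu, T = 1.0, 0.5, 2.0   # illustrative constants

def f_n(t, p_n=lambda t: 0.1):
    # f^(n)(t) = (p^(n)(t) / sigma*) * exp(nu * (T - t) / sigma*), cf. (41)
    return (p_n(t) / sigma_star) * math.exp(nu * (T - t) / sigma_star)

def make_w_hat(w_bar):
    """w_bar: list of n rows (each of length p) of the values
    w_bar[l][i]; returns the shifted step function of (45)."""
    n = len(w_bar)
    def w_hat(t, i):
        l = n - 1 if t >= T else int(n * t / T)
        return w_bar[l][i] + f_n(t)
    return w_hat

w_hat = make_w_hat([[1.0], [2.0]])   # n = 2 subintervals, p = 1
```

Since f ( n ) ≥ 0, the shift only enlarges the left-hand sides of the dual constraints, which is the mechanism behind Proposition 4.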
Remark 1.
According to (35) and Lemma 3, we see that the family of vector-valued functions { w ^ ( n ) } n N is uniformly bounded.
Proposition 4.
For any n N , the vector-valued step function ( w ^ 1 ( n ) , , w ^ p ( n ) ) is a feasible solution of problem ( DRCLP 3 ) .
Proof. 
For t F l ( n ) and l = 1 , , n , we have
i = 1 p B i j ( 0 ) · w ^ i ( n ) ( t ) + { i : j I i ( B ) } B ^ i j · w ^ i ( n ) ( t ) t T i = 1 p K i j ( 0 ) · w ^ i ( n ) ( s ) { i : j I i ( K ) } K ^ i j · w ^ i ( n ) ( s ) d s
= i = 1 p B i j ( 0 ) · w ¯ l i ( n ) + { i : j I i ( B ) } B ^ i j · w ¯ l i ( n ) t e l ( n ) i = 1 p K i j ( 0 ) · w ¯ l i ( n ) { i : j I i ( K ) } K ^ i j · w ¯ l i ( n ) d s k = l + 1 n E ¯ k ( n ) i = 1 p K i j ( 0 ) · w ¯ k i ( n ) { i : j I i ( K ) } K ^ i j · w ¯ k i ( n ) d s + f ( n ) ( t ) · i = 1 p B i j ( 0 ) + { i : j I i ( B ) } B ^ i j t T f ( n ) ( s ) · i = 1 p K i j ( 0 ) { i : j I i ( K ) } K ^ i j d s
= i = 1 p B i j ( 0 ) · w ^ i ( n ) ( t ) + { i : j I i ( B ) } B ^ i j · w ^ i ( n ) ( t ) k = l + 1 n d k ( n ) · i = 1 p K i j ( 0 ) · w ¯ k i ( n ) { i : j I i ( K ) } K ^ i j · w ¯ k i ( n ) + f ( n ) ( t ) · i = 1 p B i j ( 0 ) + { i : j I i ( B ) } B ^ i j t T f ( n ) ( s ) · i = 1 p K i j ( 0 ) { i : j I i ( K ) } K ^ i j d s h ¯ l j ( n ) ( t )
a l j ( n ) h ¯ l j ( n ) ( t ) + f ( n ) ( t ) · i = 1 p B i j ( 0 ) + { i : j I i ( B ) } B ^ i j t T f ( n ) ( s ) · i = 1 p K i j ( 0 ) { i : j I i ( K ) } K ^ i j d s ( by   the   feasibility   of   w ¯ ( n )   for   problem   ( D n ) )
a l j ( n ) + a j ( 0 ) ( t ) a l j ( n ) for j I ( a ) a l j ( n ) + a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) for j I ( a ) ( by Lemma 3 )
= a j ( 0 ) ( t ) for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) for j I ( a )
Suppose that t = T . Then we have
i = 1 p B i j ( 0 ) · w ^ i ( n ) ( T ) + { i : j I i ( B ) } B ^ i j · w ^ i ( n ) ( T )
= i = 1 p B i j ( 0 ) · w ¯ n i ( n ) + { i : j I i ( B ) } B ^ i j · w ¯ n i ( n ) + f ( n ) ( T ) · i = 1 p B i j ( 0 ) + { i : j I i ( B ) } B ^ i j
a n j ( n ) + f ( n ) ( T ) · i = 1 p B i j ( 0 ) + { i : j I i ( B ) } B ^ i j ( by the feasibility of w ¯ ( n ) )
a n j ( n ) + a j ( 0 ) ( T ) a n j ( n ) for j I ( a ) a n j ( n ) + a j ( 0 ) ( T ) a ^ j ( T ) a n j ( n ) for j I ( a ) ( by Lemma 3 )
= a j ( 0 ) ( T ) for j I ( a ) a j ( 0 ) ( T ) a ^ j ( T ) for j I ( a ) .
Therefore, we conclude that w ^ ( n ) is indeed a feasible solution of problem (DRCLP3), and the proof is complete. □
For each i = 1 , , p and j = 1 , , q , we define the step functions a ¯ j ( n ) : [ 0 , T ] R and c ¯ i ( n ) : [ 0 , T ] R as follows:
a ¯ j ( n ) ( t ) = a l j ( n ) if t F l ( n ) for l = 1 , , n a n j ( n ) if t = T .
and
c ¯ i ( n ) ( t ) = c l i ( n ) if t F l ( n ) for l = 1 , , n c n i ( n ) if t = T ,
respectively. For each i = 1 , , p , we also define the step function w ¯ i ( n ) : [ 0 , T ] R by
w ¯ i ( n ) ( t ) = w ¯ l i ( n ) if t F l ( n ) for l = 1 , , n w ¯ n i ( n ) if t = T .
Lemma 4.
For i = 1 , , p and j = 1 , , q , we have
0 T a j ( 0 ) ( t ) a ¯ j ( n ) ( t ) · z ^ j ( n ) ( t ) d t 0 for j I ( a ) 0 T a j ( 0 ) ( t ) a ^ j ( t ) a ¯ j ( n ) ( t ) · z ^ j ( n ) ( t ) d t 0 for j I ( a ) as n
and
0 T c i ( 0 ) ( t ) c ¯ i ( n ) ( t ) · w ¯ i ( n ) ( t ) d t 0 for i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) c ¯ i ( n ) ( t ) · w ¯ i ( n ) ( t ) d t 0 for i I ( c ) as n .
Proof. 
It is obvious that the following functions
a j ( 0 ) ( t ) a ¯ j ( n ) ( t ) · z ^ j ( n ) ( t ) for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) a ¯ j ( n ) ( t ) · z ^ j ( n ) ( t ) for j I ( a )
and
c i ( 0 ) ( t ) c ¯ i ( n ) ( t ) · w ¯ i ( n ) ( t ) for i I ( c ) c i ( 0 ) ( t ) c ^ i ( t ) c ¯ i ( n ) ( t ) · w ¯ i ( n ) ( t ) for i I ( c )
are continuous a.e. on [ 0 , T ] , i.e., they are Riemann-integrable on [ 0 , T ] . In other words, their Riemann integral and Lebesgue integral are identical. From Lemma 1, we see that
a j ( 0 ) ( t ) a ¯ j ( n ) ( t ) for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) a ¯ j ( n ) ( t ) for j I ( a ) 0 as n   a . e . on [ 0 , T ]
and
c i ( 0 ) ( t ) c ¯ i ( n ) ( t ) for i I ( c ) c i ( 0 ) ( t ) c ^ i ( t ) c ¯ i ( n ) ( t ) for i I ( c ) 0 as n   a . e . on [ 0 , T ] .
Since the set of functions { z ^ j ( n ) } n = 1 for j = 1 , , q is uniformly bounded according to Proposition 2, using the Lebesgue bounded convergence theorem, we obtain (46). On the other hand, since the set of functions { w ¯ i ( n ) } n = 1 for i = 1 , , p is uniformly bounded according to Proposition 1, using the Lebesgue bounded convergence theorem again, we obtain (47). This completes the proof. □
Theorem 1.
The following statements hold true.
(i)
We have
lim sup n V ( D n ) = V ( DRCLP 3 )   and   0 V ( DRCLP 3 ) V ( D n ) ε n ,
where
ε n = V ( D n ) + i = 1 p l = 1 n E ¯ l ( n ) c i ( 0 ) ( t ) · w ¯ l i ( n ) d t i I ( c ) l = 1 n E ¯ l ( n ) c ^ i ( t ) · w ¯ l i ( n ) d t + i = 1 p l = 1 n E ¯ l ( n ) π l ( n ) σ * · exp ν · ( T t ) σ * · c i ( 0 ) ( t ) d t i I ( c ) l = 1 n E ¯ l ( n ) π l ( n ) σ * · exp ν · ( T t ) σ * · c ^ i ( t ) d t
satisfying ε n → 0 as n → ∞ .
(ii)
(No Duality Gap). Suppose that the primal problem ( P n ) is feasible. We have
V ( DRCLP 3 ) = V ( RCLP 3 ) = lim sup n V ( D n ) = lim sup n V ( P n )
and
0 V ( RCLP 3 ) V ( P n ) ε n .
Proof. 
To prove part (i), we have
0 V ( DRCLP 3 ) V ( D n ) ( by   ( 33 ) ) = V ( DRCLP 3 ) l = 1 n i = 1 p E ¯ l ( n ) c l i ( n ) · w ¯ l i ( n ) d t i = 1 p 0 T c i ( 0 ) ( t ) · w ^ i ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ^ i ( t ) d t l = 1 n i = 1 p E ¯ l ( n ) c l i ( n ) · w ¯ l i ( n ) d t ( by Proposition 4 ) = i I ( c ) 0 T c i ( 0 ) ( t ) · w ^ i ( t ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) · w ^ i ( t ) d t l = 1 n i = 1 p E ¯ l ( n ) c l i ( n ) · w ¯ l i ( n ) d t = i I ( c ) l = 1 n E ¯ l ( n ) c i ( 0 ) ( t ) c l i ( n ) · w ¯ l i ( n ) d t + i I ( c ) l = 1 n E ¯ l ( n ) c i ( 0 ) ( t ) c ^ i ( t ) c l i ( n ) · w ¯ l i ( n ) d t + i I ( c ) 0 T f ( n ) ( t ) · c i ( 0 ) ( t ) d t + i I ( c ) 0 T f ( n ) ( t ) · c i ( 0 ) ( t ) c ^ i ( t ) d t = i I ( c ) 0 T c i ( 0 ) ( t ) c ¯ i ( n ) ( t ) · w ¯ i ( n ) ( t ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) c ¯ i ( n ) ( t ) · w ¯ i ( n ) ( t ) d t + i I ( c ) 0 T f ( n ) ( t ) · c i ( 0 ) ( t ) d t + i I ( c ) 0 T f ( n ) ( t ) · c i ( 0 ) ( t ) c ^ i ( t ) d t ε n
Since p ( n ) is continuous a.e. on [ 0 , T ] , it follows that f ( n ) is also continuous a.e. on [ 0 , T ] , which says that f ( n ) is Riemann-integrable on [ 0 , T ] . In other words, the Riemann integral and Lebesgue integral of f ( n ) on [ 0 , T ] are identical. Since π l ( n ) → 0 as n → ∞ by Lemma 2, it follows that p ( n ) → 0 as n → ∞ a.e. on [ 0 , T ] , which implies that f ( n ) → 0 as n → ∞ a.e. on [ 0 , T ] . Applying the Lebesgue bounded convergence theorem for integrals, we obtain
0 T f ( n ) ( t ) · c i ( 0 ) ( t ) d t 0 as n for i I ( c ) 0 T f ( n ) ( t ) · c i ( 0 ) ( t ) c ^ i ( t ) d t 0 as n for i I ( c ) .
Using Lemma 4, we conclude that ε n → 0 as n → ∞ . Also, from (49), we obtain
V ( D n ) V ( DRCLP 3 ) V ( D n ) + ε n ,
which implies
lim sup n V ( D n ) V ( DRCLP 3 ) lim sup n V ( D n ) + ε n lim sup n V ( D n ) + lim sup n ε n = lim sup n V ( D n ) .
It is easy to see that ε n can be written as (48), which proves part (i).
To prove part (ii), by part (i) and inequality (33), we obtain
V ( DRCLP 3 ) V ( RCLP 3 ) lim sup n V ( D n ) = V ( DRCLP 3 ) .
Since V ( D n ) = V ( P n ) for each n N , we also have
V ( DRCLP 3 ) = V ( RCLP 3 ) = lim sup n V ( D n ) = lim sup n V ( P n )
and
0 V ( RCLP 3 ) V ( P n ) = V ( DRCLP 3 ) V ( D n ) ε n .
This completes the proof. □
Proposition 5.
The following statements hold true.
(i)
Suppose that the primal problem ( P n ) is feasible. Let z ^ j ( n ) be defined in ( 29 ) for j = 1 , , q . Then, the error between V ( RCLP 3 ) and the objective value of ( z ^ 1 ( n ) , , z ^ q ( n ) ) is less than or equal to ε n defined in ( 48 ) , i.e.,
0 V ( RCLP 3 ) j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j ( t ) d t ε n .
(ii)
Let w ^ i ( n ) be defined in ( 45 ) for i = 1 , , p . Then, the error between V ( DRCLP 3 ) and the objective value of ( w ^ 1 ( n ) , , w ^ p ( n ) ) is less than or equal to ε n , i.e.,
0 i = 1 p 0 T c i ( 0 ) ( t ) · w ^ i ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ^ i ( t ) d t V ( DRCLP 3 ) ε n
Proof. 
To prove part (i), Proposition 3 says that ( z ^ 1 ( n ) , , z ^ q ( n ) ) is a feasible solution of problem (RCLP3). Since
l = 1 n j = 1 q E ¯ l ( n ) a l j ( n ) · z ^ j ( n ) ( t ) d t = l = 1 n j = 1 q d l ( n ) a l j ( n ) · z ¯ l j ( n ) = V ( P n ) = V ( D n )
and
a l j ( n ) a j ( 0 ) ( t ) for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) for j I ( a )
for all t E ¯ l ( n ) and l = 1 , , n , it follows that
0 V ( RCLP 3 ) j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j ( t ) d t V ( RCLP 3 ) l = 1 n j = 1 q E ¯ l ( n ) a l j ( n ) · z ^ j ( n ) ( t ) d t = V ( DRCLP 3 ) V ( D n ) ( by   ( 51 ) and part ( ii ) of Theorem 1 ) ε n ( by part ( i ) of Theorem 1 ) .
To prove part (ii), we have
0 i = 1 p 0 T c i ( 0 ) ( t ) · w ^ i ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ^ i ( t ) d t V ( DRCLP 3 ) ( by Proposition 4 ) i = 1 p 0 T c i ( 0 ) ( t ) · w ^ i ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ^ i ( t ) d t V ( D n ) ( since V ( D n ) V ( DRCLP 3 ) by part ( i ) of Theorem 1 ) = ε n
This completes the proof. □
Definition 1.
Given any ϵ > 0 , we say that the feasible solution ( z 1 ( ϵ ) , , z q ( ϵ ) ) of problem ( RCLP 3 ) is an ϵ-optimal solution if and only if
0 V ( RCLP 3 ) j = 1 q 0 T a j ( 0 ) ( t ) · z j ( ϵ ) ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z j ( ϵ ) ( t ) d t < ϵ .
We say that the feasible solution ( w 1 ( ϵ ) , , w p ( ϵ ) ) of problem V ( DRCLP 3 ) is an ϵ-optimal solution if and only if
0 i = 1 p 0 T c i ( 0 ) ( t ) · w i ( ϵ ) ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w i ( ϵ ) ( t ) d t V ( DRCLP 3 ) < ϵ .
Theorem 2.
Given any ϵ > 0 , the following statements hold true.
(i)
The ϵ-optimal solution of problem ( RCLP 3 ) exists in the sense: There exists n N such that ( z 1 ( ϵ ) , , z q ( ϵ ) ) = ( z ^ 1 ( n ) , , z ^ q ( n ) ) , where ( z ^ 1 ( n ) , , z ^ q ( n ) ) is obtained from Proposition 5 satisfying ε n < ϵ .
(ii)
The ϵ-optimal solution of problem ( DRCLP 3 ) exists in the sense: There exists n N such that ( w 1 ( ϵ ) , , w p ( ϵ ) ) = ( w ^ 1 ( n ) , , w ^ p ( n ) ) , where ( w ^ 1 ( n ) , , w ^ p ( n ) ) is obtained from Proposition 5 satisfying ε n < ϵ .
Proof. 
Given any ϵ > 0 , from Proposition 5, since ε n → 0 as n → ∞ , there exists n N such that ε n < ϵ . Then, the result follows immediately. □
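Computationally, Theorem 2 suggests a refinement loop: solve the discretized problem, evaluate the error bound ε n of (48), and refine the partition until ε n < ϵ. The sketch below assumes a user-supplied solver solve_discretized(n) that returns the approximate solution together with ε n ; this callable is a placeholder standing in for the discretized problems ( P n ) and ( D n ), not code from the paper.

```python
def epsilon_optimal(solve_discretized, epsilon, n0=1, n_max=2**20):
    """Double n until the error bound eps_n of (48) drops below epsilon;
    Theorem 2 guarantees termination since eps_n -> 0 as n -> infinity."""
    n = n0
    while n <= n_max:
        solution, eps_n = solve_discretized(n)
        if eps_n < epsilon:
            return n, solution
        n *= 2   # refine the partition
    raise RuntimeError("refinement limit reached before eps_n < epsilon")
```

Doubling n is one reasonable refinement policy; any scheme whose mesh size tends to zero inherits the same guarantee.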

4. Convergence of Approximate Solutions

In this section, we shall study the convergence of approximate solutions. We first provide some useful lemmas that can guarantee the feasibility of solutions. On the other hand, the strong duality theorem can also be established using the limits of approximate solutions.
In the sequel, by referring to (29) and (45), we shall present the convergence properties of the sequences { z ^ ( n ) } n = 1 and { w ^ ( n ) } n = 1 that are constructed from the optimal solution z ¯ ( n ) of problem ( P n ) and the optimal solution w ¯ ( n ) of problem ( D n ) , respectively. We first provide a useful lemma.
Lemma 5.
Let the real-valued function η be defined by
η ( t ) = τ σ · exp ν · ( T t ) σ
on [ 0 , T ] , and let ( w 1 ( 0 ) , , w p ( 0 ) ) be a feasible solution of dual problem (DRCLP3). We define
w i ( 1 ) ( t ) = min w i ( 0 ) ( t ) , η ( t )
for all i = 1 , , p and t [ 0 , T ] . Then ( w 1 ( 1 ) , , w p ( 1 ) ) is a feasible solution of the dual problem (DRCLP3) satisfying w i ( 1 ) ( t ) ≤ w i ( 0 ) ( t ) and w i ( 1 ) ( t ) ≤ η ( t ) for all i = 1 , , p and t [ 0 , T ] .
Proof. 
By the feasibility of ( w 1 ( 0 ) , , w p ( 0 ) ) for problem (DRCLP3), we have
i = 1 p B i j ( 0 ) · w i ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j · w i ( 0 ) ( t ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p K i j ( 0 ) · t T w i ( 0 ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w i ( 0 ) ( s ) d s for j I ( a ) a j ( 0 ) ( t ) + i = 1 p K i j ( 0 ) · t T w i ( 0 ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w i ( 0 ) ( s ) d s for j I ( a )
Since K i j ( 0 ) ≥ 0 for j I i ( K ) , K i j ( 0 ) − K ^ i j ≥ 0 for j I i ( K ) and w i ( 1 ) ( t ) ≤ w i ( 0 ) ( t ) , from (54), we obtain
i = 1 p B i j ( 0 ) · w i ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j · w i ( 0 ) ( t ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p K i j ( 0 ) · t T w i ( 1 ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w i ( 1 ) ( s ) d s for j I ( a ) a j ( 0 ) ( t ) + i = 1 p K i j ( 0 ) · t T w i ( 1 ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w i ( 1 ) ( s ) d s for j I ( a ) .
For any fixed t [ 0 , T ] , we define the index sets
I = i : w i ( 0 ) ( t ) ≤ η ( t ) and I > = i : w i ( 0 ) ( t ) > η ( t )
and consider
i = 1 p B i j ( 0 ) · w i ( 1 ) ( t ) + { i : j I i ( B ) } B ^ i j · w i ( 1 ) ( t ) = i I B i j ( 0 ) · w i ( 1 ) ( t ) + i I > B i j ( 0 ) · w i ( 1 ) ( t ) + { i : j I i ( B ) , i I } B ^ i j · w i ( 1 ) ( t ) + { i : j I i ( B ) , i I > } B ^ i j · w i ( 1 ) ( t ) .
Then, we consider the following three cases:
  • Suppose that I > = ∅ , i.e., the sums over I > in (56) vanish. Then, we see that w i ( 0 ) ( t ) = w i ( 1 ) ( t ) for all i. Therefore, we have
    i = 1 p B i j ( 0 ) · w i ( 1 ) ( t ) + { i : j I i ( B ) } B ^ i j · w i ( 1 ) ( t ) = i = 1 p B i j ( 0 ) · w i ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j · w i ( 0 ) ( t ) .
    From (55), we obtain
    i = 1 p B i j ( 0 ) · w i ( 1 ) ( t ) + { i : j I i ( B ) } B ^ i j · w i ( 1 ) ( t ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p K i j ( 0 ) · t T w i ( 1 ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w i ( 1 ) ( s ) d s for j I ( a ) a j ( 0 ) ( t ) + i = 1 p K i j ( 0 ) · t T w i ( 1 ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w i ( 1 ) ( s ) d s for j I ( a ) .
  • Suppose that I > ≠ ∅ , and that
    B i j ( 0 ) = 0 for all i I > and j I i ( B ) B i j ( 0 ) + B ^ i j = 0 for all i I > and j I i ( B ) .
    Then, from (56), we obtain
    i = 1 p B i j ( 0 ) · w i ( 1 ) ( t ) + { i : j I i ( B ) } B ^ i j · w i ( 1 ) ( t ) = i I B i j ( 0 ) · w i ( 1 ) ( t ) + { i : j I i ( B ) , i I } B ^ i j · w i ( 1 ) ( t ) = i I B i j ( 0 ) · w i ( 0 ) ( t ) + { i : j I i ( B ) , i I } B ^ i j · w i ( 0 ) ( t ) = i I B i j ( 0 ) · w i ( 0 ) ( t ) + i I > B i j ( 0 ) · w i ( 0 ) ( t ) + { i : j I i ( B ) , i I } B ^ i j · w i ( 0 ) ( t ) + { i : j I i ( B ) , i I > } B ^ i j · w i ( 0 ) ( t ) = i = 1 p B i j ( 0 ) · w i ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j · w i ( 0 ) ( t )
    From (55), we obtain
    i = 1 p B i j ( 0 ) · w i ( 1 ) ( t ) + { i : j I i ( B ) } B ^ i j · w i ( 1 ) ( t ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p K i j ( 0 ) · t T w i ( 1 ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w i ( 1 ) ( s ) d s for j I ( a ) a j ( 0 ) ( t ) + i = 1 p K i j ( 0 ) · t T w i ( 1 ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w i ( 1 ) ( s ) d s for j I ( a ) .
  • Suppose that I > ≠ ∅ and there exists i * ∈ I > such that B i * j ( 0 ) ≠ 0 for j ∉ I i * ( B ) or B i * j ( 0 ) + B ^ i * j ≠ 0 for j ∈ I i * ( B ) , i.e., B i * j ( 0 ) ≥ σ for j ∉ I i * ( B ) or B i * j ( 0 ) + B ^ i * j ≥ σ for j ∈ I i * ( B ) by (7). Therefore, we obtain
    i = 1 p B i j ( 0 ) · w i ( 1 ) ( t ) + { i : j I i ( B ) } B ^ i j · w i ( 1 ) ( t ) i I > B i j ( 0 ) · w i ( 1 ) ( t ) + { i : j I i ( B ) , i I > } B ^ i j · w i ( 1 ) ( t ) = i I > B i j ( 0 ) · η ( t ) + { i : j I i ( B ) , i I > } B ^ i j · η ( t ) σ · η ( t ) .
    From (52), we see that
    σ · η ( t ) = τ + ν · t T η ( s ) d s ,
    which implies
    σ · η ( t ) a j ( 0 ) ( t ) + i = 1 p K i j ( 0 ) · t T η ( s ) d s { i : j I i ( K ) } K ^ i j · t T η ( s ) d s for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p K i j ( 0 ) · t T η ( s ) d s { i : j I i ( K ) } K ^ i j · t T η ( s ) d s for j I ( a ) .
    for all t ∈ [ 0 , T ] . Using (66), (57) and the facts that w i ( 1 ) ( t ) ≤ η ( t ) and K i j ( 0 ) − K ^ i j ≥ 0 for j ∈ I i ( K ) , we also have
    i = 1 p B i j ( 0 ) · w i ( 1 ) ( t ) + { i : j I i ( B ) } B ^ i j · w i ( 1 ) ( t ) a j ( 0 ) ( t ) + i = 1 p K i j ( 0 ) · t T η ( s ) d s { i : j I i ( K ) } K ^ i j · t T η ( s ) d s for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p K i j ( 0 ) · t T η ( s ) d s { i : j I i ( K ) } K ^ i j · t T η ( s ) d s for j I ( a ) . a j ( 0 ) ( t ) + i = 1 p K i j ( 0 ) · t T w i ( 1 ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w i ( 1 ) ( s ) d s for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p K i j ( 0 ) · t T w i ( 1 ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w i ( 1 ) ( s ) d s for j I ( a ) .
Therefore, we conclude that ( w 1 ( 1 ) , , w p ( 1 ) ) is a feasible solution of (DRCLP3), and the proof is complete. □
For further discussion, we need the following useful lemmas.
Lemma 6.
(Riesz and Sz.-Nagy [53] (p. 64)) Let { f k } k = 1 be a sequence in L 2 [ 0 , T ] . If the sequence { f k } k = 1 is uniformly bounded with respect to · 2 , then there exists a subsequence { f k j } j = 1 that weakly converges to some f 0 L 2 [ 0 , T ] . In other words, for any g L 2 [ 0 , T ] , we have
lim j 0 T f k j ( t ) g ( t ) d t = 0 T f 0 ( t ) g ( t ) d t .
Lemma 7.
(Levinson [4]) If the sequence { f k } k = 1 is uniformly bounded on [ 0 , T ] with respect to · 2 and weakly converges to f 0 L 2 [ 0 , T ] , then
f 0 ( t ) ≤ lim sup k → ∞ f k ( t ) and f 0 ( t ) ≥ lim inf k → ∞ f k ( t ) a . e . o n [ 0 , T ] .
Theorem 3. (Strong Duality Theorem)
Suppose that the primal problem ( P n ) is feasible. Let { ( z ^ 1 ( n ) , , z ^ q ( n ) ) } n = 1 and { ( w ^ 1 ( n ) , , w ^ p ( n ) ) } n = 1 be the sequences that are constructed from the optimal solution ( z ¯ 1 ( n ) , , z ¯ q ( n ) ) of problem ( P n ) and the optimal solution ( w ¯ 1 ( n ) , , w ¯ p ( n ) ) of problem ( D n ) according to (29) and (45), respectively. Then, the following statements hold true.
(i)
For each j = 1 , , q , there is a subsequence { z ^ j ( n k ) } k = 1 of { z ^ j ( n ) } n = 1 such that { z ^ j ( n k ) } k = 1 weakly converges to some z ^ j * , and ( z ^ 1 * , , z ^ q * ) forms an optimal solution of primal problem ( RCLP 3 ) .
(ii)
For each n, we define
w ˘ i ( n ) ( t ) = min { w ^ i ( n ) ( t ) , η ( t ) } ,
where η is defined in ( 52 ) . Then, for each i = 1 , , p , there is a subsequence { w ˘ i ( n k ) } k = 1 of { w ˘ i ( n ) } n = 1 such that { w ˘ i ( n k ) } k = 1 weakly converges to some w ^ i * , and ( w ^ 1 * , , w ^ p * ) forms an optimal solution of dual problem ( DRCLP 3 ) .
Moreover, we have V ( DRCLP 3 ) = V ( RCLP 3 ) .
Proof. 
From Proposition 2, it follows that the sequence of functions { ( z ^ 1 ( n ) , , z ^ q ( n ) ) } n = 1 is uniformly bounded with respect to · 2 . Using Lemma 6, there exists a subsequence z ^ 1 ( n k ( 1 ) ) k = 1 of z ^ 1 ( n ) n = 1 that weakly converges to some z ^ 1 ( 0 ) L 2 [ 0 , T ] . Using Lemma 6 again, there exists a subsequence z ^ 2 ( n k ( 2 ) ) k = 1 of z ^ 2 ( n k ( 1 ) ) k = 1 that weakly converges to some z ^ 2 ( 0 ) L 2 [ 0 , T ] . By induction, there exists a subsequence z ^ j ( n k ( j ) ) k = 1 of z ^ j ( n k ( j 1 ) ) k = 1 that weakly converges to some z ^ j ( 0 ) L 2 [ 0 , T ] for each j. Therefore, we can construct the subsequences { z ^ j ( n k ) } k = 1 that weakly converge to z ^ j ( 0 ) L 2 [ 0 , T ] for all j = 1 , , q . Since ( z ^ 1 ( n k ) , , z ^ q ( n k ) ) is a feasible solution of problem (RCLP3) for each n k , for all t ∈ [ 0 , T ] , we have z ^ j ( n k ) ( t ) ≥ 0 for all j = 1 , , q and
j = 1 q B i j ( 0 ) · z ^ j ( n k ) ( t ) + { j : j I i ( B ) } B ^ i j · z ^ j ( n k ) ( t ) c i ( 0 ) ( t ) c ^ i ( t ) + j = 1 q K i j ( 0 ) · 0 t z ^ j ( n k ) ( s ) d s { j : j I i ( K ) } K ^ i j · 0 t z ^ j ( n k ) ( s ) d s for i I ( c ) c i ( 0 ) ( t ) + j = 1 q K i j ( 0 ) · 0 t z ^ j ( n k ) ( s ) d s { j : j I i ( K ) } K ^ i j · 0 t z ^ j ( n k ) ( s ) d s for i I ( c )
From Lemma 7, we have
z ^ j ( 0 ) ( t ) ≥ lim inf k → ∞ z ^ j ( n k ) ( t ) ≥ 0   a . e . in [ 0 , T ] ,
which says that z ^ j ( 0 ) ( t ) ≥ 0 a.e. in [ 0 , T ] for all j = 1 , , q . It is clear that the sequence
j = 1 q B i j ( 0 ) · z ^ j ( n k ) ( t ) + { j : j I i ( B ) } B ^ i j · z ^ j ( n k ) ( t ) k = 1
weakly converges to
j = 1 q B i j ( 0 ) · z ^ j ( 0 ) ( t ) + { j : j I i ( B ) } B ^ i j · z ^ j ( 0 ) ( t ) .
For i ∈ I ( c ) , we obtain
j = 1 q B i j ( 0 ) · z ^ j ( 0 ) ( t ) + { j : j I i ( B ) } B ^ i j · z ^ j ( 0 ) ( t ) lim sup k j = 1 q B i j ( 0 ) · z ^ j ( n k ) ( t ) + { j : j I i ( B ) } B ^ i j · z ^ j ( n k ) ( t ) ( by Lemma 7 ) c i ( 0 ) ( t ) c ^ i ( t ) + lim sup k j = 1 q K i j ( 0 ) · 0 t z ^ j ( n k ) ( s ) d s { j : j I i ( K ) } K ^ i j · 0 t z ^ j ( n k ) ( s ) d s ( by   ( 59 ) ) = c i ( 0 ) ( t ) c ^ i ( t ) + j = 1 q K i j ( 0 ) · 0 t z ^ j ( 0 ) ( s ) d s { j : j I i ( K ) } K ^ i j · 0 t z ^ j ( 0 ) ( s ) d s   a . e . in   [ 0 , T ] ( by the weak convergence )
For i ∉ I ( c ) , we can similarly obtain
j = 1 q B i j ( 0 ) · z ^ j ( 0 ) ( t ) + { j : j I i ( B ) } B ^ i j · z ^ j ( 0 ) ( t ) c i ( 0 ) ( t ) + j = 1 q K i j ( 0 ) · 0 t z ^ j ( 0 ) ( s ) d s { j : j I i ( K ) } K ^ i j · 0 t z ^ j ( 0 ) ( s ) d s .
Let N 0 be the subset of [ 0 , T ] on which the inequalities (60) and (61) are violated, let N 1 be the subset of [ 0 , T ] on which the inequality ( z ^ 1 ( 0 ) ( t ) , , z ^ q ( 0 ) ( t ) ) ≥ 0 is violated, let N = N 0 ∪ N 1 , and define
( z ^ 1 * ( t ) , , z ^ q * ( t ) ) = ( z ^ 1 ( 0 ) ( t ) , , z ^ q ( 0 ) ( t ) ) if t ∉ N ; 0 if t ∈ N ,
where the set N has measure zero. Then ( z ^ 1 * ( t ) , , z ^ q * ( t ) ) ≥ 0 for all t ∈ [ 0 , T ] and ( z ^ 1 * ( t ) , , z ^ q * ( t ) ) = ( z ^ 1 ( 0 ) ( t ) , , z ^ q ( 0 ) ( t ) ) a.e. on [ 0 , T ] , i.e., z ^ j * ( t ) = z ^ j ( 0 ) ( t ) a.e. on [ 0 , T ] for each j. We also see that z ^ j * ∈ L 2 [ 0 , T ] for each j.
  • For t ∉ N , from (60), we have
    j = 1 q B i j ( 0 ) · z ^ j * ( t ) + { j : j I i ( B ) } B ^ i j · z ^ j * ( t ) = j = 1 q B i j ( 0 ) · z ^ j ( 0 ) ( t ) + { j : j I i ( B ) } B ^ i j · z ^ j ( 0 ) ( t ) c i ( 0 ) ( t ) c ^ i ( t ) + j = 1 q K i j ( 0 ) · 0 t z ^ j ( 0 ) ( s ) d s { j : j I i ( K ) } K ^ i j · 0 t z ^ j ( 0 ) ( s ) d s c i ( 0 ) ( t ) c ^ i ( t ) + j = 1 q K i j ( 0 ) · 0 t z ^ j * ( s ) d s { j : j I i ( K ) } K ^ i j · 0 t z ^ j * ( s ) d s .
    From (61), we can similarly obtain
    j = 1 q B i j ( 0 ) · z ^ j * ( t ) + { j : j I i ( B ) } B ^ i j · z ^ j * ( t ) c i ( 0 ) ( t ) + j = 1 q K i j ( 0 ) · 0 t z ^ j * ( s ) d s { j : j I i ( K ) } K ^ i j · 0 t z ^ j * ( s ) d s .
  • For t ∈ N , from (60), we have
    j = 1 q B i j ( 0 ) · z ^ j * ( t ) + { j : j I i ( B ) } B ^ i j · z ^ j * ( t ) = 0 c i ( 0 ) ( t ) c ^ i ( t ) + j = 1 q K i j ( 0 ) · 0 t z ^ j ( 0 ) ( s ) d s { j : j I i ( K ) } K ^ i j · 0 t z ^ j ( 0 ) ( s ) d s c i ( 0 ) ( t ) c ^ i ( t ) + j = 1 q K i j ( 0 ) · 0 t z ^ j * ( s ) d s { j : j I i ( K ) } K ^ i j · 0 t z ^ j * ( s ) d s .
    From (61), we can similarly obtain
    j = 1 q B i j ( 0 ) · z ^ j * ( t ) + { j : j I i ( B ) } B ^ i j · z ^ j * ( t ) c i ( 0 ) ( t ) + j = 1 q K i j ( 0 ) · 0 t z ^ j * ( s ) d s { j : j I i ( K ) } K ^ i j · 0 t z ^ j * ( s ) d s .
This shows that ( z ^ 1 * ( t ) , , z ^ q * ( t ) ) is a feasible solution of problem (RCLP3). Since ( z ^ 1 * ( t ) , , z ^ q * ( t ) ) = ( z ^ 1 ( 0 ) ( t ) , , z ^ q ( 0 ) ( t ) ) a.e. on [ 0 , T ] , it follows that the subsequence { ( z ^ 1 ( n k ) , , z ^ q ( n k ) ) } k = 1 is also weakly convergent to ( z ^ 1 * ( t ) , , z ^ q * ( t ) ) .
On the other hand, since ( w ^ 1 ( n ) , , w ^ p ( n ) ) is a feasible solution of problem (DRCLP3) for each n, Lemma 5 says that ( w ˘ 1 ( n ) , , w ˘ p ( n ) ) is also a feasible solution of problem (DRCLP3) for each n satisfying w ˘ i ( n ) ( t ) ≤ w ^ i ( n ) ( t ) for each i = 1 , , p and t ∈ [ 0 , T ] . From Remark 1, it follows that the sequence { ( w ^ 1 ( n ) , , w ^ p ( n ) ) } n = 1 is uniformly bounded. Since
η ( t ) = τ σ · exp ν · ( T − t ) σ ≤ τ σ · exp ν · T σ for all t ∈ [ 0 , T ] ,
we see that the sequence { ( w ˘ 1 ( n ) , , w ˘ p ( n ) ) } n = 1 is also uniformly bounded, which implies that the sequence { ( w ˘ 1 ( n ) , , w ˘ p ( n ) ) } n = 1 is uniformly bounded with respect to · 2 . Using Lemma 6, we can similarly show that there is a subsequence { ( w ˘ 1 ( n k ) , , w ˘ p ( n k ) ) } k = 1 of { ( w ˘ 1 ( n ) , , w ˘ p ( n ) ) } n = 1 that weakly converges to some ( w ˘ 1 ( 0 ) , , w ˘ p ( 0 ) ) L p 2 [ 0 , T ] . Since ( w ˘ 1 ( n k ) , , w ˘ p ( n k ) ) is a feasible solution of problem (DRCLP3) for each n k , for each t ∈ [ 0 , T ] , we have w ˘ i ( n k ) ( t ) ≥ 0 for all i = 1 , , p and
i = 1 p B i j ( 0 ) · w ˘ i ( n k ) ( t ) + { i : j I i ( B ) } B ^ i j · w ˘ i ( n k ) ( t ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p K i j ( 0 ) · t T w ˘ i ( n k ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w ˘ i ( n k ) ( s ) d s for j I ( a ) a j ( 0 ) ( t ) + i = 1 p K i j ( 0 ) · t T w ˘ i ( n k ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w ˘ i ( n k ) ( s ) d s for j I ( a )
From Lemma 7, for each i, we have
w ˘ i ( 0 ) ( t ) ≥ lim inf k → ∞ w ˘ i ( n k ) ( t ) ≥ 0   a . e . in   [ 0 , T ] ,
which says that w ˘ i ( 0 ) ( t ) ≥ 0 a.e. in [ 0 , T ] for all i = 1 , , p . For j ∈ I ( a ) , by taking the limit inferior on both sides of (62), we obtain
i = 1 p B i j ( 0 ) · w ˘ i ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j · w ˘ i ( 0 ) ( t ) lim inf n k i = 1 p B i j ( 0 ) · w ˘ i ( n k ) ( t ) + { i : j I i ( B ) } B ^ i j · w ˘ i ( n k ) ( t ) a j ( 0 ) ( t ) a ^ j ( t ) + lim inf n k i = 1 p K i j ( 0 ) · t T w ˘ i ( n k ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w ˘ i ( n k ) ( s ) d s = a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p K i j ( 0 ) · t T w ˘ i ( 0 ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w ˘ i ( 0 ) ( s ) d s   a . e .   in   [ 0 , T ] ( by the weak convergence )
For j ∉ I ( a ) , we can similarly obtain
i = 1 p B i j ( 0 ) · w ˘ i ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j · w ˘ i ( 0 ) ( t ) a j ( 0 ) ( t ) + i = 1 p K i j ( 0 ) · t T w ˘ i ( 0 ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w ˘ i ( 0 ) ( s ) d s
For each i = 1 , , p , we see that w ˘ i ( n k ) ( t ) ≤ η ( t ) for each k and for all t ∈ [ 0 , T ] . Let N ^ 0 be the subset of [ 0 , T ] on which the inequalities (63) and (64) are violated, let N ^ 1 be the subset of [ 0 , T ] on which the inequality ( w ˘ 1 ( 0 ) ( t ) , , w ˘ p ( 0 ) ( t ) ) ≥ 0 is violated, let N ^ = N ^ 0 ∪ N ^ 1 , and define
( w ^ 1 * ( t ) , , w ^ p * ( t ) ) = ( w ˘ 1 ( 0 ) ( t ) , , w ˘ p ( 0 ) ( t ) ) if t ∉ N ^ ; ( η ( t ) , , η ( t ) ) if t ∈ N ^ ,
where the set N ^ has measure zero. Then, for each i = 1 , , p , w ^ i * ( t ) ≥ 0 for all t ∈ [ 0 , T ] and w ^ i * ( t ) = w ˘ i ( 0 ) ( t ) a.e. on [ 0 , T ] . We also see that w ^ i * ∈ L 2 [ 0 , T ] . We now claim that ( w ^ 1 * , , w ^ p * ) is a feasible solution of ( DRCLP 3 ) .
  • Suppose that t ∉ N ^ . For j ∈ I ( a ) , from (63), we have
    i = 1 p B i j ( 0 ) · w ^ i * ( t ) + { i : j I i ( B ) } B ^ i j · w ^ i * ( t ) = i = 1 p B i j ( 0 ) · w ˘ i ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j · w ˘ i ( 0 ) ( t ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p K i j ( 0 ) · t T w ˘ i ( 0 ) ( s ) d s { i : j I i ( K ) } K ^ i j · t T w ˘ i ( 0 ) ( s ) d s = a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p K i j ( 0 ) · t T w ^ i * ( s ) d s { i : j I i ( K ) } K ^ i j · t T w ^ i * ( s ) d s .
    For j ∉ I ( a ) , from (64), we can similarly obtain
    i = 1 p B i j ( 0 ) · w ^ i * ( t ) + { i : j I i ( B ) } B ^ i j · w ^ i * ( t ) a j ( 0 ) ( t ) + i = 1 p K i j ( 0 ) · t T w ^ i * ( s ) d s { i : j I i ( K ) } K ^ i j · t T w ^ i * ( s ) d s .
  • Suppose that t ∈ N ^ . For each i = 1 , , p , since w ˘ i ( n k ) ( t ) ≤ η ( t ) for all t ∈ [ 0 , T ] , using Lemma 7, we have
    w ˘ i ( 0 ) ( t ) ≤ lim sup k → ∞ w ˘ i ( n k ) ( t ) ≤ η ( t )   a . e . on   [ 0 , T ] .
    For each i = 1 , , p , since w ^ i * ( t ) = w ˘ i ( 0 ) ( t ) a.e. on [ 0 , T ] , it follows that
    w ^ i * ( t ) ≤ η ( t )   a . e . on   [ 0 , T ] .
    From (52), we see that
    σ · η ( t ) = τ + ν · t T η ( s ) d s ,
    which implies
    σ · η ( t ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p K i j ( 0 ) · t T η ( s ) d s { i : j I i ( K ) } K ^ i j · t T η ( s ) d s
    for j ∈ I ( a ) and for all t ∈ [ 0 , T ] . Therefore, for j ∈ I ( a ) , we obtain
    i = 1 p B i j ( 0 ) · w ^ i * ( t ) + { i : j I i ( B ) } B ^ i j · w ^ i * ( t ) = i = 1 p B i j ( 0 ) · η ( t ) + { i : j I i ( B ) } B ^ i j · η ( t ) σ · η ( t ) ( by ( 7 ) ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p K i j ( 0 ) · t T η ( s ) d s { i : j I i ( K ) } K ^ i j · t T η ( s ) d s ( by   ( 66 ) ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p K i j ( 0 ) · t T w ^ i * ( s ) d s { i : j I i ( K ) } K ^ i j · t T w ^ i * ( s ) d s ( by   ( 65 ) ) .
    For j ∉ I ( a ) , we can similarly obtain
    i = 1 p B i j ( 0 ) · w ^ i * ( t ) + { i : j I i ( B ) } B ^ i j · w ^ i * ( t ) a j ( 0 ) ( t ) + i = 1 p K i j ( 0 ) · t T w ^ i * ( s ) d s { i : j I i ( K ) } K ^ i j · t T w ^ i * ( s ) d s .
Therefore, we conclude that ( w ^ 1 * , , w ^ p * ) is a feasible solution of (DRCLP3). Since ( w ^ 1 * ( t ) , , w ^ p * ( t ) ) = ( w ˘ 1 ( 0 ) ( t ) , , w ˘ p ( 0 ) ( t ) ) a.e. on [ 0 , T ] , it follows that the subsequence { ( w ˘ 1 ( n k ) , , w ˘ p ( n k ) ) } k = 1 also weakly converges to ( w ^ 1 * , , w ^ p * ) .
Finally, we want to prove the optimality. Now, we have
j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( n k ) ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j ( n k ) ( t ) d t
= j I ( a ) 0 T a j ( 0 ) ( t ) · z ^ j ( n k ) ( t ) d t + j I ( a ) 0 T a j ( 0 ) ( t ) a ^ j ( t ) · z ^ j ( n k ) ( t ) d t
= j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a l j ( n k ) · z ^ j ( n k ) ( t ) d t + j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n k ) · z ^ j ( n k ) ( t ) d t + j = 1 q l = 1 n k E ¯ l ( n k ) a l j ( n k ) · z ^ j ( n k ) ( t ) d t
= j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t + j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t + j = 1 q l = 1 n k d l ( n k ) · a l j ( n k ) · z ¯ l j ( n k ) d t
= j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t + j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t + V ( P n k )
and
i = 1 p 0 T c i ( 0 ) ( t ) · w ^ i ( n k ) ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ^ i ( n k ) ( t ) d t
= i I ( c ) 0 T c i ( 0 ) ( t ) · w ^ i ( n k ) ( t ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) · w ^ i ( n k ) ( t ) d t
= i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) · w ¯ l i ( n k ) d t + i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c ^ i ( t ) · w ¯ l i ( n k ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) · f ( n k ) ( t ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) · f ( n k ) ( t ) d t
= i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t + i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c ^ i ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t + i = 1 p l = 1 n k d l ( n k ) · c l i ( n k ) · w ¯ l i ( n k ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) · f ( n k ) ( t ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) · f ( n k ) ( t ) d t
= i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t + i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c ^ i ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t + V ( D n k ) + i I ( c ) 0 T c i ( 0 ) ( t ) · f ( n k ) ( t ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) · f ( n k ) ( t ) d t .
Since V ( P n k ) = V ( D n k ) and w ˘ i ( n k ) ≤ w ^ i ( n k ) for each i and n k , from (67) and (68), we have
j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( n k ) ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j ( n k ) ( t ) d t j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t i = 1 p 0 T c i ( 0 ) ( t ) · w ˘ i ( n k ) ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ˘ i ( n k ) ( t ) d t i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c ^ i ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t i I ( c ) 0 T c i ( 0 ) ( t ) · f ( n k ) ( t ) d t i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) · f ( n k ) ( t ) d t .
Using Lemma 4, for j ∉ I ( a ) , we have
0 l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t = 0 T a j ( 0 ) ( t ) a ¯ j ( n k ) ( t ) · z ^ j ( n k ) ( t ) d t 0 as k
and, for j ∈ I ( a ) , we have
0 l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t = 0 T a j ( 0 ) ( t ) a ^ j ( t ) a ¯ j ( n k ) ( t ) · z ^ j ( n k ) ( t ) d t 0 as k .
Also, for i ∉ I ( c ) , we have
0 l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t = 0 T c i ( 0 ) ( t ) c ¯ i ( n k ) ( t ) · w ¯ i ( n k ) ( t ) d t 0 as k
and, for i ∈ I ( c ) , we have
0 l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c ^ i ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t = 0 T c i ( 0 ) ( t ) c ^ i ( t ) c ¯ i ( n k ) ( t ) · w ¯ i ( n k ) ( t ) d t 0 as k .
By taking limits on both sides of (69), and using (50) and (70)–(73), we obtain
lim k j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( n k ) ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j ( n k ) ( t ) d t lim k i = 1 p 0 T c i ( 0 ) ( t ) · w ˘ i ( n k ) ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ˘ i ( n k ) ( t ) d t .
Using the weak convergence, we also obtain
j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j * ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j * ( t ) d t i = 1 p 0 T c i ( 0 ) ( t ) · w ^ i * ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ^ i * ( t ) d t .
According to the weak duality theorem between problems (RCLP3) and (DRCLP3), we have that
j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j * ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j * ( t ) d t = i = 1 p 0 T c i ( 0 ) ( t ) · w ^ i * ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ^ i * ( t ) d t .
We conclude that ( z ^ 1 * , , z ^ q * ) and ( w ^ 1 * , , w ^ p * ) are the optimal solutions of problems (RCLP3) and (DRCLP3), respectively. Theorem 1 also says that V ( DRCLP 3 ) = V ( RCLP 3 ) . This completes the proof. □

5. Computational Procedure and Numerical Example

In this section, based on the above results, we design a computational procedure and provide a numerical example to demonstrate its usefulness. The purpose is to obtain approximate solutions of the continuous-time linear programming problem (RCLP3); these approximate solutions are step functions. According to Proposition 5, appropriate step functions can be obtained whose corresponding objective value is as close as desired to the optimal objective value when n is taken to be sufficiently large.
Recall that, from Theorem 1 and Proposition 5, the error upper bound between the approximate objective value and the optimal objective value is given by
ε n = V ( D n ) + i = 1 p l = 1 n E ¯ l ( n ) c i ( 0 ) ( t ) · w ¯ l i ( n ) d t + E ¯ l ( n ) π l ( n ) σ * · exp ν · ( T t ) σ * · c i ( 0 ) ( t ) d t i I ( c ) l = 1 n E ¯ l ( n ) c ^ i ( t ) · w ¯ l i ( n ) d t + E ¯ l ( n ) π l ( n ) σ * · exp ν · ( T t ) σ * · c ^ i ( t ) d t
In order to obtain π l ( n ) , by referring to (37), we need to solve
sup t E l ( n ) h ¯ l j ( n ) ( t ) + α j ( t ) ,
where the real-valued function α j is given by
α j ( t ) = a j ( 0 ) ( t ) for j ∉ I ( a ) ; a j ( 0 ) ( t ) − a ^ j ( t ) for j ∈ I ( a ) .
Now, we define the real-valued function h l j ( n ) on E ¯ l ( n ) by
h l j ( n ) ( t ) = h ¯ l j ( n ) ( t ) + α j ( t ) , if t ∈ E l ( n ) ; lim t → e l 1 ( n ) + h ¯ l j ( n ) ( t ) + α j ( t ) , if t = e l 1 ( n ) ; lim t → e l ( n ) − h ¯ l j ( n ) ( t ) + α j ( t ) , if t = e l ( n )
Since α j is continuous on the open interval E l ( n ) , it follows that h ¯ l j ( n ) + α j is continuous on E l ( n ) , and hence h l j ( n ) is continuous on the compact interval E ¯ l ( n ) . In other words, we have
sup t E ¯ l ( n ) h l j ( n ) ( t ) = max t E ¯ l ( n ) h l j ( n ) ( t ) .
Moreover, if the function h ¯ l j ( n ) + α j is well-defined at the end-points e l 1 ( n ) and e l ( n ) , then we see that
h l j ( n ) ( t ) = h ¯ l j ( n ) ( t ) + α j ( t ) for all t E ¯ l ( n ) .
Then, the supremum in (74) can be obtained by the following equality
max t E ¯ l ( n ) h l j ( n ) ( t ) = max h l j ( n ) e l 1 ( n ) , h l j ( n ) e l ( n ) , sup t E l ( n ) h ¯ l j ( n ) ( t ) + α j ( t ) .
In order to further design the computational procedure, we assume that each α j is twice-differentiable on [ 0 , T ] so that Newton's method can be applied; in particular, α j is then twice-differentiable on the open interval E l ( n ) for all l = 1 , , n . From (75), we need to solve the following simple type of optimization problem
max e l 1 ( n ) t e l ( n ) h l j ( n ) ( t ) .
The KKT conditions say that if t * is an optimal solution of problem (76), then the following conditions should be satisfied:
d d t h l j ( n ) ( t ) t = t * − λ 1 + λ 2 = 0 and λ 1 · ( t * − e l ( n ) ) = 0 = λ 2 · ( − t * + e l 1 ( n ) ) ,
where λ 1 , λ 2 0 are the Lagrange multipliers. Then, we can see that the optimal solution is
t * = e l 1 ( n )   or   t * = e l ( n )   or   t * satisfies   d d t h l j ( n ) ( t ) t = t * = 0 .
Let Z l j ( n ) denote the set of all zeros of the real-valued function d d t ( h l j ( n ) ( t ) ) on E l ( n ) . Then, we have
max t E ¯ l ( n ) h l j ( n ) ( t ) = max h l j ( n ) e l 1 ( n ) , h l j ( n ) e l ( n ) , max t * Z l j ( n ) h l j ( n ) ( t * ) , if Z l j ( n ) max h l j ( n ) e l 1 ( n ) , h l j ( n ) e l ( n ) , if Z l j ( n ) = .
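As an illustration of how the maximum in (77) can be evaluated, the following Python sketch compares the endpoint values of h with its values at the interior zeros of the derivative, locating those zeros by bisection on sign changes of dh. The names interval_max, h and dh and the toy data are hypothetical stand-ins for h l j ( n ) and its derivative; they are not part of the paper's formulation.

```python
import math

def interval_max(h, dh, a, b, grid=200, tol=1e-10):
    """Maximize h over [a, b] by comparing the endpoint values with h at
    every interior zero of the derivative dh."""
    def bisect(lo, hi):
        # dh changes sign on [lo, hi]; shrink the bracket to locate the zero
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if dh(lo) * dh(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)
    ts = [a + (b - a) * k / grid for k in range(grid + 1)]
    zeros = [bisect(ts[k], ts[k + 1])
             for k in range(grid)
             if dh(ts[k]) * dh(ts[k + 1]) < 0.0]
    candidates = [a, b] + zeros
    t_best = max(candidates, key=h)
    return t_best, h(t_best)

# Toy instance: h(t) = (1 - t) * C + sin(4 t), mimicking the shape
# h_lj(t) = (e_l - t) * (integral coefficient) + alpha_j(t) with C = 0.5.
C = 0.5
h = lambda t: (1.0 - t) * C + math.sin(4.0 * t)
dh = lambda t: -C + 4.0 * math.cos(4.0 * t)
t_star, h_max = interval_max(h, dh, 0.0, 1.0)
```

The candidate set {endpoints} ∪ Z mirrors the case split above; when Z l j ( n ) is empty, only the endpoints survive.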
For t ∈ E l ( n ) , we have
d d t h l j ( n ) ( t ) = i = 1 p K i j ( 0 ) · w ¯ l i ( n ) + { i : j I i ( K ) } K ^ i j · w ¯ l i ( n ) + d d t α j ( t )
and
d 2 d t 2 h l j ( n ) ( t ) = d 2 d t 2 α j ( t ) .
We consider the following cases.
  • Suppose that α j is a linear function of t given by
    α j ( t ) = a j · t + b j for j = 1 , , q .
    Then, for t ∈ E l ( n ) ,
    h l j ( n ) ( t ) = e l ( n ) t · i = 1 p K i j ( 0 ) · w ¯ l i ( n ) { i : j I i ( K ) } K ^ i j · w ¯ l i ( n ) + a j · t + b j .
    Using (77), we obtain the following maximum:
    (1)
    If
    a j = i = 1 p K i j ( 0 ) · w ¯ l i ( n ) { i : j I i ( K ) } K ^ i j · w ¯ l i ( n )
    then
    max t E ¯ l ( n ) h l j ( n ) ( t ) = e l ( n ) · i = 1 p K i j ( 0 ) · w ¯ l i ( n ) { i : j I i ( K ) } K ^ i j · w ¯ l i ( n ) + b j ;
    (2)
    If
    a j ≠ i = 1 p K i j ( 0 ) · w ¯ l i ( n ) { i : j I i ( K ) } K ^ i j · w ¯ l i ( n )
    then
    max t E ¯ l ( n ) h l j ( n ) ( t ) = max h l j ( n ) e l 1 ( n ) , h l j ( n ) e l ( n ) .
  • Suppose that α j is not a linear function of t. In order to obtain a zero t * of d d t ( h l j ( n ) ( t ) ) , we can apply Newton's method to generate a sequence { t m } m = 1 such that t m → t * as m → ∞ . The iteration is given by
    t m + 1 = t m d d t h l j ( n ) ( t ) t = t m d 2 d t 2 h l j ( n ) ( t ) t = t m = i = 1 p K i j ( 0 ) · w ¯ l i ( n ) + { i : j I i ( K ) } K ^ i j · w ¯ l i ( n ) + d d t α j ( t ) t = t m d 2 d t 2 α j ( t ) t = t m
    for m = 0 , 1 , 2 , , starting from an initial guess t 0 . Since the real-valued function d d t ( h l j ( n ) ( t ) ) may have more than one zero, Newton's method should be run from as many different initial guesses t 0 as is practical.
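The multi-start strategy just described can be sketched in Python as follows, where dh plays the role of d d t ( h l j ( n ) ( t ) ) and ddh that of its second derivative (which equals the second derivative of α j ). The helper newton_zeros, the guard constants, and the trigonometric test data are illustrative assumptions, not part of the paper.

```python
import math

def newton_zeros(dh, ddh, a, b, starts=21, iters=50):
    """Run Newton's method t_{m+1} = t_m - dh(t_m)/ddh(t_m) from several
    initial guesses and collect the distinct zeros of dh inside (a, b)."""
    zeros = []
    for k in range(starts):
        t = a + (b - a) * (k + 0.5) / starts   # spread the initial guesses
        ok = True
        for _ in range(iters):
            d2 = ddh(t)
            if d2 == 0.0 or not (a - 1.0 < t < b + 1.0):
                ok = False                     # flat point or wandered off
                break
            t -= dh(t) / d2
        if ok and a < t < b and abs(dh(t)) < 1e-8:
            if all(abs(t - z) > 1e-6 for z in zeros):
                zeros.append(t)
    return sorted(zeros)

# Illustrative data: with alpha_j(t) = sin(4t) and a constant integral term
# C = 0.5, the derivative of h is dh(t) = -C + 4 cos(4t), so ddh = -16 sin(4t).
C = 0.5
dh = lambda t: -C + 4.0 * math.cos(4.0 * t)
ddh = lambda t: -16.0 * math.sin(4.0 * t)
Z = newton_zeros(dh, ddh, 0.0, 1.0)   # single interior critical point
```

Starts that diverge or converge to a zero outside the subinterval are simply discarded, which matches the advice of trying many initial guesses.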
    Now, the computational procedure is given below.
  • Step 1. Set the error tolerance ϵ and the initial value of the natural number n ∈ N .
  • Step 2. Find the optimal objective value V ( D n ) and an optimal solution w ¯ of the dual problem ( D n ) .
  • Step 3. Find the set Z l j ( n ) of all zeros of the real-valued function d d t ( h l j ( n ) ( t ) ) by applying Newton's method as given in (80).
  • Step 4. Evaluate the maximum (76) according to (77)–(79), and evaluate the supremum (74) according to (75).
  • Step 5. Obtain π ¯ l ( n ) using (37) and the supremum obtained in Step 4. Then use the values π ¯ l ( n ) to obtain π l ( n ) according to (38).
  • Step 6. Evaluate the error upper bound ε n according to (48). If ε n < ϵ , then go to Step 7; otherwise, refine the partition by subdividing each closed subinterval once more, set n ← n + n ^ , where n ^ is the number of newly added subdivision points and n satisfies (9), and go to Step 2. For example, the inequality (10) can be used.
  • Step 7. Find an optimal solution z ¯ ( n ) of the primal problem ( P n ) .
  • Step 8. Set the step functions ( z ^ 1 ( n ) ( t ) , , z ^ q ( n ) ( t ) ) defined in (29), which form the approximate solution of problem (RCLP3). By Proposition 5, the actual error between V ( RCLP 3 ) and the objective value of ( z ^ 1 ( n ) ( t ) , , z ^ q ( n ) ( t ) ) is at most ε n , so the error tolerance ϵ is attained for this partition P n .
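The control flow of Steps 1–8 can be sketched as a simple refinement loop. Everything below is schematic: solve_discretized_lp and error_bound are placeholders for solving ( P n ) / ( D n ) and evaluating the bound (48), and the toy stand-ins merely mimic the qualitative decay of ε n .

```python
def solve_with_tolerance(solve_discretized_lp, error_bound, n0, eps, n_max=10**6):
    """Skeleton of Steps 1-8: refine the discretization until the error
    upper bound eps_n drops below the tolerance eps."""
    n = n0
    while n <= n_max:
        sol = solve_discretized_lp(n)      # stands in for solving (P_n) and (D_n)
        eps_n = error_bound(n, sol)        # stands in for evaluating (48)
        if eps_n < eps:
            return n, sol, eps_n           # Steps 7-8: accept this partition
        n *= 2                             # refine: subdivide every subinterval
    raise RuntimeError("tolerance not reached within n_max")

# Toy stand-ins: the "solution" is just 1/n and the bound decays like 0.13/n,
# mimicking the qualitative behaviour of eps_n reported in Table 1.
n, sol, eps_n = solve_with_tolerance(lambda m: 1.0 / m,
                                     lambda m, s: 0.13 / m,
                                     n0=16, eps=0.0005)
```

Doubling n corresponds to one more subdivision of every subinterval; in practice Step 6 allows any increment n ^ compatible with (9).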
In the sequel, we present a numerical example involving piecewise continuous functions on the time interval [ 0 , T ] . We take T = 1 and consider the following problem
maximize 0 1 a 1 ( t ) · z 1 ( t ) + a 2 ( t ) · z 2 ( t ) d t subject   to B 11 · z 1 ( t ) + B 12 · z 2 ( t ) ≤ c 1 ( t ) + 0 t K 11 · z 1 ( s ) + K 12 · z 2 ( s ) d s for   all   t ∈ [ 0 , 1 ] ; B 21 · z 1 ( t ) + B 22 · z 2 ( t ) ≤ c 2 ( t ) + 0 t K 21 · z 1 ( s ) + K 22 · z 2 ( s ) d s for   all   t ∈ [ 0 , 1 ] ; z 1 , z 2 ∈ L 2 [ 0 , 1 ]   and   z 1 ( t ) , z 2 ( t ) ∈ R + for   all   t ∈ [ 0 , 1 ] .
The uncertainties are assumed below.
  • The data B 12 = B 21 = 0 are assumed to be certain.
  • The data B 11 and B 22 are assumed to be uncertain with the nominal data B 11 ( 0 ) = 5.9 and B 22 ( 0 ) = 4.8 and the uncertainties B ^ 11 = 0.1 and B ^ 22 = 0.2 , respectively.
  • The data K 11 , K 12 , K 21 and K 22 are assumed to be uncertain with the nominal data K 11 ( 0 ) = 1.1 , K 12 ( 0 ) = 2.1 , K 21 ( 0 ) = 3.1 and K 22 ( 0 ) = 1.1 and the uncertainties K ^ i j = 0.1 for i , j = 1 , 2 .
  • The data a 1 and a 2 are assumed to be uncertain with the nominal data
    a 1 ( 0 ) ( t ) = e t + 0.05 t , if   0 ≤ t ≤ 0.2 sin t + 0.01 t , if   0.2 < t ≤ 0.6 t 2 + 0.02 t , if   0.6 < t ≤ 1   and   a 2 ( 0 ) ( t ) = 2 t + 0.02 t , if   0 ≤ t ≤ 0.5 t + 0.01 t , if   0.5 < t ≤ 0.7 t 2 + 0.02 t , if   0.7 < t ≤ 1
    and the uncertainties
    a ^ 1 ( t ) = 0.05 t , if   0 ≤ t ≤ 0.2 0.01 t , if   0.2 < t ≤ 0.6 0.02 t , if   0.6 < t ≤ 1   and   a ^ 2 ( t ) = 0.02 t , if   0 ≤ t ≤ 0.5 0.01 t , if   0.5 < t ≤ 0.7 0.02 t , if   0.7 < t ≤ 1 ,
    respectively.
  • The data c 1 and c 2 are assumed to be uncertain with the nominal data
    c 1 ( 0 ) ( t ) = t 3 + 0.02 t , if   0 ≤ t ≤ 0.3 ( ln t ) 2 + 0.01 t , if   0.3 < t ≤ 0.5 t 2 + 0.03 t , if   0.5 < t ≤ 0.8 cos t + 0.01 t , if   0.8 < t ≤ 1   and   c 2 ( 0 ) ( t ) = t + 0.01 t , if   0 ≤ t ≤ 0.4 5 t + 0.02 t , if   0.4 < t ≤ 0.5 t 3 + 0.01 t , if   0.5 < t ≤ 0.8 t 2 + 0.02 t , if   0.8 < t ≤ 1
    and the uncertainties
    c ^ 1 ( t ) = 0.01 t , if   0 ≤ t ≤ 0.3 0.02 t , if   0.3 < t ≤ 0.5 0.03 t , if   0.5 < t ≤ 0.8 0.01 t , if   0.8 < t ≤ 1   and   c ^ 2 ( t ) = 0.01 t , if   0 ≤ t ≤ 0.4 0.02 t , if   0.4 < t ≤ 0.5 0.01 t , if   0.5 < t ≤ 0.8 0.02 t , if   0.8 < t ≤ 1 .
From the discontinuities of the above functions and the setting of the partition, we see that r = 8 and
D = { d 0 = 0 , d 1 = 0.2 , d 2 = 0.3 , d 3 = 0.4 , d 4 = 0.5 , d 5 = 0.6 , d 6 = 0.7 , d 7 = 0.8 , d 8 = 1 } .
For n * = 2 , each closed interval [ d v , d v + 1 ] is equally divided into two subintervals for v = 0 , 1 , , 7 . In this case, we have n = 2 · 8 = 16 , and we therefore obtain a partition P 16 . By the definitions of the desired quantities, we have ν = 4 and σ = σ * = 5 .
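The construction of the partition P 16 from the discontinuity points D and the refinement level n * can be sketched as follows; refine_partition is a hypothetical helper name, not taken from the paper.

```python
def refine_partition(D, n_star):
    """Build the refined partition: divide each [d_v, d_{v+1}] determined by
    the discontinuity points D into n_star equal subintervals."""
    pts = [D[0]]
    for v in range(len(D) - 1):
        left, right = D[v], D[v + 1]
        for k in range(1, n_star + 1):
            pts.append(left + (right - left) * k / n_star)
    return pts

D = [0, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 1]
P = refine_partition(D, 2)   # n = 2 * 8 = 16 subintervals, 17 grid points
```

Larger values of n * (such as n * = 300 below) refine the same breakpoints without moving them, so every discontinuity always falls on a grid point.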
Now, in the following Table 1, we present the error bound ε n for different values of n * . We denote by
V ( CLP n ) = j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( n ) ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j ( n ) ( t ) d t
the approximate optimal objective value of problem (RCLP3). Theorem 1 and Proposition 5 say that
0 V ( RCLP 3 ) V ( CLP n ) ε n
and
0 V ( CLP n ) V ( P n ) V ( RCLP 3 ) V ( P n ) ε n .
Suppose that the decision-maker can tolerate the error ϵ = 0.0005 . Referring to the error bound ε n = 0.0004346 , we see that n * = 300 is sufficient to achieve this tolerance. The numerical results are obtained using MATLAB, in which the active-set method is used to solve the primal and dual linear programming problems ( P n ) and ( D n ) , respectively. We mention that MATLAB issues a warning message when the simplex method is used to solve the dual problem ( D n ) for large n; however, MATLAB has no problem solving the primal problem ( P n ) with the simplex method.

6. Conclusions

The continuous-time linear programming problem with uncertain data has been studied in this paper, where the data are real-valued functions or real numbers. Based on the assumption of uncertainty, we have numerically solved the so-called robust continuous-time linear programming problem.
The robust continuous-time linear programming problem has been formulated as problem (RCLP), which was then transformed into the standard continuous-time linear programming problem (RCLP3). In this paper, we have presented a computational procedure that yields the error bound between the approximate objective function value and the optimal objective function value of problem (RCLP3). To design this procedure, we introduced a discretization problem; based on its solutions, the error bound was derived. We also introduced the concept of ϵ -optimal solutions for obtaining approximate solutions. In addition, we studied the convergence of the approximate solutions and established the strong duality theorem.
In the future, we plan to extend the computational procedure proposed in this paper to nonlinear robust continuous-time optimization problems, which will be a challenging topic.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Tyndall, W.F. A Duality Theorem for a Class of Continuous Linear Programming Problems. SIAM J. Appl. Math. 1965, 15, 644–666.
  2. Tyndall, W.F. An Extended Duality Theorem for Continuous Linear Programming Problems. SIAM J. Appl. Math. 1967, 15, 1294–1298.
  3. Bellman, R.E. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1957.
  4. Levinson, N. A Class of Continuous Linear Programming Problems. J. Math. Anal. Appl. 1966, 16, 73–83.
  5. Meidan, R.; Perold, A.F. Optimality Conditions and Strong Duality in Abstract and Continuous-Time Linear Programming. J. Optim. Theory Appl. 1983, 40, 61–77.
  6. Papageorgiou, N.S. A Class of Infinite Dimensional Linear Programming Problems. J. Math. Anal. Appl. 1982, 87, 228–245.
  7. Schechter, M. Duality in Continuous Linear Programming. J. Math. Anal. Appl. 1972, 37, 130–141.
  8. Anderson, E.J.; Nash, P.; Perold, A.F. Some Properties of a Class of Continuous Linear Programs. SIAM J. Control Optim. 1983, 21, 758–765.
  9. Anderson, E.J.; Philpott, A.B. On the Solutions of a Class of Continuous Linear Programs. SIAM J. Control Optim. 1994, 32, 1289–1296.
  10. Anderson, E.J.; Pullan, M.C. Purification for Separated Continuous Linear Programs. Math. Methods Oper. Res. 1996, 43, 9–33.
  11. Fleischer, L.; Sethuraman, J. Efficient Algorithms for Separated Continuous Linear Programs: The Multicommodity Flow Problem with Holding Costs and Extensions. Math. Oper. Res. 2005, 30, 916–938.
  12. Pullan, M.C. An Algorithm for a Class of Continuous Linear Programs. SIAM J. Control Optim. 1993, 31, 1558–1577.
  13. Pullan, M.C. Forms of Optimal Solutions for Separated Continuous Linear Programs. SIAM J. Control Optim. 1995, 33, 1952–1977.
  14. Pullan, M.C. A Duality Theory for Separated Continuous Linear Programs. SIAM J. Control Optim. 1996, 34, 931–965.
  15. Pullan, M.C. Convergence of a General Class of Algorithms for Separated Continuous Linear Programs. SIAM J. Optim. 2000, 10, 722–731.
  16. Pullan, M.C. An Extended Algorithm for Separated Continuous Linear Programs. Math. Program. Ser. A 2002, 93, 415–451.
  17. Weiss, G. A Simplex Based Algorithm to Solve Separated Continuous Linear Programs. Math. Program. Ser. A 2008, 115, 151–198.
  18. Shindin, E.; Weiss, G. Structure of Solutions for Continuous Linear Programs with Constant Coefficients. SIAM J. Optim. 2015, 25, 1276–1297.
  19. Farr, W.H.; Hanson, M.A. Continuous Time Programming with Nonlinear Constraints. J. Math. Anal. Appl. 1974, 45, 96–115.
  20. Farr, W.H.; Hanson, M.A. Continuous Time Programming with Nonlinear Time-Delayed Constraints. J. Math. Anal. Appl. 1974, 46, 41–61.
  21. Grinold, R.C. Continuous Programming Part One: Linear Objectives. J. Math. Anal. Appl. 1969, 28, 32–51.
  22. Grinold, R.C. Continuous Programming Part Two: Nonlinear Objectives. J. Math. Anal. Appl. 1969, 27, 639–655.
  23. Hanson, M.A.; Mond, B. A Class of Continuous Convex Programming Problems. J. Math. Anal. Appl. 1968, 22, 427–437.
  24. Reiland, T.W. Optimality Conditions and Duality in Continuous Programming I: Convex Programs and a Theorem of the Alternative. J. Math. Anal. Appl. 1980, 77, 297–325.
  25. Reiland, T.W. Optimality Conditions and Duality in Continuous Programming II: The Linear Problem Revisited. J. Math. Anal. Appl. 1980, 77, 329–343.
  26. Reiland, T.W.; Hanson, M.A. Generalized Kuhn-Tucker Conditions and Duality for Continuous Nonlinear Programming Problems. J. Math. Anal. Appl. 1980, 74, 578–598.
  27. Singh, C. A Sufficient Optimality Criterion in Continuous Time Programming for Generalized Convex Functions. J. Math. Anal. Appl. 1978, 62, 506–511.
  28. Rojas-Medar, M.A.; Brandao, J.V.; Silva, G.N. Nonsmooth Continuous-Time Optimization Problems: Sufficient Conditions. J. Math. Anal. Appl. 1998, 227, 305–318.
  29. Singh, C.; Farr, W.H. Saddle-Point Optimality Criteria of Continuous Time Programming without Differentiability. J. Math. Anal. Appl. 1977, 59, 442–453.
  30. Nobakhtian, S.; Pouryayevali, M.R. Optimality Criteria for Nonsmooth Continuous-Time Problems of Multiobjective Optimization. J. Optim. Theory Appl. 2008, 136, 69–76.
  31. Nobakhtian, S.; Pouryayevali, M.R. Duality for Nonsmooth Continuous-Time Problems of Vector Optimization. J. Optim. Theory Appl. 2008, 136, 77–85.
  32. Zalmai, G.J. Duality for a Class of Continuous-Time Homogeneous Fractional Programming Problems. Z. Oper. Res. Ser. A-B 1986, 30, 43–48.
  33. Zalmai, G.J. Duality for a Class of Continuous-Time Fractional Programming Problems. Utilitas Math. 1987, 31, 209–218.
  34. Zalmai, G.J. Optimality Conditions and Duality for a Class of Continuous-Time Generalized Fractional Programming Problems. J. Math. Anal. Appl. 1990, 153, 365–371.
  35. Zalmai, G.J. Optimality Conditions and Duality Models for a Class of Nonsmooth Constrained Fractional Optimal Control Problems. J. Math. Anal. Appl. 1997, 210, 114–149.
  36. Wen, C.-F.; Wu, H.-C. Using the Dinkelbach-Type Algorithm to Solve the Continuous-Time Linear Fractional Programming Problems. J. Glob. Optim. 2011, 49, 237–263.
  37. Wen, C.-F.; Wu, H.-C. Using the Parametric Approach to Solve the Continuous-Time Linear Fractional Max-Min Problems. J. Glob. Optim. 2012, 54, 129–153.
  38. Wen, C.-F.; Wu, H.-C. The Approximate Solutions and Duality Theorems for the Continuous-Time Linear Fractional Programming Problems. Numer. Funct. Anal. Optim. 2012, 33, 80–129.
  39. Wu, H.-C. Solving the Continuous-Time Linear Programming Problems Based on the Piecewise Continuous Functions. Numer. Funct. Anal. Optim. 2016, 37, 1168–1201.
  40. Dantzig, G.B. Linear Programming under Uncertainty. Manag. Sci. 1955, 1, 197–206.
  41. Ben-Tal, A.; Nemirovski, A. Robust Convex Optimization. Math. Oper. Res. 1998, 23, 769–805.
  42. Ben-Tal, A.; Nemirovski, A. Robust Solutions of Uncertain Linear Programs. Oper. Res. Lett. 1999, 25, 1–13.
  43. El Ghaoui, L.; Lebret, H. Robust Solutions to Least-Squares Problems with Uncertain Data. SIAM J. Matrix Anal. Appl. 1997, 18, 1035–1064.
  44. El Ghaoui, L.; Oustry, F.; Lebret, H. Robust Solutions to Uncertain Semidefinite Programs. SIAM J. Optim. 1998, 9, 33–52.
  45. Averbakh, I.; Zhao, Y.-B. Explicit Reformulations for Robust Optimization Problems with General Uncertainty Sets. SIAM J. Optim. 2008, 18, 1436–1466.
  46. Ben-Tal, A.; Boyd, S.; Nemirovski, A. Extending Scope of Robust Optimization: Comprehensive Robust Counterpart of Uncertain Problems. Math. Program. Ser. B 2006, 107, 63–89.
  47. Bertsimas, D.; Natarajan, K.; Teo, C.-P. Persistence in Discrete Optimization under Data Uncertainty. Math. Program. Ser. B 2006, 108, 251–274.
  48. Bertsimas, D.; Sim, M. The Price of Robustness. Oper. Res. 2004, 52, 35–53.
  49. Bertsimas, D.; Sim, M. Tractable Approximations to Robust Conic Optimization Problems. Math. Program. Ser. B 2006, 107, 5–36.
  50. Chen, X.; Sim, M.; Sun, P. A Robust Optimization Perspective on Stochastic Programming. Oper. Res. 2007, 55, 1058–1071.
  51. Erdoǧan, E.; Iyengar, G. Ambiguous Chance Constrained Problems and Robust Optimization. Math. Program. Ser. B 2006, 107, 37–61.
  52. Zhang, Y. General Robust Optimization Formulation for Nonlinear Programming. J. Optim. Theory Appl. 2007, 132, 111–124.
  53. Riesz, F.; Sz.-Nagy, B. Functional Analysis; Frederick Ungar Publishing Co.: New York, NY, USA, 1955.
Table 1. Numerical Results.

n *     n = n * · 8     ε n          V ( P n )     V ( C L P n )
2       16              0.0608844    0.1245110     0.1351945
10      80              0.0128651    0.1464201     0.1488228
50      400             0.0026014    0.1511276     0.1516192
100     800             0.0013025    0.1517239     0.1519705
200     1600            0.0006517    0.1520228     0.1521462
300     2400            0.0004346    0.1521225     0.1522048
400     3200            0.0003260    0.1521724     0.1522341
500     4000            0.0002608    0.1522023     0.1522517

Wu, H.-C. Numerical Method for Solving the Robust Continuous-Time Linear Programming Problems. Mathematics 2019, 7, 435. https://doi.org/10.3390/math7050435
