Article

Robust Solutions for Uncertain Continuous-Time Linear Programming Problems with Time-Dependent Matrices

Department of Mathematics, National Kaohsiung Normal University, Kaohsiung 802, Taiwan
Mathematics 2021, 9(8), 885; https://doi.org/10.3390/math9080885
Submission received: 13 March 2021 / Revised: 10 April 2021 / Accepted: 12 April 2021 / Published: 16 April 2021
(This article belongs to the Special Issue Numerical Optimization and Applications)

Abstract:
The uncertainty of the continuous-time linear programming problem with time-dependent matrices is considered in this paper. In this case, the robust counterpart of the continuous-time linear programming problem is introduced. In order to solve the robust counterpart, it is transformed into the conventional form of the continuous-time linear programming problem with time-dependent matrices. A discretization problem is formulated in order to numerically compute the $\epsilon$-optimal solutions, and a computational procedure is designed to achieve this purpose.

1. Introduction

The theory of the continuous-time linear programming problem originated from the “bottleneck problem” proposed by Bellman [1]. A continuous-time linear programming problem with constant matrices was formulated and studied rigorously by Tyndall [2,3]. In this paper, we study the case of time-dependent matrices, which is more complicated than that of constant matrices. Although Levinson [4] also studied the problem with time-dependent matrices, no numerical methodology was proposed to effectively calculate the optimal solution. Wu [5] designed a computational procedure to calculate approximate optimal solutions. When the data in the continuous-time linear programming problem with time-dependent matrices are uncertain, a robust counterpart should be established and solved, which is the purpose of this paper.
The separated continuous-time linear programming problem is a subclass of the continuous-time linear programming problem, and it can be used to model job-shop scheduling problems. Owing to its special structure, many researchers such as Anderson et al. [6,7,8], Fleischer and Sethuraman [9], and Pullan [10,11,12,13,14] have paid much attention to investigating its optimal solutions, although without providing numerical techniques. On the other hand, many interesting theoretical results have also been established by Meidan and Perold [15], Papageorgiou [16], and Schechter [17]. From the computational viewpoint, Weiss [18] designed a simplex-like algorithm that can be used to solve the separated continuous-time linear programming problem. In Wu [5,19], different computational procedures have been proposed to solve the continuous-time linear programming problem in which the functions are assumed to be piecewise continuous rather than continuous on the time interval $[0,T]$. In particular, Wu [5] solved the more complicated problem that involves time-dependent matrices. In this paper, we consider the uncertain continuous-time linear programming problem with time-dependent matrices, which is more complicated than the problem studied in Wu [5].
There are many types of continuous-time optimization problems that have been theoretically studied without considering the numerical issue. For example, nonlinear types of continuous-time optimization problems have been studied by Farr and Hanson [20,21], Grinold [22,23], Hanson and Mond [24], Reiland [25,26], Reiland and Hanson [27], and Singh [28]. On the other hand, Rojas-Medar et al. [29], and Singh and Farr [30] studied nonsmooth continuous-time optimization problems. Additionally, Nobakhtian and Pouryayevali [31,32] studied nonsmooth continuous-time multiobjective programming problems. In particular, continuous-time fractional programming problems have been theoretically investigated by Zalmai [33,34,35,36]. From the numerical viewpoint, Wen and Wu [37,38,39] have developed many different numerical techniques to solve continuous-time linear fractional programming problems. As a matter of fact, numerically solving continuous-time optimization problems is a difficult task. Even a medium-sized continuous-time optimization problem may require a great deal of computational resources.
Solving optimization problems that involve uncertain data has attracted many researchers. The pioneering work on stochastic optimization was initiated by Dantzig [40], in which the uncertain data were driven by observed probabilities. The main difficulty is fitting the uncertain data with a known exact probability distribution function. Alternatively, so-called robust optimization paves another avenue to model optimization problems with uncertain data. The basic idea of robust optimization is that each uncertain datum falls into a predetermined set. In other words, the uncertainty can be circumscribed beforehand. For example, real-valued data can be assumed to fall into a bounded closed interval in $\mathbb{R}$ for convenience. Ben-Tal and Nemirovski [41,42], and El Ghaoui [43,44] proposed to solve so-called robust optimization problems by assuming that the uncertain data fall into uncertainty sets. The interested reader may refer to the articles contributed by Averbakh and Zhao [45], Ben-Tal et al. [46], Bertsimas et al. [47,48,49], Chen et al. [50], Erdoǧan and Iyengar [51], Zhang [52], and the references therein. Wu [53] proposed a computational procedure to solve the robust continuous-time linear programming problem with constant matrices. In this paper, we solve the robust continuous-time linear programming problem with time-dependent matrices by designing a practical algorithm. We emphasize that problems with time-dependent matrices are more complicated than problems with constant matrices. In Section 2, we formulate a robust continuous-time linear programming problem with time-dependent matrices and transform it into a conventional form of the continuous-time linear programming problem with time-dependent matrices under some algebraic calculation. In Section 3, in order to numerically solve the desired problems, a discretization problem is introduced.
In Section 4, we derive an analytic formula for the error bound. We also introduce the concept of an $\epsilon$-optimal solution to obtain the approximate solution. In Section 5, the convergence of the approximate solutions is studied. In Section 6, we design a computational procedure and provide a numerical example to demonstrate the usefulness of this practical algorithm.

2. Robust Continuous-Time Linear Programming Problems

We consider the following continuous-time linear programming problem with time-dependent matrices:
\[
(\mathrm{CLP}) \quad
\begin{aligned}
\max \ \ & \sum_{j=1}^{q} \int_{0}^{T} a_{j}(t) \cdot z_{j}(t)\,dt \\
\text{subject to} \ \ & \sum_{j=1}^{q} B_{ij}(t) \cdot z_{j}(t) \leq c_{i}(t) + \sum_{j=1}^{q} \int_{0}^{t} K_{ij}(t,s) \cdot z_{j}(s)\,ds \\
& \qquad \text{for all } t \in [0,T] \text{ and } i = 1,\dots,p; \\
& z_{j} \in L^{2}[0,T] \text{ and } z_{j}(t) \geq 0 \text{ for all } j = 1,\dots,q \text{ and } t \in [0,T],
\end{aligned}
\]
where $B_{ij}$ and $K_{ij}$ are nonnegative real-valued functions defined on $[0,T]$ and $[0,T]\times[0,T]$, respectively, for $i=1,\dots,p$ and $j=1,\dots,q$. When the real-valued functions $c_i$ are assumed to be nonnegative on $[0,T]$ for $i=1,\dots,p$, it is obvious that the primal problem (CLP) is feasible with the trivial feasible solution $z_j(t)=0$ for all $j=1,\dots,q$.
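As a quick illustration, a candidate solution can be checked against the (CLP) constraint on a sampled time grid. The data below are hypothetical placeholders for $p=q=1$, and the integral is approximated by a midpoint rule; this is only a numerical sketch, not part of the method developed in this paper:

```python
# Numerically check the (CLP) constraint B(t)*z(t) <= c(t) + integral_0^t K(t,s)*z(s) ds
# on a sampled grid, for hypothetical one-dimensional data (p = q = 1).

T = 1.0

B = lambda t: 2.0 + t              # nonnegative B_11(t)
c = lambda t: 1.0 + 0.5 * t        # nonnegative c_1(t)
K = lambda t, s: 0.3               # nonnegative kernel K_11(t, s)

def feasible(z, m=100):
    """Midpoint-rule check of the constraint at t = 0, T/m, ..., T."""
    for idx in range(m + 1):
        t = T * idx / m
        du = t / m if t > 0 else 0.0
        integral = sum(K(t, (k + 0.5) * du) * z((k + 0.5) * du) for k in range(m)) * du
        if B(t) * z(t) > c(t) + integral + 1e-9:
            return False
    return True

# A nonzero candidate and the trivial solution z = 0 are both feasible here.
print(feasible(lambda t: 0.4), feasible(lambda t: 0.0))
```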
The functions $a_j$ and $c_i$ can be certain or uncertain data for $i=1,\dots,p$ and $j=1,\dots,q$. When the functions $a_j$ and $c_i$ are assumed to be uncertain, they are considered to be pointwise uncertain. In other words, given each $t\in[0,T]$, the uncertain data $a_j(t)$ and $c_i(t)$ are assumed to fall into the uncertainty sets $V_{a_j}(t)$ and $V_{c_i}(t)$, respectively, which are predetermined by the decision-makers. When the functions $a_j$ and $c_i$ are assumed to be certain, the function values $a_j(t)$ and $c_i(t)$ are certain for each $t\in[0,T]$; in this case, we can also consider the uncertainty sets $V_{a_j}(t)=\{a_j(t)\}$ and $V_{c_i}(t)=\{c_i(t)\}$ to be singleton sets. We denote by $I(a)$ and $I(c)$ the sets of indices for which the functions $a_j$ and $c_i$, respectively, are assumed to be uncertain. In other words, if $j\in I(a)$, the function $a_j$ is uncertain, and if $i\in I(c)$, the function $c_i$ is uncertain.
We also assume that some of the functions $B_{ij}(t)$ and $K_{ij}(t,s)$ are pointwise uncertain by similarly considering the uncertainty sets $U_{B_{ij}}(t)$ and $U_{K_{ij}}(t,s)$, respectively. Given any fixed $i\in\{1,2,\dots,p\}$, we also denote by $I_i(B)$ and $I_i(K)$ the sets of indices for which the functions $B_{ij}$ and $K_{ij}$, respectively, are assumed to be uncertain. Therefore, $I_i(B)$ and $I_i(K)$ are subsets of $\{1,2,\dots,q\}$.
The robust counterpart of the original continuous-time linear programming problem (CLP) requires the constraints to hold for every realization of the data in the corresponding uncertainty sets, and it is formulated as follows:
\[
(\mathrm{RCLP}) \quad
\begin{aligned}
\max \ \ & \sum_{j=1}^{q} \int_{0}^{T} a_{j}(t)\cdot z_{j}(t)\,dt \\
\text{subject to} \ \ & \sum_{j=1}^{q} B_{ij}(t)\cdot z_{j}(t) \leq c_{i}(t) + \sum_{j=1}^{q} \int_{0}^{t} K_{ij}(t,s)\cdot z_{j}(s)\,ds \\
& \qquad \text{for all } t\in[0,T] \text{ and } i=1,\dots,p; \\
& z_{j}\in L^{2}[0,T] \text{ and } z_{j}(t)\geq 0 \text{ for } j=1,\dots,q \text{ and } t\in[0,T]; \\
& \text{for all } a_{j}(t)\in V_{a_{j}}(t) \text{ for all } t\in[0,T] \text{ and } j=1,\dots,q; \\
& \text{for all } c_{i}(t)\in V_{c_{i}}(t) \text{ for all } t\in[0,T] \text{ and } i=1,\dots,p; \\
& \text{for all } B_{ij}(t)\in U_{B_{ij}}(t) \text{ for all } t\in[0,T],\ i=1,\dots,p \text{ and } j=1,\dots,q; \\
& \text{for all } K_{ij}(t,s)\in U_{K_{ij}}(t,s) \text{ for all } (t,s)\in[0,T]\times[0,T],\ i=1,\dots,p \text{ and } j=1,\dots,q.
\end{aligned}
\]
The robust counterpart (RCLP) shown above is a semi-infinite problem, since it has infinitely many constraints; therefore, it is hard to solve directly. Usually, the uncertainty sets are taken to be bounded closed intervals in $\mathbb{R}$. When the uncertainty sets $U_{B_{ij}}(t)$, $U_{K_{ij}}(t,s)$, $V_{a_j}(t)$, and $V_{c_i}(t)$ are taken to be bounded closed intervals in $\mathbb{R}$, the semi-infinite problem (RCLP) can be transformed into a conventional continuous-time linear programming problem with time-dependent matrices. The uncertainty sets are described below.
  • For $B_{ij}(t)$ with $j\in I_i(B)$ and $K_{ij}(t,s)$ with $j\in I_i(K)$, the uncertain data $B_{ij}(t)$ and $K_{ij}(t,s)$ are assumed to fall into the bounded closed intervals given by
\[ U_{B_{ij}}(t) = \left[ B_{ij}^{(0)}(t) - \hat{B}_{ij}(t),\ B_{ij}^{(0)}(t) + \hat{B}_{ij}(t) \right] \]
    and
\[ U_{K_{ij}}(t,s) = \left[ K_{ij}^{(0)}(t,s) - \hat{K}_{ij}(t,s),\ K_{ij}^{(0)}(t,s) + \hat{K}_{ij}(t,s) \right], \]
    respectively, where $B_{ij}^{(0)}(t)\geq 0$ and $K_{ij}^{(0)}(t,s)\geq 0$ denote the known nominal data of $B_{ij}(t)$ and $K_{ij}(t,s)$, respectively, and $\hat{B}_{ij}(t)\geq 0$ and $\hat{K}_{ij}(t,s)\geq 0$ denote the uncertainties such that
\[ B_{ij}^{(0)}(t) - \hat{B}_{ij}(t) \geq 0 \quad\text{and}\quad K_{ij}^{(0)}(t,s) - \hat{K}_{ij}(t,s) \geq 0. \]
    For $j\notin I_i(B)$, we also use the notation $B_{ij}^{(0)}(t)$ to denote the certain data with uncertainty $\hat{B}_{ij}(t)=0$, and use the notation $K_{ij}^{(0)}(t,s)$ to denote the certain data with uncertainty $\hat{K}_{ij}(t,s)=0$ for $j\notin I_i(K)$.
  • For $a_j$ with $j\in I(a)$ and $c_i$ with $i\in I(c)$, the uncertain data $a_j$ and $c_i$ are assumed to fall into the bounded closed intervals given by
\[ V_{a_j}(t) = \left[ a_j^{(0)}(t) - \hat{a}_j(t),\ a_j^{(0)}(t) + \hat{a}_j(t) \right] \quad\text{and}\quad V_{c_i}(t) = \left[ c_i^{(0)}(t) - \hat{c}_i(t),\ c_i^{(0)}(t) + \hat{c}_i(t) \right], \]
    where $a_j^{(0)}(t)$ and $c_i^{(0)}(t)$ denote the known nominal data of $a_j(t)$ and $c_i(t)$, respectively, and $\hat{a}_j(t)\geq 0$ and $\hat{c}_i(t)\geq 0$ denote the uncertainties of $a_j(t)$ and $c_i(t)$, respectively. For $j\notin I(a)$, we also use the notation $a_j^{(0)}(t)$ to denote the certain data with uncertainty $\hat{a}_j(t)=0$, and use the notation $c_i^{(0)}(t)$ to denote the certain data with uncertainty $\hat{c}_i(t)=0$ for $i\notin I(c)$.
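Because the decision variables are nonnegative, the worst case over these interval uncertainty sets is attained at the endpoints: $B^{(0)}+\hat{B}$ on the left-hand side and $c^{(0)}-\hat{c}$ on the right-hand side of a constraint. A minimal numerical sketch with hypothetical data at a fixed $t$:

```python
import random

random.seed(0)

# Hypothetical nominal data and uncertainties at a fixed time t (p = 1, q = 2).
B0, B_hat = [2.0, 1.0], [0.5, 0.3]     # B^(0)_1j(t) and hat{B}_1j(t)
c0, c_hat = 4.0, 1.0                   # c^(0)_1(t) and hat{c}_1(t)
z = [0.8, 0.6]                         # z_j(t) >= 0

# Worst case over all B in [B0 - B_hat, B0 + B_hat] and c in [c0 - c_hat, c0 + c_hat].
worst_lhs = sum((b0 + bh) * zj for b0, bh, zj in zip(B0, B_hat, z))
worst_rhs = c0 - c_hat

# Every realization of the uncertain data is dominated by the endpoint choice.
for _ in range(1000):
    B = [b0 + random.uniform(-bh, bh) for b0, bh in zip(B0, B_hat)]
    c = c0 + random.uniform(-c_hat, c_hat)
    lhs = sum(b * zj for b, zj in zip(B, z))
    assert lhs <= worst_lhs + 1e-12 and c >= worst_rhs - 1e-12

print(worst_lhs, worst_rhs, worst_lhs <= worst_rhs)
```

If the worst-case inequality holds, the constraint holds for every realization of the data; this is exactly the reduction that turns (RCLP) into a conventional problem below.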
Under the above settings, the robust counterpart ( RCLP ) is written as follows:
\[
(\mathrm{RCLP}) \quad
\begin{aligned}
\max \ \ & \sum_{j\notin I(a)} \int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt + \sum_{j\in I(a)} \int_0^T a_j(t)\cdot z_j(t)\,dt \\
\text{subject to} \ \ & \sum_{\{j:\,j\notin I_i(B)\}} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} B_{ij}(t)\cdot z_j(t) \\
& \quad \leq c_i^{(0)}(t) + \sum_{\{j:\,j\notin I_i(K)\}} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds + \sum_{\{j:\,j\in I_i(K)\}} \int_0^t K_{ij}(t,s)\cdot z_j(s)\,ds \\
& \qquad \text{for } i\notin I(c) \text{ and for all } t\in[0,T]; \\
& \sum_{\{j:\,j\notin I_i(B)\}} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} B_{ij}(t)\cdot z_j(t) \\
& \quad \leq c_i(t) + \sum_{\{j:\,j\notin I_i(K)\}} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds + \sum_{\{j:\,j\in I_i(K)\}} \int_0^t K_{ij}(t,s)\cdot z_j(s)\,ds \\
& \qquad \text{for } i\in I(c) \text{ and for all } t\in[0,T]; \\
& z_j\in L^2[0,T] \text{ and } z_j(t)\geq 0 \text{ for } j=1,\dots,q \text{ and } t\in[0,T]; \\
& \text{for all } B_{ij}(t)\in U_{B_{ij}}(t) \text{ with } j\in I_i(B) \text{ for all } t\in[0,T]; \\
& \text{for all } K_{ij}(t,s)\in U_{K_{ij}}(t,s) \text{ with } j\in I_i(K) \text{ for all } (t,s)\in[0,T]\times[0,T]; \\
& \text{for all } a_j(t)\in V_{a_j}(t) \text{ with } j\in I(a) \text{ for all } t\in[0,T]; \\
& \text{for all } c_i(t)\in V_{c_i}(t) \text{ with } i\in I(c) \text{ for all } t\in[0,T].
\end{aligned}
\]
We convert the above semi-infinite problem (RCLP) into a conventional continuous-time linear programming problem with time-dependent matrices. We first rewrite problem (RCLP) in the following equivalent form:
\[
(\mathrm{RCLP1}) \quad
\begin{aligned}
\max \ \ & \phi \\
\text{subject to} \ \ & \phi \leq \sum_{j\notin I(a)} \int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt + \sum_{j\in I(a)} \int_0^T a_j(t)\cdot z_j(t)\,dt; \\
& \sum_{\{j:\,j\notin I_i(B)\}} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} B_{ij}(t)\cdot z_j(t) \\
& \quad \leq c_i^{(0)}(t) + \sum_{\{j:\,j\notin I_i(K)\}} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds + \sum_{\{j:\,j\in I_i(K)\}} \int_0^t K_{ij}(t,s)\cdot z_j(s)\,ds \\
& \qquad \text{for } i\notin I(c) \text{ and for all } t\in[0,T]; \\
& \sum_{\{j:\,j\notin I_i(B)\}} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} B_{ij}(t)\cdot z_j(t) \\
& \quad \leq c_i(t) + \sum_{\{j:\,j\notin I_i(K)\}} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds + \sum_{\{j:\,j\in I_i(K)\}} \int_0^t K_{ij}(t,s)\cdot z_j(s)\,ds \\
& \qquad \text{for } i\in I(c) \text{ and for all } t\in[0,T]; \\
& \phi\in\mathbb{R};\ z_j\in L^2[0,T] \text{ and } z_j(t)\geq 0 \text{ for } j=1,\dots,q \text{ and } t\in[0,T]; \\
& \text{for all } B_{ij}(t)\in U_{B_{ij}}(t) \text{ with } j\in I_i(B) \text{ for all } t\in[0,T]; \\
& \text{for all } K_{ij}(t,s)\in U_{K_{ij}}(t,s) \text{ with } j\in I_i(K) \text{ for all } (t,s)\in[0,T]\times[0,T]; \\
& \text{for all } a_j(t)\in V_{a_j}(t) \text{ with } j\in I(a) \text{ for all } t\in[0,T]; \\
& \text{for all } c_i(t)\in V_{c_i}(t) \text{ with } i\in I(c) \text{ for all } t\in[0,T].
\end{aligned}
\]
Given any fixed $i\in\{1,\dots,p\}$, for $j\in I_i(B)$, since $z_j(t)\geq 0$ and $B_{ij}(t)\leq B_{ij}^{(0)}(t)+\hat{B}_{ij}(t)$, we have
\[ \sum_{\{j:\,j\notin I_i(B)\}} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} B_{ij}(t)\cdot z_j(t) \leq \sum_{j=1}^{q} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} \hat{B}_{ij}(t)\cdot z_j(t). \tag{1} \]
Similarly, for $j\in I_i(K)$, since $z_j(s)\geq 0$ and $K_{ij}^{(0)}(t,s)-\hat{K}_{ij}(t,s)\leq K_{ij}(t,s)$, we also have
\[ \sum_{\{j:\,j\notin I_i(K)\}} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds + \sum_{\{j:\,j\in I_i(K)\}} \int_0^t K_{ij}(t,s)\cdot z_j(s)\,ds \geq \sum_{j=1}^{q} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds - \sum_{\{j:\,j\in I_i(K)\}} \int_0^t \hat{K}_{ij}(t,s)\cdot z_j(s)\,ds. \tag{2} \]
Using the inequalities (1) and (2), we consider the following cases.
  • For $i\in I(c)$, since $c_i^{(0)}(t)-\hat{c}_i(t)\leq c_i(t)$ for all $t\in[0,T]$, we obtain
\[
\begin{aligned}
& \sum_{\{j:\,j\notin I_i(B)\}} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} B_{ij}(t)\cdot z_j(t) - \sum_{\{j:\,j\notin I_i(K)\}} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds \\
& \qquad - \sum_{\{j:\,j\in I_i(K)\}} \int_0^t K_{ij}(t,s)\cdot z_j(s)\,ds - c_i(t) \\
& \leq \sum_{j=1}^{q} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} \hat{B}_{ij}(t)\cdot z_j(t) - \sum_{j=1}^{q} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds \\
& \qquad + \sum_{\{j:\,j\in I_i(K)\}} \int_0^t \hat{K}_{ij}(t,s)\cdot z_j(s)\,ds - c_i^{(0)}(t) + \hat{c}_i(t)
\end{aligned}
\]
    for all $B_{ij}(t)\in U_{B_{ij}}(t)$, $K_{ij}(t,s)\in U_{K_{ij}}(t,s)$, and $c_i(t)\in V_{c_i}(t)$, which implies
\[
\begin{aligned}
& \max_{B_{ij},\,K_{ij},\,c_i} \left\{ \sum_{\{j:\,j\notin I_i(B)\}} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} B_{ij}(t)\cdot z_j(t) - \sum_{\{j:\,j\notin I_i(K)\}} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds \right. \\
& \qquad\qquad \left. - \sum_{\{j:\,j\in I_i(K)\}} \int_0^t K_{ij}(t,s)\cdot z_j(s)\,ds - c_i(t) \right\} \\
& = \sum_{j=1}^{q} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} \hat{B}_{ij}(t)\cdot z_j(t) - \sum_{j=1}^{q} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds \\
& \qquad + \sum_{\{j:\,j\in I_i(K)\}} \int_0^t \hat{K}_{ij}(t,s)\cdot z_j(s)\,ds - c_i^{(0)}(t) + \hat{c}_i(t), \tag{3}
\end{aligned}
\]
    where the equality can be attained.
  • For $i\notin I(c)$, we obtain
\[
\begin{aligned}
& \sum_{\{j:\,j\notin I_i(B)\}} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} B_{ij}(t)\cdot z_j(t) - \sum_{\{j:\,j\notin I_i(K)\}} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds \\
& \qquad - \sum_{\{j:\,j\in I_i(K)\}} \int_0^t K_{ij}(t,s)\cdot z_j(s)\,ds - c_i^{(0)}(t) \\
& \leq \sum_{j=1}^{q} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} \hat{B}_{ij}(t)\cdot z_j(t) - \sum_{j=1}^{q} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds \\
& \qquad + \sum_{\{j:\,j\in I_i(K)\}} \int_0^t \hat{K}_{ij}(t,s)\cdot z_j(s)\,ds - c_i^{(0)}(t),
\end{aligned}
\]
    which implies
\[
\begin{aligned}
& \max_{B_{ij},\,K_{ij}} \left\{ \sum_{\{j:\,j\notin I_i(B)\}} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} B_{ij}(t)\cdot z_j(t) - \sum_{\{j:\,j\notin I_i(K)\}} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds \right. \\
& \qquad\qquad \left. - \sum_{\{j:\,j\in I_i(K)\}} \int_0^t K_{ij}(t,s)\cdot z_j(s)\,ds - c_i^{(0)}(t) \right\} \\
& = \sum_{j=1}^{q} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} \hat{B}_{ij}(t)\cdot z_j(t) - \sum_{j=1}^{q} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds \\
& \qquad + \sum_{\{j:\,j\in I_i(K)\}} \int_0^t \hat{K}_{ij}(t,s)\cdot z_j(s)\,ds - c_i^{(0)}(t), \tag{4}
\end{aligned}
\]
where the equality can be attained. Since $a_j^{(0)}(t)-\hat{a}_j(t)\leq a_j(t)$ for $j\in I(a)$ and $t\in[0,T]$, we have
\[ \sum_{j=1}^{q} \int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt - \sum_{j\in I(a)} \int_0^T \hat{a}_j(t)\cdot z_j(t)\,dt \leq \sum_{j\notin I(a)} \int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt + \sum_{j\in I(a)} \int_0^T a_j(t)\cdot z_j(t)\,dt, \]
which implies
\[ \min_{a_j} \left\{ \sum_{j\notin I(a)} \int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt + \sum_{j\in I(a)} \int_0^T a_j(t)\cdot z_j(t)\,dt \right\} = \sum_{j=1}^{q} \int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt - \sum_{j\in I(a)} \int_0^T \hat{a}_j(t)\cdot z_j(t)\,dt, \tag{5} \]
where the equality can also be attained. From the equalities (3)–(5), it follows that $(\phi, z(t)) = (\phi, z_1(t),\dots,z_q(t))$ is a feasible solution of problem (RCLP1) if and only if it satisfies the following inequalities:
\[
\begin{aligned}
& \sum_{j=1}^{q} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} \hat{B}_{ij}(t)\cdot z_j(t) \leq \sum_{j=1}^{q} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds - \sum_{\{j:\,j\in I_i(K)\}} \int_0^t \hat{K}_{ij}(t,s)\cdot z_j(s)\,ds + c_i^{(0)}(t) - \hat{c}_i(t) \quad\text{for } i\in I(c); \\
& \sum_{j=1}^{q} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} \hat{B}_{ij}(t)\cdot z_j(t) \leq \sum_{j=1}^{q} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds - \sum_{\{j:\,j\in I_i(K)\}} \int_0^t \hat{K}_{ij}(t,s)\cdot z_j(s)\,ds + c_i^{(0)}(t) \quad\text{for } i\notin I(c); \\
& \phi \leq \sum_{j=1}^{q} \int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt - \sum_{j\in I(a)} \int_0^T \hat{a}_j(t)\cdot z_j(t)\,dt.
\end{aligned}
\]
This shows that problem ( RCLP 1 ) is equivalent to the following problem:
\[
(\mathrm{RCLP2}) \quad
\begin{aligned}
\max \ \ & \phi \\
\text{subject to} \ \ & \phi \leq \sum_{j=1}^{q} \int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt - \sum_{j\in I(a)} \int_0^T \hat{a}_j(t)\cdot z_j(t)\,dt; \\
& \sum_{j=1}^{q} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} \hat{B}_{ij}(t)\cdot z_j(t) \\
& \quad \leq c_i^{(0)}(t) - \hat{c}_i(t) + \sum_{j=1}^{q} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds - \sum_{\{j:\,j\in I_i(K)\}} \int_0^t \hat{K}_{ij}(t,s)\cdot z_j(s)\,ds \\
& \qquad \text{for all } t\in[0,T] \text{ and } i\in I(c); \\
& \sum_{j=1}^{q} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} \hat{B}_{ij}(t)\cdot z_j(t) \\
& \quad \leq c_i^{(0)}(t) + \sum_{j=1}^{q} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds - \sum_{\{j:\,j\in I_i(K)\}} \int_0^t \hat{K}_{ij}(t,s)\cdot z_j(s)\,ds \\
& \qquad \text{for all } t\in[0,T] \text{ and } i\notin I(c); \\
& \phi\in\mathbb{R};\ z_j\in L^2[0,T] \text{ and } z_j(t)\geq 0 \text{ for } j=1,\dots,q \text{ and } t\in[0,T],
\end{aligned}
\]
which can also be rewritten as the following continuous-time linear programming problem:
\[
(\mathrm{RCLP3}) \quad
\begin{aligned}
\max \ \ & \sum_{j=1}^{q} \int_0^T a_j^{(0)}(t)\cdot z_j(t)\,dt - \sum_{j\in I(a)} \int_0^T \hat{a}_j(t)\cdot z_j(t)\,dt \\
\text{subject to} \ \ & \sum_{j=1}^{q} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} \hat{B}_{ij}(t)\cdot z_j(t) \\
& \quad \leq c_i^{(0)}(t) - \hat{c}_i(t) + \sum_{j=1}^{q} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds - \sum_{\{j:\,j\in I_i(K)\}} \int_0^t \hat{K}_{ij}(t,s)\cdot z_j(s)\,ds \\
& \qquad \text{for all } t\in[0,T] \text{ and } i\in I(c); \\
& \sum_{j=1}^{q} B_{ij}^{(0)}(t)\cdot z_j(t) + \sum_{\{j:\,j\in I_i(B)\}} \hat{B}_{ij}(t)\cdot z_j(t) \\
& \quad \leq c_i^{(0)}(t) + \sum_{j=1}^{q} \int_0^t K_{ij}^{(0)}(t,s)\cdot z_j(s)\,ds - \sum_{\{j:\,j\in I_i(K)\}} \int_0^t \hat{K}_{ij}(t,s)\cdot z_j(s)\,ds \\
& \qquad \text{for all } t\in[0,T] \text{ and } i\notin I(c); \\
& z_j\in L^2[0,T] \text{ and } z_j(t)\geq 0 \text{ for } j=1,\dots,q \text{ and } t\in[0,T].
\end{aligned}
\]
The duality theory of continuous-time linear programming states that the dual problem of (RCLP3) can be formulated as the following minimization problem:
\[
(\mathrm{DRCLP3}) \quad
\begin{aligned}
\min \ \ & \sum_{i=1}^{p} \int_0^T c_i^{(0)}(t)\cdot w_i(t)\,dt - \sum_{i\in I(c)} \int_0^T \hat{c}_i(t)\cdot w_i(t)\,dt \\
\text{subject to} \ \ & \sum_{i=1}^{p} B_{ij}^{(0)}(t)\cdot w_i(t) + \sum_{\{i:\,j\in I_i(B)\}} \hat{B}_{ij}(t)\cdot w_i(t) \\
& \quad \geq a_j^{(0)}(t) - \hat{a}_j(t) + \sum_{i=1}^{p} \int_t^T K_{ij}^{(0)}(s,t)\cdot w_i(s)\,ds - \sum_{\{i:\,j\in I_i(K)\}} \int_t^T \hat{K}_{ij}(s,t)\cdot w_i(s)\,ds \\
& \qquad \text{for all } t\in[0,T] \text{ and } j\in I(a); \\
& \sum_{i=1}^{p} B_{ij}^{(0)}(t)\cdot w_i(t) + \sum_{\{i:\,j\in I_i(B)\}} \hat{B}_{ij}(t)\cdot w_i(t) \\
& \quad \geq a_j^{(0)}(t) + \sum_{i=1}^{p} \int_t^T K_{ij}^{(0)}(s,t)\cdot w_i(s)\,ds - \sum_{\{i:\,j\in I_i(K)\}} \int_t^T \hat{K}_{ij}(s,t)\cdot w_i(s)\,ds \\
& \qquad \text{for all } t\in[0,T] \text{ and } j\notin I(a); \\
& w_i\in L^2[0,T] \text{ and } w_i(t)\geq 0 \text{ for } i=1,\dots,p \text{ and } t\in[0,T].
\end{aligned}
\]

3. Discretization

In order to efficiently develop the numerical method, the following conditions are assumed to be satisfied.
  • For $i=1,\dots,p$ and $j=1,\dots,q$, the functions $B_{ij}^{(0)}$ and $K_{ij}^{(0)}$ are assumed to be nonnegative on $[0,T]$ and $[0,T]\times[0,T]$, respectively. The functions $B_{ij}^{(0)}-\hat{B}_{ij}$ for $j\in I_i(B)$ and $K_{ij}^{(0)}-\hat{K}_{ij}$ for $j\in I_i(K)$ are also assumed to be nonnegative on $[0,T]$ and $[0,T]\times[0,T]$, respectively.
  • For $i=1,\dots,p$ and $j=1,\dots,q$, the functions $a_j^{(0)}$ and $c_i^{(0)}$ are assumed to be piecewise continuous on $[0,T]$. For $j\in I(a)$ and $i\in I(c)$, the functions $\hat{a}_j$ and $\hat{c}_i$ are assumed to be piecewise continuous on $[0,T]$, which implies that the functions $a_j^{(0)}-\hat{a}_j$ and $c_i^{(0)}-\hat{c}_i$ are piecewise continuous on $[0,T]$ for $j\in I(a)$ and $i\in I(c)$.
  • For each $j=1,\dots,q$ and $t\in[0,T]$, the following inequality is satisfied:
\[ \sum_{i=1}^{p} B_{ij}^{(0)}(t) + \sum_{\{i:\,j\in I_i(B)\}} \hat{B}_{ij}(t) > 0. \tag{6} \]
  • For each $i=1,\dots,p$ and for
\[ \bar{B}_i \equiv \min_{j\in I_i(B)} \inf_{t\in[0,T]} \left\{ B_{ij}^{(0)}(t)+\hat{B}_{ij}(t) : B_{ij}^{(0)}(t)+\hat{B}_{ij}(t) > 0 \right\} \]
    and
\[ \tilde{B}_i \equiv \min_{j\notin I_i(B)} \inf_{t\in[0,T]} \left\{ B_{ij}^{(0)}(t) : B_{ij}^{(0)}(t) > 0 \right\}, \]
    the following inequality is satisfied:
\[ \min_{i=1,\dots,p} \min\left\{ \bar{B}_i, \tilde{B}_i \right\} \equiv \sigma > 0. \tag{7} \]
    In other words,
\[ \sigma \leq \begin{cases} B_{ij}^{(0)}(t)+\hat{B}_{ij}(t) & \text{if } B_{ij}^{(0)}(t)+\hat{B}_{ij}(t)\neq 0 \text{ for } j\in I_i(B) \\ B_{ij}^{(0)}(t) & \text{if } B_{ij}^{(0)}(t)\neq 0 \text{ for } j\notin I_i(B). \end{cases} \]
Let $A_j$, $S_i$, $\mathcal{B}_{ij}$, and $\mathcal{K}_{ij}$ denote the sets of discontinuities of $a_j(t)$, $c_i(t)$, $B_{ij}(t)$, and $K_{ij}(t,s)$, respectively. Then, $A_j$, $S_i$, and $\mathcal{B}_{ij}$ are finite subsets of $[0,T]$, and $\mathcal{K}_{ij}$ is a finite subset of $[0,T]\times[0,T]$. We also write
\[ \mathcal{K}_{ij} = \mathcal{K}_{ij}^{(1)} \times \mathcal{K}_{ij}^{(2)}, \]
where $\mathcal{K}_{ij}^{(1)}$ and $\mathcal{K}_{ij}^{(2)}$ are finite subsets of $[0,T]$. In order to determine the partition of the time interval $[0,T]$, we consider the following set:
\[ D = \left( \bigcup_{j=1}^{q} A_j \right) \cup \left( \bigcup_{i=1}^{p} S_i \right) \cup \left( \bigcup_{i=1}^{p}\bigcup_{j=1}^{q} \mathcal{B}_{ij} \right) \cup \left( \bigcup_{i=1}^{p}\bigcup_{j=1}^{q} \mathcal{K}_{ij}^{(1)} \right) \cup \left( \bigcup_{i=1}^{p}\bigcup_{j=1}^{q} \mathcal{K}_{ij}^{(2)} \right) \cup \{0, T\}. \]
Then, $D$ is a finite subset of $[0,T]$ written as
\[ D = \{ d_0, d_1, d_2, \dots, d_r \}, \]
where, for convenience, we set $d_0=0$ and $d_r=T$. Let $P_n$ be a partition of $[0,T]$ satisfying $D\subseteq P_n$. In other words, each closed interval $[d_v, d_{v+1}]$ is further divided into many closed subintervals.
Let
\[ P_n = \left\{ e_0^{(n)}, e_1^{(n)}, \dots, e_n^{(n)} \right\}, \]
where $e_0^{(n)}=0$ and $e_n^{(n)}=T$. The $n$ closed subintervals are denoted by
\[ \bar{E}_l^{(n)} = \left[ e_{l-1}^{(n)}, e_l^{(n)} \right] \quad\text{for } l=1,\dots,n. \]
For convenience, we also write
\[ E_l^{(n)} = \left( e_{l-1}^{(n)}, e_l^{(n)} \right) \quad\text{and}\quad F_l^{(n)} = \left[ e_{l-1}^{(n)}, e_l^{(n)} \right). \]
We denote by $d_l^{(n)}$ the length of the closed interval $\bar{E}_l^{(n)}$. Let
\[ \Vert P_n \Vert = \max_{l=1,\dots,n} d_l^{(n)} \]
and assume
\[ \Vert P_n \Vert \to 0 \ \text{ as } \ n\to\infty. \]
From a computational viewpoint, we also assume that there exist $n^{\prime}, n^{\prime\prime}\in\mathbb{N}$ satisfying
\[ n^{\prime}\cdot r \leq n \leq n^{\prime\prime}\cdot r \quad\text{and}\quad \Vert P_n \Vert \leq \frac{T}{n^{\prime}}. \]
When $n\to\infty$, we have $n^{\prime}\to\infty$. In the sequel, $n\to\infty$ implicitly means that $n^{\prime}\to\infty$.
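The construction above can be sketched as follows: collect the discontinuity points into the finite set $D$, then refine every $[d_v, d_{v+1}]$ into equal subintervals. The discontinuity points below are hypothetical placeholders:

```python
# Build a partition P_n of [0, T] that contains the finite set D of
# discontinuity points, by splitting each [d_v, d_{v+1}] into m equal pieces.

T = 2.0
D = [0.0, 0.5, 1.2, T]        # hypothetical set D with d_0 = 0 and d_r = T

def refine(D, m):
    pts = [D[0]]
    for a, b in zip(D[:-1], D[1:]):
        pts.extend(a + (b - a) * (k + 1) / m for k in range(m))
    return pts

P = refine(D, 4)
lengths = [b - a for a, b in zip(P[:-1], P[1:])]
norm = max(lengths)           # the mesh size ||P_n||

assert all(any(abs(d - e) < 1e-12 for e in P) for d in D)   # D is contained in P_n
assert abs(sum(lengths) - T) < 1e-12
print(len(P) - 1, norm)       # n and ||P_n||
```

Increasing `m` drives $\Vert P_n\Vert \to 0$ while $D \subseteq P_n$ is preserved, which is exactly the requirement on the partitions used here.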
Under the above construction of the partition $P_n$, it is clear that the functions $a_j^{(0)}-\hat{a}_j$ for $j\in I(a)$, $a_j^{(0)}$ for $j\notin I(a)$, $c_i^{(0)}-\hat{c}_i$ for $i\in I(c)$, and $c_i^{(0)}$ for $i\notin I(c)$ are continuous on each open interval $E_l^{(n)}$ for $l=1,\dots,n$. Now, we define
\[ a_{lj}^{(n)} = \begin{cases} \inf_{t\in\bar{E}_l^{(n)}} \left( a_j^{(0)}(t) - \hat{a}_j(t) \right) & \text{for } j\in I(a) \\ \inf_{t\in\bar{E}_l^{(n)}} a_j^{(0)}(t) & \text{for } j\notin I(a) \end{cases} \]
and
\[ c_{li}^{(n)} = \begin{cases} \inf_{t\in\bar{E}_l^{(n)}} \left( c_i^{(0)}(t) - \hat{c}_i(t) \right) & \text{for } i\in I(c) \\ \inf_{t\in\bar{E}_l^{(n)}} c_i^{(0)}(t) & \text{for } i\notin I(c) \end{cases} \]
and the vectors
\[ a_l^{(n)} = \left( a_{l1}^{(n)}, a_{l2}^{(n)}, \dots, a_{lq}^{(n)} \right) \in \mathbb{R}^q \quad\text{and}\quad c_l^{(n)} = \left( c_{l1}^{(n)}, c_{l2}^{(n)}, \dots, c_{lp}^{(n)} \right) \in \mathbb{R}^p. \]
Then,
\[ a_{lj}^{(n)} \leq \begin{cases} a_j^{(0)}(t) - \hat{a}_j(t) & \text{for } j\in I(a) \\ a_j^{(0)}(t) & \text{for } j\notin I(a) \end{cases} \quad\text{and}\quad c_{li}^{(n)} \leq \begin{cases} c_i^{(0)}(t) - \hat{c}_i(t) & \text{for } i\in I(c) \\ c_i^{(0)}(t) & \text{for } i\notin I(c) \end{cases} \]
for all $t\in\bar{E}_l^{(n)}$ and $l=1,\dots,n$.
For the time-dependent matrices $B(t)$ and $K(t,s)$, the $(i,j)$th entries of the constant matrices $B_l^{(n)}$ and $K_{lk}^{(n)}$ are defined and denoted by
\[ B_{lij}^{(n)} = \begin{cases} \sup_{t\in\bar{E}_l^{(n)}} \left( B_{ij}^{(0)}(t) + \hat{B}_{ij}(t) \right) & \text{for } j\in I_i(B) \\ \sup_{t\in\bar{E}_l^{(n)}} B_{ij}^{(0)}(t) & \text{for } j\notin I_i(B) \end{cases} \]
and
\[ K_{lkij}^{(n)} = \begin{cases} \inf_{(t,s)\in\bar{E}_l^{(n)}\times\bar{E}_k^{(n)}} \left( K_{ij}^{(0)}(t,s) - \hat{K}_{ij}(t,s) \right) & \text{for } j\in I_i(K) \\ \inf_{(t,s)\in\bar{E}_l^{(n)}\times\bar{E}_k^{(n)}} K_{ij}^{(0)}(t,s) & \text{for } j\notin I_i(K). \end{cases} \]
Then,
\[ B_{lij}^{(n)} \geq \begin{cases} B_{ij}^{(0)}(t) + \hat{B}_{ij}(t) & \text{for } j\in I_i(B) \\ B_{ij}^{(0)}(t) & \text{for } j\notin I_i(B) \end{cases} \]
for all $t\in\bar{E}_l^{(n)}$ and $l=1,\dots,n$, and
\[ K_{lkij}^{(n)} \leq \begin{cases} K_{ij}^{(0)}(t,s) - \hat{K}_{ij}(t,s) & \text{for } j\in I_i(K) \\ K_{ij}^{(0)}(t,s) & \text{for } j\notin I_i(K) \end{cases} \]
for all $(t,s)\in\bar{E}_l^{(n)}\times\bar{E}_k^{(n)}$ and $l,k=1,\dots,n$.
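These discretized coefficients take the conservative endpoint on each subinterval: a supremum (plus the uncertainty) for $B$ and an infimum (minus the uncertainty) for $K$. The following sketch approximates the sup/inf by dense sampling on a grid; the data functions are hypothetical placeholders, and exact sup/inf would exploit the known structure of the data:

```python
# Approximate B_l^(n) (sup of B0 + B_hat) and K_lk^(n) (inf of K0 - K_hat)
# on each subinterval by dense sampling, for hypothetical scalar data.

T, n, m = 1.0, 8, 40
edges = [T * l / n for l in range(n + 1)]       # partition points e_l^(n)

B0 = lambda t: 1.0 + 0.5 * t
B_hat = lambda t: 0.1 * t
K0 = lambda t, s: 0.2 + 0.1 * t * s
K_hat = lambda t, s: 0.05 * s

def grid(a, b):
    return [a + (b - a) * k / m for k in range(m + 1)]

# B_l^(n): sup over E_l of B0 + B_hat (an entry with j in I_i(B))
B_l = [max(B0(t) + B_hat(t) for t in grid(edges[l], edges[l + 1]))
       for l in range(n)]

# K_lk^(n): inf over E_l x E_k of K0 - K_hat (an entry with j in I_i(K))
K_lk = [[min(K0(t, s) - K_hat(t, s)
             for t in grid(edges[l], edges[l + 1])
             for s in grid(edges[k], edges[k + 1]))
         for k in range(n)] for l in range(n)]

# The bounds dominate (resp. underestimate) the data on the subintervals.
assert all(B_l[l] >= B0(t) + B_hat(t) - 1e-12
           for l in range(n) for t in grid(edges[l], edges[l + 1]))
assert K_lk[2][1] <= K0(0.3, 0.2) - K_hat(0.3, 0.2) + 1e-12
```

For the monotone data chosen here the sampled extrema are attained at subinterval endpoints, so the computed values are exact.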
Remark 1.
From (7), it follows that, if $B_{lij}^{(n)}\neq 0$, then $B_{lij}^{(n)}\geq\sigma>0$ for all $i=1,\dots,p$, $j=1,\dots,q$ and $l=1,\dots,n$. Given any fixed $t\in\bar{E}_l^{(n)}$, from (6), for any $j=1,\dots,q$, there exists $i_j\in\{1,\dots,p\}$ such that
\[ B_{i_j j}^{(0)}(t) + \hat{B}_{i_j j}(t) > 0 \ \text{ if } j\in I_{i_j}(B) \qquad\text{or}\qquad B_{i_j j}^{(0)}(t) > 0 \ \text{ if } j\notin I_{i_j}(B), \]
which suggests that $B_{l i_j j}^{(n)}\neq 0$, i.e., $B_{l i_j j}^{(n)}\geq\sigma>0$. In other words, for each $j$ and $l$, there exists $i_{lj}\in\{1,2,\dots,p\}$ such that $B_{l i_{lj} j}^{(n)}\geq\sigma>0$.
Given any $n\in\mathbb{N}$ and $l=1,\dots,n$, we formulate the following linear programming problem:
\[
(\mathrm{P}_n) \quad
\begin{aligned}
\max \ \ & \sum_{l=1}^{n}\sum_{j=1}^{q} d_l^{(n)}\cdot a_{lj}^{(n)}\cdot z_{lj} \\
\text{subject to} \ \ & \sum_{j=1}^{q} B_{1ij}^{(n)}\cdot z_{1j} \leq c_{1i}^{(n)} \ \text{ for } i=1,\dots,p; \\
& \sum_{j=1}^{q} B_{lij}^{(n)}\cdot z_{lj} \leq c_{li}^{(n)} + \sum_{k=1}^{l-1}\sum_{j=1}^{q} d_k^{(n)}\cdot K_{lkij}^{(n)}\cdot z_{kj} \ \text{ for } l=2,\dots,n \text{ and } i=1,\dots,p; \\
& z_{lj}\geq 0 \ \text{ for } l=1,\dots,n \text{ and } j=1,\dots,q.
\end{aligned}
\]
According to the duality theory of linear programming, the dual problem of $(\mathrm{P}_n)$ is given by
\[
(\hat{\mathrm{D}}_n) \quad
\begin{aligned}
\min \ \ & \sum_{l=1}^{n}\sum_{i=1}^{p} c_{li}^{(n)}\cdot\hat{w}_{li} \\
\text{subject to} \ \ & \sum_{i=1}^{p} B_{lij}^{(n)}\cdot\hat{w}_{li} \geq d_l^{(n)}\cdot a_{lj}^{(n)} + d_l^{(n)}\cdot\sum_{k=l+1}^{n}\sum_{i=1}^{p} K_{klij}^{(n)}\cdot\hat{w}_{ki} \\
& \qquad \text{for } l=1,\dots,n-1 \text{ and } j=1,\dots,q; \\
& \sum_{i=1}^{p} B_{nij}^{(n)}\cdot\hat{w}_{ni} \geq d_n^{(n)}\cdot a_{nj}^{(n)} \ \text{ for } j=1,\dots,q; \\
& \hat{w}_{li}\geq 0 \ \text{ for } l=1,\dots,n \text{ and } i=1,\dots,p.
\end{aligned}
\]
Now, let
\[ w_{li} = \frac{\hat{w}_{li}}{d_l^{(n)}}. \]
Dividing both sides of the constraints by $d_l^{(n)}$, the dual problem $(\hat{\mathrm{D}}_n)$ can be equivalently written as
\[
(\mathrm{D}_n) \quad
\begin{aligned}
\min \ \ & \sum_{l=1}^{n}\sum_{i=1}^{p} d_l^{(n)}\cdot c_{li}^{(n)}\cdot w_{li} \\
\text{subject to} \ \ & \sum_{i=1}^{p} B_{lij}^{(n)}\cdot w_{li} \geq a_{lj}^{(n)} + \sum_{k=l+1}^{n}\sum_{i=1}^{p} d_k^{(n)}\cdot K_{klij}^{(n)}\cdot w_{ki} \\
& \qquad \text{for } l=1,\dots,n-1 \text{ and } j=1,\dots,q; \\
& \sum_{i=1}^{p} B_{nij}^{(n)}\cdot w_{ni} \geq a_{nj}^{(n)} \ \text{ for } j=1,\dots,q; \\
& w_{li}\geq 0 \ \text{ for } l=1,\dots,n \text{ and } i=1,\dots,p.
\end{aligned}
\]
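In the special case $p=q=1$ with nonnegative $a_l^{(n)}$ and $K_{lk}^{(n)}\geq 0$, the lower-triangular structure of $(\mathrm{P}_n)$ allows a greedy forward solve: enlarging an earlier $z_k$ only relaxes later constraints, so each constraint is pushed to its bound. This is only an illustrative sketch with hypothetical data, not the general algorithm developed in this paper:

```python
def solve_Pn_scalar(a, c, B, K, d):
    """Greedy forward solve of (P_n) for p = q = 1:
    maximize sum_l d[l]*a[l]*z[l] subject to
    B[l]*z[l] <= c[l] + sum_{k<l} d[k]*K[l][k]*z[k] and z[l] >= 0.
    Valid when a[l] >= 0 and K[l][k] >= 0, so each z[l] is pushed to its bound."""
    n = len(a)
    z = [0.0] * n
    for l in range(n):
        rhs = c[l] + sum(d[k] * K[l][k] * z[k] for k in range(l))
        z[l] = max(rhs, 0.0) / B[l]          # the constraint is binding (B[l] > 0)
    value = sum(d[l] * a[l] * z[l] for l in range(n))
    return z, value

# Hypothetical discretized data on n = 3 subintervals.
d = [0.5, 0.5, 0.5]
a = [1.0, 1.0, 1.0]
c = [2.0, 1.0, 1.0]
B = [2.0, 2.0, 2.0]
K = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0]]

z, val = solve_Pn_scalar(a, c, B, K, d)
print(z, val)    # z = [1.0, 0.75, 0.9375], objective value 1.34375
```

For general $p, q > 1$ an LP solver is needed for each $(\mathrm{P}_n)$, but the same block lower-triangular structure keeps the problem sparse.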
Remark 2.
We have the following observations.
  • If $c_l^{(n)}\geq 0$ for all $l=1,\dots,n$, then problem $(\mathrm{P}_n)$ is feasible, since $(z_1, z_2, \dots, z_n) = 0$ is a feasible solution of $(\mathrm{P}_n)$. If the vector-valued function $c$ is nonnegative, then $c_l^{(n)}\geq 0$ for all $l=1,\dots,n$, which implies that the primal problem $(\mathrm{P}_n)$ is feasible.
  • The dual problem ( D n ) is always feasible for each n N , which can be realized from part (i) of Proposition 1 given below.
Recall that $d_l^{(n)}$ denotes the length of the closed interval $\bar{E}_l^{(n)}$. We also define
\[ s_l^{(n)} = \max_{k=l,\dots,n} d_k^{(n)}. \]
Then, we have
\[ s_l^{(n)} = \max\left\{ d_l^{(n)}, d_{l+1}^{(n)}, \dots, d_n^{(n)} \right\} = \max\left\{ d_l^{(n)}, s_{l+1}^{(n)} \right\}, \]
which suggests that
\[ s_l^{(n)} \geq d_l^{(n)} \quad\text{and}\quad \Vert P_n \Vert \geq s_l^{(n)} \geq s_{l+1}^{(n)} \quad\text{for } l=1,\dots,n-1. \]
For convenience, we adopt the following notations:
\[ \bar{\tau}_l^{(n)} = \max_{j=1,\dots,q} a_{lj}^{(n)} \quad\text{and}\quad \tau_l^{(n)} = \max_{k=l,\dots,n} \bar{\tau}_k^{(n)}; \]
\[ \bar{\sigma}_l^{(n)} = \min_{i=1,\dots,p} \min_{j=1,\dots,q} \left\{ B_{lij}^{(n)} : B_{lij}^{(n)} > 0 \right\} \quad\text{and}\quad \sigma_l^{(n)} = \min_{k=l,\dots,n} \bar{\sigma}_k^{(n)}; \]
\[ \bar{\nu}_l^{(n)} = \max_{k=1,\dots,n} \max_{j=1,\dots,q} \sum_{i=1}^{p} K_{klij}^{(n)} \quad\text{and}\quad \nu_l^{(n)} = \max_{k=l,\dots,n} \bar{\nu}_k^{(n)}; \]
\[ \bar{\phi}_l^{(n)} = \max_{k=1,\dots,n} \max_{i=1,\dots,p} \sum_{j=1}^{q} K_{klij}^{(n)} \quad\text{and}\quad \phi_l^{(n)} = \max_{k=l,\dots,n} \bar{\phi}_k^{(n)}; \]
\[ \tau = \max_{j=1,\dots,q} \tau_j, \quad\text{where } \tau_j = \begin{cases} \sup_{t\in[0,T]} \left( a_j^{(0)}(t) - \hat{a}_j(t) \right) & \text{for } j\in I(a) \\ \sup_{t\in[0,T]} a_j^{(0)}(t) & \text{for } j\notin I(a); \end{cases} \]
\[ \zeta = \max_{i=1,\dots,p} \zeta_i, \quad\text{where } \zeta_i = \begin{cases} \sup_{t\in[0,T]} \left( c_i^{(0)}(t) - \hat{c}_i(t) \right) & \text{for } i\in I(c) \\ \sup_{t\in[0,T]} c_i^{(0)}(t) & \text{for } i\notin I(c); \end{cases} \]
\[ \nu = \max_{j=1,\dots,q} \sup_{(t,s)\in[0,T]\times[0,T]} \left[ \sum_{\{i:\,j\in I_i(K)\}} \left( K_{ij}^{(0)}(t,s) - \hat{K}_{ij}(t,s) \right) + \sum_{\{i:\,j\notin I_i(K)\}} K_{ij}^{(0)}(t,s) \right] = \max_{j=1,\dots,q} \sup_{(t,s)\in[0,T]\times[0,T]} \left[ \sum_{i=1}^{p} K_{ij}^{(0)}(t,s) - \sum_{\{i:\,j\in I_i(K)\}} \hat{K}_{ij}(t,s) \right]; \]
\[ \phi = \max_{i=1,\dots,p} \sup_{(t,s)\in[0,T]\times[0,T]} \left[ \sum_{\{j:\,j\in I_i(K)\}} \left( K_{ij}^{(0)}(t,s) - \hat{K}_{ij}(t,s) \right) + \sum_{\{j:\,j\notin I_i(K)\}} K_{ij}^{(0)}(t,s) \right] = \max_{i=1,\dots,p} \sup_{(t,s)\in[0,T]\times[0,T]} \left[ \sum_{j=1}^{q} K_{ij}^{(0)}(t,s) - \sum_{\{j:\,j\in I_i(K)\}} \hat{K}_{ij}(t,s) \right]. \]
For each $l=1,\dots,n$, by Remark 1, since $B_{lij}^{(n)}\geq\sigma>0$ for $B_{lij}^{(n)}\neq 0$ and there exists $i_{lj}$ such that $B_{l i_{lj} j}^{(n)}\geq\sigma$, it follows that $\sigma_l^{(n)}\geq\sigma$. We also have the following inequalities:
\[ \sigma_l^{(n)} \leq \sigma_{l+1}^{(n)}, \quad \tau_l^{(n)} \geq \tau_{l+1}^{(n)} \quad\text{and}\quad \nu_l^{(n)} \geq \nu_{l+1}^{(n)} \]
and
\[ \bar{\tau}_l^{(n)} \leq \tau_l^{(n)} \leq \tau, \quad \bar{\nu}_l^{(n)} \leq \nu_l^{(n)} \leq \nu \quad\text{and}\quad \bar{\sigma}_l^{(n)} \geq \sigma_l^{(n)} \geq \sigma > 0 \]
for any $n\in\mathbb{N}$.
Proposition 1.
The following statements hold true.
(i)
Let
\[ w_l^{(n)} = \frac{\tau_l^{(n)}}{\sigma_l^{(n)}} \cdot \left( 1 + \frac{s_l^{(n)}\cdot\nu_l^{(n)}}{\sigma_l^{(n)}} \right)^{n-l} \geq 0. \]
We write $\breve{w}_{li}^{(n)} = w_l^{(n)}$ for $i=1,\dots,p$ and $l=1,\dots,n$ and consider the following vector:
\[ \breve{w}^{(n)} = \left( \breve{w}_1^{(n)}, \breve{w}_2^{(n)}, \dots, \breve{w}_n^{(n)} \right) \quad\text{with}\quad \breve{w}_l^{(n)} = \left( \breve{w}_{l1}^{(n)}, \breve{w}_{l2}^{(n)}, \dots, \breve{w}_{lp}^{(n)} \right). \]
Then, $\breve{w}^{(n)}$ is a feasible solution of problem $(\mathrm{D}_n)$ satisfying
\[ \breve{w}_{li}^{(n)} \leq \frac{\tau}{\sigma}\cdot\exp\left(\frac{r\cdot T\cdot\nu}{\sigma}\right) \]
for all $n\in\mathbb{N}$, $i=1,\dots,p$ and $l=1,\dots,n$.
(ii)
Given any feasible solution $w^{(n)}$ of problem $(\mathrm{D}_n)$, we define
\[ \bar{w}_{li}^{(n)} = \min\left\{ w_{li}^{(n)}, w_l^{(n)} \right\} \]
for $i=1,\dots,p$ and $l=1,\dots,n$ and consider the following vector:
\[ \bar{w}^{(n)} = \left( \bar{w}_1^{(n)}, \bar{w}_2^{(n)}, \dots, \bar{w}_n^{(n)} \right) \quad\text{with}\quad \bar{w}_l^{(n)} = \left( \bar{w}_{l1}^{(n)}, \bar{w}_{l2}^{(n)}, \dots, \bar{w}_{lp}^{(n)} \right). \]
Then, $\bar{w}^{(n)}$ is a feasible solution of problem $(\mathrm{D}_n)$ satisfying the following inequalities:
\[ \bar{w}_{li}^{(n)} \leq w_l^{(n)} \leq \frac{\tau}{\sigma}\cdot\exp\left(\frac{r\cdot T\cdot\nu}{\sigma}\right) \]
for all $n\in\mathbb{N}$, $i=1,\dots,p$ and $l=1,\dots,n$. Suppose that each $c_l^{(n)}$ is nonnegative and that $w^{(n)}$ is an optimal solution of problem $(\mathrm{D}_n)$. Then, $\bar{w}^{(n)}$ is also an optimal solution of problem $(\mathrm{D}_n)$.
Proof. 
For the proof, refer to Wu [5]. □
Proposition 2.
Suppose that the primal problem $(\mathrm{P}_n)$ is feasible with a feasible solution $z^{(n)} = (z_1^{(n)}, z_2^{(n)}, \dots, z_n^{(n)})$, where $z_l^{(n)} = (z_{l1}^{(n)}, z_{l2}^{(n)}, \dots, z_{lq}^{(n)})$ for $l=1,\dots,n$. Then,
\[ 0 \leq z_{lj}^{(n)} \leq \frac{\zeta}{\sigma}\cdot\left( 1 + \Vert P_n \Vert\cdot\frac{\phi}{\sigma} \right)^{l-1} \leq \frac{\zeta}{\sigma}\cdot\exp\left(\frac{r\cdot T\cdot\phi}{\sigma}\right) \]
for all $j=1,\dots,q$, $l=1,\dots,n$ and $n\in\mathbb{N}$.
Proof. 
For the proof, refer to Wu [5]. □
Let z ¯ ( n ) = ( z ¯ 1 ( n ) , z ¯ 2 ( n ) , , z ¯ n ( n ) ) with z ¯ l ( n ) = ( z ¯ l 1 ( n ) , z ¯ l 2 ( n ) , , z ¯ l q ( n ) ) be an optimal solution of problem ( P n ) . We construct a vector-valued step function z ^ ( n ) : [ 0 , T ] R q as follows:
z ^ ( n ) ( t ) = z ¯ l ( n ) if t F l ( n ) = e l 1 ( n ) , e l ( n ) for l = 1 , , n z ¯ n ( n ) if t = T .
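Evaluating the step function z ^ ( n ) in (26) amounts to locating the block F l ( n ) = [ e l 1 ( n ) , e l ( n ) ) containing t, with t = T assigned to the last block. The following Python sketch illustrates this lookup; the partition, block values, and function names are hypothetical illustrations, not taken from the paper.

```python
import bisect

def step_function(partition, values, T):
    """Build a step function z_hat from block values.

    partition: e_0 < e_1 < ... < e_n = T (list of floats)
    values:    values[l] is the vector held on [e_l, e_{l+1})
    As in (26), t = T is assigned the last block's value.
    """
    def z_hat(t):
        if not 0.0 <= t <= T:
            raise ValueError("t outside [0, T]")
        if t == T:
            return values[-1]
        # locate l with e_l <= t < e_{l+1}
        l = bisect.bisect_right(partition, t) - 1
        return values[l]
    return z_hat

# hypothetical uniform partition of [0, 1] into n = 4 blocks
e = [0.0, 0.25, 0.5, 0.75, 1.0]
vals = [[1.0], [2.0], [3.0], [4.0]]
z = step_function(e, vals, 1.0)
```

Note that each breakpoint e l ( n ) belongs to the block on its right, matching the half-open intervals F l ( n ) .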
The following result will be useful for further discussions.
Proposition 3.
Suppose that z ¯ ( n ) = ( z ¯ 1 ( n ) , z ¯ 2 ( n ) , , z ¯ n ( n ) ) is a feasible solution of primal problem ( P n ) , where z ¯ l ( n ) = ( z ¯ l 1 ( n ) , z ¯ l 2 ( n ) , , z ¯ l q ( n ) ) for l = 1 , , n . Then, the vector-valued step function z ^ ( n ) defined in (26) is a feasible solution of problem ( RCLP 3 ) .
Proof. 
The feasibility of z ¯ ( n ) suggests that
B 1 ( n ) z ¯ 1 ( n ) c 1 ( n ) and B l ( n ) z ¯ l ( n ) c l ( n ) + k = 1 l 1 d k ( n ) K l k ( n ) z ¯ k ( n ) for l = 2 , , n .
Therefore, we obtain
j = 1 q B i j ( 0 ) ( t ) · z ¯ 1 j + { j : j I i ( B ) } B ^ i j ( t ) · z ¯ 1 j = { j : j I i ( B ) } B i j ( 0 ) ( t ) · z ¯ 1 j + { j : j I i ( B ) } B i j ( 0 ) ( t ) + B ^ i j ( t ) · z ¯ 1 j j = 1 q B 1 i j ( n ) · z ¯ 1 j c 1 i ( n )
for i = 1 , , p and
j = 1 q B i j ( 0 ) ( t ) · z ¯ l j + { j : j I i ( B ) } B ^ i j ( t ) · z ¯ l j k = 1 l 1 j = 1 q d k ( n ) · K i j ( 0 ) ( t , s ) · z ¯ k j + k = 1 l 1 { j : j I i ( K ) } d k ( n ) · K ^ i j ( t , s ) · z ¯ k j = { j : j I i ( B ) } B i j ( 0 ) ( t ) · z ¯ l j + { j : j I i ( B ) } B i j ( 0 ) ( t ) + B ^ i j ( t ) · z ¯ l j k = 1 l 1 d k ( n ) · { j : j I i ( K ) } K i j ( 0 ) ( t , s ) + { j : j I i ( K ) } K i j ( 0 ) ( t , s ) K ^ i j ( t , s ) · z ¯ k j j = 1 q B l i j ( n ) · z ¯ l j k = 1 l 1 d k ( n ) · K l k i j ( n ) · z ¯ k j c l i ( n )
for l = 2 , , n and i = 1 , , p . Two cases will be considered below.
  • Suppose that t F l ( n ) for l = 2 , , n . Since e l 1 ( n ) is the left endpoint of the closed interval E ¯ l ( n ) , we have
    j = 1 q B i j ( 0 ) ( t ) · z ^ j ( n ) ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z ^ j ( n ) ( t ) j = 1 q 0 t K i j ( 0 ) ( t , s ) · z ^ j ( n ) ( s ) d s + { j : j I i ( K ) } 0 t K ^ i j ( t , s ) · z ^ j ( n ) ( s ) d s = j = 1 q B i j ( 0 ) ( t ) · z ^ j ( n ) ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z ^ j ( n ) ( t ) j = 1 q k = 1 l 1 E ¯ k ( n ) K i j ( 0 ) ( t , s ) · z ^ j ( n ) ( s ) d s j = 1 q e l 1 ( n ) t K i j ( 0 ) ( t , s ) · z ^ j ( n ) ( s ) d s + { j : j I i ( K ) } k = 1 l 1 E ¯ k ( n ) K ^ i j ( t , s ) · z ^ j ( n ) ( s ) d s + { j : j I i ( K ) } e l 1 ( n ) t K ^ i j ( t , s ) · z ^ j ( n ) ( s ) d s j = 1 q B i j ( 0 ) ( t ) · z ^ j ( n ) ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z ^ j ( n ) ( t ) j = 1 q k = 1 l 1 E ¯ k ( n ) K i j ( 0 ) ( t , s ) · z ^ j ( n ) ( s ) d s + { j : j I i ( K ) } k = 1 l 1 E ¯ k ( n ) K ^ i j ( t , s ) · z ^ j ( n ) ( s ) d s ( since K i j ( 0 ) ( t , s ) 0 for all i and j , and K i j ( 0 ) ( t , s ) K ^ i j ( t , s ) 0 for j I i ( K ) ) = j = 1 q B i j ( 0 ) ( t ) · z ¯ l j + { j : j I i ( B ) } B ^ i j ( t ) · z ¯ l j k = 1 l 1 j = 1 q d k ( n ) · K i j ( 0 ) ( t , s ) · z ¯ k j + k = 1 l 1 { j : j I i ( K ) } d k ( n ) · K ^ i j ( t , s ) · z ¯ k j c l i ( n ) ( by ( 28 ) ) c i ( 0 ) ( t ) c ^ i ( t ) for i I ( c ) c i ( 0 ) ( t ) for i I ( c ) ( by ( 9 ) )
    For l = 1 , using (27), we can similarly obtain the desired inequality.
  • Suppose that t = T . Then, we have
    j = 1 q B i j ( 0 ) ( T ) · z ^ j ( n ) ( T ) + { j : j I i ( B ) } B ^ i j ( T ) · z ^ j ( n ) ( T ) j = 1 q 0 T K i j ( 0 ) ( T , s ) · z ^ j ( n ) ( s ) d s + { j : j I i ( K ) } 0 T K ^ i j ( T , s ) · z ^ j ( n ) ( s ) d s = j = 1 q B i j ( 0 ) ( T ) · z ^ j ( n ) ( T ) + { j : j I i ( B ) } B ^ i j ( T ) · z ^ j ( n ) ( T ) j = 1 q k = 1 n E ¯ k ( n ) K i j ( 0 ) ( T , s ) · z ^ j ( n ) ( s ) d s + { j : j I i ( K ) } k = 1 n E ¯ k ( n ) K ^ i j ( T , s ) · z ^ j ( n ) ( s ) d s = j = 1 q B i j ( 0 ) ( T ) · z ¯ n j + { j : j I i ( B ) } B ^ i j ( T ) · z ¯ n j j = 1 q k = 1 n d k ( n ) · K i j ( 0 ) ( T , s ) · z ¯ k j + { j : j I i ( K ) } k = 1 n d k ( n ) · K ^ i j ( T , s ) · z ¯ k j j = 1 q B i j ( 0 ) ( T ) · z ¯ n j + { j : j I i ( B ) } B ^ i j ( T ) · z ¯ n j j = 1 q k = 1 n 1 d k ( n ) · K i j ( 0 ) ( T , s ) · z ¯ k j + { j : j I i ( K ) } k = 1 n 1 d k ( n ) · K ^ i j ( T , s ) · z ¯ k j ( since K i j ( 0 ) ( T , s ) 0 for all i and j , and K i j ( 0 ) ( T , s ) K ^ i j ( T , s ) 0 for j I i ( K ) ) c n i ( n ) ( from ( 28 ) by taking l = n ) c i ( 0 ) ( T ) c ^ i ( T ) for i I ( c ) c i ( 0 ) ( T ) for i I ( c ) ( by ( 9 ) )
Therefore, we conclude that ( z ^ 1 ( n ) , , z ^ q ( n ) ) is a feasible solution of problem (RCLP3), and the proof is complete. □

4. Analytic Formula of the Error Bound

Given any optimization problem (P), we write V ( P ) to denote the optimal objective value of problem (P). For example, the optimal objective value of problem (RCLP3) is denoted by V ( RCLP 3 ) .
Suppose that z ¯ ( n ) is an optimal solution of problem ( P n ) . Then, using (9), we have
0 T a ( t ) z ^ ( n ) ( t ) d t l = 1 n E ¯ l ( n ) a l ( n ) z ¯ l ( n ) d t = l = 1 n d l ( n ) · a l ( n ) z ¯ l ( n ) = V ( P n ) .
Therefore, we have
V ( RCLP 3 ) 0 T a ( t ) z ^ ( n ) ( t ) d t ( by Proposition 3 ) V ( P n ) ( by ( 29 ) ) .
According to the weak duality theorem for the primal-dual pair of problems (DRCLP3) and (RCLP3), we obtain
V ( DRCLP 3 ) V ( RCLP 3 ) V ( P n ) = V ( D n ) .
In the sequel, we want to show that
lim n V ( D n ) = V ( DRCLP 3 ) .
Let w ( n ) = ( w 1 ( n ) , w 2 ( n ) , , w n ( n ) ) with w l ( n ) = ( w l 1 ( n ) , w l 2 ( n ) , , w l p ( n ) ) be an optimal solution of problem ( D n ) . We define
w ¯ l i ( n ) = min w l i ( n ) , w l ( n ) ,
where w l ( n ) is defined in (23), for i = 1 , , p and l = 1 , , n , and consider the following vector:
w ¯ ( n ) = w ¯ 1 ( n ) , w ¯ 2 ( n ) , , w ¯ n ( n ) with w ¯ l ( n ) = w ¯ l 1 ( n ) , w ¯ l 2 ( n ) , , w ¯ l p ( n )
Then, part (ii) of Proposition 1 states that w ¯ ( n ) is an optimal solution of problem ( D n ) satisfying the following inequalities:
w ¯ l i ( n ) w l ( n ) τ σ · exp r · T · ν σ
for all n N , i = 1 , , p and l = 1 , , n .
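The truncation defining w ¯ l i ( n ) is a componentwise minimum taken block by block. The following sketch performs it on hypothetical data; the nested-list layout (blocks l, coordinates i) and names are illustrative assumptions.

```python
def truncate_dual_solution(w, w_bound):
    """Componentwise truncation w_bar_{li} = min(w_{li}, w_l),
    applied block by block (index l) and coordinate by coordinate (index i).

    w:       w[l][i] gives the optimal solution of (D_n)
    w_bound: w_bound[l] gives the scalar bound w_l^{(n)}
    """
    return [[min(w_li, w_bound[l]) for w_li in w[l]]
            for l in range(len(w))]
```

By part (ii) of Proposition 1, applying this map to an optimal solution of ( D n ) preserves optimality while enforcing the uniform bound.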
For each l = 1 , , n and j = 1 , , q , we define a real-valued function h ¯ l j ( n ) on the half-open interval F l ( n ) = [ e l 1 ( n ) , e l ( n ) ) given by
h ¯ l j ( n ) ( t ) = e l ( n ) t · i = 1 p K l l i j ( n ) · w ¯ l i ( n ) + i = 1 p B l i j ( n ) · w ¯ l i ( n ) i = 1 p B i j ( 0 ) ( t ) · w ¯ l i ( n ) + { i : j I i ( B ) } B ^ i j ( t ) · w ¯ l i ( n ) + t e l ( n ) i = 1 p K i j ( 0 ) ( s , t ) · w ¯ l i ( n ) { i : j I i ( K ) } K ^ i j ( s , t ) · w ¯ l i ( n ) i = 1 p K l l i j ( n ) · w ¯ l i ( n ) d s + k = l + 1 n E ¯ k ( n ) i = 1 p K i j ( 0 ) ( s , t ) · w ¯ k i ( n ) { i : j I i ( K ) } K ^ i j ( s , t ) · w ¯ k i ( n ) i = 1 p K k l i j ( n ) · w ¯ k i ( n ) d s .
For each j = 1 , , q , we also define a real number
r j ( n ) = i = 1 p B n i j ( n ) · w ¯ n i ( n ) i = 1 p B i j ( 0 ) ( T ) · w ¯ n i ( n ) + { i : j I i ( B ) } B ^ i j ( T ) · w ¯ n i ( n ) .
For l = 1 , , n , let
π ¯ l ( n ) = max max j I ( a ) sup t E l ( n ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a l j ( n ) , max j I ( a ) sup t E l ( n ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n )
and
π l ( n ) = max k = l , , n π ¯ k ( n ) .
Then, we have
π l ( n ) = max π ¯ l ( n ) , π ¯ l + 1 ( n ) , , π ¯ n ( n ) = max π ¯ l ( n ) , π l + 1 ( n )
which suggests that
π l ( n ) π l + 1 ( n )
and, for any t E l ( n ) ,
π l ( n ) π ¯ l ( n ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a l j ( n ) for j I ( a ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) for j I ( a )
for l = 1 , , n 1 . We want to prove
lim n π ¯ l ( n ) = 0 = lim n π l ( n ) .
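Computationally, the quantities π l ( n ) are suffix maxima of the π ¯ k ( n ) and can be obtained in one backward pass via the recursion π l ( n ) = max { π ¯ l ( n ) , π l + 1 ( n ) } shown above. A minimal sketch (the input values are hypothetical):

```python
def suffix_maxima(pi_bar):
    """Compute pi_l = max_{k=l..n} pi_bar_k by the backward recursion
    pi_l = max(pi_bar_l, pi_{l+1})."""
    pi = list(pi_bar)
    for l in range(len(pi) - 2, -1, -1):
        pi[l] = max(pi[l], pi[l + 1])
    return pi
```

The output is automatically nonincreasing in l, which is exactly the monotonicity π l ( n ) π l + 1 ( n ) used in the text.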
We need some useful lemmas.
Lemma 1.
For i = 1 , , p ; j = 1 , , q ; and l = 1 , , n , we have
sup t E l ( n ) a j ( 0 ) ( t ) a l j ( n ) 0 for j I ( a ) sup t E l ( n ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) 0 for j I ( a ) as n
and
sup t E l ( n ) B l i j ( n ) B i j ( 0 ) ( t ) 0 for j I i ( B ) sup t E l ( n ) B l i j ( n ) B i j ( 0 ) ( t ) B ^ i j ( t ) 0 for j I i ( B ) as n .
Proof. 
According to the construction of partition P n , it is clear that a j ( 0 ) for j I ( a ) and a j ( 0 ) a ^ j for j I ( a ) are continuous on the open interval E l ( n ) = ( e l 1 ( n ) , e l ( n ) ) . Let
a j = a j ( 0 ) for j I ( a ) a j ( 0 ) a ^ j for j I ( a ) .
Then, we also see that the function a j is continuous on the open interval E l ( n ) = ( e l 1 ( n ) , e l ( n ) ) and
a l j ( n ) = inf t E ¯ l ( n ) a j .
Given a decreasing sequence { δ m } m = 1 satisfying δ m > 0 for all m and δ m 0 as m , where δ 1 is defined by
δ 1 = 1 2 · e l ( n ) e l 1 ( n ) ,
we can define the closed interval
E l m ( n ) = e l 1 ( n ) + δ m , e l ( n ) δ m ,
which implies
E l ( n ) = m = 1 E l m ( n ) and E l m 1 ( n ) E l m 2 ( n ) for m 2 > m 1 .
Since E l m ( n ) E l ( n ) , we see that a j is uniformly continuous on each closed interval E l m ( n ) . Therefore, given any ϵ > 0 , there exists δ > 0 such that | t 1 t 2 | < δ implies
a j ( t 1 ) a j ( t 2 ) < ϵ 2 for any t 1 , t 2 E l m ( n ) .
Since the length of E l ( n ) is less than or equal to P n r · T / n 0 as n by (8), we can consider a sufficiently large n 0 N satisfying T / n 0 < δ . In this case, the length of each E l ( n ) for l = 1 , , n is less than δ for n n 0 , which also suggests that, if n n 0 , then (39) is satisfied for any t 1 , t 2 E l m ( n ) . We consider the following cases.
  • Suppose that the infimum a l j ( n ) is attained at t ( n ) E l ( n ) . Using (38), there exists m satisfying t ( n ) E l m ( n ) . Given any t E l ( n ) , we see that t E l m 0 ( n ) for some m 0 . Let m = max { m 0 , m } . Using (38), it follows that t , t ( n ) E l m ( n ) . Therefore, we obtain
    a j ( t ) a l j ( n ) = a j ( t ) a j t ( n ) < ϵ 2
    since the length of E l m ( n ) is less than δ , where ϵ is independent of t because of the uniform continuity.
  • Suppose that the infimum a l j ( n ) is not attained at any point in E l ( n ) . The continuity of a j on the open interval E l ( n ) suggests that the infimum a l j ( n ) is either the right-hand limit or left-hand limit given by
    a l j ( n ) = lim t e l 1 ( n ) + a j ( t ) or a l j ( n ) = lim t e l ( n ) a j ( t ) .
    Therefore, for sufficiently large n, i.e., the open interval E l ( n ) is sufficiently small such that its length is less than δ , we can obtain
    a j ( t ) a l j ( n ) < ϵ 2
    for all t E l ( n ) .
From the above two cases, since a j ( t ) a l j ( n ) for all t E l ( n ) , we conclude that
0 sup t E l ( n ) a j ( t ) a l j ( n ) ϵ 2 < ϵ for l = 1 , , n .
This completes the proof. □
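The mechanism behind Lemma 1 is that the oscillation of a continuous function over each block of the partition vanishes as the mesh shrinks. The sketch below approximates the worst block oscillation on a sample grid; the test function, interval, and grid sizes are hypothetical illustrations of this behavior.

```python
import math

def block_oscillation(f, a, b, n, samples=200):
    """Approximate the largest oscillation sup f - inf f over the n uniform
    blocks of [a, b], sampling each block on a fine grid.  As n grows, this
    quantity tends to zero for continuous f, mirroring Lemma 1."""
    worst = 0.0
    for l in range(n):
        lo = a + (b - a) * l / n
        hi = a + (b - a) * (l + 1) / n
        ys = [f(lo + (hi - lo) * k / samples) for k in range(samples + 1)]
        worst = max(worst, max(ys) - min(ys))
    return worst
```

Refining the partition (larger n) drives the worst oscillation toward zero, which is precisely the uniform-continuity argument in the proof.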
Lemma 2.
For i = 1 , , p ; j = 1 , , q ; and l , k = 1 , , n , we have
sup ( s , t ) E k ( n ) × E l ( n ) K i j ( 0 ) ( s , t ) K k l i j ( n ) 0 for j I i ( K ) sup ( s , t ) E k ( n ) × E l ( n ) K i j ( 0 ) ( s , t ) K ^ i j ( s , t ) K k l i j ( n ) 0 for j I i ( K ) as n
and
sup t E l ( n ) E ¯ k ( n ) K i j ( 0 ) ( s , t ) K k l i j ( n ) · w ¯ k i ( n ) d s 0 for j I i ( K ) sup t E l ( n ) E ¯ k ( n ) K i j ( 0 ) ( s , t ) K ^ i j ( s , t ) K k l i j ( n ) · w ¯ k i ( n ) d s 0 for j I i ( K ) as n .
Proof. 
Let
K i j = K i j ( 0 ) for j I i ( K ) K i j ( 0 ) K ^ i j for j I i ( K ) .
Then,
K k l i j ( n ) = inf ( s , t ) E ¯ k ( n ) × E ¯ l ( n ) K i j ( s , t ) .
According to the construction of partition P n , we also see that K i j is continuous on the open rectangle
E k ( n ) × E l ( n ) = e k 1 ( n ) , e k ( n ) × e l 1 ( n ) , e l ( n ) .
Let { δ m } m = 1 be a decreasing sequence of positive numbers that converges to zero, where δ 1 is defined by
δ 1 = 1 2 · min e k ( n ) e k 1 ( n ) , e l ( n ) e l 1 ( n ) .
Therefore, we can define the compact rectangle
E k m ( n ) × E l m ( n ) = e k 1 ( n ) + δ m , e k ( n ) δ m × e l 1 ( n ) + δ m , e l ( n ) δ m .
The following inclusion
m = 1 E k m ( n ) × E l m ( n ) E k ( n ) × E l ( n )
is obvious. For ( s , t ) E k ( n ) × E l ( n ) , there exist m 1 and m 2 such that s E k m 1 ( n ) and t E l m 2 ( n ) , respectively. Let m = max { m 1 , m 2 } . Then, we have E k m 1 ( n ) E k m ( n ) and E l m 2 ( n ) E l m ( n ) . Therefore, we obtain
( s , t ) E k m 1 ( n ) × E l m 2 ( n ) E k m ( n ) × E l m ( n ) ,
which proves
E k ( n ) × E l ( n ) = m = 1 E k m ( n ) × E l m ( n ) .
We also see that
E k m 1 ( n ) × E l m 1 ( n ) E k m 2 ( n ) × E l m 2 ( n ) for m 2 > m 1 .
Since E k m ( n ) × E l m ( n ) E k ( n ) × E l ( n ) , it follows that K i j is continuous on each compact rectangle E k m ( n ) × E l m ( n ) , which also means that K i j is uniformly continuous on each compact rectangle E k m ( n ) × E l m ( n ) . Therefore, given any ϵ > 0 , there exists δ > 0 such that
t 1 t 2 < δ and s 1 s 2 < δ
implies
K i j s 1 , t 1 K i j s 2 , t 2 < ϵ 2
for ( s 1 , t 1 ) , ( s 2 , t 2 ) E k m ( n ) × E l m ( n ) . Since the length of E k ( n ) is less than or equal to P n r · T / n 0 as n using (8), we can consider a sufficiently large n 0 N satisfying T / n 0 < δ . In this case, the length of each E k ( n ) for k = 1 , , n is less than δ , which means that, if n n 0 , then (42) is satisfied for any ( s 1 , t 1 ) , ( s 2 , t 2 ) E k m ( n ) × E l m ( n ) . We consider the following cases.
  • Suppose that the infimum K k l i j ( n ) is attained at ( s ( n ) , t ( n ) ) E k ( n ) × E l ( n ) . Using (40), there exists m satisfying ( s ( n ) , t ( n ) ) E k m ( n ) × E l m ( n ) . Given any ( s , t ) E k ( n ) × E l ( n ) , we see that ( s , t ) E k m 0 ( n ) × E l m 0 ( n ) for some m 0 . Let m = max { m , m 0 } . Using (41), it follows that ( s , t ) , ( s ( n ) , t ( n ) ) E k m ( n ) × E l m ( n ) . Then, we have
    K i j ( s , t ) K k l i j ( n ) = K i j ( s , t ) K i j s ( n ) , t ( n ) < ϵ 2 ,
    since the lengths of E k m ( n ) and E l m ( n ) are less than δ , where ϵ is independent of ( s , t ) in E k ( n ) × E l ( n ) because of the uniform continuity.
  • Suppose that the infimum K k l i j ( n ) is not attained at any point in E k ( n ) × E l ( n ) . Let
    K i j = K i j ( s , t ) : ( s , t ) E k ( n ) × E l ( n ) .
    Since K i j is continuous on the open rectangle E k ( n ) × E l ( n ) , it follows that the infimum K k l i j ( n ) is in the boundary of the closure of K i j and is the limit of the function K i j on E k ( n ) × E l ( n ) . Therefore, for sufficiently large n, i.e., the open rectangle E k ( n ) × E l ( n ) is sufficiently small such that the lengths of E k ( n ) and E l ( n ) are less than δ , we have
    K i j ( s , t ) K k l i j ( n ) < ϵ 2
    for all ( s , t ) E k ( n ) × E l ( n ) .
From the above two cases, we conclude that
0 sup ( s , t ) E k ( n ) × E l ( n ) K i j ( s , t ) K k l i j ( n ) ϵ 2 < ϵ
and
0 sup t E l ( n ) E ¯ k ( n ) K i j ( s , t ) K k l i j ( n ) · w ¯ k i ( n ) d s ϵ 2 · d k ( n ) · w ¯ k i ( n ) < ϵ · T · τ σ · exp r · T · ν σ ( using ( 32 ) ) ,
which implies
sup t E l ( n ) E ¯ k ( n ) K i j ( s , t ) K k l i j ( n ) · w ¯ k i ( n ) d s 0 as n .
This completes the proof. □
Lemma 3.
For each l = 1 , , n , we have
lim n π ¯ l ( n ) = 0 = lim n π l ( n ) .
Proof. 
It suffices to prove
sup t E l ( n ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a l j ( n ) 0 for j I ( a ) sup t E l ( n ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) 0 for j I ( a ) as n .
From (8), since
d l ( n ) P n r · T n 0 as n
and w ¯ l i ( n ) is bounded according to (32), it follows that
e l ( n ) t · K l l i j ( n ) · w ¯ l i ( n ) d l ( n ) · K l l i j ( n ) · w ¯ l i ( n ) 0 as n .
Using Lemmas 1 and 2, we have
sup t E l ( n ) h ¯ l j ( n ) ( t ) d l ( n ) · i = 1 p K l l i j ( n ) · w ¯ l i ( n ) + { i : j I i ( B ) } w ¯ l i ( n ) · sup t E l ( n ) B l i j ( n ) B i j ( 0 ) ( t ) + { i : j I i ( B ) } w ¯ l i ( n ) · sup t E l ( n ) B l i j ( n ) B i j ( 0 ) ( t ) B ^ i j ( t ) + k = l n { i : j I i ( K ) } sup t E l ( n ) E ¯ k ( n ) K i j ( 0 ) ( s , t ) K k l i j ( n ) · w ¯ k i ( n ) d s + k = l n { i : j I i ( K ) } sup t E l ( n ) E ¯ k ( n ) K i j ( 0 ) ( s , t ) K ^ i j ( s , t ) K k l i j ( n ) · w ¯ k i ( n ) d s 0 as n .
Now, for j I ( a ) , we have
0 sup t E l ( n ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a l j ( n ) sup t E l ( n ) h ¯ l j ( n ) ( t ) + sup t E l ( n ) a j ( 0 ) ( t ) a l j ( n )
and, for j I ( a ) , we have
0 sup t E l ( n ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) sup t E l ( n ) h ¯ l j ( n ) ( t ) + sup t E l ( n ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) .
Using Lemma 1, we complete the proof. □
We define the following notations:
k ¯ l ( n ) = max j = 1 , , q sup ( s , t ) [ e l 1 ( n ) , T ] × E ¯ l ( n ) i = 1 p K i j ( 0 ) ( s , t ) { i : j I i ( K ) } K ^ i j ( s , t )
and
b ¯ l ( n ) = min j = 1 , , q inf t E ¯ l ( n ) i = 1 p B i j ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) .
From (7), (6) and (19), we see that
b ¯ l ( n ) min j = 1 , , q inf t [ 0 , T ] i = 1 p B i j ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) σ > 0 and k ¯ l ( n ) ν
Let
k l ( n ) = max k = l , , n k ¯ k ( n ) ν and b l ( n ) = min k = l , , n b ¯ k ( n ) σ .
Then, we see that 0 < b l ( n ) and
k l ( n ) k l + 1 ( n ) and b l ( n ) b l + 1 ( n ) .
Now, we define the real-valued functions u ( n ) and v ( n ) on [ 0 , T ] by
u ( n ) ( t ) = k l ( n ) if t F l ( n ) for l = 1 , , n k n ( n ) if t = T
and
v ( n ) ( t ) = b l ( n ) if t F l ( n ) for l = 1 , , n b n ( n ) if t = T .
Then, we have
u ( n ) ( t ) ν and v ( n ) ( t ) σ for all t [ 0 , T ] .
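The step functions u ( n ) and v ( n ) are built from the suffix maxima k l ( n ) and suffix minima b l ( n ) in a single backward pass and then evaluated blockwise. The sketch below uses hypothetical block data; the helper names are illustrative assumptions.

```python
import bisect

def build_uv(k_bar, b_bar, partition, T):
    """Build the step functions u (from suffix maxima of k_bar) and
    v (from suffix minima of b_bar) on [0, T]; t = T uses the last
    block, as in the text."""
    n = len(k_bar)
    k, b = list(k_bar), list(b_bar)
    for l in range(n - 2, -1, -1):
        k[l] = max(k[l], k[l + 1])   # k_l = max_{m=l..n} k_bar_m
        b[l] = min(b[l], b[l + 1])   # b_l = min_{m=l..n} b_bar_m
    def block(t):
        return n - 1 if t == T else bisect.bisect_right(partition, t) - 1
    return (lambda t: k[block(t)]), (lambda t: b[block(t)])
```

By construction u ( n ) is nonincreasing and v ( n ) is nondecreasing in t, matching the monotonicity of k l ( n ) and b l ( n ) .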
From (32) and Lemmas 1 and 2, we see that the sequence { h ¯ l j ( n ) } n = 1 is uniformly bounded. Consequently, the sequence { π l ( n ) } n = 1 is uniformly bounded. Therefore, there exists a constant x satisfying π l ( n ) x for all n N and l = 1 , , n . Now, we define a real-valued function p ( n ) on [ 0 , T ] by
p ( n ) ( t ) = x , if t = e l 1 ( n ) for l = 1 , , n π l ( n ) , if t E l ( n ) for l = 1 , , n max max j I ( a ) r j ( n ) + a j ( 0 ) ( T ) a n j ( n ) , max j I ( a ) r j ( n ) + a j ( 0 ) ( T ) a ^ j ( T ) a n j ( n ) , if t = e n ( n ) = T ,
where r j ( n ) is the jth component of r ( n ) in (33). Then, we have
p ( n ) ( t ) x for all n N and t [ 0 , T ) .
Let 1 p = ( 1 , 1 , , 1 ) R p denote a p-dimensional vector such that each component of 1 p is 1, and let the real-valued function f ( n ) : [ 0 , T ] R + be defined by
f ( n ) ( t ) = p ( n ) ( t ) v ( n ) ( t ) · exp u ( n ) ( t ) · ( T t ) v ( n ) ( t )
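Evaluating f ( n ) at a point t requires only the step-function values p ( n ) ( t ) , u ( n ) ( t ) , and v ( n ) ( t ) . A minimal sketch of the formula displayed above, with hypothetical numerical inputs:

```python
import math

def f_value(p_t, u_t, v_t, t, T):
    """f(t) = (p(t) / v(t)) * exp(u(t) * (T - t) / v(t)), evaluated from
    the step-function values at t; v_t must be positive (v >= sigma > 0)."""
    assert v_t > 0.0
    return (p_t / v_t) * math.exp(u_t * (T - t) / v_t)
```

Note that f ( n ) ( T ) = p ( n ) ( T ) / v ( n ) ( T ) since the exponential factor equals one at t = T.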
We need the following useful lemma.
Lemma 4.
We have
f ( n ) ( T ) · i = 1 p B i j ( 0 ) ( T ) + { i : j I i ( B ) } B ^ i j ( T ) r j ( n ) + a j ( T ) a n j ( n ) for j I ( a ) r j ( n ) + a j ( T ) a ^ j ( T ) a n j ( n ) for j I ( a ) .
For t F l ( n ) and l = 1 , , n , if j I ( a ) , then
f ( n ) ( t ) · i = 1 p B i j ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a l j ( n ) + t T f ( n ) ( s ) · i = 1 p K i j ( 0 ) ( s , t ) { i : j I i ( K ) } K ^ i j ( s , t ) d s
and, if j I ( a ) , then
f ( n ) ( t ) · i = 1 p B i j ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) + t T f ( n ) ( s ) · i = 1 p K i j ( 0 ) ( s , t ) { i : j I i ( K ) } K ^ i j ( s , t ) d s .
Moreover, the sequence of real-valued functions { f ( n ) } n = 1 is uniformly bounded.
Proof. 
For t F l ( n ) , from (49), we have
t T f ( n ) ( s ) d s = t T p ( n ) ( s ) v ( n ) ( s ) · exp u ( n ) ( s ) · ( T s ) v ( n ) ( s ) d s = t e l ( n ) π l ( n ) b l ( n ) · exp k l ( n ) · ( T s ) b l ( n ) d s + k = l + 1 n E ¯ k ( n ) π k ( n ) b k ( n ) · exp k k ( n ) · ( T s ) b k ( n ) d s t e l ( n ) π l ( n ) b l ( n ) · exp k l ( n ) · ( T s ) b l ( n ) d s + k = l + 1 n E ¯ k ( n ) π l ( n ) b l ( n ) · exp k l ( n ) · ( T s ) b l ( n ) d s ( by ( 36 ) and ( 46 ) ) = t T π l ( n ) b l ( n ) · exp k l ( n ) · ( T s ) b l ( n ) d s = π l ( n ) k l ( n ) · exp k l ( n ) · ( T t ) b l ( n ) 1
Since
b l ( n ) · f ( n ) ( t ) = x · exp k l ( n ) · ( T t ) b l ( n ) if t = e l 1 ( n ) for l = 1 , , n π l ( n ) · exp k l ( n ) · ( T t ) b l ( n ) if t E l ( n ) for l = 1 , , n ,
using (50), it follows that, for t F l ( n ) ,
b l ( n ) · f ( n ) ( t ) x · 1 + k l ( n ) π l ( n ) · t T f ( n ) ( s ) d s if t = e l 1 ( n ) for l = 1 , , n π l ( n ) + k l ( n ) · t T f ( n ) ( s ) d s if t E l ( n ) for l = 1 , , n . π l ( n ) + k l ( n ) · t T f ( n ) ( s ) d s ( since π l ( n ) x for all l = 1 , , n ) .
For t = e n ( n ) = T , we also have
b n ( n ) · f ( n ) ( T ) = max max j I ( a ) r j ( n ) + a j ( 0 ) ( T ) a n j ( n ) , max j I ( a ) r j ( n ) + a j ( 0 ) ( T ) a ^ j ( T ) a n j ( n ) .
For each j = 1 , , q and l = 1 , , n , we consider the following cases.
  • For t = e n ( n ) = T , from (44) and (52), we have
    f ( n ) ( T ) · i = 1 p B i j ( 0 ) ( T ) + { i : j I i ( B ) } B ^ i j ( T ) b ¯ n ( n ) · f ( n ) ( T ) = b n ( n ) · f ( n ) ( T ) r j ( n ) + a j ( 0 ) ( T ) a n j ( n ) for j I ( a ) r j ( n ) + a j ( 0 ) ( T ) a ^ j ( T ) a n j ( n ) for j I ( a ) .
  • For t F l ( n ) , by (44), (51), and (37), we have
    f ( n ) ( t ) · i = 1 p B i j ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) b ¯ l ( n ) · f ( n ) ( t ) b l ( n ) · f ( n ) ( t ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a l j ( n ) + k l ( n ) · t T f ( n ) ( s ) d s for j I ( a ) h ¯ l j ( n ) ( t ) + a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) + k l ( n ) · t T f ( n ) ( s ) d s for j I ( a ) .
    Since
    k l ( n ) k ¯ l ( n ) K i j ( 0 ) ( s , t ) { i : j I i ( K ) } K ^ i j ( s , t )
    for ( s , t ) [ e l 1 ( n ) , T ] × E ¯ l ( n ) , we obtain the desired inequalities.
Finally, from (47) and (48), it is obvious that the sequence of real-valued functions { f ( n ) } n = 1 is uniformly bounded. This completes the proof. □
We define a vector-valued function w ^ ( n ) ( t ) : [ 0 , T ] R p by
w ^ ( n ) ( t ) = w ¯ l ( n ) + f ( n ) ( t ) 1 p if t F l ( n ) for l = 1 , , n w ¯ n ( n ) + f ( n ) ( T ) 1 p if t = T .
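The perturbed dual solution w ^ ( n ) in (53) shifts every component of the block value w ¯ l ( n ) by the same scalar f ( n ) ( t ) . A minimal sketch with hypothetical data (the partition lookup follows the same blockwise convention as before):

```python
import bisect

def w_hat(t, partition, w_bar, f, T):
    """w_hat(t) = w_bar_l + f(t) * 1_p for t in [e_{l-1}, e_l), with
    t = T assigned to the last block, as in (53).

    w_bar: w_bar[l] is the p-vector on block l; f is the scalar function."""
    l = len(w_bar) - 1 if t == T else bisect.bisect_right(partition, t) - 1
    ft = f(t)
    return [w_li + ft for w_li in w_bar[l]]
```

Since f ( n ) 0 , every component of w ^ ( n ) dominates the corresponding component of the step function built from w ¯ ( n ) alone.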
Remark 3.
Since the sequence of real-valued functions { f ( n ) } n = 1 is uniformly bounded by Lemma 4, from (32), we also see that the family of vector-valued functions { w ^ ( n ) } n N is also uniformly bounded.
Proposition 4.
For any n N , w ^ ( n ) is a feasible solution of problem ( DRCLP 3 ) .
Proof. 
For l = 1 , , n , we define a real-valued function b j ( n ) on F l ( n ) by
b j ( n ) ( t ) = i = 1 p B i j ( 0 ) ( t ) · w ¯ l i ( n ) + { i : j I i ( B ) } B ^ i j ( t ) · w ¯ l i ( n ) i = 1 p B l i j ( n ) · w ¯ l i ( n ) t e l ( n ) i = 1 p K i j ( 0 ) ( s , t ) · w ¯ l i ( n ) { i : j I i ( K ) } K ^ i j ( s , t ) · w ¯ l i ( n ) i = 1 p K l l i j ( n ) · w ¯ l i ( n ) d s k = l + 1 n E ¯ k ( n ) i = 1 p K i j ( 0 ) ( s , t ) · w ¯ k i ( n ) { i : j I i ( K ) } K ^ i j ( s , t ) · w ¯ k i ( n ) i = 1 p K k l i j ( n ) · w ¯ k i ( n ) + f ( n ) ( t ) · i = 1 p B i j ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) t T f ( n ) ( s ) · i = 1 p K i j ( 0 ) ( s , t ) { i : j I i ( K ) } K ^ i j ( s , t ) d s ,
which implies
b j ( n ) ( t ) + f ( n ) ( t ) · i = 1 p B i j ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) t T f ( n ) ( s ) · i = 1 p K i j ( 0 ) ( s , t ) { i : j I i ( K ) } K ^ i j ( s , t ) d s = i = 1 p B l i j ( n ) · w ¯ l i ( n ) i = 1 p B i j ( 0 ) ( t ) · w ¯ l i ( n ) + { i : j I i ( B ) } B ^ i j ( t ) · w ¯ l i ( n ) + t e l ( n ) i = 1 p K i j ( 0 ) ( s , t ) · w ¯ l i ( n ) { i : j I i ( K ) } K ^ i j ( s , t ) · w ¯ l i ( n ) i = 1 p K l l i j ( n ) · w ¯ l i ( n ) d s + k = l + 1 n E ¯ k ( n ) i = 1 p K i j ( 0 ) ( s , t ) · w ¯ k i ( n ) { i : j I i ( K ) } K ^ i j ( s , t ) · w ¯ k i ( n ) i = 1 p K k l i j ( n ) · w ¯ k i ( n ) .
Therefore, by adding the term ( e l ( n ) t ) · i = 1 p K l l i j ( n ) · w ¯ l i ( n ) on both sides, we obtain
e l ( n ) t · i = 1 p K l l i j ( n ) · w ¯ l i ( n ) b j ( n ) ( t ) + f ( n ) ( t ) · i = 1 p B i j ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) t T f ( n ) ( s ) · i = 1 p K i j ( 0 ) ( s , t ) { i : j I i ( K ) } K ^ i j ( s , t ) d s = h ¯ l j ( n ) ( t ) ,
which implies
b j ( n ) ( t ) e l ( n ) t · i = 1 p K l l i j ( n ) · w ¯ l i ( n ) = h ¯ l j ( n ) ( t ) + f ( n ) ( t ) · i = 1 p B i j ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) t T f ( n ) ( s ) · i = 1 p K i j ( 0 ) ( s , t ) { i : j I i ( K ) } K ^ i j ( s , t ) d s a j ( 0 ) ( t ) a l j ( n ) for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n ) for j I ( a ) ( by Lemma 4 )
Now, from (53), we obtain
i = 1 p B i j ( 0 ) ( t ) · w ^ i ( n ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w ^ i ( n ) ( t ) t T i = 1 p K i j ( 0 ) ( s , t ) · w ^ i ( n ) ( s ) { i : j I i ( K ) } K ^ i j ( s , t ) · w ^ i ( n ) ( s ) d s = i = 1 p B l i j ( n ) · w ¯ l i ( n ) t e l ( n ) i = 1 p K l l i j ( n ) · w ¯ l i ( n ) d s k = l + 1 n E ¯ k ( n ) i = 1 p K k l i j ( n ) · w ¯ k i ( n ) d s + b j ( n ) ( t ) = i = 1 p B l i j ( n ) · w ¯ l i ( n ) k = l + 1 n i = 1 p d k ( n ) · K k l i j ( n ) · w ¯ k i ( n ) + b j ( n ) ( t ) e l ( n ) t · i = 1 p K l l i j ( n ) · w ¯ l i ( n ) a l j ( n ) + b j ( n ) ( t ) e l ( n ) t · i = 1 p K l l i j ( n ) · w ¯ l i ( n ) ( by the feasibility of w ¯ ( n ) for problem ( D n ) ) a j ( 0 ) ( t ) for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) for j I ( a ) ( by ( 54 ) )
Suppose that t = T . We define
b ^ j ( n ) = i = 1 p B i j ( 0 ) ( T ) · w ¯ n i ( n ) + { i : j I i ( B ) } B ^ i j ( T ) · w ¯ n i ( n ) i = 1 p B n i j ( n ) · w ¯ n i ( n ) + f ( n ) ( T ) · i = 1 p B i j ( 0 ) ( T ) + { i : j I i ( B ) } B ^ i j ( T ) .
Then,
b ^ j ( n ) + f ( n ) ( T ) · i = 1 p B i j ( 0 ) ( T ) + { i : j I i ( B ) } B ^ i j ( T ) = i = 1 p B n i j ( n ) · w ¯ n i ( n ) i = 1 p B i j ( 0 ) ( T ) · w ¯ n i ( n ) + { i : j I i ( B ) } B ^ i j ( T ) · w ¯ n i ( n ) = r j ( n ) ,
which implies
b ^ j ( n ) = r j ( n ) + f ( n ) ( T ) · i = 1 p B i j ( 0 ) ( T ) + { i : j I i ( B ) } B ^ i j ( T ) a j ( T ) a n j ( n ) for j I ( a ) a j ( T ) a ^ j ( T ) a n j ( n ) for j I ( a ) . ( by Lemma 4 )
Now, from (53), we obtain
i = 1 p B i j ( 0 ) ( T ) · w ^ i ( n ) ( T ) + { i : j I i ( B ) } B ^ i j ( T ) · w ^ i ( n ) ( T ) = i = 1 p B n i j ( n ) · w ¯ n i ( n ) + b ^ j ( n ) a n j ( n ) + b ^ j ( n ) ( by the feasibility of w ¯ ( n ) ) a j ( T ) for j I ( a ) a j ( T ) a ^ j ( T ) for j I ( a ) . ( using ( 55 ) )
Therefore, we conclude that w ^ ( n ) is indeed a feasible solution of problem (DRCLP3). This completes the proof. □
For each i = 1 , , p and j = 1 , , q , in order to obtain the approximate solutions, we need to define the step functions a ¯ j ( n ) : [ 0 , T ] R and c ¯ i ( n ) : [ 0 , T ] R as follows:
a ¯ j ( n ) ( t ) = a l j ( n ) if t F l ( n ) for l = 1 , , n a n j ( n ) if t = T
and
c ¯ i ( n ) ( t ) = c l i ( n ) if t F l ( n ) for l = 1 , , n c n i ( n ) if t = T ,
respectively. For each i = 1 , , p , we also define the step function w ¯ i ( n ) : [ 0 , T ] R by
w ¯ i ( n ) ( t ) = w ¯ l i ( n ) if t F l ( n ) for l = 1 , , n w ¯ n i ( n ) if t = T .
Lemma 5.
For i = 1 , , p and j = 1 , , q , we have
0 T a j ( 0 ) ( t ) a ¯ j ( n ) ( t ) · z ^ j ( n ) ( t ) d t 0 for j I ( a ) 0 T a j ( 0 ) ( t ) a ^ j ( t ) a ¯ j ( n ) ( t ) · z ^ j ( n ) ( t ) d t 0 for j I ( a ) as n
and
0 T c i ( 0 ) ( t ) c ¯ i ( n ) ( t ) · w ¯ i ( n ) ( t ) d t 0 for i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) c ¯ i ( n ) ( t ) · w ¯ i ( n ) ( t ) d t 0 for i I ( c ) as n .
Proof. 
We first observe that the following functions
a j ( 0 ) ( t ) a ¯ j ( n ) ( t ) · z ^ j ( n ) ( t ) for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) a ¯ j ( n ) ( t ) · z ^ j ( n ) ( t ) for j I ( a )
and
c i ( 0 ) ( t ) c ¯ i ( n ) ( t ) · w ¯ i ( n ) ( t ) for i I ( c ) c i ( 0 ) ( t ) c ^ i ( t ) c ¯ i ( n ) ( t ) · w ¯ i ( n ) ( t ) for i I ( c )
are continuous a.e. on [ 0 , T ] , i.e., they are Riemann-integrable on [ 0 , T ] . In other words, their Riemann integral and Lebesgue integral are identical. Lemma 1 says that
a j ( 0 ) ( t ) a ¯ j ( n ) ( t ) for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) a ¯ j ( n ) ( t ) for j I ( a ) 0 as n a . e . on [ 0 , T ]
and
c i ( 0 ) ( t ) c ¯ i ( n ) ( t ) for i I ( c ) c i ( 0 ) ( t ) c ^ i ( t ) c ¯ i ( n ) ( t ) for i I ( c ) 0 as n a . e . on [ 0 , T ] .
Since z ^ j ( n ) is uniformly bounded by Proposition 2, using the Lebesgue bounded convergence theorem, we can obtain (56). On the other hand, since w ¯ i ( n ) is uniformly bounded by Proposition 1, the Lebesgue bounded convergence theorem also yields (57), and the proof is complete. □
Theorem 1.
The following statements hold true.
(i)
We have
lim sup n V ( D n ) = V ( DRCLP 3 ) and 0 V ( DRCLP 3 ) V ( D n ) ε n ,
where
ε n = V ( D n ) + i = 1 p l = 1 n E ¯ l ( n ) c i ( 0 ) ( t ) · w ¯ l i ( n ) d t i I ( c ) l = 1 n E ¯ l ( n ) c ^ i ( t ) · w ¯ l i ( n ) d t + i = 1 p l = 1 n E ¯ l ( n ) π l ( n ) b l ( n ) · exp k l ( n ) · ( T t ) b l ( n ) · c i ( 0 ) ( t ) d t i I ( c ) l = 1 n E ¯ l ( n ) π l ( n ) b l ( n ) · exp k l ( n ) · ( T t ) b l ( n ) · c ^ i ( t ) d t
satisfying ε n 0 as n . Moreover, there exists a convergent subsequence { V ( D n k ) } k = 1 of { V ( D n ) } n = 1 satisfying
lim k V ( D n k ) = V ( DRCLP 3 ) .
(ii)
(No Duality Gap). Suppose that the primal problem ( P n ) is feasible. Then, we have
V ( DRCLP 3 ) = V ( RCLP 3 ) = lim sup n V ( D n ) = lim sup n V ( P n )
and
0 V ( RCLP 3 ) V ( P n ) = V ( DRCLP 3 ) V ( D n ) ε n .
Proof. 
To prove part (i), we have
0 V ( DRCLP 3 ) V ( D n ) ( by ( 30 ) ) = V ( DRCLP 3 ) l = 1 n i = 1 p E ¯ l ( n ) c l i ( n ) · w ¯ l i ( n ) d t i = 1 p 0 T c i ( 0 ) ( t ) · w ^ i ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ^ i ( t ) d t l = 1 n i = 1 p E ¯ l ( n ) c l i ( n ) · w ¯ l i ( n ) d t
( by Proposition 4 ) = i I ( c ) 0 T c i ( 0 ) ( t ) · w ^ i ( t ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) · w ^ i ( t ) d t l = 1 n i = 1 p E ¯ l ( n ) c l i ( n ) · w ¯ l i ( n ) d t = i I ( c ) l = 1 n E ¯ l ( n ) c i ( 0 ) ( t ) c l i ( n ) · w ¯ l i ( n ) d t + i I ( c ) l = 1 n E ¯ l ( n ) c i ( 0 ) ( t ) c ^ i ( t ) c l i ( n ) · w ¯ l i ( n ) d t + i I ( c ) 0 T f ( n ) ( t ) · c i ( 0 ) ( t ) d t + i I ( c ) 0 T f ( n ) ( t ) · c i ( 0 ) ( t ) c ^ i ( t ) d t = i I ( c ) 0 T c i ( 0 ) ( t ) c ¯ i ( n ) ( t ) · w ¯ i ( n ) ( t ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) c ¯ i ( n ) ( t ) · w ¯ i ( n ) ( t ) d t + i I ( c ) 0 T f ( n ) ( t ) · c i ( 0 ) ( t ) d t + i I ( c ) 0 T f ( n ) ( t ) · c i ( 0 ) ( t ) c ^ i ( t ) d t ε n
Lemma 3 states that π l ( n ) 0 as n , which implies p ( n ) 0 as n a.e. on [ 0 , T ] . Therefore, we obtain f ( n ) 0 as n a.e. on [ 0 , T ] . Using the Lebesgue bounded convergence theorem for integrals, we also obtain
0 T f ( n ) ( t ) · c i ( 0 ) ( t ) d t 0 as n for i I ( c ) 0 T f ( n ) ( t ) · c i ( 0 ) ( t ) c ^ i ( t ) d t 0 as n for i I ( c ) .
From Lemma 5, we conclude that ε n 0 as n . From (61), we also obtain
V ( D n ) V ( DRCLP 3 ) V ( D n ) + ε n ,
which implies
lim sup n V ( D n ) V ( DRCLP 3 ) lim sup n V ( D n ) + ε n lim sup n V ( D n ) + lim sup n ε n = lim sup n V ( D n ) .
Part (ii) of Proposition 1 states that { V ( D n ) } n = 1 is a bounded sequence. Therefore, there exists a convergent subsequence { V ( D n k ) } k = 1 of { V ( D n ) } n = 1 . Using (61), we obtain the equality (59). It is easy to see that ε n can be written as (58), which proves part (i).
To prove part (ii), using part (i) and inequality (30), we obtain
V ( DRCLP 3 ) V ( RCLP 3 ) lim sup n V ( D n ) = V ( DRCLP 3 ) .
Since V ( D n ) = V ( P n ) for each n N , we also have
V ( DRCLP 3 ) = V ( RCLP 3 ) = lim sup n V ( D n ) = lim sup n V ( P n )
and
0 V ( RCLP 3 ) V ( P n ) = V ( DRCLP 3 ) V ( D n ) ε n .
This completes the proof. □
From Remark 2 and Theorem 1, if the vector-valued function c is nonnegative, so that the primal problem ( P n ) is feasible, then strong duality holds for the primal and dual pair of continuous-time linear programming problems (RCLP3) and (DRCLP3).
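Since ε n 0 as n by Theorem 1, a practical scheme is to refine the discretization until the error bound (58) meets a prescribed tolerance ϵ. The sketch below assumes user-supplied routines solve_discretized and error_bound (hypothetical placeholders for a solver of ( D n ) and an evaluator of ε n ):

```python
def refine_until(eps, solve_discretized, error_bound, n0=1, n_max=4096):
    """Double the discretization level n until the error bound drops below
    the target tolerance eps.  Termination for every eps > 0 is guaranteed
    by eps_n -> 0 (Theorem 1), up to the safety cap n_max.

    solve_discretized(n) -> V(D_n); error_bound(n) -> eps_n.
    """
    n = n0
    while n <= n_max:
        if error_bound(n) <= eps:
            return n, solve_discretized(n)
        n *= 2
    raise RuntimeError("tolerance not met within n_max")
```

Doubling n keeps the number of solver calls logarithmic in the final discretization level.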
Proposition 5.
The following statements hold true.
(i)
Suppose that the primal problem ( P n ) is feasible. Let z ^ j ( n ) be defined in ( 26 ) for j = 1 , , q . Then, the error between V ( RCLP 3 ) and the objective value of ( z ^ 1 ( n ) , , z ^ q ( n ) ) is less than or equal to ε n defined in ( 58 ) . In other words, we have
0 V ( RCLP 3 ) j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j ( t ) d t ε n .
(ii)
Let w ^ i ( n ) be defined in ( 53 ) for i = 1 , , p . Then, the error between V ( DRCLP 3 ) and the objective value of ( w ^ 1 ( n ) , , w ^ p ( n ) ) is less than or equal to ε n . In other words, we have
0 i = 1 p 0 T c i ( 0 ) ( t ) · w ^ i ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ^ i ( t ) d t V ( DRCLP 3 ) ε n
Proof. 
To prove part (i), Proposition 3 states that ( z ^ 1 ( n ) , , z ^ q ( n ) ) is a feasible solution of problem (RCLP3). Since
l = 1 n j = 1 q E ¯ l ( n ) a l j ( n ) · z ^ j ( n ) ( t ) d t = l = 1 n j = 1 q d l ( n ) a l j ( n ) · z ¯ l j ( n ) = V ( P n ) = V ( D n )
and
a l j ( n ) a j ( 0 ) ( t ) for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) for j I ( a )
for all t E ¯ l ( n ) and l = 1 , , n , it follows that
0 V ( RCLP 3 ) j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j ( t ) d t V ( RCLP 3 ) l = 1 n j = 1 q E ¯ l ( n ) a l j ( n ) · z ^ j ( n ) ( t ) d t = V ( DRCLP 3 ) V ( D n ) ( by ( 63 ) and part ( ii ) of Theorem 1 ) ε n ( by part ( i ) of Theorem 1 ) .
To prove part (ii), we have
0 i = 1 p 0 T c i ( 0 ) ( t ) · w ^ i ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ^ i ( t ) d t V ( DRCLP 3 ) ( by Proposition 4 ) i = 1 p 0 T c i ( 0 ) ( t ) · w ^ i ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ^ i ( t ) d t V ( D n ) ( since V ( D n ) V ( DRCLP 3 ) by part ( i ) of Theorem 1 ) = ε n ( by ( 60 ) and ( 61 ) )
This completes the proof. □
Definition 1.
Given any ϵ > 0 , we say that the feasible solution ( z 1 ( ϵ ) , , z q ( ϵ ) ) of primal problem ( RCLP 3 ) is an ϵ-optimal solution when
0 V ( RCLP 3 ) j = 1 q 0 T a j ( 0 ) ( t ) · z j ( ϵ ) ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z j ( ϵ ) ( t ) d t < ϵ .
We say that the feasible solution ( w 1 ( ϵ ) , , w p ( ϵ ) ) of dual problem ( DRCLP 3 ) is an ϵ-optimal solution when
0 i = 1 p 0 T c i ( 0 ) ( t ) · w i ( ϵ ) ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w i ( ϵ ) ( t ) d t V ( DRCLP 3 ) < ϵ .
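Operationally, Definition 1 is just a gap test: a feasible solution is ϵ-optimal when its objective value is within ϵ of the (unknown) optimal value. A trivial sketch of this check (function and argument names are mine; in practice the unknown optimal value is replaced by the computable bound ε n from Theorem 1):

```python
def is_epsilon_optimal_primal(objective_value, optimal_value, epsilon):
    """Primal epsilon-optimality test of Definition 1:
    0 <= V(RCLP3) - objective_value < epsilon."""
    gap = optimal_value - objective_value
    return 0.0 <= gap < epsilon

print(is_epsilon_optimal_primal(9.995, 10.0, 0.01))  # -> True
print(is_epsilon_optimal_primal(9.95, 10.0, 0.01))   # -> False
```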
Theorem 2.
Given any ϵ > 0 , the following statements hold true.
(i)
The ϵ-optimal solution of primal problem ( RCLP 3 ) exists in the sense that there exists n N satisfying ( z 1 ( ϵ ) , , z q ( ϵ ) ) = ( z ^ 1 ( n ) , , z ^ q ( n ) ) , where ( z ^ 1 ( n ) , , z ^ q ( n ) ) is obtained from Proposition 5 satisfying ε n < ϵ .
(ii)
The ϵ-optimal solution of dual problem ( DRCLP 3 ) exists in the sense that there exists n N satisfying ( w 1 ( ϵ ) , , w p ( ϵ ) ) = ( w ^ 1 ( n ) , , w ^ p ( n ) ) , where ( w ^ 1 ( n ) , , w ^ p ( n ) ) is obtained from Proposition 5 satisfying ε n < ϵ .
Proof. 
Given any ϵ > 0 , from Proposition 5, since ε n 0 as n , there exists n N such that ε n < ϵ . Then, the results follow immediately. □

5. Convergence of Approximate Solutions

By referring to (26) and (53), we are interested in obtaining the convergence properties of the sequences { z ^ ( n ) } n = 1 and { w ^ ( n ) } n = 1 that are constructed from the optimal solution z ¯ ( n ) of the primal problem ( P n ) and the optimal solution w ¯ ( n ) of the dual problem ( D n ) , respectively. We first need a useful lemma.
Lemma 6.
We define a real-valued function η on [ 0 , T ] by
η ( t ) = ( τ / σ ) · exp ( ν · ( T − t ) / σ ) .
Let w ( 0 ) be a feasible solution of dual problem (DRCLP3). We also define
w i ( 1 ) ( t ) = min { w i ( 0 ) ( t ) , η ( t ) } for all i = 1 , ⋯ , p and t ∈ [ 0 , T ] .
Then, w ( 1 ) is a feasible solution of dual problem (DRCLP3) satisfying w ( 1 ) ( t ) ≤ w ( 0 ) ( t ) and w i ( 1 ) ( t ) ≤ η ( t ) for all i = 1 , ⋯ , p and t ∈ [ 0 , T ] .
Proof. 
The feasibility of w ( 0 ) for problem (DRCLP3) states that
i = 1 p B i j ( 0 ) ( t ) · w i ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w i ( 0 ) ( t ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w i ( 0 ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w i ( 0 ) ( s ) d s for j I ( a ) a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w i ( 0 ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w i ( 0 ) ( s ) d s for j I ( a )
Since K i j ( 0 ) ( s , t ) ≥ 0 and K i j ( 0 ) ( s , t ) − K ^ i j ( s , t ) ≥ 0 for j ∈ I i ( K ) , and since w i ( 1 ) ( t ) ≤ w i ( 0 ) ( t ) , from (66), we obtain
i = 1 p B i j ( 0 ) ( t ) · w i ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w i ( 0 ) ( t ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w i ( 1 ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w i ( 1 ) ( s ) d s for j I ( a ) a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w i ( 1 ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w i ( 1 ) ( s ) d s for j I ( a )
For any fixed t [ 0 , T ] , we define the index sets
I ≤ = { i : w i ( 0 ) ( t ) ≤ η ( t ) } and I > = { i : w i ( 0 ) ( t ) > η ( t ) } .
Then,
w i ( 1 ) ( t ) = w i ( 0 ) ( t ) if i ∈ I ≤ , and w i ( 1 ) ( t ) = η ( t ) if i ∈ I > .
For each fixed j, three cases are considered.
  • Suppose that I > = ∅ . Then, w i ( 0 ) ( t ) = w i ( 1 ) ( t ) for all i , and from (67), we have
    i = 1 p B i j ( 0 ) ( t ) · w i ( 1 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w i ( 1 ) ( t ) = i = 1 p B i j ( 0 ) ( t ) · w i ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w i ( 0 ) ( t ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w i ( 1 ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w i ( 1 ) ( s ) d s for j I ( a ) a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w i ( 1 ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w i ( 1 ) ( s ) d s for j I ( a )
  • Suppose that I > ≠ ∅ and that
    B i j ( 0 ) ( t ) = 0 for all i I > and j I i ( B ) B i j ( 0 ) ( t ) + B ^ i j ( t ) = 0 for all i I > and j I i ( B ) .
    Then, we obtain
    i = 1 p B i j ( 0 ) ( t ) · w i ( 1 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w i ( 1 ) ( t ) = i I B i j ( 0 ) ( t ) · w i ( 1 ) ( t ) + { i : j I i ( B ) , i I } B ^ i j ( t ) · w i ( 1 ) ( t ) = i I B i j ( 0 ) ( t ) · w i ( 0 ) ( t ) + { i : j I i ( B ) , i I } B ^ i j ( t ) · w i ( 0 ) ( t ) = i I B i j ( 0 ) ( t ) · w i ( 0 ) ( t ) + i I > B i j ( 0 ) ( t ) · w i ( 0 ) ( t ) + { i : j I i ( B ) , i I } B ^ i j ( t ) · w i ( 0 ) ( t ) + { i : j I i ( B ) , i I > } B ^ i j ( t ) · w i ( 0 ) ( t ) = i = 1 p B i j ( 0 ) ( t ) · w i ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w i ( 0 ) ( t ) ,
    Using (67), we also obtain
    i = 1 p B i j ( 0 ) ( t ) · w i ( 1 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w i ( 1 ) ( t ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w i ( 1 ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w i ( 1 ) ( s ) d s for j I ( a ) a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w i ( 1 ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w i ( 1 ) ( s ) d s for j I ( a ) .
  • Suppose that I > ≠ ∅ and that there exists i ∈ I > such that B i j ( 0 ) ( t ) ≠ 0 for j ∉ I i ( B ) or B i j ( 0 ) ( t ) + B ^ i j ( t ) ≠ 0 for j ∈ I i ( B ) , i.e., B i j ( 0 ) ( t ) ≥ σ for j ∉ I i ( B ) or B i j ( 0 ) ( t ) + B ^ i j ( t ) ≥ σ for j ∈ I i ( B ) by (17). Therefore, we obtain
    i = 1 p B i j ( 0 ) ( t ) · w i ( 1 ) ( t ) + { i : j I i ( B ) } B ^ i j · w i ( 1 ) ( t ) i I > B i j ( 0 ) ( t ) · w i ( 1 ) ( t ) + { i : j I i ( B ) , i I > } B ^ i j · w i ( 1 ) ( t ) = i I > B i j ( 0 ) ( t ) · η ( t ) + { i : j I i ( B ) , i I > } B ^ i j · η ( t ) σ · η ( t ) .
    From (64), we see that
    σ · η ( t ) = τ + ν · t T η ( s ) d s ,
    which implies
    σ · η ( t ) a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · η ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · η ( s ) d s for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · η ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · η ( s ) d s for j I ( a ) .
    for all t [ 0 , T ] . Using (68) and (69), and the facts that w i ( 1 ) ( t ) η ( t ) and K i j ( 0 ) ( s , t ) K ^ i j ( s , t ) 0 for j I i ( K ) , we also have
    i = 1 p B i j ( 0 ) ( t ) · w i ( 1 ) ( t ) + { i : j I i ( B ) } B ^ i j · w i ( 1 ) ( t ) a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · η ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · η ( s ) d s for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · η ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · η ( s ) d s for j I ( a ) . a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w i ( 1 ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w i ( 1 ) ( s ) d s for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w i ( 1 ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w i ( 1 ) ( s ) d s for j I ( a ) .
This shows that w ( 1 ) is a feasible solution of (DRCLP3), and the proof is complete. □
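The last case of the proof rests on the identity σ · η ( t ) = τ + ν · ∫ t T η ( s ) d s from (64). As a sanity check outside the paper, the identity can be verified symbolically; the symbol names below are my own:

```python
import sympy as sp

t, s, T, tau, sigma, nu = sp.symbols('t s T tau sigma nu', positive=True)

# eta(t) = (tau/sigma) * exp(nu*(T - t)/sigma), as defined in (64)
eta = (tau / sigma) * sp.exp(nu * (T - t) / sigma)

# identity used in the proof: sigma*eta(t) = tau + nu * integral_t^T eta(s) ds
lhs = sigma * eta
rhs = tau + nu * sp.integrate(eta.subs(t, s), (s, t, T))

print(sp.simplify(lhs - rhs))  # -> 0
```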
Lemma 7
(Riesz and Sz.-Nagy [54] (p. 64)). Let { f k } k = 1 be a sequence in L 2 [ 0 , T ] . If the sequence { f k } k = 1 is uniformly bounded with respect to · 2 , then a subsequence { f k j } j = 1 exists that weakly converges to f L 2 [ 0 , T ] . In other words, for any g L 2 [ 0 , T ] , we have
lim j 0 T f k j ( t ) g ( t ) d t = 0 T f ( t ) g ( t ) d t .
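A standard concrete illustration of Lemma 7 (my example, not from the paper) is f k ( t ) = sin ( k π t ) on [ 0 , 1 ] : the L²-norms are constant, so the sequence is uniformly bounded, yet it converges weakly (and not strongly) to the zero function, since the inner product with every fixed g ∈ L 2 [ 0 , 1 ] tends to zero:

```python
import numpy as np

def l2_inner(f, g, n=200001):
    """Approximate the L^2[0,1] inner product by the trapezoidal rule."""
    t = np.linspace(0.0, 1.0, n)
    v = f(t) * g(t)
    dt = t[1] - t[0]
    return dt * (v.sum() - 0.5 * (v[0] + v[-1]))

g = lambda t: np.exp(t)  # an arbitrary test function in L^2[0,1]
for k in (1, 10, 100):
    f_k = lambda t, k=k: np.sin(k * np.pi * t)
    print(k, l2_inner(f_k, g), np.sqrt(l2_inner(f_k, f_k)))
```

The inner products decay like O(1/k), while every norm equals 1/√2, so no subsequence can converge to the weak limit 0 in norm.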
Lemma 8
(Levinson [4]). If the sequence { f k } k = 1 is uniformly bounded on [ 0 , T ] with respect to · 2 and weakly converges to f L 2 [ 0 , T ] , then
f ( t ) ≤ lim sup k f k ( t ) and f ( t ) ≥ lim inf k f k ( t ) a.e. on [ 0 , T ] .
Theorem 3.
Suppose that the primal problem ( P n ) is feasible. According to ( 26 ) and ( 53 ) , let { z ^ ( n ) } n = 1 and { w ^ ( n ) } n = 1 be the sequences constructed from the optimal solution z ¯ ( n ) of the primal problem ( P n ) and the optimal solution w ¯ ( n ) of the dual problem ( D n ) , respectively. Then, the following statements hold true.
(i)
There exists a subsequence { z ^ ( n k ) } k = 1 of { z ^ ( n ) } n = 1 such that { z ^ ( n k ) } k = 1 is weakly convergent to an optimal solution z ^ of primal problem ( RCLP 3 ) .
(ii)
For each n, we define
w ˘ i ( n ) ( t ) = min { w ^ i ( n ) ( t ) , η ( t ) } .
Then, there exists a subsequence { w ˘ ( n k ) } k = 1 of { w ˘ ( n ) } n = 1 such that { w ˘ ( n k ) } k = 1 is weakly convergent to an optimal solution w ^ of dual problem ( DRCLP 3 ) .
Proof. 
Proposition 2 states that the sequence of functions { z ^ ( n ) } n = 1 is uniformly bounded with respect to · 2 . We write z ^ j ( n ) to denote the jth component of z ^ ( n ) . Lemma 7 says that there exists a subsequence z ^ 1 ( n k ( 1 ) ) k = 1 of z ^ 1 ( n ) n = 1 that weakly converges to some z ^ 1 ( 0 ) L 2 [ 0 , T ] . Using Lemma 7 again, there exists a subsequence z ^ 2 ( n k ( 2 ) ) k = 1 of z ^ 2 ( n k ( 1 ) ) k = 1 that weakly converges to some z ^ 2 ( 0 ) L 2 [ 0 , T ] . By induction, there exists a subsequence z ^ j ( n k ( j ) ) k = 1 of z ^ j ( n k ( j 1 ) ) k = 1 that weakly converges to some z ^ j ( 0 ) L 2 [ 0 , T ] for each j. Therefore, by taking the final subsequence { n k ( q ) } k = 1 , along which all q components converge weakly, we can construct a subsequence { z ^ ( n k ) } k = 1 that weakly converges to z ^ ( 0 ) . Since z ^ ( n k ) is a feasible solution of problem (RCLP3) for each n k , we have z ^ ( n k ) ( t ) 0 and
j = 1 q B i j ( 0 ) ( t ) · z ^ j ( n k ) ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z ^ j ( n k ) ( t ) c i ( 0 ) ( t ) c ^ i ( t ) + j = 1 q 0 t K i j ( 0 ) ( t , s ) · z ^ j ( n k ) ( s ) d s { j : j I i ( K ) } 0 t K ^ i j ( t , s ) · z ^ j ( n k ) ( s ) d s for i I ( c ) c i ( 0 ) ( t ) + j = 1 q 0 t K i j ( 0 ) ( t , s ) · z ^ j ( n k ) ( s ) d s { j : j I i ( K ) } 0 t K ^ i j ( t , s ) · z ^ j ( n k ) ( s ) d s for i I ( c ) .
From Lemma 8, for each j, we have
lim sup k z ^ j ( n k ) ( t ) z ^ j ( 0 ) ( t ) lim inf k z ^ j ( n k ) ( t ) 0 a . e . in [ 0 , T ] .
Therefore, we obtain z ^ ( 0 ) ( t ) 0 a.e. in [ 0 , T ] . Since B i j ( 0 ) 0 and B ^ i j 0 , for i I ( c ) , by taking the limit superior on both sides of (70), we obtain
j = 1 q B i j ( 0 ) ( t ) · z ^ j ( 0 ) ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z ^ j ( 0 ) ( t ) lim sup k j = 1 q B i j ( 0 ) ( t ) · z ^ j ( n k ) ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z ^ j ( n k ) ( t ) ( by ( 71 ) c i ( 0 ) ( t ) c ^ i ( t ) + lim sup k j = 1 q 0 t K i j ( 0 ) ( t , s ) · z ^ j ( n k ) ( s ) d s { j : j I i ( K ) } 0 t K ^ i j ( t , s ) · z ^ j ( n k ) ( s ) d s ( by ( 70 ) ) = c i ( 0 ) ( t ) c ^ i ( t ) + j = 1 q 0 t K i j ( 0 ) ( t , s ) · z ^ j ( 0 ) ( s ) d s { j : j I i ( K ) } 0 t K ^ i j ( t , s ) · z ^ j ( 0 ) ( s ) d s a . e . in [ 0 , T ] ( by the weak convergence )
For i I ( c ) , we can similarly obtain
j = 1 q B i j ( 0 ) ( t ) · z ^ j ( 0 ) ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z ^ j ( 0 ) ( t ) c i ( 0 ) ( t ) + j = 1 q 0 t K i j ( 0 ) ( t , s ) · z ^ j ( 0 ) ( s ) d s { j : j I i ( K ) } 0 t K ^ i j ( t , s ) · z ^ j ( 0 ) ( s ) d s .
Let N 0 be a subset of [ 0 , T ] on which inequalities (72) and (73) are violated, let N 1 be a subset of [ 0 , T ] on which z ^ ( 0 ) ( t ) 0 , and let N = N 0 N 1 . We define
z ^ ( t ) = z ^ ( 0 ) ( t ) if t ∉ N , and z ^ ( t ) = 0 if t ∈ N ,
where the set N has a measure of zero. It is clear that z ^ ( t ) 0 for all t [ 0 , T ] and that z ^ ( t ) = z ^ ( 0 ) ( t ) a.e. on [ 0 , T ] , i.e., z ^ j ( t ) = z ^ j ( 0 ) ( t ) a.e. on [ 0 , T ] for each j. We also see that z ^ j L 2 [ 0 , T ] for each j. Now, we consider two cases.
  • For t N , from (72), we have
    j = 1 q B i j ( 0 ) ( t ) · z ^ j ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z ^ j ( t ) = j = 1 q B i j ( 0 ) ( t ) · z ^ j ( 0 ) ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z ^ j ( 0 ) ( t ) c i ( 0 ) ( t ) c ^ i ( t ) + j = 1 q 0 t K i j ( 0 ) ( t , s ) · z ^ j ( 0 ) ( s ) d s { j : j I i ( K ) } 0 t K ^ i j ( t , s ) · z ^ j ( 0 ) ( s ) d s c i ( 0 ) ( t ) c ^ i ( t ) + j = 1 q 0 t K i j ( 0 ) ( t , s ) · z ^ j ( s ) d s { j : j I i ( K ) } 0 t K ^ i j ( t , s ) · z ^ j ( s ) d s .
    From (73), we can similarly obtain
    j = 1 q B i j ( 0 ) ( t ) · z ^ j ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z ^ j ( t ) c i ( 0 ) ( t ) + j = 1 q 0 t K i j ( 0 ) ( t , s ) · z ^ j ( s ) d s { j : j I i ( K ) } 0 t K ^ i j ( t , s ) · z ^ j ( s ) d s .
  • For t N , from (72), we have
    j = 1 q B i j ( 0 ) ( t ) · z ^ j ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z ^ j ( t ) = 0 c i ( 0 ) ( t ) c ^ i ( t ) + j = 1 q 0 t K i j ( 0 ) ( t , s ) · z ^ j ( 0 ) ( s ) d s { j : j I i ( K ) } 0 t K ^ i j ( t , s ) · z ^ j ( 0 ) ( s ) d s c i ( 0 ) ( t ) c ^ i ( t ) + j = 1 q 0 t K i j ( 0 ) ( t , s ) · z ^ j ( s ) d s { j : j I i ( K ) } 0 t K ^ i j ( t , s ) · z ^ j ( s ) d s .
    From (73), we can similarly obtain
    j = 1 q B i j ( 0 ) ( t ) · z ^ j ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z ^ j ( t ) c i ( 0 ) ( t ) + j = 1 q 0 t K i j ( 0 ) ( t , s ) · z ^ j ( s ) d s { j : j I i ( K ) } 0 t K ^ i j ( t , s ) · z ^ j ( s ) d s .
This shows that z ^ is a feasible solution of problem (RCLP3). Since z ^ ( t ) = z ^ ( 0 ) ( t ) a.e. on [ 0 , T ] , it follows that the subsequence { z ^ ( n k ) } k = 1 is also weakly convergent to z ^ .
Using the feasibility of w ^ ( n ) for problem (DRCLP3), Lemma 6 states that w ˘ ( n ) is also a feasible solution of problem (DRCLP3) for each n satisfying w ˘ i ( n ) ( t ) w ^ i ( n ) ( t ) for each i = 1 , , p and t [ 0 , T ] . Remark 3 states that the sequence { w ^ ( n ) } n = 1 is uniformly bounded. Since
η ( t ) = ( τ / σ ) · exp ( ν · ( T − t ) / σ ) ≤ ( τ / σ ) · exp ( ν · T / σ ) for all t ∈ [ 0 , T ] ,
it follows that the sequence { w ˘ ( n ) } n = 1 is also uniformly bounded, which implies that the sequence { w ˘ ( n ) } n = 1 is uniformly bounded with respect to · 2 . Using Lemma 7, we can similarly show that there is a subsequence { w ˘ ( n k ) } k = 1 of { w ˘ ( n ) } n = 1 that weakly converges to some w ˘ ( 0 ) . The feasibility of w ˘ ( n k ) for problem (DRCLP3) states that w ˘ ( n k ) ( t ) 0 and
i = 1 p B i j ( 0 ) ( t ) · w ˘ i ( n k ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w ˘ i ( n k ) ( t ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w ˘ i ( n k ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w ˘ i ( n k ) ( s ) d s for j I ( a ) a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w ˘ i ( n k ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w ˘ i ( n k ) ( s ) d s for j I ( a ) .
From Lemma 8, for each i, we have
lim sup k w ˘ i ( n k ) ( t ) w ˘ i ( 0 ) ( t ) lim inf k w ˘ i ( n k ) ( t ) 0 a . e . in [ 0 , T ] ,
which suggests that w ˘ ( 0 ) ( t ) 0 a.e. in [ 0 , T ] . Since B i j ( 0 ) 0 and B ^ i j 0 , for j I ( a ) , by taking the limit inferior on both sides of (74), we obtain
i = 1 p B i j ( 0 ) ( t ) · w ˘ i ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w ˘ i ( 0 ) ( t ) lim inf n k i = 1 p B i j ( 0 ) ( t ) · w ˘ i ( n k ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w ˘ i ( n k ) ( t ) a j ( 0 ) ( t ) a ^ j ( t ) + lim inf n k i = 1 p t T K i j ( 0 ) ( s , t ) · w ˘ i ( n k ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w ˘ i ( n k ) ( s ) d s = a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w ˘ i ( 0 ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w ˘ i ( 0 ) ( s ) d s a . e . in [ 0 , T ] ( by the weak convergence )
For j I ( a ) , we can similarly obtain
i = 1 p B i j ( 0 ) ( t ) · w ˘ i ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w ˘ i ( 0 ) ( t ) a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w ˘ i ( 0 ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w ˘ i ( 0 ) ( s ) d s .
We define the vector-valued function η ( t ) = η ( t ) · 1 p , where 1 p denotes the all-ones vector in R p . Then, we see that w ˘ ( n k ) ( t ) ≤ η ( t ) for each k and for all t ∈ [ 0 , T ] .
Let N ^ 0 be a subset of [ 0 , T ] on which the inequalities (76) and (77) are violated, let N ^ 1 be a subset of [ 0 , T ] on which w ˘ ( 0 ) ( t ) 0 , and let N ^ = N ^ 0 N ^ 1 . We define
w ^ ( t ) = w ˘ ( 0 ) ( t ) if t ∉ N ^ , and w ^ ( t ) = η ( t ) if t ∈ N ^ ,
where the set N ^ has a measure of zero. Then, we see that w ^ ( t ) 0 for all t [ 0 , T ] and that w ^ ( t ) = w ˘ ( 0 ) ( t ) a.e. on [ 0 , T ] . Now, we claim that w ^ is a feasible solution of ( DRCLP 3 ) . We consider two cases.
  • Suppose that t N ^ . For j I ( a ) , from (76), we have
    i = 1 p B i j ( 0 ) ( t ) · w ^ i ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w ^ i ( t ) = i = 1 p B i j ( 0 ) ( t ) · w ˘ i ( 0 ) ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w ˘ i ( 0 ) ( t ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w ˘ i ( 0 ) ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w ˘ i ( 0 ) ( s ) d s = a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w ^ i ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w ^ i ( s ) d s .
    For j I ( a ) , from (77), we can similarly obtain
    i = 1 p B i j ( 0 ) ( t ) · w ^ i ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w ^ i ( t ) a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w ^ i ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w ^ i ( s ) d s .
  • Suppose that t N ^ . For each i = 1 , , p , since w ˘ i ( n k ) ( t ) η ( t ) for all t [ 0 , T ] , using Lemma 8, we have
    w ˘ i ( 0 ) ( t ) lim sup n k w ˘ i ( n k ) ( t ) η ( t ) a . e . on [ 0 , T ] .
    For each i = 1 , , p , since w ^ i ( t ) = w ˘ i ( 0 ) ( t ) a.e. on [ 0 , T ] , it follows that
    w ^ i ( t ) η ( t ) a . e . on [ 0 , T ] .
    Therefore, for j I ( a ) , we obtain
    i = 1 p B i j ( 0 ) ( t ) · w ^ i ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w ^ i ( t ) = i = 1 p B i j ( 0 ) ( t ) · η ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · η ( t ) σ · η ( t ) ( by ( 68 ) ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · η ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · η ( s ) d s ( by ( 69 ) ) a j ( 0 ) ( t ) a ^ j ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w ^ i ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w ^ i ( s ) d s ( by ( 78 ) ) .
    For j I ( a ) , we can similarly obtain
    i = 1 p B i j ( 0 ) ( t ) · w ^ i ( t ) + { i : j I i ( B ) } B ^ i j ( t ) · w ^ i ( t ) a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w ^ i ( s ) d s { i : j I i ( K ) } t T K ^ i j ( s , t ) · w ^ i ( s ) d s .
The above two cases conclude that w ^ is indeed a feasible solution of (DRCLP3). Since w ^ ( t ) = w ˘ ( 0 ) ( t ) a.e. on [ 0 , T ] , it follows that the subsequence { w ˘ ( n k ) } k = 1 is also weakly convergent to w ^ .
Finally, we prove the optimality. Now, we have
j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( n k ) ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j ( n k ) ( t ) d t = j I ( a ) 0 T a j ( 0 ) ( t ) · z ^ j ( n k ) ( t ) d t + j I ( a ) 0 T a j ( 0 ) ( t ) a ^ j ( t ) · z ^ j ( n k ) ( t ) d t = j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a l j ( n k ) · z ^ j ( n k ) ( t ) d t + j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n k ) · z ^ j ( n k ) ( t ) d t + j = 1 q l = 1 n k E ¯ l ( n k ) a l j ( n k ) · z ^ j ( n k ) ( t ) d t = j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t + j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t + j = 1 q l = 1 n k d l ( n k ) · a l j ( n k ) · z ¯ l j ( n k ) d t = j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t + j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t + V ( P n k )
and
i = 1 p 0 T c i ( 0 ) ( t ) · w ^ i ( n k ) ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ^ i ( n k ) ( t ) d t = i I ( c ) 0 T c i ( 0 ) ( t ) · w ^ i ( n k ) ( t ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) · w ^ i ( n k ) ( t ) d t = i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) · w ¯ l i ( n k ) d t + i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c ^ i ( t ) · w ¯ l i ( n k ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) · f ( n k ) ( t ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) · f ( n k ) ( t ) d t = i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t + i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c ^ i ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t + i = 1 p l = 1 n k d l ( n k ) · c l i ( n k ) · w ¯ l i ( n k ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) · f ( n k ) ( t ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) · f ( n k ) ( t ) d t = i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t + i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c ^ i ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t + V ( D n k ) + i I ( c ) 0 T c i ( 0 ) ( t ) · f ( n k ) ( t ) d t + i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) · f ( n k ) ( t ) d t .
Since V ( P n k ) = V ( D n k ) and w ˘ i ( n k ) w ^ i ( n k ) for each i and n k , from (79) and (80), we have
j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( n k ) ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j ( n k ) ( t ) d t j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t j I ( a ) l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t i = 1 p 0 T c i ( 0 ) ( t ) · w ˘ i ( n k ) ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ˘ i ( n k ) ( t ) d t i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t i I ( c ) l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c ^ i ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t i I ( c ) 0 T c i ( 0 ) ( t ) · f ( n k ) ( t ) d t i I ( c ) 0 T c i ( 0 ) ( t ) c ^ i ( t ) · f ( n k ) ( t ) d t .
Using Lemma 5, for j I ( a ) , we have
0 l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t = 0 T a j ( 0 ) ( t ) a ¯ j ( n k ) ( t ) · z ^ j ( n k ) ( t ) d t 0 as k
and, for j I ( a ) , we have
0 l = 1 n k E ¯ l ( n k ) a j ( 0 ) ( t ) a ^ j ( t ) a l j ( n k ) · z ¯ l j ( n k ) d t = 0 T a j ( 0 ) ( t ) a ^ j ( t ) a ¯ j ( n k ) ( t ) · z ^ j ( n k ) ( t ) d t 0 as k .
Additionally, for i I ( c ) , we have
0 l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t = 0 T c i ( 0 ) ( t ) c ¯ i ( n k ) ( t ) · w ¯ i ( n k ) ( t ) d t 0 as k
and, for i I ( c ) , we have
0 l = 1 n k E ¯ l ( n k ) c i ( 0 ) ( t ) c ^ i ( t ) c l i ( n k ) · w ¯ l i ( n k ) d t = 0 T c i ( 0 ) ( t ) c ^ i ( t ) c ¯ i ( n k ) ( t ) · w ¯ i ( n k ) ( t ) d t 0 as k .
By taking limits on both sides of (81) and using (62) and (82)–(85), we obtain
lim k j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( n k ) ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j ( n k ) ( t ) d t lim k i = 1 p 0 T c i ( 0 ) ( t ) · w ˘ i ( n k ) ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ˘ i ( n k ) ( t ) d t .
Using the weak convergence, we also obtain
j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j ( t ) d t i = 1 p 0 T c i ( 0 ) ( t ) · w ^ i ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ^ i ( t ) d t .
The weak duality theorem between problems (RCLP3) and (DRCLP3) gives the reverse inequality, so that
j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z ^ j ( t ) d t = i = 1 p 0 T c i ( 0 ) ( t ) · w ^ i ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w ^ i ( t ) d t ,
which shows that ( z ^ 1 , … , z ^ q ) and ( w ^ 1 , … , w ^ p ) are optimal solutions of problems (RCLP3) and (DRCLP3), respectively. Theorem 1 also states that V ( DRCLP 3 ) = V ( RCLP 3 ) , and the proof is complete. □

6. Computational Procedure and Numerical Example

In order to obtain approximate solutions of the continuous-time linear programming problem (RCLP3), we apply Proposition 5 in the limiting situation, which suggests a computational procedure. The approximate solutions are naturally step functions: Proposition 5 shows that suitable step functions can be obtained whose objective value is arbitrarily close to the optimal objective value when n is taken to be sufficiently large.
Theorem 1 and Proposition 5 state that the error between the approximate objective value and the optimal objective value is bounded by
ε n = V ( D n ) + i = 1 p l = 1 n E ¯ l ( n ) c i ( 0 ) ( t ) · w ¯ l i ( n ) d t + E ¯ l ( n ) π l ( n ) b l ( n ) · exp k l ( n ) · ( T t ) b l ( n ) · c i ( 0 ) ( t ) d t i I ( c ) l = 1 n E ¯ l ( n ) c ^ i ( t ) · w ¯ l i ( n ) d t + E ¯ l ( n ) π l ( n ) b l ( n ) · exp k l ( n ) · ( T t ) b l ( n ) · c ^ i ( t ) d t
In order to obtain π l ( n ) , by referring to (34), we need to solve the following problem:
sup t E l ( n ) h ¯ l j ( n ) ( t ) + a j ( t ) ,
where the real-valued function a j is given by
a j ( t ) = a j ( 0 ) ( t ) for j I ( a ) a j ( 0 ) ( t ) a ^ j ( t ) for j I ( a ) .
We rewrite the real-valued function h ¯ l j ( n ) as follows:
h ¯ l j ( n ) ( t ) = i = 1 p B l i j ( n ) · w ¯ l i ( n ) i = 1 p B i j ( 0 ) ( t ) · w ¯ l i ( n ) + { i : j I i ( B ) } B ^ i j ( t ) · w ¯ l i ( n ) + t e l ( n ) i = 1 p K i j ( 0 ) ( s , t ) · w ¯ l i ( n ) { i : j I i ( K ) } K ^ i j ( s , t ) · w ¯ l i ( n ) d s + k = l + 1 n E ¯ k ( n ) i = 1 p K i j ( 0 ) ( s , t ) · w ¯ k i ( n ) { i : j I i ( K ) } K ^ i j ( s , t ) · w ¯ k i ( n ) i = 1 p K k l i j ( n ) · w ¯ k i ( n ) d s .
For t F l ( n ) and l = 1 , , n , we define the constant
h ^ l j ( n ) = i = 1 p B l i j ( n ) · w ¯ l i ( n ) k = l + 1 n E ¯ k ( n ) i = 1 p K k l i j ( n ) · w ¯ k i ( n ) d s
and the real-valued function
h ˜ l j ( n ) ( t ) = i = 1 p B i j ( 0 ) ( t ) · w ¯ l i ( n ) + { i : j I i ( B ) } B ^ i j ( t ) · w ¯ l i ( n ) + t e l ( n ) i = 1 p K i j ( 0 ) ( s , t ) · w ¯ l i ( n ) { i : j I i ( K ) } K ^ i j ( s , t ) · w ¯ l i ( n ) d s + k = l + 1 n E ¯ k ( n ) i = 1 p K i j ( 0 ) ( s , t ) · w ¯ k i ( n ) { i : j I i ( K ) } K ^ i j ( s , t ) · w ¯ k i ( n ) d s .
Then, the real-valued function h ¯ l j ( n ) is given by
h ¯ l j ( n ) ( t ) = h ^ l j ( n ) + h ˜ l j ( n ) ( t ) for t F l ( n ) .
Now, we define the real-valued function h l j ( n ) on E ¯ l ( n ) by
h l j ( n ) ( t ) = h ¯ l j ( n ) ( t ) + a j ( t ) , if t E l ( n ) lim t e l 1 ( n ) + h ¯ l j ( n ) ( t ) + a j ( t ) , if t = e l 1 ( n ) lim t e l ( n ) h ¯ l j ( n ) ( t ) + a j ( t ) , if t = e l ( n )
Since a j ( 0 ) , a ^ j , B i j ( 0 ) , and B ^ i j are continuous on E l ( n ) and since K i j ( 0 ) and K ^ i j are continuous on E k ( n ) × E l ( n ) for all l , k = 1 , ⋯ , n , it follows that h ¯ l j ( n ) + a j is also continuous on E l ( n ) . This also shows that h l j ( n ) is continuous on the compact interval E ¯ l ( n ) . In other words, the supremum in (86) can be obtained as follows:
sup t E l ( n ) h ¯ l j ( n ) ( t ) + a j ( t ) = sup t E ¯ l ( n ) h l j ( n ) ( t ) = max t E ¯ l ( n ) h l j ( n ) ( t ) .
In order to further design the computational procedure, we assume that a j ( 0 ) , a ^ j , B i j ( 0 ) , K i j ( 0 ) , and K ^ i j are twice-differentiable on [ 0 , T ] and [ 0 , T ] × [ 0 , T ] , respectively, so that Newton's method can be applied. Consequently, a j ( 0 ) , a ^ j , B i j ( 0 ) , K i j ( 0 ) , and K ^ i j are twice-differentiable on the open interval E l ( n ) and the open rectangle E k ( n ) × E l ( n ) , respectively, for all l , k = 1 , ⋯ , n . From (89), we need to solve the following simple type of optimization problem:
max e l 1 ( n ) t e l ( n ) h l j ( n ) ( t ) .
Then, we can see that the optimal solution is
t = e l 1 ( n ) or t = e l ( n ) or satisfying d d t h l j ( n ) ( t ) t = t = 0 .
According to (88), it follows that the optimal solution of problem (90) is
t = e l 1 ( n ) or t = e l ( n ) or satisfying d d t h ˜ l j ( n ) ( t ) + a j ( t ) t = t = 0 .
Let Z l j ( n ) denote the set of all zeros of the real-valued function d d t ( h ˜ l j ( n ) ( t ) + a j ( t ) ) . Then,
max t E ¯ l ( n ) h l j ( n ) ( t ) = max h l j ( n ) e l 1 ( n ) , h l j ( n ) e l ( n ) , max t Z l j ( n ) h l j ( n ) t , if Z l j ( n ) max h l j ( n ) e l 1 ( n ) , h l j ( n ) e l ( n ) , if Z l j ( n ) = .
Therefore, using (89) and (91), we can obtain the desired supremum (86).
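Computationally, (89) and (91) say that maximizing the continuous function h l j ( n ) over the compact interval E ¯ l ( n ) reduces to comparing the two endpoint values with the values at the interior zeros of the derivative. A minimal sketch of this reduction (names are illustrative):

```python
def interval_max(h, a, b, interior_zeros):
    """Maximize a C^1 function h over [a, b] as in (91): compare the
    endpoint values with the values at the interior zeros of h'."""
    candidates = [a, b] + [t for t in interior_zeros if a < t < b]
    return max(h(t) for t in candidates)

# toy example: h(t) = t*(1 - t) on [0, 1], whose derivative vanishes at t = 0.5
print(interval_max(lambda t: t * (1.0 - t), 0.0, 1.0, [0.5]))  # -> 0.25
```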
From (87), for t E l ( n ) , we have
d d t h ˜ l j ( n ) ( t ) = i = 1 p d d t B i j ( 0 ) ( t ) · w ¯ l i ( n ) + { i : j I i ( B ) } d d t B ^ i j ( t ) · w ¯ l i ( n ) + i = 1 p t e l ( n ) t K i j ( 0 ) ( s , t ) · w ¯ l i ( n ) d s K i j ( 0 ) ( t , t ) · w ¯ l i ( n ) { i : j I i ( K ) } t e l ( n ) t K ^ i j ( s , t ) · w ¯ l i ( n ) d s K ^ i j ( t , t ) · w ¯ l i ( n ) + k = l + 1 n E ¯ k ( n ) i = 1 p t K i j ( 0 ) ( s , t ) · w ¯ k i ( n ) { i : j I i ( K ) } t K ^ i j ( s , t ) · w ¯ k i ( n ) d s .
and
d 2 d t 2 h ˜ l j ( n ) ( t ) = i = 1 p d 2 d t 2 B i j ( 0 ) ( t ) · w ¯ l i ( n ) + { i : j I i ( B ) } d 2 d t 2 B ^ i j ( t ) · w ¯ l i ( n ) i = 1 p d d t K i j ( 0 ) ( t , t ) · w ¯ l i ( n ) + i = 1 p d d t K ^ i j ( t , t ) · w ¯ l i ( n ) + i = 1 p t e l ( n ) 2 t 2 K i j ( 0 ) ( s , t ) · w ¯ l i ( n ) d s d d t K i j ( 0 ) ( t , t ) · w ¯ l i ( n ) { i : j I i ( K ) } t e l ( n ) 2 t 2 K ^ i j ( s , t ) · w ¯ l i ( n ) d s d d t K ^ i j ( t , t ) · w ¯ l i ( n ) + k = l + 1 n E ¯ k ( n ) i = 1 p 2 t 2 K i j ( 0 ) ( s , t ) · w ¯ k i ( n ) { i : j I i ( K ) } 2 t 2 K ^ i j ( s , t ) · w ¯ k i ( n ) d s .
We consider the following cases.
  • Suppose that h ˜ l j ( n ) + a j is a linear function of t on E l ( n ) given by
    h ˜ l j ( n ) ( t ) + a j ( t ) = a j · t + b j
    for j = 1 , , q . Using (89), we obtain
    max t E ¯ l ( n ) h l j ( n ) ( t ) = max h l j ( n ) e l 1 ( n ) , h l j ( n ) e l ( n ) , b j , if a j = 0 max h l j ( n ) e l 1 ( n ) , h l j ( n ) e l ( n ) , a j · e l ( n ) + b j , if a j > 0 max h l j ( n ) e l 1 ( n ) , h l j ( n ) e l ( n ) , a j · e l 1 ( n ) + b j , if a j < 0
  • Suppose that h ˜ l j ( n ) + a j is not a linear function of t. In order to obtain a zero t ⋆ of d d t ( h ˜ l j ( n ) ( t ) + a j ( t ) ) , we can apply Newton's method to generate a sequence { t m } m = 1 such that t m → t ⋆ as m → ∞ . The iteration is given by
    t m + 1 = t m d d t h ˜ l j ( n ) ( t ) t = t m + d d t a j ( t ) t = t m d 2 d t 2 h ˜ l j ( n ) ( t ) t = t m + d 2 d t 2 a j ( t ) t = t m
    for m = 0 , 1 , 2 , ⋯ , where t 0 is the initial guess. Since the real-valued function d d t ( h ˜ l j ( n ) ( t ) + a j ( t ) ) may have more than one zero, we apply Newton's method with as many different initial guesses t 0 as possible.
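A multistart version of the iteration (94) can be sketched as follows; the grid of initial guesses, the tolerances, and the duplicate-merging threshold are my own choices, not prescribed by the paper:

```python
def newton_zeros(dh, d2h, a, b, n_starts=20, tol=1e-10, max_iter=50):
    """Collect zeros of dh in (a, b) by Newton's method started from a
    grid of initial guesses t_0; nearby duplicates are merged."""
    zeros = []
    for i in range(n_starts):
        t = a + (b - a) * (i + 0.5) / n_starts  # initial guess t_0
        for _ in range(max_iter):
            curvature = d2h(t)
            if curvature == 0.0:
                break  # Newton step undefined; abandon this start
            t_next = t - dh(t) / curvature
            if abs(t_next - t) < tol:
                t = t_next
                break
            t = t_next
        if a < t < b and abs(dh(t)) < 1e-8:
            if all(abs(t - z) > 1e-6 for z in zeros):
                zeros.append(t)
    return sorted(zeros)

# toy example: dh(t) = 3*t**2 - 1 has zeros at +/- 1/sqrt(3) in (-1, 1)
print(newton_zeros(lambda t: 3 * t**2 - 1, lambda t: 6 * t, -1.0, 1.0))
```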
Now, the computational procedure is given below.
  • Step 1. Set the error tolerance ϵ and the initial value of the natural number n ∈ ℕ that controls the partition refinement.
  • Step 2. Solve the dual problem ( D n ) to obtain the optimal objective value V ( D n ) and optimal solution w ¯ .
  • Step 3. Use Newton’s method given in (94) to find the set Z l j ( n ) of all zeros of the real-valued function d d t ( h ˜ l j ( n ) ( t ) + a j ( t ) ) .
  • Step 4. Use (91) to calculate the maximum (90), and use (89) to calculate the supremum (86).
  • Step 5. Use (34) and the supremum obtained in Step 4 to obtain π ¯ l ( n ) . According to (35), use the values of π ¯ l ( n ) to obtain π l ( n ) .
  • Step 6. Use (58) to calculate the error bound ε n . If ε n < ϵ , then go to Step 7. Otherwise, subdivide each closed subinterval further by setting n ← n + n ^ and go to Step 2, where the integer n ^ is the number of new subdivision points added over all the closed subintervals.
  • Step 7. Solve the primal problem ( P n ) to obtain the optimal solution z ¯ ( n ) .
  • Step 8. Use (26) to set the step function z ^ ( n ) ( t ) , which is the approximate solution of problem (RCLP3). By Proposition 5, the actual error between V ( RCLP 3 ) and the objective value of z ^ ( n ) ( t ) is less than ε n , so the error tolerance ϵ is attained for the partition P n .
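Steps 1–6 form a refine-and-check loop. The sketch below shows only its control flow; `solve_dual` and `error_bound` are hypothetical stand-ins for solving $(D_n)$ and evaluating the bound of (58), mimicked here by a toy $O(1/n)$ decay.

```python
def refine_until_tolerance(solve_dual, error_bound, eps,
                           n0=2, n_step=2, n_max=10_000):
    """Steps 1-6 in outline: solve the discretized dual problem on an
    increasingly fine partition until the error bound drops below eps."""
    n = n0
    while n <= n_max:
        solution = solve_dual(n)          # Step 2
        err = error_bound(n, solution)    # Step 6
        if err < eps:
            return n, solution, err       # proceed to Steps 7-8
        n += n_step                       # refine the partition
    raise RuntimeError("tolerance not reached within n_max")

# Toy stand-ins: the true bound of Theorem 1 shrinks as n grows,
# mimicked here by 1/n.
n, _, err = refine_until_tolerance(lambda n: None,
                                   lambda n, s: 1.0 / n, eps=0.01)
# stops at the first n (= 102 here) with 1/n < 0.01
```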
In the sequel, we present a numerical example involving piecewise continuous functions on the time interval [ 0 , T ] . We take T = 1 and consider the following problem:
$$\begin{aligned} \text{maximize}\quad & \int_{0}^{1} \big(a_1(t)\cdot z_1(t) + a_2(t)\cdot z_2(t)\big)\,dt\\ \text{subject to}\quad & b_1(t)\cdot z_1(t) \le c_1(t) + \int_{0}^{t} \big(k_1(t,s)\cdot z_1(s) + k_2(t,s)\cdot z_2(s)\big)\,ds \ \text{ for all } t\in[0,1];\\ & b_2(t)\cdot z_2(t) \le c_2(t) + \int_{0}^{t} \big(k_3(t,s)\cdot z_1(s) + k_4(t,s)\cdot z_2(s)\big)\,ds \ \text{ for all } t\in[0,1];\\ & z = (z_1, z_2) \in L_{2}^{2}[0,1]. \end{aligned}$$
The data a 1 and a 2 are assumed to be uncertain with the nominal data
$$a_1^{(0)}(t) = \begin{cases} e^{t}, & \text{if } 0\le t\le 0.2\\ \sin t, & \text{if } 0.2 < t\le 0.6\\ t^2, & \text{if } 0.6 < t\le 1 \end{cases} \qquad\text{and}\qquad a_2^{(0)}(t) = \begin{cases} 2t, & \text{if } 0\le t\le 0.5\\ t, & \text{if } 0.5 < t\le 0.7\\ t^2, & \text{if } 0.7 < t\le 1 \end{cases}$$
and the uncertainties
$$\hat{a}_1(t) = \begin{cases} e^{0.01t}, & \text{if } 0\le t\le 0.2\\ \sin(0.01t), & \text{if } 0.2 < t\le 0.6\\ (0.02t)^2, & \text{if } 0.6 < t\le 1 \end{cases} \qquad\text{and}\qquad \hat{a}_2(t) = \begin{cases} 0.02t, & \text{if } 0\le t\le 0.5\\ 0.01t, & \text{if } 0.5 < t\le 0.7\\ (0.02t)^2, & \text{if } 0.7 < t\le 1, \end{cases}$$
respectively. The data c 1 and c 2 are assumed to be uncertain with the nominal data
$$c_1^{(0)}(t) = \begin{cases} t^3, & \text{if } 0\le t\le 0.3\\ (\ln t)^2, & \text{if } 0.3 < t\le 0.5\\ t^2, & \text{if } 0.5 < t\le 0.8\\ \cos t, & \text{if } 0.8 < t\le 1 \end{cases} \qquad\text{and}\qquad c_2^{(0)}(t) = \begin{cases} t, & \text{if } 0\le t\le 0.4\\ 5t, & \text{if } 0.4 < t\le 0.5\\ t^3, & \text{if } 0.5 < t\le 0.8\\ t^2, & \text{if } 0.8 < t\le 1 \end{cases}$$
and the uncertainties
$$\hat{c}_1(t) = \begin{cases} (0.01t)^3, & \text{if } 0\le t\le 0.3\\ 0, & \text{if } 0.3 < t\le 0.5\\ (0.03t)^2, & \text{if } 0.5 < t\le 0.8\\ 0, & \text{if } 0.8 < t\le 1 \end{cases} \qquad\text{and}\qquad \hat{c}_2(t) = \begin{cases} 0.01t, & \text{if } 0\le t\le 0.4\\ 0.02t, & \text{if } 0.4 < t\le 0.5\\ (0.01t)^3, & \text{if } 0.5 < t\le 0.8\\ (0.02t)^2, & \text{if } 0.8 < t\le 1. \end{cases}$$
The uncertain time-dependent matrices B ( t ) and K ( t , s ) are given below:
$$B(t) = \begin{pmatrix} B_{11}(t) & B_{12}(t)\\ B_{21}(t) & B_{22}(t) \end{pmatrix} = \begin{pmatrix} b_1(t) & 0\\ 0 & b_2(t) \end{pmatrix}$$
and
$$K(t,s) = \begin{pmatrix} K_{11}(t,s) & K_{12}(t,s)\\ K_{21}(t,s) & K_{22}(t,s) \end{pmatrix} = \begin{pmatrix} k_1(t,s) & k_2(t,s)\\ k_3(t,s) & k_4(t,s) \end{pmatrix}.$$
The data b 1 = B 11 and b 2 = B 22 are assumed to be uncertain with the nominal data
$$B_{11}^{(0)}(t) = b_1^{(0)}(t) = \begin{cases} 20\cos t, & \text{if } 0\le t\le 0.2\\ 25\sin t, & \text{if } 0.2 < t\le 0.6\\ 27t^2, & \text{if } 0.6 < t\le 1 \end{cases}$$
and
$$B_{22}^{(0)}(t) = b_2^{(0)}(t) = \begin{cases} 25\cos t, & \text{if } 0\le t\le 0.5\\ 22t, & \text{if } 0.5 < t\le 0.7\\ 25t^2, & \text{if } 0.7 < t\le 1 \end{cases}$$
and the uncertainties
$$\hat{B}_{11}(t) = \hat{b}_1(t) = \begin{cases} 0, & \text{if } 0\le t\le 0.2\\ \sin(0.01t), & \text{if } 0.2 < t\le 0.6\\ (0.03t)^2, & \text{if } 0.6 < t\le 1 \end{cases}$$
and
$$\hat{B}_{22}(t) = \hat{b}_2(t) = \begin{cases} 0, & \text{if } 0\le t\le 0.5\\ 0.01t, & \text{if } 0.5 < t\le 0.7\\ (0.02t)^2, & \text{if } 0.7 < t\le 1. \end{cases}$$
The data k 1 = K 11 , k 2 = K 12 , k 3 = K 21 , and k 4 = K 22 are assumed to be uncertain with the nominal data
$$K_{11}^{(0)}(t,s) = k_1^{(0)}(t,s) = \begin{cases} t^3 + s^2, & \text{if } 0\le t\le 0.8 \text{ and } 0\le s\le 0.5\\ t^2 + \sin s, & \text{if } 0\le t\le 0.8 \text{ and } 0.5 < s\le 1\\ (\ln t)^2 + 3e^{s}, & \text{if } 0.8 < t\le 1 \text{ and } 0\le s\le 0.5\\ \cos t + 5e^{s}, & \text{if } 0.8 < t\le 1 \text{ and } 0.5 < s\le 1 \end{cases}$$
$$K_{12}^{(0)}(t,s) = k_2^{(0)}(t,s) = \begin{cases} t^3 \cdot s^2, & \text{if } 0\le t\le 0.6 \text{ and } 0\le s\le 0.7\\ t^2 \cdot \sin s, & \text{if } 0\le t\le 0.6 \text{ and } 0.7 < s\le 1\\ (\ln t)^2 \cdot e^{s}, & \text{if } 0.6 < t\le 1 \text{ and } 0\le s\le 0.7\\ 3t^2 \cdot \sin s, & \text{if } 0.6 < t\le 1 \text{ and } 0.7 < s\le 1 \end{cases}$$
$$K_{21}^{(0)}(t,s) = k_3^{(0)}(t,s) = \begin{cases} 3t^2 \cdot \sin s, & \text{if } 0\le t\le 0.3 \text{ and } 0\le s\le 0.6\\ 2t \cdot s^2, & \text{if } 0\le t\le 0.3 \text{ and } 0.6 < s\le 1\\ (\ln t)^2 + (\cos s)^2, & \text{if } 0.3 < t\le 1 \text{ and } 0\le s\le 0.6\\ t^3 \cdot s^2, & \text{if } 0.3 < t\le 1 \text{ and } 0.6 < s\le 1 \end{cases}$$
$$K_{22}^{(0)}(t,s) = k_4^{(0)}(t,s) = \begin{cases} t^2 + s^2, & \text{if } 0\le t\le 0.5 \text{ and } 0\le s\le 0.3\\ \sin t + s^2, & \text{if } 0\le t\le 0.5 \text{ and } 0.3 < s\le 1\\ (\cos t)^2 + 3e^{s}, & \text{if } 0.5 < t\le 1 \text{ and } 0\le s\le 0.3\\ 2t^3 \cdot s^2, & \text{if } 0.5 < t\le 1 \text{ and } 0.3 < s\le 1 \end{cases}$$
and the uncertainties
$$\hat{K}_{11}(t,s) = \hat{k}_1(t,s) = \begin{cases} (0.05t)^3 + (0.02s)^2, & \text{if } 0\le t\le 0.8 \text{ and } 0\le s\le 0.5\\ (0.03t)^2 + \sin(0.02s), & \text{if } 0\le t\le 0.8 \text{ and } 0.5 < s\le 1\\ e^{0.01s}, & \text{if } 0.8 < t\le 1 \text{ and } 0\le s\le 0.5\\ e^{0.01s}, & \text{if } 0.8 < t\le 1 \text{ and } 0.5 < s\le 1 \end{cases}$$
$$\hat{K}_{12}(t,s) = \hat{k}_2(t,s) = \begin{cases} (0.02t)^3 \cdot (0.05s)^2, & \text{if } 0\le t\le 0.6 \text{ and } 0\le s\le 0.7\\ (0.03t)^2 \cdot \sin(0.05s), & \text{if } 0\le t\le 0.6 \text{ and } 0.7 < s\le 1\\ e^{0.01s}, & \text{if } 0.6 < t\le 1 \text{ and } 0\le s\le 0.7\\ (0.02t)^2 \cdot \sin(0.02s), & \text{if } 0.6 < t\le 1 \text{ and } 0.7 < s\le 1 \end{cases}$$
$$\hat{K}_{21}(t,s) = \hat{k}_3(t,s) = \begin{cases} (0.03t)^2 \cdot \sin(0.01s), & \text{if } 0\le t\le 0.3 \text{ and } 0\le s\le 0.6\\ (0.04t) \cdot (0.02s)^2, & \text{if } 0\le t\le 0.3 \text{ and } 0.6 < s\le 1\\ 0, & \text{if } 0.3 < t\le 1 \text{ and } 0\le s\le 0.6\\ (0.01t)^3 \cdot (0.05s)^2, & \text{if } 0.3 < t\le 1 \text{ and } 0.6 < s\le 1 \end{cases}$$
$$\hat{K}_{22}(t,s) = \hat{k}_4(t,s) = \begin{cases} (0.01t)^2 + (0.02s)^2, & \text{if } 0\le t\le 0.5 \text{ and } 0\le s\le 0.3\\ \sin(0.01t) + (0.02s)^2, & \text{if } 0\le t\le 0.5 \text{ and } 0.3 < s\le 1\\ e^{0.03s}, & \text{if } 0.5 < t\le 1 \text{ and } 0\le s\le 0.3\\ (0.02t)^3 \cdot (0.03s)^2, & \text{if } 0.5 < t\le 1 \text{ and } 0.3 < s\le 1. \end{cases}$$
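The piecewise data above translate directly into code. The Python sketch below encodes the nominal data $a_1^{(0)}$ and its uncertainty $\hat{a}_1$; reading a realization of the uncertain data as lying in the interval $[a_1^{(0)}(t) - \hat{a}_1(t),\ a_1^{(0)}(t) + \hat{a}_1(t)]$ is the usual interval-uncertainty assumption, not something this excerpt fixes.

```python
import math

def a1_nominal(t):
    """Nominal data a1^(0)(t) transcribed from the example."""
    if 0 <= t <= 0.2:
        return math.exp(t)
    if t <= 0.6:
        return math.sin(t)
    return t ** 2  # 0.6 < t <= 1

def a1_hat(t):
    """Uncertainty data transcribed from the example."""
    if 0 <= t <= 0.2:
        return math.exp(0.01 * t)
    if t <= 0.6:
        return math.sin(0.01 * t)
    return (0.02 * t) ** 2  # 0.6 < t <= 1

# Under the interval-uncertainty assumption, any realization a1(t)
# lies between the nominal value minus/plus the uncertainty:
t = 0.5
lo, hi = a1_nominal(t) - a1_hat(t), a1_nominal(t) + a1_hat(t)
```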
We see that B i j ( 0 ) ( t ) and B ^ i j ( t ) satisfy the conditions (6) and (7). From the discontinuities of a 1 , a 2 , c 1 , c 2 , b 1 , b 2 , k 1 , k 2 , k 3 , and k 4 , according to the setting of partition P n , we see that r = 8 and
$$D = \left\{ d_0 = 0,\ d_1 = 0.2,\ d_2 = 0.3,\ d_3 = 0.4,\ d_4 = 0.5,\ d_5 = 0.6,\ d_6 = 0.7,\ d_7 = 0.8,\ d_8 = 1 \right\}.$$
For example, when each closed interval [ d v , d v + 1 ] is equally divided into two subintervals for v = 0 , 1 , … , 7 , we obtain n = 2 · 8 = 16 subintervals in total. Therefore, we obtain a partition P 16 .
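This construction of $P_{16}$ from $D$ can be sketched as follows; `build_partition` is a hypothetical helper name, not from the paper.

```python
D = [0, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 1]

def build_partition(D, k):
    """Divide each closed interval [d_v, d_{v+1}] into k equal parts,
    giving the partition P_n with n = k * (len(D) - 1) subintervals."""
    points = [D[0]]
    for v in range(len(D) - 1):
        width = (D[v + 1] - D[v]) / k
        points.extend(D[v] + j * width for j in range(1, k + 1))
    return points

P16 = build_partition(D, 2)  # n = 2 * 8 = 16 subintervals, 17 points
```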
We denote by
$$V(\mathrm{RCLP3}_{n}) = \int_{0}^{T} \big(\mathbf{a}(t)\big)^{\top} \hat{\mathbf{z}}^{(n)}(t)\,dt$$
the approximate optimal objective value of problem (RCLP3). Theorem 1 and Proposition 5 state that
$$0 \le V(\mathrm{RCLP3}) - V(\mathrm{RCLP3}_{n}) \le \varepsilon_{n}$$
and
$$0 \le V(\mathrm{RCLP3}_{n}) - V(P_{n}) \le V(\mathrm{RCLP3}) - V(P_{n}) \le \varepsilon_{n}.$$
The numerical results are shown in Table 1.
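As a sanity check, the rows of Table 1 satisfy $n = 8\times$ (subdivisions per interval) and the bound $0 \le V(\mathrm{RCLP3}_n) - V(P_n) \le \varepsilon_n$ stated above; the snippet below transcribes the table and verifies this.

```python
# Rows transcribed from Table 1:
# (subdivisions per interval, n, eps_n, V(P_n), V(RCLP3_n)).
rows = [
    (2,   16,   0.0261958, 0.0303016, 0.0327564),
    (10,  80,   0.0053931, 0.0367996, 0.0373742),
    (50,  400,  0.0011151, 0.0382602, 0.0383788),
    (100, 800,  0.0005599, 0.0384469, 0.0385064),
    (200, 1600, 0.0002805, 0.0385406, 0.0385704),
    (300, 2400, 0.0001871, 0.0385719, 0.0385918),
    (400, 3200, 0.0001404, 0.0385875, 0.0386025),
    (500, 4000, 0.0001124, 0.0385969, 0.0386089),
]
for k, n, eps_n, v_primal, v_approx in rows:
    assert n == 8 * k                        # k subdivisions x 8 intervals
    assert 0 <= v_approx - v_primal <= eps_n  # the bound above
# the error bound decreases monotonically as the partition is refined
assert all(a[2] > b[2] for a, b in zip(rows, rows[1:]))
```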
Referring to the error bound ε n = 0.0005599 , a decision-maker tolerating the error ϵ = 0.0005 may take 100 subdivisions per interval (i.e., n = 800 ) as sufficient for this purpose. We used the active-set method in MATLAB to solve the primal and dual linear programming problems ( P n ) and ( D n ) , respectively, to obtain the numerical results. We remark that the simplex method in MATLAB runs into trouble for large n: it issues a warning message when solving the dual problem ( D n ) , although it solves the primal problem ( P n ) without difficulty.

7. Conclusions

Solving the continuous-time linear programming problem is indeed difficult. In particular, when time-dependent matrices are involved, more effort must be put into handling the time factor over the time interval [ 0 , T ] . In this paper, a more complicated problem was studied in which uncertainty enters the continuous-time linear programming problem with time-dependent matrices. In this case, a robust counterpart was established and solved.
The essence of solving the continuous-time linear programming problem is to formulate the discretization problem by considering n time points that divide the whole time interval [ 0 , T ] into n subintervals. In this case, we can formulate a large-scale conventional linear programming problem whose solution approximates the optimal solution; when the scale is large enough, the error between the actual optimal solution and the approximate optimal solution is small. The main purpose of this paper is the analytic formula for the upper bound on this error given in Theorem 1. The limitation of the proposed approach is that a large-scale linear programming problem must be solved, which consumes substantial computational resources that a personal computer may lack; a high-performance computer will increase the efficiency of the methodology. Alternatively, a new computational procedure based on, for example, parallel computing could be a direction for future research.

Funding

This research received no external funding.

Acknowledgments

The author would like to thank the reviewers for carefully reading this manuscript.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Bellman, R.E. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1957.
  2. Tyndall, W.F. A Duality Theorem for a Class of Continuous Linear Programming Problems. SIAM J. Appl. Math. 1965, 15, 644–666.
  3. Tyndall, W.F. An Extended Duality Theorem for Continuous Linear Programming Problems. SIAM J. Appl. Math. 1967, 15, 1294–1298.
  4. Levinson, N. A Class of Continuous Linear Programming Problems. J. Math. Anal. Appl. 1966, 16, 73–83.
  5. Wu, H.-C. Numerical Method for Solving the Continuous-Time Linear Programming Problems with Time-Dependent Matrices and Piecewise Continuous Functions. AIMS Math. 2020, 5, 5572–5627.
  6. Anderson, E.J.; Nash, P.; Perold, A.F. Some Properties of a Class of Continuous Linear Programs. SIAM J. Control Optim. 1983, 21, 758–765.
  7. Anderson, E.J.; Philpott, A.B. On the Solutions of a Class of Continuous Linear Programs. SIAM J. Control Optim. 1994, 32, 1289–1296.
  8. Anderson, E.J.; Pullan, M.C. Purification for Separated Continuous Linear Programs. Math. Methods Oper. Res. 1996, 43, 9–33.
  9. Fleischer, L.; Sethuraman, J. Efficient Algorithms for Separated Continuous Linear Programs: The Multicommodity Flow Problem with Holding Costs and Extensions. Math. Oper. Res. 2005, 30, 916–938.
  10. Pullan, M.C. An Algorithm for a Class of Continuous Linear Programs. SIAM J. Control Optim. 1993, 31, 1558–1577.
  11. Pullan, M.C. Forms of Optimal Solutions for Separated Continuous Linear Programs. SIAM J. Control Optim. 1995, 33, 1952–1977.
  12. Pullan, M.C. A Duality Theory for Separated Continuous Linear Programs. SIAM J. Control Optim. 1996, 34, 931–965.
  13. Pullan, M.C. Convergence of a General Class of Algorithms for Separated Continuous Linear Programs. SIAM J. Optim. 2000, 10, 722–731.
  14. Pullan, M.C. An Extended Algorithm for Separated Continuous Linear Programs. Math. Program. 2002, 93, 415–451.
  15. Meidan, R.; Perold, A.F. Optimality Conditions and Strong Duality in Abstract and Continuous-Time Linear Programming. J. Optim. Theory Appl. 1983, 40, 61–77.
  16. Papageorgiou, N.S. A Class of Infinite Dimensional Linear Programming Problems. J. Math. Anal. Appl. 1982, 87, 228–245.
  17. Schechter, M. Duality in Continuous Linear Programming. J. Math. Anal. Appl. 1972, 37, 130–141.
  18. Weiss, G. A Simplex Based Algorithm to Solve Separated Continuous Linear Programs. Math. Program. 2008, 115, 151–198.
  19. Wu, H.-C. Solving the Continuous-Time Linear Programming Problems Based on the Piecewise Continuous Functions. Numer. Funct. Anal. Optim. 2016, 37, 1168–1201.
  20. Farr, W.H.; Hanson, M.A. Continuous Time Programming with Nonlinear Time-Delayed Constraints. J. Math. Anal. Appl. 1974, 46, 41–61.
  21. Farr, W.H.; Hanson, M.A. Continuous Time Programming with Nonlinear Constraints. J. Math. Anal. Appl. 1974, 45, 96–115.
  22. Grinold, R.C. Continuous Programming Part Two: Nonlinear Objectives. J. Math. Anal. Appl. 1969, 27, 639–655.
  23. Grinold, R.C. Continuous Programming Part One: Linear Objectives. J. Math. Anal. Appl. 1969, 28, 32–51.
  24. Hanson, M.A.; Mond, B. A Class of Continuous Convex Programming Problems. J. Math. Anal. Appl. 1968, 22, 427–437.
  25. Reiland, T.W. Optimality Conditions and Duality in Continuous Programming I: Convex Programs and a Theorem of the Alternative. J. Math. Anal. Appl. 1980, 77, 297–325.
  26. Reiland, T.W. Optimality Conditions and Duality in Continuous Programming II: The Linear Problem Revisited. J. Math. Anal. Appl. 1980, 77, 329–343.
  27. Reiland, T.W.; Hanson, M.A. Generalized Kuhn–Tucker Conditions and Duality for Continuous Nonlinear Programming Problems. J. Math. Anal. Appl. 1980, 74, 578–598.
  28. Singh, C. A Sufficient Optimality Criterion in Continuous Time Programming for Generalized Convex Functions. J. Math. Anal. Appl. 1978, 62, 506–511.
  29. Rojas-Medar, M.A.; Brandao, A.J.; Silva, G.N. Nonsmooth Continuous-Time Optimization Problems: Sufficient Conditions. J. Math. Anal. Appl. 1998, 227, 305–318.
  30. Singh, C.; Farr, W.H. Saddle-Point Optimality Criteria of Continuous Time Programming without Differentiability. J. Math. Anal. Appl. 1977, 59, 442–453.
  31. Nobakhtian, S.; Pouryayevali, M.R. Optimality Criteria for Nonsmooth Continuous-Time Problems of Multiobjective Optimization. J. Optim. Theory Appl. 2008, 136, 69–76.
  32. Nobakhtian, S.; Pouryayevali, M.R. Duality for Nonsmooth Continuous-Time Problems of Vector Optimization. J. Optim. Theory Appl. 2008, 136, 77–85.
  33. Zalmai, G.J. Duality for a Class of Continuous-Time Homogeneous Fractional Programming Problems. Z. Oper. Res. Ser. A-B 1986, 30, 43–48.
  34. Zalmai, G.J. Duality for a Class of Continuous-Time Fractional Programming Problems. Util. Math. 1987, 31, 209–218.
  35. Zalmai, G.J. Optimality Conditions and Duality for a Class of Continuous-Time Generalized Fractional Programming Problems. J. Math. Anal. Appl. 1990, 153, 365–371.
  36. Zalmai, G.J. Optimality Conditions and Duality Models for a Class of Nonsmooth Constrained Fractional Optimal Control Problems. J. Math. Anal. Appl. 1997, 210, 114–149.
  37. Wen, C.-F.; Wu, H.-C. Using the Dinkelbach-Type Algorithm to Solve the Continuous-Time Linear Fractional Programming Problems. J. Glob. Optim. 2011, 49, 237–263.
  38. Wen, C.-F.; Wu, H.-C. Using the Parametric Approach to Solve the Continuous-Time Linear Fractional Max-Min Problems. J. Glob. Optim. 2012, 54, 129–153.
  39. Wen, C.-F.; Wu, H.-C. The Approximate Solutions and Duality Theorems for the Continuous-Time Linear Fractional Programming Problems. Numer. Funct. Anal. Optim. 2012, 33, 80–129.
  40. Dantzig, G.B. Linear Programming under Uncertainty. Manag. Sci. 1955, 1, 197–206.
  41. Ben-Tal, A.; Nemirovski, A. Robust Convex Optimization. Math. Oper. Res. 1998, 23, 769–805.
  42. Ben-Tal, A.; Nemirovski, A. Robust Solutions of Uncertain Linear Programs. Oper. Res. Lett. 1999, 25, 1–13.
  43. El Ghaoui, L.; Lebret, H. Robust Solutions to Least-Squares Problems with Uncertain Data. SIAM J. Matrix Anal. Appl. 1997, 18, 1035–1064.
  44. El Ghaoui, L.; Oustry, F.; Lebret, H. Robust Solutions to Uncertain Semidefinite Programs. SIAM J. Optim. 1998, 9, 33–52.
  45. Averbakh, I.; Zhao, Y.-B. Explicit Reformulations for Robust Optimization Problems with General Uncertainty Sets. SIAM J. Optim. 2008, 18, 1436–1466.
  46. Ben-Tal, A.; Boyd, S.; Nemirovski, A. Extending Scope of Robust Optimization: Comprehensive Robust Counterpart of Uncertain Problems. Math. Program. 2006, 107, 63–89.
  47. Bertsimas, D.; Natarajan, K.; Teo, C.-P. Persistence in Discrete Optimization under Data Uncertainty. Math. Program. 2006, 108, 251–274.
  48. Bertsimas, D.; Sim, M. The Price of Robustness. Oper. Res. 2004, 52, 35–53.
  49. Bertsimas, D.; Sim, M. Tractable Approximations to Robust Conic Optimization Problems. Math. Program. 2006, 107, 5–36.
  50. Chen, X.; Sim, M.; Sun, P. A Robust Optimization Perspective on Stochastic Programming. Oper. Res. 2007, 55, 1058–1071.
  51. Erdoğan, E.; Iyengar, G. Ambiguous Chance Constrained Problems and Robust Optimization. Math. Program. 2006, 107, 37–61.
  52. Zhang, Y. General Robust Optimization Formulation for Nonlinear Programming. J. Optim. Theory Appl. 2007, 132, 111–124.
  53. Wu, H.-C. Numerical Method for Solving the Robust Continuous-Time Linear Programming Problems. Mathematics 2019, 7, 435.
  54. Riesz, F.; Sz.-Nagy, B. Functional Analysis; Frederick Ungar Publishing Co.: New York, NY, USA, 1955.
Table 1. Numerical Results.
| Subdivisions per interval | $n$ | $\varepsilon_n$ | $V(P_n)$ | $V(\mathrm{RCLP3}_n)$ |
|---|---|---|---|---|
| 2 | 16 | 0.0261958 | 0.0303016 | 0.0327564 |
| 10 | 80 | 0.0053931 | 0.0367996 | 0.0373742 |
| 50 | 400 | 0.0011151 | 0.0382602 | 0.0383788 |
| 100 | 800 | 0.0005599 | 0.0384469 | 0.0385064 |
| 200 | 1600 | 0.0002805 | 0.0385406 | 0.0385704 |
| 300 | 2400 | 0.0001871 | 0.0385719 | 0.0385918 |
| 400 | 3200 | 0.0001404 | 0.0385875 | 0.0386025 |
| 500 | 4000 | 0.0001124 | 0.0385969 | 0.0386089 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Wu, H.-C. Robust Solutions for Uncertain Continuous-Time Linear Programming Problems with Time-Dependent Matrices. Mathematics 2021, 9, 885. https://doi.org/10.3390/math9080885