Abstract
A robust continuous-time linear programming problem is formulated and solved numerically in this paper. The data occurring in the problem are assumed to be uncertain, and the uncertainty is treated by following the concept of robust optimization, which has been studied extensively in recent years. We introduce the robust counterpart of the continuous-time linear programming problem. In order to solve this robust counterpart, a discretization problem is formulated and solved to obtain an ϵ-optimal solution. An important contribution of this paper is to derive the error bound between the optimal solution and the ϵ-optimal solution.
Keywords:
approximate solutions; continuous-time linear programming problems; ϵ-optimal solutions; robust optimization
MSC:
90C05; 90C46; 90C90
1. Introduction
The theory of continuous-time linear programming has received considerable attention for a long time. Tyndall [1,2] rigorously treated a continuous-time linear programming problem with constant matrices, which originated from the “bottleneck problem” proposed by Bellman [3]. Levinson [4] generalized the results of Tyndall by considering time-dependent matrices, in which the functions appearing in the objective and constraints were assumed to be continuous on the time interval. In this paper, we consider the continuous-time linear programming problem in which the data are assumed to be uncertain. The data here are real-valued functions or real numbers. Based on this assumption of uncertainty, we propose and solve the so-called robust continuous-time linear programming problem.
Meidan and Perold [5], Papageorgiou [6] and Schechter [7] obtained many interesting results on the continuous-time linear programming problem. Anderson et al. [8,9,10], Fleischer and Sethuraman [11] and Pullan [12,13,14,15,16] investigated a subclass of the continuous-time linear programming problem, called the separated continuous-time linear programming problem, which can be used to model job-shop scheduling problems. Weiss [17] proposed a Simplex-like algorithm to solve the separated continuous-time linear programming problem, and Shindin and Weiss [17,18] studied a more general separated continuous-time linear programming problem. However, the error estimate was not studied in the above articles. One of the contributions of this paper is to obtain the error bound between the optimal solution and the numerical optimal solution.
On the other hand, nonlinear continuous-time optimization problems were also studied by Farr and Hanson [19,20], Grinold [21,22], Hanson and Mond [23], Reiland [24,25], Reiland and Hanson [26] and Singh [27]. Nonsmooth continuous-time optimization problems were studied by Rojas-Medar et al. [28] and Singh and Farr [29], and nonsmooth continuous-time multiobjective programming problems by Nobakhtian and Pouryayevali [30,31]. Zalmai [32,33,34,35] investigated continuous-time fractional programming problems. However, numerical methods were not developed in the above articles.
Wen and Wu [36,37,38] developed different numerical methods to solve continuous-time linear fractional programming problems. In order to solve continuous-time problems, discretized problems should be considered by dividing the time interval into many subintervals. Since the functions considered in Wen and Wu [36,37,38] are assumed to be continuous on the whole time interval, one can take advantage of this to divide the time interval equally; in other words, each subinterval has the same length. In Wu [39], the functions are assumed to be piecewise continuous on the time interval. In this case, the time interval cannot be divided equally, because, in order to develop the numerical technique, the functions should be continuous on each subinterval. Therefore, a methodology that partitions the time interval unequally was proposed in Wu [39]. In this paper, we solve a more general model that considers uncertain data in the continuous-time linear programming problem; in other words, we solve the robust counterpart of the continuous-time linear programming problem. We still consider piecewise continuous functions on the time interval, so the time interval cannot be divided equally.
Addressing uncertain data in optimization problems has become an attractive research topic. As early as the mid-1950s, Dantzig [40] introduced stochastic optimization as an approach to model uncertain data by assuming scenarios for the data occurring with different probabilities. One of the difficulties is to obtain the exact distribution of the data. Therefore, so-called robust optimization may be another choice for modeling optimization problems with uncertain data. The basic idea of robust optimization is to assume that each uncertain datum falls into a set; for example, real-valued data can be assumed to fall into a closed interval for convenience. In order to address optimization problems whose uncertain data fall into uncertainty sets, Ben-Tal and Nemirovski [41,42] and, independently, El Ghaoui [43,44] proposed to solve the so-called robust optimization problems. This topic has since received increasing attention; for example, the research articles contributed by Averbakh and Zhao [45], Ben-Tal et al. [46], Bertsimas et al. [47,48,49], Chen et al. [50], Erdoğan and Iyengar [51], Zhang [52] and the references therein represent the mainstream of this topic. In this paper, we propose the robust counterpart of the continuous-time linear programming problem and design a practical algorithm to solve this problem.
This paper is organized as follows. In Section 2, the robust continuous-time linear programming problem is formulated; through some algebraic manipulation, it can be transformed into a traditional form of the continuous-time linear programming problem. In Section 3, we introduce the discretization problem of the transformed problem. Based on the solutions obtained from the discretization problem, we can construct feasible solutions of the transformed problem, and under these settings the error bound is derived. In order to obtain approximate solutions, we also introduce the concept of ϵ-optimal solutions. In Section 4, we study the convergence of approximate solutions. In Section 5, based on the obtained results, a computational procedure is proposed and a numerical example is provided to demonstrate the usefulness of this practical algorithm.
2. Robust Continuous-Time Linear Programming Problems
In this section, the robust continuous-time linear programming problem is formulated. We use some algebraic manipulation to transform it into a traditional form of the continuous-time linear programming problem. Now, we consider the following continuous-time linear programming problem:
where and are nonnegative real numbers for and , and and are real-valued functions. It is obvious that if the real-valued functions are assumed to be nonnegative on for , then the primal problem (CLP) is feasible with a trivial feasible solution for all .
Suppose that some of the data and are uncertain such that they fall into the uncertainty sets and , respectively. Given any fixed , we denote by and the sets of indices in which and are assumed to be uncertain. In other words, if , then is uncertain, and if , then is uncertain. Therefore, and are subsets of . It is clear that if a datum is certain, then its uncertainty set is a singleton. We also assume that the functions and are pointwise-uncertain in the sense that, for each , the uncertain data and fall into the uncertainty sets and , respectively. If the function or is assumed to be certain, then each function value or is certain for . If the function or is assumed to be uncertain, then the function value or may still be certain for some . We denote by and the sets of indices in which the functions and are assumed to be uncertain, respectively. In other words, if , then the function is uncertain, and if , then the function is uncertain.
The robust counterpart of problem (CLP) takes each datum in its corresponding uncertainty set, and it is formulated as follows:
We can see that the robust counterpart is a continuous-time programming problem with infinitely many constraints; therefore, it is difficult to solve. However, if we can determine suitable uncertainty sets , , and , then this semi-infinite problem can be transformed into a conventional continuous-time linear programming problem.
We assume that all the uncertain data fall into closed intervals, which are described below.
- For with and with , we assume that the uncertain data and fall into the closed intervals and , respectively, where and are the known nominal data of and , respectively, and and are the uncertainties such that . For , we use the notation to denote the certain data with uncertainty . Also, we use the notation to denote the certain data with uncertainty for .
- For with and with , we assume that , where and are the known nominal data of and , respectively, and and are the uncertainties of and , respectively. For , we use the notation to denote the certain function with uncertainties for all . Also, we use the notation to denote the certain function with uncertainties for and . In this case, the robust counterpart is written as follows:
Next, we are going to convert the above semi-infinite problem into a conventional continuous-time linear programming problem.
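The key reduction step can be illustrated concretely. In the sketch below (with hypothetical data, since the paper's displayed formulas are not shown here), a constraint of the form aᵀx ≥ b with x ≥ 0 must hold for every coefficient vector a in a box of closed intervals; because x is nonnegative, the worst case is attained at the lower interval endpoints, so the semi-infinite family of constraints collapses to a single certain constraint with shifted nominal data.

```python
import random

# Hypothetical nominal data and interval uncertainties (illustration only).
a0 = [2.0, 1.0, 3.0]   # nominal constraint coefficients
d = [0.2, 0.1, 0.3]    # interval half-widths (the uncertainties), d_j >= 0
x = [1.5, 0.0, 2.0]    # an arbitrary nonnegative decision vector

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Since x >= 0, the minimum of a^T x over the box a_j in [a0_j - d_j, a0_j + d_j]
# is attained at the lower endpoints a0 - d.
worst = dot([a0j - dj for a0j, dj in zip(a0, d)], x)

# No sampled realization from the box can fall below this worst case, so
# "a^T x >= b for every a in the box" is equivalent to the single certain
# constraint (a0 - d)^T x >= b.
random.seed(0)
samples = [dot([a0j + random.uniform(-dj, dj) for a0j, dj in zip(a0, d)], x)
           for _ in range(1000)]
assert all(s >= worst for s in samples)
print(worst)
```

The same endpoint argument, applied coefficient by coefficient, is what turns the robust counterpart into a conventional continuous-time linear programming problem.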
First of all, the problem can be rewritten in the following equivalent form:
Given any fixed , for , since and , we have
Similarly, for , since and , we also have
- For , since for all , we obtain for all , and , which implies , where the equality can be attained.
- For , we obtain , which implies , where the equality can be attained. On the other hand, since for and , we have , which implies , where the equality can also be attained. Therefore, from (3), (4) and (5), we conclude that is a feasible solution of problem if and only if it satisfies the following inequalities: and and
This shows that problem is equivalent to the following problem
which can also be rewritten as the following continuous-time linear programming problem:
According to the duality theory in continuous-time linear programming problem, the dual problem of (RCLP3) can be formulated as follows
In the sequel, we are going to design a computational procedure to numerically solve the robust counterpart (RCLP3).
3. Discretization
In this section, we shall introduce the discretization problem. Based on the solutions obtained from the discretization problem, we can construct feasible solutions of the transformed problem. Under these settings, the error bound can be derived. On the other hand, in order to obtain approximate solutions, we also introduce the concept of ϵ-optimal solutions.
In order to develop the efficient numerical method, we assume that the following conditions are satisfied.
- For and , the real numbers and . The real numbers for and for .
- For and , the functions and are piecewise continuous on . For and , the functions and are also piecewise continuous on , which says that the functions and are piecewise continuous on for and .
- For each , the following inequality is satisfied:
- For each and for , the following inequality is satisfied: . In other words,
Let denote the set of discontinuities of functions for and for . Let denote the set of discontinuities of functions for and for . Then, we see that and are finite subsets of . In order to determine the partition of the time interval , we consider the following set
Then, is a finite subset of written by
where, for convenience, we set and . It means that and may be the endpoint-continuities of functions and . Let be a partition of such that , which means that each closed interval is also divided into many closed subintervals. In this case, the time interval is not necessarily divided equally into n closed subintervals. Let
where and . Then, the n closed subintervals are denoted by
We also write
Let denote the length of closed interval for , and let
In the limiting case, we shall assume that
In this paper, we assume that there exists such that
Therefore, in the limiting case, we assume that , which implies . In the sequel, when we say that , it implicitly means that .
For example, suppose that the length of the closed interval is for . Each closed interval is equally divided into subintervals for . In this case, the total number of subintervals is . We also see that and
Under the above construction for the partition , we see that the functions for , for , for and for are continuous on the open interval for . Now, we define
and
and the vectors
Then, we see that
for all and .
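The construction of the partition described above can be sketched as follows. In this illustrative code (with hypothetical breakpoints), the discontinuity points of the piecewise continuous data serve as breakpoints, each closed interval between consecutive breakpoints is divided equally into r subintervals, and the mesh size tends to zero as r grows.

```python
def refine(breakpoints, r):
    """Split each closed interval [d_{k-1}, d_k] into r equal subintervals."""
    grid = [breakpoints[0]]
    for left, right in zip(breakpoints, breakpoints[1:]):
        step = (right - left) / r
        grid.extend(left + i * step for i in range(1, r + 1))
    return grid

# Hypothetical discontinuity set on [0, T] with the endpoints 0 and T included.
T = 1.0
D = [0.0, 0.3, 0.7, T]
P = refine(D, 4)                     # n = 3 * 4 = 12 subintervals

lengths = [b - a for a, b in zip(P, P[1:])]
mesh = max(lengths)                  # the norm of the partition; -> 0 as r grows
print(len(lengths), mesh)
```

Note that the subintervals need not all have the same length (here 0.075 versus 0.1), which is exactly why the time interval cannot be divided equally: every discontinuity must be a grid point, so the data are continuous on the interior of each subinterval.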
For each and , we define the following linear programming problem:
According to the duality theory of linear programming, the dual problem of is given by
Now, let
Then, by dividing both sides of the constraints by , the dual problem can be equivalently written as
Recall that denotes the length of closed interval . We also define
Then, we have
which say that
for . For further discussion, we adopt the following notations:
Now, we also have
which say that
It is obvious that
for any and .
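Since the displayed problems are not shown here, it may help to recall the generic pattern that the discretized primal and dual linear programs follow (standard linear programming duality; the symbols A, b, c below are placeholders, not the paper's notation):

```latex
\begin{align*}
(\mathrm{P}_n)\quad \min\ & c^{\top} x  &\qquad (\mathrm{D}_n)\quad \max\ & b^{\top} y \\
\text{s.t.}\ & A x \ge b,\ x \ge 0, &\qquad \text{s.t.}\ & A^{\top} y \le c,\ y \ge 0,
\end{align*}
```

with weak duality $b^{\top} y \le c^{\top} x$ for any pair of feasible solutions, and equal optimal values whenever either problem is solvable.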
Proposition 1.
The following statements hold true.
- (i)
- Let , where σ is given in (7). We define for and , and consider the following vector . Then, is a feasible solution of problem . Moreover, we have for all , and .
- (ii)
- Given a feasible solution of problem , we define for and , and consider the following vector . Then, is a feasible solution of problem satisfying the following inequalities for all , and . Moreover, if each is nonnegative and is an optimal solution of problem , then is also an optimal solution of problem .
Proof.
To prove part (i), by (6), for each j, there exists such that for or for . Since for or for by (7), we have
Since
it follows that, for ,
Therefore, from (20), we obtain
for and
which show that is indeed a feasible solution of problem . On the other hand, for , from (9), (13) and (17), we have
Since
i.e., , this proves (19).
To prove part (ii), for each and , we define the index set
Then, by (6). For each , we see that
For each fixed , we also define the index set
For each fixed and , we consider the following two cases.
Therefore, from the above two cases, we conclude that is indeed a feasible solution of problem . Since problem is a minimization problem and
it says that if is an optimal solution of problem , then is an optimal solution of problem . This completes the proof. □
Proposition 2.
Suppose that the primal problem is feasible with a feasible solution , where for . Then
and
for all , and .
Proof.
By (6), for each j, there exists such that for or for ; that is, for or for . If , then K is a zero matrix. In this case, using the feasibility of , we have
which implies
For the case of , we want to show that
for all and . We shall prove it by induction on l. Since is a feasible solution of problem , for , we have
Therefore, for each j, we obtain
Suppose that
for . Then, for each j, we have
By the feasibility of , we have
which implies
Let with be a feasible solution of problem . For each , we construct the step function as follows:
Then we have the following feasibility.
Proposition 3.
Let be a feasible solution of the primal problem , where for . Then the vector-valued step function defined in is a feasible solution of problem .
Proof.
Since is a feasible solution of problem , it follows that
and
Now, we consider the following two cases.
- Suppose that for . Recall that is the left endpoint of the closed interval . Then, we have . For , the desired inequality can be obtained similarly from (30).
- Suppose that . Then, we can similarly show that
Therefore, we conclude that is indeed a feasible solution of problem (RCLP3). This completes the proof. □
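The construction of a step function from a discrete solution can be sketched as follows (with a hypothetical grid and hypothetical values; the construction in (29) assigns one component of the discrete solution to each subinterval):

```python
import bisect

# Hypothetical partition 0 = t_0 < t_1 < ... < t_n = T and one value per
# subinterval, e.g. taken from an optimal solution of the discretized problem.
grid = [0.0, 0.25, 0.5, 1.0]
values = [2.0, 1.5, 0.5]

def step(t):
    """Evaluate the piecewise-constant function on [0, T]."""
    if not grid[0] <= t <= grid[-1]:
        raise ValueError("t outside [0, T]")
    k = min(bisect.bisect_right(grid, t) - 1, len(values) - 1)
    return values[k]

print(step(0.0), step(0.3), step(1.0))
```

The convention chosen here makes the function right-continuous at the interior grid points and assigns the last value at t = T; since the grid points form a set of measure zero, any fixed convention there is compatible with feasibility arguments that hold almost everywhere.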
Given any optimization problem (P), we denote by the optimal objective value of problem (P); for example, the optimal objective value of problem (RCLP3) is denoted by . Now, we assume that is an optimal solution of problem . We can then construct a feasible solution of problem according to (29), and, using (11), we have
Therefore, we have
Using the weak duality theorem for the primal and dual pair of problems (DRCLP3) and (RCLP3), we see that
Next, we want to show that
Let with be an optimal solution of problem . We define
for and , where is defined in (18), and consider the following vector
Then, according to part (ii) of Proposition 1, we see that is an optimal solution of problem satisfying the following inequalities
for all , and .
For each and , we define a real-valued function on the half-open interval by
where is defined in (8). For , let
and
Then, we have
which says that
and
for . We want to prove
We first provide some useful lemmas.
Lemma 1.
For , and , we have
and
Proof.
According to the construction of partition , we see that for , for , for and for are continuous on the open interval . It is not hard to obtain the desired results. □
Lemma 2.
For each , we have
Proof.
It suffices to prove
Now, for , we have
and, for , we have
Using Lemma 1, we complete the proof. □
We define
From (7), we see that . Also, from (35), we see that the sequence of functions is uniformly bounded, which also says that is uniformly bounded. Therefore there exists a constant such that for all and . Now, we define a real-valued function on by
and a real-valued function by
The following lemma will be used for further discussion.
Lemma 3.
We have
and, for and ,
Moreover, the sequence of real-valued functions is uniformly bounded.
Proof.
For , from (41), we have
For , we also have,
For each and , we consider the following cases.
Finally, it is obvious that the sequence of real-valued functions is uniformly bounded. This completes the proof. □
For each , we define the step function by
Remark 1.
According to (35) and Lemma 3, we see that the family of vector-valued functions is uniformly bounded.
Proposition 4.
For any , the vector-valued step function is a feasible solution of problem .
Proof.
For and , we have
Suppose that . Then we have
Therefore, we conclude that is indeed a feasible solution of problem (DRCLP3), and the proof is complete. □
For each and , we define the step functions and as follows:
and
respectively. For each , we also define the step function by
Lemma 4.
For and , we have
and
Proof.
It is obvious that the following functions
and
are continuous a.e. on , i.e., they are Riemann-integrable on . In other words, their Riemann integral and Lebesgue integral are identical. From Lemma 1, we see that
and
Since the set of functions for is uniformly bounded according to Proposition 2, using the Lebesgue bounded convergence theorem, we obtain (46). On the other hand, since the set of functions for is uniformly bounded according to Proposition 1, using the Lebesgue bounded convergence theorem, we obtain (47). This completes the proof. □
Theorem 1.
The following statements hold true.
- (i)
- We have , where satisfies as .
- (ii)
- (No Duality Gap). Suppose that the primal problem is feasible. We have and
Proof.
To prove part (i), we have
Since is continuous a.e. on , it follows that is also continuous a.e. on , which says that is Riemann-integrable on . In other words, the Riemann integral and Lebesgue integral of on are identical. Since as by Lemma 2, it follows that as a.e. on , which implies that as a.e. on . Applying the Lebesgue bounded convergence theorem for integrals, we obtain
It is easy to see that can be written as (48), which proves part (i).
To prove part (ii), by part (i) and inequality (33), we obtain
Since for each , we also have
and
This completes the proof. □
Proposition 5.
The following statements hold true.
- (i)
- Suppose that the primal problem is feasible. Let be defined in for . Then, the error between and the objective value of is less than or equal to defined in , i.e.,
- (ii)
- Let be defined in for . Then, the error between and the objective value of is less than or equal to , i.e.,
Proof.
To prove part (i), Proposition 3 says that is a feasible solution of problem (RCLP3). Since
and
for all and , it follows that
To prove part (ii), we have
This completes the proof. □
Definition 1.
Given any , we say that the feasible solution of problem is an ϵ-optimal solution if and only if
We say that the feasible solution of problem is an ϵ-optimal solution if and only if
Theorem 2.
Given any , the following statements hold true.
- (i)
- The ϵ-optimal solution of problem exists in the following sense: there exists such that , where is obtained from Proposition 5 and satisfies .
- (ii)
- The ϵ-optimal solution of problem exists in the following sense: there exists such that , where is obtained from Proposition 5 and satisfies .
Proof.
Given any , from Proposition 5, since as , there exists such that . Then, the result follows immediately. □
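In practice, Theorem 2 is applied by refining the partition until the computable error bound falls below the prescribed tolerance. A minimal sketch, using a hypothetical bound decaying like C/n as a stand-in for the bound derived in (48):

```python
def smallest_n(error_bound, eps, n0=2, n_max=10**6):
    """Double the number of subintervals until the error bound is within eps."""
    n = n0
    while error_bound(n) > eps:
        n *= 2                      # one more subdivision of every subinterval
        if n > n_max:
            raise RuntimeError("tolerance not reached within n_max subintervals")
    return n

C = 5.0                             # hypothetical constant in the bound C / n
n = smallest_n(lambda n: C / n, eps=0.01)
print(n)
```

Since the bound tends to zero as n grows, the loop terminates, and the resulting approximate solution is ϵ-optimal with ϵ = 0.01.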
4. Convergence of Approximate Solutions
In this section, we shall study the convergence of approximate solutions. We first provide some useful lemmas that can guarantee the feasibility of solutions. On the other hand, the strong duality theorem can also be established using the limits of approximate solutions.
In the sequel, by referring to (29) and (45), we shall present the convergent properties of the sequences and that are constructed from the optimal solutions of problem and the optimal solution of problem , respectively. We first provide a useful lemma.
Lemma 5.
Let the real-valued function η be defined by
on , and let be a feasible solution of dual problem (DRCLP3). We define
for all and . Then, is a feasible solution of the dual problem (DRCLP3) satisfying and for all and .
Proof.
By the feasibility of for problem (DRCLP3), we have
Since for , for and , from (54), we obtain
For any fixed , we define the index sets
and consider
Then, for each fixed i, we have the following three cases:
Therefore, we conclude that is a feasible solution of (DRCLP3), and the proof is complete. □
For further discussion, we need the following useful lemmas.
Lemma 6.
(Riesz and Sz.-Nagy [53] (p. 64)) Let be a sequence in . If the sequence is uniformly bounded with respect to , then there exists a subsequence that weakly converges to some . In other words, for any , we have
Lemma 7.
(Levinson [4]) If the sequence is uniformly bounded on with respect to and weakly converges to , then
Theorem 3. (Strong Duality Theorem)
Suppose that the primal problem is feasible. Let and be the sequences that are constructed from the optimal solutions of problem and the optimal solution of problem according to (29) and (45), respectively. Then, the following statements hold true.
- (i)
- For each , there is a subsequence of such that weakly converges to some , and forms an optimal solution of primal problem .
- (ii)
- For each n, we define , where η is defined in . Then, for each , there is a subsequence of such that weakly converges to some , and forms an optimal solution of the dual problem . Moreover, we have .
Proof.
From Proposition 2, it follows that the sequence of functions is uniformly bounded with respect to . Using Lemma 6, there exists a subsequence of that weakly converges to some . Using Lemma 6 again, there exists a subsequence of that weakly converges to some . By induction, there exists a subsequence of that weakly converges to some for each j. Therefore we can construct the subsequences that weakly converge to for all . Since is a feasible solution of problem (RCLP3) for each , for all , we have for all and
From Lemma 7, we have
which says that a.e. in for all . It is clear that the sequence
weakly converges to
For , we obtain
For , we can similarly obtain
Let be the subset of on which the inequalities (60) and (61) are violated, let be the subset of on which , let , and define
where the set has measure zero. Then for all and a.e. on , i.e., a.e. on for each j. We also see that for each j.
This shows that is a feasible solution of problem (RCLP3). Since a.e. on , it follows that the subsequence is also weakly convergent to .
On the other hand, since is a feasible solution of problem (DRCLP3) for each n, Lemma 5 says that is also a feasible solution of problem (DRCLP3) for each n satisfying for each and . From Remark 1, it follows that the sequence is uniformly bounded. Since
we see that the sequence is also uniformly bounded, which implies that the sequence is uniformly bounded with respect to . Using Lemma 6, we can similarly show that there is a subsequence of that weakly converges to some . Since is a feasible solution of problem (DRCLP3) for each , for each , we have for all and
From Lemma 7, for each i, we have
which says that a.e. in for all . For , by taking the limit inferior on both sides of (62), we obtain
For , we can similarly obtain
For each , we see that for each k and for all . Let be the subset of on which the inequalities (63) and (64) are violated, let be the subset of on which , let , and define
where the set has measure zero. Then, for each , for all and a.e. on . We also see that . Now we are going to claim that is a feasible solution of .
Therefore, we conclude that is a feasible solution of (DRCLP3). Since a.e. on , it follows that the subsequence also weakly converges to .
Finally, we want to prove the optimality. Now, we have
and
Using Lemma 4, for , we have
and, for , we have
Also, for , we have
and, for , we have
Using the weak convergence, we also obtain
According to the weak duality theorem between problems (RCLP3) and (DRCLP3), we have that
and conclude that and are the optimal solutions of problems (RCLP3) and (DRCLP3), respectively. Theorem 1 also says that . This completes the proof. □
5. Computational Procedure and Numerical Example
In this section, based on the above results, we design a computational procedure, and a numerical example is provided to demonstrate the usefulness of this practical algorithm. The purpose is to provide a computational procedure to obtain approximate solutions of the continuous-time linear programming problem (RCLP3). Of course, the approximate solutions will be step functions. According to Proposition 5, it is possible to obtain appropriate step functions so that the corresponding objective value is close enough to the optimal objective value when n is taken sufficiently large.
Recall that, from Theorem 1 and Proposition 5, the error upper bound between the approximate objective value and the optimal objective value is given by
In order to obtain , by referring to (37), we need to solve
where the real-valued function is given by
Now, we define the real-valued function on by
Since is continuous on the open interval , it follows that is continuous on . This also says that is continuous on the compact interval . In other words, we have
Moreover, if the function is well-defined at the end-points and , then we see that
Then, the supremum in (74) can be obtained by the following equality
In order to further design the computational procedure, we need to assume that each is twice-differentiable on for the purpose of applying Newton's method, which also says that is twice-differentiable on the open interval for all . From (75), we need to solve the following simple type of optimization problem:
The KKT condition says that if is an optimal solution of problem (76), then the following conditions should be satisfied:
where are the Lagrange multipliers. Then, we can see that the optimal solution is
Let denote the set of all zeros of the real-valued function on . Then, we have
For , we have
and
We consider the following cases.
- Suppose that is a linear function of t given by . Then, for , we have . Using (77), we obtain the following maximum:
- (1)
- If , then
- (2)
- If , then
- Suppose that is not a linear function of t. In order to obtain the zeros of , we can apply Newton's method to generate a sequence such that as . The iteration is given by for . The initial guess is . Since the real-valued function may have more than one zero, we need to apply Newton's method with as many initial guesses as possible. Now, the computational procedure is given below.
- Step 1. Set the error tolerance and the initial value of natural number .
- Step 2. Find the optimal objective value and optimal solution of dual problem .
- Step 3. Find the set of all zeros of the real-valued function by applying the Newton method given in (80).
- Step 6. Evaluate the error upper bound according to (48). If , then go to Step 7; otherwise, consider one more subdivision of each closed subinterval, set for some integer , and go to Step 2, where is the number of new subdivision points such that n satisfies (9). For example, inequality (10) can be used.
- Step 7. Find the optimal solution of primal problem .
- Step 8. Set the step functions defined in (29), which will be the approximate solution of problem (RCLP3). The actual error between and the objective value of is less than or equal to by Proposition 5, where the error tolerance is reached for this partition .
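Step 3 requires all zeros of a real-valued function on each subinterval, obtained by running Newton's method from many initial guesses. A sketch of this multi-start search is given below; the quadratic used here is a hypothetical stand-in for the twice-differentiable piece on one subinterval.

```python
def newton(f, fprime, t0, tol=1e-12, max_iter=100):
    """One Newton run; returns an approximate zero, or None on failure."""
    t = t0
    for _ in range(max_iter):
        d = fprime(t)
        if d == 0.0:
            return None                  # flat spot: this start fails
        t_next = t - f(t) / d
        if abs(t_next - t) < tol:
            return t_next
        t = t_next
    return None

def all_zeros(f, fprime, a, b, starts=50, tol=1e-8):
    """Collect distinct zeros in [a, b] from equally spaced initial guesses."""
    zeros = []
    for i in range(starts + 1):
        z = newton(f, fprime, a + (b - a) * i / starts)
        if z is not None and a - tol <= z <= b + tol:
            if all(abs(z - w) > 1e-6 for w in zeros):
                zeros.append(z)
    return sorted(zeros)

# Hypothetical example: f(t) = t^2 - t + 0.21 has zeros 0.3 and 0.7 on [0, 1].
zs = all_zeros(lambda t: t * t - t + 0.21, lambda t: 2 * t - 1, 0.0, 1.0)
print(zs)
```

Using many equally spaced starts, as the procedure suggests, makes it likely that every zero is found, although for a general nonlinear function no finite set of starts can guarantee this.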
In the sequel, we present a numerical example that considers piecewise continuous functions on the time interval . We consider and the following problem:
The uncertainties are assumed below.
- The data are assumed to be certain.
- The data and are assumed to be uncertain with the nominal data and and the uncertainties and , respectively.
- The data , , and are assumed to be uncertain with the nominal data , , and and the uncertainties for , respectively.
- The data and are assumed to be uncertain with the nominal data and the uncertainties , respectively.
- The data and are assumed to be uncertain with the nominal data and the uncertainties .
From the discontinuities of the above functions and according to the setting of partition, we see that and
For , each closed interval is equally divided into two subintervals for . In this case, we have . Therefore, we obtain a partition . By the definitions of the desired quantities, we have and .
Now, in the following Table 1, for different values of , we present the error bound . We denote by
the approximate optimal objective value of problem (RCLP3). Theorem 1 and Proposition 5 say that
and
Table 1.
Numerical Results.
Suppose that the decision-maker can tolerate the error . Then, we see that is sufficient to achieve this error by referring to the error bound . The numerical results are obtained using MATLAB, in which the active-set method is used to solve the primal and dual linear programming problems and , respectively. We need to mention that MATLAB issues a warning message when the Simplex method is used to solve the dual problem for large n. However, MATLAB has no problem solving the primal problem using the Simplex method.
6. Conclusions
The continuous-time linear programming problem with uncertain data has been studied in this paper. The data mean the real-valued functions or real numbers. Based on the assumption of uncertainty, we have numerically solved the so-called robust continuous-time linear programming problem.
The robust continuous-time linear programming problem has been formulated as problem (RCLP), which has also been transformed into the standard continuous-time linear programming problem (RCLP3). In this paper, we have presented a computational procedure to obtain the error bound between the approximate objective value and the optimal objective value of problem (RCLP3). In order to design the computational procedure, we introduced the discretization problem; based on the solutions obtained from the discretization problem, the error bound has been derived. We introduced the concept of ϵ-optimal solutions for the purpose of obtaining approximate solutions. On the other hand, we have also studied the convergence of approximate solutions and have established the strong duality theorem.
In future work, we plan to extend the computational procedure proposed in this paper to nonlinear robust continuous-time optimization problems, which will be a challenging topic.
Funding
This research received no external funding.
Conflicts of Interest
The author declares no conflict of interest.
References
- Tyndall, W.F. A Duality Theorem for a Class of Continuous Linear Programming Problems. SIAM J. Appl. Math. 1965, 15, 644–666.
- Tyndall, W.F. An Extended Duality Theorem for Continuous Linear Programming Problems. SIAM J. Appl. Math. 1967, 15, 1294–1298.
- Bellman, R.E. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1957.
- Levinson, N. A Class of Continuous Linear Programming Problems. J. Math. Anal. Appl. 1966, 16, 73–83.
- Meidan, R.; Perold, A.F. Optimality Conditions and Strong Duality in Abstract and Continuous-Time Linear Programming. J. Optim. Theory Appl. 1983, 40, 61–77.
- Papageorgiou, N.S. A Class of Infinite Dimensional Linear Programming Problems. J. Math. Anal. Appl. 1982, 87, 228–245.
- Schechter, M. Duality in Continuous Linear Programming. J. Math. Anal. Appl. 1972, 37, 130–141.
- Anderson, E.J.; Nash, P.; Perold, A.F. Some Properties of a Class of Continuous Linear Programs. SIAM J. Control Optim. 1983, 21, 758–765.
- Anderson, E.J.; Philpott, A.B. On the Solutions of a Class of Continuous Linear Programs. SIAM J. Control Optim. 1994, 32, 1289–1296.
- Anderson, E.J.; Pullan, M.C. Purification for Separated Continuous Linear Programs. Math. Methods Oper. Res. 1996, 43, 9–33.
- Fleischer, L.; Sethuraman, J. Efficient Algorithms for Separated Continuous Linear Programs: The Multicommodity Flow Problem with Holding Costs and Extensions. Math. Oper. Res. 2005, 30, 916–938.
- Pullan, M.C. An Algorithm for a Class of Continuous Linear Programs. SIAM J. Control Optim. 1993, 31, 1558–1577.
- Pullan, M.C. Forms of Optimal Solutions for Separated Continuous Linear Programs. SIAM J. Control Optim. 1995, 33, 1952–1977.
- Pullan, M.C. A Duality Theory for Separated Continuous Linear Programs. SIAM J. Control Optim. 1996, 34, 931–965.
- Pullan, M.C. Convergence of a General Class of Algorithms for Separated Continuous Linear Programs. SIAM J. Optim. 2000, 10, 722–731.
- Pullan, M.C. An Extended Algorithm for Separated Continuous Linear Programs. Math. Program. Ser. A 2002, 93, 415–451.
- Weiss, G. A Simplex Based Algorithm to Solve Separated Continuous Linear Programs. Math. Program. Ser. A 2008, 115, 151–198.
- Shindin, E.; Weiss, G. Structure of Solutions for Continuous Linear Programs with Constant Coefficients. SIAM J. Optim. 2015, 25, 1276–1297.
- Farr, W.H.; Hanson, M.A. Continuous Time Programming with Nonlinear Constraints. J. Math. Anal. Appl. 1974, 45, 96–115.
- Farr, W.H.; Hanson, M.A. Continuous Time Programming with Nonlinear Time-Delayed Constraints. J. Math. Anal. Appl. 1974, 46, 41–61.
- Grinold, R.C. Continuous Programming Part One: Linear Objectives. J. Math. Anal. Appl. 1969, 28, 32–51.
- Grinold, R.C. Continuous Programming Part Two: Nonlinear Objectives. J. Math. Anal. Appl. 1969, 27, 639–655.
- Hanson, M.A.; Mond, B. A Class of Continuous Convex Programming Problems. J. Math. Anal. Appl. 1968, 22, 427–437.
- Reiland, T.W. Optimality Conditions and Duality in Continuous Programming I: Convex Programs and a Theorem of the Alternative. J. Math. Anal. Appl. 1980, 77, 297–325.
- Reiland, T.W. Optimality Conditions and Duality in Continuous Programming II: The Linear Problem Revisited. J. Math. Anal. Appl. 1980, 77, 329–343.
- Reiland, T.W.; Hanson, M.A. Generalized Kuhn-Tucker Conditions and Duality for Continuous Nonlinear Programming Problems. J. Math. Anal. Appl. 1980, 74, 578–598.
- Singh, C. A Sufficient Optimality Criterion in Continuous Time Programming for Generalized Convex Functions. J. Math. Anal. Appl. 1978, 62, 506–511.
- Rojas-Medar, M.A.; Brandao, J.V.; Silva, G.N. Nonsmooth Continuous-Time Optimization Problems: Sufficient Conditions. J. Math. Anal. Appl. 1998, 227, 305–318.
- Singh, C.; Farr, W.H. Saddle-Point Optimality Criteria of Continuous Time Programming without Differentiability. J. Math. Anal. Appl. 1977, 59, 442–453.
- Nobakhtian, S.; Pouryayevali, M.R. Optimality Criteria for Nonsmooth Continuous-Time Problems of Multiobjective Optimization. J. Optim. Theory Appl. 2008, 136, 69–76.
- Nobakhtian, S.; Pouryayevali, M.R. Duality for Nonsmooth Continuous-Time Problems of Vector Optimization. J. Optim. Theory Appl. 2008, 136, 77–85.
- Zalmai, G.J. Duality for a Class of Continuous-Time Homogeneous Fractional Programming Problems. Z. Oper. Res. Ser. A-B 1986, 30, 43–48.
- Zalmai, G.J. Duality for a Class of Continuous-Time Fractional Programming Problems. Utilitas Math. 1987, 31, 209–218.
- Zalmai, G.J. Optimality Conditions and Duality for a Class of Continuous-Time Generalized Fractional Programming Problems. J. Math. Anal. Appl. 1990, 153, 365–371.
- Zalmai, G.J. Optimality Conditions and Duality Models for a Class of Nonsmooth Constrained Fractional Optimal Control Problems. J. Math. Anal. Appl. 1997, 210, 114–149.
- Wen, C.-F.; Wu, H.-C. Using the Dinkelbach-Type Algorithm to Solve the Continuous-Time Linear Fractional Programming Problems. J. Glob. Optim. 2011, 49, 237–263.
- Wen, C.-F.; Wu, H.-C. Using the Parametric Approach to Solve the Continuous-Time Linear Fractional Max-Min Problems. J. Glob. Optim. 2012, 54, 129–153.
- Wen, C.-F.; Wu, H.-C. The Approximate Solutions and Duality Theorems for the Continuous-Time Linear Fractional Programming Problems. Numer. Funct. Anal. Optim. 2012, 33, 80–129.
- Wu, H.-C. Solving the Continuous-Time Linear Programming Problems Based on the Piecewise Continuous Functions. Numer. Funct. Anal. Optim. 2016, 37, 1168–1201.
- Dantzig, G.B. Linear Programming under Uncertainty. Manag. Sci. 1955, 1, 197–206.
- Ben-Tal, A.; Nemirovski, A. Robust Convex Optimization. Math. Oper. Res. 1998, 23, 769–805.
- Ben-Tal, A.; Nemirovski, A. Robust Solutions of Uncertain Linear Programs. Oper. Res. Lett. 1999, 25, 1–13.
- El Ghaoui, L.; Lebret, H. Robust Solutions to Least-Squares Problems with Uncertain Data. SIAM J. Matrix Anal. Appl. 1997, 18, 1035–1064.
- El Ghaoui, L.; Oustry, F.; Lebret, H. Robust Solutions to Uncertain Semidefinite Programs. SIAM J. Optim. 1998, 9, 33–52.
- Averbakh, I.; Zhao, Y.-B. Explicit Reformulations for Robust Optimization Problems with General Uncertainty Sets. SIAM J. Optim. 2008, 18, 1436–1466.
- Ben-Tal, A.; Boyd, S.; Nemirovski, A. Extending Scope of Robust Optimization: Comprehensive Robust Counterpart of Uncertain Problems. Math. Program. Ser. B 2006, 107, 63–89.
- Bertsimas, D.; Natarajan, K.; Teo, C.-P. Persistence in Discrete Optimization under Data Uncertainty. Math. Program. Ser. B 2006, 108, 251–274.
- Bertsimas, D.; Sim, M. The Price of Robustness. Oper. Res. 2004, 52, 35–53.
- Bertsimas, D.; Sim, M. Tractable Approximations to Robust Conic Optimization Problems. Math. Program. Ser. B 2006, 107, 5–36.
- Chen, X.; Sim, M.; Sun, P. A Robust Optimization Perspective on Stochastic Programming. Oper. Res. 2007, 55, 1058–1071.
- Erdoǧan, E.; Iyengar, G. Ambiguous Chance Constrained Problems and Robust Optimization. Math. Program. Ser. B 2006, 107, 37–61.
- Zhang, Y. General Robust Optimization Formulation for Nonlinear Programming. J. Optim. Theory Appl. 2007, 132, 111–124.
- Riesz, F.; Sz.-Nagy, B. Functional Analysis; Frederick Ungar Publishing Co.: New York, NY, USA, 1955.
© 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).