Efficient Solutions of Interval Programming Problems with Inexact Parameters and Second Order Cone Constraints

In this article, a methodology is developed to solve interval and fractional interval programming problems with second order cone constraints by converting them into a non-interval form, where both the objective function and the constraints are interval valued functions. We investigate the parametric and non-parametric forms of interval valued functions along with their convexity properties. Two approaches are developed to obtain efficient and properly efficient solutions. Furthermore, the efficient (Pareto optimal) solutions of fractional and non-fractional programming problems over R₊ ∪ {0} are also discussed. The main contribution of the present article is a new concept of efficiency, called the efficient space, induced by the lower and upper bounds of the respective intervals of the objective function, which is illustrated in several figures. Finally, some numerical examples are worked through to illustrate the methodology and affirm the validity of the obtained results.


Introduction
We consider solving fractional interval programming problems with second order cone constraints, where both the objective and the constraints are interval valued functions. There are several approaches in the literature to solve such problems. Nonlinear interval optimization problems have been studied in several directions by many researchers during the past few decades [1][2][3][4]. Most of the considered models were quadratic programming problems with interval parameters. Bhurjee and Panda [1] applied a methodology to interval valued convex quadratic programming problems that characterizes when a solution of a general interval optimization problem exists.
In the past few decades, fractional programming problems have also attracted the interest of many researchers. These problems have applications in the real world, for example in finance, production planning and electronics. Fractional programming is used to model real life problems involving one or more objectives such as actual cost/standard cost, inventory/sales and profit/cost. There are different algorithms for solving particular classes of fractional programming problems. For example, Charnes and Cooper [5] converted a linear fractional program (LFP) to a linear program (LP) by a variable transformation technique. Tantawy [6] proposed an iterative method based on a conjugate gradient projection method. Dinkelbach [7] considered the same objective over a convex feasible set and solved the problem using a sequence of nonlinear convex programming problems.
On the other hand, the convexity of SOCP (second order cone programming) problems is guaranteed. Problems such as linear programs, convex quadratic programs and quadratically constrained convex quadratic programs can easily be converted to SOCP problems; for several other types of problems not falling into these three categories, see [8,9].
Lobo et al. [9] discussed several applications of SOCP in engineering. Nesterov and Nemirovski [10] and Lobo et al. [9,11] showed that several kinds of problems can be formulated as SOCP problems, such as filter design, truss design, grasping force optimization in robotics, etc. In a pioneering paper, Nesterov and Nemirovski [10] applied the concept of a self-concordant barrier to SOCP problems and obtained an iteration complexity of √m for problems with m second order cone inequalities. Nesterov and Todd [12,13] were the first to investigate primal-dual interior point methods for SOCP problems, framing their results as optimization over self-scaled cones, which include the class of second order cones as a special case. Alizadeh and Goldfarb [8] considered and surveyed a large class of SOCP problems. They showed that many optimization problems, such as linear programming (LP), quadratic programming (QP), quadratically constrained quadratic programming (QCQP) and others, can be rewritten as SOCP problems, and they demonstrated how to convert different types of constraints into SOC inequalities. Furthermore, they described an algebraic foundation of SOCs and showed how robust least squares and robust linear programming problems can be converted to SOCPs. The authors of [8] also discussed duality and complementary slackness for SOCP, with notions of primal and dual non-degeneracy and strict complementarity, along with the logarithmic barrier function and primal-dual path following interior point methods (IPMs) for SOCPs.
Kim and Kojima [14] showed that semi-definite programming (SDP) and SOCP relaxations provide exact optimal solutions for a class of non-convex quadratic optimization problems. Moreover, SDP problems can in fact be formulated as SOCP problems and solved as such. An SOCP formulation has a number of advantages. Adding an SOC constraint sometimes leads to negative decision variables, which usually does not occur in LP problems unless the variables are free in sign, and often yields a much better solution, even though the dimension and convexity remain the same.
In our work here, we establish two results concerning efficient and properly efficient solutions of interval programming problems with a second order cone constraint. The remainder of our work is organized as follows. In Section 2, the definitions and notations are provided. The interval valued functions in parametric and non-parametric forms, along with their convexity properties, are discussed in Section 3. In Section 4, we explain the existence of solutions for interval valued optimization problems and establish certain results concerning the efficient and properly efficient solutions of interval problems involving SOC constraints. We also investigate the efficient solutions of interval fractional and non-fractional programming problems in R^n₊ ∪ {0}. In Section 5, some numerical examples are worked through to verify the results on efficient and properly efficient solutions using the MATLAB software environment. We conclude in Section 6.

Definitions and Notations
Let I(R) denote the class of all closed intervals. A closed interval is written M = [m, m̄], where m and m̄ are respectively the lower and upper bounds of M. For closed intervals M = [m, m̄], N = [n, n̄], and k ∈ R, we have M + N = [m + n, m̄ + n̄], and kM = [km, km̄] if k ≥ 0, kM = [km̄, km] if k < 0. If M and N are bounded, real valued intervals, then the multiplication of M and N is defined to be MN = [min{mn, mn̄, m̄n, m̄n̄}, max{mn, mn̄, m̄n, m̄n̄}].
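The interval operations above can be sketched directly. This is a minimal illustration with intervals stored as (lower, upper) tuples; the helper names `iv_add`, `iv_scale`, and `iv_mul` are our own and are not from the article.

```python
# Closed intervals as (lower, upper) tuples; illustrative helpers only.

def iv_add(M, N):
    # [m, m̄] + [n, n̄] = [m + n, m̄ + n̄]
    return (M[0] + N[0], M[1] + N[1])

def iv_scale(k, M):
    # k·[m, m̄] = [k·m, k·m̄] for k ≥ 0, and [k·m̄, k·m] for k < 0
    return (k * M[0], k * M[1]) if k >= 0 else (k * M[1], k * M[0])

def iv_mul(M, N):
    # MN = [min of the four cross products, max of the four cross products]
    p = [M[0] * N[0], M[0] * N[1], M[1] * N[0], M[1] * N[1]]
    return (min(p), max(p))
```

For example, `iv_mul((-1, 2), (3, 4))` evaluates the four products −3, −4, 6, 8 and returns (−4, 8).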
Let F(x) = F(x_1, …, x_n) be a closed interval in R, for each x ∈ R^n. The interval-valued function F may be represented as F(x) = [F̲(x), F̄(x)], where F̲ and F̄ are real-valued functions defined on R^n satisfying F̲(x) ≤ F̄(x) for every x ∈ R^n. We say that the interval-valued function F is differentiable at x_0 ∈ R^n if and only if the real-valued functions F̲(x) and F̄(x) are differentiable at x_0. For more details, see [2].
Let M = [m, m̄] and N = [n, n̄] be two closed intervals in R, and let "⪯" be a partial ordering on I(R). We write M ⪯ N if and only if m ≤ n and m̄ ≤ n̄. We also write M ≺ N if and only if M ⪯ N and M ≠ N, meaning that M is inferior to N, or N is superior to M.
A second order cone is defined as Q^n = {x = (x_1; x_2) ∈ R × R^{n−1} : ‖x_2‖ ≤ x_1}, where ‖·‖ is the standard Euclidean norm and n is the dimension of Q^n; n is usually dropped from the subscript. We refer to x ⪰_Q 0 as the second-order cone inequality.
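Membership in the second order cone reduces to comparing the Euclidean norm of the tail block against the first coordinate. A minimal sketch (the helper name `in_soc` is ours):

```python
import math

def in_soc(x):
    # x = (x1; x2) ∈ R × R^{n-1}; x ∈ Q^n  ⇔  ||x2|| ≤ x1
    x1, x2 = x[0], x[1:]
    return math.sqrt(sum(v * v for v in x2)) <= x1
```

For instance, (5, 3, 4) lies in Q^3 since ‖(3, 4)‖ = 5 ≤ 5, while (4, 3, 4) does not.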
For the cone Q, let bd(Q) \ {0} = {x = (x_1; x_2) : ‖x_2‖ = x_1, x ≠ 0} denote the boundary of Q without the origin 0. In addition, let int(Q) = {x = (x_1; x_2) : ‖x_2‖ < x_1} denote the interior of Q.
We continue with an overview of the SOCP problem. A standard form of an SOCP problem is given by (P1): min c^T x subject to Ax = b, x_i ⪰_Q 0 for i = 1, …, m, where x = (x_1; …; x_m) is partitioned into blocks lying in second order cones; its dual is (D1): max b^T y subject to A^T y + z = c, z_i ⪰_Q 0 for i = 1, …, m. We make the following assumptions regarding the primal-dual pair (P1) and (D1) [8].

Assumption 2.
Both the primal and the dual are strictly feasible: there exist a primal-feasible vector x = (x_1; …; x_m) with x_i ≻_Q 0 for i = 1, …, m, and dual-feasible y and z = (z_1; …; z_m) with z_i ≻_Q 0 for i = 1, …, m.

Remark 1.
If problem (P1) has only one second order cone constraint, then the standard SOCP problem can be written as min c^T x subject to Ax = b, x ⪰_Q 0, with its corresponding dual max b^T y subject to A^T y + z = c, z ⪰_Q 0. Over time, we have seen rapid development of software packages that can be applied to problems such as SOCPs and mixed SOCP problems. SeDuMi [11] is a widely available package based on the Nesterov-Todd method.

Interval Valued Function
The definition of an interval function in terms of functions of one or more intervals is given in [2][3][4]. Hansen, Walster and Moore [2] defined an interval function as a function of one or more interval arguments onto an interval. Wu considered interval valued functions that may be defined on one or more interval arguments, or may be interval extensions of real valued functions. The interval valued function in parametric form, introduced by [4], is as follows. For m(t) ∈ M^k_ν, let f_{m(t)} : R^n → R. Then, for a given interval vector M^k_ν, an interval valued function is defined as F_{M^k_ν}(x) = {f_{m(t)}(x) | t ∈ [0, 1]^k}.

Interval Valued Convex Function
An interval valued convex function has the important property of guaranteeing the existence of a solution of the interval optimization problem.

Definition 2. [4] An interval valued function F_{M^k_ν} is said to be convex on a convex set N ⊆ R^n if f_{m(t)} is a convex function on N, for every t ∈ [0, 1]^k.

Interval Valued Function in the Parametric Form
Let a binary operation on the set of real numbers be denoted by * ∈ {+, −, ·, /}. The binary operation between two intervals M = [m, m̄] and N = [n, n̄] in I(R), denoted M * N, is the set {m * n | m ∈ M, n ∈ N}. In the case of division, M/N, it is to be noted that 0 ∉ N. An interval may be expressed in parametric form in several disciplines. Any point in M may be expressed as m(t), where m(t) = m + t(m̄ − m). Throughout our work, we consider the specific parametric representation M = [m, m̄] = {m(t) | t ∈ [0, 1]}. The algebraic operations over classical intervals can be represented through either the lower or the upper bounds of the intervals [1]. In parametric form, M * N = {m(t_1) * n(t_2) | t_1, t_2 ∈ [0, 1]}. An interval vector M^k_ν = (M_1, M_2, …, M_k)^T can be expressed in terms of parameters as m(t) = (m_1(t_1), …, m_k(t_k))^T, where m_j(t_j) ∈ M_j, m_j(0) = m^l_j, m_j(1) = m^u_j, t = (t_1, t_2, …, t_k)^T, 0 ≤ t_j ≤ 1.
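The parametric representation m(t) = m + t(m̄ − m) can be sketched in a few lines (the helper name `m_of_t` is illustrative):

```python
def m_of_t(M, t):
    # m(t) = m + t·(m̄ − m), with t ∈ [0, 1]; M = (lower, upper)
    return M[0] + t * (M[1] - M[0])
```

At t = 0 this returns the lower bound, at t = 1 the upper bound, and intermediate t values sweep the interval linearly; e.g., `m_of_t((2, 6), 0.5)` gives 4.0.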

Existence of Solutions
In this section, we consider the interval optimization problem (P3) with interval valued objective F_{M^k_ν} and interval valued constraints G_{j,N^{m_j}_ν}(x) ⪯ B_j, where B_j ∈ I(R). The partial ordering ⪯ is discussed in Section 2, and the feasible space of (P3) is denoted by χ. A point x* ∈ χ is said to be an efficient solution of (P3) if there does not exist any x ∈ χ with F_{M^k_ν}(x) ≺ F_{M^k_ν}(x*). A point x* ∈ χ is said to be a properly efficient solution of (P3) if x* is an efficient solution and there exists a real number µ > 0 such that, for every t ∈ [0, 1]^k and every x ∈ χ with f_{m(t)}(x) < f_{m(t)}(x*), there is at least one t′ ∈ [0, 1]^k, t′ ≠ t, with f_{m(t′)}(x) > f_{m(t′)}(x*) and (f_{m(t)}(x*) − f_{m(t)}(x)) / (f_{m(t′)}(x) − f_{m(t′)}(x*)) ≤ µ. Consider the following optimization problem with respect to a weight function ω: (P4) min_{x∈χ} ∫_{t∈[0,1]^k} ω(t) f_{m(t)}(x) dt, where ω(t) = ω(t_1, t_2, …, t_k). Here, t_1, t_2, …, t_k are mutually independent and each t_i varies from 0 to 1. Thus, ∫_0^1 ⋯ ∫_0^1 ω(t) f_{m(t)}(x) dt_1 ⋯ dt_k is a function of x only, say h(x), and (P4), written as min_{x∈χ} h(x), is a general nonlinear programming problem free from interval uncertainty. The problem can be solved by a nonlinear programming technique. The following theorem establishes the relationship between the solution of the transformed problem (P4) and the original problem (P3) [4]. Theorem 1. If x* ∈ χ is an optimal solution of (P4), then x* is a properly efficient solution of (P3).
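The transformation in (P4) replaces the interval objective by the scalar function h(x) = ∫ ω(t) f_{m(t)}(x) dt. A minimal numerical sketch for k = 1, using midpoint quadrature and a toy parametric objective of our own choosing (none of this is the paper's implementation):

```python
def h(x, f_mt, omega, n=1000):
    # h(x) = ∫_0^1 ω(t)·f_{m(t)}(x) dt, approximated by the midpoint rule (k = 1)
    s = 0.0
    for i in range(n):
        t = (i + 0.5) / n
        s += omega(t) * f_mt(t, x)
    return s / n

# Toy illustration: M = [1, 3] so m(t) = 1 + 2t, f_{m(t)}(x) = m(t)·x², ω ≡ 1.
# Then h(x) = (∫_0^1 (1 + 2t) dt)·x² = 2x², e.g. h(2) = 8.
```

Minimizing h over the feasible set χ by any nonlinear programming routine then yields a properly efficient solution of (P3) by Theorem 1.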

Alternative Method for Solving an Interval Problem with an SOC Constraint
Here, we consider a problem, denoted (P5), obtained by applying the order relation "⪯" to the constraints. The auxiliary interval-valued optimization problem (P5) can then be rewritten as problem (P6). It is obvious that the feasible regions of problems (P5) and (P6) are the same and, since their objective functions are also the same, both problems have the same solutions. The interval nature of problem (P6) gives rise to a very important notion called the efficient space, which is a new concept from the optimization point of view.
Therefore, the interval-valued optimization problem (P6) is easily converted to the common form (P7). We need to interpret the meaning of minimization for (P7). Since ⪯ is a partial ordering, not a total ordering, on I(R), we may follow the solution concept (efficient solution) used in multi-objective programming to interpret the meaning of minimization in the primal problem (P7). For the minimization problem (P7), we say that the feasible solution x is better than (dominates) the feasible solution x* if F(x) ≺ F(x*). Therefore, we propose the following definition. Definition 5. Let x* be a feasible solution of the primal problem (P7). We say that x* is an efficient solution of (P7) if there exists no x ∈ X such that F(x) ≺ F(x*). In this case, F(x*) is called an efficient objective value of F.
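Definition 5 can be checked directly over a finite candidate set. A small sketch of the dominance test ≺ and the resulting efficiency check (the helper names are ours, and a finite candidate list stands in for the feasible set X):

```python
def leq(M, N):
    # M ⪯ N  ⇔  m ≤ n and m̄ ≤ n̄, intervals as (lower, upper) tuples
    return M[0] <= N[0] and M[1] <= N[1]

def strictly_dominates(M, N):
    # M ≺ N  ⇔  M ⪯ N and M ≠ N
    return leq(M, N) and M != N

def is_efficient(x_star, candidates, F):
    # x* is efficient if no candidate x yields F(x) ≺ F(x*)
    return not any(strictly_dominates(F(x), F(x_star)) for x in candidates)
```

For example, with F(x) = [x, x + 1] over candidates {0, 1, 2}, the point 0 is efficient while 2 is dominated by 0.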
We denote the set of all efficient objective values of problem (P7) by Min(F, X). More precisely, we write Min(F, X) = {F(x*) : x* is an efficient solution of (P7)}. Let m be a real number. Then, it can be represented as the degenerate interval [m, m]. Now, consider the following optimization problem (P8). Obviously, if x* is an optimal solution of problem (P8), then x* is a nondominated solution of problem (P7); see [4]. We now focus on the two results given as Theorems 2 and 3 below, by which the optimal solutions of problems (P9) and (P10) are in fact efficient solutions of problem (P7), where α and β are positive scalars.
Theorem 2. If x * is an optimal solution of problem (P9), then x * is an efficient solution of problem (P7).
Proof. Problems (P9) and (P7) have the same feasible region. Suppose that x* is not an efficient solution. Then, there exists a feasible solution x such that F(x) ≺ F(x*). From inequality (1), it follows that f(x) < f(x*), which contradicts the fact that x* is an optimal solution of problem (P9). This completes the proof.

Theorem 3. If x* is an optimal solution of problem (P10), then x* is an efficient solution of problem (P7).
Proof. Problems (P10) and (P7) have the same feasible region. Suppose that x* is not an efficient solution. Then, there exists a feasible solution x such that F(x) ≺ F(x*). From inequality (1), it follows that f(x) < f(x*), which contradicts the fact that x* is an optimal solution of problem (P10). This completes the proof.

Interval Valued Convex Linear Programming Problem with SOC Constraint
An interval valued optimization problem (P3) is said to be an interval valued convex programming problem if F_{M^k_ν} and G_{j,N^{m_j}_ν} are convex functions with respect to ⪯.
If (P3) is an interval valued convex programming problem, then (P4) is a convex programming problem. A general interval linear programming problem of the form (P3) involves x^T M^k_ν, the product of a real vector x ∈ R^k and an interval vector M^k_ν ∈ (I(R))^k.
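The product x^T M^k_ν of a real vector with an interval vector is itself an interval, obtained by summing the per-component interval products. A short sketch (the helper name `x_dot_ivec` is illustrative):

```python
def x_dot_ivec(x, M):
    # x^T M for real x ∈ R^k and interval vector M = ((m_j, m̄_j))_j:
    # each term x_j·[m_j, m̄_j] flips its bounds when x_j < 0.
    lo = hi = 0.0
    for xj, (mj, mbarj) in zip(x, M):
        a, b = xj * mj, xj * mbarj
        lo += min(a, b)
        hi += max(a, b)
    return (lo, hi)
```

For instance, x = (1, −2) with M = ([1, 2], [3, 4]) gives [1, 2] + [−8, −6] = [−7, −4].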

Numerical Results
In this section, we consider three examples of various dimensions to illustrate the obtained results. To solve problems using both theorems, we use the fmincon command of MATLAB. Notation is given in Table 1, and the results are summarized in Tables 2-4 and the corresponding diagrams. We generate problems with different dimensions and report the required CPU times. All computations are performed in MATLAB R2015a (8.5) on a laptop with an Intel(R) Core i3 CPU at 2.53 GHz and 5.00 GB of RAM.
We present computational results on Examples 1 and 2 to compare the outcomes of Theorems 2 and 3. We use different diagrams and tables to show the advantages of Theorems 2 and 3 by demonstrating that any solution of problem (P9) or (P10) is an efficient solution of problem (P7). In addition, the efficient space for different pairs (α, β) is shown, with the nonzero elements of α taken randomly in the interval (0, 1) and the elements of vector β taken in (0, n) with step length 1.
Example 1. Consider the interval programming problem with SOC constraint as follows: Example 2. Consider the interval programming problem with SOC constraint as follows: Example 3. Consider the interval programming problem with SOC constraint as follows: For some ω(t) : R^2 → R, the corresponding problem (P4) becomes an SOCP problem, which can be solved by an interior point method.
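The exact statements of the examples are not recoverable from the extracted text. As a stand-in, the following sketch solves a toy two-variable problem of our own by brute force, assuming the weighted-sum scalarization f(x) = α·F̲(x) + β·F̄(x) suggested by the (α, β) sweep described above; the bound functions and grid search are ours, not the paper's fmincon setup.

```python
import itertools

def solve_toy(alpha=1.0, beta=1.0, step=0.1):
    # Toy lower/upper bound objectives (assumed for illustration).
    F_low = lambda x: x[0] - x[1]
    F_up = lambda x: 2 * x[0] - x[1] + 1
    best, best_val = None, float("inf")
    grid = [i * step for i in range(int(1 / step) + 1)]
    for x1, x2 in itertools.product(grid, grid):
        if abs(x2) <= x1:  # SOC constraint in R²: |x2| ≤ x1
            val = alpha * F_low((x1, x2)) + beta * F_up((x1, x2))
            if val < best_val:
                best, best_val = (x1, x2), val
    return best, best_val
```

With α = β = 1, the scalarized objective is 3x₁ − 2x₂ + 1, minimized on this grid at the cone's apex (0, 0). In practice, fmincon or an interior point SOCP solver replaces the grid search.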
If ω(t) = 2t_1, then the properly efficient solution is x* = [0.2, 0.2], the optimal interval is [−1.6, −0.6], and the efficient solution obtained by Theorem 2 is reported in Table 2, which shows the objective function values obtained using Theorems 2 and 3. The results for various values of n are summarized in Tables 3 and 4. We observe that the CPU times for problems with SOC constraints are lower than those for problems without SOC constraints.
Efficient spaces for Example 3 with the SOC constraint for different values of n are given in Figures 1-3, and without the SOC constraint in Figures 4-6; the efficient space is a new concept in the efficiency literature.

Conclusions
A very important concept of SOCP was investigated here. We focused on the interval fractional programming problem with second order cone constraints. To solve such problems, we established two important results concerning the efficient and properly efficient solutions of second-order cone constrained interval programming problems. Furthermore, a new notion of efficiency called the efficient space was proposed, arising from the interval form of the objective function; the corresponding results were summarized in Tables 3 and 4 and illustrated in Figures 1-6, with efficient spaces related to the upper and lower bound properties of the interval problem. To illustrate the performance of our methodology, a few numerical examples were worked through. The numerical results showed that the CPU times needed for solving problems with second order cone constraints are less than those for problems without second-order cone constraints, which is an important observation.
where F : R^n → I(R) is an interval-valued function, and g_i : R^n → R and h_i : R^n → R, i = 1, …, m, are real-valued functions. Let M = [m, m̄] and N = [n, n̄] be two closed intervals in R. We write M ⪯ N if and only if m ≤ n and m̄ ≤ n̄, and M ≺ N if and only if M ⪯ N and M ≠ N.

Figure 3 .
Figure 3. Efficient space for Example 3 with SOC constraint using n = 1000.

Figure 6 .
Figure 6. Efficient space for Example 3 without SOC constraint using n = 1000.

Table 2 .
Objective function values using Theorems 2 and 3.

Table 3 .
CPU times corresponding to Example 1.

Table 4 .
CPU times corresponding to Example 2.

n | CPU time for Exwsoc2 (without SOC) | CPU time for Exsoc2 (with SOC) | CPU ratio
By our described methodology, we get a properly efficient solution. Here, t = (t_1, t_2