Article

Selective Multistart Optimization Based on Adaptive Latin Hypercube Sampling and Interval Enclosures

by
Ioannis A. Nikas
1,*,
Vasileios P. Georgopoulos
2 and
Vasileios C. Loukopoulos
2
1
Department of Tourism Management, University of Patras, GR 26334 Patras, Greece
2
Department of Physics, University of Patras, GR 26504 Rion, Greece
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(11), 1733; https://doi.org/10.3390/math13111733
Submission received: 5 April 2025 / Revised: 20 May 2025 / Accepted: 22 May 2025 / Published: 24 May 2025
(This article belongs to the Special Issue Advanced Optimization Algorithms in the Era of Machine Learning)

Abstract
Solving global optimization problems is a significant challenge, particularly in high-dimensional spaces. This paper proposes a selective multistart optimization framework that employs a modified Latin Hypercube Sampling (LHS) technique to maintain a constant search space coverage rate, alongside Interval Arithmetic (IA) to prioritize sampling points. The proposed methodology addresses key limitations of conventional multistart methods, such as the exponential decline in space coverage with increasing dimensionality. It prioritizes sampling points by leveraging the hypercubes generated through LHS and their corresponding interval enclosures, guiding the optimization process toward regions more likely to contain the global minimum. Unlike conventional multistart methods, which assume uniform sampling without quantifying spatial coverage, the proposed approach constructs interval enclosures around each sample point, enabling explicit estimation and control of the explored search space. Numerical experiments on well-known benchmark functions demonstrate improvements in space coverage efficiency and enhanced local/global minimum identification. The proposed framework offers a promising approach for large-scale optimization problems frequently encountered in machine learning, artificial intelligence, and data-intensive domains.
MSC:
90C26; 65K05; 65G20; 65G30; 90C56; 68T20

1. Introduction

1.1. Optimization

Optimization is widely used across various fields, including science, economics, engineering, and industry [1]. An optimization problem aims to find the optimal solution from a set of candidates by minimizing (or maximizing) an objective function [2]. The solution may also be subject to constraints. This work focuses on the box-constrained global minimization problem, formulated as follows:
$\min_{x \in S} f(x),$
where $f$ is the objective function, $S$ is a closed hyper-rectangular region of $\mathbb{R}^n$, and the vector $x = (x_1, \dots, x_n)$ is the optimization variable. A vector $x^*$ is the optimal solution, or the global minimum, of problem (1) if it attains the lowest objective value among all vectors in $S$, i.e.,
$f(x^*) \le f(x), \quad \forall x \in S.$
A vector $\hat{x}$ is a local minimum if there exists a neighborhood $\mathcal{N} \subseteq S$ such that $f(\hat{x}) < f(x)$ for all $x \in \mathcal{N}$ with $x \neq \hat{x}$.

1.2. Optimization in the Modern Era

In recent years, Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), and Big Data (BD) have revolutionized nearly all areas of science [3,4], becoming key drivers of the Fourth Industrial Revolution [5]. Optimization plays a fundamental role in all these fields: machine learning algorithms use data to formulate an optimization model, whose parameters are determined by minimizing a loss function [6]. Deep learning follows the same principle; however, optimization in deep networks is more challenging due to their complex architectures and the large datasets involved [7]. Data scale is also a vital factor. Big data is commonly characterized by the “5 Vs”: volume, variety, value, velocity, and veracity [5]. Managing these factors effectively is essential for insight discovery, informed decision-making, and process optimization [8].
Thus, optimization is the backbone of AI, ML, DL, and BD, enabling models to learn efficiently, make better decisions, and handle large-scale computations [6]. From training AI models and tuning hyperparameters to designing deep learning architectures and managing big data workflows, optimization is crucial for achieving high performance and efficiency. Consequently, efficient optimization is more crucial than ever.

1.3. Local and Global Optimizers

Optimization algorithms are broadly classified into local and global ones. Global optimizers explore large regions of the search space to locate the global optimum [9]. Global optimization is essential when the objective function contains multiple local optima. In contrast, local optimizers iteratively refine an initial solution within a confined search region, seeking a local optimum. Local optimization is highly sensitive to initial conditions, as different starting points may lead to different local optima [2]. This approach is suitable when the global optimum is expected within a specific region of the search space.
Gradient-based local optimizers include gradient descent, Newton’s method, and quasi-Newton methods. Gradient descent updates the solution by following the negative gradient, ensuring monotonic improvement under convexity conditions [1]. Newton’s method leverages second-order derivatives for faster convergence but requires computationally expensive Hessian evaluations [10]. Quasi-Newton methods, such as the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm, approximate the Hessian efficiently, balancing computational cost and convergence speed [11].
Derivative-free local optimizers are useful when gradient computation is impractical or costly. Methods such as Nelder–Mead and Pattern Search refine solutions heuristically, avoiding the need for explicit derivative calculations [12,13]. These techniques are widely applied to non-differentiable or noisy objective functions.
Global optimization methods fall into two main categories: deterministic and stochastic, depending on whether they incorporate stochastic elements [14,15]. Deterministic algorithms systematically explore the feasible space, whereas stochastic methods rely on random sampling, with probabilistic convergence as their defining characteristic [16]. Among deterministic methods, interval techniques are among the most widely used [17,18,19]. These techniques iteratively partition the search space S into smaller subregions, discarding those that do not contain the global solution with certainty, using the principles of interval analysis. However, most global optimization methods are stochastic, including controlled random search (CRS), simulated annealing (SA), differential evolution (DE), genetic algorithms (GAs), particle swarm optimization (PSO), and ant colony optimization (ACO), among others [20].
In addition, there are hybrid stochastic methods which combine different optimization techniques to enhance performance. Examples include PSO–SA hybrids, GA–DE hybrids, and GA–PSO hybrids [20].

1.4. The Multistart Method

The multistart method (MS) [21,22] is one of the main stochastic global optimization techniques and serves as the foundation for many modern global optimization methods. The algorithm aims to efficiently explore the feasible space by ensuring that each region is searched only once [16]. Several variants have been developed, including clustering [23], domain elimination [24], zooming [24], and repulsion [25]. These variants improve the method by introducing new features or addressing its weaknesses, such as repeatedly identifying the same minimum [16].
Typically, MS operates by capturing a random sample from the objective function, which is then used as a starting point for a local optimizer. Samples are typically drawn from a specified probability distribution, most commonly the uniform distribution. This approach is particularly useful for non-convex problems with multiple local optima, as it increases the likelihood of locating a global optimum. The multistart method has been extensively studied in various contexts, including its application to identifying local minima [26], its integration into hybrid techniques [27], and its use in parallel optimization [28].
The term selective multistart optimization describes a multistart framework in which local optimization is not performed from all available sampling points but only from a selected subset identified as the most promising. Similar approaches have been proposed in the literature, aiming to improve the efficiency of multistart methods through intelligent sampling, ranking, or filtering strategies based on probabilistically “best” points in their neighborhood [29].

1.5. Sampling

The distribution of starting points significantly impacts the performance of a multistart method. Several sampling strategies exist, each with its own advantages and drawbacks.
Random sampling is the simplest approach, selecting points uniformly at random from the search space. While easy to implement, it may lead to inefficiencies, especially in high-dimensional spaces where random points may be sparsely distributed [30].
Quasi-random sequences, such as Halton and Sobol sequences, provide low-discrepancy sampling, ensuring better uniformity than purely random sampling. This makes them more effective for multistart optimization [31].
Adaptive sampling methods use information from previous function evaluations to guide future sampling. Techniques such as clustering-based sampling and Bayesian Optimization (BO) dynamically adjust sampling density in regions of interest, improving convergence rates [32].
Latin Hypercube Sampling enhances random sampling by ensuring a more uniform coverage of the search space. Each variable’s range is divided into equal intervals, with points selected so that each interval is sampled exactly once. This method reduces clustering and ensures better space-filling properties [33]. Various modifications of Latin Hypercube Sampling have been proposed to address different types of problems [34,35,36].
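As a minimal illustration of this one-sample-per-interval property, the following MATLAB sketch uses lhsdesign from the Statistics and Machine Learning Toolbox (also used later in this work); the values of N and d are chosen only for illustration:

    % Each of the N row-intervals of every dimension receives exactly one LHS point.
    N = 5; d = 2;
    pts = lhsdesign(N, d);        % N points in the unit hypercube [0,1]^d
    sort(ceil(pts * N))           % each column is the permutation 1, 2, ..., N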

1.6. Current Work

This paper proposes a novel framework for selective multistart optimization to identify both local and global minima. The framework enhances the effectiveness and efficiency of multistart optimization by addressing key limitations through the integration of Latin Hypercube Sampling and Interval Arithmetic. Its primary objectives are to improve space coverage of the feasible search space, particularly in higher dimensions; prioritize sampling points from less to more promising ones, to guide optimization toward the global minimum; ensure uniform behavior regardless of the resulting sample; and maintain computational efficiency.
LHS provides uniformity, while IA introduces two key advantages: (a) it enables discarding regions that do not contain the global minimum with certainty, thereby reducing the search space, and (b) it enables the verification of a minimum as the global minimum—a capability not available using Real Arithmetic. These features enhance both effectiveness and efficiency, particularly in higher dimensions. The proposed framework’s performance is compared to that of a conventional multistart approach using the same sample. Rather than selecting points randomly within the interval boxes, this work uses the midpoint of each box as the starting point.
Both approaches are implemented in MATLAB (v. 24.2.0.2773142), using a combination of core MATLAB functions, proprietary MathWorks toolbox functions—namely, lhsdesign (for Latin Hypercube Sampling), from the Statistics and Machine Learning Toolbox, and fminunc (a quasi-Newton method), from the Optimization Toolbox—as well as functions from the third-party INTLAB toolbox for interval arithmetic [37,38]. All experiments were conducted on an AMD Ryzen 2700X, equipped with 32 GB of RAM, on Windows 10.

2. Materials and Methods

2.1. LHS and the Coverage Issue

Stratified sampling is a variance reduction technique in which the sample space $\Omega$ is partitioned into $M$ disjoint strata $\{D_m\}$, where $m = 1, 2, \dots, M$, for some $M \ge 2$, and a sample is drawn independently from each stratum. These strata satisfy $D_i \cap D_j = \emptyset$ for all $i \neq j$, ensuring that their union covers the entire space, i.e., $\Omega = \bigcup_{m=1}^{M} D_m$.
A specific form of stratified sampling is Latin Hypercube Sampling, which further subdivides each of the $d$ dimensions of the domain, $D_k$, into $N$ non-overlapping (except at their endpoints), equally sized intervals $\{D_{k,j}\}$, $j = 1, 2, \dots, N$. Each interval $D_{k,j}$ contains exactly one sampled point, drawn uniformly at random within that interval. LHS achieves a more uniform distribution of points across the domain, thereby reducing sampling variability. For each dimension $k \in \{1, \dots, d\}$, we define the partition $D_k = \bigcup_j D_{k,j}$ with $D_{k,i} \cap D_{k,j} = \emptyset$ whenever $i \neq j$, $i, j = 1, 2, \dots, N$. LHS is performed using the density function $f_k(x) \propto \mathbf{1}(x \in D_{k,j})/N$, where $f_k$ represents the marginal probability density function of the $k$th variable. The final random vector $x$ is then formed by permuting the sampled values across dimensions (Figure 1) [39].
The percentage coverage of the search space can be approximated using d-dimensional intervals or interval boxes, from which the sample is drawn. A partition of N non-overlapping intervals in each dimension leads to the following formula for search space coverage:
$c(N, d) = \frac{N}{N^d} \cdot 100 = \frac{100}{N^{d-1}},$
where c denotes the percentage coverage of the search space. It is evident that as the number of partitions or the dimensionality increases, the coverage c tends to zero. Conversely, to maintain a minimum coverage of the search space, say c % , the required partitioning in each dimension is given by:
$N = 10^{(2 - \log c\%)/(d-1)}.$
This formula leads to an interesting conclusion:
Corollary 1
(The Coverage Issue). The percentage coverage of the search space using interval boxes resulting from LHS is exponentially inversely proportional to the number of required partitions as the dimensionality of the problem increases.
In practice, this implies that, as the dimensionality of the problem increases, maintaining a constant coverage of the search space with LHS requires an exponentially larger sample. Conversely, if the standard LHS construction is kept, with one interval box per partition, the required partition, and hence the number of boxes, shrinks, which, in the context of multistart optimization, leads to fewer applications of a local minimizer and, consequently, to a lower probability of finding the global minimum.
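The coverage formula and the required-partition formula above can be evaluated directly; the following MATLAB sketch illustrates the coverage issue numerically (the values of N, d, and c below are illustrative only):

    % Percentage coverage of the search space and the partition required
    % for a prescribed coverage.
    coverage = @(N, d) 100 ./ N.^(d - 1);                 % coverage in percent
    reqPartition = @(c, d) 10.^((2 - log10(c)) ./ (d - 1));

    coverage(10, [2 3 5 10])       % 10, 1, 0.01, 1e-7 (percent)
    reqPartition(10, [2 3 5 10])   % 10, 3.16, 1.78, 1.29 partitions per dimension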

2.2. Proposed Framework Components

The proposed selective multistart optimization framework consists of the following components.

2.2.1. Latin Hypercube Sampling

The first component of the framework is implemented using LHS (see Section 2.1).

2.2.2. Constant Coverage of Search Space/Resolving the Coverage Issue

In the second component of the framework, in light of Corollary 1, we begin with a given partitioning and coverage rate of the search space. To ensure the required coverage, an appropriate sample size is chosen, which, however, corresponds to a larger partition than the given one:
$S(c, N, d) = \frac{c}{100} \cdot N^d,$
where $S \in \mathbb{N}$ is the sampling size function, $c\% \in [0, 1]$ the desired coverage, $N \in \mathbb{N}$ the partition scheme, and $d \in \mathbb{N}$ the problem’s dimensionality. To resolve the discrepancy between the given and required partitions, the interval boxes corresponding to the sampling points are expanded to match the size indicated by the given partition.
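A minimal MATLAB sketch of this component is given below; it assumes the unit hypercube as the search domain, uses lhsdesign, and rounds the sample size up, while the parameter values are illustrative only:

    % Sample size for a prescribed coverage and expansion of the sampled
    % boxes to the size dictated by the given partition N.
    c = 10; N = 6; d = 3;                 % 10% coverage, partition 6, dimension 3
    S = ceil(c/100 * N^d);                % 22 sample points instead of N = 6
    pts = lhsdesign(S, d);                % LHS sample in [0,1]^d

    ii = ceil(pts * N);                   % index of the N-cell containing each coordinate
    lo = (ii - 1) / N;                    % lower corners of the expanded interval boxes
    hi = ii / N;                          % upper corners of the expanded interval boxes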

2.2.3. Priorities on the Sampling Points

In the third component of the framework, the range of the objective function is estimated over the expanded interval boxes using interval arithmetic. These boxes are assessed by the lower bounds of their calculated enclosures; those with the lowest lower bounds are deemed most promising for locating the global minimum. This assessment carries over to the corresponding sample points, determining which initial values are the most promising and guiding the local minimizer toward the global minimum.
Although interval arithmetic may overestimate the range of an objective function under certain conditions [40], it still provides rigorous upper and lower bounds that are sufficient for identifying promising regions for deeper local searches. For example, consider the one-dimensional Rastrigin function, defined over [−5.12, 5.12], as shown in Figure 2. The given partition size is 6, and for each partition, a range estimation is performed using interval arithmetic on the natural interval extension of the Rastrigin function—a natural interval extension replaces the real variables of the objective function with interval ones. It is evident that the third and fourth intervals are the most likely to contain the global minimum. Therefore, the points drawn within these intervals using LHS should be prioritized to initiate a local minimizer.
Range estimation for each interval box enables local comparison of the objective function’s dynamics and, in some cases, the discard of certain search space regions from further exploration. Specifically, in interval global optimization, pruning techniques such as the cut-off test [41] can accelerate convergence by discarding interval boxes that are guaranteed not to contain the global minimum. During the cut-off test, an interval box is eliminated from further consideration if its lower bound estimate exceeds a known upper bound ( f ˜ ) identified elsewhere in the search space (Figure 3).
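The following MATLAB sketch reproduces this one-dimensional example, assuming the INTLAB toolbox is on the path (infsup constructs an interval; inf and sup return its bounds). Here, for illustration only, the known upper bound is taken as the smallest enclosure upper bound, whereas in the framework it is the best local minimum found so far:

    % Lower bounds of the 1-D Rastrigin function over a 6-box partition of
    % [-5.12, 5.12], prioritization of the boxes, and the cut-off test.
    a = -5.12; b = 5.12; N = 6;
    edges = linspace(a, b, N + 1);
    lb = zeros(1, N); ub = zeros(1, N);
    for j = 1:N
        X = infsup(edges(j), edges(j+1));      % j-th interval box
        F = 10 + X^2 - 10*cos(2*pi*X);         % natural interval extension
        lb(j) = inf(F); ub(j) = sup(F);        % range enclosure bounds
    end

    [~, order] = sort(lb);                     % most promising boxes first
    f_tilde = min(ub);                         % a guaranteed upper bound on f*
    keep = lb <= f_tilde;                      % boxes surviving the cut-off test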

2.2.4. Multistart, “Promising Status”, and Termination Criteria

The search continues until the local minimizer has processed all available sample points or predefined termination criteria are met. In this paper, two termination criteria are used: (1) a criterion to accept a local minimum as the global minimum with certainty, and (2) a criterion to accept a successively found local minimum as a candidate for the global minimum [42].
Termination Criterion 1
(Strong Condition). Let $f$ be an objective function, $[x] \in \mathbb{IR}$ an interval box, and $\tilde{f}$ a known upper bound (i.e., a previously found local minimum) of the global minimum $f^*$. Let $F$ be the natural interval extension of $f$ and $F([x])$ be the range enclosure of $f$ over $[x]$. If $|\tilde{f} - \underline{F}([x])| < \varepsilon$ for a small predefined tolerance $\varepsilon$, then the global minimum has been found with tolerance $\varepsilon$.
Termination Criterion 2
(Weak Condition). Let $f$ be an objective function, and let $x_{local}^{k}$ and $x_{local}^{k+1}$ be successive local minimizers found by the multistart optimization method. If $|x_{local}^{k+1} - x_{local}^{k}| < \varepsilon$ for a small predefined tolerance $\varepsilon$, then $f(x_{local}^{k+1})$ is considered a candidate global minimum with tolerance $\varepsilon$.
Each time a local minimum is found, the "promising status" of the available sample points is reassessed using well-known acceleration devices in interval global optimization, such as the cut-off test, applied to the corresponding expanded interval boxes. If applicable, these techniques discard certain interval boxes, reducing the number of sample points that need to initiate a local search.
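A minimal sketch of how the two checks could look in MATLAB is given below; the variable names (F_L for the lower bound of the enclosure over the initial box, f_min and x_min for the latest local minimum and minimizer, x_prev for the previous one) and the dummy values are hypothetical:

    % Termination checks of the selective multistart loop (dummy values).
    eps_ms = 1e-6;
    F_L    = 0;                 % lower bound of the enclosure over the initial box
    f_min  = 3.9e-7;            % latest local minimum returned by the local optimizer
    x_min  = [0.01; -0.02];     % latest local minimizer
    x_prev = [0.90;  0.50];     % previous local minimizer

    if abs(f_min - F_L) <= eps_ms
        disp('Strong condition: global minimum found within tolerance');
    elseif norm(x_min - x_prev) <= eps_ms
        disp('Weak condition: candidate global minimum accepted');
    end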

2.3. On Interval Analysis

A closed, compact interval, denoted as $[x]$, is the set of all points
$[x] = [\underline{x}, \overline{x}] = \{\, x \in \mathbb{R} \mid \underline{x} \le x \le \overline{x} \,\},$
where $\underline{x}, \overline{x}$ denote the lower and upper bounds of the interval $[x]$, respectively, and the set of all closed and compact intervals is denoted as $\mathbb{IR}$. The intersection of two intervals, $[x] \cap [y]$, is defined as the set of their common points, while an empty intersection, i.e., the lack of common points, is denoted as $\emptyset$. The width of an interval $[x]$ is defined by $w([x]) = \overline{x} - \underline{x}$, while the midpoint of an interval $[x]$ is defined as $m([x]) = \frac{\overline{x} + \underline{x}}{2}$.
Similarly, an interval vector $[x]$, also called an interval box, is a vector of intervals $[x] = ([x]_1, [x]_2, \dots, [x]_n)^T$, and the set of all interval vectors is denoted as $\mathbb{IR}^n$. The width of an interval vector is defined by
$w([x]) = \max_i w([x]_i),$
while the midpoint of an interval vector is defined as
$m([x]) = (m([x]_1), m([x]_2), \dots, m([x]_n)).$
For operations between two intervals $[a], [b] \in \mathbb{IR}$, a corresponding interval arithmetic was defined, which, in general, is described as a set extension of real arithmetic, using the elementary operations $\circ \in \{+, -, \cdot, \div\}$:
$[a] \circ [b] = \{\, a \circ b \mid a \in [a],\ b \in [b] \,\}.$
The above definition is equivalent to the following simpler rules [43]:
$[a] + [b] = [\underline{a} + \underline{b},\ \overline{a} + \overline{b}]$
$[a] - [b] = [\underline{a} - \overline{b},\ \overline{a} - \underline{b}]$
$[a] \cdot [b] = [\min\{\underline{a}\underline{b}, \underline{a}\overline{b}, \overline{a}\underline{b}, \overline{a}\overline{b}\},\ \max\{\underline{a}\underline{b}, \underline{a}\overline{b}, \overline{a}\underline{b}, \overline{a}\overline{b}\}]$
$[a] / [b] = [\min\{\underline{a}/\underline{b}, \underline{a}/\overline{b}, \overline{a}/\underline{b}, \overline{a}/\overline{b}\},\ \max\{\underline{a}/\underline{b}, \underline{a}/\overline{b}, \overline{a}/\underline{b}, \overline{a}/\overline{b}\}],$
where $0 \notin [b]$ for the case of interval division. For the case where $0 \in [b]$, an extended interval arithmetic has been proposed [44].
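These rules can be coded directly; the following plain-MATLAB sketch stores an interval as a [lower, upper] pair, assumes $0 \notin [b]$ for division, and ignores floating-point outward rounding (which rigorous implementations such as INTLAB provide):

    % Elementary interval operations on intervals stored as [lower, upper] pairs.
    iadd = @(a, b) [a(1) + b(1), a(2) + b(2)];
    isub = @(a, b) [a(1) - b(2), a(2) - b(1)];
    imul = @(a, b) [min([a(1)*b(1) a(1)*b(2) a(2)*b(1) a(2)*b(2)]), ...
                    max([a(1)*b(1) a(1)*b(2) a(2)*b(1) a(2)*b(2)])];
    idiv = @(a, b) imul(a, [1/b(2), 1/b(1)]);    % valid only when 0 is not in [b]

    iadd([1 2], [3 5])     % [4 7]
    isub([1 2], [3 5])     % [-4 -1]
    imul([-1 2], [3 5])    % [-5 10]
    idiv([1 2], [4 8])     % [0.125 0.5]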
The range of a function $f$ over a closed and compact interval vector $[x] \subset \mathbb{R}^n$ is defined as
$f_{rg}([x]) = \{\, f(x) \mid x \in [x] \,\} \subseteq F([x]),$
where F is an interval extension of function f and provides enclosures of the actual range of f. The following theorem, known as the Fundamental Theorem of Interval Analysis [40,45], can be used to bound the exact range of a given expression.
Theorem 1.
Let $f : \mathbb{R}^n \to \mathbb{R}$ be a function, and let $[x]$ be an interval vector. If $F$ is an inclusion isotonic interval extension of $f$, then
$f([x]) \subseteq F([x]).$
The resulting enclosure of the range of $f$ is generally overestimated due to the phenomenon of dependency, as described in [45]. Specifically, the more occurrences a variable has in an expression, the higher the overestimation will be. A more thorough study of interval analysis may be found in [40,46,47,48].
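A classic minimal illustration of the dependency effect: for $x \in [0, 1]$ the expression $x - x$ is identically zero, yet its natural interval extension treats the two occurrences of $x$ independently:

    % Dependency: the natural interval extension of x - x over [0, 1].
    isub = @(a, b) [a(1) - b(2), a(2) - b(1)];
    x = [0 1];
    isub(x, x)      % returns [-1 1], overestimating the true range {0}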

2.4. The Algorithmic Approach

All the aforementioned ideas are summarized in Algorithm 1 and Figure 4, which outline the proposed framework for selective multistart optimization (SelectiveMultistart).
The proposed framework is designed for global optimization by efficiently exploring the search space. The process begins with the necessary initializations, followed by the calculation of the sample size required to achieve the desired coverage c % and the application of LHS to generate the sample. The interval boxes containing the sample points are then expanded to ensure that their total number matches the given partition. The objective function is then evaluated for all expanded intervals, and the results are stored in a working list L , along with the lower bound of the corresponding range estimation. Steps 1 through 6 constitute the preprocessing phase of the proposed framework.
Algorithm 1: Selective multistart framework for global optimization.
     Function SelectiveMultistart(F, [x]^0, N, d, c, ε_opt, ε_ms)
1       f̃ := +∞; x_local^k := +∞                  // Initialization
2       F_L := F̲([x]^0)                            // f* ≥ F̲([x]^0)
3       S := (c/100) · N^d                          // Sample size S
4       Sample := LHS(d, S)
5       [x] := Expand_IBoxes(Sample, N)             // Construct the interval boxes that contain each sample point
6       L := {([x]_i, F̲([x]_i))}
        while L ≠ ∅ do
7          [y] := Best_Point(L)                     // The most promising box
8          [f_min, x_min] := Local_Optimizer(mid([y]), ε_opt)
9          f̃ := min(f_min, f̃); x_local^{k+1} := x_min
           // Termination criterion #1 -- Strong condition
           if |f_min − F_L| ≤ ε_ms then
10            f* := f_min                           // Global minimum is found
11            return
           // Termination criterion #2 -- Weak condition
           else if |x_local^{k+1} − x_local^k| ≤ ε_ms then
12            f* := f_min                           // Candidate global minimum
13            return
           else
14            L := CutOffTest(L, f̃)                // Discard all boxes with F̲([x]) > f̃
15            x_local^k := x_local^{k+1}
           end
        end
     end
     Function Expand_IBoxes(S, N)
16      ii := ⌈S · N⌉                               // Interval index of each sample point
17      is := w(axis)/N                             // Interval size per dimension
18      [x] := [inf(S_I) + (ii − 1) · is, inf(S_I) + ii · is]   // Constructed intervals
19      return [x]
     end
The subsequent steps form the core of the multistart algorithm. The most promising point is selected from the working list, based on the lower bound of its range estimation, and extracted for local optimization. A local minimizer is then initiated, and the best-known upper bound is compared with the newly found local minimum, updating the bound if a better solution is identified. The termination criteria are then applied, followed by the execution of the cut-off test on the remaining items on the working list. The algorithm iterates through this sequence of steps until the termination criteria are met or no promising interval boxes remain.
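To make the control flow concrete, the following self-contained MATLAB sketch applies the framework to the one-dimensional Rastrigin function on [−5.12, 5.12]. It is an illustrative simplification, not the implementation used for the experiments: it assumes INTLAB (infsup, inf, sup) and fminunc are available, uses a 50% coverage with a partition of N = 12, and stores the working list simply as a vector of lower bounds:

    % Selective multistart on the 1-D Rastrigin function (illustrative sketch).
    f    = @(x) 10 + x.^2 - 10*cos(2*pi*x);         % objective (real arithmetic)
    Fext = @(X) 10 + X^2 - 10*cos(2*pi*X);          % natural interval extension (INTLAB intval)

    a = -5.12; b = 5.12; N = 12; c = 50; eps_ms = 1e-6;
    S   = ceil(c/100 * N^1);                        % sample size for c% coverage, d = 1
    pts = a + (b - a) * lhsdesign(S, 1);            % LHS sample rescaled to [a, b]

    % Expand each sample point's box to the given partition N (Expand_IBoxes).
    ii = ceil((pts - a) / (b - a) * N);             % cell index in 1..N
    lo = a + (ii - 1) * (b - a) / N;
    hi = a + ii * (b - a) / N;

    % Working list: lower bounds of the enclosures over the expanded boxes.
    lb = zeros(S, 1);
    for j = 1:S
        lb(j) = inf(Fext(infsup(lo(j), hi(j))));
    end
    F_L = inf(Fext(infsup(a, b)));                  % lower bound over the initial box
    f_tilde = Inf; x_prev = Inf; f_star = NaN;
    opts = optimoptions('fminunc', 'Display', 'off');

    while any(~isnan(lb))
        [~, k] = min(lb);                           % most promising remaining box
        x0 = (lo(k) + hi(k)) / 2;                   % its midpoint as starting point
        lb(k) = NaN;                                % remove the box from the list
        [x_min, f_min] = fminunc(f, x0, opts);
        f_tilde = min(f_tilde, f_min);
        if abs(f_min - F_L) <= eps_ms               % strong condition
            f_star = f_min; break;
        elseif abs(x_min - x_prev) <= eps_ms        % weak condition
            f_star = f_min; break;
        end
        lb(lb > f_tilde) = NaN;                     % cut-off test
        x_prev = x_min;
    end
    if isnan(f_star), f_star = f_tilde; end         % best local minimum if no criterion fired
    f_star                                          % 0 within tolerance if successful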

3. Results

In the literature, there is no consensus on how to demonstrate a method’s efficiency, and a variety of metrics and graphical representations are used. This work presents the following outputs, motivated by the following considerations: (a) the method’s effectiveness in detecting the global minimum; (b) in cases where detection fails, the method’s proximity to the global minimum; (c) the method’s efficiency (i.e., computational cost relative to output); and (d) the method’s behavior with respect to the problem’s characteristics. Accordingly, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9, Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14 and Table 15, Table A1, Table A2, Table A3, Table A4 and Table A5 (in Appendix A) were selected to illustrate the above aspects.
Our proposed framework is referred to as IVL, while the conventional approach is denoted as MS. All graphs and tables are presented such that a positive value indicates better performance of IVL compared to MS, whereas a negative value indicates worse performance. Entries marked with an asterisk (*) correspond to cases of division by zero. Specifically, in Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9:
  • Graph (a) illustrates the difference in success rates between the two methods, where success rate is defined as the ratio of successfully identified global minima to the total number of function calls.
  • Graph (b) presents the correlation between success rate and problem dimensionality for each partition size, providing insights into how scalability affects the detection capability of each method.
  • Graph (c) shows the relationship between the number of function evaluations and the partition size, across all dimensions, highlighting the computational cost associated with increasing dimensionality.
  • Graph (d) investigates efficiency by correlating the number of function calls per detected local minimum with partition size, again across all dimensions.
  • Graph (e) depicts the relative difference in function evaluations between the two methods across all partition sizes and dimensions.
  • Finally, Graph (f) introduces an efficiency metric for IVL, defined as:
$\dfrac{\text{total number of search interval boxes} - \text{function calls}}{\text{total number of search interval boxes}},$
    which quantifies how effectively the method avoids unnecessary evaluations, thereby reflecting its ability to prune the search space.
Each table reports three performance metrics—namely, Success Rate (SR), Coefficient of Variation (CV), and Efficiency (Eff)—for both IVL and MS. The symbol * denotes division by zero. A short computational sketch of these metrics is given after the list below.
  • The Success Rate (SR) is defined as the number of successfully detected global minima divided by the total number of function evaluations, serving as a measure of effectiveness.
  • The Coefficient of Variation (CV) is defined as the ratio of the standard deviation to the mean of the distances from the detected minima to the true global minimum. This metric highlights the robustness of convergence across trials.
  • The Efficiency (Eff) metric is calculated as the success rate divided by the number of function calls, providing an indicator of performance that balances accuracy with computational cost.
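As a minimal computational sketch of the three metrics (all numbers below are made-up run statistics, not results from the paper):

    % Table metrics computed from hypothetical run statistics.
    f_global = 0;                               % known global minimum value
    found = [0 0 2.1 0 3.9 0];                  % minima returned by the local optimizer
    n_calls = numel(found);                     % local optimizer calls
    n_fevals = 4800;                            % total objective function evaluations

    dists = abs(found - f_global);              % distances to the known global minimum (in objective value, for illustration)
    globals_found = sum(dists < 1e-6);          % successfully detected global minima
    SR  = globals_found / n_fevals;             % Success Rate
    CV  = std(dists) / mean(dists);             % Coefficient of Variation
    Eff = SR / n_calls;                         % Efficiency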
Additional ratios and measures are provided in Table A1, Table A2, Table A3, Table A4 and Table A5 (in Appendix A).
The experiments were conducted using a space coverage of 10%, with a tolerance of $\varepsilon = \varepsilon_{opt} = \varepsilon_{ms} = 1 \times 10^{-6}$. These values reflect a balance between practical numerical accuracy and computational feasibility.

3.1. Limitations

3.1.1. On Methodological Comparisons

In evaluating the performance of the proposed framework, we chose to compare it with the conventional multistart approach, as both share a common methodological foundation: the use of multiple starting points to guide local optimization. This enables a fair and consistent basis for comparison in terms of sampling strategies, coverage, and convergence behavior. While numerous other global optimization techniques exist—such as evolutionary algorithms, swarm intelligence methods, and Bayesian Optimization—a direct comparison with these would not be methodologically appropriate, as they operate under fundamentally different principles.
For instance, BO employs surrogate probabilistic models—such as Gaussian Processes—to approximate the objective function, along with acquisition functions to identify promising regions [49]. In contrast, our framework constructs boxes around sampled points and uses interval arithmetic to deterministically enclose the objective function’s ranges within these boxes. These enclosures are then used to assess the boxes, determine the most promising one, and facilitate pruning through rigorous discard criteria, such as the cut-off test. As such, rather than presenting cross-paradigm benchmarking, we focused on demonstrating the strengths and theoretical properties of our framework within its intended design space.

3.1.2. On High-Dimensional Spaces

The curse of dimensionality [50] is a well-known challenge in global optimization, particularly for sampling-based methods. In the proposed framework, the percentage coverage of the search space is exponentially inversely proportional to the dimensionality d (Corollary 1). To address this, we introduce a compensatory mechanism: expanding the interval boxes to ensure that a predefined coverage rate is maintained even as d increases.
However, while this strategy ensures theoretical coverage of the search space, it incurs an increasing computational cost. Specifically, the required sample size $S \propto N^d$ increases exponentially with the dimension. Consequently, the practical applicability of the framework is naturally restricted beyond a certain dimensional threshold—typically $d \approx 10$–20 with $N \approx 2$–5, under common computational budgets.
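The scaling can be quantified directly from the sample size formula above; the following sketch evaluates it at the boundary values just quoted (c = 10%):

    % Sample size S = (c/100)*N^d near the practical limits (c = 10%).
    c = 10; d = 10:5:20; N = [2 3 5];
    S = ceil(c/100 * (N') .^ d)
    % N = 2:     103        3277       104858
    % N = 3:    5905     1434891    348678441
    % N = 5:  976563  3051757813   9.5367e+12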
Table 16 illustrates how sample size scales with dimensionality and partitioning, alongside the corresponding number of local optimizer calls, for the Rastrigin, Dixon–Price, and Rosenbrock benchmark functions. The results highlight both the efficiency of the framework and the computational limits imposed by the number of interval evaluations required during the preprocessing stage. Beyond this threshold, the volume of interval calculations becomes prohibitively expensive.
To extend the framework’s applicability in higher dimensions, alternative strategies must be explored, such as dimensionality reduction, problem decomposition, or hierarchical sampling. Importantly, the limitation lies not in the prioritization mechanism itself but in the cost of obtaining reliable interval bounds for each partition. Once these bounds are available, the framework exhibits extremely efficient behavior, especially in high dimensions: the selection of regions based on their lower bounds guides the optimizer toward the most promising areas with minimal waste. Therefore, future improvements in approximating or accelerating this preprocessing step—without compromising the integrity of the bounds—could significantly enhance the scalability and practicality of the method in high-dimensional settings.

4. Discussion

The core idea behind the proposed framework is to move beyond real arithmetic by treating regions of the search space as entities to be analyzed and prioritized. Instead of evaluating the objective function at individual points, the method computes interval enclosures—mathematically guaranteed bounds—over subregions. Each such region is associated with a lower and upper bound, enclosing the possible function values within it. This enables two key capabilities: (a) discarding regions that do not contain the global minimum with certainty, and (b) assessing the remaining regions based on their lower bounds to guide the local optimizer toward the most promising areas. This rigorous approach not only enhances the efficiency of the search but also provides a mechanism for verifying whether a candidate solution is globally optimal within a specified tolerance.
Numerical experiments performed on standard benchmark functions confirm the method’s effectiveness, especially in higher-dimensional settings. The combination of Latin Hypercube Sampling and Interval Arithmetic enables a more informed and structured sampling of the search space, resulting in improvements across all considered performance metrics.
The presented numerical results demonstrate an increased success rate for the proposed method, particularly as the dimensionality of the objective function increases. Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9, Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14 and Table 15, Table A1, Table A2, Table A3, Table A4 and Table A5 (in Appendix A) show that, in most cases, the proposed multistart framework significantly outperforms the conventional multistart method in terms of both efficiency and effectiveness. Moreover, the proposed framework achieves closer proximity to the global minimum, as reflected in the reduced standard deviations of proximity and the number of function evaluations.
The utilization of acceleration devices discards regions that do not contain the global minimum with certainty at the beginning of the optimization process. Additionally, the satisfaction of the strong termination criterion suggests that the algorithm is capable of identifying the global minimum with arbitrary precision, an important advantage in problems where verification of solutions is costly or impractical.

5. Conclusions

This work presented a selective multistart global optimization framework based on interval arithmetic and stratified sampling. Each sampling point is enclosed in an interval box and assessed based on the lower bound of the function’s interval enclosure. The local optimization process is initiated from the most promising sample points, while both strong and weak termination criteria are applied to terminate the search process. Numerical experiments on well-known test functions showed that the method is effective in finding both global and local minima using fewer function evaluations. This is supported by the reported distance to the global minimum, success rates, and the number of global minima found per optimizer call.
The main contribution of the proposed framework lies in its ability to quantify and manage space coverage—a feature that is usually assumed but not controlled in standard multistart methods. Additionally, the use of interval arithmetic enables the method to discard with certainty regions that cannot contain the global minimum, leveraging techniques from interval optimization to accelerate the search.
However, the current framework is limited to unconstrained, continuous optimization problems. Extending the method to constrained or combinatorial problems would require significant modifications, such as constraint handling and the adaptation of interval-based methodologies. While the interval preprocessing cost remains low, a more detailed analysis of runtime and scalability in high-dimensional settings is warranted and left for future work.
The coverage issue associated with LHS in high-dimensional spaces remains a fundamental challenge, as the coverage rate decreases exponentially with increasing dimensionality. Although the proposed framework mitigates this limitation to some extent by adjusting the sample size and the corresponding interval enclosures, ensuring sufficient exploration of the search space remains a key issue that requires further investigation.
Future research should also include combining this framework with other global optimization strategies (e.g., Differential Evolution, Bayesian Optimization), potentially implementing adaptive box sizing and refined sampling techniques, as well as applying this method to real-world problems—such as hyperparameter tuning or engineering design, which often exhibit additional complexities such as noise, discontinuities, or high computational costs. Finally, addressing the coverage limitations in high dimensions, and scaling the algorithm—potentially through parallelization, advanced prioritization, and tighter bounding techniques—will also be critical for advancing the method’s practical applicability and robustness.

Author Contributions

Conceptualization, I.A.N. and V.P.G.; methodology, I.A.N. and V.P.G.; software, V.P.G. and I.A.N.; validation, V.P.G., I.A.N. and V.C.L.; formal analysis, I.A.N. and V.P.G.; investigation, V.P.G. and I.A.N.; data curation, V.P.G.; writing—original draft preparation, V.P.G. and I.A.N.; writing—review and editing, V.P.G., I.A.N. and V.C.L.; visualization, I.A.N. and V.P.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

In the following tables, seven (7) ratios are presented. The columns represent the partition size (2, 3, 4, 5, 6, 7, 8, 9, 10, 15), grouped together for each dimension (1, 2, 3, 4, 5). Our proposed framework is referred to as "IVL", while the conventional one as "MS". All ratios are calculated in such a way that a value >1 indicates that our framework yields better results than the conventional one, while a value <1 indicates worse results.
The calculated ratios, in the presented order, are the following:
  • [ Globals Detected / Calls ( IVL ) ] / [ Globals Detected / Calls ( MS ) ]
  • [ Locals Detected / Calls ( IVL ) ] / [ Locals Detected / Calls ( MS ) ]
  • Average Distance ( MS ) / Average Distance ( IVL )
  • St . Dev . of Distance ( MS ) / St . Dev . of Distance ( IVL )
  • Calls ( MS ) / Calls ( IVL )
  • Function Evaluations ( MS ) / Function Evaluations ( IVL )
  • Strong Criterion / Calls ( IVL )
The first ratio represents efficiency regarding global minima detection. The second ratio represents efficiency regarding local minima detection. The third ratio represents the proximity of each method to the global minimum, since each "distance" is the distance of each detected minimum from the known global minimum. The fourth ratio represents reliability regarding global minimum proximity. The fifth ratio represents efficiency regarding function calls (a measure of computational cost). The sixth ratio represents efficiency regarding function evaluations (another measure of computational cost). Lastly, the seventh ratio is not a comparison between methods; it represents the frequency of strongly detected minima (first criterion) of our proposed framework.
Table A1. Ratio metrics of the Ackley function.
#      2      3      4      5      6      7      8      9      10      15
111-111111-
1111111111
0.350.320.350.340.350.390.380.350.320.53
0.770.81.20.870.730.840.751.10.763.8 × 102
1111111111.1
0.940.810.740.70.760.730.760.550.690.62
10.400.130.380.430.560.170.450
211-2.11.83.94.673.710
1111111111
0.350.350.541.51.71.52.31.81.84.5
0.681.3771.42.21.111.40.950.86
1112.13.63.95.476.718
0.970.820.871.52.72.33.54.44.88.9
10.1500.00710.130.270.480.10.310.59
3---3.906.27572931
111110.941111
0.350.631.2231.41732.73.544
1.3 × 1015271.33.11.75.60.4710.321.8
111.91018351857402.9 × 102
0.90.931.84.826151942311.3 × 102
0000.7400.530.220.0390.260.79
4-1.1-5.1-6.9416.6 × 102403.3 × 102
1111111111
0.540.992.18.91.59.7103.55.77.1 × 106
1.3 × 10151.40.584.1534.70.5110.237.2 × 105
11.15.163762.4 × 1021.4 × 1026.6 × 1022.4 × 1025.1 × 103
0.961.14.7261.6 × 1021.4 × 1021.3 × 1025.1 × 1022.6 × 1022.2 × 103
00.009600.500.120.310.10.181
5-1.3-1.6092985.7 × 1023.1 × 1031.3 × 103
1111111111
0.691.22.86.72.211224.16 × 1022.8 × 106
8.3 × 10140.360.646.11.53.10.470.883.37.2 × 104
11.3233.1 × 1022.1 × 1021.7 × 1036.6 × 1025.9 × 1031 × 1047.6 × 104
0.981.3261.3 × 1022.6 × 1021.1 × 1036.4 × 1024.6 × 1033.9 × 1034.2 × 104
00.006800.0600.060.190.10.991
Table A2. Ratio metrics of the Dixon–Price function.
#      2      3      4      5      6      7      8      9      10      15
11111111111
1111111111
0.330.320.380.370.180.370.370.340.410.27
0.830.70.690.780.420.640.750.570.772.9
1111111112
0.60.470.430.350.330.290.270.240.220.38
1111111111
21110.7310.4910.7710.78
1111111111
0.360.383.90.19290.21.6 × 1060.283.3 × 1080.3
0.820.6820.53150.652.7 × 1060.612.2 × 1090.6
112345791022
0.950.921.63.44.37.47.9111125
10.7510.6310.4110.6610.71
310.811.20.41.10.631.20.650.90.79
1111111111
0.30.521.4 × 1060.372.4 × 1070.491.1 × 1090.480.690.55
0.620.948.5 × 1050.968.6 × 1060.851.1 × 1090.830.860.81
1371322325270973.1 × 102
0.9837.5232452531.2 × 1021.2 × 1025.2 × 102
10.410.2910.4710.510.760.63
40.6801.40.31.20.511.10.720.950.91
110.981111111
0.30.491.1 × 1070.592.20.621.50.740.991.4
0.71.1 × 1043.8 × 1061.41.311.10.980.991.1
2926631.2 × 1022.1 × 1023.9 × 1026 × 1029.5 × 1023.8 × 103
2.214301.2 × 1021.5 × 1023.5 × 1025.2 × 1021.1 × 1031.3 × 1037.8 × 103
0.52010.160.810.290.730.420.620.56
5100.880.0291.40.180.890.950.331.1
1.11.11.11.11.11.11.11.11.11.1
1.20.631.10.571.80.641.10.980.671.7
1.19.1 × 1030.994.91.11.80.990.991.11.1
2.725743.1 × 1026.8 × 1021.4 × 1032.5 × 1035.2 × 1036 × 1035 × 104
2.847976.8 × 1029.4 × 1022.3 × 1034.1 × 1031 × 1041.1 × 1041.2 × 105
0.5100.390.010.640.0660.40.370.150.45
Table A3. Ratio metrics of the Levy function.
#      2      3      4      5      6      7      8      9      10      15
1-11111111-
1111111111
0.340.330.330.350.330.380.360.350.340.75
1.10.880.870.810.720.790.730.870.7714
1111111112
0.750.660.630.570.540.470.490.40.360.57
00.320.180.250.370.360.480.160.310
2-1--2.13.44.85.35.65.2
1111111111
0.350.360.640.971.52.65.9127.61 × 102
1.913.13.32.63.13.76.24.615
1111.42.13.44.87.6620
0.870.840.891.21.72.73.454.312
00.15000.0210.130.370.590.310.83
3--3.83.812-1916389.3
1111111111
0.340.855.27.6156.334402955
1.94.13.74.47.45.411121025
113.8615123151652.7 × 102
0.920.973.45.412102544582 × 102
000.150.0550.2600.270.30.270.3
4--8.5067067432.7 × 10230
1111111111
0.6521712607.592894694
3.72.66.46205.821352037
11.1171698412.3 × 1023.7 × 1026.1 × 1022.6 × 103
0.971.1181579382.2 × 1024.5 × 1026.3 × 1022.7 × 103
000.3700.3700.30.210.160.23
5--24-2.9 × 10201.1 × 102352.3 × 10384
1111111111
0.923.351141.1 × 102111.2 × 10294971.7 × 102
4.92.6137.5469.139563074
11.478396.5 × 1021.6 × 1021.9 × 1032.2 × 1035.1 × 1033.2 × 104
0.991.41.1 × 102386.2 × 1021.6 × 1022.2 × 1033.6 × 1036.6 × 1034.4 × 104
000.600.2900.250.0680.190.19
Table A4. Ratio metrics of the Rastrigin function.
#      2      3      4      5      6      7      8      9      10      15
1-1111111--
1111111111
0.350.360.350.350.360.330.360.360.380.6
8 × 101410.81.10.770.670.750.680.841.3
1111111111.1
0.830.790.660.660.610.580.610.530.490.52
00.240.430.230.30.650.310.3500
2-1--01.307.3-5.8
1111111111
0.350.330.572.20.771.72.9493.518
2.4 × 101512.71.61.91.85.7153.717
1111.81.43.54.98.73.417
0.880.870.821.41.42.84.37.12.89.4
00.120000.4400.8400.25
3--1.61305.6027-11
1111111111
0.350.72122.41.2 × 1024.73.7 × 1026.128
1.8 × 10152.51.6171.19.88.2623.720
11.21.7133321172792
0.911.11.68.43.12314687.377
000.10.1500.900.9300.098
4-3.95.158011082-15
1111111111
0.541.7162045.1 × 10126.66.6 × 10149.635
8.3 × 10142.42.6201.65.8 × 10119.11.2 × 10155.525
13.98.76392.4 × 102246.6 × 102295.1 × 102
0.963.77.944111.8 × 102316.7 × 102314.9 × 102
00.00430.30.12010100.024
5-11133.1 × 10201702.4 × 102-48
1111111111
0.6931.1 × 102265.55 × 10136.65.6 × 10141153
-1.85.6282.16.8 × 1012251.2 × 10158.627
111433.1 × 102301.7 × 103465.9 × 103923.4 × 103
0.9911392.2 × 102381.2 × 103606.5 × 1031 × 1023.4 × 103
00.0440.410.07010100.016
Table A5. Ratio metrics of the Rosenbrock function.
#      2      3      4      5      6      7      8      9      10      15
11111111111
1111111111
0.39-0.34-0.310.320.360.280.320.26
0.74-0.68-0.650.710.780.550.641.7
1111111112
0.830.70.730.60.580.50.560.460.470.56
0.5910.4810.290.210.10.260.391
2111.90.612.63.25.60.693.94.7
1111111111
0.370.360.661.42.64.33.25.38.41.3 × 102
0.720.690.952.27.15.96.8169.239
111.92.82.63.25.66.87.423
0.90.821.62.52.12.354.95.718
0.280.650.410.130.0830.130.040.0450.180.93
3120.7426.28.3180.361911
1111111111
0.321.72.91111127.711211.2 × 1015
0.7513.4115.5111820161.8 × 1015
124.2137.9111817373.4 × 102
0.932.24.5127.69.41917413.5 × 102
0.120.510.060.230.230.0550.0310.00460.0941
41.43.80.641.918195704111
1111111111
0.69183.91342188.916382.2 × 1015
1.42.67.89.19.2192035348.1 × 1015
1.47.49.343363757762.1 × 1025.1 × 103
1.49.911433933661 × 1022.5 × 1025.5 × 103
0.0750.790.0320.140.250.0180.01300.0271
51.45.85.12.483792.2 × 10208.5 × 10212
1111111111
2.3544.2201.2 × 102241218833 × 1015
2.14.69.3331425191.6 × 102631.6 × 1016
1.420183 × 1022.7 × 1021.6 × 1022.2 × 1023.7 × 1022.2 × 1037.6 × 104
1.531253.6 × 1022.9 × 1021.5 × 1022.6 × 1025.3 × 1022.8 × 1031 × 105
0.0650.780.0120.130.330.00660.005500.0321

References

  1. Nocedal, J.; Wright, S.J. Numerical Optimization; Springer: New York, NY, USA, 2006. [Google Scholar]
  2. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  3. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef] [PubMed]
  4. Mayer-Schönberger, V.; Cukier, K. Big Data: A Revolution That Will Transform How We Live, Work, and Think; Houghton Mifflin Harcourt: Boston, MA, USA, 2013. [Google Scholar]
  5. Zhang, W.; Gu, X.; Tang, L.; Yin, T.; Liu, D.; Zhang, Y. Application of machine learning, deep learning and optimization algorithms in geoengineering and geoscience: Comprehensive review and future challenge. Gondwana Res. 2022, 109, 1–17. [Google Scholar] [CrossRef]
  6. Sun, S.; Cao, Z.; Zhu, H.; Zhao, J. A Survey of Optimization Methods From a Machine Learning Perspective. IEEE Trans. Cybern. 2019, 50, 3668–3681. [Google Scholar] [CrossRef] [PubMed]
  7. Soydaner, D. A Comparison of Optimization Algorithms for Deep Learning. Int. J. Pattern Recognit. Artif. Intell. 2020, 34, 2052013. [Google Scholar] [CrossRef]
  8. L’Heureux, A.; Grolinger, K.; Elyamany, H.F.; Capretz, M.A.M. Machine Learning with Big Data: Challenges and Approaches. IEEE Access 2017, 5, 7776–7797. [Google Scholar] [CrossRef]
  9. Engelbrecht, A.P. Computational Intelligence: An Introduction; Wiley: Hoboken, NJ, USA, 2007. [Google Scholar]
  10. Dennis, J.E.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; SIAM: Philadelphia, PA, USA, 1996. [Google Scholar]
  11. Broyden, C.G.; Fletcher, R.; Goldfarb, D.; Shanno, D.F. A class of quasi-Newton methods. Math. Comput. 1970, 24, 125–136. [Google Scholar] [CrossRef]
  12. Nelder, J.A.; Mead, R. A simplex method for function minimization. Comput. J. 1965, 7, 308–313. [Google Scholar] [CrossRef]
  13. Hooke, R.; Jeeves, T. Direct search solutions of numerical and statistical problems. J. Assoc. Comput. Mach. 1961, 8, 212–229. [Google Scholar] [CrossRef]
  14. Dixon, L.C.W.; Szego, G.P. Towards Global Optimization; North-Holland: Amsterdam, The Netherlands, 1975. [Google Scholar]
  15. Dixon, L.C.W.; Szego, G.P. Towards Global Optimization II; North-Holland: Amsterdam, The Netherlands, 1978. [Google Scholar]
  16. Ali, M.M.; Storey, C. Topographical Multilevel Single Linkage. J. Glob. Optim. 1994, 5, 349–358. [Google Scholar] [CrossRef]
  17. Wolfe, M.A. Interval methods for global optimization. Appl. Math. Comput. 1996, 75, 179–206. [Google Scholar]
  18. Allahdadi, M.; Nehi, H.M.; Ashayerinasab, H.A.; Javanmard, M. Improving the modified interval linear programming method by new techniques. Inf. Sci. 2016, 339, 224–236. [Google Scholar] [CrossRef]
  19. Araya, I.; Reyes, V. Interval Branch-and-Bound algorithms for optimization and constraint satisfaction: A survey and prospects. J. Glob. Optim. 2016, 65, 837–866. [Google Scholar] [CrossRef]
  20. Tsoulos, I.G.; Tzallas, A.; Tsalikakis, D. Use RBF as a Sampling Method in Multistart Global Optimization Method. Signals 2022, 3, 857–874. [Google Scholar] [CrossRef]
  21. Rinnooy Kan, A.H.G.; Timmer, G.T. Stochastic Global Optimization Methods; Part I: Clustering Methods. Math. Program. 1987, 39, 27–56. [Google Scholar] [CrossRef]
  22. Rinnooy Kan, A.H.G.; Timmer, G.T. Stochastic Global Optimization Methods; Part II: Multi Level Methods. Math. Program. 1987, 39, 57–78. [Google Scholar] [CrossRef]
  23. Torn, A.A. A search clustering approach to global optimization. In Towards Global Optimization; Dixon, L.C.W., Szego, G.P., Eds.; North-Holland: Amsterdam, The Netherlands, 1978; pp. 49–62. [Google Scholar]
  24. Elwakeil, O.A.; Arora, J.S. Two algorithms for global optimization of general NLP problems. Int. J. Numer. Methods Eng. 1996, 39, 3305–3325. [Google Scholar] [CrossRef]
  25. Sepulveda, A.E.; Epstein, L. The repulsion algorithm, a new multistart method for global optimization. Struct. Optim. 1996, 11, 145–152. [Google Scholar] [CrossRef]
  26. Tsoulos, I.G.; Lagaris, I.E. MinFinder Locating all the local minima of a function. Comput. Phys. Commun. 2006, 174, 166–179. [Google Scholar] [CrossRef]
  27. Day, R.F.; Yin, P.Y.; Wang, Y.C.; Chao, C.H. A new hybrid multi-start tabu search for finding hidden purchase decision strategies in WWW based on eye-movements. Appl. Soft Comput. 2016, 48, 217–229. [Google Scholar] [CrossRef]
  28. Larson, J.; Wild, S.M. Asynchronously parallel optimization solver for finding multiple minima. Math. Program. Comput. 2018, 10, 303–332. [Google Scholar] [CrossRef]
  29. Jaiswal, P.; Larson, J. Multistart algorithm for identifying all optima of nonconvex stochastic functions. Optim. Lett. 2024, 18, 1335–1360. [Google Scholar] [CrossRef]
  30. Devroye, L. Non-Uniform Random Variate Generation; Springer: New York, NY, USA, 1986. [Google Scholar]
  31. Niederreiter, H. Random Number Generation and Quasi-Monte Carlo Methods; SIAM: Philadelphia, PA, USA, 1992. [Google Scholar]
  32. Mockus, J. Bayesian Approach to Global Optimization; Springer: Dordrecht, The Netherlands, 1989. [Google Scholar]
  33. McKay, M.D.; Beckman, R.J.; Conover, W.J. A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code. Technometrics 1979, 21, 239–245. [Google Scholar]
  34. Wang, Z.; Zhao, D.; Heidari, A.A.; Chen, Y.; Chen, H.; Liang, G. Improved Latin hypercube sampling initialization-based whale optimization algorithm for COVID-19 X-ray multi-threshold image segmentation. Sci. Rep. 2024, 14, 13239. [Google Scholar] [CrossRef]
  35. Borisut, P.; Nuchitprasittichai, A. Adaptive Latin Hypercube Sampling for a Surrogate-Based Optimization with Artificial Neural Network. Processes 2023, 11, 3232. [Google Scholar] [CrossRef]
  36. Iordanis, I.; Koukouvinos, C.; Silou, I. Classification accuracy improvement using conditioned Latin Hypercube Sampling in Supervised Machine Learning. In Proceedings of the 2022 12th International Conference on Dependable Systems, Services and Technologies (DESSERT), Athens, Greece, 9–11 December 2022; pp. 1–5. [Google Scholar]
  37. Rump, S.M. INTLAB—INTerval LABoratory. In Developments in Reliable Computing; Csendes, T., Ed.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999; pp. 77–104. [Google Scholar]
  38. Rump, S.M. Fast and Parallel Interval Arithmetic. BIT 1999, 39, 534–554. [Google Scholar] [CrossRef]
  39. Song, C.; Kawai, R. Monte Carlo and variance reduction methods for structural reliability analysis: A comprehensive review. Probabilistic Eng. Mech. 2023, 73, 103479. [Google Scholar] [CrossRef]
  40. Moore, R.E. Interval Analysis; Prentice-Hall: Englewood Cliffs, NJ, USA, 1966. [Google Scholar]
  41. Kulisch, U.; Hammer, R.; Hocks, M.; Ratz, D. C++ Toolbox for Verified Computing I; Springer: Berlin/Heidelberg, Germany, 1995. [Google Scholar]
  42. Nikas, I.A.; Grapsa, T.N. An escape-from-local minima technique in unconstrained optimization using a grid-like approach and interval equations. In Proceedings of the Numan2012 “5th Conference in Numerical Analysis”, Ioannina, Greece, 5–8 September 2012. [Google Scholar]
  43. Ratschek, H.; Rokne, J. Computer Methods for the Range of Functions; Ellis Horwood: Hemel Hempstead, UK, 1984. [Google Scholar]
  44. Ratz, D. Inclusion Isotone Extended Interval Arithmetic. In Institut für Angewandte Mathematik; Universität Karlsruhe: Karlsruhe, Germany, 1996. [Google Scholar]
  45. Moore, R.E.; Kearfott, R.B.; Cloud, M.J. Introduction to Interval Analysis; SIAM: Philadelphia, PA, USA, 2009. [Google Scholar]
  46. Hansen, E.R.; Walster, G.W. Global Optimization Using Interval Analysis, 2nd ed.; Marcel Dekker: New York, NY, USA, 2004. [Google Scholar]
  47. Neumaier, A. Interval Methods for Systems of Equations; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
  48. Alefeld, G.; Herzberger, J. Introduction to Interval Computation; Academic Press: New York, NY, USA, 1983. [Google Scholar]
  49. Shahriari, B.; Swersky, K.; Wang, Z.; Adams, R.P.; de Freitas, N. Taking the Human Out of the Loop: A Review of Bayesian Optimization. Proc. IEEE 2016, 104, 148–175. [Google Scholar] [CrossRef]
  50. Bellman, R. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1957. [Google Scholar]
Figure 1. Latin Hypercube Sampling: Interval boxes and their corresponding midpoints. In (a) two- and (b) three-dimensional space.
Figure 2. One−dimensional Rastrigin function: Range overestimation on six-box partition. The two middle boxes are identified as the most promising for deeper local search.
Figure 3. One−dimensional Rastrigin function: Application of the cut-off test results in the exclusion of 2 out of 6 interval boxes.
Figure 4. Flowchart of the proposed selective multistart optimization framework. The method combines Latin Hypercube Sampling, interval analysis, and local search with termination and pruning criteria.
Figure 5. Ackley Function: (a) Success Rate, (b) Correlation: Success Rate−Partition, (c) Correlation: Function Evaluations−Partition, (d) Correlation: Calls/Locals−Partition, (e) Function Evaluations/Dimension, and (f) Improvement.
Figure 6. Dixon–Price Function: (a) Success Rate, (b) Correlation: Success Rate−Partition, (c) Correlation: Function Evaluations−Partition, (d) Correlation: Calls/Locals−Partition, (e) Function Evaluations/Dimension, and (f) Improvement.
Figure 7. Levy Function: (a) Success Rate, (b) Correlation: Success Rate−Partition, (c) Correlation: Function Evaluations−Partition, (d) Correlation: Calls/Locals−Partition, (e) Function Evaluations/Dimension, and (f) Improvement.
Figure 8. Rastrigin Function: (a) Success Rate, (b) Correlation: Success Rate−Partition, (c) Correlation: Function Evaluations−Partition, (d) Correlation: Calls/Locals−Partition, (e) Function Evaluations/Dimension, and (f) Improvement.
Figure 8. Rastrigin Function: (a) Success Rate, (b) Correlation: Success Rate−Partition, (c) Correlation: Function Evaluations−Partition, (d) Correlation: Calls/Locals−Partition, (e) Function Evaluations/Dimension, and (f) Improvement.
Mathematics 13 01733 g008
Figure 9. Rosenbrock Function: (a) Success Rate, (b) Correlation: Success Rate−Partition, (c) Correlation: Function Evaluations−Partition, (d) Correlation: Calls/Locals−Partition, (e) Function Evaluations/Dimension, and (f) Improvement.
Figure 9. Rosenbrock Function: (a) Success Rate, (b) Correlation: Success Rate−Partition, (c) Correlation: Function Evaluations−Partition, (d) Correlation: Calls/Locals−Partition, (e) Function Evaluations/Dimension, and (f) Improvement.
Mathematics 13 01733 g009
Table 1. Performance metrics of the Ackley function: success rate (SR), coefficient of variation (CV), and efficiency (Eff) with respect to dimension and partition size. Entries marked with an asterisk (*) correspond to cases of division by zero.
Columns: dimension n = 2, 3, 4, 5, 6, 7, 8, 9, 10, 15; each dimension column reports SR | CV | Eff. Rows 1–5 index the partition size.
10.0%0.302800.0%0.25690*−0.1525*0.0%0.150900.0%0.367100.0%0.195500.0%0.336400.0%−0.067600.0%0.32150*−0.9974*
20.0%0.471600.0%−0.21400*−0.9871*114.3%−0.27193.591884.7%−0.54695.5975293.7%−0.092514.5000364.7%−0.029224.0223603.1%−0.294948.4385272.0%0.055423.9636944.7%0.1587186.7186
3*−1**−0.9630**−0.2063*287.4%−0.679839.6099−100.0%−0.4055−1523.1%−0.8216217.0871604.9%1.1320128.52895603.1%−0.03133251.56352831.7%2.09921185.93623013.2%−0.43698992.6054
4*−1*7.7%−0.26220.1590*0.7140*413.0%−0.7551322.2085*−0.9812*585.3%−0.78771650.59244012.6%0.96915957.210265,600.0%−0.0374431,6483948.1%3.25769568.878432,670.2%−0.99991,659,155.57
5*−1*30.4%1.77620.7007*0.5619*61.5%−0.8351504.4291−100.0%−0.3149−19069.1%−0.6731154,131.41829733.9%1.137664,709.000856,678.8%0.14083,352,789.865310,244.8%−0.697031,034,481.76128,739.5%−0.999997,838,136.84
Table 2. Preprocessing times (IVL) for the Ackley function.
# | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 15
1 | 0.0010 | 0.0010 | 0.0010 | 0.0010 | 0.0009 | 0.0011 | 0.0013 | 0.0010 | 0.0011 | 0.0019
2 | 0.0013 | 0.0013 | 0.0025 | 0.0036 | 0.0047 | 0.0097 | 0.0081 | 0.0103 | 0.0114 | 0.0255
3 | 0.0017 | 0.0046 | 0.0105 | 0.0190 | 0.0309 | 0.0479 | 0.0698 | 0.0977 | 0.1335 | 0.4370
4 | 0.0036 | 0.0151 | 0.0418 | 0.0987 | 0.2027 | 0.3764 | 0.6386 | 1.0331 | 1.5492 | 7.8323
5 | 0.0083 | 0.0476 | 0.1914 | 0.5750 | 1.4284 | 3.0881 | 6.0089 | 10.8971 | 18.2944 | 139.6668
Table 3. Processing times (left: IVL, right: MS) for the Ackley function.
Columns: dimension n = 2, 3, 4, 5, 6, 7, 8, 9, 10, 15; for each dimension, the left entry is the IVL time and the right entry is the MS time.
10.00280.00230.00140.00120.00140.00140.00150.00140.00170.00320.00150.00130.00160.00120.00160.00070.00210.00160.00420.0021
20.00220.00280.00130.00130.00220.00200.00150.00250.00450.00320.00330.00780.00130.00550.00140.00640.00220.00760.00120.0174
30.00130.00100.00290.00180.00250.00540.00150.01020.00150.02020.00150.02330.00170.04140.00130.05140.00750.07550.00150.2476
40.00220.00140.00660.00470.01830.01900.00140.04030.00190.09970.00130.15240.01110.30880.00140.48290.00180.76260.00183.8704
50.00370.00230.01550.01310.00120.07880.00160.19370.00410.59430.00161.08630.00222.55840.00174.47870.00207.96430.002060.5127
Table 4. Performance metrics of the Dixon–Price function: success rate (SR), coefficient of variation (CV), and efficiency (Eff) with respect to dimension and partition size.
Columns: dimension n = 2, 3, 4, 5, 6, 7, 8, 9, 10, 15; each dimension column reports SR | CV | Eff. Rows 1–5 index the partition size.
10.0%0.210100.0%0.419900.0%0.439900.0%0.279700.0%1.390800.0%0.568200.0%0.331600.0%0.761800.0%0.303900.0%−0.65751
20.0%0.222900.0%0.475100.0%−0.50371−27.0%0.88581.18920.0%−0.93433−51.4%0.52721.42894.8%−0.99996.3353−23.2%0.63315.91593.8%−19.3842−22.0%0.653816.2592
30.0%0.61180−19.5%0.06001.416116.9%−0.99997.1803−59.9%0.03884.219314.2%−0.999924.1298−36.6%0.180619.170318.5%−0.999960.6085−34.7%0.201344.8684−10.3%0.156986.0603−20.8%0.2352244.6160
4−31.6%0.42450.3684−100.0%−0.9999−139.4%−0.999935.2547−70.2%−0.263317.743819.2%−0.2470145.2101−49.0%−0.0306103.991511.8%−0.1052435.6865−28.3%0.0238431.3220−5.2%0.0082901.5030−8.8%−0.07773470.5174
51.3%−0.06251.7022−100.0%−0.9998−1−12.1%0.010864.1526−97.1%−0.79737.955944.5%−0.1005984.8924−82.2%−0.4400243.7062−10.9%0.01422261.8978−4.9%0.00794969.5842−66.5%−0.12141992.704410.9%−0.059855,054.0632
Table 5. Preprocessing times (IVL) for the Dixon–Price function.
# | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 15
1 | 0.0042 | 0.0060 | 0.0017 | 0.0007 | 0.0012 | 0.0013 | 0.0006 | 0.0004 | 0.0003 | 0.0027
2 | 0.0023 | 0.0009 | 0.0029 | 0.0020 | 0.0027 | 0.0032 | 0.0045 | 0.0058 | 0.0064 | 0.0145
3 | 0.0011 | 0.0031 | 0.0071 | 0.0129 | 0.0213 | 0.0333 | 0.0498 | 0.0698 | 0.0944 | 0.3144
4 | 0.0029 | 0.0122 | 0.0334 | 0.0778 | 0.1593 | 0.2972 | 0.5046 | 0.7826 | 1.1926 | 5.9960
5 | 0.0069 | 0.0393 | 0.1585 | 0.4783 | 1.1812 | 2.5473 | 4.9776 | 8.9981 | 15.2041 | 115.7074
Table 6. Processing times (left: IVL, right: MS) for the Dixon–Price function.
Columns: dimension n = 2, 3, 4, 5, 6, 7, 8, 9, 10, 15; for each dimension, the left entry is the IVL time and the right entry is the MS time.
10.42850.02040.02270.00530.02830.00230.00100.00080.00220.00180.00090.00070.00090.00050.00150.00060.00080.00050.00210.0015
20.16370.00830.00650.00240.00460.00500.00510.00360.00120.00370.00160.00510.00120.00670.00140.00810.00120.00920.00330.0198
30.00250.00210.00200.00380.00130.00780.00140.01470.00200.02410.00150.04000.00220.05670.00130.07810.00140.10730.00120.3617
40.00360.00280.00160.01220.00180.03420.00170.08210.00170.17930.00160.32450.00120.54900.00160.84220.00161.31490.00126.5467
50.00780.00630.00090.03360.00150.15900.00150.47430.00201.18720.00102.53050.00215.11530.00179.00070.001715.56210.0049116.4989
Table 7. Performance metrics of the Levy function: success rate (SR), coefficient of variation (CV), and efficiency (Eff) with respect to dimension and partition size. Entries marked with an asterisk (*) correspond to cases of division by zero.
Columns: dimension n = 2, 3, 4, 5, 6, 7, 8, 9, 10, 15; each dimension column reports SR | CV | Eff. Rows 1–5 index the partition size.
1*−0.0676*0.0%0.132400.0%0.155600.0%0.239900.0%0.395300.0%0.272100.0%0.370100.0%0.143700.0%0.29590*−0.9276*
2*−0.4865*0.0%0.00430*−0.6722**−0.6993*105.1%−0.61653.208244.8%−0.682110.891376.2%−0.731421.676433.9%−0.839639.721456.0%−0.781332.295418.9%−0.9343102.784
3*−0.4845**−0.7589*276.3%−0.730113.163278.4%−0.771921.6671111.1%−0.8647185.328*−0.8141*1834.5%−0.9076597.7811467.9%−0.9153799.4133728.2%−0.90462501.090833.0%−0.95942482.020
4*−0.7280**−0.6122*753.7%−0.8434142.199−100.0%−0.8335−16552.0%−0.95066501.001−100.0%−0.8272−16645.5%−0.952215,624.2034194.6%−0.971815,850.31726,725.6%−0.9506162,578.5962876.3%−0.972678,482.737
5*−0.7962**−0.6179*2289.3%−0.92471863.378*−0.8673*28,502.9%−0.9782186,999.741−100.0%−0.8902−111,134.7%−0.9745212,808.5563411.3%−0.982277,947.323229,160.9%−0.966411,756,970.038257.6%−0.98652,677,901.014
Table 8. Preprocessing times (IVL) for the Levy function.
# | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 15
1 | 0.0081 | 0.0033 | 0.0243 | 0.0026 | 0.0025 | 0.0046 | 0.0038 | 0.0022 | 0.0039 | 0.0052
2 | 0.0024 | 0.0024 | 0.0082 | 0.0079 | 0.0111 | 0.0168 | 0.0178 | 0.0218 | 0.0228 | 0.0438
3 | 0.0026 | 0.0082 | 0.0184 | 0.0321 | 0.0523 | 0.0826 | 0.1231 | 0.1690 | 0.2375 | 0.7531
4 | 0.0063 | 0.0260 | 0.0766 | 0.1930 | 0.3756 | 0.6799 | 1.1643 | 1.8409 | 2.8630 | 14.1993
5 | 0.0143 | 0.0839 | 0.3540 | 1.1133 | 2.6624 | 5.7357 | 11.3356 | 20.1256 | 34.8130 | 259.5128
Table 9. Processing times (left: IVL, right: MS) for the Levy function.
Columns: dimension n = 2, 3, 4, 5, 6, 7, 8, 9, 10, 15; for each dimension, the left entry is the IVL time and the right entry is the MS time.
10.03830.00160.00120.00110.00320.00120.00110.00100.00130.00190.00140.00100.00170.00090.00130.00090.00320.00080.00300.0019
20.02440.00160.00390.00120.00650.00300.00570.00290.00550.00340.00130.00430.00260.00630.00110.00700.00160.00760.00140.0182
30.00160.00090.00420.00300.00160.00700.00600.01210.00310.01870.00190.03060.00130.06080.00240.06240.00590.07750.00120.3176
40.00260.00170.00890.00860.00150.02990.00480.06300.00160.12030.00550.25210.00400.41560.00250.64200.00460.86960.00165.4197
50.00430.00320.02230.02810.00300.12960.02480.36370.00180.82790.03421.92640.00173.88850.00286.85420.003110.02630.001598.1058
Table 10. Performance metrics of the Rastrigin function: success rate (SR), coefficient of variation (CV), and efficiency (Eff) with respect to dimension and partition size. Entries marked with an asterisk (*) correspond to cases of division by zero.
Columns: dimension n = 2, 3, 4, 5, 6, 7, 8, 9, 10, 15; each dimension column reports SR | CV | Eff. Rows 1–5 index the partition size.
1*−1*0.0%−0.036300.0%0.248600.0%−0.049700.0%0.301100.0%0.490400.0%0.334300.0%0.46440*0.1872**−0.2102*
2*−1*0.0%−0.03920*−0.6327**−0.3818*−100.0%−0.4764−133.9%−0.43603.6482−100.0%−0.8251−1631.0%−0.931362.8699*−0.7318*477.6%−0.939795.2596
3*−1**−0.6019*59.6%−0.38941.72481200.0%−0.9406168−100.0%−0.1099−1463.5%−0.8976178.2974−100.0%−0.8786−12608.8%−0.98401937.6315*−0.7306*995.5%−0.95091002.4984
4*−1*286.3%−0.586213.9201410.9%−0.614443.57185715.4%−0.94993662.6923−100.0%−0.3598−1980.2%−12602.3617−100.0%−0.8905−18133.1%−154,090.3534*−0.8179*1404.5%−0.96017685.3858
5*−1*996.5%−0.4524119.22931185.2%−0.8228555.202231200.0%−0.964597,968−100.0%−0.5276−11642.0%−129,281.4974−100.0%−0.9605−124,170.4%−11,433,168.955*−0.8837*4738.3%−0.9634165,202.7248
Table 11. Preprocessing times (IVL) for the Rastrigin function.
# | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 15
1 | 0.0018 | 0.0012 | 0.0006 | 0.0006 | 0.0008 | 0.0006 | 0.0006 | 0.0007 | 0.0007 | 0.0015
2 | 0.0010 | 0.0009 | 0.0020 | 0.0026 | 0.0041 | 0.0043 | 0.0059 | 0.0075 | 0.0083 | 0.0193
3 | 0.0012 | 0.0037 | 0.0084 | 0.0154 | 0.0259 | 0.0412 | 0.0586 | 0.0812 | 0.1063 | 0.3752
4 | 0.0031 | 0.0137 | 0.0362 | 0.0862 | 0.1784 | 0.3328 | 0.5546 | 0.8836 | 1.3339 | 7.2134
5 | 0.0075 | 0.0429 | 0.1706 | 0.5211 | 1.2986 | 2.8194 | 5.4560 | 9.8211 | 16.4495 | 133.8370
Table 12. Processing times (left: IVL, right: MS) for the Rastrigin function.
Columns: dimension n = 2, 3, 4, 5, 6, 7, 8, 9, 10, 15; for each dimension, the left entry is the IVL time and the right entry is the MS time.
10.01560.00170.00280.00120.00120.00110.00400.00170.00140.00190.00110.00110.00140.00100.00110.00090.00120.00090.00200.0020
20.01460.00160.00290.00150.00370.00180.00160.00250.00310.00350.00130.00410.00130.00540.00450.00740.00130.00760.00420.0149
30.00130.00100.00140.00290.00600.00450.00150.00850.00330.01610.00150.02770.00230.03500.00120.05350.02380.07300.00130.2074
40.00220.00140.00420.00720.00110.01540.00120.03690.00960.09260.00140.18180.01630.28160.00110.46420.01870.71930.00723.0977
50.00430.00290.00130.01920.00100.05970.00120.18800.01820.56180.00141.26960.10692.26250.00124.28610.11767.45350.006047.5542
Table 13. Performance metrics of the Rosenbrock function: success rate (SR), coefficient of variation (CV), and efficiency (Eff) with respect to dimension and partition size. Entries marked with an asterisk (*) correspond to cases of division by zero.
Columns: dimension n = 2, 3, 4, 5, 6, 7, 8, 9, 10, 15; each dimension column reports SR | CV | Eff. Rows 1–5 index the partition size.
10.0%0.354300.0%*00.0%0.467400.0%*00.0%0.543700.0%0.402100.0%0.283200.0%0.829000.0%0.570100.0%−0.40311
20.0%0.386400.0%0.4482090.5%0.05092.6281−39.0%−0.55090.7252156.4%−0.85995.5746218.5%−0.83089.1424460.0%−0.853530.36−31.2%−0.93663.6567291.1%−0.891127.7584372.2%−0.9743107.6026
30.0%0.34000102.7%−0.04463.1088−25.6%−0.70332.1002100.8%−0.907524.5895517.7%−0.818748.0616734.5%−0.910493.21801711.8%−0.9435327.2788−63.9%−0.95005.05641810.9%−0.9357714.68421007.5%−13742.2503
437.0%−0.29300.8765279.8%−0.619127.2506−36.0%−0.87164.966494.4%−0.890382.30761680.3%−0.8913647.27671834.4%−0.9469716.23395634.3%−0.94963287.1804−100.0%−0.9715−14023.7%−0.97028501.49761031.2%−157,272.6533
543.9%−0.51291.0703476.5%−0.7817114.2941409.5%−0.892891.7256138.0%−0.9698715.43508154.0%−0.926122,374.07057806.9%−0.960412,502.710322,376.0%−0.947350,516.0329−100.0%−0.9938−185,340.9%−0.98421,849,368.58701059.9%−1880,807.0508
Table 14. Preprocessing times (IVL) for the Rosenbrock function.
# | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 15
1 | 0.0013 | 0.0006 | 0.0006 | 0.0006 | 0.0007 | 0.0006 | 0.0007 | 0.0007 | 0.0006 | 0.0016
2 | 0.0012 | 0.0009 | 0.0021 | 0.0027 | 0.0034 | 0.0043 | 0.0063 | 0.0077 | 0.0094 | 0.0186
3 | 0.0012 | 0.0038 | 0.0087 | 0.0161 | 0.0256 | 0.0418 | 0.0622 | 0.0772 | 0.1061 | 0.3539
4 | 0.0031 | 0.0135 | 0.0368 | 0.0870 | 0.1785 | 0.3295 | 0.6086 | 0.8891 | 1.3613 | 6.8458
5 | 0.0076 | 0.0427 | 0.1740 | 0.5273 | 1.3007 | 2.7937 | 6.0006 | 9.8732 | 16.8640 | 127.6426
Table 15. Processing times (left: IVL, right: MS) for the Rosenbrock function.
Columns: dimension n = 2, 3, 4, 5, 6, 7, 8, 9, 10, 15; for each dimension, the left entry is the IVL time and the right entry is the MS time.
10.01840.00190.00120.00110.00120.00130.00100.00120.00150.00180.00110.00090.00110.00070.00110.00070.00110.00070.00150.0020
20.01630.00170.00150.00140.00240.00230.00170.00320.00100.00280.00140.00370.00120.00570.00150.00690.00130.00760.00120.0184
30.00110.00080.00570.00280.00120.00600.00120.01130.00120.01470.00500.02550.00100.03830.00420.05240.00500.06180.00120.2742
40.00160.00190.00080.00640.00300.01910.00110.05230.00100.07790.00690.16620.00900.31360.00740.49340.00210.61690.00224.1540
50.00320.00290.00090.01840.00550.08090.00100.27970.00110.46150.00621.19210.01782.65450.01464.69960.00556.26300.001267.2266
Table 16. Sample size (S) and number of framework function calls performed (for a 10% space coverage) with respect to dimension (d) and partitioning (N = 2, 3, and 4, the partitions used for the Rastrigin, Dixon–Price, and Rosenbrock benchmark functions, respectively).
d | S (N = 2) | Calls (N = 2) | S (N = 3) | Calls (N = 3) | S (N = 4) | Calls (N = 4)
1111111
2111121
3113371
42191263
5422541032
67173114107
71312192163952
8261365736554179
952261969126,215147
1010351590520104,858416
1120510217,7151419,431
1241020553,14531,677,722
13820410159,43336,710,887
141639820478,297226,843,546
15327715391,434,89112107,374,183
16655432774,304,673429,496,730
1713,108655412,914,0171.7 × 109
1826,21513,10738,742,0496.9 × 109
1952,42926,214116,226,1472.7 × 1010
20104,85852,429348,678,4411.1 × 1011
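A note on how the sample sizes in Table 16 scale: each LHS box built from a per-dimension partition of size N covers an N^(-d) fraction of the search volume, so covering a fraction c of the space requires roughly S = ceil(c * N^d) boxes. With c = 0.10 this reading appears consistent with the tabulated S values (for example, S = 103, 5905, and 104,858 at d = 10). The snippet below is a small check of this interpretation; it is our reading of the table, not a formula quoted from the paper.

```python
import math

def sample_size_for_coverage(d, N, coverage=0.10):
    """Boxes of relative volume N**(-d) needed to cover `coverage` of the space
    (assuming disjoint boxes)."""
    return math.ceil(coverage * N ** d)

# d = 10: expected S = 103 (N = 2), 5905 (N = 3), 104858 (N = 4).
print([sample_size_for_coverage(10, N) for N in (2, 3, 4)])
```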
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
