Article

An Algorithm Based on Compute Unified Device Architecture for Estimating Covering Functionals of Convex Bodies

School of Mathematics, North University of China, Taiyuan 030051, China
*
Author to whom correspondence should be addressed.
Axioms 2024, 13(2), 132; https://doi.org/10.3390/axioms13020132
Submission received: 12 January 2024 / Revised: 14 February 2024 / Accepted: 14 February 2024 / Published: 19 February 2024
(This article belongs to the Special Issue Advances in Convex Geometry and Analysis)

Abstract:
In Chuanming Zong’s program to attack Hadwiger’s covering conjecture, which is a longstanding open problem from Convex and Discrete Geometry, it is essential to estimate covering functionals of convex bodies effectively. Recently, He et al. and Yu et al. provided two deterministic global optimization algorithms having high computational complexity for this purpose. Since satisfactory estimations of covering functionals will be sufficient in Zong’s program, we propose a stochastic global optimization algorithm based on CUDA and provide an error estimation for the algorithm. The accuracy of our algorithm is tested by comparing numerical and exact values of covering functionals of convex bodies including the Euclidean unit disc, the three-dimensional Euclidean unit ball, the regular tetrahedron, and the regular octahedron. We also present estimations of covering functionals for the regular dodecahedron and the regular icosahedron.
MSC:
52C17; 52B11; 52B10; 52A15; 52A20

1. Introduction

A compact convex subset K of $\mathbb{R}^n$ having interior points is called a convex body; its boundary and interior are denoted by $\operatorname{bd} K$ and $\operatorname{int} K$, respectively. Let $\mathcal{K}^n$ be the set of convex bodies in $\mathbb{R}^n$. For each $K \in \mathcal{K}^n$, let $c(K)$ be the least number of translates of $\operatorname{int} K$ necessary to cover K. Regarding the least upper bound of $c(K)$ over $\mathcal{K}^n$, there is a long-standing conjecture:
Conjecture 1
(Hadwiger’s covering conjecture). For each $K \in \mathcal{K}^n$, we have
$$c(K) \le 2^n;$$
the equality holds if and only if K is a parallelotope.
Classical results related to this conjecture can be found in [1,2]. Despite extensive research (see, e.g., refs. [3,4,5,6,7]), Conjecture 1 has been completely resolved only for $n = 2$.
For each $p \in \mathbb{Z}^+$, set $[p] = \{i \in \mathbb{Z}^+ \mid 1 \le i \le p\}$.
Let $K \in \mathcal{K}^n$. A set of the form $\lambda K + x$, where $\lambda \in (0,1)$ and $x \in \mathbb{R}^n$, is called a smaller homothetic copy of K. By Theorem 34.3 in [1], $c(K)$ equals the least number of smaller homothetic copies of K required to cover K. Clearly, $c(K) \le p$ for some $p \in \mathbb{Z}^+$ if and only if $\Gamma_p(K) < 1$, where
$$\Gamma_p(K) := \min\Big\{\gamma > 0 \ \Big|\ \exists\, \{x_i\}_{i \in [p]} \subseteq \mathbb{R}^n \ \text{s.t.}\ K \subseteq \bigcup_{i \in [p]} (\gamma K + x_i)\Big\}.$$
For each $p \in \mathbb{Z}^+$, the map
$$\Gamma_p(\cdot)\colon \mathcal{K}^n \to [0,1], \quad K \mapsto \Gamma_p(K)$$
is an affine invariant and is called the covering functional with respect to p. Let $K \in \mathcal{K}^n$ and $p \in \mathbb{Z}^+$. A set C of p points satisfying
$$K \subseteq \Gamma_p(K)\, K + C$$
is called a p-optimal configuration of K.
In [8], Chuanming Zong proposed the first computer-based program to attack Conjecture 1 via estimating covering functionals. Two different algorithms have been designed for this purpose. The first, introduced by Chan He et al. (cf. [9]), is based on the geometric branch-and-bound method (cf. [10]) and is implemented in two parts. The first part uses geometric branch-and-bound to estimate $\Gamma(K, C)$, where
$$\Gamma(K, C) = \min\{\gamma > 0 \mid K \subseteq C + \gamma K\}.$$
The second part also uses geometric branch-and-bound to estimate $\Gamma_p(K)$. When $n \ge 3$, computing $\Gamma(K, C)$ and $\Gamma_p(K)$ in this way exhibits a significantly high computational complexity. The other algorithm, introduced by Man Yu et al. (cf. [11]), is based on a relaxation algorithm. Let $S \subseteq K$ be a discretization of K, $P_K$ be a set containing a p-optimal configuration of K, and $V \subseteq P_K$. They transformed the problem of covering S by smaller homothetic copies of K into a vertex p-center problem and showed that the solution of the corresponding vertex p-center problem is a good approximation of $\Gamma_p(K)$ by proving
$$\Gamma_p(S) - \varepsilon_1 \le \Gamma_p(K) \le \Gamma_p(S) + \varepsilon_2,$$
where
$$\Gamma_p(S) = \min\Big\{\gamma > 0 \ \Big|\ \exists\, \{x_i\}_{i \in [p]} \subseteq \mathbb{R}^n \ \text{s.t.}\ S \subseteq \bigcup_{i \in [p]} (\gamma K + x_i)\Big\},$$
and $\varepsilon_1$ and $\varepsilon_2$ are two positive numbers satisfying
$$K \subseteq S + \varepsilon_1 K \quad\text{and}\quad P_K \subseteq V + \varepsilon_2 K.$$
Clearly, finer discretizations of K are required to obtain more accurate estimates of $\Gamma_p(K)$, which leads to higher computational complexity.
In this paper, we propose an algorithm that utilizes Compute Unified Device Architecture (CUDA) and stochastic global optimization methods to accelerate the estimation of $\Gamma_p(K)$. CUDA is a parallel computing platform particularly well suited to large-scale computational tasks, since it performs many computations in parallel (cf. [12]). When discretizing convex bodies, CUDA provides a natural discretization method and enables parallel computation over all discretized points, thereby accelerating the execution of algorithms. As shown in Section 2, when calculating $\Gamma(S, C)$ for some $C \subseteq \mathbb{R}^n$, we need to obtain the maximum dissimilarity between a point in S and its closest point in C. The reduction technique provided by CUDA, which performs a specific operation on all elements of an array (summation, finding the maximum, finding the minimum, etc., cf. [13]), enables the efficient computation of $\Gamma(S, C)$. For large-scale optimization problems, stochastic algorithms can produce high-quality solutions in a short amount of time.
In Section 2, the problem of estimating $\Gamma_p(K)$ is transformed into a minimization problem, and an error estimation is provided. Using ideas from [9], an algorithm based on CUDA for $\Gamma_p(S)$ is designed in Section 3. Results of computational experiments showing the effectiveness of our algorithm are presented in Section 4.

2. Covering Functional and Error Estimation

As in [9], we put $Q^n = [-1,1]^n$. For each $K \in \mathcal{K}^n$, let
$$\alpha(K) = \max\{\alpha > 0 \mid \exists\, T \in \mathcal{A}^n \ \text{s.t.}\ \alpha Q^n \subseteq T(K) \subseteq Q^n\},$$
where $\mathcal{A}^n$ is the set of all non-singular affine transformations on $\mathbb{R}^n$. We can apply an appropriate affine transformation, if necessary, to ensure that
$$\alpha(K)\, Q^n \subseteq K \subseteq Q^n.$$
Remark 1.
In general, it is not easy to calculate $\alpha(K)$. If K is symmetric about the origin, then $1/\alpha(K)$ is the Banach–Mazur distance between K and $Q^n$. By Proposition 37.6 in [14], when K is the n-dimensional Euclidean unit ball $B_2^n$, $1/\alpha(K) = \sqrt{n}$. For our purpose, it suffices to choose an α, as large as possible, such that $\alpha Q^n \subseteq K \subseteq Q^n$.
Definition 1
(cf. [11]). Let $K \in \mathcal{K}^n$. For $k, c \in \mathbb{R}^n$, the number
$$d(k, c) = \min\{\gamma \ge 0 \mid k \in \gamma K + c\}$$
is called the dissimilarity of k and c.
If $K = B_2^n$, the dissimilarity between any two points is precisely their Euclidean distance. In general, the dissimilarity is not symmetric. If K is an n-dimensional convex polytope, the dissimilarity between any two points can be computed by Lemma 1.
Lemma 1.
Let $K \in \mathcal{K}^n$ be an n-dimensional convex polytope with $o \in \operatorname{int} K$ determined by
$$K = \{x \in \mathbb{R}^n \mid Ax \le B\}, \qquad (2)$$
where A is an m-by-n matrix and B is an m-dimensional column vector whose entries are all 1. For any $u, c \in \mathbb{R}^n$, we have
$$d(u, c) = \max_{1 \le i \le m} p_i, \qquad (3)$$
where $(p_i)_{m \times 1} = A(u - c)$.
Proof. 
Since K is bounded, $\max_{1 \le i \le m} p_i \ge 0$. Let
$$\gamma_0 = \min\{\gamma \ge 0 \mid u \in c + \gamma K\}.$$
For positive γ, we have
$$u \in c + \gamma K \iff \tfrac{1}{\gamma}(u - c) \in K \iff \tfrac{1}{\gamma} A(u - c) \le B \iff A(u - c) \le \gamma B.$$
Since $B = (1, 1, \ldots, 1)^T$, we have $\max_{1 \le i \le m} p_i \le \gamma$, which implies that
$$\max_{1 \le i \le m} p_i \le \gamma_0 = d(u, c). \qquad (4)$$
Since $p_i \le \max_{1 \le j \le m} p_j$ for each $i \in [m]$, we have $A(u - c) \le (\max_{1 \le i \le m} p_i) B$. Thus,
$$u \in c + \Big(\max_{1 \le i \le m} p_i\Big) K,$$
which implies that
$$\max_{1 \le i \le m} p_i \ge \gamma_0 = d(u, c). \qquad (5)$$
The desired equality (3) follows from (4) and (5).    □
Remark 2.
Clearly, each convex polytope K containing the origin o in its interior can be represented as in (2).
Let $i \ge 2$ and $j \in [n]$ be two integers. We denote by $p_j(x)$ the j-th coordinate of a point $x \in \mathbb{R}^n$ and put
$$S_i = \Big\{(\gamma_1, \ldots, \gamma_n) \in \mathbb{R}^n \ \Big|\ \gamma_j \in \Big\{-1 + \tfrac{2k}{i-1} \ \Big|\ k \in [i-1] \cup \{0\}\Big\},\ j \in [n]\Big\}.$$
Lemma 2.
$Q^n \subseteq S_i + \frac{1}{i-1} Q^n$.
Proof. 
Suppose that $x = (\beta_1, \ldots, \beta_n) \in Q^n$. Let $y \in \mathbb{R}^n$ be the point satisfying
$$p_j(y) = \begin{cases} \dfrac{\lfloor \beta_j (i-1) \rfloor}{i-1}, & \lfloor \beta_j (i-1) \rfloor + i - 1 \ \text{is even},\\[4pt] \dfrac{\lfloor \beta_j (i-1) \rfloor + 1}{i-1}, & \lfloor \beta_j (i-1) \rfloor + i - 1 \ \text{is odd}, \end{cases} \qquad j \in [n].$$
Then, $y \in S_i$. Let $j \in [n]$. If $\lfloor \beta_j (i-1) \rfloor + i - 1$ is even, then
$$|\beta_j - p_j(y)| = \frac{\beta_j (i-1) - \lfloor \beta_j (i-1) \rfloor}{i-1} \le \frac{1}{i-1}.$$
Otherwise, we have
$$|\beta_j - p_j(y)| = \frac{|\beta_j (i-1) - \lfloor \beta_j (i-1) \rfloor - 1|}{i-1} \le \frac{1}{i-1}.$$
It follows that $\|x - y\|_\infty \le \frac{1}{i-1}$. Therefore,
$$x \in y + \frac{1}{i-1} Q^n \subseteq S_i + \frac{1}{i-1} Q^n. \qquad \square$$
Theorem 1.
Let $\alpha > 0$ and let $i \ge 2$ be an integer. If $K \in \mathcal{K}^n$ satisfies $\alpha Q^n \subseteq K \subseteq Q^n$, then
$$K \subseteq \Big(S_i \cap \frac{1 + \alpha(i-1)}{\alpha(i-1)} K\Big) + \frac{1}{i-1} Q^n.$$
Proof. 
By Lemma 2, we have
$$K \subseteq Q^n \subseteq S_i + \frac{1}{i-1} Q^n.$$
Let $x \in K$. There exists a point $p \in S_i$ such that
$$x \in p + \frac{1}{i-1} Q^n.$$
Thus,
$$p \in x + \frac{1}{i-1} Q^n \subseteq K + \frac{1}{i-1} Q^n \subseteq K + \frac{1}{\alpha(i-1)} K = \frac{1 + \alpha(i-1)}{\alpha(i-1)} K.$$
Therefore, we have
$$x \in \Big(S_i \cap \frac{1 + \alpha(i-1)}{\alpha(i-1)} K\Big) + \frac{1}{i-1} Q^n. \qquad \square$$
Let $\alpha > 0$, let $i \ge 2$ be an integer, let K be a convex body satisfying $\alpha Q^n \subseteq K \subseteq Q^n$, put
$$S = S_i \cap \frac{1 + \alpha(i-1)}{\alpha(i-1)} K, \qquad (6)$$
and let $C \subseteq \mathbb{R}^n$. Put
$$\Gamma(S, C) = \inf\{\gamma > 0 \mid S \subseteq C + \gamma K\},$$
and
$$\Gamma_p(S) = \min\{\Gamma(S, C) \mid C \subseteq \mathbb{R}^n \ \text{contains at most}\ p\ \text{points}\}.$$
Proposition 1.
Let K, α, i, S be as above, and let $p \in \mathbb{Z}^+$. Then,
$$\Gamma_p(K) \le \Gamma_p(S) + \frac{1}{\alpha(i-1)}.$$
Proof. 
By Theorem 1, we have
$$K \subseteq S + \frac{1}{i-1} Q^n \subseteq S + \frac{1}{\alpha(i-1)} K.$$
Let C be a p-element subset of $\mathbb{R}^n$ such that $S \subseteq C + \Gamma_p(S) K$. We have
$$K \subseteq S + \frac{1}{\alpha(i-1)} K \subseteq C + \Gamma_p(S) K + \frac{1}{\alpha(i-1)} K,$$
which completes the proof.    □

3. An Algorithm Based on CUDA for Γ p ( S )

Let $K \in \mathcal{K}^n$, $p \in \mathbb{Z}^+$, and let C be a set of p points. First, we use CUDA to obtain S defined by (6) and compute the minimum dissimilarity from each point in S to C. Then, we employ a CUDA-based reduction algorithm to obtain $\Gamma(S, C)$. Finally, we use different stochastic global optimization algorithms to estimate $\Gamma_p(S)$ and select an appropriate optimization algorithm through comparison. Figure 1 shows the overall framework of the algorithm.

3.1. An Algorithm Based on CUDA for Γ ( S , C )

CUDA organizes threads into a hierarchical structure consisting of grids, blocks, and threads. The grid is the highest-level organization of threads in CUDA, and a grid represents a collection of blocks. A block, identified by a unique block index within its grid, is a group of threads that can cooperate with each other and share data using shared memory. Threads are organized within blocks, and each thread is identified by a unique thread index within its block. The number of blocks and threads per block can be specified when launching a CUDA kernel. The grid and block dimensions can be one-dimensional, two-dimensional, or three-dimensional, depending on the problem to address. For more information about CUDA, we refer to [15,16,17,18].
The organization of threads within blocks and grids provides a natural way to discretize $Q^3$. First, we discretize $[-1,1] \times [-1,1] \times \{0\}$ into a set P of gridDim.x × gridDim.y points. Each point p in P corresponds to a block $B_p$ in CUDA, and $B_p$ contains a collection of blockDim.x threads, each of which corresponds to a point in $p + \{0\} \times \{0\} \times [-1,1]$. See Figure 2, where gridDim.x, gridDim.y, and blockDim.x are all set to 5.
Let T be the set of all threads invoked by CUDA. Then, the cardinality of T is
$$\text{gridDim.x} \times \text{gridDim.y} \times \text{blockDim.x}.$$
For $i \in [\text{gridDim.x} - 1] \cup \{0\}$, $j \in [\text{gridDim.y} - 1] \cup \{0\}$, and $k \in [\text{blockDim.x} - 1] \cup \{0\}$, there is a thread $t := t(i, j, k) \in T$ indexed by
$$(j \times \text{gridDim.x} + i) \times \text{blockDim.x} + k,$$
which corresponds to the point
$$p_t = \Big({-1} + \frac{2i}{\text{gridDim.x} - 1},\ {-1} + \frac{2j}{\text{gridDim.y} - 1},\ {-1} + \frac{2k}{\text{blockDim.x} - 1}\Big) \in Q^3.$$
Put
$$S = \Big\{t \in T \ \Big|\ p_t \in \Big(1 + \frac{1}{\alpha(\text{gridDim.x} - 1)}\Big) K\Big\},$$
where α is a positive number satisfying $\alpha Q^3 \subseteq K \subseteq Q^3$. For each $t \in S$, denote by $d(p_t, C)$ the minimum dissimilarity from the point $p_t$ to C, i.e.,
$$d(p_t, C) = \min\{d(p_t, c) \mid c \in C\}.$$
If $t \notin S$, we set $d(p_t, C) = 0$. The CUDA thread corresponding to $t \in T$ computes $d(p_t, C)$. Then, a CUDA-based reduction algorithm is invoked to obtain $\max_{t \in T} d(p_t, C)$.
The idea of the CUDA-based reduction algorithm is to divide the original data into multiple blocks, perform a local reduction on each block, and finally perform a global reduction on the local results to obtain the final result (cf. [19]).
Algorithm 1, with parameters K, C, p, blockDim.x, gridDim.x, gridDim.y, and α, calculates $\Gamma(S, C)$. It is more efficient than the geometric branch-and-bound approach proposed in [9]. For example, take $K = B_2^3$, $p = 8$,
$$C_1 = \{(0.125, 0.125, 0.125),\ (-0.125, -0.125, -0.125),\ (0.25, 0.25, 0.25),\ (-0.25, -0.25, -0.25),\ (0.5, 0.5, 0.5),\ (-0.5, -0.5, -0.5),\ (0.0375, 0.0375, 0.0375),\ (-0.0375, -0.0375, -0.0375)\},$$
and
$$C_2 = \{(0, 0.57, 0.38),\ (0.074, 0.185, 0.618),\ (0.05, 0.66, 0.13),\ (0.32, 0.25, 0.58),\ (0.52, 0.15, 0.38),\ (0.587, 0.123, 0.22),\ (0.662, 0.024, 0.013),\ (0.136, 0.589, 0.319)\}.$$
Algorithm 1 An algorithm based on CUDA to compute $\Gamma(S, C)$
Require: 
a convex body K, a set C of p points in $\mathbb{R}^n$, a positive number α, blockDim.x, gridDim.x, and gridDim.y
Ensure: 
max as an estimation of $\Gamma(S, C)$
 1: Allocate host and device memory, initialize, and copy host data to the device
 2: $bid \leftarrow \text{blockIdx.y} \times \text{gridDim.x} + \text{blockIdx.x}$
 3: $t \leftarrow bid \times \text{blockDim.x} + \text{threadIdx.x}$
 4: $x[t] \leftarrow -1 + 2/(\text{gridDim.x} - 1) \times \text{blockIdx.x}$
 5: $y[t] \leftarrow -1 + 2/(\text{gridDim.y} - 1) \times \text{blockIdx.y}$
 6: $z[t] \leftarrow -1 + 2/(\text{blockDim.x} - 1) \times \text{threadIdx.x}$
 7: $p[t] \leftarrow (x[t], y[t], z[t])$
 8: if $p[t] \in \big(1 + \frac{1}{\alpha(\text{gridDim.x} - 1)}\big) K$ then
 9:     Calculate $d(p[t], C)$ by Lemma 1
10:     $P[t] \leftarrow d(p[t], C)$
11: else
12:     $P[t] \leftarrow 0$
13: end if
14: $sData[\text{threadIdx.x}] \leftarrow P[t]$
15: Use the reduction algorithm to find the maximum value within the block
16: $Dd[bid] \leftarrow sData[0]$
17: Copy the block results $Dd$ to the host
18: $max \leftarrow \max_{bid} Dd[bid]$
19: return max
Algorithm 1 yields good estimations of $\Gamma(S, C_1)$ and $\Gamma(S, C_2)$ much faster, as seen in Table 1. Both algorithms were run on a computer equipped with an AMD Ryzen 9 3900X 12-core processor and an NVIDIA A4000 graphics processor. For Algorithm 1, we take gridDim.x = gridDim.y = blockDim.x = 1024, and the accuracy is given by Proposition 1. For the geometric branch-and-bound algorithm, we set the relative accuracy ε to 0.0001 (cf. [9] for the usage of the relative accuracy). The execution time of the geometric branch-and-bound approach varies substantially between different choices of C, whereas the CUDA-based algorithm shows relatively consistent execution times across cases.

3.2. Different Stochastic Global Optimization Algorithms for Γ p ( S )

We choose to employ stochastic global optimization algorithms for several reasons. In the program proposed by Chuanming Zong (cf. [8]), after appropriately selecting a positive real number β and constructing a β-net $\mathcal{N}$ for $\mathcal{K}^n$ endowed with the Banach–Mazur metric, we only need to verify that $\Gamma_{2^n}(K) \le c_n$ holds for each $K \in \mathcal{N}$, where $c_n$ is a reasonably accurate estimate of the least upper bound of $\Gamma_{2^n}(K)$. For this purpose, we do not need to determine exact values of covering functionals of convex bodies in $\mathcal{N}$. Stochastic global optimization algorithms exhibit low time complexity and high efficiency. Moreover, as the results in Table 2 show, they provide satisfactory estimates of covering functionals.
The NLopt (Non-Linear Optimization) library is a rich collection of optimization routines and algorithms, which provides a platform-independent interface for global and local optimization problems (cf. [20]). Algorithms in the NLopt library are partitioned into four categories: non-derivative-based global algorithms, derivative-based global algorithms, non-derivative-based local algorithms, and derivative-based local algorithms. We use several non-derivative-based stochastic global algorithms here. All global optimization algorithms require bound constraints to be specified as optimization parameters (cf. e.g., ref. [21]).
The following is the framework of a stochastic optimization algorithm based on NLopt.
  • Define the objective function and boundary constraints.
  • Declare an optimizer for NLopt.
  • Set algorithm and dimension.
  • Set termination conditions. NLopt provides different termination condition options including: value tolerance, parameter tolerance, function value stop value, iteration number, and time.
Proposition 2.
If $K \in \mathcal{K}^n$, $K \subseteq Q^n$, and $(x + K) \cap K \ne \emptyset$, then $x \in 2Q^n$.
Proof. 
Suppose that $y \in (x + K) \cap K$. Then $z := y - x \in K \subseteq Q^n$. Thus,
$$\tfrac{1}{2} x = \tfrac{1}{2} y + \tfrac{1}{2}(-z) \in Q^n,$$
which shows that $x \in 2Q^n$.    □
Remark 3.
By Proposition 2, when K Q n , we only need to search 2 Q n for points in a p-optimal configuration of K.
We tested different stochastic global optimization algorithms in order to choose the most efficient one. The optimization algorithms under consideration include Controlled Random Search with local mutation (GN_CRS2_LM) (cf. [22]), an evolutionary strategy (GN_ESCH) (cf. [23]), and evolutionary constrained optimization (GN_ISRES) (cf. [24]). See Algorithm 2.
Algorithm 2 A stochastic optimization algorithm for $\Gamma_p(S)$ based on NLopt
Require: 
K, C, p, blockDim.x, gridDim.x, gridDim.y, an estimation $\Gamma_p(K) \le \gamma$, lower bound LB and upper bound UB of the search domain
Ensure: 
min as an estimation of $\Gamma_p(S)$
 1: function Min(unsigned n, const double *x, double *grad, void *data)
 2: begin
 3:     $C \leftarrow x$ ▹ interpret the optimization vector x as the p candidate centers
 4:     $max \leftarrow$ Algorithm 1 with parameters (K, C, p, blockDim.x, gridDim.x, gridDim.y)
 5: end
 6: $opter \leftarrow$ nlopt_create(NLOPT_GN_CRS2_LM, $p \times n$)
 7: nlopt_set_lower_bounds1(opter, LB)
 8: nlopt_set_upper_bounds1(opter, UB)
 9: nlopt_set_min_objective(opter, Min, NULL)
10: nlopt_set_xtol_rel(opter, tol)
11: $min \leftarrow \gamma$
12: $i \leftarrow 0$
13: repeat
14:     $stopval \leftarrow min$
15:     nlopt_set_stopval(opter, stopval)
16:     $result \leftarrow$ nlopt_optimize(opter, x, &f_min)
17:     if result then
18:         $min \leftarrow$ f_min
19:     end if
20:     $i \leftarrow i + 1$
21: until $i \ge 40$
22: return min
Let p be a positive integer, $K = B_2^3$, LB = −2, UB = 2, and gridDim.x = gridDim.y = blockDim.x = 1024.
Table 3 shows a comparison between these three stochastic algorithms. It can be seen that GN_CRS2_LM outperforms the other two.

4. Computational Experiments

All algorithms were coded in CUDA 12.2 and gcc 13.2.1. The computer’s graphics card is an NVIDIA RTX A4000.

4.1. Covering Functional of the Euclidean Unit Disc

Let $B_2^2$ be the Euclidean unit disc. In this situation, $d(x, y)$ is the Euclidean distance between x and y. The discretization of $Q^2$ is performed using threadIdx.x for the x axis and blockIdx.x for the y axis. For a positive integer i, let i = blockDim.x = gridDim.x; then, the values of threadIdx.x and blockIdx.x range from 0 to $i - 1$. Let
$$S_i^2 = \Big\{(x, y) \ \Big|\ x, y \in \Big\{-1 + \tfrac{2k}{i-1} \ \Big|\ k \in [i-1] \cup \{0\}\Big\}\Big\}.$$
It is clear that
$$\frac{\sqrt{2}}{2} Q^2 \subseteq B_2^2 \subseteq Q^2.$$
Let
$$S = S_i^2 \cap \frac{1 + \frac{\sqrt{2}}{2}(i-1)}{\frac{\sqrt{2}}{2}(i-1)} K = S_i^2 \cap \frac{2 + \sqrt{2}(i-1)}{\sqrt{2}(i-1)} K.$$
By Theorem 1, we have $B_2^2 \subseteq S + \frac{1}{i-1} Q^2$. By Corollary 3.1 in [9], there is a p-optimal configuration of $B_2^2$ contained in $B_2^2$. Thus, we can take LB = −1 and UB = 1. Proposition 1 shows that
$$\Gamma_p(B_2^2) \le \Gamma_p(S) + \frac{\sqrt{2}}{i-1}. \qquad (7)$$
Numerical estimates of Γ p ( B 2 2 ) are summarized in Table 4.

4.2. Covering Functionals of Three-Dimensional Convex Bodies

For each $K \in \mathcal{K}^3$ satisfying $\alpha Q^3 \subseteq K \subseteq Q^3$, let
$$i = \text{gridDim.x} = \text{gridDim.y} = \text{blockDim.x}$$
be a positive integer; then, the values of threadIdx.x, blockIdx.x, and blockIdx.y range from 0 to $i - 1$. Let
$$S_i^3 = \Big\{(x, y, z) \ \Big|\ x, y, z \in \Big\{-1 + \tfrac{2k}{i-1} \ \Big|\ k \in [i-1] \cup \{0\}\Big\}\Big\},$$
and
$$S = S_i^3 \cap \frac{1 + \alpha(i-1)}{\alpha(i-1)} K.$$
By Theorem 1 and Proposition 1, we have
$$\Gamma_p(K) \le \Gamma_p(S) + \frac{1}{\alpha(i-1)}. \qquad (8)$$

4.2.1. Covering Functionals of B 2 3

By Remark 3, we set LB = −2 and UB = 2. Clearly,
$$\frac{\sqrt{3}}{3} Q^3 \subseteq B_2^3 \subseteq Q^3.$$
From (8), we have
$$\Gamma_p(B_2^3) \le \Gamma_p(S) + \frac{\sqrt{3}}{i-1}.$$
Numerical estimations of $\Gamma_p(B_2^3)$ for $p \in \{4, 5, 6, 7, 8\}$ are summarized in Table 2.

4.2.2. Covering Functional of the Regular Tetrahedron

Let
$$T = \{x \in \mathbb{R}^3 \mid p_i(x) \ge 0,\ i \in [3],\ \textstyle\sum_{i \in [3]} p_i(x) \le 1\}.$$
Thus, T is a regular tetrahedron up to affine equivalence, and it is affinely equivalent to
$$T_1 := \{x \in \mathbb{R}^3 \mid p_i(x) \ge -\tfrac{1}{5},\ i \in [3],\ \textstyle\sum_{i \in [3]} p_i(x) \le \tfrac{3}{5}\}.$$
Clearly, $T_1 = \{x \in \mathbb{R}^n \mid Ax \le B\}$, where
$$A = \begin{pmatrix} \tfrac{5}{3} & \tfrac{5}{3} & \tfrac{5}{3} \\ -5 & 0 & 0 \\ 0 & -5 & 0 \\ 0 & 0 & -5 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}.$$
Let $v_i(x) = p_i(x) - p_i(c)$. Then,
$$A(x - c) = \begin{pmatrix} \tfrac{5}{3} & \tfrac{5}{3} & \tfrac{5}{3} \\ -5 & 0 & 0 \\ 0 & -5 & 0 \\ 0 & 0 & -5 \end{pmatrix} \cdot \begin{pmatrix} v_1(x) \\ v_2(x) \\ v_3(x) \end{pmatrix} = \begin{pmatrix} \tfrac{5}{3}(v_1(x) + v_2(x) + v_3(x)) \\ -5 v_1(x) \\ -5 v_2(x) \\ -5 v_3(x) \end{pmatrix}.$$
By Lemma 1, the dissimilarity $d(x, c)$ between x and c is given by
$$d(x, c) = \max\{\tfrac{5}{3}(v_1(x) + v_2(x) + v_3(x)),\ -5 v_1(x),\ -5 v_2(x),\ -5 v_3(x)\}.$$
It can be verified that
$$\tfrac{1}{5} Q^3 \subseteq T_1 \subseteq Q^3;$$
see Figure 3, where
$$w_1 = (\tfrac{1}{5}, \tfrac{1}{5}, \tfrac{1}{5}),\ w_2 = (\tfrac{1}{5}, \tfrac{1}{5}, \tfrac{1}{5}),\ w_3 = (\tfrac{1}{5}, \tfrac{1}{5}, \tfrac{1}{5}),\ w_4 = (1, -\tfrac{1}{5}, -\tfrac{1}{5}),\ w_5 = (-\tfrac{1}{5}, -\tfrac{1}{5}, 1),\ \text{and}\ w_6 = (-\tfrac{1}{5}, 1, -\tfrac{1}{5}).$$
By Remark 3, we take LB = −2 and UB = 2. By (8), we have
$$\Gamma_p(T) \le \Gamma_p(S) + \frac{5}{i-1}.$$
Numerical estimations of $\Gamma_p(T)$ for $p \in \{4, 5, 6, 7, 8\}$ are summarized in Table 5.

4.2.3. Covering Functional of the Regular Octahedron

Let
$$B_1^3 = \{x \in \mathbb{R}^3 \mid \textstyle\sum_{i \in [3]} |p_i(x)| \le 1\}.$$
Then, $B_1^3$ is a regular octahedron. By Lemma 1, the dissimilarity between x and c with respect to $B_1^3$ is given by
$$d(x, c) = |v_1(x)| + |v_2(x)| + |v_3(x)|.$$
It can be verified that
$$\tfrac{1}{3} Q^3 \subseteq B_1^3 \subseteq Q^3;$$
see Figure 4, where
$$w_1 = (1, 0, 0),\ w_2 = (0, 0, 1),\ w_3 = (\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3}),\ \text{and}\ w_4 = (-\tfrac{1}{3}, -\tfrac{1}{3}, -\tfrac{1}{3}).$$
By Remark 3, we can set LB = −2 and UB = 2. From (8), we have
$$\Gamma_p(B_1^3) \le \Gamma_p(S) + \frac{3}{i-1}.$$
Numerical estimations of $\Gamma_p(B_1^3)$ for $p \in \{6, 7, 8\}$ are summarized in Table 6.

4.2.4. Covering Functional of the Regular Dodecahedron

Let P be the regular dodecahedron of the form
$$P = \{x \in \mathbb{R}^3 \mid \varphi |p_1(x)| + |p_2(x)| \le 1,\ \varphi |p_2(x)| + |p_3(x)| \le 1,\ \varphi |p_3(x)| + |p_1(x)| \le 1\},$$
where $\varphi = \frac{\sqrt{5}-1}{2}$. By Lemma 1, the dissimilarity between x and c with respect to P is the maximum of the twelve signed terms $\pm\varphi v_1(x) \pm v_2(x)$, $\pm\varphi v_2(x) \pm v_3(x)$, $\pm\varphi v_3(x) \pm v_1(x)$, i.e.,
$$d(x, c) = \max\{\varphi |v_1(x)| + |v_2(x)|,\ \varphi |v_2(x)| + |v_3(x)|,\ \varphi |v_3(x)| + |v_1(x)|\}.$$
It can be verified that
$$\varphi Q^3 \subseteq P \subseteq Q^3;$$
see Figure 5, where
$$w_1 = (1, 0, \varphi - 1),\ w_2 = (1, 0, 1 - \varphi),\ w_3 = (\varphi, \varphi, \varphi),\ \text{and}\ w_4 = (-\varphi, -\varphi, -\varphi).$$
By Remark 3, we can set LB = −2 and UB = 2. From (8), we have
$$\Gamma_p(P) \le \Gamma_p(S) + \frac{1}{\varphi(i-1)}.$$
Numerical estimations of $\Gamma_p(P)$ for $p \in \{5, 6, 7, 8\}$ are summarized in Table 7.
The estimate of $\Gamma_8(P)$ is much better than the one given in [29].

4.2.5. Covering Functional of the Regular Icosahedron

Let M be the regular icosahedron of the form
$$M = \{x \in \mathbb{R}^3 \mid \varphi(|p_1(x)| + |p_2(x)| + |p_3(x)|) \le 1,\ (1-\varphi)|p_1(x)| + |p_3(x)| \le 1,\ |p_1(x)| + (1-\varphi)|p_2(x)| \le 1,\ |p_2(x)| + (1-\varphi)|p_3(x)| \le 1\},$$
where $\varphi = \frac{\sqrt{5}-1}{2}$. By Lemma 1, the dissimilarity between x and c with respect to M is the maximum of the twenty signed terms $\pm\varphi v_1(x) \pm \varphi v_2(x) \pm \varphi v_3(x)$, $\pm(1-\varphi) v_1(x) \pm v_3(x)$, $\pm v_1(x) \pm (1-\varphi) v_2(x)$, $\pm v_2(x) \pm (1-\varphi) v_3(x)$, i.e.,
$$d(x, c) = \max\{\varphi(|v_1(x)| + |v_2(x)| + |v_3(x)|),\ (1-\varphi)|v_1(x)| + |v_3(x)|,\ |v_1(x)| + (1-\varphi)|v_2(x)|,\ |v_2(x)| + (1-\varphi)|v_3(x)|\}.$$
It can be verified that
$$\frac{1+\varphi}{3} Q^3 \subseteq M \subseteq Q^3;$$
see Figure 6, where
$$w_1 = (\varphi - 1, 0, 1),\ w_2 = (1 - \varphi, 0, 1),\ w_3 = \Big(\frac{1+\varphi}{3}, \frac{1+\varphi}{3}, \frac{1+\varphi}{3}\Big),\ \text{and}\ w_4 = \Big({-\frac{1+\varphi}{3}}, -\frac{1+\varphi}{3}, -\frac{1+\varphi}{3}\Big).$$
By Remark 3, we take LB = −2 and UB = 2. By (8), we have
$$\Gamma_p(M) \le \Gamma_p(S) + \frac{3}{(1+\varphi)(i-1)}.$$
Numerical estimations of $\Gamma_p(M)$ for $p \in \{4, 5, 6, 7, 8\}$ are summarized in Table 8.
Compared with the estimate $\Gamma_6(M) \le 0.873$ given in [29], our estimate here is much better.

5. Conclusions

This paper proposes an algorithm based on CUDA for estimating covering functionals of three-dimensional convex polytopes, which is more efficient than the geometric branch-and-bound method presented in [9]. The theoretical accuracy of our algorithm is given by (8). Restrictions on the dimensions of each grid make the implementation of our algorithm in higher dimensions difficult.

Author Contributions

Conceptualization, S.W.; methodology, X.H. and S.W.; software, X.H.; validation, S.W. and L.Z.; writing—original draft preparation, X.H.; writing—review and editing, S.W. and L.Z.; funding acquisition, S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant number 12071444), the Fundamental Research Program of Shanxi Province (grant numbers 202103021223191 and 202303021221116), and the 19th Graduate Science and Technology Project of North University of China (grant number 20231944).

Data Availability Statement

Data sharing is not applicable, since no dataset was generated or analyzed during the current study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Boltyanski, V.; Martini, H.; Soltan, P.S. Excursions into Combinatorial Geometry, Universitext; Springer: Berlin/Heidelberg, Germany, 1997. [Google Scholar]
  2. Martini, H.; Soltan, V. Combinatorial problems on the illumination of convex bodies. Aequationes Math. 1999, 57, 121–152. [Google Scholar] [CrossRef]
  3. Bezdek, K. Classical Topics in Discrete Geometry, CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC; Springer: New York, NY, USA, 2010. [Google Scholar]
  4. Brass, P.; Moser, M.; Pach, J. Research Problems in Discrete Geometry; Springer: New York, NY, USA, 2005. [Google Scholar]
  5. Bezdek, K.; Khan, M.A. The geometry of homothetic covering and illumination. In Discrete Geometry and Symmetry; Springer: Cham, Switzerland, 2018; Volume 234, pp. 1–30. [Google Scholar]
  6. Rogers, C.A.; Zong, C. Covering convex bodies by translates of convex bodies. Mathematika 1997, 44, 215–218. [Google Scholar] [CrossRef]
  7. Papadoperakis, I. An estimate for the problem of illumination of the boundary of a convex body in E3. Geom. Dedicata 1999, 75, 275–285. [Google Scholar] [CrossRef]
  8. Zong, C. A quantitative program for Hadwiger’s covering conjecture. Sci. China Math. 2010, 53, 2551–2560. [Google Scholar] [CrossRef]
  9. He, C.; Lv, Y.; Martini, H.; Wu, S. A branch-and-bound approach for estimating covering functionals of convex bodies. J. Optimiz. Theory. App. 2023, 196, 1036–1055. [Google Scholar] [CrossRef]
  10. Scholz, D. Deterministic Global Optimization: Geometric Branch-and-Bound Methods and Their Applications; Springer: New York, NY, USA, 2012. [Google Scholar]
  11. Yu, M.; Lv, Y.; Zhao, Y.; He, C.; Wu, S. Estimations of covering functionals of convex bodies based on relaxation algorithm. Mathematics 2023, 11, 2000. [Google Scholar] [CrossRef]
  12. Han, T.D.; Abdelrahman, T.S. Hicuda: High-level GPGPU programming. IEEE T. Parall. Distr. 2011, 22, 78–90. [Google Scholar] [CrossRef]
  13. Harris, M. Optimizing Parallel Reduction in CUDA. Available online: https://developer.download.nvidia.cn/assets/cuda/files/reduction.pdf (accessed on 14 February 2024).
  14. Tomczak-Jaegermann, N. Banach-Mazur distances and finite-dimensional operator ideals. In Pitman Monographs and Surveys in Pure and Applied Mathematics; John Wiley & Sons, Inc.: New York, NY, USA, 1989; Volume 38. [Google Scholar]
  15. Dind, K.; Tan, T.A. Review on general purpose computing on GPU and its applications in computational intelligence. CAAI Trans. Intell. Syst. 2015, 10, 1–11. [Google Scholar]
  16. Cheng, J.; Grossman, M.; Mckercher, T. Professional CUDA C Programming; John Wiley & Sons, Inc.: New York, NY, USA, 2014. [Google Scholar]
  17. Ma, X.; Han, W. A parallel multi-swarm particle swarm optimization algorithm based on CUDA streams. In Proceedings of the 2018 Chinese Automation Congress (CAC), Xi’an, China, 30 November–2 December 2018; pp. 3002–3007. [Google Scholar]
  18. Guide, D. CUDA C Programming Guide. Available online: https://docs.nvidia.com/cuda/archive/9.1/pdf/CUDA_C_Programming_Guide.pdf (accessed on 14 February 2024).
  19. Zhang, Y.; Chen, L.; An, X.; Yan, S. Study on performance optimization of reduction algorithm targeting GPU computing platform. Comput. Sci. 2019, 46, 306–314. [Google Scholar]
  20. Johnson, S.G. NLopt Documentation. Available online: https://nlopt.readthedocs.io/en/latest/ (accessed on 14 February 2024).
  21. Tseng, L.Y.; Chen, C. Multiple trajectory search for large scale global optimization. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 3052–3059. [Google Scholar]
  22. Wang, H.; Shao, H. System optimization strategy based on genetic algorithm and controlled random search. Control. Theory Appl. 2000, 6, 907–910. [Google Scholar]
  23. Hansen, N.; Ostermeier, A. Completely derandomized self-adaptation in evolution strategies. Evol. Comput. 2001, 9, 159–195. [Google Scholar] [CrossRef] [PubMed]
  24. Branke, J.; Deb, K.; Miettinen, K.; Slowinski, R. Multi-Objective Optimization: Interactive and Evolutionary Approaches; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  25. Fejes Tóth, G. Thinnest covering of a circle by eight, nine, or ten congruent circles. In Combinatorial and Computational Geometry; Cambridge University Press: Cambridge, UK, 2005; Volume 52, pp. 361–376. [Google Scholar]
  26. Yu, M.; Gao, S.; He, C.; Wu, S. Estimations of covering functionals of simplices. Math. Inequal. Appl. 2023, 26, 793–809. [Google Scholar] [CrossRef]
  27. Lian, Y.; Zhang, Y. Covering the crosspolytope with its smaller homothetic copies. arXiv 2021, arXiv:2103.10004v2. [Google Scholar]
  28. Joós, A. Covering the crosspolytope with crosspolytopes. arXiv 2023, arXiv:2305.00569. [Google Scholar]
  29. Wu, S.; He, C. Covering functionals of convex polytopes. Linear Algebra. Appl. 2019, 577, 53–68. [Google Scholar] [CrossRef]
Figure 1. The overall framework.
Figure 1. The overall framework.
Axioms 13 00132 g001
Figure 2. Discretize [ 1 , 1 ] 3 with CUDA.
Figure 2. Discretize [ 1 , 1 ] 3 with CUDA.
Axioms 13 00132 g002
Figure 3. Inscribed cube and circumscribed cube of the regular tetrahedron.
Figure 4. Circumscribed and inscribed cubes of the regular octahedron.
Figure 5. Inscribed cube and circumscribed cube of the regular dodecahedron.
Figure 6. Inscribed cube and circumscribed cube of the regular icosahedron.
Table 1. Comparison of results between the algorithm based on CUDA and the geometric branch-and-bound approach.

| Algorithm | Algorithm 1 | The Geometric Branch-and-Bound Approach |
|---|---|---|
| Accuracy | √3/1023 | 0.0001 |
| Γ(S, C₁) | 1.002750 | 1.002089 |
| Time | 42.258720 ms | 32.594993 s |
| Γ(S, C₂) | 0.748103 | 0.748099 |
| Time | 42.232449 ms | 0.303316 s |
Table 2. Numerical estimations of Γ_p(B₂³).

| i | LB | UB | p | Γ_p(S_i) | Ranges of Γ_p(B₂³) | Known Exact Value |
|---|---|---|---|---|---|---|
| 1024 | −2 | 2 | 4 | 0.942359 | ≤0.9441 | 0.943 [8] |
| 1024 | −2 | 2 | 5 | 0.894146 | ≤0.8959 | 0.894 [8] |
| 1024 | −2 | 2 | 6 | 0.816220 | ≤0.8180 | 0.816 [8] |
| 1024 | −2 | 2 | 7 | 0.777139 | ≤0.7789 | 0.778 [8] |
| 1024 | −2 | 2 | 8 | 0.744846 | ≤0.7466 | — |
Table 3. Comparison between different stochastic algorithms.

| Algorithm | GN_CRS2_LM | GN_ESCH | GN_ISRES |
|---|---|---|---|
| Γ₅(B₂³) | 0.894120 | 0.896774 | 0.895573 |
| Time | 3655 s | 7863 s | 35231 s |
| Γ₆(B₂³) | 0.817386 | 0.818881 | 0.819020 |
| Time | 3332 s | 8036 s | 34921 s |
Table 4. Numerical estimations of Γ_p(B₂²).

| i | LB | UB | p | Γ_p(S_i) | Ranges of Γ_p(B₂²) | Known Exact Value |
|---|---|---|---|---|---|---|
| 1024 | −1 | 1 | 3 | 0.865850 | ≤0.8673 | 0.866 [5] |
| 1024 | −1 | 1 | 4 | 0.706002 | ≤0.7074 | 0.707 [5] |
| 1024 | −1 | 1 | 5 | 0.609076 | ≤0.6105 | 0.609 [5] |
| 1024 | −1 | 1 | 6 | 0.555575 | ≤0.5570 | 0.555 [5] |
| 1024 | −1 | 1 | 7 | 0.499461 | ≤0.5009 | 0.5 [25] |
| 1024 | −1 | 1 | 8 | 0.444873 | ≤0.4463 | 0.445 [25] |
Table 5. Numerical estimations of Γ_p(T).

| i | LB | UB | p | Γ_p(S_i) | Ranges of Γ_p(T) | Known Exact Value or Estimations |
|---|---|---|---|---|---|---|
| 1024 | −2 | 2 | 4 | 0.754079 | ≤0.7590 | 3/4 [8] |
| 1024 | −2 | 2 | 5 | 0.697766 | ≤0.7027 | 9/13 [8] |
| 1024 | −2 | 2 | 6 | 0.678219 | ≤0.6832 | 2/3 [26] |
| 1024 | −2 | 2 | 7 | 0.643850 | ≤0.6488 | [3/5, 7/11] [26] |
| 1024 | −2 | 2 | 8 | 0.630140 | ≤0.6351 | 8/13 [26] |
Table 6. Numerical estimations of Γ_p(B₁³).

| i | LB | UB | p | Γ_p(S_i) | Ranges of Γ_p(B₁³) | Known Exact Value |
|---|---|---|---|---|---|---|
| 1024 | −2 | 2 | 6 | 0.667671 | ≤0.6707 | 2/3 [8,27,28] |
| 1024 | −2 | 2 | 7 | 0.667666 | ≤0.6706 | 2/3 [8,27,28] |
| 1024 | −2 | 2 | 8 | 0.667657 | ≤0.6706 | 2/3 [8,27,28] |
Table 7. Numerical estimations of Γ_p(P).

| i | LB | UB | p | Γ_p(S_i) | Ranges of Γ_p(P) |
|---|---|---|---|---|---|
| 1024 | −2 | 2 | 5 | 0.840639 | ≤0.8423 |
| 1024 | −2 | 2 | 6 | 0.761430 | ≤0.7631 |
| 1024 | −2 | 2 | 7 | 0.745951 | ≤0.7476 |
| 1024 | −2 | 2 | 8 | 0.711539 | ≤0.7132 |
Table 8. Numerical estimations of Γ_p(M).

| i | LB | UB | p | Γ_p(S_i) | Ranges of Γ_p(M) |
|---|---|---|---|---|---|
| 1024 | −2 | 2 | 4 | 0.873606 | ≤0.8755 |
| 1024 | −2 | 2 | 5 | 0.848224 | ≤0.8501 |
| 1024 | −2 | 2 | 6 | 0.746998 | ≤0.7489 |
| 1024 | −2 | 2 | 7 | 0.747460 | ≤0.7493 |
| 1024 | −2 | 2 | 8 | 0.731832 | ≤0.7337 |

Han, X.; Wu, S.; Zhang, L. An Algorithm Based on Compute Unified Device Architecture for Estimating Covering Functionals of Convex Bodies. Axioms 2024, 13, 132. https://doi.org/10.3390/axioms13020132



