3D Object Recognition Using Fast Overlapped Block Processing Technique

Three-dimensional (3D) image and medical image processing, which are considered forms of big data analysis, have attracted significant attention in recent years. To this end, efficient 3D object recognition techniques could be beneficial to such image and medical image processing. However, to date, most of the proposed methods for 3D object recognition suffer from high computational complexity. This is attributed to the fact that computational complexity and execution time increase as the dimensions of the object increase, which is the case in 3D object recognition. Therefore, finding an efficient method that obtains high recognition accuracy with low computational complexity is essential. To this end, this paper presents an efficient method for 3D object recognition with low computational complexity. Specifically, the proposed method uses a fast overlapped block-processing technique that handles higher-order polynomials and high-dimensional objects, reducing the computational complexity of feature extraction. This paper also exploits Charlier polynomials and their moments along with a support vector machine (SVM) classifier. The evaluation of the presented method is carried out using a well-known dataset, the McGill benchmark dataset, and comparisons are performed with existing 3D object recognition methods. The results show that the proposed 3D object recognition approach achieves high recognition rates under different noisy environments. Furthermore, the results show that the presented method has the potential to mitigate noise distortion and outperforms existing methods in terms of computation time under noise-free and different noisy environments.


Introduction
Significant effort has been dedicated to developing efficient and reliable remote healthcare systems with Internet of Things (IoT) applications [1]. This development can be achieved by transmitting efficient and secure medical images and videos of patients and processing them in a fast and reliable way. To this end, advanced remote patient-monitoring schemes become essential. In particular, efficient three-dimensional (3D) object recognition techniques could be beneficial for processing the images and videos of medical systems. This is due to the ability of object recognition to enable feature extraction, which is essential as it provides unique characteristics that can identify objects [2]. Besides, object recognition is also of significant importance in industrial environments, as it represents each object individually and can distinguish between objects [3]. Such characteristics are used to extract discriminative features for accurate recognition [4]. Therefore, there is increasing interest in object recognition, especially in the fields of machine vision, pattern recognition, and machine learning [5][6][7]. Various domains, including facial identification [8], gender description [9], and gesture analysis, among others, use object recognition. Object recognition is also used in object identification, medical diagnosis, security applications, multimedia communication, and computer interface applications [4,10].

Related Works
Object recognition and classification can be considered essential techniques that are beneficial in various applications such as healthcare systems, pattern recognition, molecular biology, and computer vision [11][12][13][14][15]. To this end, significant research has been devoted to efficient 3D object recognition. Besides, feature extraction for 3D objects is extremely useful for classification [16]. Extensive research has been carried out to develop 3D object classification methods. Some of these works are based on the principles of moment invariants and 3D moments. To this end, a method of 3D translation, rotation, and scale invariants (TRSI) was developed in [17] from geometric moments, and an alternative approach was presented later in [18]. A tensor approach to derive the rotation invariants from the geometric moments was proposed in [19]. Besides, an automatic algorithm was proposed in [20,21] to generate 3D rotation invariants from geometric moments. Recently, a hybrid approach combining 3D discrete Hahn moments with convolutional neural networks (CNN) was proposed in [22] to improve 3D object classification. A multi-layer artificial neural network (ANN) perceptron approach was proposed in [23] for the classification and recognition of 3D images. In [24], a deep learning approach based on neural networks and Racah-based moments was proposed for 3D shape classification. Additionally, in [16], an approach based on the combination of 3D discrete orthogonal moments and deep neural network (DNN) algorithms was proposed to improve the classification accuracy of 3D object features. In [25], a 3D discrete Krawtchouk moments method was proposed for content-based search and retrieval applications.
In [26], a 3D image analysis was considered using Krawtchouk and Tchebichef polynomials, where orthogonal moments were exploited to characterize various types of 2D and 3D images. To this end, orthogonal moments are used in many applications such as image analysis [27,28], face recognition [29], pattern recognition [30,31], steganography [32], image reconstruction [33,34], and medical image analysis [35,36].
The recognition process depends heavily on feature extraction, which is used to distinguish between different objects. To this end, object localization and object normalization are considered essential for feature extraction techniques [37]. As such, an essential issue in object recognition and computer vision applications is the extraction of significant features from objects [38]. Object recognition to date remains a challenging problem in pattern recognition. This is because the accuracy of object recognition can be affected by class variations [4,39,40]. In particular, different methods are utilized to extract features from images. These methods can be classified as deep-learning-based methods, orthogonal-moment-based methods, and texture-based methods [41][42][43][44][45]. While the recognition accuracy of deep-learning-based methods can be very high, these methods incur a substantial amount of computational complexity, as explained in [46][47][48]. In orthogonal moment approaches, the features of the object are calculated efficiently through the use of orthogonal polynomial (OP) techniques [49]. Due to their effectiveness, orthogonal moments (OMs) and OPs have been widely exploited in recent years for pattern recognition, shape descriptors, and image analysis [50,51]. OM-based methods give a powerful capability for evaluating image components because these components can be efficiently represented in the transform domain [49]. In many object recognition applications, OMs can be utilized to extract features. The OMs can be considered scalar quantities utilized to define and characterize a function, and can thus be used to achieve effective feature extraction. The OP functions also encode the coordinates of an image in addition to the OMs [52,53].
According to the work performed in [44], OMs can be exploited to extract features from images with various geometric invariants, including translation, scaling, and rotation. In general, various types of moments can be used for image processing. For instance, due to their simplicity, geometric moments are favored above other types of moments [54]. To depict an image with the least amount of redundancy possible, a Zernike and pseudo-Zernike moments approach was developed in [55]. In [56], a moments-based approach was proposed by exploiting fractional quaternion moments for color image detection. This is because fractional-order moments, as opposed to integer-order polynomials, can represent functions more flexibly, according to [44]. Furthermore, the diagnosis of plant diseases has been accomplished using fractional-order moments [57]. In [58], image analysis used Zernike and Legendre polynomials, which act as the kernel functions for Zernike and Legendre moments, respectively. In particular, the Zernike moments approach has the property of invariance in addition to its capability of storing and processing image data with the least amount of redundancy. However, because the Zernike moments approach is defined only on the continuous domain, it requires image coordinate adjustments and transformations for the discrete domain [59,60]. To address the challenge of computing continuous moments in image analysis, discrete OM approaches have been proposed [61]. To this end, Mukundan presented in [62] a series of moments that uses discrete Tchebichef polynomials to analyze images.
Typically, feature extraction approaches are divided into global and local features. The former is also called the holistic-based approach [63], which captures the essential characteristics of the full image (e.g., the whole human face). The latter, known as the component-based or block-processing-based approach, extracts features from specific areas in images [64]. In the global feature-based approach, various imaging setups are used to achieve improved performance for feature extraction [65]. To this end, several feature extraction techniques have been proposed to enable a global feature-based approach [66,67]. In block processing, also known as the local feature-based approach, image features can be extracted locally by utilizing OMs, which entails dividing the image into several blocks and processing each block to ease the computation. In this approach, signals such as images and videos can be divided efficiently into several blocks that are transformed to another domain to extract the features [68]. The signal characteristics can be stored locally in memory to prepare them for the next processing step. The work in [63] demonstrates that the (local) block-processing-based approach achieves better performance in feature extraction compared with the (global) holistic-based approach.
One technique for extracting local features is the local binary patterns method [69][70][71]. In addition, the combination of global- and local-based approaches, termed the hybrid feature-extraction-based approach, aims to achieve the highest object recognition accuracy [72,73]. It has been demonstrated that block processing, which represents local feature extraction, can achieve the highest recognition accuracy at the cost of higher computational complexity. Specifically, compared with global features, local features are considered more reliable and improve recognition accuracy; see, e.g., [74][75][76]. To this end, partitioning the images using image block processing has the potential of extracting the blocks of any image and analyzing them sequentially. From the perspective of computer memory, however, this operation is not sequential, which is seen as a major performance flaw and highlights the crucial gap between memory and CPU speed. While such an operation results in additional cache misses and replacements, accessing the complete matrix sequentially can aid in maintaining spatial locality [68]. Removing such additional procedures speeds up the extraction of local features. Specifically, extracting local features from the image blocks using a discrete transform decreases the computational complexity; this is called the fast overlapped block-processing method for feature extraction [68]. Although several advanced methods have been proposed for object recognition, accuracy and running time are to date considered challenging issues that need to be addressed. Therefore, finding a fast and accurate mechanism for 3D object recognition is necessary. Additionally, most existing works do not account for the impact of undesirable noise on recognition. Hence, there is a limited understanding of the effect of noisy environments.
Therefore, investigating the proposed method under noisy conditions is essential to characterize the effectiveness of feature extraction for object recognition.

Paper Contributions
To overcome the aforementioned challenges, a robust object recognition algorithm that exploits Charlier polynomials and their moments is proposed. The proposed algorithm has a powerful capability for characterizing and extracting features from the signals of 3D objects effectively. In addition, to extract the features effectively and quickly, this paper exploits an overlapped block-processing technique that constructs auxiliary matrices, which essentially extend the original signal to avoid the time delay of nested-loop computation. Furthermore, the proposed method is evaluated under noisy conditions to characterize its effectiveness in feature extraction for object recognition. The major contributions of this paper can be summarized as follows: (1) Proposing an advanced design for robust 3D object recognition, which takes into account accuracy, computational complexity, and execution time. (2) Exploiting the powerful Charlier polynomials to extract the features of 3D objects. (3) Developing a fast overlapped block-processing algorithm, which provides more accurate processing of the image blocks and performs fast feature extraction with low complexity; the proposed overlapped block-processing method is mainly used to decrease the computation time.
(4) Finally, implementing the support vector machine (SVM) to classify the extracted features accurately. To this end, the well-known McGill benchmark dataset is used for performance evaluation [77]. The results demonstrate that the proposed method achieves high recognition accuracy with low computational complexity. Furthermore, the results demonstrate that the proposed method is able to mitigate noise distortion and outperforms traditional methods under both clean and noisy environments. These achievements signify the importance of the proposed method for future implementations of 3D object recognition.

Paper Organization
The paper is organized as follows. In Section 2, orthogonal polynomials and their moments are introduced. In Section 3, the methodology of the proposed method for feature extraction and recognition of 3D objects is presented. In Section 4, the performance evaluation of the proposed method and the numerical results are discussed. Finally, the conclusion of the paper is presented.

Preliminaries of OPs and Their OMs
The mathematical definition of the utilized OPs is explained in this section. Additionally, this section also describes the computation of the OMs for the 3D signals.

Charlier Polynomials Computation and Their Moments
This subsection discusses the Charlier polynomials and their moments. In addition, the existing three-term recurrence (TTR) relation is described. Several studies have considered the use of Charlier polynomials due to their accuracy and effectiveness [78]. To this end, research on the application of Charlier polynomials has been divided into two main areas: moment computation algorithms and recurrence relation algorithms. The recurrence-relation-based algorithms exploit either the n-direction or the x-direction of the polynomial matrix. However, generating high-order polynomials is not possible with these recurrence algorithms, because numerical errors in the initial values propagate over a large number of recurrence iterations. To the best of our knowledge, no research studies have investigated using Charlier polynomials and their moments for 3D object recognition. This paper investigates the effect of using Charlier polynomials for 3D object recognition and also aims to provide an efficient recurrence relation for computing high-order Charlier polynomials.
In what follows, the Charlier polynomials and their moments computation are presented.

Computation of Charlier Polynomials
The nth-order Charlier polynomial C_n(x; p) is defined as

C_n(x; p) = ₂F₀(−n, −x; ; −1/p),  n, x = 0, 1, . . . , N − 1, (1)

where p > 0 denotes the parameter of the Charlier polynomials and ₂F₀ represents the hypergeometric series, which is expressed as [79]

₂F₀(a, b; ; z) = Σ_{k=0}^{∞} (a)_k (b)_k z^k / k!, (2)

where (a)_k denotes the ascending factorial, termed the Pochhammer symbol [79]: (a)_k = a(a + 1) · · · (a + k − 1). Following the expressions provided by Equations (1) and (2), the Charlier polynomials can be written as

C_n(x; p) = Σ_{k=0}^{n} (−n)_k (−x)_k (−1/p)^k / k!. (3)

It is worth noting that the orthogonality condition should be met by the Charlier polynomials. Besides, a weight function can be applied to the Charlier polynomials so that

Σ_{x=0}^{D} ω_C(x; p) C_n(x; p) C_m(x; p) = ρ_C(n; p) δ_nm, (4)

where D = N − 1, ω_C(x; p) denotes the weight function, ρ_C(n; p) represents the squared norm of the Charlier polynomials, and δ_nm is the Kronecker delta. The weight function and the squared norm of the Charlier polynomials are provided in expressions (5) and (6), respectively:

ω_C(x; p) = e^{−p} p^x / x!, (5)

ρ_C(n; p) = n! / p^n. (6)
It is worth noting that the calculation of the Charlier polynomials' coefficients provided by the expression in Equation (3) may cause numerical instability. Hence, to overcome this issue, a weighted normalized Charlier polynomial is applied. To this end, the nth order of the weighted normalized Charlier polynomials can be expressed as

Ĉ_n(x; p) = C_n(x; p) √( ω_C(x; p) / ρ_C(n; p) ). (7)
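To make the definitions above concrete, the following sketch evaluates the Charlier polynomial through its terminating series and then applies the weight and squared norm. The function names are illustrative; a practical implementation would use the recurrence relations described later, since factorial-based evaluation becomes unstable at high orders.

```python
import math

def charlier(n, x, p):
    """Charlier polynomial C_n(x; p) via the terminating 2F0 series."""
    total = 0.0
    for k in range(min(n, x) + 1):
        # (-n)_k (-x)_k (-1/p)^k / k!  =  (-1)^k n! x! / ((n-k)! (x-k)! k! p^k)
        num = (math.factorial(n) // math.factorial(n - k)) \
            * (math.factorial(x) // math.factorial(x - k))
        total += (-1) ** k * num / (math.factorial(k) * p ** k)
    return total

def charlier_hat(n, x, p):
    """Weighted normalized polynomial: C_n(x; p) * sqrt(w(x; p) / rho(n; p))."""
    w = math.exp(-p) * p ** x / math.factorial(x)   # weight w(x; p) = e^-p p^x / x!
    rho = math.factorial(n) / p ** n                # squared norm rho(n; p) = n! / p^n
    return charlier(n, x, p) * math.sqrt(w / rho)
```

For instance, C_0(x; p) = 1 and C_1(x; p) = 1 − x/p, and summing Ĉ_n(x; p)Ĉ_m(x; p) over a sufficiently long support returns approximately the Kronecker delta δ_nm.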

Computation of Charlier Moments
This subsection discusses the computation of Charlier moments. The Charlier moments, denoted as transform coefficients, are scalar quantities utilized to represent signals without redundancy [49,80]. For a one-dimensional (1D) signal, denoted as f(x), the Charlier moments can be computed in the moment domain as

µ_n = Σ_{x=0}^{N−1} Ĉ_n(x; p) f(x),  n = 0, 1, . . . , Ord, (8)

where µ_n denotes the Charlier moments and Ord represents the maximum order utilized for signal representation. To obtain the signal f̂(x) from the Charlier domain (moment domain), the inverse transform can be utilized as follows:

f̂(x) = Σ_{n=0}^{Ord} µ_n Ĉ_n(x; p). (9)

For a two-dimensional (2D) signal f(x, y) of size N × M, the Charlier moments of the 2D signal, denoted as µ_nm, can be computed as

µ_nm = Σ_{x=0}^{N−1} Σ_{y=0}^{M−1} Ĉ_n(x; p) Ĉ_m(y; p) f(x, y),  n = 0, . . . , Ord_1, m = 0, . . . , Ord_2, (10)

where the parameters Ord_1 and Ord_2 denote the highest orders used for the representation of the signal. To reconstruct the 2D signal f̂(x, y) from the Charlier domain, the following inverse transformation is used:

f̂(x, y) = Σ_{n=0}^{Ord_1} Σ_{m=0}^{Ord_2} µ_nm Ĉ_n(x; p) Ĉ_m(y; p). (11)

To compute the moments for a higher-dimensional space, in our case the 3D signal f(x, y, z), the following formula is used:

µ_nml = Σ_x Σ_y Σ_z Ĉ_n(x; p) Ĉ_m(y; p) Ĉ_l(z; p) f(x, y, z). (12)
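As an illustrative sketch of the moment computations above: the weighted normalized basis can be arranged as a matrix C with C[n, x] = Ĉ_n(x; p) (generated here with the standard n-direction three-term recurrence, for illustration), so that the 2D moments become a pair of matrix products and the inverse transform is their transpose; the 3D case applies the basis along the third axis as well. All names and sizes here are illustrative, not those of the actual experiments.

```python
import math
import numpy as np

def charlier_basis(n_max, N, p):
    """Rows 0..n_max-1 of the weighted normalized Charlier basis on x = 0..N-1."""
    x = np.arange(N)
    # log of the weight e^-p p^x / x!, evaluated stably via lgamma
    logw = -p + x * np.log(p) - np.array([math.lgamma(i + 1.0) for i in range(N)])
    C = np.zeros((n_max, N))
    C[0] = np.exp(0.5 * logw)               # order 0: square root of the weight
    if n_max > 1:
        C[1] = (p - x) / np.sqrt(p) * C[0]  # order 1
    for n in range(1, n_max - 1):           # n-direction three-term recurrence
        C[n + 1] = ((n + p - x) / np.sqrt(p * (n + 1))) * C[n] \
            - np.sqrt(n / (n + 1.0)) * C[n - 1]
    return C

# 2D moments: mu = C f C^T; inverse transform: C^T mu C
N, p, Ord = 48, 8.0, 10
C = charlier_basis(Ord, N, p)
f = np.outer(C[3], C[5])   # a test signal lying in the low-order subspace
mu = C @ f @ C.T           # forward 2D Charlier moments
f_rec = C.T @ mu @ C       # reconstruction from the moments
```

Because the test signal lies in the span of orders 3 and 5, the moment matrix concentrates at the single entry (3, 5) and the reconstruction matches f up to numerical precision.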

Charlier Coefficients Computation Using Recurrence Relation Algorithm
This section presents the algorithm exploited to compute the coefficients of Charlier polynomials. It is worth noting that the algorithm used in this paper is the recurrence relation, which has been presented in [78].
The computation of the initial values of the Charlier polynomials' coefficients is essential for obtaining an efficient and reliable recurrence relation algorithm. It should be noted that the three-term recurrence relation algorithms in both the x-direction and the n-direction depend on two sets of initial values. To this end, Ĉ_n(0; p) and Ĉ_n(1; p) are the two initial values used in the three-term recurrence relation algorithm in the x-direction. In general, calculating this set of initial values directly is numerically intractable, because the factorial and power terms overflow for large arguments and lead to incorrectly computed values. To address this issue, a logarithmic form is used [78]:

Ĉ_n(0; p) = exp( (1/2) ( n log p − p − logΓ(n + 1) ) ), (13)

where logΓ denotes the logarithmic mathematical operation for the gamma function.
For the range n > p, n = p + 1, p + 2, . . . , N − 1, the same logarithmic form is applied with appropriate sign handling. After the coefficients for x = 0 are computed, they are used to compute the Charlier polynomials' coefficients for x = 1 using the following relation:

Ĉ_n(1; p) = ((p − n)/√p) Ĉ_n(0; p), (14)

which is negative when n > p. To this end, the polynomial space of the Charlier polynomials is divided into two portions: the lower triangle and the upper triangle [78]. These portions, known as "Part 1" and "Part 2", are shown in Figure 1. The Charlier polynomials' coefficients in the lower triangle matrix ("Part 1") are obtained using the three-term recurrence relation, while the coefficients in the upper triangle matrix ("Part 2") are obtained using the symmetry relation

Ĉ_n(x; p) = Ĉ_x(n; p),  n = 0, 1, . . . , N − 1, x = n + 1, . . . , N − 1. (15)

After the weighted normalized Charlier polynomials' identity and initial values are established, the calculation of the Charlier polynomials' coefficients in "Part 1" is performed by exploiting the three-term recurrence in the x-direction:

Ĉ_n(x; p) = A Ĉ_n(x − 1; p) + B Ĉ_n(x − 2; p), (16)

where x = 2, 3, . . . , N − 1 and n = x, x + 1, . . . , N − 1, and the parameters A and B are obtained, respectively, as [78]

A = (x − 1 + p − n) / √(p x), (17)

B = −√((x − 1)/x). (18)

For more clarification, the utilized algorithm for the weighted normalized Charlier polynomials is summarized in Algorithm 1.

Algorithm 1 Computation of the weighted normalized Charlier polynomials
1: Compute Ĉ_n(0; p) for n = 0, . . . , N − 1 {logarithmic form of the gamma function}
2: for n = 0 to N − 1 do
3: Ĉ_n(1; p) ← ((p − n)/√p) Ĉ_n(0; p)
4: end for
5: {Compute the coefficients in "Part 1"}
6: for x = 2 to N − 1 do
7: for n = x to N − 1 do
8: Ĉ_n(x; p) ← A Ĉ_n(x − 1; p) + B Ĉ_n(x − 2; p)
9: end for
10: end for
11: {Fill "Part 2" using the symmetry relation Ĉ_n(x; p) = Ĉ_x(n; p)}
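A sketch of the two-part computation described above (initial values in logarithmic form, the x-direction recurrence for "Part 1", and the symmetry relation for "Part 2") might look as follows. The exact coefficients follow the standard weighted Charlier recurrence, and the row/column layout C[n, x] is an assumption.

```python
import math
import numpy as np

def charlier_matrix(N, p):
    """N x N matrix with C[n, x] = weighted normalized Charlier polynomial value."""
    C = np.zeros((N, N))
    # initial values at x = 0 and x = 1 for every order n (logarithmic form)
    for n in range(N):
        C[n, 0] = math.exp(0.5 * (n * math.log(p) - p - math.lgamma(n + 1.0)))
        C[n, 1] = (p - n) / math.sqrt(p) * C[n, 0]
    # "Part 1" (n >= x): three-term recurrence in the x-direction
    for x in range(2, N):
        for n in range(x, N):
            A = (x - 1 + p - n) / math.sqrt(p * x)
            B = -math.sqrt((x - 1.0) / x)
            C[n, x] = A * C[n, x - 1] + B * C[n, x - 2]
    # "Part 2" (x > n): filled by the symmetry relation
    for n in range(N):
        for x in range(n + 1, N):
            C[n, x] = C[x, n]
    return C
```

Centering the parameter (e.g., p = N/4 or N/2) keeps the weight well inside the support, so low-order rows of the resulting matrix are orthonormal to high precision.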

Methodology of the Proposed Feature Extraction and Recognition Method for 3D Objects
This section presents the feature extraction and recognition processes of the presented 3D object recognition algorithm. For any recognition system, a feature extraction process is employed to represent signals. As discussed in the introduction, local feature extraction enables more effective object recognition systems than global feature extraction. Therefore, the 3D image is divided into blocks to increase recognition accuracy, where each block has a size of B_x × B_y × B_z. The Charlier polynomials are generated using the procedures in Section 2.4, where the Charlier polynomials are generated with parameter p. The overall methodology of the presented 3D recognition algorithm is shown in Figure 2. First, the 3D image information is obtained. Then, the Charlier polynomials are generated with parameter p. Next, the overlapped polynomial matrices are generated to reduce the computation cost. After that, the fast 3D moment computation is used to transform the 3D images into the moment domain. Finally, the features are normalized and used to train the SVM model for recognition. The global feature-based extraction approach is, to some extent, inaccurate in noisy environments, which highly impedes the characterization of efficient 3D object algorithms in realistic settings. Moreover, 3D object recognition accuracy may be degraded in noisy environments [45,81]. Therefore, preprocessing of the 3D object becomes essential to mitigate the effect of noise, but it may come at the expense of increased computational complexity. Extracting local features with the traditional method leads to a high computation cost, which is considered a bottleneck for real-time applications [82]. The local features are extracted after partitioning the 3D object into sub-blocks; for clarification, see Figure 3.
A fast overlapped block-processing technique is exploited to overcome the above challenges. To extract local features, most applications use a non-overlapping block-processing technique. On the other hand, overlapped block processing can enhance the accuracy of 3D object recognition [45,81]. Typically, processing the blocks in parallel significantly raises the computing cost. We solve this issue by using the fast overlapped block processing described in [68]. The fundamental idea behind fast overlapped block processing (FOBP) is to extend the image by means of auxiliary matrices, which removes the requirement for nested loops. Eliminating the nested loops drastically reduces the computational cost of the feature extraction procedure (see Figure 4). Suppose a 3D image F of size N_x × N_y × N_z needs to be partitioned into overlapped blocks. The blocks have size B_x × B_y × B_z with overlap sizes v_x, v_y, and v_z in the x-, y-, and z-directions, respectively. The number of blocks in each direction is Blocks_d = ⌊(N_d − B_d)/(B_d − v_d)⌋ + 1 for d ∈ {x, y, z}, which leads to a total number of blocks (T_Blocks) of T_Blocks = Blocks_x × Blocks_y × Blocks_z. For further details about these expressions, see Figure 5. Suppose the matrix G represents the extended version of F; it is computed by applying the expansion matrices along each dimension of F as follows [68]:

G = F ×_1 E_x ×_2 E_y ×_3 E_z, (22)

where ×_d denotes the mode-d product and E_x, E_y, and E_z are rectangular matrices of sizes (B_x · Blocks_x × N_x), (B_y · Blocks_y × N_y), and (B_z · Blocks_z × N_z), respectively. For further elucidation, the matrix E_d is shown in Figure 6, where d represents the dimension (x, y, or z).
To compute the moments M of a 3D image using matrix multiplication, Equation (12) can be rewritten in matrix form [83] by applying the Charlier polynomial matrices H_x, H_y, and H_z along the corresponding dimensions of the image (Equation (23)); note that M represents the matrix form of the moments µ_nml. By substituting Equation (22) into Equation (23), the moments are expressed in terms of the extended image G (Equation (24)). By following the proof presented in [68], Equation (24) can be rewritten so that the moments of all overlapped blocks are obtained with a small number of large matrix products (Equation (25)), where the block-diagonal matrices Q_d are computed as

Q_d = I_{Blocks_d} ⊗ H_d, (26)

where I denotes an identity matrix, ⊗ denotes the Kronecker product, and H_d represents the Charlier polynomial matrix; the auxiliary matrices R_d (Equation (27)) have the same Kronecker structure and accumulate the contributions along the corresponding dimension. Note that d represents the dimension x, y, or z. Because these matrices are independent of the image, they are generated once, stored, and repeatedly used [45,68]. The process of the matrices' generation is depicted in Figure 7. After the matrices (Q_x, Q_y, and Q_z) are generated, the images are transformed into the Charlier moment domain to extract features (see Algorithm 2). Then, these features are normalized to obtain the feature vector. Finally, the objects are classified based on the extracted features.
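The following 2D sketch illustrates the FOBP idea of [68]: an expansion matrix E stacks all overlapped blocks of the image at once, and a block-diagonal (Kronecker-structured) transform matrix then computes the moments of every block without nested block loops. Here H is a random stand-in for the B × B Charlier polynomial matrix, and all names and sizes are illustrative; the 3D case adds a third factor along z.

```python
import numpy as np

def expansion_matrix(N, B, v):
    """E_d: stacks every overlapped B-sample window (stride B - v) of a length-N axis."""
    s = B - v                              # stride between successive blocks
    n_blocks = (N - B) // s + 1            # number of overlapped blocks per axis
    E = np.zeros((n_blocks * B, N))
    for b in range(n_blocks):
        for i in range(B):
            E[b * B + i, b * s + i] = 1.0  # sample b*s+i feeds slot i of block b
    return E, n_blocks

rng = np.random.default_rng(0)
N, B, v = 16, 8, 4                 # axis length, block size, overlap
H = rng.standard_normal((B, B))    # random stand-in for the B x B Charlier matrix
F = rng.standard_normal((N, N))    # a 2D "image"

E, n_blocks = expansion_matrix(N, B, v)
G = E @ F @ E.T                    # extended image: every overlapped block, stacked
Q = np.kron(np.eye(n_blocks), H)   # block-diagonal transform, generated once
M = Q @ G @ Q.T                    # moments of all blocks in two matrix products
```

Each B × B tile of M equals H applied to the corresponding overlapped block on both sides, so no per-block nested loops are needed at feature-extraction time, and E and Q can be stored and reused across images.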

Algorithm 2
The 3D moments computation [83]
Input: F = 3D image; Q_d = Charlier polynomial matrices.
Output: FV = Charlier moments (feature vector).
1: Generate the extended 3D image G from the 3D image F {Equation (22)}
2: Get the stored Charlier polynomial matrices Q_x, Q_y, and Q_z {Equation (26)}
3: for z = 1 to Ord_z do
4: M ← M + R_z ⊗ (Q_x G Q_y)
5: end for
6: FV ← reshape(M) {Reshape the computed moments into a feature vector.}
7: return FV {Note: in the training and testing phases, the feature vector is normalized.}

In this paper, the normalized feature vector is obtained and used as the input to the classifier. To this end, a label (ID) is assigned to each input image of the objects. The classification procedure in this paper is performed using SVM. The SVM technique is selected here due to its effectiveness in maximizing the margin between the separating hyperplane and the data of each class [84]. Furthermore, the SVM technique can be very efficient for object recognition; this is attributed to the fact that SVM is robust to signal fluctuations [85]. In this paper, LIBSVM is used in the classification process [86]. Figure 8 shows a model of the proposed 3D object recognition method.
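To illustrate the classification stage, the following minimal sketch uses scikit-learn's SVC (an interface to LIBSVM) with an RBF kernel and five-fold cross-validation on synthetic, normalized feature vectors; the data, class count, and parameter grid are placeholders and not those of the actual experiments.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_classes, per_class, dim = 3, 40, 20          # placeholder sizes
X = np.vstack([rng.normal(loc=3.0 * c, scale=1.0, size=(per_class, dim))
               for c in range(n_classes)])     # synthetic "Charlier feature vectors"
y = np.repeat(np.arange(n_classes), per_class) # one label (ID) per sample

X = StandardScaler().fit_transform(X)          # normalize the feature vectors
# RBF-kernel SVM; five-fold cross-validation selects the SVM parameters
clf = GridSearchCV(SVC(kernel="rbf"),
                   {"C": [1.0, 10.0], "gamma": ["scale", 0.01]}, cv=5)
clf.fit(X, y)
acc = clf.score(X, y)                          # accuracy = correct / total predictions
```

In practice, the feature vectors would come from the reshaped Charlier moments of Algorithm 2 rather than from synthetic Gaussians.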

Experiments and Discussions
In this section, the performance of the proposed Charlier polynomial algorithm for 3D object recognition is evaluated. In this experiment, the well-known McGill dataset, developed in [77], is used as a benchmark. In particular, this dataset contains 19 classes of 3D objects, named planes, spiders, spectacles, snakes, pliers, octopuses, teddies, dolphins, fours, ants, humans, tables, chairs, dinosaurs, fishes, hands, cups, crabs, and birds. Samples of the aforementioned 3D objects are shown in Figure 9. In this experiment, the results are obtained over the 19 different objects under various effects, namely translations and rotations. The sample objects are translated in the x, y, and z axes and their combinations (xy, xz, yz, and xyz) in the range from (1, 1, 1) to (10, 10, 10) with a step of (1, 1, 1). In addition, for each direction, the sample objects are rotated in the x, y, z, xy, xz, yz, and xyz axes between 10 and 360 degrees with a step of 10 degrees. The resulting number of samples per object is 1252, which produces a total number of 23,788 samples for all objects.
The flow diagram of the 3D object recognition process is shown in Figure 8. For the 3D object recognition, different block sizes are considered in this experiment, namely 64 × 64 × 64, 32 × 32 × 32, and 16 × 16 × 16. Tables 1-3 present the performance results for block sizes of 64 × 64 × 64, 32 × 32 × 32, and 16 × 16 × 16, respectively. Besides, different overlap sizes are also considered, in addition to the sizes of the training and testing sets used in this experiment, which are 70% and 30%, respectively. As discussed earlier, the proposed solution for 3D object recognition and feature extraction is based on Charlier polynomials (see Algorithm 1) with parameter p = Block Size/2. The SVM model is used in the proposed algorithm for object classification. The LIBSVM library developed in [86] is used to train on the extracted features, with the radial basis function as the SVM kernel. In the training phase, five-fold cross-validation is utilized to obtain the values of the SVM parameters (see Figure 2). The recognition accuracy is the number of correct predictions divided by the total number of predictions:

Accuracy = (Number of correct predictions / Total number of predictions) × 100%.

Tables 1-3 report the recognition rates for clean and noisy environments. First, we discuss the clean environment results; then, the noisy environments are considered successively for different types of noise.
The results in Table 1 show that the accuracy of block size of 64 × 64 × 64 starts at 68.25% and increases to 80.04% as the overlap block size is increased from 0 × 0 × 0 to 16 × 16 × 16, which shows an improvement ratio of 14.73%. This implies that increasing the overlap block size can help in improving the recognition accuracy. For the block size of 32 × 32 × 32 given in Table 2, the object recognition accuracy starts with a value of 76.28% at overlap size of 0 × 0 × 0, which achieves an accuracy improvement of 8.03% higher than that obtained with the block size of 64 × 64 × 64 at an overlap size of 0 × 0 × 0. In addition, the object recognition accuracy of the block size of 32 × 32 × 32 is increased to 80.10% at an overlap block size of 4 × 4 × 4. For the block size of 16 × 16 × 16 given in Table 3, the object recognition accuracy is increased from 70.58% to 80.10% as the overlap size is increased from 0 × 0 × 0 to 2 × 2 × 2. To this end, the highest accuracy performance is achieved at a block size of 64 × 64 × 64 when an overlap size of 16 × 16 × 16 is used, at a block size of 32 × 32 × 32 when an overlap size of 4 × 4 × 4 is exploited, and at a block size of 16 × 16 × 16 when an overlap size of 2 × 2 × 2 is utilized, as illustrated in Tables 1-3, respectively. In a nutshell, the best accuracy can be achieved when the overlap block size is increased and the block size is decreased.
Different noisy environments are considered for further evaluation of the proposed object recognition method, and the results for each type are reported. It is worth noting that GN stands for Gaussian noise, SPN stands for salt-and-pepper noise, and SPKN stands for speckle noise. Note that different noise levels are considered for each type of noise. From Table 1, it is evident for the case of GN, at all density values from 0.0001 to 0.0005, that the accuracy increases as the overlap block size increases. The same trend is observed for SPN and SPKN at all noise density values. Moreover, Table 2 shows that, for all noise types and densities, higher accuracy is achieved at the highest overlap block size. On the other hand, for a block size of 16 × 16 × 16, higher accuracy is obtained with an overlap block size of 1 × 1 × 1 for all noisy environments. The results show that the best-case scenario is obtained at a block size of 32 × 32 × 32 and an overlap size of 4 × 4 × 4. The accuracy of object recognition starts at very low values for the block size of 64 × 64 × 64 with an overlap size of 0 × 0 × 0, as given in Table 1. The recognition accuracy then increases to its highest values at a block size of 64 × 64 × 64 with an overlap size of 16 × 16 × 16 (Table 1), a block size of 32 × 32 × 32 with an overlap size of 4 × 4 × 4 (Table 2), and a block size of 16 × 16 × 16 with an overlap size of 2 × 2 × 2 (Table 3).
To evaluate the performance of the presented algorithm, a comparison is performed with existing works in terms of recognition accuracy; the results are summarized in Table 4.
It can be observed from Table 4 that the average recognition accuracy of the presented algorithm outperforms that of the existing works. According to the results in this table, the recognition accuracy of the presented algorithm is significantly higher than that of the existing algorithms for all the given block sizes (64 × 64 × 64, 32 × 32 × 32, and 16 × 16 × 16) and overlap sizes (16 × 16 × 16, 4 × 4 × 4, and 1 × 1 × 1). Therefore, it can be concluded that the presented algorithm can be useful in object recognition applications. Furthermore, to provide further performance evaluation, the computation time of the proposed algorithm is compared with that of the traditional algorithm. To this end, Figure 10 illustrates the average computation time over 10 runs for both the proposed and the traditional algorithm under different block sizes and overlap sizes. In addition, the performance improvement factor between the proposed and the traditional algorithm is also provided; it is obtained by dividing the computation time of the traditional algorithm by that of the proposed algorithm. Figure 10 shows that the proposed algorithm significantly outperforms the traditional algorithm, with an average improvement factor across all configurations of around 4.70. The proposed recognition algorithm achieves the highest improvement factor, 7.86, when the block size is 16 × 16 × 16 with an overlap size of 4 × 4 × 4. This clearly signifies the efficiency of our algorithm when a small block size, i.e., 16 × 16 × 16, is considered.

Conclusions
This paper presented an efficient algorithm for 3D object recognition with low computational complexity and fast execution time based on Charlier polynomials. The proposed algorithm has a powerful capability for extracting the features of 3D objects in a fast manner. This is attributed to the overlapped block-processing technique, which allows the signals to be virtually extended via auxiliary matrices to avoid the time delay of nested-loop computation. In addition, in order to characterize the effectiveness of the proposed 3D object recognition method, noisy environments were considered in the evaluation and comparison. This paper also implemented the SVM algorithm to classify the 3D object features. The proposed 3D object recognition method was evaluated under different environments. The results illustrate that the proposed approach achieves high recognition accuracy as well as low computation time under the different noisy environments considered. These achievements signify the importance of the proposed method for future applications of 3D object recognition.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:
3D	Three-dimensional
ANN	Artificial neural network
CNN	Convolutional neural network
DNN	Deep neural network
FOBP	Fast overlapped block processing
GN	Gaussian noise
IoT	Internet of Things
OM	Orthogonal moment
OP	Orthogonal polynomial
SPKN	Speckle noise
SPN	Salt-and-pepper noise
SVM	Support vector machine
TRSI	Translation, rotation, and scale invariants
TTR	Three-term recurrence