Article

A Comparative Study on Weighted Central Moment and Its Application in 2D Shape Retrieval

1 School of Computer Science & Engineering, Jiangsu University of Science & Technology, Zhenjiang 212003, China
2 School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China
3 School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, UK
* Author to whom correspondence should be addressed.
Information 2016, 7(1), 10; https://doi.org/10.3390/info7010010
Submission received: 15 December 2015 / Revised: 31 January 2016 / Accepted: 19 February 2016 / Published: 1 March 2016

Abstract
Moment invariants have been extensively studied and widely used in object recognition. The pioneering investigation of moment invariants in pattern recognition was due to Hu, who developed a set of moment invariants for similarity transformations using the theory of algebraic invariants. This paper presents a comparative analysis of several modifications of the original Hu moment invariants, used to describe and retrieve two-dimensional (2D) shapes with a single closed contour. The main contribution of this paper is a set of weighting functions, motivated by human visual processing, for calculating the central moments. The comparative results are detailed through experimental analysis. The results suggest that the moment invariants improved by the weighting functions achieve better retrieval performance than the original ones.

1. Introduction

Shape is a significant visual clue for human perception. Using the shape of an object for object recognition and image retrieval is a hot topic in computer vision [1]. Moment invariants have been extensively studied and widely used in shape recognition and identification since they were first proposed by Hu [2]. Since then, many other kinds of moments have been proposed in the literature, including Zernike moments [3,4,5], Legendre moments [6,7], Fourier-Mellin moments [8,9,10], etc. [2]. Zernike moments and Legendre moments were both introduced by Teague [11]. Zernike moments are used in pattern recognition applications as invariant descriptors of image shape; the Zernike moment descriptor has desirable properties such as rotation and scale invariance, robustness to noise, expressive efficiency and fast computation. Legendre moments use Legendre polynomials as basis functions. These polynomials are orthogonal, so Legendre moments extract independent features from the image with no information redundancy. Though Legendre moments have good retrieval properties, they are not invariant to linear operations or rotation. The Fourier-Mellin moment, a type of complex moment proposed by Sheng and Shen [12], can be transformed into rotation and translation invariants and attains good results in shape recognition. These various moment invariants have been successfully utilized as pattern features in a number of applications, including character recognition [13,14], aircraft recognition [15], object identification and discrimination [16,17], content-based image retrieval [18], two-dimensional (2D) flow field analysis [19], etc. [20,21,22].
It is known that Hu’s moment invariants are area moment invariants, meaning that they are computed over all pixels, including the shape boundary and its associated interior. All pixels are usually treated as equally important in computing these moment invariants, which may not be in accordance with human perception. Generally, there are two ways to specify a 2D shape. One is to specify the whole region occupied by the object; the other is to specify only its boundary. Based on the concept of shape representation through boundaries, Chen [17] introduced curve moment invariants, a reformulation of Hu's moments devised so that they are evaluated using only the object boundary pixels. Though this modification reduces computation, it also discards some shape information. To increase noise tolerance, Balslev [23] introduced a spatial weighting function into Hu’s central moment; this weighting technique assigns heavy weights to the areas near the center-of-mass. However, for a 2D non-rigid shape with a single closed contour, the centroid of the object may lie outside the object region, so Balslev’s method is not suitable for such cases.
In this work, we introduce several novel weighting functions into the central moment formula for 2D non-rigid shape retrieval, from a perspective entirely different from that of [23]. The specific weighting techniques are discussed in detail in Section 3.
The paper is organized as follows. Section 2 reviews the basic idea of traditional Hu moments. Section 3 explains our weighting functions for the central moment, motivated by human perception. Section 4 analyzes the experimental results and presents a discussion. Finally, the conclusion is given in Section 5.

2. Traditional Geometric Moment Invariants

In this section, we briefly review Hu’s invariant moments. The 2D traditional geometric moments of order p + q of a density distribution f(x, y) are defined as
m_{pq} = \iint x^{p} y^{q} f(x, y) \, dx \, dy, \quad p, q = 0, 1, 2, \ldots   (1)
When the geometric moments m_pq in Equation (1) are referred to the object centroid (x_c, y_c), they become the central moments, and are given by
\mu_{pq} = \iint (x - x_c)^{p} (y - y_c)^{q} f(x, y) \, dx \, dy   (2)
where x_c = m_10 / m_00 and y_c = m_01 / m_00.
For a digital image represented in a 2D array, Equations (1) and (2) are given as Equations (3) and (4), respectively,
m_{pq} = \sum_{x=1}^{M} \sum_{y=1}^{N} x^{p} y^{q} f(x, y), \quad p, q = 0, 1, 2, \ldots   (3)
\mu_{pq} = \sum_{x=1}^{M} \sum_{y=1}^{N} (x - x_c)^{p} (y - y_c)^{q} f(x, y)   (4)
where M and N are the horizontal and vertical dimensions, respectively, and f(x, y) is the intensity at point (x, y) in the image.
The normalized central moments of an image are given by
\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\gamma}}   (5)
where γ = (p + q)/2 + 1 and p + q = 2, 3, …. These moments are invariant to both translation and scale of the image.
Hu defined a set of seven moment invariants of order three or less, which are invariant to object scale, translation and rotation. In terms of the normalized central moments, the seven invariants are given as Equations (6)-(12):
\phi_1 = \eta_{20} + \eta_{02}   (6)
\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2   (7)
\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2   (8)
\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2   (9)
\phi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]   (10)
\phi_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})   (11)
\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]   (12)

3. Several Weighting Functions

3.1. Boundary Weighting Function

Let us discuss the issue of shape recognition from another angle. For a 2D non-rigid shape with a single closed contour, as illustrated in Figure 1, an object’s shape is mainly discriminated by its boundary pixels according to the characteristics of human visual perception. In other words, the boundary part of the object makes a greater contribution for distinguishing the object’s shape than the central part of the object does [24,25].
Based on this observation, we propose a boundary-weighted central moment in this paper. When the central moments are computed, pixels closer to the object boundary should be assigned larger weights, and inner pixels farther from the boundary should be assigned smaller weights. Following this idea, Equation (4) is modified into Equation (13):
\mu_{pq} = \sum_{x=1}^{M} \sum_{y=1}^{N} (x - x_c)^{p} (y - y_c)^{q} f(x, y) \, \delta(x, y)   (13)
where δ(x, y) is a weighting function that emphasizes the weights of pixels closer to the object boundary. Both linear and nonlinear functions are considered in our work.
For the linear case, δ(x, y) can be set simply as in Equation (14):
\delta(x, y) = \frac{1}{d(x, y)}   (14)
For the nonlinear case, δ(x, y) can be set as the Gaussian function in Equation (15), with d(x, y) defined in Equation (16):
\delta(x, y) = \frac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{(d(x, y) - \mu)^2}{2\sigma^2}}   (15)
d(x, y) = \min_{(x_i, y_i) \in BPs} \| (x, y) - (x_i, y_i) \|   (16)
where ‖·‖ denotes a distance metric (the Euclidean distance is used in this paper), BPs is the set of pixels located on the shape boundary, and d(x, y) is the minimum distance from a pixel (x, y) in the inner region of the shape to BPs. Parameters μ and σ are the mean and standard deviation of the Gaussian (with variance σ²); μ is set to 0 in the experiments.
Both Equations (14) and (15) encode the idea that pixels at different locations in a shape make different contributions. Pixels closer to the shape boundary receive larger weights, while inner pixels farther from the boundary receive smaller weights. Take Equation (14) as an example: if an inner pixel (x, y) is close to the object boundary, the distance d(x, y) between it and BPs is small, so the weight δ(x, y) is large. Equation (15) has a similar effect.

3.2. Balance Weighting Function

Chen [17] proposed computing moment invariants from the shape boundary pixels only, neglecting the pixels within the region. In other words, the weights of pixels on the boundary are set to 1, while the weights of pixels within the region are set to 0. This reduces the computing cost, but the contribution of the interior region disappears as well. To strike a balance, we propose a general linear rational weighting function, Equation (17):
\delta(x, y) = \frac{p \, (d_{\max} - d(x, y))}{p \, d_{\max} - (2p - 1) \, d(x, y)}   (17)
where d(x, y) is as in Section 3.1, d_max is the maximum of d(x, y) over the object, and p is an adjustable parameter. As p tends to 0, the moments converge to the boundary moments; as p tends to 1, they converge to the traditional geometric moments. This weighting function thus balances the traditional geometric moments and the boundary moments.

3.3. Central Weighting Function

To enhance noise tolerance, Balslev [23] proposed a spatial weighting function that gives higher weights to regions near the center-of-mass. This central weighting function is given as Equation (18):
\delta(x, y) = \frac{1}{1 + \alpha^2 \left[ (x - x_0)^2 + (y - y_0)^2 \right]}   (18)
where α is an adjustable parameter and (x_0, y_0) is the center-of-mass. The typical range for α is 0 < α < 10/R_G, where R_G is the radius of gyration, R_G = \sqrt{(\mu_{20} + \mu_{02}) / \mu_{00}}. Balslev’s weighting function emphasizes the pixels that are close to the center-of-mass and weakens the pixels that are close to the boundary.

4. Experimental Study

4.1. Data Set and Distance Measure

We detail a comparative study of the retrieval performance of Hu moment invariants modified with the different weighted central moment approaches, on 2D non-rigid shapes with single closed contours. We use common performance measures, i.e., the average precision and recall of the retrieval (also called the PVR curve), as the evaluation measures; a sketch of this computation follows below. For each query, the precision of the retrieval at each recall level is obtained, and the resulting precision is the average over all queries. Two widely used public benchmark shape databases, Kimia’s shape dataset and the MPEG-7 dataset, are considered in the following comparative experiments. Kimia’s shape dataset has 18 classes, each consisting of 12 images. The MPEG-7 dataset has 70 classes, each consisting of 20 images.
For simplicity, we use the seven Hu moment invariants, computed from the different weighted central moments, as the features of the shape region. The dissimilarity (distance) between shapes, defined in Equation (19), is measured by the Euclidean distance:
Dist(Q, T) = \sqrt{ \sum_{i=1}^{7} (\phi_i^{q} - \phi_i^{t})^2 }   (19)
where Q and T denote the query image and the target image, and φ_i^q and φ_i^t denote their i-th moment invariants, respectively.

4.2. Comparative Study on the Different Weighting Approaches

In this section, the average precision and recall curves of queries using the original Hu moments [2], boundary moments [17], centroid-weighted moments [23] and the proposed weighted moments on the two shape datasets are shown in Figure 2 and Figure 3, respectively.
To obtain good performance, the parameters are hand-picked. On Kimia’s shape database, parameter σ for the nonlinear weighting function is set to 0.05, parameter p for the balance weighting function is set to 0.01, and parameter α for the central weighting function is set to 4/R_G. On the MPEG-7 shape database, σ is set to 0.05, p to 0.02, and α to 3/R_G. The other weighting functions have no parameters.
It is clear from the average precision and recall curves that the modified Hu moment invariants based on the linear, nonlinear, balance and central weighting functions achieve better retrieval results than the original Hu moment invariants and the boundary moment invariants. In addition, the boundary moments outperform the original Hu moments but underperform the improved Hu moments based on the other weighting functions. Thus, the weighting scheme for the central moments defined in this paper is effective in identifying different 2D shapes.
The bull’s-eye test is selected as another evaluation criterion in the experiments. In the bull’s-eye test, each shape is used as a query, and a retrieved shape is considered correct if it belongs to the same class as the query. Figure 4 and Figure 5 show the bull’s-eye test results on Kimia’s and the MPEG-7 shape datasets, respectively. All of the improved approaches achieve better retrieval performance than the original Hu invariant moments, especially the two improved Hu moments based on the balance and linear weighting functions.

4.3. Comparative Study of the Parameters

Several weighted central moment approaches were presented in Section 3. From Equations (15), (17) and (18), the nonlinear weighting function has two parameters that must be set carefully (μ is set to 0 in the main experiments), and the balance and central weighting functions each have one. It is therefore necessary to discuss the influence of these parameters on retrieval performance. Figure 6, Figure 7 and Figure 8 show the influence of the different parameters on retrieval performance on Kimia’s and the MPEG-7 shape databases. Parameters μ and p should be set around 0.03, and parameter α around 3/R_G, for the performance to be optimized.

4.4. Comparative Study of Different Distance Metrics

The distance metric is very important for shape similarity measurement. As the moment invariants extracted from the images are vectors, the Euclidean distance is used to measure the distance between two shapes in the above experiments for simplicity. Besides the Euclidean distance, other commonly used metrics include the Cityblock, Mahalanobis, Correlation, Chebychev and Cosine distances. Figure 9 and Figure 10 show the comparison of these distance metrics used for the similarity measurement of the modified Hu moment invariants with the linear weighting function, on Kimia’s and the MPEG-7 shape database, respectively. From Figure 9, it is clear that the Euclidean distance achieves better retrieval performance than the other metrics, followed by the Cityblock distance and the others. As illustrated in Figure 10, the Mahalanobis distance obtains the best retrieval performance there, despite its inferior results in Figure 9, suggesting that it is not robust across datasets. The Euclidean distance achieves the second-best performance in Figure 10, again followed by the Cityblock distance. Overall, the Euclidean distance gives favorable results on both shape databases.

4.5. Retrieval Illustration

Table 1 lists the retrieval results for 10 query images from Kimia’s shape database. The left column shows the query shapes, and the remaining cells in each row show the first 11 ranked nearest neighbors for that query. Returned images that do not belong to the query’s class are marked with a blue background in the table grids.

4.6. Discussion

We have tested the original Hu invariant moments and the improved Hu moments in the sections above. The original Hu invariant moments assign identical weights to all pixels within an object, including the boundary pixels, which leads to their relatively low performance. To reduce computation, Ref. [17] considers only the pixels on the shape boundary: the weights of the pixels on the contour are set to 1 and the weights of the inner pixels to 0. Compared with the original Hu invariant moments, Ref. [17] achieves better results. The other weighting approaches take all the pixels of an object into consideration and assign different weights to different pixels according to some rule, which yields more robust results. The central weighting function assigns larger weights to pixels close to the center-of-mass and gives favorable results; however, it may fail when the center-of-mass of an object lies outside the object boundary. In accordance with human perception, the boundary weighting approaches (Equations (14) and (15)) emphasize the region near the object boundary and allocate heavy weights to the pixels close to it, which makes the performance more robust. The balance weighting method shares the merits of both the central and boundary weighting functions; if parameter p is selected carefully, a better performance is obtained. In addition, compared with the other weighting functions, the weighting function of Equation (14) needs no parameters and still obtains respectable results.

5. Conclusions

In this paper, we elaborate several weighting functions for calculating the central moment, each from a different perspective. The experiments show that the weighting functions applied in the calculation of central moments considerably increase performance in 2D shape retrieval compared with the original Hu moments. The results also show that different pixels in an object make different contributions to object identification. Future work could introduce the weighting method into other kinds of moments and apply them to other image-related applications, such as character recognition.

Acknowledgments

The authors also wish to thank the reviewers for their helpful and constructive comments. This work is partly supported by the National Natural Science Foundation of China (Grant Nos. 61373055, 61471182), the Natural Science Foundation of Jiangsu Province of China (Grant No. BK20130473), the Postdoctoral Scientific Research Foundation of Jiangsu Province (Grant No. 1402068C), the Natural Science Foundation of Jiangsu Higher Education Institutions of China (Grant Nos. 13KJB5200003, 14KJB520009) and the Innovation Funds of Industry-Academy-Research Cooperation of Jiangsu Province (Grant No. BY2013066-03).

Author Contributions

Xin Shu and Qianni Zhang designed the experiments; Xin Shu and Jinlong Shi performed the experiments; Yunsong Qi analyzed the data; Xin Shu wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, D.; Lu, G. Review of shape representation and description techniques. Pattern Recognit. 2004, 37, 1–19. [Google Scholar] [CrossRef]
  2. Hu, M.K. Visual pattern recognition by moment invariants. IRE Trans. Inform. Theory 1962, 8, 179–187. [Google Scholar]
  3. Kim, W.Y.; Kim, Y.S. A region-based shape descriptor using Zernike moments. Signal Process. Image Commun. 2000, 16, 95–102. [Google Scholar] [CrossRef]
  4. Chong, C.W.; Raveendran, P.; Mukundan, R. Translation invariants of Zernike moments. Pattern Recognit. 2003, 36, 1765–1773. [Google Scholar] [CrossRef]
  5. Belkasim, S.; Hassan, E.; Obeidi, T. Explicit invariance of Cartesian Zernike moments. Pattern Recognit. Lett. 2007, 28, 1969–1980. [Google Scholar] [CrossRef]
  6. Hosny, K.M. Exact Legendre moments computation for gray level images. Pattern Recognit. 2007, 40, 3597–3605. [Google Scholar] [CrossRef]
  7. Fu, B.; Zhou, J.Z.; Li, Y.H.; Zhang, G.J.; Wang, C. Image analysis by modified Legendre moments. Pattern Recognit. 2007, 40, 691–704. [Google Scholar] [CrossRef]
  8. Chao, K.; Mandyam, D.S. Invariant character recognition with Zernike and orthogonal Fourier-Mellin moments. Pattern Recognit. 2002, 35, 143–154. [Google Scholar]
  9. Derrode, S.; Ghorbel, F. Robust and efficient Fourier-Mellin transform approximations for gray-level image reconstruction and complete invariant description. Comput. Vis. Image Underst. 2001, 83, 57–78. [Google Scholar] [CrossRef]
  10. Zhang, H.; Shu, H.Z.; Haigron, P.; Luo, L.M.; Li, B.S. Construction of a complete set of orthogonal Fourier-Mellin moment invariants for pattern recognition applications. Image Vis. Comput. 2010, 28, 38–44. [Google Scholar] [CrossRef] [Green Version]
  11. Teague, M. Image analysis via the general theory of moments. J. Opt. Soc. Am. 1980, 70, 920–930. [Google Scholar] [CrossRef]
  12. Sheng, Y.L.; Shen, L.X. Orthogonal Fourier-Mellin moments for invariant pattern recognition. J. Opt. Soc. Am. 1994, 11, 1748–1757. [Google Scholar] [CrossRef]
  13. Flusser, J.; Suk, T. Affine moment invariants: A new tool for character recognition. Pattern Recognit. Lett. 1994, 15, 433–436. [Google Scholar] [CrossRef]
  14. Chim, Y.C.; Kassim, A.A.; Ibrahim, Y. Character recognition using statistical moments. Image Vis. Comput. 1999, 17, 299–307. [Google Scholar] [CrossRef]
  15. Zhang, F.; Liu, S.; Wang, D.; Guan, W. Aircraft recognition in infrared image using wavelet moment invariants. Image Vis. Comput. 2009, 27, 313–318. [Google Scholar] [CrossRef]
  16. Sluzek, A. Identification and inspection of 2-D objects using new moments-based shape descriptors. Pattern Recognit. Lett. 1995, 16, 687–697. [Google Scholar] [CrossRef]
  17. Chen, C.C. Improved moment invariants for shape discrimination. Pattern Recognit. 1993, 26, 683–686. [Google Scholar] [CrossRef]
  18. Cheng, S.C. Content-based image retrieval using moments-preserving edge detection. Image Vis. Comput. 2003, 21, 809–826. [Google Scholar] [CrossRef]
  19. Schlemmer, M.; Heringer, M.; Morr, F.; Hotz, I.; Bertram, M.-H.; Garth, C.; Kollmann, W.; Hamann, B.; Hagen, H. Moment invariants for the analysis of 2D flow fields. IEEE Trans. Vis. Comput. Graph. 2007, 13, 1743–1750. [Google Scholar] [CrossRef] [PubMed]
  20. Zhu, Y.; De Silva, L.C.; Ko, C.C. Using moment invariants and HMM in facial expression recognition. Pattern Recognit. Lett. 2002, 23, 83–91. [Google Scholar] [CrossRef]
  21. Zhang, Y.; Wang, S.; Sun, P.; Phillips, P. Pathological brain detection based on wavelet entropy and Hu moment invariants. Bio-Med. Mater. Eng. 2015, 26, 1283–1290. [Google Scholar] [CrossRef] [PubMed]
  22. Wang, S.; Pan, H.; Zhang, C.; Tian, Y. RGB-D image based detection of stairs, pedestrian crosswalks and traffic signs. J. Vis. Commun. Image Represent. 2014, 25, 263–272. [Google Scholar] [CrossRef]
  23. Balslev, I.; Døring, K.; Eriksen, R.D. Weighted Central Moments in Pattern Recognition. Pattern Recognit. Lett. 2000, 21, 381–384. [Google Scholar] [CrossRef]
  24. Nasreddine, K.; Benzinou, A.; Fablet, R. Variational shape matching for shape classification and retrieval. Pattern Recognit. Lett. 2010, 31, 1650–1657. [Google Scholar] [CrossRef]
  25. Direkoglu, C.; Nixon, M.S. Shape classification via image-based multiscale description. Pattern Recognit. 2011, 44, 2134–2146. [Google Scholar] [CrossRef]
Figure 1. Object and its contour.
Figure 2. Average retrieval precision-recall curves on Kimia’s shape database.
Figure 3. Average retrieval precision-recall curves on MPEG-7 shape database.
Figure 4. Bull’s-eye test on Kimia’s shape database.
Figure 5. Bull’s-eye test on MPEG-7 shape database.
Figure 6. Bull’s-eye test with different values for parameter μ in Equation (15).
Figure 7. Bull’s-eye test with different values for parameter p in Equation (17).
Figure 8. Bull’s-eye test with different values for parameter α in Equation (18).
Figure 9. Bull’s-eye test on Kimia’s shape database.
Figure 10. Bull’s-eye test on MPEG-7 shape database.
Table 1. Some query results from Kimia’s shape database.
