Blind Robust 3D Mesh Watermarking Based on Mesh Saliency and Wavelet Transform for Copyright Protection

Three-dimensional models have been extensively used in several applications, including computer-aided design (CAD), video games, and medical imaging, owing to improvements in computer processing power and network bandwidth. Consequently, the need for 3D mesh watermarking schemes that protect copyright has increased considerably. In this paper, a blind robust 3D mesh watermarking method based on mesh saliency and the wavelet transform is proposed for copyright protection. The watermark is inserted by quantizing the wavelet coefficients with quantization index modulation (QIM) according to the mesh saliency of the 3D semi-regular mesh. The synchronizing primitive is the distance between the mesh center and the salient points, taken in descending order. The experimental results show the high imperceptibility of the proposed scheme while ensuring good robustness against a wide range of attacks, including smoothing, additive noise, element reordering, and similarity transformations.


Introduction
Due to the development of telecommunication networks and digital media processing, multimedia contents, including images, audio, video, and 3D objects, can easily be copied and redistributed by unauthorized users. As a result, the need to protect these contents has become crucial. Digital watermarking, which consists of inserting a watermark into the host data to protect copyright, is considered an efficient solution to this issue [1]. Here, we consider only 3D triangular meshes as cover media and address robust watermarking. Three-dimensional objects are widely used in several applications such as computer-aided design (CAD), virtual reality, and medical imaging. Unlike 2D images, several representations exist for 3D models, including NURBS, voxels, and meshes. However, the 3D mesh has become the standard representation because of its simplicity and usability [2]. A 3D mesh is a collection of polygonal facets that approximate a real 3D object. It has three primitives: vertices, facets, and edges. The mesh can also be described by its geometry, which refers to the coordinates of the vertices, and its connectivity, which describes the adjacency relations between vertices and facets. The degree of a facet is the number of its component edges, while the valence of a vertex is its number of incident edges. The majority of digital watermarking approaches have focused on images, video, and audio, while few works have addressed 3D meshes. This situation is mainly due to the difficulties encountered in manipulating the complex topology and irregular representation of 3D meshes, as well as the severity of the existing attacks. Unlike 2D images, in which the pixels have an intrinsic order that can synchronize the watermark bits, 3D meshes have no obviously robust intrinsic ordering. Indeed, an intuitive order, such as the order of the vertices in the Cartesian coordinate system, can easily be modified [3].
Each 3D watermarking technique should ensure a tradeoff between three main constraints [4]: imperceptibility, capacity, and robustness. Imperceptibility is the similarity between the original model and the stego model, while capacity refers to the maximum number of bits that can be embedded in the 3D model. Robustness is the ability to extract the watermark bits from the stego mesh even after it undergoes manipulations called attacks. The latter can be divided into two main kinds. Geometric attacks include local deformations, similarity transformations (translation, rotation, and uniform scaling), and signal processing manipulations such as noise addition, smoothing, and compression. Connectivity attacks include subdivision, cropping, remeshing, and simplification. Three-dimensional mesh watermarking techniques have different applications such as authentication, content enhancement, and copyright protection, among others. It is worth noting that the proposed method focuses on copyright protection.
There are several classification criteria for 3D watermarking methods. According to the embedding domain, the existing methods can be divided into spatial [5] and transform techniques [6]. In the spatial domain, the watermark is embedded by modifying the geometry or the connectivity of the 3D mesh, while in the transform domain the watermark is inserted by altering the coefficients obtained after a certain transformation, such as the wavelet transform. Based on their resistance to attacks, watermarking algorithms can be classified as robust, semi-fragile, or fragile, each kind being suited to a specific application. It is well known that robust methods are used with the aim of protecting copyright. These methods should ensure high robustness against common attacks, such as signal processing attacks and geometric manipulations, while maintaining good imperceptibility. The proposed method is based on the wavelet transform using the subdivision of Lounsbery et al. [7]. We note that such a transformation can be applied only to semi-regular meshes [3]. Several methods using the wavelet transform have been proposed [3,8,9].
Three-dimensional blind watermarking schemes face a very challenging issue: the geometric distortions that damage the mesh appearance. This problem is even more critical in applications such as medical diagnosis or manufacturing, where a small modification can cause a significant difference between the original mesh and the watermarked one. The obvious way to overcome this issue is to use techniques that preserve the important regions of the 3D mesh. For the human eye, the importance of a vertex or a region varies across the 3D surface. To describe this importance, some techniques use mesh saliency to distinguish salient from non-salient regions [10][11][12]. Therefore, to satisfy the invisibility requirement, the embedding of the watermark should not have a visual impact on the geometry of perceptually important regions. Thus, mesh saliency can provide important information for controlling the effect of embedding on the perceived quality of 3D models. In [3], Wang et al. proposed a hierarchical watermarking scheme for semi-regular meshes in which fragile, high-capacity, and robust watermarks are embedded at different levels of resolution using the wavelet transform.
In fact, only a few saliency-based watermarking methods for 3D meshes have been proposed. In [10], a 3D mesh watermarking algorithm based on the mesh saliency of Lee et al. [13] is presented. First, using mesh saliency, perceptually conspicuous regions are identified. Second, the norm of each vertex is calculated and its histogram is constructed. Finally, the vertex norms associated with each bin are normalized. In [11], Son et al. presented a 3D watermarking technique aiming to preserve the appearance of the watermarked 3D mesh. The vertex norm histogram, already proposed as a primitive by Cho et al. [14], is used for watermarking. Indeed, the watermark is embedded by modifying the mean or the variance of the vertex norm histogram. Recently, Medimegh et al. [12] proposed a robust and blind watermarking method for 3D meshes based on the auto diffusion function (ADF). The authors extract salient points using the ADF, segment the mesh according to these points, and insert the watermark statistically in each region.
In this context, the proposed method takes full advantage of mesh saliency and quantization index modulation (QIM) to design a watermarking method that is robust to a wide range of attacks while ensuring high imperceptibility. The proposed method is based on the visual saliency associated with the wavelet coefficient vectors. The watermark bits are inserted into the original 3D mesh by quantizing the wavelet coefficient vectors after one wavelet decomposition. The coefficients to be quantized are chosen based on the visual saliency of the 3D mesh, while the watermarking synchronization primitive is the distance between the mesh center and the salient points. This order has been found to be robust to different attacks such as similarity transformations and element reordering.
The rest of this paper is organized as follows. Section 2 describes the terminology used. The proposed method is presented in Section 3, followed by a description of the experimental setup in Section 4. Section 5 discusses the experimental results. The paper is concluded in Section 6.

Three-Dimensional Mesh Saliency
Generally, human visual attention is directed to the salient regions of the 3D mesh. The mesh saliency used in our method is that of Lee et al. [13]. The saliency of each vertex is computed from the difference between the mean curvature of the 3D model at that vertex and the mean curvatures at the other vertices in its neighborhood. The surface curvatures are computed first; the curvature at each vertex v is calculated using Taubin's scheme [15]. Let Curv(v) denote the mean curvature of the 3D model at a vertex v. The Gaussian-weighted average of the mean curvature is defined as

G(Curv(v), σ) = [ Σ_{x ∈ N(v,2σ)} Curv(x) exp(−‖x − v‖² / (2σ²)) ] / [ Σ_{x ∈ N(v,2σ)} exp(−‖x − v‖² / (2σ²)) ],

where x is a mesh point and N(v, σ) denotes the neighborhood of vertex v, i.e., the set of points within a Euclidean distance σ of v. The saliency S(v) of a vertex v is then the absolute difference between the Gaussian-weighted averages computed at a fine and a coarse scale:

S(v) = |G(Curv(v), σ) − G(Curv(v), 2σ)|.
Figure 1 exhibits an example of the mesh saliency of the Cat and Vase models using Lee et al.'s method [13] (right: 3D mesh saliency).
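As an illustration, the fine-minus-coarse saliency computation described above can be sketched as follows. This is a minimal sketch under simplifying assumptions, not the authors' implementation: it assumes the per-vertex mean curvatures (`curv`) have already been computed (e.g., with Taubin's scheme) and it operates on a raw point array, ignoring mesh connectivity.

```python
import numpy as np

def gaussian_weighted_curvature(verts, curv, v_idx, sigma):
    """G(Curv(v), sigma): Gaussian-weighted mean curvature over the
    neighborhood N(v, 2*sigma), i.e., points within distance 2*sigma of v."""
    d2 = np.sum((verts - verts[v_idx]) ** 2, axis=1)
    mask = d2 <= (2.0 * sigma) ** 2
    w = np.exp(-d2[mask] / (2.0 * sigma ** 2))
    return float(np.sum(curv[mask] * w) / np.sum(w))

def saliency(verts, curv, v_idx, sigma):
    """S(v): absolute difference of the weighted averages at a fine
    scale (sigma) and a coarse scale (2*sigma), as in Lee et al."""
    return abs(gaussian_weighted_curvature(verts, curv, v_idx, sigma)
               - gaussian_weighted_curvature(verts, curv, v_idx, 2.0 * sigma))
```

A vertex in a region of uniform curvature yields zero saliency, since the fine-scale and coarse-scale averages coincide; saliency is high where the curvature varies within the neighborhood.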

Quantization Index Modulation
Quantization index modulation (QIM) schemes are a family of non-linear data hiding methods. Most previous research has focused on applying QIM to images, audio, and video; few QIM-based watermarking methods have been proposed for 3D data. To insert a binary message composed of 0s and 1s into the 3D mesh, two quantizers are needed. QIM methods are simple to implement and have low complexity. In addition, they ensure a good tradeoff between capacity and robustness.
Let b ∈ {0, 1} be the watermark bit and x the host signal to quantize. It is worth noting that QIM operates independently on each host element. Two quantizers Q_0 and Q_1 are needed to insert a watermark bit b [16]. These quantizers are calculated as follows:

Q_0(x) = ∆ [x / ∆],
Q_1(x) = ∆ [(x − ∆/2) / ∆] + ∆/2,

where [·] refers to the rounding operation and ∆ is the quantization step. To extract the watermark bits, the two quantizers are recalculated as in the embedding process, and the extracted bit is the index of the quantizer whose lattice lies closest to the received value:

b̂ = arg min_{b ∈ {0,1}} |y − Q_b(y)|,

where y = Q_b(x) + n and n is the noise introduced by the channel.
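A common scalar formulation of the two quantizers can be sketched as follows; this is a generic QIM sketch (not necessarily the authors' exact variant), with `delta` playing the role of ∆.

```python
import numpy as np

def qim_embed(x, bit, delta):
    """Quantize x with Q_bit: a uniform quantizer of step delta whose
    lattice is shifted by delta/2 when bit == 1."""
    offset = bit * delta / 2.0
    return delta * np.round((x - offset) / delta) + offset

def qim_extract(y, delta):
    """Decode the bit as the quantizer whose lattice lies closest to y."""
    d0 = abs(y - qim_embed(y, 0, delta))
    d1 = abs(y - qim_embed(y, 1, delta))
    return 0 if d0 <= d1 else 1
```

As long as the channel noise n stays below ∆/4 in magnitude, y remains closer to the correct lattice and the bit is decoded without error.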

Multiresolution Wavelet Decomposition
Multiresolution analysis of 3D meshes has been widely used in the literature since it guarantees a good tradeoff between mesh complexity and the available processing resources [17]. It produces different representations of the 3D mesh, from the low frequencies (coarse mesh) to sets of medium and high frequencies representing detailed information at different resolution levels. Each level represents the same 3D mesh but with a different complexity. The advantage of using multiresolution analysis in 3D watermarking lies in the fact that it offers several embedding locations and supports high robustness and imperceptibility requirements. In addition, such an analysis makes the watermarking method useful for several applications: the coarsest level can carry a robust watermark for copyright protection, a fragile watermark can be embedded in the dense mesh to ensure authentication, and so on. The wavelet transform is a common tool for performing mesh multiresolution analysis. The mathematical formulation of wavelet analysis and synthesis for 3D models was introduced by Lounsbery et al. [7]. The principle of lazy wavelet decomposition for semi-regular triangular meshes is sketched in Figure 2. One iteration of the lazy wavelet transform merges each group of four triangles into one triangle at the lower resolution level j + 1; that is, three of the six initial vertices are kept at the lower resolution, and so on. The wavelet coefficients encode the prediction errors for the deleted vertices. Note that such an analysis can only be applied to a dense mesh with semi-regular connectivity. Figure 3 illustrates the wavelet decomposition of the Bunny mesh. The multiresolution representation can be expressed as follows:

V^{j+1} = A^{j+1} V^j,
W^{j+1} = B^{j+1} V^j.
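For a single removed vertex, the lazy wavelet detail reduces to a midpoint prediction error, which can be sketched as follows. This is a deliberate simplification of the Lounsbery et al. analysis (the full scheme applies the matrices A and B to whole vertex vectors); it only illustrates where the coefficient of one deleted vertex comes from.

```python
import numpy as np

def analysis_detail(v_deleted, edge_a, edge_b):
    """Wavelet coefficient of a removed vertex: its offset from the
    midpoint of the coarse edge that predicts it."""
    return v_deleted - 0.5 * (edge_a + edge_b)

def synthesis(detail, edge_a, edge_b):
    """Inverse step: midpoint prediction plus the stored detail vector."""
    return 0.5 * (edge_a + edge_b) + detail
```

A perfectly regular region, whose removed vertices lie exactly at the edge midpoints, therefore yields zero-norm wavelet coefficients.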

The Proposed Method
In this paper, a blind robust 3D mesh watermarking technique based on mesh saliency and wavelet coefficients is proposed for copyright protection. The main contribution of this work is the use of QIM quantization of the wavelet coefficients together with mesh saliency to ensure both high imperceptibility and robustness to a wide range of attacks. The imperceptibility requirement is achieved by exploiting mesh saliency to define the wavelet vectors to be quantized, while robustness is ensured by applying the QIM scheme to the selected wavelet vectors, using the distance between the vertices and the mesh center as the synchronizing primitive. First, multiresolution analysis is applied to the original 3D semi-regular mesh, giving a series of approximation meshes and a sequence of wavelet coefficients. Next, the mesh saliency is calculated for the mesh obtained after one wavelet decomposition and the salient points are extracted. The reason for using a certain level I (Figure 3b) is the good capacity-invisibility tradeoff it offers. Afterwards, the norms of the wavelet coefficients of these points are quantized using QIM. The principal role of the mesh saliency is to define the candidate wavelet coefficient norms to be quantized: the wavelet coefficients corresponding to the vertices with the largest saliency values are chosen and their norms are quantized. A threshold T_r, set to 70% of the maximum value of the saliency vector, was adopted for all the 3D meshes. The embedding and extraction schemes are described in Figures 4 and 5, respectively.

Watermark Embedding
The first step is to perform wavelet analysis until an intermediate level I is reached; here, one wavelet decomposition of the original mesh is applied. The watermarking primitive is the wavelet coefficient at level I. The watermark bits are inserted by modifying the norms of the wavelet coefficients associated with the vertices sorted according to a predefined order, defined as the Euclidean distance between the vertices and the mesh center. First, the vertices are sorted in descending order of their distance to the mesh center of gravity. Next, the salient points at level I are extracted by calculating the mesh saliency using Lee et al.'s method [13]. Afterwards, the wavelet coefficients corresponding to the salient vertices at level I are identified and their Euclidean norms are calculated. To define a salient point, a threshold T_r determines whether a wavelet coefficient is a candidate for alteration in the embedding process. The parameter T_r is set to 70% of the maximum value of the saliency vector, a setting adopted for all 3D objects. The wavelet coefficients corresponding to the salient points are then quantized using the QIM scheme. Finally, the watermarked 3D mesh is reconstructed from the modified wavelet coefficient norms using Equation (8) [19].
where WC' and WC refer to the wavelet coefficient after and before the watermark embedding, respectively, V'(x', y', z') is the new vertex coordinate of the watermarked mesh, and V(x, y, z) is the vertex coordinate of the original mesh. Figure 4 sketches the watermark insertion process, which is described in detail in Algorithm 1.
Algorithm 1: Watermark embedding.
Require: Original 3D mesh, watermark, key_1, key_2.
Ensure: Watermarked 3D mesh.
1. Perform wavelet analysis until an intermediate level I is reached.
2. Extract the salient points of the 3D mesh at scale I using mesh saliency.
3. Find the wavelet coefficients corresponding to the salient vertices according to the threshold T_r and calculate their norms.
4. Quantize the norms of the wavelet coefficients with the QIM scheme using Equation (4).
5. Reconstruct the mesh from the modified wavelet coefficient norms to obtain the watermarked 3D mesh.
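Steps 2-4 of Algorithm 1 can be sketched as follows. This is an illustrative sketch under simplifying assumptions, not the authors' code: `wcoeffs` holds the wavelet coefficient vectors (assumed already sorted by the distance-to-center synchronization order), `sal` holds the corresponding saliency values, and the threshold is taken as T_r = 0.7 × the maximum saliency. Each selected vector is rescaled so that its norm equals the QIM-quantized value, which preserves its direction.

```python
import numpy as np

def embed_bits(wcoeffs, sal, bits, delta, tr=0.7):
    """Quantize the norms of the salient wavelet coefficient vectors
    (saliency >= tr * max) to carry the watermark bits."""
    out = np.array(wcoeffs, dtype=float)
    salient = np.flatnonzero(np.asarray(sal) >= tr * np.max(sal))[: len(bits)]
    for bit, i in zip(bits, salient):
        norm = np.linalg.norm(out[i])
        offset = bit * delta / 2.0
        qnorm = delta * np.round((norm - offset) / delta) + offset
        if norm > 0:
            out[i] *= qnorm / norm  # keep direction, set quantized norm
    return out, salient
```

Extraction mirrors this loop: the salient indices are recomputed on the watermarked mesh and each norm is decoded against the two quantizer lattices.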

Watermark Extraction
The watermark extraction is simple and blind: neither the original object nor the watermark is needed in the extraction process; only the secret keys (key_1, key_2), which represent the watermark generator and the quantization step, respectively, are required. First, multiresolution analysis is applied to the watermarked 3D mesh by performing wavelet decomposition until level I is reached. Afterwards, the mesh saliency of the watermarked mesh is computed and the modified wavelet coefficients are identified according to the threshold T_r. Next, the norms of the modified wavelet coefficients are re-quantized using the same quantizers as in the embedding. Finally, the watermark bits are extracted using Equation (5). Figure 5 illustrates the watermark extraction process, which is presented in detail in Algorithm 2.
Algorithm 2: Watermark extraction.
1. Perform wavelet analysis of the watermarked mesh until level I is reached.
2. Calculate the mesh saliency and extract the modified wavelet coefficients according to the threshold T_r.
3. Calculate the norms of the extracted wavelet coefficients and apply QIM quantization.
4. Extract the watermark bits using Equation (5).

Experimental Setup
The performance of the proposed watermarking method in terms of imperceptibility and robustness is tested on several 3D objects: Bunny (34,835 vertices; 69,666 facets), Horse (112,642 vertices; 225,280 facets), Venus (100,759 vertices; 201,514 facets), Armadillo (26,002 vertices; 52,000 facets), Rabbit (70,658 vertices; 141,312 facets), Flower (2523 vertices; 4895 facets), Vase (2527 vertices; 5004 facets), Cup (9076 vertices; 18,152 facets), Ant (7654 vertices; 15,304 facets), Bimba (8857 vertices; 17,710 facets), and Cat (3534 vertices; 6975 facets). The LIRIS/EPFL General-Purpose database [20] was created in Switzerland and contains 88 models in total, with 4 reference models: Armadillo, Dyno, Venus, and RockerArm. Its subjective evaluation was carried out by 12 observers. The distortions applied to the reference meshes are smoothing and noise addition. The scores range from 0, denoting good quality, to 10, denoting bad quality. For each mesh, the scores given by the 12 observers are averaged to obtain a normalized mean opinion score (MOS). The LIRIS Masking database was created at the University of Lyon in France and contains 26 models; its subjective evaluation was carried out by 11 observers [21]. Only some objects are taken from the two databases. The model sizes are provided in Table 1.
Figure 6a,c,e show three original 3D objects. The quantization step is tuned experimentally to ensure the best tradeoff between imperceptibility and robustness. To this end, extensive experiments were conducted (see Figures 7-9) using several empirically chosen values of ∆. These experiments were carried out for all the 3D meshes; for brevity, only the results for the Bimba, Horse, and Bunny meshes are given. According to these results, the best value found is ∆ = 0.10. The parameter T_r, the threshold used to choose the candidate wavelet coefficients, is kept at 70% of the maximum value of the saliency vector. The size of the watermark used in the simulations is 64 bits.
Subjective evaluation is performed by human subjects, each of whom provides a quality score; the MOS is the average of all the scores and reflects the observers' opinion of the visual difference between the original mesh and the distorted one. Objective evaluation is performed with metrics. The MOS values presented in Table 2 are obtained by averaging the scores given by the observers. For example, the MOS value of Armadillo taken from the LIRIS/EPFL General-Purpose database [21] is 2.5, obtained by averaging the scores given by the 12 observers. The MOS values of Armadillo, Venus, and Bimba are shown in Table 2.

Table 2. Mean opinion score (MOS) values of the test objects.

Evaluation Metrics
Before applying the attacks to the 3D models, different experiments were conducted to evaluate the effectiveness of the proposed method in terms of imperceptibility and robustness. The distortion introduced by the watermark insertion is compared both objectively and visually. The robustness of the proposed scheme is evaluated using the normalized correlation (NC).

Imperceptibility
To evaluate the imperceptibility of the proposed method, several metrics were used to measure the distortion introduced by the embedding process. This distortion can be measured geometrically or perceptually. The maximum root mean square error (MRMS) proposed in [22] was used to calculate the objective distortion between the original mesh and the watermarked one.
The MRMS is the maximum of the two one-sided root mean square error (RMS) distances:

d_RMS(M, M_w) = sqrt( (1/|M|) ∫_{p ∈ M} d(p, M_w)² dM ),
MRMS = max( d_RMS(M, M_w), d_RMS(M_w, M) ),

where p is a point on the surface M, |M| represents the area of M, and d(p, M_w) is the point-to-surface distance between p and M_w. It is worth noting that a surface-to-surface distance such as the MRMS does not reflect the visual distance between the two meshes [21]. Therefore, a perceptual metric is also needed to measure the distortion caused by the watermark insertion. The mesh structural distortion measure (MSDM) [21] is chosen to measure the visual degradation of the watermarked meshes.
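A point-sampled approximation of this symmetric distance can be sketched as follows. Note that the true MRMS integrates point-to-surface distances over the surfaces (as in the tool of [22]); this sketch, by assumption, replaces the point-to-surface distance with the distance to the nearest sample of the other point set.

```python
import numpy as np

def rms_dist(a, b):
    """One-sided RMS distance from point set a to point set b, using the
    nearest sample in b as a stand-in for the point-to-surface distance."""
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1))
    return float(np.sqrt(np.mean(d.min(axis=1) ** 2)))

def mrms(a, b):
    """MRMS: the maximum of the two one-sided RMS distances."""
    return max(rms_dist(a, b), rms_dist(b, a))
```

Taking the maximum of the two one-sided distances makes the measure symmetric, since d_RMS(M, M_w) and d_RMS(M_w, M) generally differ.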
The MSDM value is 0 when the original and watermarked 3D objects are identical, and tends to 1 when the objects are visually very different.
The global MSDM distance between the original mesh M and the watermarked mesh M_w, each decomposed into n local windows, is defined by

d_MSDM(M, M_w) = ( (1/n) Σ_{i=1}^{n} d_LMSDM(a_i, b_i)³ )^{1/3},

where d_LMSDM is the local MSDM distance between two mesh local windows a and b (in meshes M and M_w, respectively), defined by

d_LMSDM(a, b) = ( α · Curv(a, b)³ + β · Cont(a, b)³ + γ · Surf(a, b)³ )^{1/3},

in which Curv, Cont, and Surf refer to the curvature, contrast, and structure comparison functions, respectively, and α, β, and γ are their weights.

Robustness
The robustness is measured using the normalized correlation (NC) between the inserted watermark and the extracted one, given by:

NC = Σ_{i=1}^{M} (w*_i − w̄*)(w_i − w̄) / sqrt( Σ_{i=1}^{M} (w*_i − w̄*)² · Σ_{i=1}^{M} (w_i − w̄)² ),

where i ∈ {1, 2, . . . , M}, and w̄* and w̄ are the averages of the extracted and inserted watermark bits, respectively.
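This NC is the sample correlation coefficient of the two bit sequences; a minimal sketch:

```python
import numpy as np

def normalized_correlation(w_extracted, w_original):
    """Centered cross-correlation of the two watermark bit sequences,
    normalized so identical sequences give 1 and inverted ones give -1."""
    a = np.asarray(w_extracted, dtype=float)
    b = np.asarray(w_original, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))
```

Note that the sequences must not be constant (all zeros or all ones), otherwise the denominator vanishes.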

Imperceptibility
Figure 6 shows the original objects and the watermarked ones. It can be seen that the distortion is negligible and cannot be noticed by the human eye, thanks to the use of mesh saliency in the embedding process. This means that the imperceptibility of the proposed method is high enough that a viewer can hardly distinguish the original mesh from the watermarked one. Figure 10 sketches the visual impact of the watermark embedding for the Bunny, Bimba, and Cat meshes. It can be seen in Figure 10 that there is no perceptible distortion caused by the watermark embedding, especially in the salient regions. Moreover, according to Table 3, the proposed method achieves high imperceptibility performance in terms of MRMS, HD, and MSDM. We believe this is achieved thanks to the exploitation of mesh saliency to minimize distortion after watermark insertion: only the wavelet coefficients corresponding to salient vertices are altered. It can also be observed that the imperceptibility results in terms of MRMS, HD, and MSDM are not the same for all the test meshes; we believe this is due to the different curvature characteristics of the 3D objects. To evaluate the importance of using mesh saliency in the proposed work, we compared the imperceptibility performance in terms of MRMS, HD, and MSDM with and without saliency. The results obtained for six 3D meshes are listed in Table 4. According to this table, the scheme based on mesh saliency gives higher imperceptibility scores than the scheme without saliency, which illustrates the importance of saliency in improving the imperceptibility performance, especially for the MSDM, which is the metric most correlated with human perception.

Robustness
The resistance of the proposed watermarking method is tested under several attacks, including element reordering, noise addition, smoothing, quantization, similarity transformations (translation, rotation, and uniform scaling), and cropping. To this end, Wang et al.'s benchmarking system has been used [23]. Figure 11 shows the Bimba object after several attacks. The robustness is measured using the normalized correlation (NC) between the extracted watermark bits and the original ones.
The noise addition attack adds pseudo-random noise to the vertex coordinates. Robustness against this attack is essential since it simulates the artifacts induced during mesh transmission. We added random noise to each vertex of the original 3D meshes using several intensities: 0.05%, 0.10%, 0.30%, and 0.50%. Table 5 presents the robustness results in terms of NC for six 3D meshes. It can be seen from Table 5 that the proposed method is robust against noise addition for all the test meshes. The robustness of the proposed method was also tested against the smoothing attack, a common operation aiming to remove the noise introduced during mesh generation. The smoothing applied to the watermarked 3D meshes is Laplacian smoothing [15] with a fixed deformation factor (λ = 0.1) and several numbers of iterations (5, 10, 30, and 50). Table 6 reports the robustness of the proposed method in terms of NC. According to Table 6, our method achieves high performance against this attack: even with 50 iterations, the NC values obtained for the six objects are above 0.88.
The robustness has also been investigated for the element reordering attack, in which the vertices/facets are reordered. It clearly appears from Table 7 that the method shows good robustness against element reordering for the three types of this attack available in Wang et al.'s benchmark [23].
The cropping attack is considered one of the most severe attacks a 3D mesh can suffer. This manipulation consists of cutting one or several parts of the 3D mesh. This attack has been applied to the 3D objects using different ratios (10%, 30%, and 50%). As depicted in Table 12, the resistance of the proposed method to this attack is relatively weak. We believe this is because the cropped regions can contain salient points that were used to choose the wavelet coefficients to be quantized.

Comparison with Alternative Methods
To further evaluate the performance of the proposed method, we compare it with the methods of [3,11,12,14,24] in terms of imperceptibility and robustness. It can be seen from Table 13 that the proposed method outperforms the schemes of [14,24] and [11] in terms of MRMS and MSDM. In addition, the imperceptibility performance in terms of HD of the proposed technique is compared to that of the scheme in [12].

Method            MRMS (10^-3)   MSDM
[14]              3.17           0.3197
[24]              1.48           0.2992
[11]              2.90           0.3197
Proposed method   0.38           0.2254

From Table 14, it can be highlighted that the proposed method achieves good imperceptibility performance. The reported results in terms of MRMS and HD illustrate the invisibility of the proposed method. Moreover, our scheme outperforms Wang et al.'s [3] scheme in terms of MRMS and HD for the three objects Venus, Horse, and Rabbit.
Table 15 presents the imperceptibility comparison with the schemes of [14] and [12] in terms of the Hausdorff distance for the Bunny, Venus, and Horse models. As depicted in Table 15, the proposed technique achieves good results, since all the obtained HD values are less than 1.71. Moreover, the proposed method outperforms Cho et al.'s [14] and Medimegh et al.'s [12] schemes. In addition, as depicted in Table 16, the proposed method shows high robustness to different attacks, including noise addition and smoothing, and outperforms the schemes in [14] and [12], except for the cropping attack, for which Medimegh et al.'s scheme [12] shows higher robustness than our method; we believe this is due to its redundant insertion of the watermark in several patches. It can be seen in Figure 12 that the proposed scheme shows high robustness to several attacks, including noise addition, quantization, and smoothing. Moreover, our method achieves higher robustness than Cho's method [14] in terms of normalized correlation.
The robustness of our method in terms of normalized correlation (NC) was compared to Wang et al.'s [3] method for noise addition, quantization, and smoothing attacks with different parameters. For noise addition, three amplitudes were used in the comparison. Quantization was applied with 9, 8, and 7 bits. For the smoothing attack, the watermarked Rabbit underwent smoothing with a fixed deformation factor λ = 0.10 and different numbers of iterations. Figure 13 highlights the results obtained for the Rabbit object: the proposed method shows higher robustness than Wang's method in terms of NC for noise addition, quantization, and smoothing. Moreover, as depicted in Table 17, the proposed method is able to withstand the smoothing attack, and the results obtained in terms of BER show the superiority of our method over Son's method [11]. Tables 18-20 show the quality and robustness comparison with Wang et al.'s method [3] against noise addition, quantization, and smoothing for Venus and Horse. The quality evaluation metrics used in the comparison are MRMS, HD, and MSDM, while the robustness is evaluated using NC.
Regarding noise addition, Horse and Venus were chosen as comparison meshes, and three amplitudes were used: 0.05%, 0.25%, and 0.5%. According to Table 18, our method is robust against noise and outperforms Wang's method in terms of robustness (NC) and quality (MRMS, HD, and MSDM). For the quantization attack, 9, 8, and 7 bits are the parameters used in the comparison. It can be seen from Table 19 that the proposed method is able to withstand the quantization attack for the Venus and Horse meshes. In addition, Table 19 shows high robustness and imperceptibility performance compared to Wang et al.'s [3] scheme. Moreover, the robustness comparison with Son et al.'s method sketched in Table 17 demonstrates the superiority of the proposed method.
For comparison purposes, the Venus and Horse models underwent Laplacian smoothing with λ = 0.10 and 10, 30, and 50 iterations. The robustness and imperceptibility were evaluated using NC, MRMS, HD, and MSDM. In Table 20, it can be observed that the proposed technique is superior to Wang's method in terms of robustness; the quality results also illustrate the superiority of the proposed method in terms of MRMS, HD, and MSDM. Table 21 presents the robustness comparison with Cho's method [14] in terms of NC for the Bunny object. According to Table 21, our method shows good robustness against noise, quantization, smoothing, and simplification, and outperforms Cho et al.'s method under these attacks. It can be seen in Table 22 that the proposed method outperforms Nakazawa et al.'s method [10] under a wide range of attacks, including noise addition, quantization, smoothing, and simplification. In sum, the majority of previous saliency-based 3D watermarking schemes give good imperceptibility performance thanks to this perceptual characteristic, but they generally show weakness under several attacks since they embed the watermark in the spatial domain. The novelty of the proposed method is that it gives good results for both imperceptibility and robustness thanks to the combination of QIM quantization of the wavelet coefficients and mesh saliency.

Conclusions
In this paper, a blind robust 3D mesh watermarking method based on visual saliency and wavelet coefficient vectors was proposed for copyright protection. The proposed method takes full advantage of jointly using mesh saliency and QIM quantization of the wavelet coefficients to ensure both high imperceptibility and robustness. The robustness requirement is achieved by quantizing the wavelet coefficients using the QIM scheme, while the imperceptibility is ensured by adjusting the embedding process according to the visual saliency. The experimental results demonstrate that the proposed scheme yields a good tradeoff between the imperceptibility and robustness requirements. Moreover, the experimental simulations show that the proposed method outperforms the existing methods against the majority of attacks. Future work will focus on improving the robustness against severe attacks such as cropping and remeshing, using the weights of the mesh saliency to embed more data.
In the multiresolution representation, V^j = (v^j_1, . . . , v^j_{k_j})^T denotes the vertex coordinates at resolution level j, k_j represents the number of vertices at level j, and W^{j+1} = (w^{j+1}_1, w^{j+1}_2, . . . , w^{j+1}_{t_{j+1}})^T is the wavelet coefficient vector at resolution level j + 1, where t_{j+1} = k_j − k_{j+1} is the number of wavelet coefficient vectors at level j + 1. A^{j+1} is a non-square matrix that represents the triangle reduction obtained by joining four triangles into one. The non-square matrix B^{j+1} produces the wavelet vectors, each of which starts from the midpoint of an edge at the lower resolution j + 1 and ends at the vertex deleted at that level.

Figure 7. The robustness performance in terms of correlation using several quantization steps for the Bimba, Horse, and Bunny models.

Figure 8. The imperceptibility performance in terms of MSDM using several quantization steps for the Bimba, Horse, and Bunny models.

Figure 9. The imperceptibility performance in terms of MRMS using several quantization steps for the Bimba, Horse, and Bunny models.

Table 1. Size of the 3D models used in the experiments.

Table 3. Watermark imperceptibility measured in terms of maximum root mean square error (MRMS), Hausdorff distance (HD), and mesh structural distortion measure (MSDM).

Table 4. Watermark imperceptibility without and with saliency, measured in terms of MRMS, HD, and MSDM.

Table 5. Watermark robustness against additive noise measured in terms of correlation.

Table 10. Watermark robustness against similarity transformations measured in terms of correlation.

Table 11. Watermark robustness against subdivision measured in terms of correlation.

Table 12. Watermark robustness against cropping measured in terms of correlation.

Table 14. Imperceptibility comparison with Wang et al.'s [3] method measured in terms of MRMS and MSDM for the Horse and Venus models.

Table 17. Results of robustness comparison with Son et al.'s [11] method measured in terms of BER against smoothing.

Table 18. Quality and robustness comparison with Wang et al.'s scheme [3] against noise addition measured in terms of MRMS, HD, MSDM, and NC.

Table 19. Results of quality and robustness comparison with Wang et al.'s scheme [3] against quantization measured in terms of MRMS, HD, MSDM, and NC.

Table 21. Results of robustness comparison with Cho et al.'s [14] method measured in terms of NC for the Bunny model.

Table 22. Results of robustness comparison with Nakazawa et al.'s [10] method measured in terms of NC for the Bunny model.