Analysis of Laser Tracker-Based Volumetric Error Mapping Strategies for Large Machine Tools

Abstract: The measurement and compensation of volumetric error in milling machines of medium and large size is a key aspect to meeting the precision requirements of the most demanding applications. There are several solutions for volumetric error measurement (usually based on lasers or on calibrated artifacts) that offer different specifications and lead to a variety of levels of precision, complexity of implementation and automation, cost of equipment, and measurement time, amongst others. Therefore, it is essential to have tools that allow, in each case, analysis as to which is the optimal calibration strategy, providing the criteria for evaluating different measurement equipment and strategies. To respond to this need, several tools have been developed which are able to simulate the entire calibration and compensation process (machine, measurement, model adjustment, etc.) and apply optimization methods to find the best measurement strategy for each application. For a given machine architecture and expected error ranges, the compensation error for each strategy is obtained by propagating measurement uncertainties and expected machine errors through the measurement and compensation model fitting process by Monte Carlo simulations. The use of this tool will be demonstrated through the analysis of the influence of the main design parameters of a measurement strategy for the calibration of a 3-axis machine tool, based on the measurement of tool position with a laser tracker.


Introduction
Geometric errors are one of the most important aspects when evaluating the precision of a machine tool. The form and assembly errors of the components of the machine (guideways, spindles, linear encoders, etc.) affect the position of the tool center point (TCP), causing a volumetric error that varies according to the position of the axes of the machine [1,2].
Traditionally, these errors have been measured by means of the so-called direct methods, measuring the error components of each axis (positioning, straightness, orientation) and their relative orientation (perpendicularity, parallelism) individually, to then compensate mechanically or by numerical control. Due to the high time cost and the difficulty of measuring some errors individually, the so-called indirect methods have been developed in recent years. These procedures, usually based on calibrated artifacts [3] or laser interferometry with tracking (laser trackers) [4], directly measure the deviations of the TCP at different positions of the working volume and analytically infer the geometric errors that cause such deviations. In this way, a model is obtained that can be extrapolated to other points in the workspace, thus performing a complete volumetric calibration by means of a single procedure.
The earlier procedures for volumetric calibration have been based on the measurement of calibrated artifacts with different shapes and patterns [5][6][7][8], comparing the distances measured in the machine

Virtual Calibration and Compensation
The calibration process consists of measuring the position of the TCP at several positions of the working volume with the laser tracker. At each point, the 3D position measured by the LT (API, Rockville, MD, USA) and the nominal axes position will be recorded; there will be differences between them due to both the geometric errors of the machine itself and the measurement uncertainties of the laser tracker. The deviation recorded between both measurements will be used to adjust and obtain a mathematical model that allows for the compensation of errors in the whole volume (see Section 2.3). The quality of the compensation and the obtained improvement in the volumetric error will depend on the proportion between the geometric errors of the machine tool and the uncertainties of the LT. To be able to simulate the calibration process virtually, it is necessary to define several aspects:

• Structure and dimensions of the virtual machine tool, in addition to the shape and magnitude of the geometric errors.

• Modeling of the laser tracker and characterization of the sources of uncertainty in the measurement process.

An important aspect to keep in mind in these analyses is that in real calibrations, the geometric errors of the machine are not known a priori. These errors can vary in magnitude and form for different machines, and what can be a correct strategy for one case may not be the right one for another. Therefore, it is necessary to consider this variability and to use the prior knowledge of the researcher for good modeling of these errors.
To estimate the uncertainty or accuracy level of a whole calibration plan is not straightforward, and several methodologies have recently been proposed [18]. To make an adequate comparison between different design alternatives, the calibration procedure is presented as an uncertainty propagation calculus, where the geometric errors represent the main input quantities (X_M), along with the measurement uncertainties of the LT (X_LT). The volumetric error (whose quantification is defined in Section 2.4) will be the output quantity Y. Therefore, the analysis is not carried out on a single machine with specific geometric errors, since the results of a single case may not be representative of the whole set. Instead, a Monte Carlo simulation will be carried out for each case study:

Y_m = f_m(X_M, X_LT), (1)

where f_m represents the whole calibration procedure and Y_m represents the volumetric error distribution for case study m. In this way, several virtual machines with different geometric errors will be generated according to a set of parameters (see Section 2.2), and each case will be evaluated considering the distribution composed of the totality of the results of the simulated cases. To define the number of simulations required in each case, the Monte Carlo simulation will be performed iteratively according to the Guide to the Expression of Uncertainty in Measurement (GUM) Supplement 1 [19], in which the stabilization of the results will be evaluated. A diagram of the entire process, described above, can be seen in Figure 1.
The upper part of Figure 1 represents a single calibration and compensation procedure, performed in a specific machine by a LT, obtaining a compensation model with a parameter identification algorithm and evaluating the improvement of the volumetric error in a validation process. Several design parameters will be defined for each case study, and the whole procedure will be simulated N times in each. Output distribution will be used to compare different cases.
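The Monte Carlo procedure of Equation (1) can be sketched in a few lines. This is a minimal illustration in which the machine model, the LT uncertainty, and the fitting step are reduced to hypothetical stand-ins (one linear positioning error per axis, plain Gaussian noise, and a per-axis linear fit), not the full models of the following sections:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_virtual_machine():
    """Draw one set of geometric-error parameters (stand-in: one random
    linear positioning error per axis, uniform within +/- 20 um/m)."""
    return rng.uniform(-20e-6, 20e-6, size=3)

def simulate_calibration(errors):
    """Stand-in for f_m: simulate the measurement with LT noise, fit a
    compensation model, and return the residual RMS volumetric error."""
    points = rng.uniform(0.0, 1.0, size=(100, 3))    # normalized axis positions
    true_dev = points * errors                        # deviation caused by the errors
    measured = true_dev + rng.normal(0.0, 2e-6, true_dev.shape)  # ~2 um LT noise
    # least-squares fit of the (linear) error model, independently per axis
    fitted = (points * measured).sum(axis=0) / (points**2).sum(axis=0)
    residual = true_dev - points * fitted
    return np.sqrt((residual**2).sum(axis=1).mean())  # RMS volumetric error

# N Monte Carlo draws -> distribution Y_m of the compensated volumetric error
Y = np.array([simulate_calibration(draw_virtual_machine()) for _ in range(200)])
```

The distribution `Y`, rather than any single draw, is what the case studies compare.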

Modeling
A virtual volumetric calibration of a large gantry-type milling machine will be carried out as an application example; its schematic representation can be seen in Figure 2. Spatial error measurement is an issue that has recently been addressed with new formulations based on homogeneous transformation matrices [7,20], in either multi-axis lathes or milling machines. Therefore, the kinematic structure of the machine is modeled using transformation matrices [9]. This virtual machine model provides the coordinates of the TCP, taking into account the nominal movement of the axes, the tool offset, and any error source that may cause deviations, such as the effects of the geometric errors present in each axis of the machine, environmental effects, or the flexible behavior of the structure. Two machine models are needed to implement the virtual calibration procedure. One is used to simulate the real machine M in the calibration procedure, where any of the previously mentioned effects can be simulated. The other represents the calibration model C, where only the effects to be identified are included (geometric errors in this case), since the identification of further effects would require a more extensive measurement procedure. Both machine models are defined as

M = VM(X, t, E_M), C = VM(X, t, E_C), (2)

where M represents the machine model used to generate the calibration test, C is the calibration model whose parameters need to be estimated, X represents the nominal movement of the axes, t is the tool offset, and E_M and E_C contain the parameters associated with different error sources.

The modeling of the geometric errors present in model M will be carried out by means of a combination of Fourier series and ordinary polynomial terms of order 0 and 1 [5]:

E_NN(x) = a_0 + a_1·x + Σ_n (b_n·sin(nπ·x) + c_n·cos(nπ·x)), (3)

where a_0 and a_1 are the 0 and 1 order terms of an ordinary polynomial, n is the order of the Fourier series terms, b_n and c_n are the sine and cosine amplitudes of the n-th Fourier term, and x represents the normalized position of the axis. With this equation, each of the 18 position-dependent geometric errors contained in a three-axis machine will be modeled [21].

As discussed in Section 2, the analysis of each case study is based on the results of several simulations. In each of them, the geometric errors will be randomly generated, assigning a value according to a uniform distribution to the parameters of Equation (3). Parameters representing the amplitude (half-width) of the range for each error are described in Table 1.

The parameter set E_C defined in model C represents the geometric errors that will be identified in the parameter estimation procedure. The modeling of these geometric errors will be carried out by a combination of Legendre polynomials up to degree 6, where each error is modeled as

E(x) = Σ_{n=0..6} a_n·L_n(x), (4)

where the parameters a_n are the parameters to be estimated and L_n the Legendre polynomials of degree n. The specifications of the virtual machine are summarized in Table 2, which remain the same for both the M and C machine models.
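Equations (3) and (4) are straightforward to evaluate numerically. In the sketch below the coefficient values are arbitrary placeholders, and the Legendre expansion relies on numpy.polynomial.legendre, with the normalized axis position rescaled to the [-1, 1] domain of the polynomials (an assumption of this illustration):

```python
import numpy as np
from numpy.polynomial import legendre

def error_fourier(x, a0, a1, b, c):
    """Equation (3): order-0/1 polynomial terms plus a Fourier series.
    x is the normalized axis position; b[n-1], c[n-1] are the n-th amplitudes."""
    e = a0 + a1 * x
    for n, (bn, cn) in enumerate(zip(b, c), start=1):
        e += bn * np.sin(n * np.pi * x) + cn * np.cos(n * np.pi * x)
    return e

def error_legendre(x, a):
    """Equation (4): degree-6 Legendre expansion sum a_n * L_n.
    x in [0, 1] is rescaled to [-1, 1] before evaluation."""
    return legendre.legval(2.0 * x - 1.0, a)

x = np.linspace(0.0, 1.0, 5)
# placeholder coefficients, magnitudes loosely in the micrometer range
e_m = error_fourier(x, a0=1e-6, a1=5e-6, b=[2e-6, 1e-6], c=[0.5e-6, 0.2e-6])
e_c = error_legendre(x, a=np.zeros(7))   # all-zero coefficients -> zero error
```

One such function per error is enough to describe all 18 position-dependent geometric errors of a three-axis machine.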

Machine Tool       X (mm)     Y (mm)     Z (mm)
Working volume     0-5000     0-3000     0-1500

The measurement process will be simulated by modeling a conventional laser tracker, consisting of a laser interferometer and two articulations controlled by angular encoders. The kinematic structure of the LT is also modeled by homogeneous transformation matrices. The virtual model was based on the commercial LT API Radian Pro® [24], in which it is considered that the systematic errors have been corrected:

P_LT = RT_XYZ0 · R_AZ · R_EL · T_L, (5)

where T_L is the transformation matrix representing the laser interferometer measurement, and R_EL and R_AZ are the rotation matrices for the elevation and azimuth articulations of the laser tracker. RT_XYZ0 represents the translation and rotation between the machine coordinate system and the LT, which is obtained a posteriori by fitting the coordinates measured by the LT to those recorded by the virtual machine.

The base and distance-dependent uncertainties of the encoders and the length measurement of the laser interferometer are modeled. To obtain the values of these parameters, a fitting process has been carried out simulating the in-line distance and horizontal scale bar measurements described in ASME B89.4.19 [25], and fitting the results with the ones provided by the LT manufacturer [24]. The fitting results for both tests can be seen in Figure 3. A notable fit has been achieved, as the manufacturer data and the model uncertainty values deviate by less than 5 µm in the horizontal bar test and by less than 1 µm in the in-line test, even for very large distances. Therefore, it is considered that the LT has been modeled with a sufficient level of accuracy.

Parameter values obtained in the previous procedure are shown in Table 3, where the uncertainty parameters are given for a coverage factor k = 1, assuming a Gaussian distribution, as well as the range for the interferometer and the angular joints. No base uncertainty has been considered for distance measurement, as the device will be used in interferometer mode (IFM) and not as an absolute distance meter (ADM).
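The kinematic chain of the LT described above can be sketched with homogeneous matrices. The rotation axes, sign conventions, and noise magnitudes below are illustrative assumptions, not the identified Table 3 parameters:

```python
import numpy as np

def rot_z(theta):
    """Azimuth rotation (about the vertical axis, assumed here to be z)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rot_y(phi):
    """Elevation rotation (assumed here to be about the y axis)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

def trans_x(d):
    """Interferometer length, a translation along the beam direction."""
    t = np.eye(4)
    t[0, 3] = d
    return t

def lt_measure(d, az, el, rng=None):
    """Point seen by the LT for distance d (m) and joint angles az/el (rad).
    Gaussian perturbations stand in for the distance- and angle-dependent
    uncertainties; the magnitudes are placeholders."""
    if rng is not None:
        d += rng.normal(0.0, 0.5e-6 + 0.3e-6 * d)   # placeholder distance noise
        az += rng.normal(0.0, 1e-6)                 # placeholder angular noise
        el += rng.normal(0.0, 1e-6)
    p = rot_z(az) @ rot_y(el) @ trans_x(d) @ np.array([0.0, 0.0, 0.0, 1.0])
    return p[:3]

p = lt_measure(5.0, np.deg2rad(30.0), np.deg2rad(10.0))   # noise-free example
```

Because the two joint matrices are pure rotations, the noise-free point always lies at the interferometer distance from the LT origin, which is a convenient sanity check for the model.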

Parameter Estimation
Once the virtual calibration test has been generated, it is necessary to implement an estimation procedure that takes the recorded deviations as a starting point and tries to minimize them by adjusting the results to a model. In this case, the compensation model is a virtual machine similar to the one used in the generation of the test in which the 18 geometrical errors of a three-axis machine are parameterized in order to adjust to the measured errors. However, there are important differences regarding the machine used in the generation of the virtual calibration.
The polynomials used in the parameter estimation process are Legendre polynomials, which differ from those used when modeling the geometric errors of the original machine (see Equation (4)). In real machines, the geometric errors will never perfectly follow the shape of a specific polynomial form, so this modification is introduced to take into account the difference between the errors of the machine and those characterized in the compensation model.
These differences between the compensation model and the original system are known as model uncertainties [26]. These uncertainties are inevitable in any estimation process on a real system, either because the whole system can never be modeled in detail or because the calibration test does not contain enough information to infer and adjust all the parameters associated with the error sources. The differences between the original machine and the compensation model in the virtual calibration and compensation process allow for evaluation of the robustness of the process. It is important to note that obtaining the compensation model is insensitive to whether the recorded errors come from a real test on a machine or a virtual one, as is the case here.
The goal of the adjustment process is to find the parameters E_C of the Legendre polynomials that represent the geometric errors, in order to minimize the error between the measured deviations and those predicted by the model, adjusting the mentioned polynomials to the error forms of the original system. That is, an estimation procedure will be carried out so that the error between the real model M and the compensation model C at the measured points X is minimized:

min over E_C of Σ ||e(X)||², with e(X) = M(X, t, E_M) − C(X, t, E_C), (6)

where e represents the error in the X, Y, and Z directions (e_X, e_Y, e_Z) of each point measured in the calibration. This estimation is achieved by weighted least squares, a variation of generalized least squares in which each measurement is assumed to be independent of the others (there is no correlation between them), but heteroscedasticity is present; that is, the measured values have different variances [6,27]. The analytical expression of this procedure can be observed in Equations (7) and (8).
Ê_C = (XᵀWX)⁻¹·XᵀW·e, (7)

W = Ω⁻¹ = diag(1/σ²_x, 1/σ²_y, 1/σ²_z, ...), (8)

where X represents the linear relationship between the parameters and the measurement errors defined by the compensation model, and W is the weight matrix calculated according to the uncertainty associated with each measurement. The matrix is the form adopted by the covariance matrix Ω when all measurements are independent, so that all elements outside the main diagonal are 0 and the elements of the diagonal are the variances associated with each measurement point in the three directions (σ²_x, σ²_y, σ²_z). The calculation of these variances is made by propagating the uncertainties of the LT model described in Section 2.2.
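The weighted least-squares step of Equations (7) and (8) can be illustrated numerically. The design matrix, the true parameters, and the per-point variances below are synthetic placeholders rather than the actual compensation-model Jacobian:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic linear problem: e = X @ beta + heteroscedastic noise
n, p = 60, 3
X = rng.normal(size=(n, p))                 # stand-in design matrix
beta_true = np.array([2.0, -1.0, 0.5])      # stand-in model parameters
sigma = rng.uniform(0.5, 2.0, size=n)       # per-measurement std (from LT propagation)
y = X @ beta_true + rng.normal(0.0, sigma)  # noisy "measured" deviations

# weights = inverse variances: W is the inverse of the diagonal covariance
W = np.diag(1.0 / sigma**2)
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # (X^T W X)^-1 X^T W y
```

Solving the normal equations with `np.linalg.solve` avoids forming the explicit inverse, which is better conditioned numerically than computing (XᵀWX)⁻¹ directly.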

Validation of the Compensation Model
To evaluate the quality of the model obtained, the improvement of the volumetric error in the whole working volume is checked. For this purpose, a three-dimensional mesh is defined with uniformly spaced points in X, Y, and Z, and the deviation from the nominal point caused by the geometric errors is evaluated, taking this value as the initial error. Next, the volumetric compensation in three directions is applied for each point, which is calculated according to the model obtained in the previous step, improving the existing volumetric error:

VE_0(X_v) = ||M(X_v, t, E_M) − X_v||, (9)

VE(X_v) = ||M(X_v, t, E_M) − C(X_v, t, Ê_C)||, (10)

where VE_0 represents the initial volumetric error compared with an ideal machine behavior and VE the volumetric error after compensation is applied, in all X_v validation points. Equations (9) and (10) provide the volumetric error distribution for the X_v point set. In this paper, the RMS value of this distribution will be taken as the numeric value that defines the quality of the compensation procedure.
Regarding the Monte Carlo simulation presented in Equation (1), the RMS value obtained in each simulation represents the y_r draw of the output variable Y that will compound the distribution of results.
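The validation step of Equations (9) and (10) reduces to evaluating deviations on a uniform grid and taking the RMS of their norms. In the sketch below, a hypothetical linear error field stands in for the machine model, and the compensation is assumed to recover 90% of it:

```python
import numpy as np

def volumetric_rms(deviations):
    """RMS of the Euclidean deviation norms over all validation points."""
    norms = np.linalg.norm(deviations, axis=1)
    return np.sqrt(np.mean(norms**2))

# uniform validation mesh over the 5000 x 3000 x 1500 mm working volume
xs, ys, zs = np.meshgrid(np.linspace(0, 5000, 6),
                         np.linspace(0, 3000, 4),
                         np.linspace(0, 1500, 3), indexing="ij")
Xv = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)

# hypothetical error field (a pure linear scaling error) and its compensation
dev0 = 1e-5 * Xv                 # initial deviation VE_0 at each point
dev = dev0 - 0.9 * dev0          # compensation removes 90% of the error
improvement = volumetric_rms(dev0) / volumetric_rms(dev)
```

Since the residual field here is exactly one tenth of the initial one, the RMS ratio is exactly 10; with a fitted model the ratio would depend on how well the Legendre expansion captures the error shapes.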
It is important to highlight that using virtual tools significantly facilitates the evaluation of the adjustment quality. In the case of a real test, after carrying out the calibration test and adjusting the data, it is necessary to make new measurements on the machine to observe the improvement, which may involve a time cost very similar to the one invested in the calibration phase. Furthermore, the very same measurement uncertainties present in the calibration test will probably affect the validation test, as the same procedure is usually used, distorting the results that are supposed to be validated. However, in the case of virtual simulations, the validation is practically instantaneous and is done without any additional source of uncertainty.

Results and Discussion
In this section, comparisons between different case studies are shown, which allow optimizing different aspects of the calibration and compensation process. The aspects analyzed are the following:

• Comparison between different types of measurement strategies.

• Variation of the spacing between measurement points.

• Similar measurements placing the LT in different positions.

Measurement Trajectory Strategies
The first aspect to be analyzed is the path that the TCP should describe throughout the calibration process. The most common strategies consist of linear trajectories along the three linear axes in which the measurement of the TCP is registered at regular intervals. These trajectories are then combined to form planes, cubes, or other forms that encompass the entire working volume and provide enough information to infer the geometric errors.
In this comparison, three different measurement strategies have been studied: trajectories in the form of main planes, in the form of a hexahedron, and forming the main diagonals of the volume. These strategies can be observed in Figure 4. Due to the disparity of trajectories between the different strategies, the spacing between points of measurement has been adapted for each case, resulting in approximately the same total number of points and, therefore, in tests of comparable duration.
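As an illustration, a strategy's point set can be generated programmatically. The function below is a hypothetical helper that samples the four main space diagonals of the working volume, with a per-diagonal point count that can be tuned so the total matches the other strategies:

```python
import numpy as np

def diagonal_points(volume, n_per_diag):
    """Measurement points along the four main space diagonals of the
    working volume (one of the three strategies compared here)."""
    hi = np.asarray(volume, dtype=float)
    # the four bottom corners; each diagonal runs to the opposite top corner
    starts = [np.array([x, y, 0.0]) for x in (0.0, hi[0]) for y in (0.0, hi[1])]
    t = np.linspace(0.0, 1.0, n_per_diag)[:, None]
    segments = [(1.0 - t) * s + t * (hi - s) for s in starts]
    return np.vstack(segments)

# spacing chosen so the total point count is comparable across strategies
pts = diagonal_points((5000.0, 3000.0, 1500.0), n_per_diag=25)   # 4 x 25 points
```

Analogous helpers for the plane and hexahedron strategies would stack one-axis linear trajectories instead of three-axis diagonal moves.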
Simulation has been carried out according to the procedure shown in Sections 2-4. The results are shown in Figure 5, in which it can be seen that the plane-type trajectories provide the best results when it comes to improving the error in the volume. Therefore, in the successive analyses, this strategy will be taken as the benchmark test to optimize the other aspects.

Spacing between Measurement Points
In this section, the improvement of volumetric error will be analyzed as the number of measured points increases (reducing the spacing between them) in the calibration test. The trajectory of the hexahedron presented in the previous section is used.
The results of the simulations are shown in Figure 6. The original volumetric error of the machine is compared with the error remaining after compensation for a growing number of measurements. The duration of the tests has been calculated by assigning 15 s to the measurement of each point and a fixed duration of 20 min to set the measurement system. As expected, as the number of points increases, the quality of the compensation improves, obtaining a substantial improvement at the beginning but stabilizing as the density increases. On the other hand, as the number of measurements increases, the duration of the test will also increase proportionally. Therefore, the decision on this point lies in deciding how long it is worthwhile to continue increasing the number of measurements considering the incremental improvement from one set to another.

Analyzing the results for this case, it could be said that above 600 points, the incremental improvement obtained would not compensate for the increase in the calibration time.

It is important to note that this tradeoff between duration and accuracy may vary depending on the application or the objectives of the calibration.
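With the timing assumptions stated above (15 s per measured point plus a fixed 20 min setup), the duration side of this tradeoff reduces to a one-line calculation; the helper below is purely illustrative:

```python
def calibration_duration_min(n_points, s_per_point=15.0, setup_min=20.0):
    """Total test duration in minutes: fixed setup plus per-point time."""
    return setup_min + n_points * s_per_point / 60.0

d600 = calibration_duration_min(600)   # 20 min setup + 600 * 15 s = 170 min
```

Plotting this linear cost against the asymptotic error improvement of Figure 6 makes the diminishing-returns point easy to locate for other applications as well.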


Positioning of Laser Tracker
The position of the LT with respect to the measurement points is another important aspect to consider when making the measurements. There are two main issues that must be taken into account when evaluating the position of the LT.
On the one hand, due to independent operation, the measurement uncertainty of the points furthest from the LT will always be greater because the accuracy of the interferometer decreases with distance, and the angular error of the encoders is also amplified (see Table 3). Therefore, the LT should be placed as close as possible to the set of points to be measured, including among the possibilities to place it within the volume itself.
On the other hand, there are several technical limitations regarding the possibility of measuring any point. Firstly, the articulation corresponding to the elevation angle has a limited range (see Table 3), which restricts the ability to measure points located directly above the LT. Secondly, the reflector that returns the laser beam, which is mounted in the tool position, usually has a limited angle of incidence (between 120° and 150°) that does not allow measuring the most cornered positions. There are active reflectors with motorized axes that allow measurement from any angle, but equipment with such capabilities is scarce and entails a higher cost. Finally, there are mechanical limitations that depend on each case, in which, for example, the machine structure itself interferes with the measurements: in Figure 7, the columns of the gantry machine would interfere with the measurement of some points from position #3. In the simulations carried out in this section, these structural limitations are not taken into account, since their correct evaluation would require virtual commissioning tools.

Figure 7 shows the simulated cases for this analysis. Position #1 is the most usual in this type of measurement, since it requires the smallest range of incidence angles for the reflector; on the other hand, it is also the position furthest from the measurement points. Positions #2 and #3 are much more centered, especially #3, as it is located along the longer axis. However, these positions demand a wider acceptance angle from the reflector and may prevent the measurement of some points. At this stage, it is necessary to decide between two alternatives:
• Keep the LT in the same position, leaving the out-of-range points unmeasured, in the hope that the closer proximity to the remaining points will result in a better characterization.
• Move the LT slightly away in the opposite direction, measuring all the required points, but losing some precision due to the greater distance to the points.
In these analyses, the second option has been chosen. Position #4 is a special case since, to perform a measurement from there, an active reflector is absolutely necessary. At the same time, it is the position closest to the point set and, therefore, the one from which the best result could be expected. It is included in this analysis, despite the limitations involved, to serve as a reference for the other cases.
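As an illustration, the elevation-range and incidence-angle restrictions discussed above can be checked programmatically when selecting candidate LT positions. The following sketch uses illustrative limit values (the actual ones depend on the LT and reflector models, cf. Table 3) and assumes a reflector axis pointing straight down from the tool:

```python
import math

# Hypothetical limits; actual values depend on the LT and reflector (cf. Table 3)
ELEVATION_RANGE_DEG = (-50.0, 80.0)   # assumed elevation articulation range
REFLECTOR_ACCEPTANCE_DEG = 120.0      # assumed acceptance cone of a passive reflector

def is_measurable(lt_pos, point, tool_axis=(0.0, 0.0, -1.0)):
    """Check elevation-range and incidence-angle constraints for one TCP point."""
    dx, dy, dz = (p - l for p, l in zip(point, lt_pos))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Elevation angle of the beam leaving the LT head
    elevation = math.degrees(math.asin(dz / dist))
    if not ELEVATION_RANGE_DEG[0] <= elevation <= ELEVATION_RANGE_DEG[1]:
        return False
    # Incidence angle between the beam (point -> LT) and the reflector axis
    beam = (-dx / dist, -dy / dist, -dz / dist)
    cos_inc = sum(b * t for b, t in zip(beam, tool_axis))
    incidence = math.degrees(math.acos(max(-1.0, min(1.0, cos_inc))))
    return incidence <= REFLECTOR_ACCEPTANCE_DEG / 2.0
```

Applied to every point of a candidate strategy, such a check makes explicit which points are lost from each LT position before running the full simulation.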
As can be seen in Figure 7, positions #1-#3 show very similar results. It is therefore considered that the effects mentioned in the previous paragraph compensate each other, resulting in a similar calibration quality. Position #4 shows a slight improvement over the other positions, as expected.

Discussion
Results obtained in Section 3 can be summarized in the following points:
• Different trajectories in tests of similar duration affect the quality of the compensation. The orthogonal planes offer slightly better results than the hexahedron, which is the usual strategy in these types of calibrations.
• Calibration quality shows an asymptotic behavior as tests with an increasing number of points are performed. When the number of measured points is high enough, the improvement stabilizes.
• The LT positions studied in this paper show a minor influence on the quality of the calibration process. The centered position shows slightly better results, but requires an active reflector to measure all points.
In Section 3.1, the trajectory composed of the main diagonals shows the worst results. Further analysis has shown that the three-axis movements used in this strategy make it harder to decouple the individual geometric errors than multiple one-axis linear trajectories, even if the total number of measurements remains the same. Regarding the results of the orthogonal planes and the hexahedron, similar results were expected, as both strategies represent similar trajectories in different configurations. The slightly better result of the planes strategy is justified, as the mean distance from the measured points to the LT position (position #1 in Figure 7) is smaller in this case.
In Section 3.2, it is important to note that the stabilization observed in Figure 6 as the number of points increases can vary according to the geometric errors of the machine. If the spatial frequencies of the errors are higher, a greater density of points is expected to be needed to characterize them correctly and, therefore, stabilization will only be reached with a larger number of points. Further research on the relationship between measurement density, machine error order, and estimation model order is of major interest in order to define an optimal measurement strategy and to understand the influence of overfitting and underfitting.

In Section 3.3, the positions #1-#3 were shown to have a minor influence on the final results. As the measurement strategies analyzed in this paper are symmetric, similar measurement distances from the LT to the TCP were achieved in each of the cases. Future research should analyze the same effect in nonsymmetric measurement strategies. The level of improvement for position #4 will depend on the distance-dependent accuracy of the LT. In such cases, using an active reflector should be considered.
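The uncertainty-propagation loop underlying these analyses (draw random machine errors, simulate noisy LT measurements, fit the compensation model, and evaluate the residual error on a verification grid) can be sketched for a single axis as follows. The quadratic error model, noise level, and travel are illustrative assumptions, not the models used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_positioning_error(x, coeffs):
    """Assumed axis positioning error: low-order polynomial in x (illustrative)."""
    return coeffs[0] * x + coeffs[1] * x**2

def run_trial(n_points, noise_um=10.0, travel_mm=3000.0):
    """One Monte Carlo trial: measure, fit the model, return residual RMS (mm)."""
    coeffs = rng.normal(0.0, [20e-6, 5e-9])  # random "true" machine error coefficients
    x_meas = np.linspace(0.0, travel_mm, n_points)
    # Simulated LT measurement = true error + measurement noise (in mm)
    y_meas = (true_positioning_error(x_meas, coeffs)
              + rng.normal(0.0, noise_um * 1e-3, n_points))
    # Least-squares fit of the compensation model to the noisy measurements
    A = np.column_stack([x_meas, x_meas**2])
    fit, *_ = np.linalg.lstsq(A, y_meas, rcond=None)
    # Residual error after compensation, evaluated on a dense verification grid
    x_ver = np.linspace(0.0, travel_mm, 200)
    resid = true_positioning_error(x_ver, coeffs) - (fit[0] * x_ver + fit[1] * x_ver**2)
    return np.sqrt(np.mean(resid**2))

def compensation_uncertainty(n_points, n_trials=200):
    """Mean residual RMS over many trials, for a given measurement density."""
    return float(np.mean([run_trial(n_points) for _ in range(n_trials)]))
```

Running `compensation_uncertainty` for an increasing number of points reproduces, in this toy setting, the asymptotic behavior discussed above: the residual drops quickly at first and then stabilizes once the model order is well covered by the measurements.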

Conclusions
This article has presented a new strategy for optimizing laser tracker-based volumetric error calibration processes by means of simulation of the whole calibration process. By following this approach, the goal is to find, for each machine to be compensated, the optimal calibration procedure that achieves the expected calibration accuracy with minimal machine occupation time.

• The feasibility and interest of the proposed approach have been demonstrated by performing an analysis of the calibration process on a large machine tool, which has provided meaningful results.
• The calibration results obtained in the analyses indicate that laser trackers are a valid solution for the calibration of large machine tools but, also, that the measurement strategy can have a relevant influence on the calibration uncertainty and on the measurement time.
• The resulting optimization criteria cannot be considered of general application, since the optimal solution depends on each machine type and on the customer requirements. What can be taken as a general solution, instead, is the methodology and the simulation tools presented here for finding that optimum.
• The simulation of the machine to be calibrated is critical for obtaining representative results and requires an estimation of the expected machine errors. The proposed error modeling strategy and the Monte Carlo simulation ensure that the required engineering judgement is properly considered in the optimization process.
• The uncertainty model of the laser tracker as stated in the product catalogue has not been considered appropriate for the requirements of some of the analyses, such as the influence of the position of the laser tracker, and a model based on the results of ASME B89.4.19 tests has been successfully implemented.
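As an illustration of the last point, a distance-dependent uncertainty model of the generic form U(L) = A + B·L can be implemented as a small helper for the simulations. The constants below are purely hypothetical placeholders; in practice they would be fitted to the ASME B89.4.19 test results of the actual instrument:

```python
def lt_point_uncertainty_um(distance_m, a_um=15.0, b_um_per_m=6.0):
    """Expanded point uncertainty U(L) = A + B*L, in micrometers.

    a_um and b_um_per_m are illustrative values, not taken from any
    datasheet; they should be fitted to ASME B89.4.19 test results
    for the specific laser tracker being simulated.
    """
    return a_um + b_um_per_m * distance_m
```

Unlike a single catalogue figure, this form lets the simulation penalize LT positions that increase the mean distance to the measured points, which is precisely what the analysis of positions #1-#4 requires.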