Calibration of In-Plane Center Alignment Errors in the Installation of a Circular Slide with Machine-Vision Sensor and a Reflective Marker

This paper describes a method for calibrating the in-plane center alignment error (IPCA) that occurs when installing a circular motion slide (CMS). In this study, by combining the moving carriage of the CMS and the planar PKM (parallel kinematic mechanism) with the machine tool, the small workspace of the PKM is expanded, and the workpiece placed on the table on which the CMS is installed is processed by the machine tool. However, to rigidly mount the CMS on the table, the preload between the guide and the support bearings must be adjusted with the eccentric bearing, and in this process the IPCA occurs. After installing a reflective marker on the PKM, the PKM is slowly rotated along with the ring guide in a stop-and-go manner without the PKM's own motion. Then, using a machine-vision camera installed at the top of the CMS, the IPCA, which is the difference between the actual center position and the nominal center position of the CMS with respect to the camera, can be calibrated through a circular fitting process. Consequently, it was confirmed that an IPCA of 0.37 mm can be successfully identified with the proposed method.


Introduction
The structure of the manipulators used as industrial robots can be mainly divided into serial kinematic mechanisms (SKM) and parallel kinematic mechanisms (PKM). The PKM is naturally associated with a set of constraint functions characterized by its kinematic closure constraints. The PKM has a closed-loop mechanical structure composed of two or more mutually connected links and has relatively high structural stiffness compared with the open-chained structure. However, due to its mutually constrained structural characteristics, various types of singularities exist inside its workspace [1,2], and the workspace is very small. To overcome these limitations arising from the inherent structural characteristics of the PKM, various studies have been conducted to eliminate the actuator and end-effector singularities, as well as to efficiently enlarge the usable workspace [3]. F.C. Park and J.W. Kim [4] used differential geometric tools to study the PKM's singularities and provided a finer classification of singularities. In their later works, they proposed actuator redundancy as a means of eliminating actuator singularities and enlarging the usable spatial workspace; consequently, a six-axis spatial PKM was manufactured based on this principle [5,6]. Gao, F. described the characteristics of the workspace according to variable link lengths and studied the workspace optimization of parallel mechanisms, and Merlet, J.P. studied 3D or 6D workspace boxes as an introductory approach to this area. In addition, Ryu, S.J. carried out workspace optimization by adding a linear joint, or motion guide, to the mechanism. Many studies have been conducted, such as performing the optimal design of the device itself or installing the device on top of a motion guide [7-15].
Due to the nature of the PKM for machine tools in this study, the end effector, as shown in Figure 1, should be capable of approaching and performing operations in any direction of the workpieces. Thus, by combining the PKM with the machine tool through the moving carriage on the ring guide, the workspace was expanded by rotating the singularity-free workspace of the PKM about a vertical axis passing through the center of the ring guide.

Ring slides, together with linear slides, have been used extensively as transfer systems for machine tool and automation applications [16]. Figure 2 shows applications that perform processes through ring slides and linear slides. (a) Optical lens assembly machines are the most representative industrial process using motion guides; ring slides are used for directional switching, and transport operations are carried out through linear slides. (b) The moving saw for tube cutting is a piece of equipment made to cut tubes; a cutting saw placed on the ring slide moves along the ring slide and cuts the long tube. (c) The pick-and-place device is a ring slide equipped with a serial mechanism performing a steady 360° rotational motion. As such, ring slides have mainly been used to redirect transportation routes and in areas where high precision is not required, even if the process takes place on the ring slide.

However, as shown in Figure 3, in a variety of applications, including the Trioptics high-precision optical measurement system 'Image master cineflex', a measurement camera has been mounted on ring slides to ensure 10 µm flange focal length accuracy and ±0.2% effective focal length accuracy [17]. This shows that ring slides can be used in applications that require relatively precise control. However, there is an important difference between this study and Trioptics' application of ring slides: the 'Image master cineflex' has a structure in which the ring slide is fixed on the base platform, and the bearing and pinion, with a measurement camera on them, rotate. In this study, on the other hand, the bearings and motors secure the rotating pinion to the platform serving as the base, above which the PKM-equipped ring slide rotates.

Figure 4 shows the V-bearing and circular motion slides of an external single-edge ring system. When rigidly integrating the V-surface edge of the ring slide with three support bearings, two concentric bearings and one eccentric bearing should be evenly spaced at 120° intervals according to the installation guide. In the case of the eccentric bearing used in this study, the eccentric offset between the central axis of rotation and the stud axis of the bearing is 1.9-5.5 mm. The preload adjustment using this eccentric offset can prevent improper slide operation owing to positional dimension errors between the bearing mounting stud holes of the base. However, as shown in Figure 5, a difference between the nominal and actual dimensions of the center of the slide against the base reference occurs. In this study, the process is carried out by rotating the ring slide directly, so the in-plane center alignment error (IPCA) that can occur when installing the ring slide must be identified and calibrated, because it seriously affects the relative positioning accuracy between the workpiece fixed inside the ring slide and the cutting part rotating along the circular guide [18-20].

Thus, in this study, a machine-vision camera-based calibration method is proposed to identify this IPCA with a reflective marker mounted on the T-shaped fixture at the end-effector of the PKM. Retro-reflective marker-based real-time positioning and localization with a machine-vision camera has been actively used in fields requiring precise measurement, such as robotic neurosurgery [21,22]. After installing a reflective marker on the PKM, the PKM is slowly rotated along the ring guide in a stop-and-go manner without the PKM's own motion, and the automated marker localization process is then performed to obtain the position of the marker with respect to the camera installed on the top of the circular motion slide (CMS). After all the center coordinates of the marker with reference to the camera coordinate system are obtained through this look-then-move process, the centroid coordinates of the CMS are estimated with a circular fit of the set of reflective-marker positions. As a result, the IPCA can be determined by calculating the relative distance from the nominal origin.

Definition of IPCA in CMS
Premise 1: The origins of all the coordinate systems to be described later are coplanar. Premise 2: Frame {C} and frame {E} estimated through circle fitting have the same orientation.

Figure 6 shows the camera frame {C}, the reference nominal frame {Nr}, the nominal center frame of the "design" slide {Nc}, the actual frame {A}, the tool frame {T} attached to the T-shaped calibration tool, and the frame {E} estimated from the trajectory of frame {T} within the field of view (FOV) of the camera. Because the actual location {A} of the CMS center is unknown, to define the nominal origin, a reference square is installed with three reflective markers attached to the "design" center on the optical table where the mechanism and camera are mounted, and {Nc} is defined. The detailed method of defining a nominal center frame is described in Section 3.

In this study, we aim to identify the alignment error ^{Nc}∆P_{E.ORG} between the origins of the two coordinate systems through frame {Nc} and frame {E} expressed with reference to frame {C}, as indicated by Equations (3) and (4).
To estimate {E}, a T-shaped tool with a reflective marker attached to the edge of the cutting part is installed, the marker moves on the cutting part at a constant angular displacement along the CMS (counterclockwise, CCW), and "look-then-move" [23] is repeated, where the camera recognizes the position of the marker with reference to {C}. Five images are captured at each position, and the mean position of the marker is estimated through the machine-vision process with reference to frame {C}. Through this process, with circle fitting of the collected trajectory of the marker, the origin position of the coordinate system {E} is obtained. In this case, all the measurement coordinates are expressed with reference to frame {C}.
The entire process of look-then-move is summarized in Figure 7.
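As a rough illustration, the stop-and-go acquisition described above can be sketched in Python. Here `rotate_to` and `capture_marker_xy` are hypothetical stand-ins for the actual CMS motion command and the machine-vision marker detection; only the loop structure (move, hold still, capture five images, average) reflects the paper's procedure.

```python
import statistics

def look_then_move(rotate_to, capture_marker_xy, angles_deg, n_images=5):
    """Stop-and-go acquisition: at each CMS angle, hold still, capture
    several images, and average the detected marker position."""
    points = []
    for theta in angles_deg:
        rotate_to(theta)              # move the carriage, then stop
        xs, ys = [], []
        for _ in range(n_images):     # five images per station in the paper
            x, y = capture_marker_xy()
            xs.append(x)
            ys.append(y)
        # mean marker position with reference to frame {C}
        points.append((statistics.mean(xs), statistics.mean(ys)))
    return points
```

The returned point set is what the later circle-fitting step consumes.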

Measurement System Configuration
As shown in Figure 8 and Table 1, the measurement system consists of a machine-vision camera, dimmable light-emitting diode (LED) lights, a CMS, a parallel cutting part, a calibration tool, reflective markers, a reference square, and a controller.

Here, h represents the height from the calibration tool to the camera, R represents the radius of the CMS, r represents the radius of rotation of the marker, w represents the width of the FOV at h, and d represents the depth of the FOV at h. The main parameters related to the camera and lens selection are the working distance h, which is the installation height of the camera; the minimum FOV width w of the camera; the depth d; the sensor width n; the depth t; and the pixel size m of the image acquired by the camera. Additionally, f represents the focal length between the camera and the lens, and α and β represent the horizontal and vertical pixel resolutions of the camera, respectively.
The factor that determines the measurement precision of the entire measurement system is the size of the unit pixel (in mm), and this value varies linearly with the working distance h, the installation height of the camera, once the machine-vision camera is selected. Therefore, in this study, the camera and the working distance were determined using Equations (5)-(8) so that the minimum size of the unit pixel would be approximately 0.05 mm. The results are presented in Table 2. The detailed specifications of the camera and lens were presented in a previous study [24]. Additionally, the closer the calibration tool that functions as the end effector of the mechanism is to the CMS that serves as a supporting base, the smaller the vibration resulting from the position transitions of the look-then-move process. As shown in Figure 9, the look-then-move experiment was performed after adjusting the circular trajectory to be as large as possible within the given FOV.
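The selection logic can be illustrated with the standard pinhole-camera similar-triangle relations, which Equations (5)-(8) presumably encode; this is a sketch under that assumption, not the paper's exact formulas.

```python
def fov_width(sensor_width_mm, focal_mm, h_mm):
    # FOV extent w on the object plane at working distance h: w = n * h / f
    return sensor_width_mm * h_mm / focal_mm

def unit_pixel_size(pixel_mm, focal_mm, h_mm):
    # Footprint of one sensor pixel m on the object plane: m * h / f
    return pixel_mm * h_mm / focal_mm

def working_distance_for_pixel(pixel_mm, focal_mm, target_mm=0.05):
    # Largest working distance h that keeps the unit pixel at the
    # target size (0.05 mm in this study)
    return target_mm * focal_mm / pixel_mm
```

For example, with an assumed 0.005 mm sensor pixel and a 25 mm lens, the 0.05 mm unit-pixel target would correspond to a working distance of 250 mm.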

Calibration of Camera
Prior to calibrating the IPCA using the previously selected machine-vision camera, as shown in Figure 10, camera calibration should be performed. Here, error factors are identified and calibrated, such as the distortion of the lens (including the radial distortion and tangential distortion that arise when a point in a three-dimensional space is mapped onto a two-dimensional image plane) and installation uncertainties [25][26][27].

The camera calibration error factors are mainly divided into external and internal factors. The external factors include the working distance of the camera, the light intensity, and the horizontal accuracy of the camera, and the internal factors include the focal length f (the distance between the lens center and the image sensor) and the principal point (the image coordinate of the foot of the perpendicular from the center of the lens to the image sensor).
Through the camera calibration, the aforementioned errors are calibrated, increasing the measurement and calculation accuracy in the machine-vision process. There are many methods for camera calibration [28-33], and in this study, calibration was performed using the camera calibration tool Vision Assistant provided by NI LabVIEW (Figures 11 and 12) [34,35].
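For reference, the radial and tangential lens distortion mentioned above is commonly described by the Brown-Conrady model; a minimal sketch follows. The coefficient values in the test are illustrative only, and this is the generic model rather than the specific parameterization used by Vision Assistant.

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to a
    point (x, y) in normalized image coordinates (Brown-Conrady model)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd
```

Calibration estimates k1, k2, p1, p2 (and the intrinsics f and principal point) so that this mapping can be inverted before measurements are taken from the image.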


Image Acquisition and Processing
The vision sensor must undergo several processing steps for accurate recognition of the reflective marker [36,37]. First, the grayscale image is converted into a binary image. The purpose of this conversion is to convert the image with grayscale data in the range of 0-255 into an image consisting of only 0s and 1s. This reduces the total data size, and only the reflective markers are displayed on the binary image. In LabVIEW, this process is performed using the threshold function. When the brightness of the image is expressed on the scale 0-255, the values from 0 to 170 correspond to 0 (black), and the values from 171 to 255 correspond to 1 (red) (Figure 12).
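The thresholding step can be sketched as follows; this is a minimal stand-in for LabVIEW's threshold function, using the 170/171 cut described above, with the image represented as a plain list of pixel rows.

```python
def binarize(gray, threshold=170):
    """Convert a grayscale image (values 0-255) to a binary image:
    0-170 -> 0 (background), 171-255 -> 1 (marker)."""
    return [[1 if px > threshold else 0 for px in row] for row in gray]
```

Only the bright retro-reflective markers survive the cut, which is what makes the subsequent circle search reliable.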
Second, all the points in the image that have a circular shape are identified through the "Finding circle" function. At this time, as shown in Figure 12, the head of the fastening screw also has a circular shape, and these are also recognized as circles.
Therefore, when the circle diameter is limited to 12-13 mm (so that only the reflective marker with the diameter of 12.66 mm can be recognized), only three circles are recognized, the center coordinates of these circles are acquired, and the process continues to the marker-identification step [38]. Figures 13 and 14 show the LabVIEW code for the machine-vision process and the hardware configuration for the experiment. After the look-then-move process, the center coordinates of the target marker at each angle were obtained through the machine-vision process, and among the circle-fitting functions [39-42], the Pratt method [42] was used to estimate the origin of the frame {E} of the CMS with reference to the coordinate system {C}.
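A minimal sketch of the final estimation step follows. For simplicity it uses a plain algebraic (Kasa) least-squares circle fit as a stand-in for the Pratt fit cited above, and computes the IPCA as the planar distance between the fitted center {E} and the nominal origin.

```python
def fit_circle(points):
    """Algebraic least-squares circle fit (Kasa); a simple stand-in for
    the Pratt fit used in the paper. Returns (cx, cy, radius)."""
    # Linear model: x^2 + y^2 = u*x + v*y + w, with u = 2*cx, v = 2*cy
    A, b = [[0.0] * 3 for _ in range(3)], [0.0] * 3
    for x, y in points:
        row, z = (x, y, 1.0), x * x + y * y
        for i in range(3):
            b[i] += row[i] * z
            for j in range(3):
                A[i][j] += row[i] * row[j]
    # Solve the 3x3 normal equations by Gaussian elimination with pivoting
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    u = [0.0] * 3
    for i in range(2, -1, -1):
        u[i] = (b[i] - sum(A[i][j] * u[j] for j in range(i + 1, 3))) / A[i][i]
    cx, cy = u[0] / 2, u[1] / 2
    r = (u[2] + cx * cx + cy * cy) ** 0.5
    return cx, cy, r

def ipca(points, nominal_center):
    """IPCA = planar distance between fitted center {E} and nominal origin."""
    cx, cy, _ = fit_circle(points)
    return ((cx - nominal_center[0]) ** 2 + (cy - nominal_center[1]) ** 2) ** 0.5
```

With exact points on a circle, the fit recovers the center exactly; with noisy marker positions, the least-squares averaging is what makes the center estimate robust.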


Image Acquisition and Processing
The vision sensor must undergo several processing steps for the accurate recognition of the reflective marker [36,37]. First, the grayscale image is converted into a binary image. The purpose of this conversion is to convert the image with grayscale data in the range of 0-255 into an image consisting of only 0 s and 1 s. This reduces the total capacity, and only reflective markers are displayed on the binary image. In LabVIEW, this process is performed using the threshold function. When the brightness of the image is expressed on the scale 0-255, the numbers from 0 to 170 correspond to 0 (black), and the numbers from 171 to 255 correspond to 1 (red) (Figure 12).
Second, all the points in the image that have a circular shape are identified through the "Finding circle" function. At this time, as shown in Figure 12, the head of the fastening screw also has a circular shape, and these are also recognized as circles.
Therefore, when the circle diameter is limited to 12-13 mm (so that only the reflective marker with the diameter of 12.66 mm can be recognized), only three circles are recognized, the center coordinates of these circles are acquired, and the process continues to the marker-identification step. [38] Figures 13 and 14 show the LabVIEW code for the machine-vision process and the hardware configuration for the experiment. After the look-then-move process, the center coordinates of the target marker at each angle were obtained through the machine-vision process, and among the circle fitting functions [39][40][41][42], the Pratt method [42] is used to estimate the origin of the frame {E} of the CMS with reference to the coordinate system {C}.  The estimated origin {E} of the circular slide is compared with the nominal origin, and the following procedure is performed using the machine-vision process and design drawings to define this nominal origin. First, as shown in Figure 15, the center coordinates of the three reflective markers attached to the reference square are obtained through the machine-vision process and defined as  Table 3 and Equation (8). Second, the nominal center frame { c N } of the CMS is defined in the design drawing. In this case, the nominal origin is the center of the circle with the Vsurface edge line of the CMS to be connected to the aforementioned three V bearings as the circumference. Because it is fixed by the eccentric bearing, to calculate { c N }, the center coordinates obtained with all three bearings set as the center and the diameter of the center coordinate trajectory of the CMS obtained with reference to the eccentric bearing offset of 3.6 mm. Therefore, owing to the distance error from the origin of the frame {E}, the error rate occurs in this range. 
The estimated origin {E} of the circular slide is compared with the nominal origin, and the following procedure, based on the machine-vision process and the design drawings, is performed to define this nominal origin. First, as shown in Figure 15, the center coordinates of the three reflective markers attached to the reference square are obtained through the machine-vision process and defined as P_1, P_2, and P_3. The nominal reference frame {N_r} is defined by these three coordinates, and the rotation of 1.3° with reference to frame {C} is calculated. The rotation matrix ^{C}R_{N_r} and the position ^{C}P_{N_r} at this point are presented in Table 3 and Equation (8). Second, the nominal center frame {N_c} of the CMS is defined from the design drawing. In this case, the nominal origin is the center of the circle whose circumference is the V-surface edge line of the CMS connected to the aforementioned three V bearings. Because the CMS is fixed by the eccentric bearing, {N_c} is calculated from the center coordinates obtained with the three bearings as centers and from the diameter of the center-coordinate trajectory of the CMS, which is determined by the eccentric-bearing offset of 3.6 mm. The distance error from the origin of the frame {E} is therefore evaluated within this range. In the design, the fastening angle α of the eccentric bearing lies between 0° and 31°, and the resulting diameter of the center-coordinate trajectory of the CMS, 1.23 mm, is shown in Figure 15. Additionally, Figure 16 shows the V-surface edge lines that can be connected for each angle α.
Therefore, the nominal origin coordinate ^{C}P_{N_c} can be obtained from the estimated {N_r} with reference to the coordinate system {C} and the already known ^{N_r}P_{N_c}, as follows:

^{C}P_{N_r} + ^{C}R_{N_r} · ^{N_r}P_{N_c} = ^{C}P_{N_c}    (10)
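Equation (10) is a standard planar frame composition. As a quick numeric sketch using the 1.3° rotation of {N_r} mentioned above (the offset values below are placeholders for illustration, not the paper's measured quantities):

```python
import numpy as np

# Rotation of the nominal reference frame {N_r} relative to {C} (1.3 deg).
theta = np.deg2rad(1.3)
C_R_Nr = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])

C_P_Nr = np.array([120.0, 80.0])   # origin of {N_r} in {C} (placeholder, mm)
Nr_P_Nc = np.array([35.0, -20.0])  # nominal center offset in {N_r} (placeholder, mm)

# Equation (10): express the nominal center offset in {C} and add the
# origin of {N_r}, yielding the nominal origin in the camera frame.
C_P_Nc = C_P_Nr + C_R_Nr @ Nr_P_Nc
print(np.round(C_P_Nc, 3))
```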

Analysis of Experimental Results
An experiment was performed in which the light intensity and the angle shifting interval of the look-then-move process were varied. The conditions were 480 and 770 lux and 5°, 10°, and 20°, respectively. Tables 4 and 5 present the mean value and the standard deviation (STD) of the estimated radius for each condition, as well as the distance error, which is the in-plane alignment error. As indicated by the experimental results, the distance error varied by up to 640% depending on the light intensity. The circle fitted at 770 lux exhibited little variation in the estimated radii and the STD across all angles, whereas the circle fitted at 480 lux exhibited a large variation. This occurred because the vision sensor was unable to properly recognize the target marker under the low light intensity, as confirmed by Figure 17a. Even after the deviations of the values were calibrated using the Pratt method, the results were unreliable, indicating an adverse effect on the precision.
Figure 18 shows the origin fitting at 770 lux, plotted at a scale within 0.8 mm in total. The results were valid, as they were almost independent of the angle shifting interval. Additionally, an alignment error relative to the nominal origin is evident in Figure 18. The alignment error was 0.37 mm, and the error rate was 27% relative to the range of the nominal origin.
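The distance error reported above is simply the Euclidean distance between the fitted origin {E} and the nominal origin {N_c}, both expressed in the camera frame {C}. A minimal sketch (the coordinates below are placeholders chosen only to illustrate a 0.37 mm magnitude, not the measured values):

```python
import numpy as np

def in_plane_alignment_error(est_origin, nominal_origin):
    """IPCA distance error: Euclidean distance between the fitted
    origin {E} and the nominal origin {N_c}, both expressed in {C}."""
    diff = np.asarray(est_origin, float) - np.asarray(nominal_origin, float)
    return float(np.linalg.norm(diff))

# Placeholder coordinates (mm) illustrating a 0.37 mm alignment error:
est = np.array([100.22, 50.30])   # fitted origin of {E} (assumed)
nom = np.array([100.00, 50.00])   # nominal origin {N_c} (assumed)
err = in_plane_alignment_error(est, nom)
print(round(err, 2))  # → 0.37
```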

Conclusions
To calibrate the center alignment error that occurs when a CMS is used to expand the workspace of the parallel mechanism, a method for determining the error relative to the nominal origin was developed. The method involves rotating the mechanism with a reflective marker attached and defining the center of the circular trajectory of the marker as the actual origin.
For this purpose, a camera was selected, and the working distance h was chosen from the relationship between the length per unit pixel and h so as to obtain adequate calibration precision: h was set to 565 mm, yielding a length per unit pixel of 0.07841 mm. This value was then implemented in the actual experimental setup. Additionally, to validate the results, the experiments were conducted several times while the light intensity and the shifting angle were varied. As a result, the origin center alignment error was identified as 0.37 mm under the 770 lux condition at all angle shifting intervals.
Because the proposed calibration method uses a camera that can be easily installed and leveled on a tripod, it can be applied wherever the reflective marker can be recognized, even when the available space is limited. Thus, it is very useful in the application of a CMS.