Design and Test of a Spatial Nanopositioner for Evaluating the Out-of-Focus-Plane Performance of Micro-Vision

Micro-vision possesses high in-focus-plane motion tracking accuracy. Unfortunately, out-of-focus-plane displacements cannot be avoided, decreasing the in-focus-plane tracking accuracy of micro-vision. In this paper, a spatial nanopositioner is proposed to evaluate the out-of-focus-plane performance of a micro-vision system. A piezoelectric-actuated spatial multi-degree-of-freedom (multi-DOF) nanopositioner is introduced. Three in-plane Revolute-Revolute-Revolute-Revolute (RRRR) compliant parallel branched chains produce in-focus-plane motions. Three out-of-plane RRRR chains generate out-of-focus-plane motions. A typical micro-vision motion tracking algorithm is presented. A general grayscale template matching (GTM) approach is combined with the region of interest (ROI) method. The in-focus-plane motion tracking accuracy of the micro-vision system is tested. Different out-of-focus-plane displacements are generated using the proposed nanopositioner. The accuracy degradation of the in-focus-plane motion tracking is evaluated. The experimental results verify the evaluation ability of the proposed nanopositioner.


Introduction
Micro-vision, consisting of a microscope and a camera, offers the advantage of being a non-contact measurement method with visualization capabilities [1][2][3][4][5][6][7]. The higher the magnification, the smaller the depth of field. Because of this small depth of field, micro-vision is generally employed to measure micrometer-scale or sub-micrometer-scale displacements in the focus plane. Several factors affect the in-focus-plane measurement accuracy, such as defocus blur, motion blur, and Gaussian blur. Unfortunately, out-of-focus-plane displacements are unavoidable: the relative distance between the microscope lens and the measured object constantly changes, producing varying degrees of defocus blur. Compared with macro-vision, the resulting accuracy degradation of micro-vision is more prominent [6][7][8][9][10][11]. Out-of-focus-plane displacements of moving targets are more severe than those of stationary objects, so the defocus effect of micro-vision is worse for in-focus-plane motion tracking.

Mechanical Design of the Spatial Nanopositioner
A spatial nanopositioner is proposed. A 6-Revolute-Revolute-Revolute-Revolute (6-RRRR) CPM acts as the mechanical unit of the nanopositioner. The 6-RRRR CPM consists of six parallel branched chains. Each branched chain is composed of four rotating pairs using notch flexure hinges. The first rotational pair, as the equivalent active pair, is denoted using R. The other three rotating pairs, as passive pairs, are represented using RRR. Therefore, every branched chain is labeled as RRRR. The 6-RRRR CPM possesses a two-in-one structural configuration of two layers. The upper layer is an in-plane 3-RRRR CPM. The lower layer is an out-of-plane 3-RRRR CPM. The two layers are connected using a metal plate. The end-effector of the nanopositioner connects the six RRRR branches directly. Six PEAs drive the six RRRR branches separately and act as the actuating unit of the nanopositioner.

In-Plane Motion Generation and Measurement
The upper layer, namely, the in-plane 3-RRRR CPM, is composed of three RRRR branches. The three branches are located on the same plane. The in-plane three-degree-of-freedom (3-DOF) nanoscale-accuracy motion is generated. The end-effector acts as the tracking target of the micro-vision system. Three capacitive displacement sensors (CDSs) are used to measure the 3-DOF output displacements of the end-effector. Three PEAs (marked in blue) and three CDSs (marked in red) are shown in Figure 1.

Out-of-Plane Motion Generation and Measurement
The lower layer, namely, the out-of-plane 3-RRRR CPM, is composed of three RRRR branches. The three branches are located on three different planes. The plane of each out-of-plane RRRR branch is perpendicular to the same plane of the three in-plane RRRR branches. The out-of-plane motion is produced and added to the in-plane trajectory of the end-effector. One CDS is employed to measure the out-of-plane movement of the end-effector. Three PEAs (marked in blue) and one CDS (marked in red) are shown in Figure 2.

In-Focus-Plane Motion Tracking of Micro-Vision
To represent as many application cases as possible, typical algorithms are selected for the micro-vision system. The focus plane of micro-vision serves as the reference for calculating out-of-focus-plane displacements. External measurement and image-feature evaluation are two common methods for locating the focus plane. Sharpness evaluation methods based on image features are mature, low-cost, and easy to implement.

Determination of the Focus Plane
The variance method characterizes image sharpness through the spread of grayscale values: the grayscale variance of a clear image is larger than that of a blurred image. For an image of M × N pixels, the sharpness score F is expressed as follows:

$$F = \sum_{i=1}^{M}\sum_{j=1}^{N}\left[I(i,j)-\mu\right]^{2},$$

where I(i,j) denotes the grayscale value at point (i,j), and µ represents the average grayscale value.
The variance evaluation function is unimodal and robust to noise. Based on this image sharpness function, the position of the clearest image is searched to determine the focus plane.
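As a minimal sketch (in Python, with illustrative function names not taken from the paper), the variance sharpness score and the focus search over a through-focus image stack could look like this:

```python
import numpy as np

def variance_sharpness(image: np.ndarray) -> float:
    """Variance-based sharpness score F for an M x N grayscale image.

    F = sum_{i,j} (I(i, j) - mu)^2, where mu is the mean grayscale value.
    Sharper (in-focus) images show a larger grayscale spread than blurred ones.
    """
    gray = image.astype(np.float64)
    mu = gray.mean()
    return float(np.sum((gray - mu) ** 2))

def find_focus_index(image_stack) -> int:
    """Return the index of the sharpest image in a through-focus stack."""
    scores = [variance_sharpness(img) for img in image_stack]
    return int(np.argmax(scores))
```

In a focus search, `image_stack` would hold the frames captured while the lifting stage steps the microscope through focus, and the returned index identifies the focus plane.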

Grayscale Template Matching (GTM) Method
Typical template matching algorithms use the sum of squared differences (SSD) or normalized cross-correlation (NCC) to calculate the similarity. Let S(x,y) represent an image of size M × N, and T(x,y) denote a template image of size m × n. The SSD similarity D(i,j) is expressed as follows:

$$D(i,j) = \sum_{s=1}^{m}\sum_{t=1}^{n}\left[S(i+s-1,\,j+t-1)-T(s,t)\right]^{2},$$

where (i,j) represents the upper-left corner of the subgraph: a subgraph of size m × n located at (i,j) is compared with the template, and the location that minimizes D(i,j) is taken as the best match.
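A minimal sketch of SSD-based grayscale template matching is given below; it uses OpenCV's cv2.matchTemplate with the TM_SQDIFF criterion, which is one standard way of computing the SSD map and is not necessarily the implementation used in this work.

```python
import cv2
import numpy as np

def match_template_ssd(search_img: np.ndarray, template: np.ndarray):
    """Locate an m x n template in an M x N grayscale image by minimizing
    D(i, j) = sum_{s,t} [S(i+s, j+t) - T(s, t)]^2 over all placements.

    Returns the (row, col) of the upper-left corner of the best match.
    """
    # cv2.TM_SQDIFF computes exactly the SSD similarity map.
    ssd_map = cv2.matchTemplate(search_img, template, cv2.TM_SQDIFF)
    _, _, min_loc, _ = cv2.minMaxLoc(ssd_map)
    x, y = min_loc            # OpenCV reports locations as (x, y) = (col, row)
    return y, x
```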

Region of Interest (ROI) Method
The typical ROI method is also used. Before motion tracking, an original frame is collected for template matching, and the point (u0, v0) represents the central position of the matched target in this frame. The image of a new frame i is matched only within the ROI of the previous frame i − 1. The point (Rui, Rvi) of the new frame i represents the relative position of the match within the ROI of frame i − 1. The absolute position (ui, vi) in frame i is obtained by referring (Rui, Rvi) back to the location of that ROI, and the updated ROI area Ri of frame i is then defined around (ui, vi). The point (u, v) of every frame is acquired to calculate the in-focus-plane displacements.
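To make the frame-to-frame bookkeeping concrete, the sketch below combines GTM and the ROI update in Python; the ROI half-width, the centering rule, and all names are illustrative assumptions, since the exact window size and update rule of the paper are not reproduced here.

```python
import cv2
import numpy as np

def track_with_roi(frames, template, roi_half=60):
    """GTM tracking restricted to a region of interest (ROI).

    The ROI of frame i is a window of half-width `roi_half` pixels centered
    on the target position found in frame i - 1 (roi_half is illustrative and
    must be large enough to contain the template). Returns the absolute
    target centers (u_i, v_i) for every frame.
    """
    m, n = template.shape[:2]
    centers = []

    # Full-frame match on the original frame gives (u0, v0).
    ssd = cv2.matchTemplate(frames[0], template, cv2.TM_SQDIFF)
    _, _, (x0, y0), _ = cv2.minMaxLoc(ssd)
    u, v = x0 + n // 2, y0 + m // 2
    centers.append((u, v))

    for frame in frames[1:]:
        # Clip the ROI of the previous frame to the image bounds.
        h, w = frame.shape[:2]
        left, top = max(u - roi_half, 0), max(v - roi_half, 0)
        right, bottom = min(u + roi_half, w), min(v + roi_half, h)
        roi = frame[top:bottom, left:right]

        # Relative position (R_u, R_v) of the match inside the previous ROI.
        ssd = cv2.matchTemplate(roi, template, cv2.TM_SQDIFF)
        _, _, (rx, ry), _ = cv2.minMaxLoc(ssd)

        # Absolute position in frame i = ROI origin + relative match position.
        u, v = left + rx + n // 2, top + ry + m // 2
        centers.append((u, v))

    return centers
```

Multiplying the pixel displacements between successive centers by the pixel-to-micrometer conversion factor then yields the in-focus-plane displacements.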

Prototype Test of the Nanopositioner and Out-of-Focus-Plane Evaluation
Aluminum alloy 7075-T651 was selected as the material for the prototype of the presented spatial 6-RRRR CPM. Wire electrical discharge machining (WEDM) and computer numerical control (CNC) machining were used to fabricate the two 3-RRRR CPMs separately. The in-plane 3-RRRR CPM was equipped with three packaged PEAs (P-841). The micro-vision system consisted of a microscope and a camera. The selected microscope had a magnification of 112.5 (Mitutoyo 50× objective, Navitar Inc., Rochester, NY, USA). The sensor of the selected camera had a resolution of 2448 × 2048 at 75 fps and a pixel pitch of 3.45 µm (Sony IMX250 CMOS, FLIR Systems Inc., Wilsonville, OR, USA). The microscope and camera were driven by a lifting sliding stage (KA050Z, Zolix Instruments Co., Ltd., Beijing, China). The resolution of this stage was 1 µm, and its positioning precision was better than ±3 µm. This stage was used to search for the focus plane of the micro-vision system. The equivalent pixel displacement relationship was calculated using a negative combined resolution and distortion test target (R1L1S1N, Thorlabs Inc., Newton, NJ, USA). The calculated pixel displacement conversion relationship was 0.0311 µm/pixel. The whole experimental setup is shown in Figure 3.
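For orientation, the magnitude of this conversion factor can be cross-checked from the camera pixel pitch and the overall magnification quoted above (a back-of-the-envelope sketch, not the test-target calibration procedure itself):

```python
# Rough consistency check of the pixel-to-displacement conversion
# (values taken from the setup description above).
pixel_pitch_um = 3.45     # Sony IMX250 pixel pitch
magnification = 112.5     # overall optical magnification

object_side_pixel_um = pixel_pitch_um / magnification
print(f"{object_side_pixel_um:.4f} um/pixel")  # ~0.0307 um/pixel, close to the
                                               # calibrated 0.0311 um/pixel
```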
As shown in Figure 3, four types of controllers were employed during the prototype test. The third controller (MicroLabBox) was the overall controller of the whole experimental system. The first controller connected the four capacitive sensors of the nanopositioner and the third controller. The second controller connected the six PEAs of the nanopositioner and the third controller. The fourth controller connected the lifting platform of the micro-vision system and the third controller.

Test Results of the in-Focus-Plane Motion Generation Ability
The in-plane workspace of the nanopositioner was tested. Then, an in-plane circular trajectory was generated using a PID controller. The results are shown in Figure 4. As shown in Figure 4, the in-plane workspace of the nanopositioner is 140 × 170 µm². For a circle with a diameter of 25 µm, the positioning error along the x-axis using the 3δ (δ: standard deviation) principle is 0.038 µm, and the 3δ error along the y-axis is 0.054 µm. The in-plane trajectory provides a standard in-focus-plane tracking target for the micro-vision system.
The area of the viewing field of the proposed micro-vision system is 76.1 × 63.7 µm². The calculated pixel displacement conversion relationship is 0.0311 µm/pixel. The in-plane reachable workspace of the proposed nanopositioner is more than four times larger than the viewing field of the micro-vision system. The 3δ trajectory tracking precision of the nanopositioner is close to the identified displacement of one pixel of the micro-vision system.
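As a sketch of how the 3δ trajectory errors could be evaluated (the projection onto the nominal circle, the per-axis error definition, and all names are assumptions for illustration, not the evaluation procedure reported in the paper):

```python
import numpy as np

def three_sigma_errors(x_meas, y_meas, radius, x_c=0.0, y_c=0.0):
    """3-delta (three standard deviations) tracking error of a circular path.

    x_meas, y_meas: measured end-effector coordinates in micrometers.
    radius, (x_c, y_c): nominal circle radius and center.
    Returns the 3-sigma error along the x- and y-axes.
    """
    x_meas, y_meas = np.asarray(x_meas), np.asarray(y_meas)
    # Project each measured point onto the nominal circle to get its reference.
    angles = np.arctan2(y_meas - y_c, x_meas - x_c)
    x_ref = x_c + radius * np.cos(angles)
    y_ref = y_c + radius * np.sin(angles)
    return 3.0 * np.std(x_meas - x_ref), 3.0 * np.std(y_meas - y_ref)
```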

Test Results of the Out-of-Focus-Plane Motion Generation Ability
The out-of-plane stroke of the nanopositioner was tested. The results are shown in Figure 5. As shown in Figure 5, the out-of-plane stroke of the nanopositioner is 90.4 µm. For every point in the area of the viewing field, 76.1 × 63.7 µm², the corresponding out-of-plane stroke of the nanopositioner is more than ten times larger than the depth of focus of the micro-vision system. The out-of-plane motion of the nanopositioner is enough for the out-of-focus-plane excitation of the micro-vision system.

Test Results of the Out-of-Focus-Plane Performance
The field of view of the selected micro-vision system is 76.1 × 63.7 µm², and the sampling rate is 15 Hz. The in-focus-plane circular trajectory was generated, and different out-of-focus-plane harmonic displacements were added to the in-focus-plane trajectory. The motion tracking results of the circular diameter are shown in Table 1. As shown in Table 1, out-of-focus-plane displacements changed the in-focus-plane measurement results of the micro-vision system. When the out-of-focus-plane displacement reached a threshold value of 7.737 ± 2.512 µm, the micro-vision system could no longer track the motion.

Performance Comparison of Spatial Nanopositioners
The proposed spatial nanopositioner possesses a compact structure of Φ200 × 56 mm³, an in-plane workspace of 140 × 170 µm², and an out-of-plane stroke of 90.4 µm. Compared with other nanopositioners [19,20,29], the presented nanopositioner has the ability to evaluate the out-of-focus-plane performance of micro-vision systems and can be easily embedded into these systems. The nanopositioner proposed in [20] can expand the actual application of optical alignment elements in projection lenses with 193 nm immersion lithography.
Additionally, the 3δ positioning accuracy of the proposed nanopositioner is satisfactory, being close to the identified displacement of one pixel of the micro-vision system. A comparison of the key performance indexes of the selected nanopositioners is shown in Table 2.

Conclusions
A spatial nanopositioner is proposed in this paper. The end-effector acts as the in-focus-plane measurement target of the micro-vision system. A 3-RRRR CPM is employed to generate in-plane motion. Another 3-RRRR CPM is used to generate different out-of-plane displacements to evaluate the out-of-focus-plane performance of the micro-vision system. The micro-vision system uses the typical GTM and ROI methods. The experimental results verify the accuracy degradation of the in-focus-plane motion tracking of the micro-vision system using different out-of-focus-plane displacements. The proposed nanopositioner possesses a motion generation ability for evaluating the out-of-focus-plane performance of micro-vision systems.
Future research will focus on the accuracy deterioration caused by high-frequency out-of-focus-plane displacements and the diffraction effect, and the real-time compensation or correction of the micro-vision system at the software level.