Shape Measurement Method of Two-Dimensional Micro-Structures beyond the Diffraction Limit Based on Speckle Interferometry

Abstract: A technique based on speckle interferometry for observing microstructures beyond the diffraction limit by detecting the spatial phase distribution of scattered light from microstructures has previously been reported. In this study, the extension of this technique to a two-dimensional method is discussed. In order to observe general two-dimensional images, new technology must be developed in several stages. A two-dimensional filtering technique to reduce the noise component and a two-dimensional integration path to detect the three-dimensional shape of the surface are described in detail. As a first step toward observing complex two-dimensional structures in the future, it is investigated whether directional two-dimensional structures, such as fibrous materials and micro-linear structures, can be visually captured and treated as meaningful two-dimensional information. As a result, it is shown that fine two-dimensional letters with a line width of 100 nm, which is beyond the diffraction limit of the objective lens, can be observed, demonstrating the effectiveness of this phase-detection-based observation technique for microstructures.


Introduction
The observation of microstructures is widely used in biological research [1][2][3]. However, it is difficult to observe microstructures by optical microscopy due to the diffraction of light [4], and electron microscopy is generally used instead. In recent years, fluorescent proteins have been used to observe microstructures in the field of biotechnology, and new super-resolution techniques [5] such as PALM (photoactivated localization microscopy) [6] and STED (stimulated emission depletion) [7] have been developed to promote biotechnology research. However, these techniques require considerable time to capture image information and require a staining process, which makes it difficult to capture dynamic biological living objects as two-dimensional (2D) images directly in the atmosphere. The development of super-resolution technology that can observe dynamic living biological objects is therefore required.
The use of optical cameras as area sensors is thought to be an important technique for capturing 2D images of dynamically active organisms. However, capturing an image of a microstructured object implies acquiring an image beyond the Rayleigh limit [8], which requires capturing the higher-order diffracted light of the lens [9] and is generally not possible. The conventional view that structures beyond the diffraction limit cannot be observed rests on the use of an objective lens in a microscope to form an image of the object through an optical lens system. In other words, the problem is considered to lie in the fact that the conventional technique analyzes information by collecting the intensity distribution of light with a lens system and converting it into an image [8,9].
In recent years, however, a technique for observing microstructures beyond the Rayleigh limit based on laser speckle interferometry has been reported [10].
This newly reported method, based on speckle interference technology [11], draws on the concept of perfect optics [12]: instead of imaging the intensity distribution of light using a lens, the microstructure is observed by detecting the phase change caused by the lateral shift of the scattered light emitted from each point on the surface of the measurement object that passes through the lens. In this process, the scattered light plays an important role in bringing more information about the shape of the object surface through the lens to the image plane [11].
The current resolution of the speckle deformation measurement is about 5-10 nm [13,14], and this high-resolution analysis technique enables the observation of microstructures [10,11].
With the goal of observing bio-related images in the future, this study discusses a filtering technique for image information with directional characteristics: diffraction gratings as samples of periodic structures, characters with visually meaningful branch-like structures reminiscent of vascular structures, and fibrous structures. At the same time, the two-dimensional integration paths necessary to obtain three-dimensional shapes are investigated. This paper describes the first step toward the final goal of this research, the analysis of two-dimensional images with complex shapes.
As a result, it was confirmed that even a character with a line width of 100 nm can be observed as character information by strictly designing and using a 2D filter, a result that should lead to the observation of the more complex structures to be handled in the future.

In speckle interferometry, a laser beam is divided by a half mirror: one beam illuminates a measurement object with a rough surface, and the other illuminates a rough reference surface to generate a reference beam [15,16]. Speckles form as interference fringes between the scattered light from the two rough surfaces, enabling deformation measurement of the object. In such an optical system, the captured image is a speckle pattern in which a bias component is superimposed on a signal component carrying significant information as interference fringes [15,16]. Therefore, to analyze such a speckle pattern with high resolution and extract the phase component, at least three speckle patterns must be captured, as in conventional interference fringe analysis, and a temporal fringe scanning method must be used [17].
Moreover, a carrier signal is generated for each speckle if a plane wave is used as the reference light and an angle is provided between the wavefronts of the reference light and the object light. It has been reported that by providing such a carrier signal and performing a Fourier transform, the bias and signal components can be separated in the frequency domain [14,18]. If this process is used to extract only the signal component, the phase component can be detected using only two speckle patterns, before and after deformation, along with noise removal in the low-frequency domain [19,20]. This implies that deformation measurements with a high resolution of approximately 5-10 nm can be achieved using only the two speckle patterns before and after deformation [13]. Using an optical system that achieves this high-resolution phase detection, 3D shape measurement was performed under the following analysis principle.
Assuming that the cross-section of the object in the x-direction can be defined as f(x), as shown in Figure 1a, if the object is shifted laterally by exactly δx, the new cross-section of the object is f(x + δx). If the deformation value f(x) − f(x + δx) caused by the lateral shift is precisely analyzed and divided by the amount of shift δx, the value (∂f/∂x) corresponding to the approximate derivative of the shape of the object in the x-direction is obtained, as shown in Equation (2). Furthermore, by integrating this derivative in the x-direction, f(x) can be estimated as shown in Equation (3), and the shape of the measured object can be reconstructed.
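Written out explicitly, the relations referred to as Equations (1)-(3) take the following form; the symbols follow the description above, and the discrete-sum form of the integral is an assumption of this sketch:

```latex
% Deformation value produced by the lateral shift \delta x (Equation (1))
\Delta f(x) = f(x) - f(x + \delta x)

% Approximate derivative of the surface shape (Equation (2))
\frac{\partial f}{\partial x} \approx \frac{f(x + \delta x) - f(x)}{\delta x}
                             = -\frac{\Delta f(x)}{\delta x}

% Reconstruction of the shape by integration in x (Equation (3))
f(x) \approx f(x_0) + \sum_{x_i = x_0}^{x}
  \left.\frac{\partial f}{\partial x}\right|_{x_i} \delta x
```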
That is, if the point P A in Figure 1a is shifted by δx in the horizontal direction, the point P B in Figure 1a corresponds to the position of point P A after the shift, as shown in Figure 1b [10].
Following this processing concept, specifically, if the difference in the height direction between point P A and point P B is detected with high-resolution using a speckle interferometer [14,21], the amount of change represented by Equation (1) can be obtained.
Furthermore, because the amount of shift δx in the horizontal direction is known, the differential coefficient of the shape shown in Figure 1a can be calculated using Equation (2).
Finally, by integrating this differential coefficient value in the x-direction, the shape f(x) of the measured object can be obtained. Based on this concept, it is possible to measure 3D microstructures beyond the diffraction limit of the objective lens using scattered light as the illumination light [10].

Measurement Optics and Measurement Process
The speckle interferometer used in this experiment is shown in Figure 2. In this interferometer, scattered light with multi-directional ray vectors, generated after passing through the ground glass, is used as the illumination light. When the object is irradiated with this illumination light, the scattered light reflected from the surface of the object is focused by an ultra-long-working-distance objective lens (M Plan Apo 200×, NA: 0.62, working distance: 13 mm, manufactured by Mitutoyo Corporation (Kawasaki, Japan)). This scattered light, containing phase information, reaches the image sensor (pixel size: 1.6 µm, pixel number: 1024 × 1024, grayscale resolution: 12 bit) through a pinhole, as shown in Figure 2. The beam emitted from the laser (wavelength: 671 nm, 100 mW) was split with a polarizing beam splitter, collimated with a collimator, used as reference light, and interfered with the object light using a beam splitter located just before the imaging device to form a speckle pattern with carrier fringes. The speckle pattern caused by the interference between the scattered light from the measured object and the reference light was captured by the image sensor. In this case, the diffraction limit was determined to be 660 nm (= 0.61 × 671/0.62) from the laser wavelength and the NA of the objective lens. The optical system was compact (dimensions of 600 × 700 mm²). The system is easy to use in a room because it is constructed on a honeycomb breadboard mounted on an active vibration isolation table (SAT-56, Showa Science).
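As a quick consistency check, the 660 nm figure quoted above follows directly from the Rayleigh criterion d = 0.61λ/NA; a minimal sketch using the wavelength and NA stated in the text:

```python
# Rayleigh criterion for the optical system described in the text:
# laser wavelength 671 nm, objective NA 0.62.
wavelength_nm = 671.0
numerical_aperture = 0.62

# d = 0.61 * lambda / NA
d_limit_nm = 0.61 * wavelength_nm / numerical_aperture
print(round(d_limit_nm))  # 660, the diffraction limit quoted in the text
```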
Using this optical system, the shape of the measured object is detected through the analysis process shown in Figure 3, based on the principle shown in Figure 1, through a multilayered combination of several speckle analysis techniques developed by the authors to date [13,14,20,21]. The details of each speckle analysis technique are not described here; they can be found in the references.
The first step in the analysis process, as shown in Figure 3(1), captures the speckle pattern using the optical system shown in Figure 2. For example, when a diffraction grating with a period of 833 nm is measured, as in the SEM image in Figure 4a, the speckle pattern also shows a fringe image with a period of 833 nm, as shown in Figure 4b, because the measurement object does not exceed the diffraction limit. This speckle pattern (SP1) is artificially shifted by three pixels (45 nm) in the x-direction in computer memory [22], as shown in Figure 3(2), to create the speckle pattern (SP2). In this optical system, it was confirmed using a calibrated scale that one pixel corresponds to 15 nm on the image sensor; therefore, three pixels correspond to 45 nm. When these speckle patterns are Fourier transformed, a carrier signal can be generated in the speckles because of the angle between the wavefronts of the reference light and the object light [14,18]. Using this carrier signal, the bias and signal components can be separated in the frequency domain, as shown in Figure 4c.
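The separation of bias and signal by the carrier can be illustrated numerically. The sketch below uses a synthetic fringe pattern standing in for a captured speckle pattern (the smooth phase term and all parameter values other than the 8 pixels/period carrier and 3-pixel shift are assumptions of this illustration): after a Fourier transform, the signal lobe sits away from the DC (bias) peak.

```python
import numpy as np

n = 256
y, x = np.mgrid[0:n, 0:n]

# Smooth (hypothetical) phase term standing in for the object information
phase = 1.5 * np.sin(2 * np.pi * x / n) * np.cos(2 * np.pi * y / n)

# Bias + signal: carrier of 8 pixels/period in x, as stated in the text
sp1 = 1.0 + np.cos(2 * np.pi * x / 8.0 + phase)

# Step (2): artificial 3-pixel shift in x performed in computer memory
# (SP2 would then be processed identically to SP1)
sp2 = np.roll(sp1, 3, axis=1)

# Fourier transform: the carrier displaces the signal lobe from the
# zero-frequency bias peak, so the two separate in the frequency domain
spec = np.fft.fftshift(np.fft.fft2(sp1))
fx = np.fft.fftshift(np.fft.fftfreq(n))          # cycles/pixel
col_energy = np.abs(spec).sum(axis=0)

sig_col = np.argmin(np.abs(fx - 1.0 / 8.0))      # signal-lobe column
far_col = np.argmin(np.abs(fx - 3.0 / 8.0))      # empty region

# The signal column carries far more energy than an empty column
print(col_energy[sig_col] > 10 * col_energy[far_col])
```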
In this process, only this signal component is extracted, and the bias component is removed, as shown in Figure 3(3). When this process is applied to SP1 and SP2, the extracted signal has real and imaginary parts, Re(SP1), Im(SP1), Re(SP2), and Im(SP2), respectively.
If the real part is considered as the cosine component and the imaginary part as the sine component, then based on the addition theorem of trigonometric functions, the difference of the phase distributions, cos(φ1 − φ2) and sin(φ1 − φ2), can be obtained as a specklegram, as shown in Figure 3(4) [11]. In this study, instead of using the ratio of cosine and sine to obtain the phase difference φ1−2 by the arctangent function, the spatial fringe analysis method [23], which can remove the noise component through more detailed processing, was used. The phase difference φ1−2 was smoothly extracted from the value of cos(φ1−2). Furthermore, cos(φ1−2) was filtered using the Fourier transform to remove noise in the frequency domain, as shown in Figure 3(5).
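The addition-theorem step can be sketched directly. Assuming the filtered signals are available as complex fields (real part = cosine component, imaginary part = sine component; the random test phases are hypothetical), combining them gives cos(φ1 − φ2) and sin(φ1 − φ2) without ever forming the individual phases:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical phase maps of SP1 and SP2 after filtering
phi1 = rng.uniform(0, 2 * np.pi, 100)
phi2 = rng.uniform(0, 2 * np.pi, 100)

# Extracted signals: Re = cosine component, Im = sine component
s1 = np.exp(1j * phi1)
s2 = np.exp(1j * phi2)

# Addition theorem of trigonometric functions:
#   cos(a)cos(b) + sin(a)sin(b) = cos(a - b)
#   sin(a)cos(b) - cos(a)sin(b) = sin(a - b)
cos_diff = s1.real * s2.real + s1.imag * s2.imag
sin_diff = s1.imag * s2.real - s1.real * s2.imag

print(np.allclose(cos_diff, np.cos(phi1 - phi2)))  # True
print(np.allclose(sin_diff, np.sin(phi1 - phi2)))  # True
```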

Phase component φ1−2 was detected from the signal after noise removal using the spatial fringe analysis method, as shown in Figure 3(6); the passband is shown in Figure 5a. Because this phase component is the distribution of the difference in the shape of the measurement object before and after the shift, the phase component φ1−2 was converted into the length dimension based on the laser wavelength and divided by the shift amount to give the rate-of-change distribution shown in Equation (2). The shape of the measurement object can then be estimated by the integration of Figure 3(7), as shown in Figure 5b.
In the one-dimensional analysis, the passband of the filter, indicated by the dashed line around the signal component in Figure 5a, is used in the frequency domain; the extracted signal is integrated to give a smooth, noise-free measurement result for the sawtooth-shaped grating of Figure 4a, as shown in Figure 5b [10].
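The 1D reconstruction chain (difference → rate of change → integration) can be sketched as follows; the 50 nm sinusoidal test profile is hypothetical, while the 15 nm/pixel scale and the 3-pixel (45 nm) shift are the values stated in the text:

```python
import numpy as np

px = 15e-9                 # 15 nm per pixel (from the text)
shift_px = 3               # artificial shift: 3 pixels = 45 nm
dx = shift_px * px

n = 512
x = np.arange(n) * px
# Hypothetical smooth surface profile: 50 nm amplitude, 2.5 um period
f = 50e-9 * np.sin(2 * np.pi * x / 2.5e-6)

# The specklegram yields f(x) - f(x + dx); dividing by the shift
# approximates the derivative of the shape (Equation (2))
diff = f - np.roll(f, -shift_px)       # f(x) - f(x + dx)
dfdx = -diff / dx                      # ~ df/dx

# Integrating the derivative pixel by pixel recovers the shape up to
# an offset (Equation (3)); fix the offset at the starting point
f_rec = np.cumsum(dfdx) * px
f_rec += f[0] - f_rec[0]

interior = slice(0, n - shift_px)      # ignore the wrap-around tail
err = np.max(np.abs(f_rec[interior] - f[interior]))
print(err < 10e-9)   # recovered to within ~10 nm (discretization offset)
```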

Detection of the Rate of Change Distribution of the Measured Shape in 2D Analysis
If the diffraction grating shown in Figure 4a is rotated, as shown in Figure 6a, the speckle pattern obtained is a fringe image (Figure 6b) that is no longer parallel to the y-axis direction (cf. Figure 4b). In this study, because scattered light was used as the irradiation light to the measurement object, some noise was observed in the image and the fringe image was disturbed; however, the fringe image was clearly along the groove direction of the diffraction grating. Moreover, it should be noted that the image captured by the camera has its origin at the upper left of the frame, while the calculation result has its origin at the lower left of the frame.
According to the flowchart of the process in Figure 3, new speckle patterns are artificially created by the horizontal (x-direction) displacement in a similar manner as in Figure 4b, and each speckle pattern is Fourier transformed to extract the signal components. Although the direction of the signal is rotated, the results shown in Figure 6c are similar to that of the signal components in the frequency domain shown in Figure 5a.
In Figure 5a, if the signal is extracted along the frequency axis in the x-direction, as shown by the dashed line, the result shown in Figure 5b can finally be obtained. However, in Figure 6c, the frequency signal must be extracted in a direction that is neither the x- nor the y-direction. Therefore, as shown in Figure 6d, it is necessary to set the region where the signal component in Figure 6c exists as the passband of the filter in the frequency domain. To capture 2D objects without being affected by noise, a 2D filter is required that can extract the signals distributed in two dimensions as noise-free components.
A 2D filter spreading in the x- and y-directions, with the passband shown in Figure 5a rotated, is set up as shown in Figure 6d, and the information in Figure 6c is extracted, resulting in a modulated carrier signal, as shown in Figure 7a. Because the carrier signal, with a period of 8 pixels/period in the x-direction [13], is originally a spatial fringe image, the image in Figure 7a confirms that the carrier signal parallel to the y-axis direction is modulated based on the shape of the grating. By analyzing this signal using the spatial fringe analysis method, it is possible to obtain the distribution of the rate of change of the measured object (the shape of the diffraction grating in the oblique direction), as shown in Figure 7b.
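A rotated 2D passband of this kind can be sketched as a binary mask in the frequency domain. Everything below is a simplified illustration: the lobe center, ellipse half-widths, and the oblique test fringes are assumptions, with only the 8-pixel carrier period taken from the text:

```python
import numpy as np

n = 256
fx = np.fft.fftshift(np.fft.fftfreq(n))     # cycles/pixel
FX, FY = np.meshgrid(fx, fx)

# Assumed center of the rotated signal lobe (carrier 1/8 cycles/pixel
# in x, plus a small y component from the rotated grating)
fx0, fy0 = 1.0 / 8.0, 8.0 / n
theta = np.arctan2(fy0, fx0)                # lobe orientation

# Coordinates rotated so the ellipse axes follow the lobe direction
u = (FX - fx0) * np.cos(theta) + (FY - fy0) * np.sin(theta)
v = -(FX - fx0) * np.sin(theta) + (FY - fy0) * np.cos(theta)

# Elliptical passband (half-widths are assumptions): 1 inside, 0 outside
mask = ((u / 0.06) ** 2 + (v / 0.03) ** 2 <= 1.0).astype(float)

# Oblique test fringes whose spectrum is a single pair of lobes
yy, xx = np.mgrid[0:n, 0:n]
fringes = np.cos(2 * np.pi * (fx0 * xx + fy0 * yy))

# Filtering keeps only the +1 lobe: the result is the complex carrier
spec = np.fft.fftshift(np.fft.fft2(fringes))
out = np.fft.ifft2(np.fft.ifftshift(spec * mask))

print(np.allclose(np.abs(out), 0.5))   # True: half the cosine amplitude
```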
In the one-dimensional analysis, the shape of the grating can be obtained by sequentially integrating the signal in the x-direction. However, in the 2D analysis, the measurement object exhibits a shape change in the y-direction. Therefore, in the integration operation, it is necessary to obtain the appropriate 2D integration path and the rate of change distribution in the y-direction by applying the same process to the y-direction as in the x-direction.

Setting up a 2D Integration Path
To find the value at point P1 starting from point P0, as shown in Figure 8a, the integration path from P0 to P1 must be determined, and various routes can be established. The value can be obtained by accumulating the amount of change between two points along the integration path, as shown in Equations (A) and (B) of Figure 8a, using the increment δx in the x-direction, the increment δy in the y-direction, and the partial differential coefficients in the x- and y-directions, based on the concept of the total differential. If δx and δy are small, the integration path from P0 to P1 is not necessarily fixed and can be set as needed. In this study, the most convenient integration path is shown in Figure 8b, where the values at each point on the line parallel to the y-axis are accumulated in advance as the amount of change. As shown in Figure 8b, the values of each point on the line parallel to the y-axis starting from point P0 are accumulated as Σ(∂φ/∂y)jδy, giving the initial value for the integration in the x-direction at each point. After this calculation, these initial values are used, and the amount of change in the x-direction is accumulated as Σ(∂φ/∂x)iδx.
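The integration path of Figure 8b can be written down compactly: first accumulate the y-direction changes along the line through P0 to get per-row initial values, then accumulate the x-direction changes along each row. A sketch in NumPy (the smooth test surface and the use of simple forward differences for the rate-of-change maps are assumptions of this illustration):

```python
import numpy as np

def integrate_2d(dphix, dphiy, dx, dy):
    """Reconstruct phi from its x/y rate-of-change maps along the
    path of Figure 8b: down the y-axis first, then along each row."""
    ny, nx = dphix.shape
    phi = np.zeros((ny, nx))
    # Initial values: accumulate along the line x = 0 starting at P0
    phi[1:, 0] = np.cumsum(dphiy[:-1, 0] * dy)
    # Row-wise accumulation in x, seeded with the initial values
    phi[:, 1:] = phi[:, [0]] + np.cumsum(dphix[:, :-1] * dx, axis=1)
    return phi

# Hypothetical smooth phase surface for a round-trip check
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
phi_true = np.sin(2 * np.pi * x / nx) + 0.5 * np.cos(2 * np.pi * y / ny)

# Forward differences standing in for the measured rate-of-change maps
dphix = np.diff(phi_true, axis=1, append=phi_true[:, [-1]])
dphiy = np.diff(phi_true, axis=0, append=phi_true[[-1], :])

phi_rec = integrate_2d(dphix, dphiy, dx=1.0, dy=1.0)
# Exact up to the free constant at P0
print(np.allclose(phi_rec, phi_true - phi_true[0, 0]))  # True
```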

By setting up such an integration path, the phase at each point, such as Pa, Pb, Pc, . . ., Pf, is obtained. In a relatively narrow region, this calculation gives the phase of the 2D distribution: the amount of change in the y-direction is obtained as a distribution along the single line through point P0, and this distribution of the rate of change in the y-direction is used as the initial value for each integration operation in the x-direction.
Based on this concept, the 2D shape distribution can be obtained in two dimensions, as shown in Figure 7c, by defining the distribution of the rate of change in the y-direction, which is obtained by setting the phase at point P0 to zero in the x-direction, as shown in Figure 7b, as the initial value for the integration operation in the x-direction at each point.
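In the notation used above, the total-differential relations of Figure 8a (Equations (A) and (B)) and the path of Figure 8b amount to:

```latex
% Total differential accumulated along the path (Figure 8a)
\delta\varphi = \frac{\partial \varphi}{\partial x}\,\delta x
              + \frac{\partial \varphi}{\partial y}\,\delta y

% Value at P1 accumulated from P0: first along y, then along x (Figure 8b)
\varphi(P_1) = \varphi(P_0)
  + \sum_{j} \left.\frac{\partial \varphi}{\partial y}\right|_{j} \delta y
  + \sum_{i} \left.\frac{\partial \varphi}{\partial x}\right|_{i} \delta x
```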
Based on this concept, the 2D shape distribution can be obtained in two dimensions, as shown in Figure 7c, by defining the distribution of the rate of change in the y-direction, which is obtained by setting the phase at point P 0 to zero in the x-direction, as shown in Figure 7b, as the initial value for the integration operation in the x-direction at each point.
Based on the above idea, the shape distribution of two-dimensional microstructures can be observed. In this study, it is assumed that a two-dimensional structure is directional, such as fibrous and further branching structures in the image. That is, it is neither a periodic structure such as a diffraction grating, nor a shape measurement of microspheres randomly distributed in space, as discussed in a previous paper [11]. The initial letter "K" of Kansai University, to which the author belongs, was used as a test sample. This is an alphabet consisting of three line segments, and each line segment has a relatively characteristic direction vector. In other words, the proposed method is also effective for the information of branch structure shape where the object has an orientation, which is neither a periodic structure nor a structure randomly distributed in space.

Results for the Case of Characters of a Scale Not Exceeding the Diffraction Limit
As a 2D measurement object, an image of the letter K, consisting of three line elements ((A), (B), and (C)), as shown in Figure 9a, was drawn by combining the three line elements on a 250 nm thick resist deposited on a silicon substrate using an electron-beam writer, and the surface was platinum-coated, as is standard for SEM observation. First, a character with a width of 1400 nm and a height of 2800 nm, which does not exceed the diffraction limit, was measured; its SEM image is shown in Figure 9a. Although the overall scale of the character was not finer than the diffraction limit (660 nm) of the optical system used in this experiment, the width of each element constituting the character was 300 nm. The actual image, as a speckle pattern, was therefore processed by this method over the same area as the SEM image, after confirming the location of the object from the coordinate system of the SEM image. Within the calculated area, the luminance distribution is blurred, as shown in Figure 9b. However, because multiple bright speckles can be observed in the image, the character's information is not captured in a single speckle; it is divided among multiple speckles, so the phase information is not continuous between individual speckles. When information is distributed among multiple speckles in this way, problems are expected during the phase-detection calculation. Although observation of a microstructure using this method can easily be performed within a single speckle, the continuity of information between different speckles remains a problem.
Currently, this problem is avoided by narrowing the aperture to produce speckles of larger diameter. A method that takes phase continuity into account, so that a larger area can be observed simultaneously, is still under development [24].
The angles from the horizontal of the elements that constitute the character are 55°, 90°, and −65°, respectively, as shown in Figure 9a. In the speckle pattern shown in Figure 9b, carrier fringes can be confirmed. This carrier signal is used to separate the bias and signal components, as shown in Figure 4c [14].
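A minimal sketch of how carrier fringes allow the bias to be separated, assuming the standard Fourier-transform approach: the carrier shifts the signal sideband away from the zero-frequency bias, so a window around one sideband isolates the signal term (the grid size, window size, and carrier offset below are all hypothetical):

```python
import numpy as np

def separate_signal(pattern, carrier_shift, half=4):
    """Window one carrier sideband in the frequency domain to drop the bias.
    carrier_shift = (dv, du): sideband offset from the frequency origin, in
    frequency pixels (in practice read off from the spectrum)."""
    spec = np.fft.fftshift(np.fft.fft2(pattern))
    ny, nx = pattern.shape
    cy, cx = ny // 2 + carrier_shift[0], nx // 2 + carrier_shift[1]
    win = np.zeros_like(spec)
    win[cy - half:cy + half, cx - half:cx + half] = \
        spec[cy - half:cy + half, cx - half:cx + half]
    # Complex-valued signal; its angle gives the phase with the bias removed.
    return np.fft.ifft2(np.fft.ifftshift(win))
```

For a pure fringe pattern 0.5 + 0.5·cos(2πx/8 + φ0) on a 64-pixel grid, the sideband sits 8 frequency pixels from the origin, and the angle of the recovered complex signal at the origin equals φ0.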
Then, by Fourier transforming the image shown in Figure 9b and extracting only the signal components, a specklegram can be obtained by the process in (4) of Figure 3. When this specklegram is Fourier transformed, the signal in the frequency domain, as shown in Figure 9c, can be observed. For an object with a periodic structure, such as a diffraction grating, as shown in Figures 5a and 6c, the signal component exists as a single peak value in the frequency domain. However, if the component of the structure is a single element with constant length and width, the signal in the frequency domain is considerably different from the signals shown in Figures 5a and 6c. That is, the frequency domain distribution shown in Figure 9c confirms the presence of low-frequency components, such as bias components, similar to the frequency domain distribution of the speckle pattern shown in Figure 4c. Because the scattered light is used as illumination light in this study, it is considered that a large amount of speckle noise exists within the signal components. To observe more complex fine images, it is necessary to develop a new method that reduces such components [20].
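The "protruding" frequency components of a finite line element can be reproduced with a toy image: a single oriented segment concentrates its spectral energy in an elongated streak through the frequency origin, perpendicular to the segment's long axis (the 256-pixel grid, segment size, and 55° angle here are illustrative only):

```python
import numpy as np

N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
theta = np.deg2rad(55)                             # segment orientation
cross = -np.sin(theta) * x + np.cos(theta) * y     # distance across the segment
along = np.cos(theta) * x + np.sin(theta) * y      # distance along the segment
segment = ((np.abs(cross) < 2) & (np.abs(along) < 60)).astype(float)

power = np.abs(np.fft.fftshift(np.fft.fft2(segment)))

def power_near(angle_deg, radius=20):
    """Maximum spectral power in a small neighborhood of the point at the
    given polar angle and radius in the frequency plane."""
    a = np.deg2rad(angle_deg)
    c = N // 2
    u, v = int(round(radius * np.cos(a))), int(round(radius * np.sin(a)))
    return power[c + v - 1:c + v + 2, c + u - 1:c + u + 2].max()
```

Because the segment is long along 55° and narrow across it, the spectrum is narrow along 55° and extended along 145°, so `power_near(145)` greatly exceeds `power_near(55)`.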
Another characteristic of this signal in the frequency domain is the presence of several components protruding radially from the frequency origin. These arise because the letter "K" is composed of elements of finite length and width, each with an inherent direction. To use this method in bio-related fields, 2D analysis must be performed on images with directional and branching shapes, such as fibrous and vascular structures, so such directional elements must also be handled. Therefore, the meaning of these protrusions was examined by performing a Fourier transform of the letter K in a Gothic font. The results are shown in Figure 10. Because the measured character shown in Figure 9a was composed of line segments drawn with an electron-beam writer, it was considered a sufficient object for discussing the characteristics of the letter K, although the font differs slightly.
The angles between the line segments as components are shown in Figure 10a. When this character is Fourier transformed, it is clear that the image of K has protruding components at characteristic radial angles from the origin in the frequency domain, as shown in Figure 10b. Furthermore, as shown in Figure 10b, the three directional protruding components are extracted by the passbands surrounded by the dashed lines of the 2D filter (half-length of the passband: 100 periods/1024 pixels; half-width: 1 period/1024 pixels), and the inverse Fourier transform is applied to reconstruct the original image, as shown in Figure 10c.
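Such a directional 2D filter can be sketched as rectangular passbands oriented along each protrusion; because the Fourier transform is linear, the bands for several components may simply be combined into one mask (the angles and half-length/half-width values below are placeholders for the ones quoted above):

```python
import numpy as np

def directional_passband(shape, angle_deg, half_len, half_wid):
    """Rectangular passband oriented at angle_deg about the frequency origin.
    The condition is symmetric through the origin, so a filtered real image
    stays real-valued."""
    ny, nx = shape
    v, u = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
    a = np.deg2rad(angle_deg)
    r_along = u * np.cos(a) + v * np.sin(a)      # along the protrusion
    r_cross = -u * np.sin(a) + v * np.cos(a)     # across it
    return (np.abs(r_along) <= half_len) & (np.abs(r_cross) <= half_wid)

def filter_image(img, angles, half_len=100, half_wid=1):
    """Keep only the directional passbands listed in `angles`, then invert."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    mask = np.zeros(img.shape, dtype=bool)
    for ang in angles:
        mask |= directional_passband(img.shape, ang, half_len, half_wid)
    return np.fft.ifft2(np.fft.ifftshift(spec * mask)).real
```

A quick sanity check on the design: a passband wide enough to cover every frequency must return the input image unchanged, while a narrow band keeps only one directional component.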
In Figure 10b, if only passbands (1) and (2) are extracted, and if only passbands (1) and (3) are extracted, the inverse Fourier transform yields the results shown in Figure 10d,e, respectively. Clearly, the information of passband (3) is missing in Figure 10d, and the information of passband (2) is missing in Figure 10e; therefore, the respective line segment components cannot be recovered.
Furthermore, the character shown in Figure 11a, which is a 90° rotation of the character shown in Figure 10a, was investigated in the same manner as in Figure 10. When the character shown in Figure 11a is Fourier transformed, the signal components in the frequency domain shown in Figure 11b can be confirmed. The information for each line segment forming the character is a rotated version of the result in Figure 10b. A 2D filter matched to the radial protruding signal components shown in Figure 11b collects the information on all components of the character, and the inverse Fourier transform reproduces the original figure, as shown in Figure 11c. Next, Figure 11d shows the result of the inverse Fourier transform when the 2D filter shown in Figure 10b is applied directly to the frequency-domain information shown in Figure 11b. Even though the three line segments are identical, if a filter with characteristics different from those of the original character is used, the original character cannot be recovered, as shown in Figure 11d.
That is, the original figure under observation cannot be accurately reproduced unless the information related to the figure itself is extracted from the image information by a filtering process that removes the noise.
This means that if the components of the alphabet K in Figure 10a are (1), (2), and (3), each protrusion signal in Figure 10b must be filtered precisely for each component of the alphabet to reconstruct the image accurately.
Now that the location of the information in the frequency domain is clear, the next step is to measure the angles of the protruding components in the frequency domain shown in Figure 9c, as in Figure 12a, and to filter the specklegram as in Figure 3(5) using a filter with the 2D passbands shown in Figure 12b. After processing as in Figure 3(6,7), the result shown in Figure 12c is obtained as the shape reconstructed by integrating the phase distribution. Although there is some distortion, the letter K can be confirmed.
Although the scale of the character (1400 nm in width and 2800 nm in height) confirmed in the SEM image was also confirmed in the measurement results, the width of each component line, 300 nm, appeared as 500 to 550 nm in the measurement results.
In the case of Figure 12c, multiple ovals connect the line segments as components, implying that multiple speckles are connected to form the character. In the speckle image shown in Figure 9b, it can also be confirmed that multiple speckles with high luminance levels are connected. As reported in [11], when a single speckle was used to record a single microsphere, the shape of individual microspheres could be detected from the phase distribution within that single speckle, even if multiple silica spheres were connected. However, when the structure is observed as an aggregate of multiple speckles, further investigation is needed into extracting the phase information recorded in individual speckles while maintaining continuity.

Results for the Case of Characters of a Scale beyond the Diffraction Limit
As in the previous section, to investigate the usefulness of the measurement method, it was also applied to smaller characters beyond the diffraction limit. The measurement object is the letter K, as shown in Figure 13a. The size of the letter was measured using the SEM image shown in Figure 13a. The width and height of the letter were 600 and 1350 nm, respectively, while the diffraction limit of the measurement system was 660 nm. The width of the line segments of the three components of the character was 100 nm. Figure 13b shows the speckle pattern of the area with 2D coordinates based on the marks in the SEM image where the character is supposed to be located, as the image of the character K beyond the diffraction limit cannot be observed visually, and the same image area as the SEM image was processed by this method.
In this case, carrier fringes were also present, and these fringes were used to remove the bias component in the speckle pattern.
The specklegram is detected according to the processing shown in Figure 3, as in the previous section, and the result of measuring the shape using a 2D filter is shown in Figure 13c. In this case, the image was not composed of multiple speckle patterns as in Figure 9b, and phase detection was performed within a single speckle. Figure 13d shows the contour image of Figure 13c. The shape of the letter K can be confirmed as contour lines in Figure 13d, although the measurement result is slightly enlarged compared with the SEM image shown in Figure 13a. The line width of the segments composing the letter was detected as approximately 120 nm in the contour image. Figure 14a shows a detailed view of the A-A' cross-section of the letter K shown in Figure 13c,d. The 120 nm line width confirmed by the contour lines corresponds to the region near the top, and the actual line width is considered to be wider. A depth of approximately 36.4 nm was observed in the depth direction of the grooves forming the letter, whereas the actual groove depth was 250 nm. Figure 14b therefore shows the phase distribution of the specklegram before integration to obtain the shape, also along the A-A' cross-section. The interval between the maximum and minimum of the phase distribution, corresponding to the inflection points of the cross-section of the line segment, was 240 nm; thus, the line width observed as 100 nm in the SEM image is considered to have been broadened to approximately 240 nm. However, the phase difference within the speckle could be detected as 0.089 rad for a line segment with a line width of 100 nm using the speckle interferometry technique employed in this study, and the absolute minimum phase change per pixel in the bottom region near the 400 nm position of the phase distribution shown in Figure 14b is 3.09 × 10⁻³ rad/pixel.
In fact, the resolution of the phase analysis may not be that high; however, if the phase resolution is 0.01 rad/pixel, roughly three times that value, and the wavelength is 671 nm, a shape height of about 1 nm/pixel (= 671 × 0.01/6.28) can be assumed to be detectable. In this case, it is necessary to keep the measurement object within the depth of focus under a perfect optical system.
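The 1 nm figure follows from converting phase to height via height = φ·λ/(2π), as in the expression above:

```python
import math

# Height resolution implied by a given phase resolution, following the
# relation height = phase * wavelength / (2*pi) used in the text.
wavelength_nm = 671.0        # laser wavelength used in this study
phase_res_rad = 0.01         # assumed detectable phase change per pixel
height_res_nm = phase_res_rad * wavelength_nm / (2 * math.pi)
# about 1.07 nm per pixel, i.e. 671 * 0.01 / 6.28 ~ 1 nm
```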
As shown in Figure 14b, it can be confirmed that this method, which detects the change rate of the phase with high resolution, is effective in observing the shape of an object, such as an alphabet that exceeds the diffraction limit. This means that it is possible to detect the shape of an object that could not be analyzed using conventional methods based on the intensity distribution of the captured image. In any case, a major challenge for the future will be to design a 2D filter that can efficiently reduce the noise component in a randomly shaped 2D image.
In the case of a branch-structured object with directional characteristics, the prominent signal components appear in the frequency domain. This indicates that a filter with a passband for the protruding signal is necessary. However, for objects that exist independently and randomly, such as the microspheres treated in the previous paper [11], a filter parallel to the x-axis direction in the frequency domain used in the one-dimensional processing was observed to be effective. For directional periodic structures, a filter such as the one in Figure 6d is also useful. Furthermore, because the Fourier transform is a linear operation, it is not necessary to set up a filter with a passband for all components at once. The development of filtering techniques by overlapping several patterns is required. It is clear that these techniques need to be developed in order to process more complex two-dimensional structures in the future.
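Because the Fourier transform is linear, filtering with several passbands one at a time and summing the results is equivalent to filtering once with their union, which is what allows filters to be built up by overlapping several patterns (a toy illustration with hypothetical strip-shaped bands):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
spec = np.fft.fftshift(np.fft.fft2(img))
v, u = np.mgrid[-32:32, -32:32]

# Two disjoint passbands standing in for protrusion-matched filters.
band1 = np.abs(u) <= 3
band2 = (np.abs(v) <= 3) & (np.abs(u) > 3)

def apply_band(mask):
    # Keep only the masked frequencies, then invert back to the image plane.
    return np.fft.ifft2(np.fft.ifftshift(spec * mask)).real

sum_of_parts = apply_band(band1) + apply_band(band2)
combined = apply_band(band1 | band2)
```

Since the two bands are disjoint, `sum_of_parts` and `combined` agree exactly, so passbands for different components can be designed separately and superposed.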
When shape analysis is based on phase analysis using speckle interferometry, it is expected that the use of a perfect optical system may make it possible to observe small areas far beyond the diffraction limit that, in the traditional view, arises from handling the intensity distribution of light.

Conclusions
A technique for observing structures beyond the diffraction limit by detecting the spatial variation of the phase distribution of light scattered from the microstructure of an illuminated measurement object has previously been reported. In this study, the concepts of a 2D integration process and 2D filtering were proposed to extend this technique to two dimensions. Specifically, the conventional one-dimensional method was extended to a two-dimensional analysis method for use in biotechnology-related fields. In addition, the effectiveness of the newly developed two-dimensional analysis method was verified by observing the letter "K" and diffraction gratings with directional and periodic structures, which model microstructures with branch-like and fibrous shapes.
An application to the letter K shows that characters can be observed at a scale exceeding the diffraction limit of the objective lens used. This method is considered a step toward making complex image processing possible in bio-related fields. It was also demonstrated that further development of new techniques, such as filtering, will be required in the future to detect more complex images.