The Design and Positioning Method of a Flexible Zoom Artificial Compound Eye

The focal lengths of the sub-eyes in a single-layer uniform curved compound eye are all the same, resulting in poor imaging quality for the compound eye. A non-uniform curved compound eye can effectively solve the problem of poor edge-imaging quality; however, it suffers from a large spherical aberration and is unable to achieve zoom imaging. To solve these problems, a new type of aspherical artificial compound eye structure with variable focal length is proposed in this paper. The structure divides the curved compound eye into three fan-shaped areas, with the microlenses in different areas having different focal lengths, which allows the artificial compound eye to zoom within a certain range. The focal length and size of each microlens are determined by the area it lies in and its location within that area. The aspherical optimization of the microlenses is calculated, and the spherical aberration in each area is reduced to one percent of its initial value. Through simulation analysis, the designed artificial compound eye structure realizes focal-length adjustment and effectively mitigates the poor imaging quality at the edge of the curved compound eye. An aspherical artificial compound eye sample—with n = 61 sub-eyes and a base diameter of Φ = 8.66 mm—was then prepared by a molding method. Additionally, the mutual relationship between the sub-eyes was calibrated, and a mathematical model for the simultaneous identification of multiple sub-eyes was established. This study set up an experimental artificial compound eye positioning system and, by solving for the coordinates of a target point captured by a number of microlenses, achieved an error value of less than 10%.


Introduction
Artificial compound eyes that are practical for applications are mostly of the planar structure type, which largely loses the chief advantage of the insect compound-eye structure: a large viewing angle [1]. Although many different preparation methods for curved compound eyes have been proposed, these methods have disadvantages, such as expensive processing equipment and complicated preparation processes, and hence they remain at the experimental stage [2]. There are many difficulties in the research of curved compound eyes, especially the structural design of curved compound-eye imaging systems and the processing technology for curved microlens arrays [3][4][5].
In order to realize the characteristics of biological compound eyes and improve the imaging quality of the edge field of view, scholars have conducted many studies on bionic compound eyes [6][7][8][9][10]. Jeong et al. developed a bionic compound eye equivalent in dimensions to an insect compound eye [11]. A large spherical aberration in the initial structure degrades imaging quality, and hence positioning in later uses of the compound eye [24]. Therefore, we performed aspheric optimization of the microlenses at all levels to improve the overall imaging quality of the compound eye.

Figure 1. Non-uniform surface compound eye Zemax ray-tracing results before optimization: (a) curved compound eye ray tracing, with different colors of light used to trace the different levels of sub-eyes; (b) detector energy pattern; the energy spot of each sub-eye is relatively divergent, and the peak irradiance is only 3.9372 × 10³ W/cm².

Structural Design of the Variable Focal Length Artificial Compound Eye
The overall structure of the zoom curved compound eye, shown in Figure 2a, divides the curved surface into three equal parts, each subtending an angle of 120° and each with a different focal length. For the red, yellow, and green parts of the design depicted in Figure 2a, the focal lengths of the central sub-eyes are 2.227 mm, 1.927 mm, and 2.527 mm, respectively. In this way, zooming can be achieved within a certain range to enhance the imaging performance of the artificial compound eye.
The microlens array on the curved substrate must ultimately image onto a planar photodetector array, so microlenses at different locations on the substrate lie at different distances from the photodetection array. According to the geometric imaging principle, the effective focal length of a microlens equals the distance from its center to the photodetection array, so only microlenses at the same position in each region have the same size. Apart from a central lens in the red area of the structure, the microlens arrays in the three areas have the same relative positions and numbers of lenses (Figure 2a). The design parameters of the variable-focus surface compound eye structure are shown in Table 1. The distance from the optical center of the central sub-eye to the target surface of the photodetector is taken as the design reference, and the distance from the center of each microlens to the photodetection array is the effective focal length of that sub-eye [12,13]. Taking the sub-eye array in the red area (Figure 2a) as an example, as shown in Figure 2b, the sub-eyes are divided into five levels according to their deflection angle relative to the central sub-eye. The sub-eyes in each level have the same distance from the target surface of the photodetector; therefore, the focal length and radius of curvature are also the same for all sub-eyes in a level.
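As an illustration of the three-sector layout, the following sketch maps a sub-eye's azimuth on the base to its region and central focal length. The helper names are hypothetical, and the sector boundaries at 0°/120°/240° are an assumption not stated in the text; only the three central focal lengths are from the design.

```python
# Hypothetical sketch of the three-sector zoom layout. The central focal
# lengths (2.227, 1.927, 2.527 mm) come from the design; the azimuth
# boundaries at 0/120/240 degrees are an assumption for illustration.
REGION_FOCAL_MM = {"red": 2.227, "yellow": 1.927, "green": 2.527}

def region_of(azimuth_deg: float) -> str:
    """Map an azimuth angle (degrees) on the base to one of the three
    120-degree zoom regions."""
    a = azimuth_deg % 360.0
    if a < 120.0:
        return "red"
    elif a < 240.0:
        return "yellow"
    return "green"

def central_focal_length_mm(azimuth_deg: float) -> float:
    """Central sub-eye focal length of the region containing this azimuth."""
    return REGION_FOCAL_MM[region_of(azimuth_deg)]
```

With this mapping, a sub-eye at azimuth 200° falls in the yellow sector and uses the 1.927 mm design focal length.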
According to the geometric relationship between the radius of curvature of the base and the deflection angles of the sub-eyes, the effective focal length of the sub-eyes in each level can be calculated. The distance λn from the inner surface of the substrate at the center of the nth (n ≤ 5) sub-eye to the photodetector target surface is

λn = (R + λ1)/cos θn − R, (1)

where R is the radius of the curved base, θn is the deflection angle of the nth sub-eye, and λ1–λ5 are the distances between the sub-eyes of each level and the optical detection array.
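The level distances can be sketched numerically as below. This assumes that every sub-eye's optical axis passes through the center of the curved base and that the detector plane is perpendicular to the central axis at distance λ1 below the central sub-eye; the base radius R = 4.33 mm used in the example is inferred from the Φ = 8.66 mm base diameter, and the function name is hypothetical.

```python
import math

def sub_eye_distance_mm(R_mm: float, lambda1_mm: float, theta_deg: float) -> float:
    """Distance from the nth sub-eye (on the inner surface, deflected by
    theta from the central axis) to the planar detector, measured along the
    sub-eye's own optical axis. Geometric assumptions: all sub-eye axes pass
    through the center of the curved base, and the detector plane lies
    lambda1 below the central sub-eye, perpendicular to the central axis."""
    theta = math.radians(theta_deg)
    return (R_mm + lambda1_mm) / math.cos(theta) - R_mm

# Example with assumed values: R = 4.33 mm (half the 8.66 mm base diameter),
# lambda1 = 2.227 mm (central focal length of the red region).
levels = [sub_eye_distance_mm(4.33, 2.227, t) for t in (0.0, 10.0, 20.0, 30.0, 40.0)]
```

At θ = 0 the expression reduces to λ1, and the distance grows monotonically with the deflection angle, which is why outer-level sub-eyes need longer effective focal lengths.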
The effective focal length of the sub-eyes at all levels can then be obtained from the thin-lens (lensmaker's) equation

1/fn = (ni − 1)(1/rn − 1/R), (2)

where fn is the effective focal length of the nth sub-eye, rn is the radius of curvature of the nth sub-eye, R is the radius of the curved surface base, and ni is the refractive index of the material selected for making the compound eye. According to Equation (2), the initial values of the effective radii of the sub-eyes in the red region can be obtained, as shown in Table 2. From the initial parameters of each sub-eye, a parametric model of the curved compound eye is established in Zemax, and ray tracing is performed to study the imaging performance of the sub-eyes at each level. Analyzing the ray-tracing results of the first-level sub-eye (Figure 3a), it can be seen that although the sub-eye can form an image, the rays do not converge to a single point. From the ray fan (Figure 3b), it is found that there is a large spherical aberration in the initial structure: the ray aberration is 5.4 µm. From the spot diagram (Figure 3c), the radius of the Airy diffraction spot is 4.035 µm; the spot is relatively decentralized, which affects the imaging capability of the microlens. In order to obtain the best imaging quality, aspherical optimization of the sub-eye structure is required. We establish a target optimization function in terms of the effective focal length (EFFL), spherical aberration (SPHA), and modulation transfer function (MTF), setting optimization target values and weights for each. Then, to minimize the sub-eye's spherical aberration, we determine the radius of curvature for each level that yields the best imaging quality.
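Inverting the thin-lens relation gives the initial spherical radius of curvature for a target focal length. The sketch below uses the common form 1/f = (ni − 1)(1/r − 1/R); the sign convention and the PDMS refractive index ni ≈ 1.41 used in the example are assumptions, not values stated in the text.

```python
def radius_of_curvature_mm(f_mm: float, R_mm: float, n_i: float) -> float:
    """Invert the thin-lens relation 1/f = (n_i - 1) * (1/r - 1/R) to obtain
    the initial spherical radius of curvature r of a sub-eye from its target
    effective focal length f, the base radius R, and the refractive index
    n_i. Sign convention is an assumption for this sketch."""
    return 1.0 / (1.0 / ((n_i - 1.0) * f_mm) + 1.0 / R_mm)

# Example with assumed values: f = 2.227 mm (red-region central sub-eye),
# R = 4.33 mm (half the base diameter), n_i = 1.41 (typical for PDMS).
r = radius_of_curvature_mm(2.227, 4.33, 1.41)
```

A quick round trip—substituting the computed r back into 1/f = (ni − 1)(1/r − 1/R)—recovers the input focal length, which is a useful self-check before populating a table like Table 2.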
After optimization, the surface of the sub-eye becomes aspherical, as shown in Figure 4. According to the Zemax ray-tracing results (Figure 4a), the light is well focused to a point by the aspherical sub-eye, and no divergence occurs. At the same time, the ray-fan analysis of the aspherical surface shows that the spherical aberration of the lens is greatly changed, with the aberration of the sub-eyes reduced to about 2.98 μm, as shown in Figure 4b. In the spot diagram of the aspherical sub-eye (Figure 4c), the Airy diffraction spot has a root-mean-square radius of 1.448 μm and is relatively concentrated, which is beneficial to the imaging quality of the sub-eye lens.
The sub-eye surface is changed from spherical to aspheric, and the spherical aberration at each level of sub-eye is reduced to one-hundredth of that of the initial structure. However, the aspherical sub-eye structure places higher demands on the machining of the mold for the later curved-surface compound eye. Table 3 shows the spherical aberrations before and after optimization of the red-zone sub-eye lenses. Table 4 shows the dimensions of the optimized aspherical microlenses in the red region. After the model is exported from Zemax, it is assembled at the corresponding position on the curved surface to form a three-focal-length microlens array.


Analysis of the Imaging Performance of the Zoomed Compound Eye Model after Optimization
After the optimization, the curved compound-eye model was re-established, and the level 1–5 sub-eye microlenses in the surface model were ray-traced using Zemax. The sub-eye lens arrays are arranged on the curved substrate, and the lenses in each level have the same focal length. After optimization, the light focusing and energy-spot distribution of the sub-eyes at all levels are shown in Figure 5. Comparing the ray-tracing results in Figures 1 and 5, we find that before optimization the focused spot of each sub-eye is larger and the light is scattered; the peak irradiance before optimization is only 3.9372 × 10³ W/cm². After optimization, the sub-eyes at all levels are better focused and produce stronger spot energies on the detector, with a peak irradiance reaching 1.4832 × 10⁵ W/cm². This shows that the imaging quality of the sub-eyes at all levels is improved by our optimization method, which provides a foundation for image recognition and positioning.
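The improvement can be quantified directly from the two reported peak-irradiance values; the ratio works out to roughly a 37-fold increase in on-detector peak irradiance.

```python
# Peak irradiance before and after aspheric optimization, as reported in
# the simulation results (units: W/cm^2).
peak_before_w_cm2 = 3.9372e3
peak_after_w_cm2 = 1.4832e5

# Ratio of peak irradiance after optimization to before.
improvement = peak_after_w_cm2 / peak_before_w_cm2
```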


Multi-Eye Positioning Mathematical Model
According to the target-imaging positioning mechanism, the target point, the center of the sub-eye lens, and the center of the target image point are collinear. If many sub-eyes on the compound eye can capture the target point and obtain a clear image at the same time, the target point is located at the intersection of the lines between each target-image center and the corresponding sub-eye lens center. However, in actual imaging, due to the aberrations of the sub-eye lenses at all levels and the machining errors in preparing the curved surface of the compound eye, non-linear distortion in curved-surface compound-eye imaging is unavoidable.
Due to the distortion, it is difficult to directly establish a mathematical correction model between object and image. Thus, we divide the problem into two parts: the linear equations between the target point and the sub-eye lenses, and the correspondence between the sub-eye lenses and the target image points. Schematics of these are shown in Figure 6a; the two mathematical models established below are used to solve for the spatial coordinates of the target point. As shown in Figure 6a, the center coordinates of the ith sub-eye lens are P0i = (X0i, Y0i, Z0i), while the target point is Pi = (Xi, Yi, Zi). The direction vector between the two points can be expressed as p = (tan αi, tan βi, 1). Let p′ = (ai, bi, ci) = (tan αi, tan βi, 1), where ai = tan αi, bi = tan βi, and ci = 1; then the relationship between the sub-eye lens and the target point can be expressed as

(Xi − X0i)/ai = (Yi − Y0i)/bi = (Zi − Z0i)/ci, (3)

where i = 1, 2, …, 61 indexes the sub-eye lenses. For this calculation, the coordinate system of the central sub-eye of the compound eye is taken as the world coordinate system, and the deflection angle is measured between the optical axes of the level 2–5 sub-eyes and the first-level sub-eye. To unify the calculation, the coordinate systems of the level 2–5 sub-eyes must be transformed into the main coordinate system, as shown in Figure 6b. This coordinate-system transformation follows the Euler rotation law.
From the conversion relationship of the Euler angles, we obtain the rotation matrix from the level 2–5 sub-eye coordinate systems to the main coordinate system:

    | cosψ cosθ   −sinψ   cosψ sinθ |
T = | sinψ cosθ    cosψ   sinψ sinθ |, (4)
    |   −sinθ       0        cosθ   |

where ψ is the rotation angle between the level 2–5 sub-eye coordinate system and the Z-axis, and θ is the rotation angle between the sub-eye coordinate system and the Y-axis. Therefore, the relationship between the direction vector p′ of each sub-eye in its local coordinate frame and the corresponding direction vector p in the world coordinate system is

p = T p′. (5)

From the world coordinates of the lens center of each sub-eye and the world-frame direction vector p of the corresponding sub-eye lens, the equation of the line between the target point and the lens in the world coordinate system is obtained according to Equation (3). If the target point can be captured by n sub-eyes at the same time, the relationship between the target point and the sub-eye lenses can be expressed in matrix notation as

RX = D, (6)

where the matrix R and the vector D collect the coefficients and constant terms of the n line equations, and X = (X, Y, Z)^T is the vector of target-point coordinates. To solve the over-determined system, if R has full column rank, let N = R^T R; then the three-dimensional coordinates X of the target point can be expressed as

X = N⁻¹ R^T D = (R^T R)⁻¹ R^T D. (7)
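The whole positioning pipeline—rotating each sub-eye's local direction into the world frame and intersecting the resulting lines in a least-squares sense—can be sketched as below. The Z-then-Y Euler order, the function names, and the way each line is split into two scalar equations are assumptions of this sketch, not details confirmed by the text.

```python
import math
import numpy as np

def rotation_z_y(psi_deg: float, theta_deg: float) -> np.ndarray:
    """Rotation taking a sub-eye's local frame to the world frame, built as
    R_z(psi) @ R_y(theta). The Z-then-Y composition order is an assumption."""
    psi, theta = math.radians(psi_deg), math.radians(theta_deg)
    rz = np.array([[math.cos(psi), -math.sin(psi), 0.0],
                   [math.sin(psi),  math.cos(psi), 0.0],
                   [0.0,            0.0,           1.0]])
    ry = np.array([[ math.cos(theta), 0.0, math.sin(theta)],
                   [ 0.0,             1.0, 0.0],
                   [-math.sin(theta), 0.0, math.cos(theta)]])
    return rz @ ry

def locate_target(centers, directions):
    """Least-squares intersection of the lines through each sub-eye center
    P0i along its world-frame direction (a_i, b_i, c_i). Each line yields
    two linear equations in X = (X, Y, Z) (eliminating the line parameter
    via the z-component), and the stacked over-determined system R X = D
    is solved via the normal equations X = (R^T R)^-1 R^T D."""
    rows, rhs = [], []
    for (x0, y0, z0), (a, b, c) in zip(centers, directions):
        a, b = a / c, b / c              # normalise so the z-component is 1
        rows.append([1.0, 0.0, -a]); rhs.append(x0 - a * z0)
        rows.append([0.0, 1.0, -b]); rhs.append(y0 - b * z0)
    R = np.array(rows)
    D = np.array(rhs)
    return np.linalg.solve(R.T @ R, R.T @ D)
```

As a sanity check, two non-parallel lines constructed to pass through a common point recover that point, and adding more sub-eye channels simply appends more rows to R and D, which is what drives the error reduction reported later.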


Analysis on the Imaging Performance of a Zoom Compound Eye on a Curved Surface
The method of mold forming is adopted for the preparation of the zoom curved-surface compound eye. This method has the advantages of simple operation, controllable precision, and batch preparation. The mold is made with a precision five-axis numerically controlled (NC) machine tool (Figure 7a), and polydimethylsiloxane (PDMS) is used as the forming material. The curing agent and PDMS were poured into the mold at a ratio of 1:10, and the temperature was fixed at 80 °C for one hour. After curing was completed, the compound-eye sample was obtained, as shown in Figure 7b.
In order to verify the imaging performance of the prepared curved compound eye, an imaging test bench was constructed, as shown in Figure 8a,b. The experimental bench is mainly composed of a cluster light source (with adjustable aperture size), a mask plate (circular or cross aperture), a convex lens, a curved compound-eye camera, and a PC. By adjusting the distances between the components of the bench, the best imaging effect on the compound-eye camera is obtained. Figure 8c shows the image collected by the compound-eye camera when the light source was passed through a circular diaphragm. It can be seen from the image that a bright spot is obtained for each sub-eye at all levels. The image-processing software ToupView is used to analyze the images collected by the compound eye. The three different color regions are denoted A, B, and C, and the size of the image spot in each region is measured. The imaging spot sizes in regions A, B, and C for each level are shown in Table 5.
Figure 8d shows the image collected by the compound-eye camera when the light source passes through the cross-shaped diaphragm. In the area where the compound eye receives light, multiple sub-eyes are imaged to form a "cross", matching the mask image. Due to the different focal lengths of the sub-eyes in the three imaging areas, the sizes of same-level sub-eye spots in the "cross" image differ. By analyzing the two different shapes of image information collected by the camera, it can be concluded that the designed zoom compound eye can achieve zoom imaging over a certain range.


Multi-Eye Positioning Experiment on the Artificial Compound Eye
In order to perform a multi-eye positioning experiment, we employ a system consisting of an artificial compound eye, a complementary metal-oxide-semiconductor (CMOS) detector (CFV301-H2, DO3THINK, Shenzhen, China), a beam splitter, a laser light source, and a target surface, as shown in Figure 9. In assembling the artificial compound eye with the CMOS, the focal length of the central sub-eye is used as the reference for adjusting the distance between the compound-eye surface and the CMOS photodetector surface, so that all sub-eyes form clear images. The CMOS optical detection array receives the multi-channel images of the target point and outputs the captured image to a PC for further processing.
Image acquisition was achieved by the CMOS photosensitive surface with a size of 6.55 mm × 4.92 mm, containing 2048 × 1536 effective pixels, each pixel having a unit size of 3.2 µm × 3.2 µm. The CMOS was connected through USB to a PC, whereupon the image could be processed further.
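Locating an image spot on the detector requires converting its pixel indices into physical coordinates on the sensor plane. A minimal sketch using the sensor parameters given above (2048 × 1536 pixels, 3.2 µm pitch); placing the origin at the sensor center with x to the right and y upward is an assumption of this sketch.

```python
# CMOS sensor parameters from the text: 2048 x 1536 effective pixels,
# 3.2 um x 3.2 um pixel pitch (active area 6.55 mm x 4.92 mm).
PIXEL_PITCH_MM = 0.0032
WIDTH_PX, HEIGHT_PX = 2048, 1536

def pixel_to_mm(col: int, row: int) -> tuple:
    """Convert a pixel index (col, row) to (x, y) in mm on the sensor
    plane. Origin at the sensor centre, x to the right, y upward, and
    the pixel centre offset of half a pitch are assumptions."""
    x = (col - WIDTH_PX / 2 + 0.5) * PIXEL_PITCH_MM
    y = (HEIGHT_PX / 2 - row - 0.5) * PIXEL_PITCH_MM
    return (x, y)
```

These (x, y) positions, combined with each sub-eye's lens-center coordinates, supply the image-point data needed by the line equations of the positioning model.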
In order to facilitate the determination of the sub-eye lens coordinates during the target-positioning process, each sub-eye lens needs to be numbered. The numbering of the sub-eye lenses is shown in Figure 10a. According to the experimental conditions, to obtain the three-dimensional coordinates of the target, the following conditions need to be satisfied:

• The horizontal-axis movement plane must be perpendicular to the target plane;
• The target plane must be parallel to the base plane of the artificial compound eye;
• The optical axis of the main (central) sub-eye lens must intersect the target plane, and the distance between the center of the main lens and this intersection point must remain the same.
The sub-eye imaging information collected in the experiment is shown in Figure 10b. In the experiment, the number of sub-eyes that imaged the target point was 33. In order to analyze the positioning error for the same target in the zoom curved compound-eye imaging system, a calibration was performed. In the calibration process, the center of the compound-eye sphere is used as the origin, and the target-point coordinates are set to P0 = (18.6 mm, 12.52 mm, 85 mm). According to Equation (7), if the target point is captured by two or more sub-eye channels, its three-dimensional coordinates can be determined from the resulting over-determined system of equations. Therefore, between 2 and 20 of the sub-eye channels are extracted to solve for the three-dimensional coordinates of the spatial target point P0, as shown in Table 6.
From the coordinates of the target point shown in Table 6, the errors in the calculated X, Y, and Z coordinate values of the target point are found to gradually decrease as the number of captured target sub-eyes increases. The error curves in the three directions are shown in Figure 11.
Analysis of the trend of the curves shows that when the number of sub-eyes capturing the target point is less than 20, the error in the calculated coordinates of the target point is relatively large. When the number of captured target sub-eyes reaches 20 or more, the error in the three-dimensional coordinates of the target obtained by the curved compound eye positioning model is reduced to less than 10%. Due to errors in the preparation process of the curved compound eye, CMOS assembly error, measurement error of the target point, error in the distance between the target surface and the compound eye camera, and the like, the three-dimensional coordinates of the currently solved target point have a relatively large error. These error sources will be reduced as much as possible in subsequent experiments to improve the accuracy of the solution. Despite this, we have optimized
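As a concrete reading of the sub-10% figure, the per-axis relative error of a solved point against the known calibration point P0 can be computed as below. This is a hypothetical sketch: the helper `relative_error_percent` and the example estimate `est` are invented for illustration and do not come from Table 6.

```python
import numpy as np

def relative_error_percent(estimate, truth):
    """Per-axis relative error (%) of a solved target point
    against the known calibration point."""
    estimate = np.asarray(estimate, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return np.abs(estimate - truth) / np.abs(truth) * 100.0

P0 = np.array([18.6, 12.52, 85.0])   # calibration point from the experiment (mm)
est = np.array([19.5, 13.1, 88.0])   # made-up solved coordinates for illustration
print(relative_error_percent(est, P0))  # ~[4.84, 4.63, 3.53], each axis below 10%
```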