1. Introduction
In the rapidly advancing technological era, optical imaging plays an indispensable role in a wide range of fields, including medical diagnosis, geological exploration, aerospace, and military reconnaissance [1,2,3,4,5]. With the continuous advancement of science and technology, the demand for imaging with higher resolution, a larger field of view (FOV), and greater detail is increasing steadily [6,7,8]. When designing and utilizing optical imaging systems, researchers often face a fundamental challenge: how to balance imaging resolution and FOV. Higher resolution enables the system to capture more details and smaller features, thereby providing more accurate information and data. However, achieving high resolution often requires sacrificing the size of the FOV [9,10,11].
The imaging FOV represents the size of the entire spatial range or scene that an optical system can observe. An imaging system with a large FOV can cover a larger area and capture a broader range of information. However, increasing the FOV often reduces resolution: within a limited optical system, covering a broader scene means spreading a limited light-gathering budget over a wider area. There is therefore an inherent trade-off between resolution and FOV; improving resolution often requires sacrificing FOV, while large-FOV imaging techniques may perform poorly in resolution.
In recent years, many researchers have explored technical routes to large-FOV, high resolution imaging. For example, the teams of Kopf and Sargent used a single high resolution SLR camera to scan and capture a large number of photos from a fixed location [12,13]; these are then stitched together into gigapixel images with a large FOV. However, there is a time interval between the frames of the scanned images, so this method is suitable only for static scenes. Wilburn [14] proposed a large-scale imaging system composed of 96 cameras, each equipped with its own data processor. By arranging the cameras in space and adjusting each camera's FOV, a series of images with different but overlapping perspectives is acquired simultaneously, enabling the reconstruction of high resolution images with a large FOV. Cossairt from Columbia University employed computational imaging to achieve high resolution imaging in small camera volumes, using spherical lenses and detectors for large-FOV imaging [15]. However, the designed system exhibits significant spherical and chromatic aberration, which cannot be completely removed by computational post-processing. Suntharalingam achieved high resolution imaging with a large FOV by butting multiple detectors together into one continuous larger detector [16]. Although this detector suits single-exposure image acquisition, the captured images suffer from stitching gaps and blind spots. Fisheye lenses can expand the FOV at a given resolution [17,18], but the imaging surface is distorted, so a consistent resolution cannot be maintained across the full field; compression is most severe in the edge regions, leading to significant information loss. Overcoming this contradiction between resolution and FOV thus remains challenging. In comparison, the multi-scale imaging design proposed by Brady from Duke University [19] offers significant advantages, which are discussed in detail in the following section.
2. Multi-Scale Imaging System
Multi-scale optical imaging technology seeks to integrate imaging capabilities at different scales within a single system to achieve comprehensive observation and analysis of a scene. The development of this technology has significantly broadened the application areas of optical imaging, offering novel solutions for a variety of critical fields. A multi-scale optical system is a multi-level imaging system that differs from traditional optical systems. Currently, multi-scale optical systems consist of two levels: a large optical system at the front level and multiple small relay systems at the secondary level, as shown in Figure 1. The front-end, large-sized optical system collects as much optical information as possible and performs preliminary aberration correction. The secondary small-sized optical systems form a multi-aperture relay imaging array, which segments and relays the intermediate image formed by the front-end optics, completing local residual aberration correction. This design combines the FOV acquisition capability of large-sized optical systems with the local aberration correction capability of small-sized multi-aperture relay arrays, offering strong aberration correction that ensures a large FOV while maintaining high resolution imaging.
Although the multi-scale design offers significant advantages, when a traditional lens is used as the primary objective, the aberrations it generates vary with FOV position. Correcting these local aberrations therefore requires many different secondary relay systems, which complicates both the design and manufacturing processes.
In order to ensure that the local aberrations corrected by the secondary relay imaging systems are consistent across FOV positions, and that the secondary relay systems can be fully symmetrical, Brady et al. proposed the multi-scale monocentric optical imaging system [20,21,22,23,24,25,26]. This architecture avoids the situation in which varying aberrations at different field positions demand different, asymmetrically arranged secondary relay systems. Owing to the rotational symmetry provided by the spherical primary lens, a larger FOV can be achieved, and the resulting intermediate image surface is spherical. The aberrations at different positions on this intermediate image surface are identical, so the secondary relay systems can be completely symmetric and arranged uniformly on a sphere behind the intermediate image. This advantage significantly simplifies the optical system's structure, reduces processing costs and assembly difficulty, and represents the most effective method for achieving both a large FOV and high resolution imaging simultaneously. The schematic diagram of the multi-scale monocentric optical imaging system is shown in
Figure 2.
In order to achieve both a large FOV and high resolution imaging, this study investigates the principle of multi-scale monocentric imaging and designs a multi-scale monocentric optical system for imaging in the visible light band. The front group of the system adopts a double-layer monocentric objective, and the initial structural parameters of the double-layer monocentric objective are calculated through aberration theory. After obtaining the initial structural parameters, the structure can then be further optimized using optical design software.
3. Initial Structural Calculation of Multi-Scale Monocentric Imaging System
The multi-scale monocentric imaging system consists of a large, spherically symmetric objective and a set of identical secondary relay imaging systems. The spherically symmetric objective performs the initial imaging of wide-area scenes, generating a curved intermediate image surface. The secondary relay imaging systems divide the large FOV; the segmented fields of view overlap, which is exploited for later field stitching. Each secondary relay system further corrects the residual aberrations of the primary objective and relays the intermediate image within its own FOV channel. Finally, the sub-images are stitched together, achieving high resolution imaging over a large FOV.
Because the secondary relay imaging systems are identical, we focus only on the system model formed by the primary objective and a single secondary relay system. The residual aberration of the monocentric objective can be compensated and corrected by the secondary relay system. If the residual aberration is large, it increases the optical complexity of the relay system needed to correct it and, additionally, increases the tolerance sensitivity of the system, leading to higher assembly costs. Therefore, we first design the monocentric objective independently to eliminate most of the aberrations and then optimize the entire design, incorporating the secondary relay lens to further correct the residual aberrations of the system.
The primary objective lens of the multi-scale monocentric imaging system employs a spherically symmetric optical system, which consists of a transparent spherical lens and a series of concentric spherical shells. As the number of spherical shells increases, the degree of freedom in the design also increases, allowing for better optimization of various parameters to enhance system imaging quality. However, this also increases the design complexity. Taking all factors into consideration, the primary objective lens designed in this study is a double-layer monocentric spherical lens, as shown in
Figure 3.
According to the theory of paraxial optics, the formula for calculating the focal length of the double-layer monocentric spherical lens can be derived, as shown in Equation (1), where n1 and n2 are the refractive indices of the two layers, and r1 and r2 are their radii of curvature.
The primary aberration of a double-layer monocentric spherical lens is spherical aberration. Using aberration theory, the condition for eliminating spherical aberration in the double-layer spherical objective is derived, as shown in Equation (2) [27].
Therefore, we can combine Equations (1) and (2) to calculate the initial structure of the double-layer spherical objective, i.e., the initial values of r1 and r2. As a design example, we consider a double-layer cemented monocentric spherical lens with a target focal length of f = 100 mm. The outer glass is the heavy flint H-ZF3 from the CDGM optical glass library, with a high refractive index of n = 1.71736; the inner glass is the crown H-QK3L from the same library, with a low refractive index of n = 1.48749. Substituting these parameters into the focal length Equation (1) and the spherical aberration elimination condition Equation (2) yields r1 = 67.0787 mm and r2 = 36.6632 mm. The thicknesses of the double-layer spherical lens are therefore d1 = r1 − r2 = 30.4155 mm and d2 = r2 = 36.6632 mm. These radii and thicknesses were entered into the optical design software, with the imaging wavelength set to the visible band, 486 nm to 656 nm, giving the imaging optical path of the monocentric primary objective shown in Figure 4. The rays are bent only gently at each surface of the double-layer objective and are well focused at the image plane, indicating that the initial parameters obtained from aberration theory serve as a good starting point for optimization.
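The paraxial layout above can be checked numerically. The sketch below is our own generic ynu ray trace through the four concentric surfaces (+r1, +r2, −r2, −r1) of a symmetric two-glass ball lens; the surface layout and sign conventions are our assumptions and need not match the closed form of Equation (1), which is not reproduced here.

```python
# Paraxial (ynu) check of a double-layer monocentric lens: a sketch,
# not the paper's Equation (1). Refraction: nu' = nu - y*(n' - n)/R;
# transfer: y' = y + (nu/n')*d, with nu the reduced angle n*u.

def monocentric_efl(n1, n2, r1, r2):
    """Effective focal length of a symmetric two-glass ball lens.
    n1: outer-shell index, n2: core index, r1 > r2 > 0 (mm)."""
    surfaces = [      # (radius, index before, index after)
        (+r1, 1.0, n1),
        (+r2, n1, n2),
        (-r2, n2, n1),
        (-r1, n1, 1.0),
    ]
    gaps = [r1 - r2, 2 * r2, r1 - r2]       # glass thickness between surfaces
    y, nu = 1.0, 0.0                        # marginal ray from infinity
    for i, (R, n, n_next) in enumerate(surfaces):
        nu = nu - y * (n_next - n) / R      # paraxial refraction
        if i < len(gaps):
            y = y + (nu / n_next) * gaps[i] # transfer to the next surface
    return -1.0 / nu                        # f = -y0/u' with y0 = 1, u' in air
```

As a sanity check, setting n1 = n2 = n recovers the homogeneous ball lens, whose focal length is f = nR/(2(n − 1)); for n = 1.5 and R = 10 mm this gives 15 mm.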
The double-layer spherical lens corrects some aberrations, but the residual aberrations, especially the field curvature, need further correction by the subsequent relay lenses. Because the relay lens group takes on both residual aberration correction and relay imaging, its design is more complex and cannot be solved analytically as the monocentric objective can. Instead, a suitable initial structure must be selected from existing patent or lens libraries and then optimized. An endoscopic configuration can indeed serve as an initial structure for near-field relay imaging; however, compared with endoscopic systems, the double-Gauss configuration offers greater design flexibility and advantages such as a larger aperture, so double-Gauss lenses are still more commonly adopted in practical optical engineering. For example, the AWARE10 camera uses double-Gauss lenses as the initial structure for its relay imaging lenses [28]. Therefore, we select the classic double-Gauss objective as the initial structure of the secondary relay imaging system, as shown in Figure 5. The selected double-Gauss lens consists of six elements in a compact, symmetrical arrangement. The aperture stop is located between the third and fourth elements, which facilitates a larger relative aperture and imaging FOV.
4. System Structures for Optimized Design
We combine the double-layer spherical lens shown in Figure 4 with the six-element double-Gauss lens shown in Figure 5 for further joint optimization. The optimization steps involve scaling the lens sizes of the secondary relay system; optimizing the radii of curvature and thicknesses of each lens in both the primary and secondary relay systems; optimizing the inter-lens distances; substituting glass materials; and adjusting the position of the aperture stop. Through repeated iterative optimization, we ultimately obtained a design that satisfies all requirements. The imaging optical path of the final design is illustrated in Figure 6. The design covers visible wavelengths from 450 to 650 nm. Other system parameters include an F-number of 3, a focal length of 55.8 mm, and a field of view of ±1.5°.
The spot diagrams are presented in Figure 7. The FOV of a single secondary imaging system is ±1.5°, corresponding to a total FOV of 3°. Such a relatively small channel FOV implies that a large number of secondary imaging systems is required to realize a large-FOV system. However, this configuration also offers important advantages: a small FOV is more favorable for aberration correction, and stitching a large number of imaging channels makes it possible to achieve high resolution imaging with a very large number of pixels. In the ideal case, without considering the field overlap required for stitching, the required number of secondary imaging systems is approximately the total desired FOV divided by the FOV of a single channel. Although a large number of secondary imaging systems enables large-field, high resolution imaging, it also introduces significant challenges, including more complex field stitching, massive image data processing, and difficulties in system assembly and alignment. Therefore, the number of secondary imaging systems should be chosen and optimized according to practical requirements and engineering constraints.
From Figure 7, it can be observed that all imaging spots are compact, with an RMS spot radius of 1.8 μm at the central FOV and 2 μm at the maximum FOV. The energy of the imaging spots is well concentrated, demonstrating that the optical system effectively corrects aberrations, including spherical aberration, astigmatism, field curvature, and chromatic aberration. The rays in these spot diagrams fall almost entirely within the Airy disc radius, showing near diffraction-limited performance. The number of pixels on the detector of an optical system describes its resolution, i.e., the amount of information the system can capture.
In practical imaging systems, the size of the imaging spot should be properly matched to the pixel size of the detector. Our analysis is based on a theoretical model in which we assume the imaging spot size is already well matched to the detector. Under this assumption, the resolution (pixel count) of the optical system is given by n = A/S, where A is the area of the imaging sensor and S is the area of a single pixel. According to this formula, the smaller the imaging spot of the optical system, the smaller the pixel that can be used, and hence the higher the achievable imaging resolution.
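The pixel-count formula n = A/S is trivial to evaluate. In the sketch below, the sensor dimensions are hypothetical examples; only the 2 μm pixel pitch comes from the design discussed here.

```python
def sensor_pixel_count(width_mm, height_mm, pixel_um):
    """n = A / S, with A the sensor area and S the area of one pixel."""
    area_um2 = (width_mm * 1e3) * (height_mm * 1e3)  # sensor area in um^2
    return int(area_um2 / pixel_um ** 2)             # pixels per sensor

# Hypothetical 36 mm x 24 mm sensor with 2 um pixels:
# 36000 * 24000 / 4 = 216e6 pixels (216 MP) per channel.
```

Multiplying by the number of relay channels then gives the aggregate pixel count of the stitched mosaic.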
In this paper, the maximum RMS imaging spot radius of the designed optical system is constrained to 2 μm. A small-pixel imaging sensor can therefore be selected, which is highly advantageous for enhancing the imaging resolution of the optical system. In addition, when multiple detectors are used, the imaging resolution improves significantly and can reach hundreds of millions of pixels, satisfying the requirements of high resolution, wide-area imaging. The system can be widely applied in scenarios such as city-square photography, station surveillance, and live sports broadcasts.
The modulation transfer function (MTF) of an optical system is one of the most effective ways to characterize its imaging quality. For a diffraction-limited optical imaging system, the MTF is given by Equation (3) [29].
When an optical system has aberrations, the MTF degradation due to aberrations is expressed by Equation (4) [30], where Wrms is the root mean square wavefront aberration, fx is the spatial frequency (in cycles/mrad), and foco is the optical cutoff frequency, foco = D/λ. The empirical constant A takes the value A = 0.18.
From the definition of the optical transfer function, the MTF of an actual optical system is the product of the diffraction-limited MTF, MTFdiff, and the aberration MTF, MTFaberration, as shown in Equation (5). Substituting Equations (3) and (4) into Equation (5) gives the MTF of the actual optical system.
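Equations (3)–(5) are not reproduced inline above, so the sketch below uses the standard closed forms they are commonly written in: the diffraction-limited MTF of a circular pupil and Shannon's empirical aberration MTF with A = 0.18. Our notation (normalized frequency v = f/foco, Wrms in waves) is an assumption and may differ from the paper's.

```python
import math

def mtf_diffraction(v):
    """Diffraction-limited MTF of a circular pupil, Equation (3) form;
    v = f / f_cutoff, valid for 0 <= v <= 1."""
    if v >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(v) - v * math.sqrt(1.0 - v * v))

def mtf_aberration(v, w_rms, A=0.18):
    """Shannon's empirical aberration MTF, Equation (4) form;
    w_rms is the RMS wavefront error in waves."""
    return 1.0 - (w_rms / A) ** 2 * (1.0 - 4.0 * (v - 0.5) ** 2)

def mtf_system(v, w_rms):
    """Equation (5) form: product of the diffraction and aberration terms."""
    return mtf_diffraction(v) * mtf_aberration(v, w_rms)
```

Evaluating mtf_system over v in [0, 1] for several w_rms values reproduces the qualitative behavior of Figure 8: any nonzero wavefront error pulls the curve below the diffraction limit, most strongly near mid-frequencies.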
The MTF curves for various wavefront aberrations are presented in Figure 8. As the wavefront aberration increases, the MTF of the optical system decreases significantly, deviating further from the diffraction-limited transfer function curve.
Figure 9 illustrates the MTF of the monocentric multi-scale imaging system designed in this paper. The MTF curves for all FOVs are consistent and closely follow the diffraction limit curve. Comparing Figure 8 and Figure 9 shows that the designed optical system effectively corrects the wavefront aberrations. For a detector with a 2 μm pixel size, the maximum resolvable frequency of the detector is 250 lp/mm, so the MTF cutoff frequency can be set to 250 lp/mm. As shown in Figure 9, at this cutoff frequency the transfer function values for all fields of view exceed 0.35, and the MTF of each FOV is close to the diffraction limit, indicating that the imaging performance of the optical system is excellent.
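The 250 lp/mm figure follows from the detector's Nyquist limit: one line pair must span at least two pixels. A minimal check:

```python
def nyquist_lp_per_mm(pixel_um):
    """Detector Nyquist (cutoff) frequency in lp/mm:
    one line pair spans two pixels, so f_N = 1 / (2 * pixel pitch)."""
    pixel_mm = pixel_um * 1e-3
    return 1.0 / (2.0 * pixel_mm)

# 2 um pixels -> 1 / (2 * 0.002 mm) = 250 lp/mm, the cutoff frequency
# used for the MTF evaluation in the text.
```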