Article

Research on Field-of-View Reconstruction Technology of Specific Bands for Spatial Integral Field Spectrographs

1 Shanghai Institute of Technical Physics, Chinese Academy of Sciences, 500 Yutian Road, Hongkou District, Shanghai 200083, China
2 Key Laboratory of Infrared System Detection and Imaging, Chinese Academy of Sciences, Shanghai 200083, China
3 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Photonics 2025, 12(7), 682; https://doi.org/10.3390/photonics12070682
Submission received: 29 May 2025 / Revised: 27 June 2025 / Accepted: 4 July 2025 / Published: 7 July 2025

Abstract

Integral field technology, as an advanced spectroscopic imaging technique, can acquire the spatial and spectral information of a target area simultaneously. In this paper, we propose a method for the field reconstruction of characteristic wavelength bands of a space integral field spectrograph. Precise positioning of the image slicer is crucial, as it ensures that the spectrograph can accurately register the spatial position of each slice. Firstly, the line spread function information and the characteristic location coordinates are obtained. Next, the positioning points of each group of image slicers under a specific spectral band are determined by quintic spline interpolation and a double-closed-loop optimization framework, thus establishing connection points for the responses of different image slicers. Then, the accuracy and reliability of the data are further improved by fitting the signal intensity of pixel points. Finally, the data of all image slicers are aligned to complete the field reconstruction of the characteristic wavelength bands of the space integral field spectrograph. This provides new ideas for the two-dimensional spatial reconstruction of spectrographs that use image slicers as integral field units in specific spectral bands, and accurately restores the two-dimensional spatial field observations of spatial integral field spectrographs.

1. Introduction

The origin and evolution of the universe and life, as the ultimate questions at the boundaries of human cognition, have consistently driven the depth and breadth of scientific exploration. This interdisciplinary subject not only propels breakthroughs in fundamental physical laws but also deepens humanity’s fundamental understanding of the forms of material existence. Since the 20th century, with the emergence of milestone achievements such as dark matter structure models, supermassive black hole imaging, and gravitational wave detection, the paradigm of astronomical research has transitioned from classical observation to multi-messenger and full electromagnetic spectrum collaborative detection [1]. In this process, astronomical telescopes, as the core instruments for decoding cosmic information, have undergone a technological leap from single imaging to multi-dimensional data acquisition. Their observational dimensions now encompass the entire electromagnetic spectrum from radio to gamma rays, with resolutions spanning milliarcsecond spatial scales and femtosecond temporal scales ($10^{-15}$ s), providing unprecedented data support for revealing the evolution of large-scale cosmic structures.
Traditional astronomical observation equipment is limited by the discrete design of functional modules, rendering the imaging terminal and spectral terminal incapable of synchronous operation and thereby resulting in systematic deficiencies in the inversion of key physical parameters. For example, in galactic dynamics studies, imaging data alone cannot capture the distribution of stellar velocity fields, while long-slit spectrographs require mechanical scanning to achieve spatial coverage, a process that is inefficient and prone to stitching errors. Statistical data indicate that the observational efficiency of the long-slit spectroscopy mode of the Hubble Space Telescope’s third-generation Wide Field Camera 3 (WFC3) is less than half that of integral field spectrographs (IFSs), severely constraining large-sample astrophysical research.
The advent of the integral field spectrograph (IFS) has fundamentally overcome this technological bottleneck [2,3,4,5,6,7,8]. By employing integral field units (IFUs) to achieve segmentation of the field-of-view, in conjunction with the multi-dimensional optical path design of dispersive elements, it enables the simultaneous acquisition of spatial–spectral information, thereby generating a three-dimensional data cube in (x, y, λ ) coordinates [9,10,11,12,13]. IFUs are mainly realized in three ways: lenslet arrays, lenslet arrays with fibers, and image slicers. The lenslet arrays technique segments an extended source imaged on the telescope’s focal plane into discrete sub-images directed to a back-end spectrograph for dispersion into corresponding spectra, yielding a three-dimensional spectral data cube; the lenslet arrays with fiber approach partitions the field into units with each microlens coupled to an optical fiber, whose output ends are linearly aligned at the spectrograph’s entrance slit for spectral dispersion, while the image slicer dissects the original image field into elongated sub-image fields via miniature planar mirrors, systematically reordering these sub-images at the spectrograph’s slit for subsequent spectral dispersion.
Taking the Multi-Unit Spectroscopic Explorer (MUSE) system of the European Southern Observatory’s Very Large Telescope as an example, 24 slice-based integral field unit (IFU) modules are utilized to achieve full coverage of 300 × 300 spatial sampling points within a 1 arcmin × 1 arcmin field-of-view across the 0.465–0.93 μm wavelength range, with a spectral resolution of R = 3000. This system has successfully resolved fine-scale spatial variation characteristics of dark matter gravitational lensing effects within galaxy clusters [3,4,5]. With regard to the development of IFSs, China started relatively late. CHILI, China’s first integral field unit fiber-optic spectrograph for night-time astronomical observations, is installed on the 2.4 m optical telescope in Lijiang. It is a joint project between the Shanghai Astronomical Observatory of the Chinese Academy of Sciences and the University of Texas at Austin (UT Austin), based on the Visible Integral-field Replicable Unit Spectrograph (VIRUS) from the HETDEX project. CHILI’s IFU combines fiber optics with a microlens array, with a fill factor exceeding 96%, a single-fiber spatial sampling rate of about 3.2 arcseconds, and a total field-of-view of 71″ × 65″ [7].
IFSs are widely deployed in ground-based astronomical telescopes, whereas among space-based instruments, currently only the Near Infrared Spectrograph (NIRSpec) and the Mid-Infrared Instrument (MIRI) on the James Webb Space Telescope (JWST) have IFU capabilities. NIRSpec covers a spectral range of 0.6–5.3 μm with three resolution modes: λ/Δλ = 100, 1000, and 2700. Its IFU operates over a 3 arcsec × 3 arcsec field-of-view (FoV), segmented into 30 slices. MIRI spans 4.9–28.8 μm with wavelength-dependent resolution: λ/Δλ = 1500 to 3500. Its FoV increases with wavelength from 3.9 arcsec × 3.9 arcsec to 7.7 arcsec × 7.7 arcsec [14,15].
This study focuses on the ultraviolet part of the spectral range in China’s upcoming space-borne IFS, complementing JWST’s NIRSpec with a range of 0.35–0.55 μm. Its field-of-view is designed as 6 arcmin × 6 arcmin, and the spectral resolution is expected to exceed R = 1000. To construct the 3D data cube from the spectrograph readout, dimensional extraction is required in both the spectral and spatial domains.
This study focuses on the 2D spatial reconstruction of the field-of-view. Under simulated on-board calibration conditions, we carried out laboratory-based reconstruction of the single-band 2D FoV. For this purpose, a calibration light source was utilized, composed of an integrating sphere equipped with a Hg-Ar lamp (providing characteristic spectral lines) and a halogen lamp. In accordance with the on-board calibration requirements of the space-based IFS, an on-board calibration platform was established by simulating the in-orbit calibration environment. This platform enabled the IFS to acquire the positional distribution of the image slicer within the detector, thereby facilitating two-dimensional spatial fitting of the detector readout data. By applying the wavelength calibration coefficients of the detector, the readout data were transformed into a three-dimensional data cube.

2. Optical System of the Integral Field Spectrograph and Experimental Introduction

2.1. Optical Introduction of the Integral Field Spectrograph

This study focuses on an IFS that employs image slicers as the IFU. The system comprises 32 image slicers, a pupil mirror, a slit, and a dichroic filter. The image slicers segment the incident two-dimensional field-of-view into 32 individual sub-fields. These sub-fields are reflected and linearly arranged by the pupil mirror, which maps them onto the spectrograph’s entrance slit. This configuration effectively converts the two-dimensional field-of-view into a linear field-of-view, optimizing the utilization of the detector’s pixels and enhancing the spectral resolution.
After passing through the entrance slit, the light is directed to a diffraction grating, which disperses the light into its constituent wavelengths, forming spectral lines that are projected onto the detector’s focal plane. The system is designed with two detectors: one for the near-infrared spectrum and another for the ultraviolet spectrum. This study specifically focuses on the ultraviolet detector, which covers a spectral range of 350 nm to 600 nm. The detailed configuration and optical path transmission are depicted in Figure 1; the distribution of the image slicers is shown in Figure 2.
Synchronized with the image slicer, the pupil mirror array consists of 32 sub-pupil mirrors. To reduce the spatial arrangement length of the pupil mirrors and shorten the spectrograph’s entrance slit length, a double-row spatially interlaced design is adopted for the pupil mirrors, halving their length and thereby reducing the spectrograph’s design complexity. The distribution of the pupil mirror array is shown in Figure 3.
Optical characterization enables a prediction of the approximate distribution of the incident field-of-view on the detector’s focal plane. The image presented by the panchromatic light source on the detector is shown in Figure 4.
This image shows a simulation of an integral field spectrograph’s full-band light on a detector. The horizontal axis indicates 32 groups of signal responses from 32 image slicers, while the vertical axis denotes the spectral dispersion direction.

2.2. Image Slicer Positioning Experimental System

Figure 4 illustrates the distribution of various image slicers within the detector. To acquire valid observational data, it is essential to perform wavelength calibration and geometric calibration on the spectrograph, thereby enabling the reconstruction of the two-dimensional spatial field-of-view and the three-dimensional data cube. Wavelength calibration can be achieved by irradiating the integrating sphere with a Hg-Ar lamp, and the wavelength calibration table is derived by fitting the spectral response at specific bands with the detector positions.
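As a minimal sketch of this calibration step, the wavelength calibration table can be built by fitting a low-order polynomial that maps detector pixel positions to known Hg-Ar line wavelengths. The Hg line list below is real, but the pixel positions are hypothetical values consistent with the instrument's nominal 0.18 nm/pixel dispersion; the actual calibration would use measured line centroids.

```python
import numpy as np

# Known Hg emission lines (nm) from the Hg-Ar calibration lamp.
wavelengths = np.array([404.66, 435.83, 546.07, 576.96, 579.07])

# Hypothetical detector row positions of those lines, consistent with the
# nominal 0.18 nm/pixel dispersion (illustrative, not measured).
pixel_rows = np.array([303.67, 476.83, 1089.28, 1260.89, 1272.61])

# Fit a low-order polynomial mapping pixel row -> wavelength.
coeffs = np.polyfit(pixel_rows, wavelengths, deg=2)
calib = np.poly1d(coeffs)

# Any detector row can now be converted to a wavelength; residuals at the
# reference lines indicate the calibration accuracy.
residuals = wavelengths - calib(pixel_rows)
```

In practice one polynomial (or spline) would be fitted per slice, since each image slicer group has its own dispersion trace on the detector.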
The reconstruction of the two-dimensional spatial field-of-view (FOV) necessitates the acquisition of spatially resolved positional features within the corresponding dataset. In this study, an experimental configuration was implemented by positioning a slit aperture at the entrance port of the optical system, with its physical dimensions significantly smaller than the instantaneous field-of-view (IFOV) of the imaging chain. A halogen lamp illuminates this slit, producing a light signal whose response is smaller than one pixel unit. The response areas in the detector are used as key points for the 32 image slicer groups, so each spectral band has 32 positioning points. Given the detector’s spectral resolution of 0.18 nm/pixel, it is important to ensure that the spacing between adjacent spectral bands exceeds 0.18 nm when extracting multiple spectral segments. Since the CCD achieves high operational efficiency at 273 K, with significantly reduced dark current noise and readout circuit noise, a cryocooler is integrated to maintain stable temperature control at 273 K during on-board satellite operation. During testing, the device must replicate the satellite’s on-board environment; therefore, the integrating sphere and spectrograph are placed in a 273 K vacuum chamber. The vacuum environment works synergistically with temperature control to mimic on-orbit conditions, ensuring that the spectrograph’s performance matches the in-orbit design specifications. In order to fit a specific band, the response of the Hg-Ar lamp is collected as the dataset for the two-dimensional spatial reconstruction of the image. The experimental platform is shown in Figure 5. In this experiment, the detector employs a CCD231-84 with a 4 K × 4 K pixel array. The vacuum chamber maintains a pressure below $6.7 \times 10^{-5}$ Pa with a thermal stability of ≤0.5 °C/h.

3. Data Processing and Results Analysis

In this section, a systematic approach to reconstructing the 2D spatial field of spectral data from an image slicer-based IFS is presented. The reconstruction aims to extract and integrate information from the 32 sub-fields split by the image slicer within a specific spectral band, in order to restore the two-dimensional field-of-view information observed at the object plane of the optical system within the designated wavelength range. Initially, the response position for a specific spectral band must be determined. Subsequently, the 32 slices are sequentially extracted and rearranged. During the signal extraction process, if certain spectral bands are located at sub-pixel positions, it is necessary to fit the pixel responses to reconstruct the response values for these specific bands. Furthermore, because the response widths of the image slicer groups vary, the coordinates of the 32 slices need to be aligned and clipped based on the target object to ultimately obtain the 2D spatial field map for the specific spectral band. The core technical workflow is shown in Figure 6.

3.1. Line Spread Function

The line spread function (LSF) serves as a fundamental descriptor of optical system performance for line source imaging [16,17]. It represents the light intensity distribution along a specific direction (usually perpendicular to the line light source) after the line light source is imaged by an optical system. Under normal circumstances, the LSF is mathematically derived from the Point Spread Function (PSF) through the integral relationship expressed in Equation (1).
$$L(x) = \int_{-\infty}^{+\infty} h(x, y)\, dy$$
Here, $h(x, y)$ denotes the Point Spread Function (PSF) characterizing the optical system’s spatial response to a point source, and $(x, y)$ indicates the coordinate position of the response in the readout data. The LSF is obtained by integrating the PSF along the longitudinal axis (y-axis) parallel to the line source’s extended dimension. In this coordinate framework, the transversal x-axis defines the measurement direction of the LSF, while the longitudinal y-axis coincides with the geometric orientation of the line source.
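Equation (1) can be checked numerically: for a rotationally symmetric Gaussian PSF, integrating along the y-axis yields a one-dimensional Gaussian LSF of the same width. The sketch below is illustrative only; the PSF model and its width are assumptions, not instrument values.

```python
import numpy as np

# Assumed Gaussian PSF width in pixels (illustrative).
sigma = 1.5
x = np.linspace(-10, 10, 201)
y = np.linspace(-10, 10, 2001)
X, Y = np.meshgrid(x, y, indexing="ij")

# Normalized 2-D Gaussian PSF h(x, y).
psf = np.exp(-(X**2 + Y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

# L(x) = integral of h(x, y) over y, approximated by a Riemann sum.
dy = y[1] - y[0]
lsf = psf.sum(axis=1) * dy

# Analytically, the LSF of a Gaussian PSF is a 1-D Gaussian with the same sigma.
expected = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
```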
The line spread functions (LSFs) across distinct spectral bands in multiple image slicers can be derived from slit-enhanced imagery acquired at the object plane. For each image slicer array response, a scanning-frame protocol sequentially isolates effective data points along the image rows, followed by estimation of the LSF parameters through Gaussian fitting once valid data points have been identified.
Common methodologies for characterizing LSFs encompass Gaussian profile fitting, Lorentzian distribution modeling, polynomial regression, Fermi function approximation, and composite function synthesis. The Gaussian and Lorentzian approaches demonstrate optimal efficacy for symmetric datasets with smooth curvature profiles, whereas polynomial decomposition has proven suitable for complex morphological distributions. Asymmetric data structures necessitate Fermi function implementation, while intricate LSF configurations require composite model integration. Empirical analysis of experimental response data reveals the superior suitability of Gaussian/Lorentzian paradigms for LSF reconstruction. The formula for the Gaussian fitting LSF is presented in Equation (2), and the formula for the Lorentzian fitting LSF is presented in Equation (3) [18]. When processing experimental detector-readout image data, each slice’s signal response is processed individually. The pixel data fitting procedure, conducted row-by-row, incorporates both screening operations and regression analysis applied exclusively to integer-row pixel arrays. The fitting process establishes a precise correspondence between the column indices of response data in the detector readout image and the signal response values (voltage code values) at their corresponding pixel locations.
$$G(x) = A \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right) + b$$
$$L(x) = \frac{A}{\pi} \cdot \frac{\frac{1}{2}\Gamma}{(x - x_0)^2 + \left(\frac{1}{2}\Gamma\right)^2} + b$$
In the above formulas, $G(x)$ and $L(x)$ represent the signal response values and $x$ represents the column index of the pixel in the detector readout image. In Equation (2), $A$ is the height of the peak, $\mu$ is the center position of the peak, $b$ is the bias term, and $\sigma$ is the standard deviation, which determines the width of the curve. In Equation (3), $A$ is the height of the peak, $x_0$ is the center position of the peak, $b$ is the bias term, and $\Gamma$ is the full width at half maximum, which determines the width of the curve. Table 1 quantitatively compares the two methodologies through three key metrics: mean root mean square error (RMSE, calculated as the square root of the mean squared difference between the measured intensity values at test points and the corresponding values predicted by the fitted LSF curve), average coefficient of determination ($R^2$), and successful fitting rows per image slicer group.
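A row-by-row Gaussian fit of this kind can be sketched with `scipy.optimize.curve_fit`. The column range, peak parameters, and noise level below are hypothetical, chosen only for illustration; the RMSE and $R^2$ computations mirror the metrics used in Table 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, A, mu, sigma, b):
    """Equation (2): Gaussian LSF model with peak A, center mu, width sigma, bias b."""
    return A * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + b

# Synthetic single-row slice response (columns and DN values are illustrative).
cols = np.arange(455, 470, dtype=float)
rng = np.random.default_rng(0)
row = gaussian(cols, A=1200.0, mu=461.85, sigma=0.68, b=50.0)
row += rng.normal(0.0, 5.0, cols.size)

# Initial guesses taken from the data itself, then a nonlinear least-squares fit.
p0 = [row.max() - row.min(), cols[np.argmax(row)], 1.0, row.min()]
popt, _ = curve_fit(gaussian, cols, row, p0=p0)

# Goodness-of-fit metrics analogous to those reported in Table 1.
pred = gaussian(cols, *popt)
rmse = np.sqrt(np.mean((row - pred) ** 2))
r2 = 1.0 - np.sum((row - pred) ** 2) / np.sum((row - row.mean()) ** 2)
```

A Lorentzian fit follows the same pattern with Equation (3) as the model function, which is how the two methods can be compared on identical rows.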
Table 1 presents a comparison of the performance of the Lorentzian and Gaussian fitting methods. The RMSE represents the root-mean-square deviation between the signal response values (voltage code values) at the original coordinate points and the signal response values predicted by the fitted curves of the two methods at the same input locations. Successful fitting rows refer to the number of rows in each image slicer group that are successfully fitted with a curve whose coefficient of determination exceeds 0.94. The Lorentzian fitting method has a slightly smaller RMSE than the Gaussian fitting method; however, this slight advantage is offset by the fact that the number of successfully fitted curves using the Lorentzian method is significantly smaller than that of the Gaussian method. As the vertical direction of the detector’s response is the spectral dispersion direction, inserting a slit before the lens elicits a response from each band. Within the 440 nm to 600 nm range of the halogen lamp, each row corresponding to the bands in each group’s slicer response was fitted, and a slit position table for each band was generated, which is vital for subsequent calculations. A higher number of fitted rows enhances the subsequent processing precision; unresponsive bands require interpolation. The number of successfully fitted rows therefore reflects a fitting method’s stability and applicability across data groups. The Lorentzian method yields fewer successfully fitted rows than the Gaussian method, implying lower stability and applicability, and it also shows a lower coefficient of determination. Thus, the Gaussian method offers better applicability and results in a more accurate LSF, proving to be superior, more reliable, and more effective for this application. The halogen lamp response image obtained after placing a slit in front of the spectrograph lens can be seen in Figure 7.
Figure 8 and Figure 9 share the same coordinate system, with the horizontal axis indicating the detector column number and the vertical axis representing the pixel signal intensity. Figure 8 presents the measured LSFs for all image slicer groups at 546.10 nm, corresponding to the spectral channel mapped by the integer-row pixel with nearest-neighbor interpolation to the characteristic 546.07 nm emission line of the Hg-Ar calibration source. Figure 9 provides a magnified view of the first LSF curve from Figure 8.
Figure 8 shows the peak response at 546.10 nm for all 32 slices (numbered 1 to 32 from left to right), with peak positions ranging from column 461.85 to 3767.10. The horizontal axis represents the column number of the detector, and the vertical axis represents the pixel response value. Due to optical system attenuation, the response of each slice varies. The LSF for each slice has a full width at half maximum (FWHM) of 1.32 to 2.38 pixels, averaging 1.73 pixels. Figure 9 displays an enlarged view of the fitting curve for slice 1 (the leftmost slice in Figure 8), where the peak is located at column 461.85 and the full width at half maximum is 1.59 pixels. The horizontal axis represents the column number of the detector, and the vertical axis represents the pixel response value (digital number).
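The FWHM values quoted above relate to the fitted Gaussian width through the standard conversion, which is implied but not stated explicitly here:

```latex
\mathrm{FWHM} = 2\sqrt{2\ln 2}\,\sigma \approx 2.3548\,\sigma
```

For slice 1, the reported FWHM of 1.59 pixels thus corresponds to a fitted $\sigma$ of about 0.68 pixels.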

3.2. Confirmation of Specific Spectral Segment Positioning Points

After determining the positions of the selected spectral bands in the 32 slices through the wavelength calibration table, the 32 sets of positions need to be aligned into a new common coordinate system. The main purpose of this section is to obtain a set of point coordinates that serve as positioning points. These coordinates establish a connection between the different sub-fields from the image slicer for the same spectral band; that is, they help combine the 32 sub-fields into a 2D field-of-view image at the spectrometer lens. Aiming at the simultaneous response characteristics of the spatial and spectral dimensions produced by image slicer technology in spatial IFSs, this study proposes a sub-pixel-level interlinking positioning method. This method uses spatial coordinate data generated by a modulated sub-field slit (smaller than one instantaneous field-of-view) as input, combines it with spectral coordinate data from the wavelength calibration table, and performs initial data screening via a dynamic sliding window strategy; this strategy reduces the influence of distal data values on the interpolation fitting. A bidirectional mapping model is constructed based on quintic spline interpolation, with an adaptive smoothing coefficient used to balance model complexity against noise suppression requirements. To further enhance positioning stability, a composite optimization objective function incorporating second-derivative constraints is designed, integrating curvature smoothness and robust penalty terms, and ultimately outputting stable intersection coordinates that meet sub-pixel-level accuracy requirements. The positioning method workflow is shown in Figure 10.
In this section, the algorithm selects the spatial coordinate data of the characteristic spectral line at 546.07 nm as the spectral reference and combines it with the spatial information formed by the sub-field slit modulation as input to achieve sub-pixel-level positioning for each image slicer group.
As the incident field is split into 32 sub-fields by the image slicer, it is necessary to locate the alignment markers for each sub-field. The first sub-field is used as an example in the following description, while the remaining sub-fields follow an identical processing method. Firstly, from Figure 10, it can be seen that there are two inputs: the response position information $(x_s, y_s)$ of a specific spectral segment in each slice of the image slicer, obtained from the detector readout image via the wavelength calibration table, and the centroid position $(x_p, y_p)$ of the halogen lamp’s spectral response, acquired through a slit-apertured optical configuration positioned at the spectrograph’s entrance port. For these two sets of positions, response functions $y_s = S_h(x_s)$ and $x_p = S_v(y_p)$ are established. The function $y_s = S_h(x_s)$ represents the response position information of each slice of the image slicer in a specific spectral segment obtained from the detector readout image via the wavelength calibration table; here, the input $x_s$ is an integer denoting the column number of the detector readout image, and $y_s$ denotes the row number. The function $x_p = S_v(y_p)$ represents the response position of each slice to the halogen lamp after the slit is introduced in front of the spectrograph’s lens, which can be determined from the peak positions of each slice at 440 nm to 600 nm (the spectral range of the halogen lamp), as described in Section 3.1; here, the input $y_p$ is an integer denoting the row number of the detector readout image, and $x_p$ denotes the column number. Theoretically, these two functions will have an intersection point for each slice.
To determine this intersection point, it is necessary to interpolate the two sets of functions to find a common position. To filter out valid positional information from the two datasets, reducing the workload and enhancing result accuracy, a characteristic point (the integer point closest to the two sets of positional information) first needs to be identified. An asymmetric search region is then established around this characteristic point to select positional information, and interpolation is subsequently performed to obtain the intersection of the two sets of input data. Given the shape of the two sets of positional information, a spline interpolation method suitable for one-dimensional data processing was chosen. Typically, cubic spline interpolation is used for curve fitting problems [19]; however, it may not suffice in cases where high smoothness and curvature continuity are required. Quintic spline interpolation has advantages in such scenarios [20,21]: while a cubic spline ensures continuity up to the second derivative, a quintic spline achieves continuity up to the fourth derivative, ensuring smooth curvature variation. Quintic spline interpolation also provides higher precision and strong adaptability to noise. Quintic B-spline interpolation further enhances these advantages [22]: it offers high precision at a lower computational cost and is capable of providing an estimated solution at any mesh point in the domain. Given these advantages, we choose quintic B-splines to construct the functions $y_s = S_h(x_s)$ and $x_p = S_v(y_p)$, which are shown in Equations (4) and (5), respectively.
$$y_s = S_h(x_s) = \sum_{i=0}^{n} B_{i,5}(x_s) \cdot P_i$$
$$x_p = S_v(y_p) = \sum_{j=0}^{m} B_{j,5}(y_p) \cdot Q_j$$
In Equations (4) and (5), $(x_s, y_s)$ represents the response position in the detector readout data for a specific wavelength band, and $(x_p, y_p)$ represents the spatial coordinates in the detector focal plane data, measured after light from the halogen lamp passes through the pre-slit assembly of the spectrograph lens. $P_i$ and $Q_j$ represent the coordinate weights of the control points, corresponding to the fitting coefficients of their respective basis functions. These weights are optimized via the least squares method to approximate the input data points $(x_s, y_s)$ and $(x_p, y_p)$ with a spline curve. Adjusting these weights modulates the control points’ influence on the curve geometry, thereby enhancing the fidelity of the data fitting. The quintic spline basis functions $B_{i,5}(x_s)$ and $B_{j,5}(y_p)$ are defined using the Cox–de Boor recursion, as shown in Equations (6) and (7).
$$B_{i,5}(x_s) = \frac{x_s - t_i}{t_{i+5} - t_i} B_{i,4}(x_s) + \frac{t_{i+6} - x_s}{t_{i+6} - t_{i+1}} B_{i+1,4}(x_s)$$
$$B_{j,5}(y_p) = \frac{y_p - t_j}{t_{j+5} - t_j} B_{j,4}(y_p) + \frac{t_{j+6} - y_p}{t_{j+6} - t_{j+1}} B_{j+1,4}(y_p)$$
In these formulas, $t_i$ and $t_j$ are knot vector entries. They determine the intervals over which the basis functions are non-zero and thus control the local influence of each control point. Knot vectors are typically chosen to be non-decreasing sequences of real numbers, and their values affect the smoothness and continuity of the resulting spline curves. The functions $y_s = S_h(x_s)$ and $x_p = S_v(y_p)$ are shown in Figure 11: $y_s = S_h(x_s)$ in Figure 11a, where the green points indicate the position of the spectral response at 546.07 nm and the orange line represents the function constructed using a quintic B-spline, and $x_p = S_v(y_p)$ in Figure 11b, where the blue points indicate the position of the slit’s spatial response on the detector and the blue line represents the function constructed using a quintic B-spline.
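In practice, the two quintic spline models of Equations (4)–(7) can be sketched with SciPy's B-spline constructor, which assembles the Cox–de Boor basis internally. The sample coordinates below are hypothetical stand-ins for the measured spectral-line and slit traces of one slice.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Hypothetical spectral-line trace: row y_s versus column x_s for one slice.
xs = np.array([455.0, 458.0, 461.0, 464.0, 467.0, 470.0, 473.0, 476.0, 479.0])
ys = np.array([470.9, 470.7, 470.4, 470.2, 470.1, 469.9, 469.8, 469.6, 469.5])
S_h = make_interp_spline(xs, ys, k=5)  # quintic B-spline, as in Equation (4)

# Hypothetical slit-response trace: column x_p versus row y_p.
yp = np.array([462.0, 465.0, 468.0, 471.0, 474.0, 477.0, 480.0, 483.0, 486.0])
xp = np.array([461.2, 461.5, 461.7, 461.9, 462.1, 462.4, 462.6, 462.9, 463.1])
S_v = make_interp_spline(yp, xp, k=5)  # quintic B-spline, as in Equation (5)

# The splines interpolate their sample points exactly and can be evaluated
# at arbitrary sub-pixel positions.
```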
The objective function transforms the mathematical properties of quintic splines into a computable optimization problem through a closed-loop optimization framework, thus avoiding the need to solve for the intersection of two quintic spline functions, with its core coupling mechanism presented in Equation (8).
$$E(x, y) = \left| S_h(x) - y \right| + \left| S_v(y) - x \right|$$
In this formula, $|S_h(x) - y|$ represents the horizontal residual, which ensures consistency between the intersection’s x-coordinate (detector column) and the actual value. Similarly, $|S_v(y) - x|$ denotes the vertical residual, ensuring alignment of the y-coordinate (detector row) at the intersection with its true position. Absolute distances are adopted here to prevent sign-based error cancellation, which could otherwise produce false intersection coordinates by allowing positive and negative deviations to offset each other. Together, these residuals establish a dual closed-loop verification mechanism, effectively mitigating the cumulative impact of fitting deviations in a single direction. The positioning point $(x, y)$ is obtained when the value of $E(x, y)$ is minimized.
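Minimizing Equation (8) can be sketched as a small derivative-free search over $(x, y)$. The spline traces below are hypothetical stand-ins for the measured data, and the Nelder–Mead optimizer is one reasonable choice of search method; the paper does not name a specific optimizer.

```python
import numpy as np
from scipy.interpolate import make_interp_spline
from scipy.optimize import minimize

# Hypothetical quintic-spline traces standing in for S_h and S_v.
xs = np.array([455.0, 458.0, 461.0, 464.0, 467.0, 470.0, 473.0, 476.0, 479.0])
ys = np.array([470.9, 470.7, 470.4, 470.2, 470.1, 469.9, 469.8, 469.6, 469.5])
S_h = make_interp_spline(xs, ys, k=5)

yp = np.array([462.0, 465.0, 468.0, 471.0, 474.0, 477.0, 480.0, 483.0, 486.0])
xp = np.array([461.2, 461.5, 461.7, 461.9, 462.1, 462.4, 462.6, 462.9, 463.1])
S_v = make_interp_spline(yp, xp, k=5)

def E(p):
    # Equation (8): absolute residuals prevent sign-based cancellation.
    x, y = p
    return abs(float(S_h(x)) - y) + abs(float(S_v(y)) - x)

# Start from the nearest integer pixel and search for the sub-pixel intersection.
res = minimize(E, x0=[462.0, 470.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-10})
x_loc, y_loc = res.x  # sub-pixel positioning point for this slice
```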
Figure 12 shows the positioning point of the first group of image slicers in the 546.07 nm band, with a focus on the fitting results around the center point. The figure includes the specific data points and fitted curves; the localization point clearly demonstrates the positioning process.
Figure 12 presents the fitting results for 546.07 nm and the slit data of the first slice using a sub-pixel-level interconnect localization method. The blue dots show the position of 546.07 nm from the wavelength calibration table, with the fitted curve represented by the blue line. The green dots indicate the position mapped from the slit response, with its fitted curve in orange. The red dot marks the intersection point at (461.85689, 470.38038). From the figure, it can be observed that the calculated positioning points are accurate.

3.3. Image Slicer Group Alignment

Reconstructing the object-plane view of the lens requires integrating observation data from multiple spectral slice modules into a unified view. The position coordinates of each slice at a specific spectral segment can be obtained via a wavelength calibration table. Based on these coordinates, the signal response values for each position can be determined. Since the y-coordinate of each position is likely a fractional value, fitting using the spectral response values of integer pixel points is required to obtain the spectral response values at these sub-pixel locations.
For the laboratory data collected with a halogen lamp and an Hg-Ar lamp, the intensity at a selected spectral segment of the Hg-Ar lamp signal is extracted by Gaussian fitting along the spectral dimension. The halogen lamp response is shown in Figure 7. Because the halogen spectrum is continuous, adjacent pixels exhibit similar responses, so the response at a given band cannot be isolated by Gaussian fitting. Given the high spectral resolution of the spectrograph, spectral aliasing between bands is negligible; a simple one-dimensional linear interpolation is therefore adopted for the initial fitting and reconstruction evaluation. This yields the signal response values at all position points of the 32 slice groups for the selected spectral segments.
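The two fitting strategies can be contrasted in a short sketch. The emission-line parameters, noise level, and column range below are hypothetical; `scipy.optimize.curve_fit` stands in for the Gaussian fit of the Hg-Ar line, and `np.interp` for the linear interpolation of the continuous halogen response:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma, c):
    """Gaussian line profile on a constant background."""
    return a * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) + c

# Hypothetical spectral-dimension samples around an Hg-Ar emission line.
cols = np.arange(458, 468, dtype=float)
counts = gaussian(cols, a=900.0, mu=462.3, sigma=1.1, c=40.0)
counts += np.random.default_rng(0).normal(0.0, 5.0, cols.size)  # detector noise

# Gaussian fit recovers the line centre and intensity for the Hg-Ar lamp.
p0 = [counts.max() - counts.min(), cols[np.argmax(counts)], 1.0, counts.min()]
popt, _ = curve_fit(gaussian, cols, counts, p0=p0)
a_fit, mu_fit, sigma_fit, c_fit = popt
peak_intensity = a_fit + c_fit

# For the continuous halogen spectrum, linear interpolation suffices.
halogen_response = np.interp(462.3, cols, counts)
```

The Gaussian fit exploits the isolated, peaked shape of an emission line; for a continuum source no such peak exists, which is why the text falls back to interpolation between neighboring samples.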
Next, a new coordinate system must be established to integrate the 32 slice groups. Initially, the 32 sets of one-dimensional data are stacked from top to bottom according to their slicing order in the optical system. At this stage, each row may contain a different number of pixels, and it is uncertain whether the pixels are correctly aligned vertically. Therefore, the positioning points obtained in Section 3.2 are used for verification: the positioning point of each slice is identified and the points are arranged along a straight line in the new coordinate system. Each slice group is then trimmed according to the shortest distance between its left and right boundaries and its positioning point. This process yields a well-organized two-dimensional field-of-view image.
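The stack-align-trim procedure can be sketched as follows. All inputs here are synthetic placeholders (random row lengths and anchor columns); the idea is only to show each slice being shifted so its positioning point falls on a common column, then trimmed to the overlapping extent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: one 1D response array per slice group (lengths
# differ), plus the sub-pixel positioning column found for each group.
slices = [rng.random(n) for n in rng.integers(90, 101, size=32)]
anchors = rng.uniform(40.0, 50.0, size=32)

# Left-trim each row so every anchor lands on the same column of the
# new coordinate system (nearest-pixel shift for this sketch).
left = np.round(anchors - anchors.min()).astype(int)
trimmed = [s[l:] for s, l in zip(slices, left)]

# Right-trim all rows to the common width so the mosaic is rectangular.
width = min(len(t) for t in trimmed)
mosaic = np.vstack([t[:width] for t in trimmed])  # (32, width) field of view
```

Rounding the shift to whole pixels is a simplification; in practice the sub-pixel residual is absorbed by the interpolation step described above.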
The reconstructed images not only visualize data distribution characteristics but also establish a foundation for subsequent quantitative analysis. As representative examples, Figure 13 shows the 485.83 nm and 546.07 nm bands reconstructed using Quintic B-spline interpolation under halogen lamp illumination, with a slit installed in front of the spectrograph lens. Similarly, Figure 14 displays the 435.83 nm and 546.07 nm bands reconstructed with the same method under Hg-Ar lamp illumination. For a comprehensive method comparison, Table 2 quantifies the positioning accuracy by showing the average pixel deviation between peak response positions and ground truth points across all rows in the field-of-view images of Figure 13’s spectral bands, evaluating three interpolation methods: Quintic B-spline, Cubic B-spline, and Linear interpolation.
The offset in Table 2 is the deviation of the slit's peak response position in each pixel row from the corresponding positioning point; in theory, it should match the horizontal coordinate of the positioning point in each slice obtained in Section 3.2. The results demonstrate that Quintic B-spline interpolation achieves the smallest deviation, followed by Cubic B-spline and Linear interpolation, with the sub-0.25-pixel deviation of Quintic B-spline interpolation validating the effectiveness of the proposed approach. In Figure 13 and Figure 14, red dots mark the 32 sub-pixel positioning points (not aligned with the pixel centers). The horizontal axis represents the one-dimensional data of the two-dimensional field-of-view segment from each slice, while the vertical axis indexes the 32 slices of the image slicer.
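The deviation metric of Table 2 amounts to averaging, over all rows, the distance between each row's peak response column and its positioning point. A minimal sketch with a hypothetical 4-row patch (real field-of-view images have 32 rows, and the paper's peaks are located at sub-pixel precision rather than by a bare `argmax`):

```python
import numpy as np

def mean_peak_deviation(image, anchor_cols):
    """Average |peak column - positioning column| over all rows, in pixels."""
    peaks = np.argmax(image, axis=1)  # peak response column per row
    return float(np.mean(np.abs(peaks - np.asarray(anchor_cols))))

# Hypothetical 4-row patch with peak responses near column 4-5.
img = np.array([
    [0, 1, 3, 7, 9, 6, 2, 0],
    [0, 1, 2, 6, 9, 7, 3, 0],
    [0, 0, 2, 5, 8, 9, 4, 1],
    [0, 1, 3, 8, 9, 5, 2, 0],
], dtype=float)
anchors = [4.3, 4.1, 4.8, 3.9]  # sub-pixel positioning columns

dev = mean_peak_deviation(img, anchors)  # mean deviation in pixels
```

A smaller value indicates that the reconstructed slit responses line up more closely with the positioning points, which is the sense in which Quintic B-spline interpolation outperforms the other two methods in Table 2.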
Figure 13 displays the 2D field-of-view images of two spectral bands acquired with a slit installed in front of the spectrograph lens. The response signals are distributed across pixels adjacent to the positioned coordinates. Through the positioning point fitting algorithm, these coordinates are determined to be proximal to the true spectral peaks, which may not coincide with discrete pixel coordinates. In particular, strong responses at integer pixel positions are often located on either side of the actual peak position, consistent with the signal distribution shown in the image. This alignment validates the efficacy of our methodology.
In Figure 14, the 2D field-of-view images of two spectral bands from the Hg-Ar lamp reveal that slice 25 exhibits a weaker response compared to other slices. This discrepancy likely stems from the higher optical path attenuation within this specific slice. When observing targets in this region, an extended integration time should be applied to enhance the response intensity. Additionally, the pixel-wise non-uniformity observed in the image requires correction based on responses at different energy levels.

4. Conclusions

This study presents a novel approach to the field-of-view reconstruction of characteristic wavelength bands in spaceborne IFSs. First, the operational principles and optical architecture of IFSs are introduced to establish a theoretical foundation for the subsequent technical investigations. Precise localization of the image slicers is critical to ensuring the accurate spatial positioning of each slice element. The data processing workflow enhances data accuracy and usability through four key steps: (1) Line Spread Function Testing: precise slit measurements characterize the pixel-level LSF of each image slicer, providing LSF parameters for different spectral bands in each slice for subsequent testing. (2) Confirmation of Specific Spectral Segment Positioning Points: quintic spline interpolation combined with a double closed-loop optimization framework identifies the localization point of each slice group at specific wavelengths, establishing cross-slice response connections. (3) Fitting of Pixel Signal Responses in Specific Spectral Bands: fitting methods matched to the detector's response to each light source are used to fit the signal intensity values and restore the authenticity of the signal response. (4) Slice Alignment: all slice datasets are registered to a common coordinate system, enabling unified analysis across groups. This method successfully reconstructs imagery in selected bands under Hg-Ar and halogen lamp illumination, providing a new framework for two-dimensional spatial reconstruction in spectrographs that use image slicers as IFUs. The 2D response image of the integral field spectrograph's slit clearly demonstrates the scheme's effectiveness, and the response distribution at the slit meets the theoretical requirements, offering essential support for subsequent 2D spatial field reconstruction.
In this study, limitations of the light source restricted the achievable spatial resolution and the precision of the LSF response analysis for IFSs. Future work will incorporate point-source calibration to enhance data processing accuracy and spatial resolution. Further analysis of the optical effects generated at the edges of the image slicer group is also required to improve the clarity of the observation results. Additionally, the impact of external noise on the instrument during on-orbit operation can be explored through a series of follow-up studies.

Author Contributions

Conceptualization, J.W., J.S. and X.H.; methodology, J.S.; software, J.S.; validation, J.S., X.H. and J.W.; formal analysis, X.H. and J.S.; investigation, J.S., J.W., X.H. and Y.T.; resources, J.W., X.H. and Y.T.; data curation, X.H.; writing—original draft preparation, J.S.; writing—review and editing, Y.T., X.H. and J.W.; visualization, J.S.; supervision, J.W., X.H. and Y.T.; project administration, J.W.; funding acquisition, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We would like to express our gratitude to the IFS research group led by Jun Wei from the Shanghai Institute of Technical Physics, Chinese Academy of Sciences, for providing equipment support and technical assistance.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Schematic diagram of optical transmission.
Figure 2. Schematic diagram of the image slicer.
Figure 3. Distribution diagram of the pupil mirror array.
Figure 4. Simulation distribution map of detector response under panchromatic light illumination.
Figure 5. Schematic diagram of the experimental system.
Figure 6. Flowchart of 2D space reconstruction for a specific wavelength band.
Figure 7. Halogen lamp response image with a slit.
Figure 8. LSF diagrams of 32 groups of image slicers at 546.10 nm.
Figure 9. LSF diagram of the first group of image slicers at 546.10 nm.
Figure 10. Flowchart of the fitting algorithm for positioning points.
Figure 11. The functions y_s = S_h(x_s) and x_p = S_v(y_p): (a) the function y_s = S_h(x_s); (b) the function x_p = S_v(y_p).
Figure 12. The positioning points of the first group of image slicers in the 546.07 nm wavelength band.
Figure 13. Field-of-view map of the response of the halogen lamp after adding a slit: (a) 485.83 nm; (b) 546.07 nm.
Figure 14. Field-of-view map of the response of the Hg-Ar lamp: (a) 435.83 nm; (b) 546.07 nm.
Table 1. Table of LSF fitting results.

Method                    | RMSE  | R²   | Number of Successfully Fitted Rows per Group
Gaussian fitting method   | 21.22 | 0.86 | >900
Lorentzian fitting method | 20.31 | 0.73 | <150
Table 2. Table of average deviation in row-direction position for different interpolation methods.

Spectral Band | Quintic B-Spline Interpolation/Pixel | Cubic B-Spline Interpolation/Pixel | Linear Interpolation/Pixel
485.83 nm     | 0.23                                 | 0.25                               | 0.28
546.07 nm     | 0.24                                 | 0.25                               | 0.27