Article

Accelerated Super-Resolution Reconstruction for Structured Illumination Microscopy Integrated with Low-Light Optimization

1 College of Information Science and Engineering, Huaqiao University, Xiamen 361021, China
2 College of Mechanical Engineering and Automation, Huaqiao University, Xiamen 361021, China
* Author to whom correspondence should be addressed.
Micromachines 2025, 16(9), 1020; https://doi.org/10.3390/mi16091020
Submission received: 21 July 2025 / Revised: 25 August 2025 / Accepted: 28 August 2025 / Published: 3 September 2025
(This article belongs to the Special Issue Advanced Biomaterials, Biodevices, and Their Application)

Abstract

Structured illumination microscopy (SIM) with π/2 phase-shift modulation traditionally relies on frequency-domain computation, which greatly limits processing efficiency. In addition, the illumination regime inherent in structured illumination techniques often results in poor visual quality of the reconstructed images. To address these dual challenges, this study introduces DM-SIM-LLIE (Differential Low-Light Image Enhancement SIM), a framework that integrates two synergistic innovations. First, the study pioneers a spatial-domain computational paradigm for π/2 phase-shift SIM reconstruction: through image differencing, mathematical derivation, and algorithmic simplification, an optimized spatial-domain model is established. Second, an adaptive local overexposure correction strategy is developed and combined with a zero-shot deep learning algorithm, RUAS, to enhance the quality of the structured-illumination reconstructions. Experimental validation on specimens such as fluorescent microspheres and bovine pulmonary artery endothelial cells demonstrates the advantages of this approach: compared with traditional frequency-domain methods, reconstruction is roughly five times faster while maintaining equivalent lateral resolution and excellent axial resolution, and the image quality of the low-light enhancement algorithm after local overexposure correction is superior to existing methods. These advances significantly increase the application potential of SIM technology in time-sensitive biomedical imaging scenarios that require high spatiotemporal resolution.

1. Introduction

Microscopy is indispensable in life sciences, enabling landmark biological discoveries. Nonetheless, the diffraction barrier, initially outlined by Ernst Abbe in 1873, has historically confined optical resolution to approximately half the wavelength of the employed light [1]. The emergence of super-resolution microscopy (SRM) techniques has overcome this limitation, among which structured illumination microscopy (SIM) achieves twofold resolution enhancement via patterned excitation and computational reconstruction [2,3].
SIM offers distinct advantages, including rapid imaging, minimal phototoxicity, and compatibility with standard sample preparation, making it a powerful tool for biomedical research [4,5,6,7].
Despite its strengths, SIM reconstruction relies heavily on Fourier-domain processing, which imposes computational bottlenecks that hinder real-time imaging [8]. Conventional frequency-domain reconstruction (FDR) requires iterative parameter optimization (e.g., precise frequency/phase estimation), further delaying processing [9,10,11]. While recent advances accelerate reconstruction via hardware innovations [12], algorithmic refinements [13], or deep learning [14,15,16,17,18,19,20], spatial-domain methods have demonstrated exceptional efficiency gains. SP-SIM and functional-expansion approaches achieve >5× faster processing than FDR, yet both require a 2π/3 phase shift [21]. Critically, this restricts their application in high-resolution DMD-based SIM systems, where the illumination geometry mandates a π/2 phase shift to maximize the spatial frequency near the diffraction limit [22]. Notably, π/2 shifting improves reconstruction accuracy under low modulation conditions, highlighting its importance for high-fidelity imaging [22]. This study therefore investigates a spatial-domain reconstruction (SDR) method for structured illumination with a π/2 phase shift.
Furthermore, SIM’s low-light illumination inherently compromises image contrast and visual quality. While low-light enhancement (LLIE) algorithms based on Retinex theory or deep learning exist [23], their applicability to SIM remains underexplored. The RUAS network [24] holds promise for enhancing SIM but is plagued by non-uniform illumination artifacts. These artifacts lead to overenhancement in bright regions and insufficient recovery of dark regions [25].
This limitation is particularly problematic in fluorescence microscopy, where sample sparsity and optical heterogeneity inherently produce intensity variations.
To address these dual challenges, the study proposes DM-SIM-LLIE, an integrated computational framework featuring the following:
(1) A novel spatial-domain reconstruction (SDR) algorithm optimized for π/2 phase-shifted SIM, eliminating FDR bottlenecks while enhancing axial resolution;
(2) An adapted RUAS network with dynamic estimate truncation, smoothness constraints, and attention mechanisms, specifically engineered to correct non-uniform illumination in SIM reconstructions.

2. Methods and System

2.1. Principle of DM-SIM

In the linear SIM system, cosine-fringe structured illumination is used. Let $S(\mathbf{r})$ represent the fluorophore density distribution within a specimen and $I_{\theta,\varphi}(\mathbf{r})$ be the illuminating sinusoidal intensity pattern,

$$I_{\theta,\varphi}(\mathbf{r}) = I_0\left[1 + m\cos(2\pi\mathbf{k}_0\cdot\mathbf{r} + \varphi)\right] \tag{1}$$

where $\mathbf{r} = (x, y)$ is the two-dimensional spatial position vector, $\mathbf{k}_0$ is the sinusoidal illumination frequency vector in reciprocal space, and $\varphi$ denotes the phase of the sinusoidal fringe illumination pattern [11]. The constants $I_0$ and $m$ are called the mean intensity and modulation depth, respectively [26].
Thus, the fluorescence emission distribution from the specimen is $S(\mathbf{r})\cdot I_{\theta,\varphi}(\mathbf{r})$, and the observed emission distribution through the optical system is as follows:

$$D_{\theta,\varphi}(\mathbf{r}) = \left[I_{\theta,\varphi}(\mathbf{r})\,S(\mathbf{r})\right]\otimes H(\mathbf{r}) + B_{\mathrm{out}} \tag{2}$$
where H(r) is the optical system’s point spread function (PSF), ⊗ is the convolution operator, and Bout accounts for out-of-focus background contributions.
Taking the Fourier transform, Equation (2) can be expressed in frequency space as Equation (3) [27].

$$\tilde{D}_{\theta,\varphi}(\mathbf{k}) = \left[\tilde{I}_{\theta,\varphi}(\mathbf{k})\otimes\tilde{S}(\mathbf{k})\right]\cdot\tilde{H}(\mathbf{k}) + \tilde{B}_{\mathrm{out}}(\mathbf{k}) = I_0\left[\tilde{S}_w(\mathbf{k}) + \frac{m}{2}\tilde{S}(\mathbf{k}+\mathbf{k}_0)e^{-i\varphi} + \frac{m}{2}\tilde{S}(\mathbf{k}-\mathbf{k}_0)e^{i\varphi}\right]\tilde{H}(\mathbf{k}) \tag{3}$$

where $\tilde{S}_w(\mathbf{k}) = \tilde{S}(\mathbf{k}) + \tilde{B}_{\mathrm{out}}(\mathbf{k})/\tilde{H}(\mathbf{k})$ and $\tilde{H}(\mathbf{k})$ is the system optical transfer function (OTF). $\tilde{S}_w(\mathbf{k})$ in Equation (3) characterizes the standard frequency spectrum detectable through conventional microscopy, where the spatial frequency $\mathbf{k}$ is constrained within the range $[-k_c, k_c]$, with $k_c$ denoting the system's optical cutoff frequency. Notably, Equation (3) incorporates two supplementary spectral components with distinct spectral shifts: one is displaced by $+\mathbf{k}_0$ and the other by $-\mathbf{k}_0$ along the frequency axis, and their respective $\mathbf{k}$ values satisfy correspondingly shifted boundary conditions. This spatial-frequency redistribution effectively expands the observable spectral range to $[-k_c - k_0, k_c + k_0]$. Given that $k_0$ can theoretically approach $k_c$, the maximum achievable spectral bandwidth in linear SIM configurations is twice that of conventional systems. This fundamental principle forms the basis for the twofold resolution enhancement of linear SIM. To obtain the three spectral terms in Equation (3), three different phases, $\varphi_0$, $\varphi_0 + \Delta\varphi$, and $\varphi_0 + 2\Delta\varphi$, of the illumination pattern can be set to obtain three independent linear equations. To achieve nearly isotropic resolution enhancement, the unidirectional sinusoidal fringe pattern should be rotated into at least two orthogonal directions, or into three directions such as $\theta$, $\theta + \pi/3$, and $\theta + 2\pi/3$.
The schematic diagram of the SIM foundation is shown in Figure 1 [27].
Figure 1a shows the PSF of a microscope, which can be obtained from system calibration; the OTF and the PSF form a Fourier-transform pair (FFT, Fourier transform; IFT, inverse Fourier transform). Figure 1b shows the OTF of a conventional wide-field microscope, i.e., the observable region in the frequency domain. Moiré patterns reveal fine structural information: when two fringe patterns are superposed, low-frequency fringes appear, and different superposition angles produce moiré fringes of different directions and frequencies. As shown in the rows of Figure 1c, the fringe angle is rotated by π/3 to generate fringes in three directions, which extend the cutoff frequency of the OTF in the corresponding directions; within each direction, the fringes are phase-shifted three times with a step of Δφ (images in the same row of Figure 1c). Figure 1d shows a horizontal sinusoidal pattern with spatial frequency kc, which expands the observable region as shown in Figure 1e; rotating the structured illumination pattern yields a nearly isotropic resolution enhancement.
The study employs the relative spatial phases $\varphi_{1,1} = \varphi_0$, $\varphi_{1,2} = \varphi_0 + \Delta\varphi$, and $\varphi_{1,3} = \varphi_0 + 2\Delta\varphi$, with a phase step $\Delta\varphi = \pi/2$ (so that $2\Delta\varphi = \pi$). Using Equation (1) and the properties of trigonometric functions, the expressions for the three captured images are obtained.
$$\begin{aligned}
D_{1,1}(\mathbf{r}) &= \left\{I_0\left[1 + m\cos(2\pi\mathbf{k}_0\cdot\mathbf{r} + \varphi_0)\right]S(\mathbf{r})\right\}\otimes H(\mathbf{r}) + B_{\mathrm{out}}\\
D_{1,2}(\mathbf{r}) &= \left\{I_0\left[1 - m\sin(2\pi\mathbf{k}_0\cdot\mathbf{r} + \varphi_0)\right]S(\mathbf{r})\right\}\otimes H(\mathbf{r}) + B_{\mathrm{out}}\\
D_{1,3}(\mathbf{r}) &= \left\{I_0\left[1 - m\cos(2\pi\mathbf{k}_0\cdot\mathbf{r} + \varphi_0)\right]S(\mathbf{r})\right\}\otimes H(\mathbf{r}) + B_{\mathrm{out}}
\end{aligned} \tag{4}$$
Since the out-of-focus background remains unchanged, the background can be eliminated by subtracting two neighboring phase-shifted images. Further simplification is carried out using the formula for the sum and difference of two angles of a trigonometric function, and the result is shown in Equation (5).
$$\begin{aligned}
\Delta D_{1,1;1,2}(\mathbf{r}) &= D_{1,1}(\mathbf{r}) - D_{1,2}(\mathbf{r})\\
&= \sqrt{2}\,mI_0\left\{\left[\cos(2\pi\mathbf{k}_0\cdot\mathbf{r}+\varphi_0)\cos\tfrac{\pi}{4} + \sin(2\pi\mathbf{k}_0\cdot\mathbf{r}+\varphi_0)\sin\tfrac{\pi}{4}\right]S(\mathbf{r})\right\}\otimes H(\mathbf{r})\\
&= \sqrt{2}\,mI_0\left[\cos\!\left(2\pi\mathbf{k}_0\cdot\mathbf{r}+\varphi_0-\tfrac{\pi}{4}\right)S(\mathbf{r})\right]\otimes H(\mathbf{r})\\
\Delta D_{1,2;1,3}(\mathbf{r}) &= D_{1,2}(\mathbf{r}) - D_{1,3}(\mathbf{r})\\
&= -\sqrt{2}\,mI_0\left\{\left[\sin(2\pi\mathbf{k}_0\cdot\mathbf{r}+\varphi_0)\cos\tfrac{\pi}{4} - \cos(2\pi\mathbf{k}_0\cdot\mathbf{r}+\varphi_0)\sin\tfrac{\pi}{4}\right]S(\mathbf{r})\right\}\otimes H(\mathbf{r})\\
&= -\sqrt{2}\,mI_0\left[\sin\!\left(2\pi\mathbf{k}_0\cdot\mathbf{r}+\varphi_0-\tfrac{\pi}{4}\right)S(\mathbf{r})\right]\otimes H(\mathbf{r})
\end{aligned} \tag{5}$$
Euler's formula, $e^{i\alpha} = \cos\alpha + i\sin\alpha$, suggests constructing a complex analytic signal with $\Delta D_{1,1;1,2}(\mathbf{r})$ as the real part and $-\Delta D_{1,2;1,3}(\mathbf{r})$ as the imaginary part. Simplifying with Euler's formula then directly yields Equation (6).
$$\begin{aligned}
Z_1(\mathbf{r}) &= \Delta D_{1,1;1,2}(\mathbf{r}) - i\,\Delta D_{1,2;1,3}(\mathbf{r})\\
&= \sqrt{2}\,mI_0\left[\cos\!\left(2\pi\mathbf{k}_0\cdot\mathbf{r}+\varphi_0-\tfrac{\pi}{4}\right)S(\mathbf{r})\right]\otimes H(\mathbf{r}) + i\,\sqrt{2}\,mI_0\left[\sin\!\left(2\pi\mathbf{k}_0\cdot\mathbf{r}+\varphi_0-\tfrac{\pi}{4}\right)S(\mathbf{r})\right]\otimes H(\mathbf{r})\\
&= \sqrt{2}\left[mI_0\,e^{i(2\pi\mathbf{k}_0\cdot\mathbf{r}+\varphi_0-\pi/4)}S(\mathbf{r})\right]\otimes H(\mathbf{r})
\end{aligned} \tag{6}$$
A local analysis of Equation (6) is performed using the definition of convolution [19], with the change of variable $\mathbf{t}' = \mathbf{r} - \mathbf{t}$.
$$\left[e^{i2\pi\mathbf{k}_0\cdot\mathbf{r}}S(\mathbf{r})\right]\otimes H(\mathbf{r}) = \int e^{i2\pi\mathbf{k}_0\cdot\mathbf{t}}S(\mathbf{t})H(\mathbf{r}-\mathbf{t})\,d\mathbf{t} = \int e^{i2\pi\mathbf{k}_0\cdot(\mathbf{r}-\mathbf{t}')}S(\mathbf{r}-\mathbf{t}')H(\mathbf{t}')\,d\mathbf{t}' = e^{i2\pi\mathbf{k}_0\cdot\mathbf{r}}\int S(\mathbf{r}-\mathbf{t}')H(\mathbf{t}')e^{-i2\pi\mathbf{k}_0\cdot\mathbf{t}'}\,d\mathbf{t}' = e^{i2\pi\mathbf{k}_0\cdot\mathbf{r}}\left[S(\mathbf{r})\otimes\left(H(\mathbf{r})e^{-i2\pi\mathbf{k}_0\cdot\mathbf{r}}\right)\right] \tag{7}$$
According to Equation (7), combined with the Fourier transform frequency shift property, Equation (6) can be simplified to Equation (8).
$$Z_1(\mathbf{r}) = \sqrt{2}\,mI_0\,e^{i(2\pi\mathbf{k}_0\cdot\mathbf{r}+\varphi_0-\pi/4)}\left[S(\mathbf{r})\otimes\left(H(\mathbf{r})e^{-i2\pi\mathbf{k}_0\cdot\mathbf{r}}\right)\right] \tag{8}$$
From Euler's formula and the properties of complex numbers, $\left|e^{i\alpha}\right| = 1$. A super-resolution image of the sample containing spatial frequencies $|\mathbf{k}| \in \left[0, |\mathbf{k}_0| + k_c\right]$ is therefore obtained by taking the modulus of $Z_1(\mathbf{r})$, as shown in Equation (9).
$$\left|Z_1(\mathbf{r})\right| = \sqrt{2}\,mI_0\left|e^{i(2\pi\mathbf{k}_0\cdot\mathbf{r}+\varphi_0-\pi/4)}\right|\cdot\left|S(\mathbf{r})\otimes\left(H(\mathbf{r})e^{-i2\pi\mathbf{k}_0\cdot\mathbf{r}}\right)\right| = \sqrt{2}\,mI_0\left|S(\mathbf{r})\otimes\left(H(\mathbf{r})e^{-i2\pi\mathbf{k}_0\cdot\mathbf{r}}\right)\right| = \sqrt{2}\,mI_0\left|\mathrm{FT}^{-1}\!\left[\tilde{S}(\mathbf{k})\,\tilde{H}(\mathbf{k}+\mathbf{k}_0)\right]\right| \tag{9}$$
The derivation above yields the super-resolution result for a single fringe orientation θ. To obtain nearly uniform spatial resolution, the reconstruction must be carried out over multiple orientations. The system therefore performs fluorescence wide-field imaging with the three phase values in each of three directions (i.e., with different cosine-modulation vectors $\mathbf{k}_0$), and the sub-images $\left|Z_i(\mathbf{r})\right|$ (i = 1, 2, 3) are obtained in sequence. Finally, a 2D high-resolution image D(r) is obtained by summing the $\left|Z_i(\mathbf{r})\right|$ over the three structured-illumination angles θ of 0°, 60°, and 120°; a minimal code sketch of this per-orientation procedure follows Equation (10).
$$D(\mathbf{r}) = \sum_{i=1}^{3}\left|Z_i(\mathbf{r})\right| \tag{10}$$
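Under the stated assumptions (nine registered raw frames ordered as three orientations × three phases with a π/2 step), the per-orientation procedure of Equations (4)–(10) reduces to a few array operations. The sketch below is illustrative only; the function and variable names are not taken from the authors' implementation.

```python
import numpy as np

def dmsim_reconstruct(raw: np.ndarray) -> np.ndarray:
    """Spatial-domain DM-SIM reconstruction from a (3, 3, H, W) stack [orientation, phase, y, x]."""
    hr = np.zeros(raw.shape[-2:], dtype=np.float64)
    for i in range(raw.shape[0]):                      # three fringe orientations (0°, 60°, 120°)
        d1, d2, d3 = raw[i, 0], raw[i, 1], raw[i, 2]   # phases phi0, phi0 + pi/2, phi0 + pi
        delta_12 = d1 - d2                             # differences cancel the out-of-focus background (Eq. 5)
        delta_23 = d2 - d3
        z = delta_12 - 1j * delta_23                   # complex analytic signal (Eq. 6)
        hr += np.abs(z)                                # sub-image |Z_i(r)| (Eq. 9)
    return hr                                          # D(r): sum over the three orientations (Eq. 10)

# Usage with a synthetic stack of the expected shape:
raw_stack = np.random.rand(3, 3, 512, 512)
print(dmsim_reconstruct(raw_stack).shape)              # (512, 512)
```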

2.2. LLIE Principle and Improved Illumination Estimation Method

Based on the fundamental principles of Retinex theory, the RUAS computational framework decomposes an image into two components. In the illumination-estimation stage, the algorithm relies mainly on local maxima to obtain the initial illumination component, and a convolutional neural network then refines this component to cope with non-uniform illumination. However, the method still has the following shortcomings:
  • Local overexposure problem: Under non-uniform illumination conditions, the local maximum may correspond to the pixel value of the overexposed area. Directly using these values as the initial illumination estimation will cause the overexposed area to be over-magnified during the enhancement process, resulting in local overexposure.
  • Insufficient illumination smoothness: The spatial variation of the illumination component should be reasonably smooth, but the local-maximum estimation used by RUAS cannot effectively constrain this smooth transition; abrupt changes easily appear in local areas, further exacerbating the overexposure problem.
This study proposes an improved illumination estimation method that combines global constraints, dynamic range adjustments, and adaptive optimization to suppress the enhancement distortion of overexposed areas.
Step 1: Dynamically truncate the initial illumination estimate.
Goal: Avoid using the pixel values of overexposed areas as initial illumination estimates.
First, detect the overexposed areas. Scale the intensity values of each image channel to the normalized range between 0 and 1.
The input image is segmented into n × n non-overlapping blocks. In this study, four block sizes (2 × 2, 4 × 4, 6 × 6, and 8 × 8) were used to segment the image; dividing the image into blocks of different sizes allows the exposure to be analyzed at different scales.
Next, a parameter fbc called the block exposure fitness is introduced.
$$f_{bc_i} = \sqrt{\frac{\sum_{j=1}^{n\times n}\left(V_{i,j} - \bar{V}_i\right)^2}{n\times n}} \tag{11}$$
The luminance value at position (i, j) within a given image block is denoted $V_{i,j}$, while $\bar{V}_i$ represents the mean intensity over the n × n pixels of the corresponding block. The feature $f_{bc_i}$ is a contrast-based measure of how far individual pixel intensities deviate from their local mean. It is normalized to the interval [0, 1], with values near 0.5 indicating optimal visual quality and a natural appearance. Regions with $f_{bc_i}$ values exceeding 0.65 are identified as potential overexposure areas requiring correction.
$$\mathrm{Exposure} = \begin{cases}\text{Well-exposed}, & 0 \le f_{bc_i} \le 0.65\\ \text{Over-exposed}, & 0.66 \le f_{bc_i} \le 1.0\end{cases} \tag{12}$$
Weights are then assigned to the results obtained with the different block sizes: smaller blocks receive higher coefficients to emphasize local brightness details, while larger blocks receive lower weights and serve the overall brightness evaluation. An illumination matrix is then generated that marks whether each pixel of the image is overexposed; a minimal sketch of this multi-scale detection is given below.
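The following Python sketch illustrates Step 1 under stated assumptions: the block fitness is implemented as the standard deviation of the normalized block intensities (our reading of Equation (11)), and the scale weights are illustrative values chosen only so that smaller blocks count more, as described above; none of the names are taken from the authors' code.

```python
import numpy as np

def block_fitness(img: np.ndarray, n: int) -> np.ndarray:
    """Per-pixel map of the block exposure fitness f_bc for n x n blocks."""
    h, w = img.shape
    fit = np.zeros_like(img, dtype=np.float64)
    for y in range(0, h, n):
        for x in range(0, w, n):
            block = img[y:y + n, x:x + n]
            fit[y:y + n, x:x + n] = block.std()          # deviation from the block mean
    return fit

def overexposure_mask(img: np.ndarray, threshold: float = 0.65) -> np.ndarray:
    """Fuse 2x2, 4x4, 6x6, and 8x8 block analyses into a boolean overexposure map."""
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)   # normalize to [0, 1]
    sizes = (2, 4, 6, 8)
    weights = (0.4, 0.3, 0.2, 0.1)                       # smaller blocks weighted higher (assumed values)
    fused = sum(w * block_fitness(img, n) for w, n in zip(weights, sizes))
    return fused > threshold                             # True where a pixel is flagged as overexposed
```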
Step 2: Lighting smoothness constraint.
Goal: Suppress the sudden change of illumination components and enhance the robustness to non-uniform illumination.
$$L_{TV} = \sum_{x_i,\,y_j}\left(\left|\nabla_x L\right| + \left|\nabla_y L\right|\right) \tag{13}$$
A total-variation (TV) regularization term is added to the loss function, where L represents the illumination component, $x_i$ and $y_j$ denote the horizontal and vertical pixel coordinates, and $\nabla_x$ and $\nabla_y$ are the corresponding image gradients. When the illumination component is optimized by the convolutional neural network, this term forces its gradient to remain smooth; a minimal sketch is given below.
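A minimal PyTorch sketch of the smoothness term of Equation (13), assuming the illumination map is a tensor of shape (N, C, H, W); the weight `lam` is an illustrative hyperparameter, not a value reported by the authors.

```python
import torch

def tv_loss(illumination: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    """Penalize abrupt spatial changes of the estimated illumination component L."""
    dx = torch.abs(illumination[..., :, 1:] - illumination[..., :, :-1])  # horizontal differences
    dy = torch.abs(illumination[..., 1:, :] - illumination[..., :-1, :])  # vertical differences
    return lam * (dx.mean() + dy.mean())
```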
Step 3: Adaptive attention mechanism.
Goal: Reduce the weight of overexposed areas in the illumination estimation. Make the enhancement algorithm pay more attention to normal areas.
According to the illumination matrix, an attention map A is generated in which overexposed areas receive a low weight (A = 0.1) and normal areas a high weight (A = 1.0). During optimization, the loss contributed by overexposed areas is therefore down-weighted, as sketched below.
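A hedged sketch of this attention weighting in PyTorch; `overexposed` stands for the boolean map from Step 1 and `per_pixel_loss` for any per-pixel enhancement loss, both names being illustrative rather than taken from the RUAS code.

```python
import torch

def attention_map(overexposed: torch.Tensor) -> torch.Tensor:
    """A = 0.1 in overexposed regions, A = 1.0 in normally exposed regions."""
    a = torch.ones_like(overexposed, dtype=torch.float32)
    a[overexposed] = 0.1
    return a

def weighted_loss(per_pixel_loss: torch.Tensor, overexposed: torch.Tensor) -> torch.Tensor:
    a = attention_map(overexposed)
    return (a * per_pixel_loss).sum() / a.sum()           # overexposed pixels contribute less
```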
The optimized illumination component L will be used as the output of the IEM (Illumination Estimation Module) of the improved RUAS network. The network structure diagram of the improved lighting estimation method is shown in Figure 2.

2.3. Flowchart of DM-SIM-LLIE Method

A flowchart of our DM-SIM-LLIE is presented in Figure 3. The sample S(r) is sequentially illuminated by cosine fringes with different directions and phase shifts, and a total of nine raw images (three directions, three phase shifts per direction) are recorded. $D_{i,j}(\mathbf{r})$ denotes the captured image associated with the illumination pattern of the i-th orientation and j-th phase step, where i, j = 1, 2, 3. The DM-SIM algorithm processes each orientation in turn to generate the corresponding super-resolved sub-images. Finally, the overall two-dimensional super-resolution image D(r) is obtained by summing the three reconstructed sub-images over the three directions. The frequency spectra of the images $|Z_i(\mathbf{r})|$ and D(r), together with the wide-field microscope image, are shown in Figure 3; this visualization illustrates the spectral expansion in the spatial-frequency domain. Given that the parameter $k_0$ reaches its theoretical maximum at the system's cutoff frequency $k_c$, the reconstruction doubles the lateral resolution relative to a standard microscope through this manipulation of the optical transfer function, consistent with the theoretical resolution enhancement of the frequency-domain SIM method.
The super-resolution image D(r) reconstructed by DM-SIM is input into the illumination estimation module of RUAS optimized by this study, and an attention mechanism map is generated. This encourages the enhancement algorithm to pay more attention to the non-overexposed areas, i.e., the normal areas, and to perform more effective low-light enhancement.

3. Experimental System Setup

The experiment was carried out with a SIM system based on LED illumination and DMD projection [28]. Figure 4 illustrates the experimental setup, featuring a multi-wavelength LED array with bandpass filters for excitation. Light passes through a TIR prism, reflects off a DMD chip programmed with periodic grating patterns, and is projected via a collimating lens and objective to create cosine-grating illumination. Image acquisition is performed with an sCMOS camera. The DMD is a high-speed digital switching mirror array consisting of millions of micrometre-scale aluminium micromirrors integrated on a memory chip to form a two-dimensional array, with each micromirror representing a pixel. Each pixel can be controlled individually, so binary stripes with different phases, angles, and periods can be loaded rapidly without any mechanical movement or rotation. Because of the low-pass filtering characteristic of the microscope system, only sinusoidal stripes whose fundamental frequency does not exceed the cutoff frequency reach the focal plane, while higher-order harmonics are blocked. This process effectively transforms the binary DMD stripes into sinusoidal stripes of the same frequency, as illustrated by the short simulation below.
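A small NumPy illustration (not the authors' code) of this binary-to-sinusoid conversion: low-pass filtering a square-wave grating so that only the DC term and the fundamental survive leaves a single cosine component. The 4-pixel period matches the DMD setting described in Section 4.1; everything else is an arbitrary choice for the demonstration.

```python
import numpy as np

period_px = 4                                          # DMD pixels per grating period
x = np.arange(512)
binary_stripes = ((x % period_px) < period_px // 2).astype(float)   # 0/1 square wave

spectrum = np.fft.rfft(binary_stripes)
freqs = np.fft.rfftfreq(x.size, d=1.0)                 # spatial frequencies in cycles per pixel
cutoff = 1.0 / period_px + 1e-6                        # keep DC and the fundamental, block harmonics
spectrum[freqs > cutoff] = 0.0
sinusoidal = np.fft.irfft(spectrum, n=x.size)          # approximately I0 * [1 + m*cos(2*pi*x/period)]

print(sinusoidal[:8].round(3))
```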

4. Results and Discussion

The following experiments will target different samples, sequentially obtaining wide-field images and reconstructed images of structured light illumination microscopy to verify the effectiveness of the algorithm. Part of the experimental process can be referenced in Figure 3. Wide-field imaging is a technique that captures images of a large area of a sample at once, rather than focusing on a small section, providing a comprehensive view of the whole. In microscopy, this means illuminating the entire sample and observing the image through the eyepieces or a digital camera. The SIM image acquisition process is carried out as follows: Instead of uniform light, SIM illuminates the sample with a patterned light, often a grid of stripes or a similar structure. When the patterned light interacts with the biological sample, it creates interference patterns called Moiré fringes. These Moiré fringes encode fine-scale information about the sample’s structure that is normally invisible under standard illumination due to the diffraction limit. Multiple images are captured with different patterns (e.g., rotated or phase-shifted light patterns). Software then analyzes these images and the known illumination patterns to computationally reconstruct a high-resolution image, effectively “unmixing” and extracting the encoded high-frequency information.

4.1. Improved Lateral Resolution Results and Analysis

A practical way to quantify resolution is the full width at half maximum (FWHM) of sub-diffraction-limited structures. This metric, easily obtainable through microscopy, serves as a standard for comparative analysis. The theoretical FWHM is given by FWHM = 0.51λ/(NA), approximately equal to λ/(2NA). Thus, the FWHM closely approaches Abbe’s diffraction limit and can also be derived from microscope image data.
The spatial resolution calibration protocol employed 200 nm diameter fluorescent beads from TetraSpeck (ThermoFisher). The results of the wide-field and DM-SIM measurements are presented in Figure 5. This work selected individual beads to plot their intensity distributions, as shown in Figure 5a,b, and fitted these distributions with a Gaussian function to obtain FWHM. The average FWHM values for wide-field and DM-SIM measurements are 370 ± 5 nm and 225 ± 5 nm, respectively, as illustrated in Figure 5c. The error components take into account both systematic errors and random errors. The resolution of the DM-SIM reconstruction is approximately 1.64 times that of the wide-field image. Given that the size of a DMD micromirror pixel used in the experimental system is 13.68 μm, and that four DMD pixels are used in one cycle, the size of one cycle after passing through the objective lens is approximately 548 nm. Consequently, the frequency of the structured illumination stripes is about 0.66 times the diffraction limit, with a theoretical resolution expected to be 1.66 times that of the wide-field resolution. Therefore, the experimental results are close to the theoretical values. If a DMD with smaller micromirror sizes is utilized in future experiments, DM-SIM is expected to achieve a system resolution close to twice that of the wide-field resolution, aligning with theoretical predictions.
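The FWHM values reported above are obtained by Gaussian fitting of bead line profiles. A sketch of that fit with SciPy is shown below on a synthetic profile; the pixel size is an assumed value for illustration, and in practice the profile is extracted from the wide-field or DM-SIM image across an isolated bead.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) + offset

pixel_size_nm = 65.0                                   # assumed effective pixel size in sample space
x = np.arange(31, dtype=float)
profile = gaussian(x, 1.0, 15.0, 225.0 / 2.355 / pixel_size_nm, 0.05)   # synthetic ~225 nm peak
profile += np.random.normal(0.0, 0.01, x.size)         # a little measurement noise

popt, _ = curve_fit(gaussian, x, profile, p0=[1.0, 15.0, 2.0, 0.0])
fwhm_nm = 2.355 * abs(popt[2]) * pixel_size_nm         # FWHM = 2*sqrt(2*ln 2)*sigma
print(f"FWHM ≈ {fwhm_nm:.0f} nm")
```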

4.2. Low-Light Image Enhancement Verification

BioSR is an open experimental dataset. Its authors used a multimodal SIM approach to collect images of various cellular structures, including clathrin-coated pits (CCPs), the endoplasmic reticulum (ER), microtubules (MTs), and F-actin filaments [29,30]. This study used the CCPs, MTs, and F-actin samples in the dataset to verify our LLIE algorithm.
Figure 6 presents a comprehensive comparison of our approach with various advanced methods from both supervised and unsupervised learning frameworks, using data from the BioSR repository. Image quality is usually assessed with full-reference metrics; the performance evaluation is summarized in Table 1. The Structural Similarity Index (SSIM), Root Mean Square Error (RMSE), Contrast Improvement Index (CII), and Peak Signal-to-Noise Ratio (PSNR) are used as evaluation metrics. SSIM assesses the structural similarity between two images by considering luminance, contrast, and structure, and is often used to evaluate perceived quality relative to a reference image. CII measures the improvement in contrast and brightness. PSNR measures the quality of a reconstructed image relative to the original by quantifying how much noise or distortion is present. RMSE measures the differences between predicted and observed values. Image quality improves with higher SSIM, PSNR, and CII values, whereas lower RMSE values are better.
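The following sketch shows how these full-reference metrics can be computed with scikit-image and NumPy. The CII is implemented here as the ratio of the standard deviation of the enhanced image to that of the reference, which is one common definition; the paper does not spell out its exact CII formula, so this function is an assumption.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate(enhanced: np.ndarray, reference: np.ndarray) -> dict:
    """Both inputs are grayscale uint8 images of identical shape."""
    diff = reference.astype(float) - enhanced.astype(float)
    return {
        "SSIM": structural_similarity(reference, enhanced),
        "PSNR": peak_signal_noise_ratio(reference, enhanced),
        "RMSE": float(np.sqrt(np.mean(diff ** 2))),
        "CII": float(enhanced.std() / (reference.std() + 1e-8)),   # assumed contrast-ratio definition
    }
```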
Here, the study mainly selects advanced low-light enhancement algorithms that do not require very demanding hardware resources for comparison. The compared algorithms include the self-calibrated illumination (SCI) learning framework introduced by Ma et al. [31]; EnlightenGAN, an unsupervised generative adversarial network for LLIE [32]; Zero-Reference Deep Curve Estimation for low-light image enhancement (Zero-DCE++) [33]; KinD++, a low-light enhancement algorithm with a multi-scale illumination attention module [34]; the low-light image enhancement convolutional neural network RetinexNet [35]; URetinex-Net, a Retinex-based deep unfolding network for low-light image enhancement [36]; and RetinexMamba, a Retinex-based Mamba model for low-light image enhancement [37]. For the CCPs dataset, the proposed method achieved the highest scores in both SSIM and PSNR. Its CII was lower than that of most algorithms but higher than that of RetinexNet, and its RMSE was the lowest in the comparison, indicating the best result. Experiments on the MTs samples yielded similar results, with the CII outperforming RetinexNet, Zero-DCE++, and KinD++. Experiments on the F-actin samples likewise achieved the best SSIM, PSNR, and RMSE among all algorithms, although the CII was lower than that of the other algorithms. Overall, the results are satisfactory, and the proposed algorithm produces the most robust image enhancement effect.
In particular, some of the image enhancement algorithms produce blurred backgrounds. A few algorithms introduced artifacts.

4.3. Image Reconstruction Results After Combining the LLIE Process

Next, the study compared imaging under the traditional (wide-field) illumination mode with the results of the DM-SIM algorithm combined with the LLIE process. As shown in Figure 7, (a) is the image of BPAE cells under wide-field conditions and (b) is the image produced by the DM-SIM algorithm combined with the LLIE process, while (c) places the two side by side: (i) is the wide-field image, (ii) is the DM-SIM image, and (iii) and (iv) are magnified views of the regions marked by the red rectangles. DM-SIM with the LLIE process automatically yields clear super-resolution reconstructions, overcoming the shortcoming of the traditional SIM method, which is affected by low illumination and requires manual contrast adjustment of the reconstructed image.

4.4. Comparison of DM-SIM-LLIE with FDR-SIM

In the traditional FDR-SIM scheme, super-resolution images are achieved through Fourier transform and spatial spectrum processing. FairSIM is a commonly used program for reconstructing super-resolution images, and its principle is based on the frequency domain method [38]. The study used the FairSIM program to perform structured light illumination super-resolution reconstruction based on the frequency domain method, which is the FDR-SIM scheme.
To compare DM-SIM with FDR-SIM, the study again selected BPAE cells (F36924, Thermo Fisher) as samples. Figure 8a shows the wide-field image, Figure 8b the image reconstructed by FairSIM, and Figure 8c the image reconstructed by DM-SIM. Figure 8d shows enlarged views of the white-boxed regions in Figure 8a–c, and Figure 8e shows the fitted intensity profiles along the white line in Figure 8d. Comparison of the enlarged regions shows that the image generated by DM-SIM closely matches that generated by FairSIM, and the intensity profiles confirm that DM-SIM recovers the same details as the frequency-domain FairSIM reconstruction.
Under the same hardware conditions, the image reconstruction time of each reconstruction method is compared, and the advantage of DM-SIM-LLIE is revealed.
Details are given in Table 2. The reconstruction time of DM-SIM for a 512 × 512 pixel image was 0.71 s, whereas FDR-SIM took 5.49 times longer at 3.90 s. Regarding low-light improvement, the traditional approach is manual adjustment, which is easily influenced by human factors; DM-SIM-LLIE instead performs the enhancement automatically and still achieves good time efficiency (2.23 s in total).

4.5. Improved Axial Resolution Results and Analysis

Wide-field fluorescence microscopy suffers from poor axial resolution and OTF missing cone issues, causing 2D images to mix in-focus details with out-of-focus blur. Removing this interference is crucial for clear 3D reconstruction.
The following experiments verify that the DM-SIM algorithm has the ability to reduce the interference of out-of-focus signals and improve the axial resolution.
The specimens were mouse kidney sections; light source: 470 nm central wavelength; dye: wheat germ agglutinin (488 nm); objective lens: 40×/NA 0.6.
Optical sectioning SIM (OS-SIM) microscopy employs orthogonal light paths to confine illumination to a single plane, enabling inherent optical sectioning. This computational imaging technique is widely regarded as an effective method for enhancing axial resolution.
Taking mouse kidney sections as specimens, Figure 9a–c compare the imaging results of the specimens under wide-field microscopy, OS-SIM, and DM-SIM. The results show that, similar to OS-SIM, DM-SIM can suppress defocused background noise, ensuring clarity in focal plane information.
Compared with wide-field imaging, the focal plane information resolution of DM-SIM is improved. Figure 9d is a comparison of the axial resolution of WF, OS-SIM, and DM-SIM. The axial resolution of WF is 580 ± 5 nm, the axial resolution of OS-SIM is 300 ± 5 nm, and the axial resolution of DM-SIM is 305 ± 5 nm.
DM-SIM has achieved an axial resolution close to that of OS-SIM, which is an advantage that traditional frequency domain methods do not have. This is mainly because the image differential process has the effect of eliminating defocus interference.

4.6. Experimental Evaluation on Different Samples

In addition to the above experiments on animal and plant cells, we also evaluated other samples to analyze the universality of the algorithm, as shown in Figure 10. In each panel, (i) is the wide-field image and (ii) is the DM-SIM-LLIE image, while (iii) and (iv) are magnified wide-field and DM-SIM-LLIE views, respectively, of the same region marked by the red frame. First, we tested the performance of DM-SIM-LLIE on fluorescently labelled actin. Second, we selected a Pap smear sample, i.e., a sample of exfoliated cervical cells. Third, we selected a pathological tissue section. The experimental results for these three samples are shown in Figure 10a–c. The DM-SIM-LLIE method produced final super-resolution images with improved contrast.

5. Conclusions

This study focuses on three key challenges associated with structured illumination microscopy (SIM): poor performance when imaging weak light, low reconstruction efficiency, and insufficient axial resolution. The proposed spatial super-resolution algorithm, DM-SIM-LLIE, incorporates the following innovations:
Firstly, it incorporates an adaptive weak light enhancement module. Based on machine learning algorithms, this module dynamically optimizes pixel-level brightness and contrast, overcoming the dependence of traditional SIM systems on lighting conditions. It enables stable, high-quality, low-light imaging and the enhancement of various samples.
Secondly, the algorithm employs a spatial-domain differential reconstruction method that enables rapid reconstruction in π/2 phase-shift SIM systems. Compared with traditional frequency-domain reconstruction, experimental data show that this algorithm achieves a 5.49-fold increase in reconstruction speed and nearly doubles the lateral resolution, significantly enhancing real-time imaging efficiency.
Thirdly, the axial resolution has also been significantly improved. The DM-SIM-LLIE algorithm enhances the axial resolution of the SIM system on experimental samples from 580 ± 5 nm to 305 ± 5 nm. This performance is comparable to that of light-sheet microscopy, significantly expanding the range of applications for 3D imaging.
DM-SIM-LLIE demonstrates outstanding performance when imaging both mammalian and plant cells. It successfully resolves the balance between resolution, speed, and phototoxicity in biological imaging, offering new solutions for techniques such as live imaging, clinical pathology diagnosis, and 3D volume reconstruction. The algorithm has broad application prospects and research value.

Author Contributions

Conceptualization, C.H. and D.Y.; methodology, C.H. and D.Y.; software, C.H.; validation, C.H.; formal analysis, C.H.; investigation, D.Y. and C.H.; resources, C.H. and D.Y.; data curation, C.H.; writing—original draft, C.H.; writing—review and editing, D.Y.; supervision, D.Y.; project administration, L.Z.; funding acquisition, L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Key Science and Technology Innovation Base Joint Development Fund of Liaoning (No. 2022KF2104), the National Natural Science Foundation of China (No. 52375539), and the State Key Laboratory of Integrated Automation for Process Industry, Northeastern University (2022-KF-21-04).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The study sincerely appreciates researchers Baoli Yao and Dan Dan from the Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, for providing technical guidance and assistance. This study also sincerely thanks Huang Lei from the School of Life Sciences, Xiamen University, for professional assistance with the operation of the experimental microscopy instruments and related technical guidance.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Abbe, E. Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung. Arch. Mikrosk. Anat. 1873, 9, 456–468. [Google Scholar] [CrossRef]
  2. Gustafsson, M.G. Nonlinear structured-illumination microscopy: Wide-field fluorescence imaging with theoretically unlimited resolution. Proc. Natl. Acad. Sci. USA 2005, 102, 13081–13086. [Google Scholar] [CrossRef]
  3. Chen, X.; Zhong, S.; Hou, Y.; Cao, R.; Wang, W.; Li, D.; Dai, Q.; Kim, D.; Xi, P. Superresolution structured illumination microscopy reconstruction algorithms: A review. Light Sci. Appl. 2023, 12, 172. [Google Scholar] [CrossRef] [PubMed]
  4. Prakash, K.; Diederich, B.; Reichelt, S.; Heintzmann, R.; Schermelleh, L. Super-resolution structured illumination microscopy: Past, present and future. Philos. Trans. R. Soc. A 2021, 379, 20200143. [Google Scholar] [CrossRef] [PubMed]
  5. Ma, Y.; Wen, K.; Liu, M.; Zheng, J.; Chu, K.; Smith, Z.J.; Liu, L.; Gao, P. Recent advances in structured illumination microscopy. J. Phys. Photonics 2021, 3, 024009. [Google Scholar] [CrossRef]
  6. Prakash Singh, S.; Raja, S.; Mahalingam, S. Lentiviral Vpx induces alteration of mammalian cell nuclear envelope integrity. Biochem. Biophys. Res. Commun. 2019, 511, 192–198. [Google Scholar] [CrossRef]
  7. Wang, Y.; Guan, M.; Wang, H.; Li, Y.; Zhang, K.; Xi, P.; Zhang, Y. The largest isoform of Ankyrin-G is required for lattice structure of the axon initial segment. Biochem. Biophys. Res. Commun. 2021, 578, 28–34. [Google Scholar] [CrossRef]
  8. Dan, D.; Wang, Z.; Zhou, X.; Lei, M.; Zhao, T.; Qian, J.; Yu, X.; Yan, S.; Min, J.; Bianco, P.R.; et al. Rapid Image Reconstruction of Structured Illumination Microscopy Directly in the Spatial Domain. IEEE Photonics J. 2021, 13, 1–11. [Google Scholar] [CrossRef]
  9. Shroff, S.A.; Fienup, J.R.; Williams, D.R. Phase-shift estimation in sinusoidally illuminated images for lateral superresolution. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 2009, 26, 413–424. [Google Scholar] [CrossRef]
  10. Wicker, K. Non-iterative determination of pattern phase in structured illumination microscopy using auto-correlations in Fourier space. Opt. Express 2013, 21, 24692. [Google Scholar] [CrossRef]
  11. Zhou, X.; Lei, M.; Dan, D.; Yao, B.; Yang, Y.; Qian, J.; Chen, G.; Bianco, P.R. Image recombination transform algorithm for superresolution structured illumination microscopy. J. Biomed. Opt. 2016, 21, 096009. [Google Scholar] [CrossRef]
  12. Li, M.; Li, Y.; Liu, W.; Lal, A.; Jiang, S.; Jin, D.; Yang, H.; Wang, S.; Zhang, K.; Xi, P. Structured illumination microscopy using digital micro-mirror device and coherent light source. Appl. Phys. Lett. 2020, 116, 233702. [Google Scholar] [CrossRef]
  13. Dong, S.; Liao, J.; Guo, K.; Bian, L.; Suo, J.; Zheng, G. Resolution doubling with a reduced number of image acquisitions. Biomed. Opt. Express 2015, 6, 2946–2952. [Google Scholar] [CrossRef]
  14. Chen, Y.; Liu, Q.; Zhang, J.; Ye, Z.; Ye, H.; Zhu, Y.; Kuang, C.; Chen, Y.; Liu, W. Deep learning enables contrast-robust super-resolution reconstruction in structured illumination microscopy. Opt. Express 2024, 32, 3316–3328. [Google Scholar] [CrossRef]
  15. Ling, C.; Zhang, C.; Wang, M.; Meng, F.; Du, L.; Yuan, X. Fast structured illumination microscopy via deep learning. Photonics Res. 2020, 8, 1350–1359. [Google Scholar] [CrossRef]
  16. Song, L.; Liu, X.; Xiong, Z.; Ahamed, M.; An, S.; Zheng, J.; Ma, Y.; Gao, P. Super-resolution reconstruction of structured illumination microscopy using deep-learning and sparse deconvolution. Opt. Lasers Eng. 2024, 174, 107968. [Google Scholar] [CrossRef]
  17. Li, X.R.; Chen, J.J.; Wang, M.T. Advancement in Structured Illumination Microscopy Based on Deep Learning. Chin. J. Lasers-Zhongguo Jiguang 2024, 51, 2107103. [Google Scholar]
  18. Qian, J.; Wang, C.; Wu, H.; Chen, Q.; Zuo, C. Ensemble deep learning-enabled single-shot composite structured illumination microscopy (eDL-cSIM). PhotoniX 2025, 6, 1–21. [Google Scholar] [CrossRef]
  19. Chen, X.; Qiao, C.; Jiang, T.; Liu, J.; Meng, Q.; Zeng, Y.; Chen, H.; Qiao, H.; Li, D.; Wu, J. Self-supervised denoising for multimodal structured illumination microscopy enables long-term super-resolution live-cell imaging. PhotoniX 2024, 5, 4. [Google Scholar] [CrossRef]
  20. Saurabh, A.; Brown, P.T.; Bryan, J.S., IV; Fox, Z.R.; Kruithoff, R.; Thompson, C.; Kural, C.; Shepherd, D.P.; Pressé, S. Approaching maximum resolution in structured illumination microscopy via accurate noise modeling. Npj Imaging 2025, 3, 5. [Google Scholar] [CrossRef] [PubMed]
  21. Tu, S.; Liu, Q.; Liu, X.; Liu, W.; Zhang, Z.; Luo, T.; Kuang, C.; Liu, X.; Hao, X. Fast reconstruction algorithm for structured illumination microscopy. Opt. Lett. 2020, 45, 1567–1570. [Google Scholar] [CrossRef] [PubMed]
  22. Zhao, T.; Hao, H.; Wang, Z.; Liang, Y.; Feng, K.; He, M.; Yun, X.; Bianco, P.R.; Sun, Y.; Yao, B.; et al. Multi-color structured illumination microscopy for live cell imaging based on the enhanced image recombination transform algorithm. Biomed. Opt. Express 2021, 12, 3474–3484. [Google Scholar] [CrossRef]
  23. Tang, H.; Zhu, H.; Fei, L.; Wang, T.; Cao, Y.; Xie, C. Low-illumination image enhancement based on deep learning techniques: A brief review. Photonics 2023, 10, 198. [Google Scholar] [CrossRef]
  24. Liu, R.; Ma, L.; Zhang, J.; Fan, X.; Luo, Z. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 2 November 2021. [Google Scholar]
  25. Liu, P.; Wang, Y.; Yang, J.; Li, W. An adaptive enhancement method for gastrointestinal low-light images of capsule endoscope. In Proceedings of the ICASSP 2023—2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023. [Google Scholar]
  26. Dan, D.; Yao, B.; Lei, M. Structured illumination microscopy for super-resolution and optical sectioning. Chin. Sci. Bull. 2014, 59, 1291–1307. [Google Scholar] [CrossRef]
  27. Lal, A.; Shan, C.; Xi, P. Structured Illumination Microscopy Image Reconstruction Algorithm. IEEE J. Sel. Top. Quantum Electron. 2016, 22, 50–63. [Google Scholar] [CrossRef]
  28. Dan, D.; Ming, L.; Baoli, Y. DMD-based LED-illumination super-resolution and optical sectioning microscopy. Sci. Rep. 2013, 23, 1116. [Google Scholar] [CrossRef]
  29. Qiao, C.; Li, D. BioSR: A biological image dataset for super-resolution microscopy. Figshare 2020, 9, 13264793. [Google Scholar]
  30. Qiao, C.; Li, D.; Guo, Y.; Liu, C.; Jiang, T.; Dai, Q.; Li, D. Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nat. Methods 2021, 18, 194–202. [Google Scholar] [CrossRef] [PubMed]
  31. Ma, L.; Ma, T.; Liu, R.; Fan, X.; Luo, Z. Toward fast, flexible, and robust low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
  32. Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef] [PubMed]
  33. Li, C.; Guo, C.; Chen, C.L. Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4225–4238. [Google Scholar] [CrossRef] [PubMed]
  34. Zhang, Y.; Guo, X.; Ma, J.; Liu, W.; Zhang, J. Beyond brightening low-light images. Int. J. Comput. Vis. 2021, 129, 1013–1037. [Google Scholar] [CrossRef]
  35. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560. [Google Scholar] [CrossRef]
  36. Wu, W.; Weng, J.; Zhang, P.; Wang, X.; Yang, W.; Jiang, J. Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
  37. Bai, J.; Yin, Y.; He, Q.; Li, Y.; Zhang, X. Retinexmamba: Retinex-based mamba for low-light image enhancement. arXiv 2024, arXiv:2405.03349. [Google Scholar]
  38. Müller, M.; Mönkemöller, V.; Hennig, S.; Hübner, W.; Huser, T. Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ. Nat. Commun. 2016, 7, 10980. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Schematic diagram of SIM foundation.
Figure 2. Network structure diagram of the improved lighting estimation method.
Figure 3. Flowchart of DM-SIM-LLIE.
Figure 4. Schematic of the SIM system.
Figure 5. System resolution was calibrated using fluorescent microspheres with a diameter of 200 nm. (a) Wide-field reconstruction. (b) DM-SIM reconstruction. (c) FWHM of wide-field and DM-SIM system.
Figure 6. Visual comparison of the Bio dataset among advanced low-light image enhancement approaches.
Figure 7. Performance of DM-SIM combining the LLIE process on reconstructed BPAE cell. (a) Wide-field; (b) DM-SIM-LLIE; (c) Comparison between wide-field and the DM-SIM-LLIE method.
Figure 8. Imaging results of BPAE cell. (a) Wide-field image. (b) FDR-SIM reconstructed image. (c) DM-SIM reconstructed image. (d) Zoomed-in views of the boxed parts in (a–c). (e) Intensity curves along the marked lines in (d).
Figure 9. Imaging results of mouse kidney slices. (a–c) Images by wide-field, OS-SIM, and DM-SIM-LLIE, respectively. (d) The axial resolution of WF, OS-SIM, and DM-SIM-LLIE.
Figure 10. Experimental evaluation on different samples. (a) ACTIN sample. (b) Cervical cell sample. (c) Pathological tissue sample.
Table 1. Evaluation results of different methods in Figure 6.

| Dataset | Metrics | SCI | EnGAN | ZeroDCE++ | KinD++ | RetinexNet | URetNet | RetMamba | RUAS | Ours |
|---|---|---|---|---|---|---|---|---|---|---|
| CCPs | SSIM↑ | 0.856 | 0.159 | 0.509 | 0.677 | 0.061 | 0.582 | 0.193 | 0.878 | 0.893 |
| | PSNR↑ | 19.941 | 19.477 | 14.958 | 16.263 | 16.714 | 15.586 | 16.440 | 20.255 | 20.490 |
| | CII↑ | 2.888 | 2.731 | 3.294 | 4.163 | 2.027 | 4.337 | 3.831 | 2.745 | 2.702 |
| | RMSE↓ | 25.673 | 27.081 | 45.568 | 39.206 | 37.225 | 42.386 | 38.415 | 24.763 | 24.102 |
| MTs | SSIM↑ | 0.347 | 0.361 | 0.384 | 0.453 | 0.263 | 0.309 | 0.569 | 0.474 | 0.699 |
| | PSNR↑ | 10.466 | 10.124 | 10.822 | 11.256 | 7.279 | 7.275 | 15.076 | 12.247 | 17.130 |
| | CII↑ | 2.809 | 2.393 | 1.868 | 2.003 | 1.450 | 2.436 | 2.125 | 3.086 | 2.101 |
| | RMSE↓ | 76.428 | 79.491 | 73.354 | 86.367 | 110.301 | 110.347 | 44.947 | 62.261 | 35.483 |
| F-actin | SSIM↑ | 0.427 | 0.271 | 0.256 | 0.256 | 0.1799 | 0.295 | 0.274 | 0.576 | 0.782 |
| | PSNR↑ | 11.126 | 10.363 | 10.441 | 11.472 | 8.603 | 7.549 | 8.582 | 12.926 | 18.451 |
| | CII↑ | 3.942 | 3.367 | 3.769 | 3.769 | 3.518 | 4.211 | 4.904 | 3.311 | 2.194 |
| | RMSE↓ | 70.831 | 77.340 | 76.640 | 76.369 | 94.708 | 106.917 | 94.928 | 57.578 | 30.478 |
Table 2. Running time of the tested algorithms.

| Algorithm | Time/s |
|---|---|
| FDR-SIM | 3.90 |
| FDR-SIM-LLIE | Random |
| DM-SIM | 0.71 |
| DM-SIM-LLIE | 2.23 |