Computational Optical Imaging: Theories, Algorithms, and Applications

A special issue of Photonics (ISSN 2304-6732). This special issue belongs to the section "Data-Science Based Techniques in Photonics".

Deadline for manuscript submissions: 30 June 2026

Special Issue Editors


Guest Editor
Department of Biomedical Engineering, University of Strathclyde, Glasgow G4 0RE, UK
Interests: fluorescence lifetime imaging microscopy; time-correlated single-photon counting (TCSPC); computational imaging; machine learning

Guest Editor
School of Engineering, Yunnan University, Kunming, China
Interests: computational imaging algorithms and systems with various imaging modalities; single-photon imaging (SPI); magnetic resonance imaging (MRI); computed tomography (CT)

Guest Editor
Fraunhofer Centre for Applied Photonics, Glasgow G1 1RD, UK
Interests: single-photon imaging; imaging microscopy systems; flow cytometry; biomedical image processing; plasmonics; machine learning

Special Issue Information

Dear Colleagues,

Computational imaging is increasingly prevalent in autonomous systems and biomedical applications, enabled by advances in sensors and reconstruction algorithms. Raw sensor measurements are often difficult to interpret, creating ill-posed inverse problems and necessitating principled methods to recover meaningful, quantitative information. Augmenting the information content of measured data—through denoising and spatial and temporal resolution enhancement—is also critical. This Special Issue focuses on theories, algorithms, and systems that jointly incorporate sensing and computation to achieve robust imaging under challenging conditions, including sparse or compressed photon regimes, scattering media, long-range and non-line-of-sight scenarios, event-driven neuromorphic sensing, and low-emission constraints in biomedical imaging and spectroscopy. Emphasis is placed on statistical and optimization frameworks, physics-guided models, and data-driven machine learning approaches that improve accuracy, interpretability, and efficiency across diverse applications.

Topics of interest include (but are not limited to) the following:

  • Foundational theory for computational imaging and inverse problems;
  • Imaging sensors;
  • Statistical and optimization methods for robust reconstruction and inference;
  • Machine learning for optical imaging;
  • Compressive and sparse imaging;
  • Single-photon imaging;
  • Neuromorphic/event-based sensing and algorithms;
  • Computational biomedical imaging and spectroscopic reconstruction.

We look forward to receiving your contributions.

Dr. Zhenya Zang
Dr. Yiwei Chen
Dr. Dong Xiao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Photonics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computational imaging
  • compressive imaging
  • single-photon imaging
  • biomedical imaging and sensing
  • statistical and optimization algorithms for imaging
  • applied machine learning in imaging
  • smart imaging sensors

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

15 pages, 1814 KB  
Article
Physics-Prior-Guided Deep Learning for High-Precision Marker Localization Under Saturated Artifacts for Potential Surgical Navigation Applications
by Yan Xu, Shoubiao Zhang, Huanhuan Tian, Zhiyong Zou, Weilong Li, Anlan Huang, Nu Zhang and Xiang Ma
Photonics 2026, 13(3), 294; https://doi.org/10.3390/photonics13030294 - 18 Mar 2026
Abstract
Optical reflective markers are widely used in precision medicine, computer-assisted surgery, and robotic interventions. Nevertheless, intraoperative tracking still faces challenges such as sensor saturation, Point Spread Function (PSF) blooming, and flat-top artifacts, which affect localization precision and stability. Traditional deep learning detectors perform well in general object recognition but are limited in handling saturated infrared reflective markers due to their neglect of optical physics and inability to separate signal from blooming interference. This paper presents a physics-prior-guided network integrating a Brightness-Prior-Enhanced Spatial Attention (BPESA) mechanism for high-precision sub-pixel marker localization under saturation conditions. The method achieves a Root Mean Square (RMS) error of 0.52 pixels (approximately 0.11 mm) on a dataset of 8000 binocular images and reduces the localization error by approximately 54.4% compared with the baseline YOLOv8 model, while maintaining an inference speed of 134.6 FPS. The results demonstrate that optical blooming interference can be effectively mitigated by a learnable physics-prior branch, providing accurate marker coordinates that form a foundation for potential downstream tracking or navigation tasks. Full article
(This article belongs to the Special Issue Computational Optical Imaging: Theories, Algorithms, and Applications)
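A common classical approach to the sub-pixel marker localization problem above is intensity-weighted centroiding, and the abstract's saturation and flat-top artifacts are exactly what degrade it. A self-contained NumPy sketch on a synthetic, deliberately clipped Gaussian spot (illustrative only, not the paper's network):

```python
import numpy as np

# Baseline sub-pixel marker localization: intensity-weighted centroid of a
# thresholded blob. Sensor saturation clips the peak into a flat top,
# mimicking the artifact discussed in the abstract. Values are synthetic.
yy, xx = np.mgrid[0:64, 0:64]
cx, cy = 30.3, 33.7                          # true sub-pixel marker centre
psf = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 2.5 ** 2))
img = np.clip(3.0 * psf, 0.0, 1.0)           # saturation clips the peak

mask = img > 0.2                             # segment the marker blob
w = img * mask                               # intensity weights inside the blob
est_x = (w * xx).sum() / w.sum()
est_y = (w * yy).sum() / w.sum()
err = np.hypot(est_x - cx, est_y - cy)       # localization error in pixels
```

This symmetric toy case still centroids well; the paper targets the harder asymmetric blooming cases where such a baseline breaks down.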

24 pages, 4692 KB  
Article
SSTNT: A Spatial–Spectral Similarity Guided Transformer-in-Transformer for Hyperspectral Unmixing
by Xinyu Cui, Xinyue Zhang, Aoran Dai and Da Sun
Photonics 2026, 13(3), 276; https://doi.org/10.3390/photonics13030276 - 13 Mar 2026
Abstract
Vision Transformers (ViTs), owing to their strong capability in modeling global contextual dependencies, have been widely adopted in hyperspectral image unmixing (HU). However, standard ViTs process images by partitioning them into non-overlapping patches, which disrupts spatial continuity at the pixel level and neglects the fine-grained structural relationships among pixels within local regions. Consequently, effectively capturing the detailed spatial–spectral features required for accurate unmixing remains challenging. Furthermore, the high computational complexity of global self-attention and its sensitivity to noise limit the applicability of conventional Transformers to HU. To address these issues, we propose a spatial–spectral similarity guided Transformer-in-Transformer (SSTNT) framework. The proposed network adopts a modified TNT architecture, in which the inner Transformer employs a linear self-attention (LSA) mechanism to efficiently exploit pixel-level local features within sliding windows, while the outer Transformer preserves global attention to aggregate contextual information, thereby forming a cooperative local–global optimization scheme. Furthermore, a lightweight spatial–spectral similarity module is introduced to enhance the modeling of neighborhood structures. Finally, spectral reconstruction is achieved through a trainable endmember decoder and a normalized abundance estimation module. Extensive experiments conducted on both synthetic and real hyperspectral datasets demonstrate the effectiveness and robustness of the proposed method. Full article
(This article belongs to the Special Issue Computational Optical Imaging: Theories, Algorithms, and Applications)
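The linear self-attention (LSA) mentioned in the abstract can be illustrated generically: replacing softmax attention with a positive kernel feature map lets the key–value product be precomputed, cutting cost from quadratic to linear in sequence length. A minimal NumPy sketch of a standard linear-attention formulation (not the SSTNT implementation):

```python
import numpy as np

# Linear attention: with a positive feature map phi (here elu(x)+1),
# attention becomes phi(Q) (phi(K)^T V), i.e. O(N d^2) instead of the
# O(N^2 d) of explicit pairwise softmax attention.
def elu1(x):
    return np.where(x > 0, x + 1.0, np.exp(x))   # elu(x) + 1, strictly positive

def linear_attention(Q, K, V):
    q, k = elu1(Q), elu1(K)                      # (N, d) feature maps
    kv = k.T @ V                                 # (d, d_v) summary of keys/values
    z = q @ k.sum(axis=0)                        # (N,) per-query normalizer
    return (q @ kv) / z[:, None]

rng = np.random.default_rng(1)
N, d = 16, 8
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
out = linear_attention(Q, K, V)                  # each row: weighted average of V
```

The output is identical to explicit (un-softmaxed) kernel attention; the saving comes purely from reassociating the matrix products.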

21 pages, 6660 KB  
Article
Infrared and Visible Multi-Scale Pyramid Cross-Layer Fusion Algorithm Based on Thermal Extended Target Separation
by An Liang, Laixian Zhang, Yingchun Li, Hao Ding, Haijing Zheng, Rong Li and Rui Zhu
Photonics 2026, 13(3), 263; https://doi.org/10.3390/photonics13030263 - 10 Mar 2026
Abstract
Infrared and visible image fusion aims to synergistically combine the thermal target saliency of infrared images with the rich textural details of visible images. To address the limitations of traditional multi-scale methods in terms of target–background contrast and detail preservation, this paper introduces a novel multi-scale pyramid cross-layer fusion framework. The core of this framework lies in a thermal expansion-based target separation mechanism for superior hierarchical decomposition. Source images are first decomposed via a Gaussian–Laplacian pyramid for multi-resolution representation. By exploiting infrared thermal saliency and visible geometric priors, the scene is explicitly segregated into a target layer and a background layer. The target layer employs deep feature extraction based on Iteratively Reweighted Nuclear Norm minimization to sharpen thermal prominences and enhance contrast; concurrently, the background layer undergoes a cross-modal, cross-layer consistency fusion strategy, integrating spatial textures across frequency bands to maintain structural fidelity and detail richness. This dual-layer paradigm, augmented by multi-scale aggregation, ensures seamless, artifact-free fusion. To comprehensively evaluate the proposed method, systematic experiments are conducted on two benchmark datasets: TNO and RoadScene. Evaluations on both datasets demonstrate that our method outperforms state-of-the-art baselines, and extended experiments on the MSRS dataset further confirm its strong generalization capability and robustness. Furthermore, systematic hyperparameter experiments determine the optimal model configuration, and ablation studies substantiate the effective contribution of both the pyramid segregation module and the IRNN optimization module to the final fusion performance. Full article
(This article belongs to the Special Issue Computational Optical Imaging: Theories, Algorithms, and Applications)
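The Gaussian–Laplacian pyramid underlying the decomposition step can be sketched in a few lines: each Laplacian level stores the detail lost between consecutive Gaussian levels, so the decomposition is exactly invertible. A generic NumPy illustration (the fusion rules themselves are the paper's contribution and are not reproduced here):

```python
import numpy as np

# Gaussian-Laplacian pyramid: decompose an image into band-pass detail
# layers plus a low-pass residual, then reconstruct it exactly by summing
# each layer with the upsampled next level.
def blur(img):
    k = np.array([1, 4, 6, 4, 1], float)
    k /= k.sum()                                        # separable 5-tap Gaussian
    out = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, out)

def down(img):
    return blur(img)[::2, ::2]                          # anti-alias then decimate

def up(img, shape):
    out = np.zeros(shape)
    out[::2, ::2] = img                                 # zero-insert upsampling
    return 4 * blur(out)                                # interpolate (gain of 4)

def laplacian_pyramid(img, levels=3):
    pyr, g = [], img
    for _ in range(levels):
        g2 = down(g)
        pyr.append(g - up(g2, g.shape))                 # band-pass detail layer
        g = g2
    pyr.append(g)                                       # low-pass residual
    return pyr

def reconstruct(pyr):
    g = pyr[-1]
    for lap in reversed(pyr[:-1]):
        g = lap + up(g, lap.shape)
    return g

rng = np.random.default_rng(2)
img = rng.random((64, 64))
pyr = laplacian_pyramid(img)
rec = reconstruct(pyr)                                  # exact up to rounding
```

Fusion methods of this family merge the two modalities level by level (e.g. per-layer rules for targets vs. background) before running the reconstruction pass.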

21 pages, 7741 KB  
Article
Polarization-Guided Deep Fusion for Real-Time Enhancement of Day–Night Tunnel Traffic Scenes: Dataset, Algorithm, and Network
by Renhao Rao, Changcai Cui, Liang Chen, Zhizhao Ouyang and Shuang Chen
Photonics 2025, 12(12), 1206; https://doi.org/10.3390/photonics12121206 - 8 Dec 2025
Abstract
The abrupt light-to-dark or dark-to-light transitions at tunnel entrances and exits cause short-term, large-scale illumination changes, leading traditional RGB perception to suffer from exposure mutations, glare, and noise accumulation at critical moments, thereby triggering perception failures and blind zones. Addressing this typical failure scenario, this paper proposes a closed-loop enhancement solution centered on polarization imaging as a core physical prior, comprising a real-world polarimetric road dataset, a polarimetric physics-enhanced algorithm, and a beyond-fusion network, while satisfying both perception enhancement and real-time constraints. First, we construct the POLAR-GLV dataset, which is captured using a four-angle polarization camera under real highway tunnel conditions, covering the entire process of entering tunnels, inside tunnels, and exiting tunnels, systematically collecting data on adverse illumination and failure distributions in day–night traffic scenes. Second, we propose the Polarimetric Physical Enhancement with Adaptive Modulation (PPEAM) method, which uses Stokes parameters, DoLP, and AoLP as constraints. Leveraging the glare sensitivity of DoLP and richer texture information, it adaptively performs dark region enhancement and glare suppression according to scene brightness and dark region ratio, providing real-time polarization-based image enhancement. Finally, we design the Polar-PENet beyond-fusion network, which introduces Polarization-Aware Gates (PAG) and CBAM on top of physical priors, coupled with detection-driven perception-oriented loss and a beyond mechanism to explicitly fuse physics and deep semantics to surpass physical limitations. 
Experimental results show that compared to original images, Polar-PENet (beyond-fusion network) achieves PSNR and SSIM scores of 19.37 and 0.5487, respectively, on image quality metrics, surpassing the performance of PPEAM (polarimetric physics-enhanced algorithm) which scores 18.89 and 0.5257. In terms of downstream object detection performance, Polar-PENet performs exceptionally well in areas with drastic illumination changes such as tunnel entrances and exits, achieving a mAP of 63.7%, representing a 99.7% improvement over original images and a 12.1% performance boost over PPEAM’s 56.8%. In terms of processing speed, Polar-PENet is 2.85 times faster than the physics-enhanced algorithm PPEAM, with an inference speed of 183.45 frames per second, meeting the real-time requirements of autonomous driving and laying a solid foundation for practical deployment in edge computing environments. The research validates the effective paradigm of using polarimetric physics as a prior and surpassing physics through learning methods. Full article
(This article belongs to the Special Issue Computational Optical Imaging: Theories, Algorithms, and Applications)
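The Stokes parameters, DoLP, and AoLP used as physical priors above follow standard polarimetry formulas for a four-angle (0°/45°/90°/135°) polarization camera. A NumPy sketch on synthetic intensities (values illustrative, not the POLAR-GLV data):

```python
import numpy as np

# Linear Stokes parameters from four polarizer-angle intensity images,
# plus the derived DoLP (degree) and AoLP (angle) of linear polarization.
rng = np.random.default_rng(3)
dolp_true, aolp_true = 0.6, np.deg2rad(20.0)     # ground-truth polarization state
I_tot = rng.uniform(0.5, 1.0, (32, 32))          # unpolarized total intensity map

def intensity(theta_deg):                        # Malus-style four-channel model
    th = np.deg2rad(theta_deg)
    return 0.5 * I_tot * (1 + dolp_true * np.cos(2 * (th - aolp_true)))

I0, I45, I90, I135 = (intensity(t) for t in (0, 45, 90, 135))

S0 = 0.5 * (I0 + I45 + I90 + I135)               # total intensity
S1 = I0 - I90                                    # horizontal/vertical preference
S2 = I45 - I135                                  # diagonal preference
dolp = np.sqrt(S1**2 + S2**2) / S0               # degree of linear polarization
aolp = 0.5 * np.arctan2(S2, S1)                  # angle of linear polarization
```

On this noiseless model the maps recover the true state exactly; real four-angle sensors add noise, crosstalk, and the glare effects the paper exploits.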

11 pages, 3094 KB  
Article
Fresnel Coherent Diffraction Imaging Without Wavefront Priors
by Ling Bai, Wen Cao, Yueshu Xu, Cuifang Kuang and Xu Liu
Photonics 2025, 12(11), 1066; https://doi.org/10.3390/photonics12111066 - 28 Oct 2025
Abstract
Fresnel diffraction plays a critical role in coherent diffraction imaging and holography. Experimental setups for these techniques are often designed based on plane-wave illumination. However, two key issues arise in practical applications: on the one hand, it is difficult to obtain an ideal plane wave in experiments, which inevitably introduces wavefront curvature; on the other hand, the use of spherical waves enhances the quality of reconstruction results, while it also imposes additional requirements for the calibration of both the illumination wavefront and experimental parameters. To address these issues, we introduce a diffraction-adapted propagation model that integrates both the spherical wavefront effects and sampling variations within the diffraction model. The parameters of this model can be estimated through prior-free optimization, thereby eliminating the need for prior knowledge of system parameters or specific experimental setups. Our approach enables robust reconstruction across a wide range of Fresnel diffraction patterns. It also allows for the automatic calibration of experimental parameters using only the measured data. The effectiveness of the proposed method has been validated through both theoretical analysis and experimental results. Full article
(This article belongs to the Special Issue Computational Optical Imaging: Theories, Algorithms, and Applications)
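The forward model in Fresnel coherent diffraction imaging is Fresnel propagation, commonly implemented with the transfer-function method: FFT the field, multiply by a quadratic-phase transfer function, inverse FFT. A minimal NumPy sketch with made-up parameters (this is the generic propagator, not the paper's diffraction-adapted model):

```python
import numpy as np

# Fresnel propagation of a coherent field over distance z using the
# transfer-function method. The transfer function is a pure phase, so
# the propagated field conserves energy.
wavelength, z, pitch, n = 633e-9, 0.05, 10e-6, 256    # HeNe-like toy setup

fx = np.fft.fftfreq(n, d=pitch)                       # spatial frequencies
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))  # Fresnel TF

aperture = np.zeros((n, n), dtype=complex)
aperture[96:160, 96:160] = 1.0                        # square-aperture field

field_z = np.fft.ifft2(np.fft.fft2(aperture) * H)     # field after distance z
energy_in = (np.abs(aperture) ** 2).sum()
energy_out = (np.abs(field_z) ** 2).sum()             # equal up to rounding
```

The paper's contribution sits precisely in generalizing this plane-wave propagator: folding spherical-wavefront curvature and sampling changes into the model so its parameters can be estimated from data alone.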
