Search Results (1,388)

Search Parameters:
Keywords = optical reconstruction

21 pages, 1866 KB  
Article
Mixed-Scene Holographic 3D Display for Film and Television Visual Content Presentation: Zero-Order-Suppressed Single-Hologram Fusion and Parallax-Preserving Digital Resizing
by Pengfei Huang and Tao Wang
Photonics 2026, 13(5), 428; https://doi.org/10.3390/photonics13050428 (registering DOI) - 27 Apr 2026
Abstract
Mixed-scene holographic 3D display for film and television visual content presentation remains challenging because recorded digital holograms and computer-generated holograms (CGHs) are produced under different numerical and hardware constraints. Direct hologram superposition typically causes strong zero-order interference, diffraction efficiency degradation, and sampling pitch mismatch between the recording sensor and the replay panel, while conventional resizing reduces the effective replay aperture and narrows the available parallax. To address these issues, this paper proposes a zero-order-suppressed single-hologram fusion framework with parallax-preserving digital resizing. A recorded digital hologram is first processed by Gaussian high-pass filtering to suppress the dominant zero-order component, then resampled to match the LCOS replay pitch, and finally normalized and fused with a CGH generated through bipolar intensity encoding. On this basis, two resizing routes are developed: a spatial-domain method for aperture-preserving whole-scene scaling and a frequency-domain method for object-selective scaling and translation. Optical validation on a three-channel LCOS prototype shows that the quantitative diffraction efficiency analysis predicts an increase from approximately 10.1% to 20.05% per reconstructed object for the two-hologram fusion case, and the revised experimental results are consistent with this improvement trend. The experiments further verify replay scaling at multiple factors, the selective manipulation of physical and virtual objects, mixed-scene color replay, and occlusion-consistent depth ordering. Together with the distortion analysis, these results demonstrate improved replay visibility after fusion while maintaining geometric controllability and effective replay aperture. By relying on hologram-domain preprocessing and resizing rather than full mixed-scene recomputation, the proposed method also reduces computational burden. 
The study therefore provides an efficient and controllable mixed-scene holographic replay framework for visually enriched film and television content presentation, although its depth applicability remains bounded and dedicated real-time timing benchmarks are left for future work. Full article
(This article belongs to the Special Issue Recent Advances in Holography and 3D Display)
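The Gaussian high-pass step described in the abstract (suppressing the dominant zero-order term of the recorded hologram before fusion) can be sketched in a few lines of NumPy. This is a minimal illustration: the filter width `sigma` and the exact mask form are assumptions for the example, not parameters taken from the paper.

```python
import numpy as np

def suppress_zero_order(hologram, sigma=8.0):
    """Attenuate the zero-order (DC) term of a recorded digital hologram
    with a Gaussian high-pass mask in the spatial-frequency domain."""
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = hologram.shape
    y, x = np.indices((ny, nx))
    r2 = (y - ny // 2) ** 2 + (x - nx // 2) ** 2
    mask = 1.0 - np.exp(-r2 / (2.0 * sigma ** 2))  # ~0 at DC, ~1 elsewhere
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
```

A uniform (pure zero-order) input is driven to zero, while high-frequency fringe content passes nearly unchanged; resampling to the LCOS pitch and fusion with the CGH would follow this step.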

37 pages, 2874 KB  
Article
Unified Stochastic Differential Equation Modeling and Fuzzy-RL Control for Turbulent UWOC
by Bowen Si, Jiaoyi Hou, Dayong Ning, Yongjun Gong, Ming Yi and Fengrui Zhang
J. Mar. Sci. Eng. 2026, 14(9), 792; https://doi.org/10.3390/jmse14090792 (registering DOI) - 26 Apr 2026
Abstract
Underwater wireless optical communication (UWOC) for autonomous underwater vehicles is severely compromised by the coupling of oceanic optical turbulence and platform motion. Traditional static statistical models fail to capture the temporal evolution of these stochastic processes, hindering effective real-time beam tracking. This paper proposes a unified dynamic framework and a hybrid intelligent control strategy to address beam misalignment in turbulent environments. First, a physically motivated stochastic differential equation (SDE) model is derived from the Radiative Transfer Equation via diffusion approximation. Validated by an inverse Fokker–Planck approach, this model accurately reconstructs drift fields for diverse channel conditions, serving as a dynamic generator for time-varying fading. Second, to maintain robust link alignment, a hybrid Fuzzy-Reinforcement Learning control strategy is developed. This approach integrates the interpretability of fuzzy logic with the adaptive optimization of Q-learning, incorporating a supervisor mechanism to handle deep fading events. Numerical simulations and hardware-in-the-loop (HIL) experiments demonstrate the system’s efficacy. The proposed controller achieves a median alignment error of 3.64 mm and reduces transient errors by over 80% compared to classical PID controllers during signal recovery. These results confirm that the proposed framework significantly enhances link stability and tracking robustness for AUVs in complex random media. Full article
(This article belongs to the Section Ocean Engineering)
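The Q-learning half of the hybrid Fuzzy-RL controller rests on the standard tabular update. A minimal sketch follows; the state/action discretization, learning rate, and discount factor here are illustrative assumptions, and the fuzzy-logic shaping and deep-fading supervisor described in the abstract are omitted.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```

In a beam-tracking loop, `s` would encode the discretized alignment error and `r` would reward reduced misalignment.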
20 pages, 5773 KB  
Article
Water Spectra Reconstruction for Sentinel-2 MSI: From Multispectral to Hyperspectral
by Songyu Chen, Yali Guo, Haiyang Zhao, Xiaodao Wei, Guojian Chen and Yuan Zhang
Remote Sens. 2026, 18(9), 1288; https://doi.org/10.3390/rs18091288 - 23 Apr 2026
Viewed by 201
Abstract
For studies utilizing methods such as water color parameter inversion and algal bloom classification, abundant spectral bands and high spectral resolution are of great significance. However, for multispectral satellite sensors that are not designed for water color studies (e.g., Sentinel-2 MSI), the number of bands in the visible–near-infrared range is limited, and such sensors lack specific spectral bands with rich spectral information. Hyperspectral reconstruction of multispectral data based on hyperspectral remote sensing reflectance (Rrs) databases and machine learning algorithms has been proven to be a feasible solution. Based on the in situ measured Rrs data, this study constructed a large-sample hyperspectral Rrs database covering various optical water types using two Chinese hyperspectral satellites, and compared the spectral reconstruction accuracy of six machine learning algorithms. The results show that expanding the Rrs database for model training by integrating hyperspectral satellite data can effectively improve the reconstruction accuracy in waters of different optical types. Comparisons with in situ measured hyperspectral Rrs indicate that the reconstructed Sentinel-2 hyperspectral data achieve high accuracy, with the Spectral Angle Mapper (SAM) less than 5° and the correlation coefficient (r) higher than 0.7. Furthermore, the reconstructed data can effectively restore spectral information not captured by the original multispectral data, such as the suspended sediment Rrs peak at 580 nm and the chlorophyll Rrs valley at 680 nm. Through spectral reconstruction, the spectral resolution of Sentinel-2 can be maximized while retaining its advantages of fast revisit capability and high spatial resolution, thereby expanding its application potential in water color remote sensing. Full article
(This article belongs to the Special Issue Artificial Intelligence in Hyperspectral Remote Sensing Data Analysis)
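The Spectral Angle Mapper used above to score the reconstructed spectra is a standard metric; a minimal reference implementation over two Rrs vectors:

```python
import numpy as np

def spectral_angle_deg(rrs_a, rrs_b):
    """Spectral Angle Mapper (SAM) between two spectra, in degrees."""
    a = np.asarray(rrs_a, dtype=float)
    b = np.asarray(rrs_b, dtype=float)
    cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
```

SAM is scale-invariant — a spectrum and any positive multiple of it score 0° — which is why the abstract pairs it with the correlation coefficient r.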

12 pages, 5834 KB  
Article
Quantitative Phase Factor Retrieval from Single-Shot Off-Axis Interferograms for Object Reconstruction
by Jialing Chen, Zixi Yu, Jianglong Lei, Yuanxiang Wang and Qingli Jing
Photonics 2026, 13(5), 412; https://doi.org/10.3390/photonics13050412 - 23 Apr 2026
Viewed by 138
Abstract
In the far-field approximation, an object’s diffraction field can be expressed as its Fourier transform multiplied by a phase factor. Here, we present a simple method with which to directly retrieve this phase factor from a single-shot off-axis interference pattern. By exploiting and adjusting its unique two-dimensional quadratic form, the quadratic contribution from the object’s Fourier transform can generally be neglected, particularly for amplitude-only objects and slowly varying phase objects. The phase factor is extracted by fitting a quadratic surface to the unwrapped phase obtained via Fourier-transform-based phase retrieval. Removing this factor enables precise reconstruction through a straightforward inverse Fourier transform, without requiring iterative computations. Compared with conventional far-field diffraction setups, our approach reduces system length and allows the use of smaller CCD sensors. Experimental validation using a modified Mach–Zehnder interferometer demonstrates high reconstruction accuracy and robustness. Overall, this method provides an efficient, practical, and real-time solution for object reconstruction, with the potential to simplify and miniaturize optical setups, offering an alternative approach to standard coherent diffraction imaging techniques. Full article
(This article belongs to the Special Issue Quantum Optics: Communication, Sensing, Computing, and Simulation)
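The central fitting step — a quadratic surface fitted to the unwrapped phase to isolate the phase factor — can be sketched with linear least squares. This is an illustrative sketch: the basis below omits an xy cross term (consistent with a separable quadratic phase factor), and the pixel-grid parametrization is an assumption of the example.

```python
import numpy as np

def fit_quadratic_phase(phase):
    """Fit phi(x, y) ~ c0 + c1*x + c2*y + c3*x**2 + c4*y**2 to an
    unwrapped phase map; returns coefficients and the fitted surface."""
    ny, nx = phase.shape
    y, x = np.indices((ny, nx))
    A = np.stack([np.ones_like(x), x, y, x**2, y**2],
                 axis=-1).reshape(-1, 5).astype(float)
    coef, *_ = np.linalg.lstsq(A, phase.ravel(), rcond=None)
    return coef, (A @ coef).reshape(ny, nx)
```

Subtracting the fitted surface from the retrieved phase removes the phase factor, after which a single inverse Fourier transform reconstructs the object, as the abstract describes.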

18 pages, 24765 KB  
Article
Field-Transformation-Based Light-Field Hologram Generation from a Single RGB Image
by Xiaoming Chen, Xiaoyu Jiang, Yingqing Huang, Xi Wang and Chaoqun Ma
Photonics 2026, 13(5), 407; https://doi.org/10.3390/photonics13050407 - 22 Apr 2026
Viewed by 254
Abstract
We propose a field-transformation-based framework for generating phase-only light-field holograms from a single RGB image. The method establishes an explicit pipeline from monocular scene inference to holographic wavefront synthesis, without requiring multi-view capture or task-specific hologram-network training. First, we construct a layered occlusion RGB-D model from the input image using monocular depth estimation, connectivity-based layer decomposition, and occlusion-aware inpainting, which provides a lightweight 3D prior for sparse-view rendering in the small-parallax regime. Second, we transform the rendered sparse RGB-D light field into a target complex wavefront on the recording plane through local frequency mapping, thereby bridging explicit scene geometry and wave-optical field construction. Third, we optimize the phase-only hologram under multi-plane amplitude constraints using a geometrically consistent initial phase and an error-driven adaptive depth-sampling strategy, which improves convergence stability and reconstruction quality under a limited computational budget. Numerical experiments show that the proposed method achieves better depth continuity, occlusion fidelity, and lower speckle noise than representative layer-based and point-based methods, and improves the average PSNR and SSIM by approximately 3 dB and 0.15, respectively, over Hogel-Free Holography. Optical experiments further confirm the physical feasibility and robustness of the proposed framework. Full article
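The PSNR figure quoted above (≈3 dB over Hogel-Free Holography) follows the usual definition; a minimal reference implementation, assuming images normalized to a peak value of 1.0:

```python
import numpy as np

def psnr_db(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images; higher is better."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak**2 / mse)
```

A +3 dB change corresponds to roughly halving the mean squared error (10·log10 2 ≈ 3.01 dB), which puts the reported gain in perspective.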

12 pages, 1444 KB  
Article
Task-Oriented Inference Framework for Lightweight and Energy-Efficient Object Localization in Electrical Impedance Tomography
by Takashi Ikuno and Reiji Kaneko
Sensors 2026, 26(8), 2570; https://doi.org/10.3390/s26082570 - 21 Apr 2026
Viewed by 228
Abstract
Electrical Impedance Tomography (EIT) is a promising non-invasive sensing technique, yet its practical application in resource-constrained environments is often limited by the high computational cost of inverse image reconstruction. To address this challenge, we focus on specific sensing objectives rather than full image recovery. In this study, we propose a lightweight, task-oriented inference framework for object localization in EIT that bypasses the need to solve computationally expensive inverse reconstruction problems. This approach addresses the high computational demands and hardware complexity of conventional iterative methods, which often hinder real-time monitoring in resource-constrained edge computing environments. Training datasets were generated via finite element method (FEM) simulations for Opposite and Adjacent current injection configurations. A feedforward neural network was developed to independently estimate the radial and angular object positions as probability distributions. Our systematic evaluation revealed that the localization performance depends on the injection configuration and model depth; notably, the Opposite method achieved perfect classification accuracy (1.00) for radial estimation with an optimized architecture of four hidden layers, whereas the Adjacent method exhibited higher ambiguity. Results quantitatively evaluated using the Wasserstein distance show that the Opposite configuration produces more localized, unimodal probability distributions than the Adjacent configuration by utilizing current fields that traverse the entire domain. 
Compared with existing image-based reconstruction methods, including the conventional electrical impedance tomography and diffuse optical tomography reconstruction software (EIDORS ver.3.12), the proposed framework reduced energy consumption from 3.09 to 0.96 Wh, demonstrating an approximately 70% improvement in energy efficiency while maintaining a high localization accuracy without the need for iterative Jacobian updates. This task-oriented framework enables reliable, high-speed, and energy-efficient localization, making it well-suited for low-power EIT applications in mobile and embedded sensor systems. Full article
(This article belongs to the Section Sensing and Imaging)
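For the discrete 1-D position distributions compared above, the Wasserstein distance reduces to the L1 distance between cumulative distribution functions; a sketch assuming unit-spaced bins (the binning itself is an assumption of the example):

```python
import numpy as np

def wasserstein_1d(p, q):
    """W1 distance between two discrete distributions on the same
    unit-spaced 1-D grid: sum of |CDF_p - CDF_q|."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    return float(np.sum(np.abs(np.cumsum(p) - np.cumsum(q))))
```

A sharply unimodal prediction near the true bin (the behavior reported for the Opposite configuration) yields a small W1, while probability mass spread over distant bins is penalized in proportion to its distance.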

29 pages, 2055 KB  
Article
Resilience Assessment and Enhancement Strategy for Transmission Lines Based on Distributed Fibre Optic Sensing
by Menghao Zhang, Qingwu Gong, Xiuyi Li and Hui Qiao
Electronics 2026, 15(8), 1739; https://doi.org/10.3390/electronics15081739 - 20 Apr 2026
Viewed by 264
Abstract
Typhoon-induced wind loads pose severe threats to transmission systems. However, existing resilience assessment approaches typically rely on sparse meteorological station data and assume spatially uniform wind speed distributions along transmission corridors, which fail to capture the span-level spatial difference of wind fields. To address this limitation, this paper proposes a distributed optical fiber sensing (DOFS)-driven span-level resilience assessment and hardening optimization framework for transmission networks. First, a phase-sensitive optical time domain reflectometry (Φ-OTDR)-based distributed optical fiber sensing system is employed, utilizing optical fibers embedded in existing OPGW cables as sensing media. By capturing vibration responses of the fiber induced by wind–structure interaction, real-time spatiotemporal wind speed sequences at the individual span level are reconstructed through signal processing and inversion algorithms, providing high-spatial-resolution environmental input data for resilience evaluation. Second, a span-level failure probability quantification method is established using a load–strength interference model. On this basis, a resilience evaluation framework—“span-level asset damage cost—line-level critical corridor identification—system-level load shedding assessment”—is constructed, enabling cross-scale resilience quantification from component damage to system-level performance degradation. Third, a span-level gradient hardening optimization model is developed. By adopting a scenario pre-calculation and iterative updating strategy, coordinated solving of reinforcement decisions and failure scenarios is achieved, thereby maximizing resilience enhancement benefits. 
The proposed framework is validated using DOFS-measured wind speed data collected from a 500 kV transmission line along the Fujian coast during three real typhoon events—Typhoon Shantuo, Typhoon Trami, and Typhoon Koinu—supporting the reliability of the acquired span-level wind speed information. Case studies conducted on a modified IEEE RTS-24 system demonstrate that the proposed span-level hardening strategy can substantially reduce reinforcement cost compared with the conventional line-level hardening strategy. In the reported benchmark case, it achieves zero load-shedding penalty with a markedly lower hardening cost, and under the same budget constraint, it further yields lower expected load shedding and lower expected asset damage. Full article
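The span-level load–strength interference model reduces, once the distributions are fixed, to evaluating P(load > strength). A Monte Carlo sketch with independent normal load and strength follows; the distribution families and parameters are illustrative assumptions — in the paper the load side is driven by the DOFS-reconstructed span-level wind speed sequences rather than an assumed distribution.

```python
import numpy as np

def span_failure_probability(load_mean, load_std, strength_mean,
                             strength_std, n=200_000, seed=0):
    """Monte Carlo estimate of P(wind load > structural strength)
    for one span, with independent normal load and strength."""
    rng = np.random.default_rng(seed)
    load = rng.normal(load_mean, load_std, n)
    strength = rng.normal(strength_mean, strength_std, n)
    return float(np.mean(load > strength))
```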

17 pages, 7609 KB  
Article
Plasma Physics-Based Deep Learning Modeling for Accurate Morphology Prediction in Femtosecond Bessel Laser Processing of ZnS
by Yifan Deng, Jingya Sun, Manlou Ye, Xiaokang Dong, Xiang Li and Yang Yang
Photonics 2026, 13(4), 394; https://doi.org/10.3390/photonics13040394 - 20 Apr 2026
Viewed by 317
Abstract
Femtosecond laser processing has become a powerful approach for high-precision micro- and nanofabrication in transparent materials, owing to its ultrashort pulse duration and minimized thermal effects. However, the limited predictability of processing depth remains a major obstacle to practical applications. Here, we present a morphology prediction framework for femtosecond Bessel laser processing of ZnS that integrates plasma physics modeling with deep learning. Through combined experimental measurements and plasma physics simulations, the influence of laser pulse energy on electron density evolution and material removal depth is systematically investigated. The results reveal the dominant roles of multiphoton ionization, avalanche ionization, and free-electron dynamics in deep-volume processing, and demonstrate the strong sensitivity of the processing morphology to the plasma distribution. Conventional plasma models can accurately reproduce the ablation diameter, yet exhibit significant limitations in predicting the processing depth. We propose a physics data-based framework for femtosecond Bessel beam processing, which integrates a depth-residual regression network conditioned on the peak electron density distribution to effectively learn and compensate for systematic modeling errors in plasma-based simulations. This strategy leads to excellent agreement between predicted and experimental processing depths and three-dimensional morphologies under various energy conditions. The model achieves a mean absolute error (MAE) of 4.9 nm at the pixel level for 3D crater reconstruction. Under rigorous crater-grouped cross-validation with Leave-One-Group-Out evaluation, the model achieves a mean R² of 0.74 across 8 independent craters, demonstrating reliable generalization to unseen energy conditions.
These results demonstrate that incorporating physical priors into data-driven learning provides an effective pathway to overcoming accuracy limitations in modeling complex laser–matter interactions. This approach offers a reliable tool for quantitative prediction and parameter optimization in deep femtosecond laser processing of transparent materials, enabling highly controllable and reproducible micro- and nanofabrication for advanced photonic and three-dimensional optical applications. Full article
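The Leave-One-Group-Out score reported above (mean R² across independent craters) averages per-group coefficients of determination. The scoring step can be sketched as follows; the per-fold retraining is omitted — only the evaluation is shown:

```python
import numpy as np

def mean_grouped_r2(groups, y_true, y_pred):
    """Mean coefficient of determination R^2, computed per group and
    averaged, as used to score leave-one-group-out predictions."""
    groups, y_true, y_pred = map(np.asarray, (groups, y_true, y_pred))
    scores = []
    for g in np.unique(groups):
        t = y_true[groups == g].astype(float)
        p = y_pred[groups == g].astype(float)
        ss_res = np.sum((t - p) ** 2)                 # residual sum of squares
        ss_tot = np.sum((t - t.mean()) ** 2)          # total sum of squares
        scores.append(1.0 - ss_res / ss_tot)
    return float(np.mean(scores))
```

Per-group scoring prevents a model that fits some craters well from masking poor generalization on others.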

24 pages, 3773 KB  
Article
An Integrated Tunable-Focus Light Field Imaging System for 3D Seed Phenotyping: From Co-Optimized Optical Design to Computational Reconstruction
by Jingrui Yang, Qinglei Zhao, Shuai Liu, Meihua Xia, Jing Guo, Yinghong Yu, Chao Li, Xiao Tang, Shuxin Wang, Qinglong Hu, Fengwei Guan, Qiang Liu, Mingdong Zhu and Qi Song
Photonics 2026, 13(4), 385; https://doi.org/10.3390/photonics13040385 - 17 Apr 2026
Viewed by 194
Abstract
Three-dimensional seed phenotyping requires imaging systems capable of achieving micron-level resolution across a centimeter-level field of view (FOV), a goal constrained by the resolution–FOV trade-off in conventional light field architectures. This paper presents a hardware–software co-optimized framework that integrates a reconfigurable optical system with computational imaging pipelines to address this limitation. At the hardware level, we develop a tunable-focus lens module that enables flexible adjustment of the effective focal length, combined with a custom-designed microlens array (MLA). A mathematical model is established to analyze the interdependencies among FOV, lateral resolution, depth of field (DOF), and system configuration, guiding the design of individual optical components. On the computational side, we propose a hybrid aberration correction strategy: first, a co-calibration of lens and MLA aberrations based on line-feature detection; second, a conditional generative adversarial network (cGAN) with attention-guided residual learning to enhance sub-aperture images, achieving a PSNR of 34.63 dB and an SSIM of 0.9570 on seed datasets. Experimentally, the system achieves a resolution of 6.2 lp/mm at MTF50 over a 2–3 cm FOV, representing a 307% improvement over the initial configuration (1.52 lp/mm). The reconstruction pipeline combines epipolar plane image (EPI) analysis with multi-view consistency constraints to generate dense 3D point clouds at a density of approximately 1.5 × 10⁴ points/cm² while preserving spectral and textural features. Validation on bitter melon and rice seeds demonstrates accurate 3D reconstruction and extraction of morphological parameters across a large area. By integrating optical and computational design, this work establishes a reconfigurable imaging framework that overcomes the resolution–FOV limitations of conventional light field systems.
The proposed architecture is also applicable to robotic vision and biomedical imaging. Full article
(This article belongs to the Special Issue Optical Imaging and Measurements: 2nd Edition)

9 pages, 1265 KB  
Communication
Deep Learning-Assisted Design of All-Dielectric Micropillar Quantum Well Infrared Photodetectors
by Pengzhe Xia, Rui Xin, Tianxin Li and Wei Lu
Photonics 2026, 13(4), 381; https://doi.org/10.3390/photonics13040381 - 16 Apr 2026
Viewed by 287
Abstract
The integration of micro-nano optical structures has become an essential strategy for overcoming the performance bottlenecks of quantum well infrared photodetectors (QWIPs), specifically by addressing the inherent inability of planar devices to couple with normally incident light due to intersubband transition selection rules. A critical factor in this integration is the precise spectral overlap between an optical mode and the material’s excitation mode. Therefore, achieving precise spectral engineering is indispensable. However, conventional electromagnetic simulations act as forward solvers, calculating optical responses based on given geometric parameters. They cannot directly perform inverse design, which involves deriving optimal geometric parameters directly from a desired optical response. Consequently, structural optimization is severely constrained by time-consuming trial-and-error iterations, which often struggle to find the global optimum in a complex design space. To overcome these limitations, this paper presents a comprehensive theoretical and numerical study proposing a deep learning framework for QWIPs coupled with all-dielectric micropillar structures. By establishing a structure-absorption spectrum dataset via finite difference time domain (FDTD) simulations, we developed a dual-network setup. For the forward prediction, a multilayer perceptron (MLP) maps geometric parameters (side length a and period p) to the absorption spectrum, achieving a computational speedup of seven orders of magnitude over traditional numerical simulations. Concurrently, a convolutional neural network (CNN) is employed for the inverse design, realizing on-demand design of geometric parameters based on target spectra with high reconstruction accuracy. Furthermore, the selected all-dielectric micropillar structures are highly compatible with mainstream semiconductor fabrication processes. 
This research provides an efficient, automated toolkit for the development of high-performance infrared photodetectors. Full article

24 pages, 1651 KB  
Article
FALB: A Frequency-Aware Lightweight Bottleneck with Learnable Wavelet Fusion and Contextual Attention for Enhanced Ship Classification in Remote Sensing
by Liang Huang, Yiping Song, Qiao Sun, He Yang, Lin Chen and Xianfeng Zhang
Remote Sens. 2026, 18(8), 1186; https://doi.org/10.3390/rs18081186 - 15 Apr 2026
Viewed by 316
Abstract
Ship classification in optical remote sensing requires balancing discriminative representation and model efficiency. Standard convolutional neural network (CNN) bottlenecks rely on local spatial kernels and may emphasize high-frequency texture cues, while stronger backbones increase parameter cost. We propose a frequency-aware lightweight bottleneck (FALB) that couples enhanced wavelet convolution (WTsConv) and contextual anchor attention (CAA) in a cascaded design. WTsConv adopts Sym4 wavelets and a learnable symmetric fusion weight between spatial and wavelet-reconstructed features to improve frequency-aware feature mixing. CAA is then applied to the refined features for contextual aggregation. Integrated into ResNet-50 bottlenecks, FALB is evaluated on FGSCM-52 and achieves 97.88% top-1 accuracy with 17.78 M parameters, compared with 96.92% and 25.56 M for the ResNet-50 baseline, surpassing ResNet-50 by 0.96% and outperforming the compared general-purpose baselines while reducing parameters by 30.4%. Under this experimental setting, FALB improves the observed accuracy–parameter trade-off for remote sensing ship classification. Full article
(This article belongs to the Special Issue Ship Imaging, Detection and Recognition for High-Resolution SAR)
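The learnable fusion at the heart of WTsConv mixes spatial features with a wavelet-reconstructed branch. The sketch below is a simplified, dependency-free stand-in: it uses a single-level Haar low-pass round trip in place of the paper's Sym4 wavelets, a fixed scalar weight in place of the learnable one, and no convolutions on the subbands.

```python
import numpy as np

def wavelet_fuse(feat, w=0.5):
    """out = w * spatial + (1 - w) * low-pass wavelet reconstruction.
    Only the Haar LL subband is kept; detail subbands are zeroed."""
    a = feat[0::2, 0::2]; b = feat[0::2, 1::2]
    c = feat[1::2, 0::2]; d = feat[1::2, 1::2]
    ll = (a + b + c + d) / 4.0                      # Haar low-frequency subband
    rec = np.repeat(np.repeat(ll, 2, axis=0), 2, axis=1)
    return w * np.asarray(feat, float) + (1.0 - w) * rec
```

Making `w` a trainable parameter (and processing the subbands before synthesis) turns this into the frequency-aware feature mixing the abstract describes.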

24 pages, 2013 KB  
Article
Capacity-Enhanced Li-Fi Transmission Using Autoencoder-Based Latent Representation: Performance Analysis Under Practical Optical Links
by Serin Kim, Yong-Yuk Won and Jiwon Park
Photonics 2026, 13(4), 356; https://doi.org/10.3390/photonics13040356 - 8 Apr 2026
Viewed by 379
Abstract
Visible light communication (VLC)-based Li-Fi systems suffer from limitations in transmission capacity expansion due to the restricted modulation bandwidth of LEDs. In this study, a latent representation-based NRZ-OOK Li-Fi transmission framework that exploits the statistical feature distribution of the latent space is proposed to improve transmission efficiency without expanding the physical bandwidth. An autoencoder is employed to transform input images into low-dimensional latent vectors, which are then quantized and modulated for transmission. At the receiver, hard decision and inverse quantization are performed, and the image is reconstructed through a trained decoder by leveraging the distribution characteristics of the latent representation. The effective transmission capacity gain Gcap is defined to quantify the amount of representable information relative to the original data under the same physical link resources according to the latent dimension, achieving up to a 49-fold data representation efficiency. The experimental results over practical optical links (0.5–1.5 m) showed that, in short-range conditions, larger latent dimensions maintained higher reconstruction PSNR, whereas under channel degradation conditions, smaller latent dimensions exhibited higher robustness, demonstrating a performance inversion phenomenon. Furthermore, it was confirmed that the dominant factor governing reconstruction performance shifts from the representational capability of the data to error accumulation characteristics depending on the channel condition. These results suggest that the latent representation-based transmission framework is an effective Li-Fi strategy that can simultaneously consider transmission efficiency and channel robustness through information representation optimization in bandwidth-limited environments. Full article
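The quantization and inverse-quantization pair wrapped around the optical link can be sketched as uniform scalar quantization of the latent vector. The bit depth and latent range below are illustrative assumptions — the abstract describes quantizing latents for NRZ-OOK transmission but does not fix these values.

```python
import numpy as np

def quantize(z, bits=8, lo=-1.0, hi=1.0):
    """Uniformly quantize latent values to integer levels for OOK framing."""
    levels = 2**bits - 1
    return np.round((np.clip(z, lo, hi) - lo) / (hi - lo) * levels).astype(int)

def dequantize(q, bits=8, lo=-1.0, hi=1.0):
    """Map integer levels back to the latent range before the decoder."""
    return np.asarray(q, float) / (2**bits - 1) * (hi - lo) + lo
```

The round-trip error is bounded by half a quantization step, (hi − lo) / (2·(2^bits − 1)); the trained decoder then reconstructs the image from the perturbed latent.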

11 pages, 1503 KB  
Article
Semiconductor Optoelectronic Polarization Imaging Approach for Enhanced Daytime Space Target Detection
by Guanyu Wen, Shuang Wang, Yukun Zeng, Shuzhuo Miao and Mingliang Zhang
Photonics 2026, 13(4), 355; https://doi.org/10.3390/photonics13040355 - 8 Apr 2026
Viewed by 294
Abstract
Daytime detection of space targets is challenging due to the strong skylight background and the limited resolution of conventional polarization imaging systems. In this work, we present a semiconductor-based polarization detection method that integrates a CMOS polarization imaging sensor with a Schmidt–Cassegrain telescope. To compensate for the spatial resolution loss inherent in division-of-focal-plane semiconductor polarization detectors, a bicubic interpolation algorithm is applied to reconstruct the degree and angle of polarization images. Furthermore, a spectral filtering strategy is introduced to suppress skylight-induced stray radiation, improving image contrast and reducing the risk of detector saturation. The developed system combines semiconductor optoelectronic detection, optical filtering, and computational reconstruction into a compact experimental platform. Validation experiments on Polaris and low-Earth-orbit space targets under daytime conditions demonstrate that the proposed approach achieves clearer and sharper polarization images compared with traditional intensity-based methods. Objective evaluation metrics, including gradient, contrast, brightness, and spatial frequency, confirm significant improvements in image quality. These results highlight the potential of semiconductor optoelectronic devices for polarization-based imaging and provide an effective framework for enhancing daytime space target detection. Full article
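The degree-and-angle-of-polarization reconstruction described above can be sketched from the four channels a division-of-focal-plane sensor provides (0°, 45°, 90°, 135°). This is a generic textbook computation, not the paper's code: it assumes ideal analyzers and that the four channels have already been brought to a common grid (in the paper this is where the bicubic interpolation step would sit).

```python
import numpy as np

def stokes_from_dofp(i0, i45, i90, i135):
    """Linear Stokes parameters from the four DoFP polarizer channels.

    Assumes the channels are already co-registered at full resolution
    (e.g. after bicubic upsampling of each sub-sampled channel).
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def dolp_aolp(s0, s1, s2, eps=1e-12):
    """Degree (DoLP) and angle (AoLP, radians) of linear polarization."""
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, eps)
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp
```

For fully polarized light at orientation θ, this recovers DoLP = 1 and AoLP = θ; the skylight background, being weakly and differently polarized, separates out in these maps, which is what the contrast improvement relies on.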

16 pages, 1461 KB  
Article
Infrared Target Reconstruction Under Detector Multiplexing Using Polarization Encoding and Stokes Vector Decoding
by Menghan Bai, Zibo Yu, Guanyu Mu, Zhenyuan Guo and Chunyu Liu
Sensors 2026, 26(8), 2286; https://doi.org/10.3390/s26082286 - 8 Apr 2026
Viewed by 250
Abstract
Wide-field infrared imaging systems are often constrained by detector size, cooling requirements, and payload limitations, leading to the need for multi-FOV detector sharing. However, conventional geometric multiplexing introduces severe spatial aliasing, which significantly degrades target localization performance. This paper proposes a polarization-encoded field-of-view multiplexing method for recovering spatial information from aliased detector measurements. The imaging plane is divided into multiple FOV regions, each assigned a distinct polarization state. After optical folding, the modulated sub-images are superimposed onto a common detector region. Six-channel polarization measurements are used to reconstruct pixel-wise Stokes vectors, and the spatial origin of each pixel is identified through polarization-domain similarity matching and target-level voting. MATLAB-based simulations were conducted using a nine-region multiplexing configuration. The proposed method achieves 97.3% pixel-level classification accuracy under ideal conditions and maintains over 95% accuracy at a noise level of σ = 0.02. The normalized Stokes reconstruction error is below 0.02, and stable performance is observed under polarization modulation deviations within ±10°. By introducing polarization as an additional encoding dimension, the proposed framework enables efficient separation of multiplexed spatial information without increasing detector resources, demonstrating its potential for compact wide-field infrared sensing applications. Full article
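The pixel-wise Stokes reconstruction and polarization-domain matching described above can be sketched as a least-squares inversion followed by nearest-signature classification. This is an illustrative model, not the authors' code: it assumes ideal linear analyzers (the paper's six channels may additionally include circular components contributing to S3), and the hypothetical `classify_fov` helper stands in for the similarity-matching step, without the target-level voting.

```python
import numpy as np

def analyzer_matrix(angles_deg):
    """Rows of the ideal linear-analyzer measurement model:
    I = 0.5 * (S0 + S1*cos(2θ) + S2*sin(2θ))."""
    t = np.deg2rad(np.asarray(angles_deg, dtype=float))
    return 0.5 * np.stack([np.ones_like(t), np.cos(2 * t), np.sin(2 * t)],
                          axis=1)

def reconstruct_stokes(intensities, angles_deg):
    """Least-squares Stokes reconstruction; intensities has shape
    (n_channels, ...) and the result has shape (3, ...)."""
    A = analyzer_matrix(angles_deg)
    flat = intensities.reshape(len(angles_deg), -1)
    S, *_ = np.linalg.lstsq(A, flat, rcond=None)
    return S.reshape((3,) + intensities.shape[1:])

def classify_fov(stokes_px, references):
    """Assign a pixel's Stokes vector (3,) to the FOV whose reference
    signature (n_fov, 3) is nearest in normalized-Stokes distance."""
    s = stokes_px / max(float(stokes_px[0]), 1e-12)
    r = references / np.maximum(references[:, :1], 1e-12)
    return int(np.argmin(np.linalg.norm(r[:, 1:] - s[1:], axis=1)))
```

With six measurement angles the 3-parameter least-squares problem is overdetermined, which is what gives the reported robustness to noise and to modulation-angle deviations.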
(This article belongs to the Section Optical Sensors)

34 pages, 5761 KB  
Article
Wigner Quasiprobability of Coherent Phase States
by Alfred Wünsche
Physics 2026, 8(2), 37; https://doi.org/10.3390/physics8020037 - 8 Apr 2026
Viewed by 235
Abstract
The Wigner quasiprobability, along with some of its essential properties, is introduced and discussed in two versions, first covering real canonical variables such as W(q,p) and second a pair of complex conjugate coordinates such as W(α,α*). The reconstruction of the density operator ϱ of states is also given. Building upon the Susskind–Glogower concept of quantum phase operators, further aspects of phase operator algebras in the quantum optics of a harmonic oscillator are discussed in relation to the realization of the su(1,1) Lie algebra. Coherent phase states |ε⟩ are introduced in analogy to the common coherent states |α⟩ in two ways, as both eigenstates of certain operators and as states generated from a ground state |0⟩ by operators of the Lie group SU(1,1). The limiting transition to the non-normalizable Fritz London phase states |e^{iφ}⟩ on the unit circle and an (over)-completeness relation for the coherent phase states are derived. The Wigner quasiprobability W(q,p) for the coherent phase states is calculated and graphically represented. From the Wigner quasiprobability, a phase distribution W(φ) is calculated by integrating over the radius, and its uncertainty is defined and presented. The Hilbert–Schmidt distance, the area in which most of our joint work with Viktor Dodonov was carried out, is discussed as a measure of the non-classicality of states. Full article
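For reference, the standard textbook definition of the Wigner quasiprobability for a density operator ϱ, and the usual Susskind–Glogower-type expansion of a coherent phase state, can be written as follows. Normalization conventions (factors of 2π and ℏ) vary between authors and may differ from those used in the paper.

```latex
W(q,p) \;=\; \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty}
   \Bigl\langle q+\tfrac{y}{2}\Bigr|\,\hat{\varrho}\,\Bigl|q-\tfrac{y}{2}\Bigr\rangle\,
   e^{-\mathrm{i}py/\hbar}\,\mathrm{d}y ,
\qquad
|\varepsilon\rangle \;=\; \sqrt{1-|\varepsilon|^{2}}\,
   \sum_{n=0}^{\infty}\varepsilon^{n}\,|n\rangle ,
\quad |\varepsilon|<1 .
```

In the limit |ε| → 1 the normalization factor vanishes, which is precisely the transition to the non-normalizable phase states |e^{iφ}⟩ on the unit circle mentioned in the abstract.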
