Search Results (517)

Search Parameters:
Keywords = noise photons

15 pages, 4409 KiB  
Article
Performance of Dual-Layer Flat-Panel Detectors
by Dong Sik Kim and Dayeon Lee
Diagnostics 2025, 15(15), 1889; https://doi.org/10.3390/diagnostics15151889 - 28 Jul 2025
Viewed by 246
Abstract
Background/Objectives: In digital radiography imaging, dual-layer flat-panel detectors (DFDs), in which two flat-panel detector layers are stacked with a minimal distance between the layers and appropriate alignment, are commonly used for material decomposition in dual-energy applications with a single x-ray exposure. DFDs also enable more efficient use of incident photons, yielding x-ray images with improved noise power spectrum (NPS) and detective quantum efficiency (DQE) performance in single-energy applications. Purpose: Although the development of DFD systems for material decomposition applications is actively underway, there is a lack of research on whether single-energy applications of DFDs can achieve better performance than the single-layer case. In this paper, we experimentally characterize DFD performance in terms of the modulation transfer function (MTF), NPS, and DQE and discuss the results. Methods: Using DFD prototypes, we experimentally measure the MTF, NPS, and DQE of the convex combination of the images acquired from the upper and lower detector layers of the DFD. To optimize DFD performance, a two-step image registration is performed, in which subpixel registration based on the maximum amplitude response under the Fourier shift theorem and an affine transformation using cubic interpolation are adopted. The DFD performance is analyzed and discussed through extensive experiments for various scintillator thicknesses, x-ray beam conditions, and incident doses. Results: Under RQA 9 beam conditions at a dose of 2.7 μGy, the DFD with upper and lower scintillator thicknesses of 0.5 mm achieved a zero-frequency DQE of 75%, compared to 56% for a single-layer detector. This implies that the DFD can provide the same signal-to-noise ratio as a single-layer detector using about 75% of the incident dose. Conclusions: In single-energy radiography imaging, the DFD can provide better NPS and DQE performance than a single-layer detector, especially at relatively high x-ray energies, which enables low-dose imaging. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
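
As a quick check of the dose arithmetic in this abstract, a minimal sketch: the DQE values are taken from the abstract, while the convex-combination weight w is a hypothetical illustration, not the authors' chosen weighting.

```python
import numpy as np

# For equal SNR, the required dose scales inversely with DQE(0): SNR^2 ∝ DQE * dose.
dqe_single = 0.56   # zero-frequency DQE of a single-layer detector (from the abstract)
dqe_dfd = 0.75      # zero-frequency DQE of the dual-layer detector (from the abstract)

dose_fraction = dqe_single / dqe_dfd
print(f"DFD needs ~{dose_fraction:.0%} of the single-layer dose for equal SNR")  # ~75%

# Convex combination of the registered upper- and lower-layer images
# (w is a hypothetical weight, not the value used in the paper).
def combine_layers(img_upper, img_lower, w=0.6):
    """Return the convex combination w*upper + (1-w)*lower of the two layer images."""
    return w * img_upper + (1.0 - w) * img_lower
```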

17 pages, 1494 KiB  
Article
All-Optical Encryption and Decryption at 120 Gb/s Using Carrier Reservoir Semiconductor Optical Amplifier-Based Mach–Zehnder Interferometers
by Amer Kotb, Kyriakos E. Zoiros and Wei Chen
Micromachines 2025, 16(7), 834; https://doi.org/10.3390/mi16070834 - 21 Jul 2025
Viewed by 512
Abstract
Encryption and decryption are essential components in signal processing and optical communication systems, providing data confidentiality, integrity, and secure high-speed transmission. We present a novel design and simulation of an all-optical encryption and decryption system operating at 120 Gb/s using carrier reservoir semiconductor optical amplifiers (CR-SOAs) embedded in Mach–Zehnder interferometers (MZIs). The architecture relies on two consecutive exclusive-OR (XOR) logic gates, implemented through phase-sensitive interference in the CR-SOA-MZI structure. The first XOR gate performs encryption by combining the input data signal with a secure optical key, while the second gate decrypts the encoded signal using the same key. The fast gain recovery and efficient carrier dynamics of CR-SOAs enable high-speed, low-latency operation suitable for modern photonic networks. The system is modeled and simulated using Wolfram Mathematica, and the output quality factors of the encrypted and decrypted signals are found to be 28.57 and 14.48, respectively, confirming excellent signal integrity and logic performance. The influence of key operating parameters, including the impact of amplified spontaneous emission noise, on system behavior is also examined. This work highlights the potential of CR-SOA-MZI-based designs for scalable, ultrafast, and energy-efficient all-optical security applications. Full article
(This article belongs to the Special Issue Integrated Photonics and Optoelectronics, 2nd Edition)
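
The cascaded-XOR logic described above can be illustrated at the bit level. The sketch below is a plain software XOR, not a model of the CR-SOA-MZI dynamics; it only shows why applying the same key twice restores the data.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=16)          # plaintext bit stream
key = rng.integers(0, 2, size=16)           # shared optical key

encrypted = np.bitwise_xor(data, key)       # first XOR gate: encryption
decrypted = np.bitwise_xor(encrypted, key)  # second XOR gate: decryption with the same key

assert np.array_equal(decrypted, data)      # XOR with the same key twice restores the data
print("data     :", data)
print("encrypted:", encrypted)
print("decrypted:", decrypted)
```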

13 pages, 617 KiB  
Project Report
European Partnership in Metrology Project: Photonic and Quantum Sensors for Practical Integrated Primary Thermometry (PhoQuS-T)
by Olga Kozlova, Rémy Braive, Tristan Briant, Stéphan Briaudeau, Paulina Castro Rodríguez, Guochun Du, Tufan Erdoğan, René Eisermann, Emile Ferreux, Dario Imbraguglio, Judith Elena Jordan, Stephan Krenek, Graham Machin, Igor P. Marko, Théo Martel, Maria Jose Martin, Richard A. Norte, Laurent Pitre, Sara Pourjamal, Marco Queisser, Israel Rebolledo-Salgado, Iago Sanchez, Daniel Schmid, Cliona Shakespeare, Fernando Sparasci, Peter G. Steeneken, Tatiana Steshchenko, Stephen J. Sweeney, Shahin Tabandeh, Georg Winzer, Anoma Yamsiri, Alethea Vanessa Zamora Gómez, Martin Zelan and Lars Zimmermann
Metrology 2025, 5(3), 44; https://doi.org/10.3390/metrology5030044 - 19 Jul 2025
Viewed by 262
Abstract
Current temperature sensors require regular recalibration to maintain reliable temperature measurement. Photonic/quantum-based approaches have the potential to radically change the practice of thermometry through provision of in situ traceability, potentially through practical primary thermometry, without the need for sensor recalibration. This article gives an overview of the European Partnership in Metrology (EPM) project Photonic and quantum sensors for practical integrated primary thermometry (PhoQuS-T), which aims to develop sensors based on photonic ring resonators and optomechanical resonators for robust, small-scale, integrated, and wide-range temperature measurement. The different phases of the project will be presented. The integrated optical practical primary thermometer operating from 4 K to 500 K will be realized by combining different sensing techniques: the optomechanical sensor will provide quantum thermometry below 10 K as a quantum reference for optical noise thermometry (operating in the range 4 K to 300 K), whilst the high-resolution photonic (ring resonator) sensor will extend the temperature range from 80 K to 500 K. The important issue of robust fibre-to-chip coupling will be addressed, and application case studies of the developed sensors in ion-trap monitoring and quantum-based pressure standards will be discussed. Full article

14 pages, 465 KiB  
Article
Quantum W-Type Entanglement in Photonic Systems with Environmental Decoherence
by Kamal Berrada and Smail Bougouffa
Symmetry 2025, 17(7), 1147; https://doi.org/10.3390/sym17071147 - 18 Jul 2025
Viewed by 296
Abstract
Preserving quantum entanglement in multipartite systems under environmental decoherence is a critical challenge for quantum information processing. In this work, we investigate the dynamics of W-type entanglement in a system of three photons, focusing on the effects of Markovian and non-Markovian decoherence regimes. Using the lower bound of concurrence (LBC) as a measure of entanglement, we analyze the time evolution of the LBC for photons initially prepared in a W state under the influence of dephasing noise. We explore the dependence of entanglement dynamics on system parameters such as the dephasing angle and refractive-index difference, alongside environmental spectral properties. Our results, obtained within experimentally feasible parameter ranges, reveal how the enhancement of entanglement preservation can be achieved in Markovian and non-Markovian regimes according to the system parameters. These findings provide valuable insights into the robustness of W-state entanglement in tripartite photonic systems and offer practical guidance for optimizing quantum protocols in noisy environments. Full article
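
For readers unfamiliar with the setting, a minimal sketch of a three-photon W state under local dephasing. It uses the pairwise Wootters concurrence of a reduced two-qubit state as a simple proxy, not the lower bound of concurrence (LBC) used in the paper, and the dephasing strength p is an assumed parameter.

```python
import numpy as np

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

I2 = np.eye(2, dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
Y = np.array([[0, -1j], [1j, 0]])

# Three-qubit W state (|001> + |010> + |100>)/sqrt(3).
w = np.zeros(8, dtype=complex)
w[[1, 2, 4]] = 1 / np.sqrt(3)
rho_w = np.outer(w, w.conj())

def dephase(rho, p):
    """Apply an independent phase-flip (dephasing) channel with probability p to each qubit."""
    for q in range(3):
        ops = [I2, I2, I2]
        ops[q] = Z
        Zq = kron(*ops)
        rho = (1 - p) * rho + p * Zq @ rho @ Zq
    return rho

def reduced_12(rho):
    """Trace out the third qubit of a 3-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2, 2, 2)
    return np.einsum('abkcdk->abcd', r).reshape(4, 4)

def concurrence_2q(rho2):
    """Wootters concurrence of a two-qubit density matrix."""
    yy = np.kron(Y, Y)
    rho_tilde = yy @ rho2.conj() @ yy
    eigs = np.linalg.eigvals(rho2 @ rho_tilde)
    lam = np.sort(np.sqrt(np.clip(eigs.real, 0, None)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

for p in (0.0, 0.05, 0.1, 0.2):
    c = concurrence_2q(reduced_12(dephase(rho_w, p)))
    print(f"p = {p:.2f}  pairwise concurrence = {c:.3f}")  # 2/3 at p = 0, decaying with p
```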

14 pages, 1059 KiB  
Article
Radiomics Signature of Aging Myocardium in Cardiac Photon-Counting Computed Tomography
by Alexander Hertel, Mustafa Kuru, Johann S. Rink, Florian Haag, Abhinay Vellala, Theano Papavassiliu, Matthias F. Froelich, Stefan O. Schoenberg and Isabelle Ayx
Diagnostics 2025, 15(14), 1796; https://doi.org/10.3390/diagnostics15141796 - 16 Jul 2025
Viewed by 296
Abstract
Background: Cardiovascular diseases are the leading cause of global mortality, with 80% of coronary heart disease occurring in patients over 65. Understanding aging cardiovascular structures is crucial. Photon-counting computed tomography (PCCT) offers improved spatial and temporal resolution and a better signal-to-noise ratio, enabling texture analysis in clinical routines. Detecting structural changes in the aging left-ventricular myocardium may help predict cardiovascular risk. Methods: In this retrospective, single-center, IRB-approved study, 90 patients underwent ECG-gated contrast-enhanced cardiac CT using dual-source PCCT (NAEOTOM Alpha, Siemens). Patients were divided into two age groups (50–60 years and 70–80 years). The left ventricular myocardium was segmented semi-automatically, and radiomics features were extracted using pyradiomics to compare myocardial texture features. Epicardial adipose tissue (EAT) density, thickness, and other clinical parameters were recorded. Statistical analysis was conducted with R and a Python-based random forest classifier. Results: The study assessed 90 patients (50–60 years, n = 54, and 70–80 years, n = 36) with a mean age of 63.6 years. No significant differences were found in mean Agatston score, gender distribution, or conditions such as hypertension, diabetes, hypercholesterolemia, or nicotine abuse. EAT measurements showed no significant differences. The random forest classifier achieved a training accuracy of 0.95 and a test accuracy of 0.74 for age group differentiation. Wavelet-HLH_glszm_GrayLevelNonUniformity was a key differentiator. Conclusions: Radiomics texture features of the left ventricular myocardium outperformed conventional parameters such as EAT density and thickness in differentiating age groups, offering a potential imaging biomarker for myocardial aging. Radiomics analysis of the left ventricular myocardium offers a unique opportunity to visualize changes in myocardial texture during aging and could serve as a cardiac risk predictor. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
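
A minimal sketch of the classification step described above, using scikit-learn's RandomForestClassifier on a hypothetical radiomics feature matrix. The real study used pyradiomics features from the segmented myocardium; the data below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical radiomics feature matrix: one row per patient, one column per
# texture feature (e.g. wavelet-HLH GLSZM GrayLevelNonUniformity).
rng = np.random.default_rng(42)
X = rng.normal(size=(90, 120))            # 90 patients, 120 texture features (synthetic)
y = np.r_[np.zeros(54), np.ones(36)]      # 0 = 50-60 years, 1 = 70-80 years

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

print("train accuracy:", clf.score(X_train, y_train))
print("test accuracy :", accuracy_score(y_test, clf.predict(X_test)))

# Feature importances rank the texture features by their contribution to the split decisions.
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("top feature indices:", top)
```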

16 pages, 3084 KiB  
Article
Generating Large Time–Bandwidth Product RF-Chirped Waveforms Using Vernier Dual-Optical Frequency Combs
by Mohammed S. Alshaykh
Photonics 2025, 12(7), 700; https://doi.org/10.3390/photonics12070700 - 11 Jul 2025
Viewed by 259
Abstract
Chirped radio-frequency signals are essential waveforms in radar systems. To enhance resolution and improve the signal-to-noise ratio through higher energy transmission, chirps with high time–bandwidth products are highly desirable. Photonic technologies, with their ability to handle broad electrical bandwidths, have been widely employed in the generation, filtering, processing, and detection of broadband electrical waveforms. In this work, we propose a photonics-based large-TBWP RF chirp generator utilizing dual optical frequency combs with a small difference in the repetition rate. By employing dispersion modules for frequency-to-time mapping, we convert the spectral interferometric patterns into a temporal RF sinusoidal carrier signal whose frequency is swept through the optical shot-to-shot delay. We derive analytical expressions to quantify the system’s performance under various design parameters, including the comb repetition rate and its offset, the second-order dispersion, the transform-limited optical pulse width, and the photodetector’s bandwidth limitations. We benchmark the expected system performance in terms of RF bandwidth, chirp duration, chirp rate, frequency step size, and TBWP. Using realistic dual-comb source parameters, we demonstrate the feasibility of generating RF chirps with a duration of 284.44 μs and a bandwidth of 234.05 GHz, corresponding to a TBWP of 3.3×10⁷. Full article

20 pages, 5133 KiB  
Review
Photonics-Enabled High-Sensitivity and Wide-Bandwidth Microwave Phase Noise Analyzers
by Jingzhan Shi, Baojin Tu and Yiping Wang
Photonics 2025, 12(7), 691; https://doi.org/10.3390/photonics12070691 - 8 Jul 2025
Viewed by 309
Abstract
Phase noise constitutes a pivotal performance parameter in microwave systems, and the evolution of microwave signal sources presents new demands on phase noise analyzers (PNAs) regarding sensitivity and bandwidth. Traditional electronics-based PNAs encounter significant limitations in meeting these advanced requirements. This paper provides an overview of recent progress in photonics-based microwave PNA research. Microwave photonic (MWP) PNAs are categorized into two main types: phase-detection-based and frequency-discrimination-based architectures. MWP phase-detection-based PNAs utilize ultra-short-pulse lasers or optoelectronic oscillators as reference sources to achieve superior sensitivity. On the other hand, MWP frequency-discrimination-based PNAs are further subdivided into photonic-substitution-type PNAs and MWP quadrature-frequency-discrimination-based PNAs. These systems leverage innovative MWP technologies to enhance overall performance, offering broader bandwidth and higher sensitivity compared to conventional approaches. Finally, the paper addresses the current challenges faced in phase noise measurement technologies and suggests potential future research directions aimed at improving measurement capabilities. Full article
(This article belongs to the Special Issue Recent Advancement in Microwave Photonics)
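
A generic illustration of the phase-detection idea behind such analyzers (not taken from the paper): recover the instantaneous phase of a noisy carrier and estimate the spectral density of its phase fluctuations. All signal parameters below are assumed.

```python
import numpy as np
from scipy.signal import hilbert, welch

fs = 1e6                 # sample rate in Hz (illustrative)
f0 = 10e3                # carrier under test in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)

rng = np.random.default_rng(1)
phase_noise = np.cumsum(rng.normal(scale=1e-4, size=t.size))  # random-walk phase perturbation
x = np.cos(2 * np.pi * f0 * t + phase_noise)

# Phase detection: recover the instantaneous phase and remove the carrier ramp.
phi = np.unwrap(np.angle(hilbert(x))) - 2 * np.pi * f0 * t

# One-sided PSD of the phase fluctuations; for small angles L(f) ≈ S_phi(f)/2 in dBc/Hz.
f, s_phi = welch(phi, fs=fs, nperseg=2**16)
l_f = 10 * np.log10(s_phi / 2 + 1e-300)
print("L(1 kHz) ≈ %.1f dBc/Hz" % l_f[np.argmin(np.abs(f - 1e3))])
```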

17 pages, 489 KiB  
Review
Experimental Advances in Phase Estimation with Photonic Quantum States
by Laura T. Knoll, Agustina G. Magnoni and Miguel A. Larotonda
Entropy 2025, 27(7), 712; https://doi.org/10.3390/e27070712 - 1 Jul 2025
Viewed by 702
Abstract
Photonic quantum metrology has emerged as a leading platform for quantum-enhanced precision measurements. By taking advantage of quantum resources such as entanglement, quantum metrology enables parameter estimation with sensitivities surpassing classical limits. In this review, we describe the basic tools and recent experimental progress in the determination of an optical phase with a precision that may exceed the shot-noise limit, enabled by the use of nonclassical states of light. We review the state of the art and discuss the challenges and trends in the field. Full article
(This article belongs to the Section Quantum Information)
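
The two precision limits referred to above follow the standard definitions for N probe photons; a short sketch, independent of any specific experiment in the review.

```python
import numpy as np

def shot_noise_limit(n_photons):
    """Phase uncertainty achievable with classical (coherent) light: 1/sqrt(N)."""
    return 1.0 / np.sqrt(n_photons)

def heisenberg_limit(n_photons):
    """Ultimate quantum scaling, reachable with entangled (e.g. NOON) states: 1/N."""
    return 1.0 / n_photons

for n in (100, 10_000, 1_000_000):
    print(f"N={n:>9}: SNL={shot_noise_limit(n):.2e}  HL={heisenberg_limit(n):.2e}")
```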

11 pages, 1751 KiB  
Article
Opportunistic Diagnostics of Dental Implants in Routine Clinical Photon-Counting CT Acquisitions
by Maurice Ruetters, Holger Gehrig, Christian Mertens, Sinan Sen, Ti-Sun Kim, Heinz-Peter Schlemmer, Christian H. Ziener, Stefan Schoenberg, Matthias Froelich, Marc Kachelrieß and Stefan Sawall
J. Imaging 2025, 11(7), 215; https://doi.org/10.3390/jimaging11070215 - 30 Jun 2025
Viewed by 341
Abstract
Two-dimensional imaging is still commonly used in dentistry, but does not provide the three-dimensional information often required for the accurate assessment of dental structures. Photon-counting computed tomography (PCCT), a new three-dimensional modality mainly used in general medicine, has shown promising potential for dental applications. With growing digitalization and cross-disciplinary integration, using PCCT data from other medical fields is becoming increasingly relevant. Conventional CT scans, such as those of the cervical spine, have so far lacked the resolution to reliably evaluate dental structures or implants. This study evaluates the diagnostic utility of PCCT for visualizing peri-implant structures in routine clinical photon-counting CT acquisitions and assesses the influence of metal artifact reduction (MAR) algorithms on image quality. Ten dental implants were retrospectively included in this IRB-approved study. Standard PCCT scans were reconstructed at multiple keV levels with and without MAR. Quantitative image analysis was performed with respect to contrast and image noise. Qualitative evaluation of peri-implant tissues, implant shoulder, and apex was performed independently by two experienced dental professionals using a five-point Likert scale. Inter-reader agreement was measured using intraclass correlation coefficients (ICCs). PCCT enabled high-resolution imaging of all peri-implant regions with excellent inter-reader agreement (ICC > 0.75 for all structures). Non-MAR reconstructions consistently outperformed MAR reconstructions across all evaluated regions. MAR led to reduced clarity, particularly in immediate peri-implant areas, without significant benefit from energy level adjustments. All imaging protocols were deemed diagnostically acceptable. This is the first in vivo study demonstrating the feasibility of opportunistic dental diagnostics using PCCT in a clinical setting. While MAR reduces peripheral artifacts, it adversely affects image clarity near implants. PCCT offers excellent image quality for peri-implant assessments and enables incidental detection of dental pathologies without additional radiation exposure. PCCT opens new possibilities for opportunistic, three-dimensional dental diagnostics during non-dental CT scans, potentially enabling earlier detection of clinically significant pathologies. Full article
(This article belongs to the Section Medical Imaging)

17 pages, 1613 KiB  
Article
Iterative Reconstruction with Dynamic ElasticNet Regularization for Nuclear Medicine Imaging
by Ryosuke Kasai and Hideki Otsuka
J. Imaging 2025, 11(7), 213; https://doi.org/10.3390/jimaging11070213 - 27 Jun 2025
Viewed by 283
Abstract
This study proposes a novel image reconstruction algorithm for nuclear medicine imaging based on the maximum likelihood expectation maximization (MLEM) framework with dynamic ElasticNet regularization. Whereas the conventional L1 and L2 regularization methods involve trade-offs between noise suppression and structural preservation, ElasticNet combines their strengths. Our method further introduces a dynamic weighting scheme that adaptively adjusts the balance between the L1 and L2 terms over iterations while ensuring nonnegativity when using a sufficiently small regularization parameter. We evaluated the proposed algorithm using numerical phantoms (Shepp–Logan and digitized Hoffman) under various noise conditions. Quantitative results based on the peak signal-to-noise ratio and multi-scale structural similarity index measure demonstrated that the proposed dynamic ElasticNet regularized MLEM consistently outperformed not only standard MLEM and L1/L2 regularized MLEM but also the fixed-weight ElasticNet variant. Clinical single-photon emission computed tomography brain image experiments further confirmed improved noise suppression and clearer depiction of fine structures. These findings suggest that our proposed method offers a robust and accurate solution for tomographic image reconstruction in nuclear medicine imaging. Full article
(This article belongs to the Section Medical Imaging)
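
The abstract does not give the update rule, so the following is only a hedged sketch of MLEM with a one-step-late ElasticNet penalty; the dynamic weight schedule and the synthetic data are illustrative assumptions, not the authors' method.

```python
import numpy as np

def mlem_elasticnet(y, A, n_iter=100, beta=1e-3):
    """MLEM with a one-step-late ElasticNet penalty and a dynamic L1/L2 weight.

    y    : measured counts, shape (m,)
    A    : nonnegative system matrix, shape (m, n)
    beta : small regularization parameter (keeps the denominator positive)
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    for k in range(n_iter):
        alpha = 1.0 - k / max(n_iter - 1, 1)   # illustrative schedule: L1-heavy early, L2-heavy late
        grad_pen = alpha * np.ones_like(x) + (1.0 - alpha) * x  # d/dx of alpha*|x| + (1-alpha)/2*x^2, x >= 0
        ratio = y / np.clip(A @ x, 1e-12, None)
        x = x * (A.T @ ratio) / np.clip(sens + beta * grad_pen, 1e-12, None)
    return x

# Tiny synthetic example (hypothetical 1-D "phantom" and random system matrix).
rng = np.random.default_rng(0)
A = rng.random((200, 64))
x_true = np.zeros(64)
x_true[20:30] = 5.0
y = rng.poisson(A @ x_true)
x_hat = mlem_elasticnet(y, A)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```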

13 pages, 3883 KiB  
Article
Optimizing Imaging Parameters for Assessment of Hepatocellular Carcinoma Using Photon-Counting Detector Computed Tomography—Impact of Reconstruction Kernel and Slice Thickness
by Anna Szelenyi, Philipp Stelzer, Christian Wassipaul, Jakob Kittinger, Andreas Strassl, Victor Schmidbauer, Martin Luther Watzenböck, Florian Lindenlaub, Michael Arnoldner, Michael Weber, Matthias Pinter, Ruxandra-Iulia Milos and Dietmar Tamandl
Tomography 2025, 11(7), 77; https://doi.org/10.3390/tomography11070077 - 27 Jun 2025
Viewed by 357
Abstract
Background: The use of photon-counting detector computed tomography (PCD-CT) has improved image quality in cardiac, pulmonary, and musculoskeletal imaging. Abdominal imaging research, especially about the use of PCD-CT in hepatocellular carcinoma (HCC), is sparse. Objectives: We aimed to compare the image quality of tumors, the liver parenchyma, and the vasculature in patients with HCC using PCD-CT reconstructions at different slice thicknesses and kernels to identify the most appropriate settings for the clinical routine. Methods: CT exams from twenty adult patients with HCC, performed with a clinically approved, first-generation PCD-CT scanner (Naeotom Alpha®, Siemens Healthineers), were retrospectively reviewed. For each patient, images were reconstructed with four kernels designed for abdominal imaging (Br40; Br44; Br48; Br56) and at three slice thicknesses (0.4 mm; 1 mm; 3 mm). The reconstruction with the Br40 kernel at 3 mm (Br40_3mm) was used as a clinical reference. Three readers independently assessed the image quality of different anatomical abdominal structures and hypervascular HCC lesions using a five-point Likert scale. In addition, image sharpness was assessed using line-density profiles. Results: Compared with the clinical reference, the Br44_1mm and Br48_1mm reconstructions were rated superior for the assessment of the hepatic vasculature (median difference +0.67 [+0.33 to +1.33], p < 0.001 and +1.00 [+0.67 to +1.67], p < 0.001). Reconstructions Br40_1mm (+0.33 [−0.67 to +1.00], p < 0.001) and Br44_3mm (+0.0 [0.0 to +1.00], p = 0.030) were scored superior for overall image quality. The noise increased continuously with sharper kernels and thinner slices relative to Br40_3mm (p < 0.001), leading to a decrease in the contrast-to-noise ratio. Although slope analysis showed a trend toward increased image sharpness with sharper kernels, this was not significantly different compared with the reference standard. Conclusion: The PCD-CT reconstruction Br40_1mm was the most suitable setting for overall image quality, while reconstructions with sharper kernels (Br44_1mm and Br48_1mm) can be considered for the assessment of the hepatic vasculature in patients with HCC. Full article
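
The contrast-to-noise ratio mentioned above follows its usual definition; a minimal sketch with hypothetical Hounsfield-unit samples (ROI values are assumed for illustration, not taken from the study).

```python
import numpy as np

def cnr(roi_lesion, roi_liver, roi_background):
    """Contrast-to-noise ratio: lesion-to-liver contrast divided by background noise."""
    contrast = abs(np.mean(roi_lesion) - np.mean(roi_liver))
    noise = np.std(roi_background)
    return contrast / noise

# Hypothetical HU samples drawn from three regions of interest.
rng = np.random.default_rng(3)
lesion = rng.normal(140, 15, 500)   # hypervascular HCC lesion, arterial phase (assumed)
liver = rng.normal(100, 15, 500)    # surrounding liver parenchyma (assumed)
fat = rng.normal(-100, 12, 500)     # homogeneous background ROI (assumed)
print(f"CNR ≈ {cnr(lesion, liver, fat):.1f}")
```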

15 pages, 3073 KiB  
Article
Multiple-Diffraction Subtractive Double Monochromator with High Resolution and Low Stray Light
by Yinxin Zhang, Zhenyu Wang, Kai Chen, Daochun Cai, Tao Chen and Huaidong Yang
Appl. Sci. 2025, 15(13), 7232; https://doi.org/10.3390/app15137232 - 27 Jun 2025
Viewed by 284
Abstract
Spectrometers play a crucial role in photonic applications, but their design involves trade-offs related to miniaturization, spectral fidelity, and their measurement dynamic range. We demonstrated a high-resolution, low-stray-light spectrometer with a compact size comprising two symmetric multiple-diffraction monochromators. We analyzed the spectral resolution and stray light and built a platform with two double-diffraction monochromators. Multiple diffractions on one grating increased the spectral resolution without volumetric expansion, and the subtractive double-monochromator configuration suppressed stray light effectively. The simulation and experimental results show that compared with single diffraction, repeated diffractions improved the resolution by 5–7 times. The spectral resolution of the home-built setup was 18.8 pm at 1480 nm. The subtractive double monochromator significantly weakened the stray light. The optical signal-to-noise ratio was increased from 34.76 dB for the single monochromator to 69.17 dB for the subtractive double monochromator. This spectrometer design is promising for broadband high-resolution spectral analyses. Full article
(This article belongs to the Special Issue Advanced Spectroscopy Technologies)

24 pages, 30364 KiB  
Article
Bayesian Denoising Algorithm for Low SNR Photon-Counting Lidar Data via Probabilistic Parameter Optimization Based on Signal and Noise Distribution
by Qi Liu, Jian Yang, Yue Ma, Wenbo Yu, Qijin Han, Zhibiao Zhou and Song Li
Remote Sens. 2025, 17(13), 2182; https://doi.org/10.3390/rs17132182 - 25 Jun 2025
Viewed by 330
Abstract
The Ice, Cloud, and land Elevation Satellite-2 has provided unprecedented global surface elevation measurements through photon-counting Lidar (light detection and ranging), yet its low signal-to-noise ratio (SNR) poses significant challenges for denoising algorithms. Existing methods, relying on fixed parameters, struggle to adapt to dynamic noise distribution in rugged mountain regions where signal and noise change rapidly. This study proposes an adaptive Bayesian denoising algorithm integrating minimum spanning tree (MST)-based slope estimation and probabilistic parameter optimization. First, a simulation framework based on ATL03 data generates point clouds with ground truth labels under varying SNRs, achieving correlation coefficients > 0.9 between simulated and measured distributions. The algorithm then extracts surface profiles via MST and coarse filtering, fits slopes with >0.9 correlation to reference data, and derives the probability distribution function (PDF) of neighborhood photon counts. Bayesian estimation dynamically selects optimal clustering parameters (search radius and threshold), achieving F-scores > 0.9 even at extremely low SNR (1 photon/10 MHz noise). Validation against three benchmark algorithms (OPTICS, quadtree, DRAGANN) on simulated and ATL03 datasets demonstrates superior performance in mountainous terrain, with precision and recall improvements of 10–20% under high noise conditions. This work provides a robust framework for adaptive parameter selection in low-SNR photon-counting Lidar applications. Full article
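
A hedged sketch of the Bayesian decision underlying such adaptive denoisers, assuming Poisson-distributed neighborhood photon counts for signal and noise; the rates and prior below are placeholders, not the paper's fitted PDFs.

```python
import numpy as np
from scipy.stats import poisson

# Expected number of neighbors inside the search radius for signal vs. noise photons
# (hypothetical values; in the paper these come from the derived PDFs).
mu_signal = 8.0
mu_noise = 1.5
p_signal = 0.2              # assumed prior fraction of signal photons

def posterior_signal(k):
    """P(signal | k neighboring photons) via Bayes' rule with Poisson likelihoods."""
    like_s = poisson.pmf(k, mu_signal) * p_signal
    like_n = poisson.pmf(k, mu_noise) * (1 - p_signal)
    return like_s / (like_s + like_n)

for k in range(0, 11, 2):
    print(f"k = {k:2d}  P(signal) = {posterior_signal(k):.3f}")
# Thresholding this posterior (e.g. at 0.5) yields an adaptive neighbor-count cut.
```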

17 pages, 2287 KiB  
Article
A Self-Adaptive K-SVD Denoising Algorithm for Fiber Bragg Grating Spectral Signals
by Hang Gao, Xiaojia Liu, Da Qiu, Jingyi Liu, Kai Qian, Zhipeng Sun, Song Liu, Shiqiang Chen, Tingting Zhang and Yang Long
Symmetry 2025, 17(7), 991; https://doi.org/10.3390/sym17070991 - 23 Jun 2025
Viewed by 274
Abstract
In fiber Bragg grating (FBG) sensing demodulation systems, high-precision peak detection is a core requirement for demodulation algorithms. However, practical spectral signals are often susceptible to environmental noise interference, which leads to significant degradation in the accuracy of traditional demodulation methods. This study proposes a self-adaptive K-SVD (SAK-SVD) denoising algorithm based on adaptive window parameter optimization, establishing a closed-loop iterative feedback mechanism through dual iterations between dictionary learning and parameter adjustment. This approach achieves a synergistic enhancement of noise suppression and signal fidelity. First, a dictionary learning framework based on K-SVD is constructed for initial denoising, and the peak feature region is extracted by differentiating the denoised signals. By constructing statistics on the number of sign changes, an adaptive adjustment model for the window size is established. This model dynamically tunes the window parameters in dictionary learning for iterative denoising, establishing a closed-loop architecture that integrates denoising evaluation with parameter optimization. The performance of SAK-SVD is evaluated through three experimental scenarios, demonstrating that SAK-SVD overcomes the rigid parameter limitations of traditional K-SVD in FBG spectral processing, enhances denoising performance, and thereby improves wavelength demodulation accuracy. For denoising undistorted waveforms, the optimal mean absolute error (MAE) decreases to 0.300 pm, representing a 25% reduction compared to the next-best method. For distorted waveforms, the optimal MAE drops to 3.9 pm, achieving a 63.38% reduction compared to the next-best method. This study provides both theoretical and technical support for high-precision fiber-optic sensing under complex working conditions. Crucially, the SAK-SVD framework establishes a universal, adaptive denoising paradigm for fiber Bragg grating (FBG) sensing. This paradigm has direct applicability to Raman spectroscopy, industrial ultrasound-based non-destructive testing, and biomedical signal enhancement (e.g., ECG artefact removal), thereby advancing high-precision measurement capabilities across photonics and engineering domains. Full article
(This article belongs to the Section Computer)
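
A hedged sketch of the patch-based dictionary denoising step that K-SVD methods build on, using scikit-learn's MiniBatchDictionaryLearning as a stand-in for K-SVD; the window (patch) size is the parameter the paper adapts, and the FBG-like test spectrum is synthetic.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def dictionary_denoise(signal, window=32, n_atoms=64, n_nonzero=3):
    """Denoise a 1-D spectrum by sparse-coding overlapping windows over a learned dictionary."""
    # Extract overlapping windows (stride 1) as training patches.
    patches = np.lib.stride_tricks.sliding_window_view(signal, window).copy()
    dico = MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=n_nonzero,
        random_state=0,
    )
    codes = dico.fit(patches).transform(patches)
    recon_patches = codes @ dico.components_
    # Average the overlapping reconstructions back into a single signal.
    out = np.zeros_like(signal)
    weight = np.zeros_like(signal)
    for i, p in enumerate(recon_patches):
        out[i:i + window] += p
        weight[i:i + window] += 1
    return out / weight

# Synthetic FBG-like reflection peak plus noise (hypothetical wavelengths in nm).
wl = np.linspace(1549.5, 1550.5, 400)
clean = np.exp(-((wl - 1550.0) / 0.05) ** 2)
noisy = clean + np.random.default_rng(7).normal(scale=0.05, size=wl.size)
denoised = dictionary_denoise(noisy)
print("RMSE noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((denoised - clean) ** 2)))
```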

10 pages, 674 KiB  
Article
Abundant Exact Traveling-Wave Solutions for Stochastic Graphene Sheets Model
by Wael W. Mohammed, Taher S. Hassan, Rabeb Sidaoui, Hijyah Alshammary and Mohamed S. Algolam
Axioms 2025, 14(6), 477; https://doi.org/10.3390/axioms14060477 - 19 Jun 2025
Viewed by 253
Abstract
Here, we consider the stochastic graphene sheets model (SGSM) forced by multiplicative noise in the Itô sense. We show that the exact solution of the SGSM may be obtained by solving some deterministic counterparts of the graphene sheets model and combining the result with a solution of stochastic ordinary differential equations. By applying the extended tanh function method, we obtain the soliton solutions for the deterministic counterparts of the graphene sheets model. Because graphene sheets are important in many fields, such as electronics, photonics, and energy storage, the solutions of the stochastic graphene sheets model are beneficial for understanding several fascinating scientific phenomena. Using the MATLAB program, we exhibit several 3D graphs that illustrate the impact of multiplicative noise on the exact solutions of the SGSM. By incorporating stochastic elements into the equations that govern the evolution of graphene sheets, researchers can gain insights into how these fluctuations impact the behavior of the material over time. Full article
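
As an illustration of combining a deterministic traveling wave with a multiplicative-noise factor: the sketch below uses a generic sech² profile and the usual Itô exponential factor; the paper's actual SGSM solutions obtained with the extended tanh method are different closed forms.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma = 0.3                       # multiplicative noise strength (assumed)
x = np.linspace(-10, 10, 400)
dt, n_steps = 0.01, 200

def traveling_wave(x, t, c=1.0):
    """Deterministic sech^2 traveling-wave profile (generic, illustrative only)."""
    return 1.0 / np.cosh(x - c * t) ** 2

# Simulate a Brownian path W(t); the Itô multiplicative factor exp(sigma*W - sigma^2*t/2)
# modulates the deterministic solution in this illustration.
w = 0.0
for _ in range(n_steps):
    w += np.sqrt(dt) * rng.normal()
t = n_steps * dt

u_stochastic = traveling_wave(x, t) * np.exp(sigma * w - 0.5 * sigma**2 * t)
print("peak of deterministic profile  :", traveling_wave(x, t).max())
print("peak of stochastic realization :", u_stochastic.max())
```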
