Search Results (87)

Search Parameters:
Keywords = total variation (TV) algorithm

19 pages, 1217 KB  
Article
Improving Endodontic Radiograph Interpretation with TV-CLAHE for Enhanced Root Canal Detection
by Barbara Obuchowicz, Joanna Zarzecka, Michał Strzelecki, Marzena Jakubowska, Rafał Obuchowicz, Adam Piórkowski, Elżbieta Zarzecka-Francica and Julia Lasek
J. Clin. Med. 2025, 14(15), 5554; https://doi.org/10.3390/jcm14155554 - 6 Aug 2025
Viewed by 375
Abstract
Objective: The accurate visualization of root canal systems on periapical radiographs is critical for successful endodontic treatment. This study aimed to evaluate and compare the effectiveness of several image enhancement algorithms—including a novel Total Variation–Contrast-Limited Adaptive Histogram Equalization (TV-CLAHE) technique—in improving the detectability of root canal configurations in mandibular incisors, using cone-beam computed tomography (CBCT) as the gold standard. A null hypothesis was tested, assuming that enhancement methods would not significantly improve root canal detection compared to original radiographs. Method: A retrospective analysis was conducted on 60 periapical radiographs of mandibular incisors, resulting in 420 images after applying seven enhancement techniques, including Histogram Equalization (HE), Contrast-Limited Adaptive Histogram Equalization (CLAHE), CLAHE optimized with the Pelican Optimization Algorithm (CLAHE-POA), Global CLAHE (G-CLAHE), the k-Caputo Fractional Differential Operator (KCFDO), and the proposed TV-CLAHE. Four experienced observers (two radiologists and two dentists) independently assessed root canal visibility. Subjective evaluation was performed using a custom scale inspired by a 5-point Likert scale, and detection accuracy was compared to the CBCT findings. Quantitative metrics, including Peak Signal-to-Noise Ratio (PSNR), Signal-to-Noise Ratio (SNR), image entropy, and the Structural Similarity Index Measure (SSIM), were calculated to objectively assess image quality. Results: Root canal detection accuracy improved across all enhancement methods, with the proposed TV-CLAHE algorithm achieving the highest performance (93–98% accuracy), closely approaching CBCT-level visualization. G-CLAHE also showed substantial improvement (up to 92%). Statistical analysis confirmed significant inter-method differences (p < 0.001). TV-CLAHE outperformed all other techniques in subjective quality ratings and yielded superior SNR and entropy values. Conclusions: Advanced image enhancement methods, particularly TV-CLAHE, significantly improve root canal visibility in 2D radiographs and offer a practical, low-cost alternative to CBCT in routine dental diagnostics. These findings support the integration of optimized contrast enhancement techniques into endodontic imaging workflows to reduce the risk of missed canals and improve treatment outcomes.
(This article belongs to the Section Dentistry, Oral Surgery and Oral Medicine)

17 pages, 2693 KB  
Article
Mitigating the Drawbacks of the L0 Norm and the Total Variation Norm
by Gengsheng L. Zeng
Axioms 2025, 14(8), 605; https://doi.org/10.3390/axioms14080605 - 4 Aug 2025
Viewed by 264
Abstract
In compressed sensing, L0 norm minimization is believed to be the best way to enforce a sparse solution. However, the L0 norm is difficult to implement in a gradient-based iterative image reconstruction algorithm, and total variation (TV) norm minimization is considered a proper substitute. This paper points out that the TV norm is not powerful enough to enforce a piecewise-constant image, and uses limited-angle tomography to illustrate the possibility of using the L0 norm to encourage a piecewise-constant image. However, one drawback of the L0 norm is that its derivative is zero almost everywhere, making a gradient-based algorithm useless. Our novel idea is to replace the zero value of the L0 norm derivative with a zero-mean random variable. Computer simulations show that the proposed L0 norm minimization outperforms TV minimization. The novelty of this paper is the introduction of some randomness into the gradient of the objective function when the gradient is zero. Quantitative evaluations indicate the improvements of the proposed method in terms of the structural similarity (SSIM) and the peak signal-to-noise ratio (PSNR).
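The randomized-gradient mechanism can be illustrated on a toy 1D denoising problem. This is a sketch of the idea only, not the paper's limited-angle tomography algorithm: the forward model here is the identity, and the zero-mean random variable simply replaces the (almost-everywhere-zero) derivative of the L0 gradient-sparsity term, mapped back through the adjoint of the finite-difference operator.

```python
import numpy as np

def l0_random_descent(y, lam=0.02, n_iter=300, step=0.3, seed=1):
    # Gradient descent on 0.5*||u - y||^2 + lam*||Du||_0, where D is the
    # finite-difference operator. The L0 term's derivative is zero almost
    # everywhere, so (following the paper's idea) a zero-mean random
    # variable is substituted for it.
    rng = np.random.default_rng(seed)
    u = y.copy()
    for _ in range(n_iter):
        r = lam * rng.standard_normal(u.size - 1)  # zero-mean surrogate gradient
        dt = np.zeros_like(u)
        dt[:-1] -= r          # apply D^T to the surrogate
        dt[1:] += r
        u -= step * ((u - y) + dt)
    return u

# Piecewise-constant ground truth with additive noise.
rng = np.random.default_rng(2)
x = np.repeat([0.0, 1.0, 0.3], 30)
y = x + 0.05 * rng.standard_normal(x.size)
u = l0_random_descent(y)
```

The random kicks keep the iteration moving through the flat regions of the L0 penalty where a deterministic gradient would stall; the data-fidelity term keeps the iterate anchored near the measurements.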

27 pages, 7457 KB  
Article
Three-Dimensional Imaging of High-Contrast Subsurface Anomalies: Composite Model-Constrained Dual-Parameter Full-Waveform Inversion for GPR
by Siyuan Ding, Deshan Feng, Xun Wang, Tianxiao Yu, Shuo Liu and Mengchen Yang
Appl. Sci. 2025, 15(15), 8401; https://doi.org/10.3390/app15158401 - 29 Jul 2025
Viewed by 259
Abstract
Civil engineering structures with damage, defects, or subsurface utilities create a high-contrast exploration environment. These anomalies of interest exhibit electromagnetic properties that differ from the surrounding medium, and ground-penetrating radar (GPR) has the potential to accurately locate and map their three-dimensional (3D) distributions. However, full-waveform inversion (FWI) of GPR data struggles to simultaneously reconstruct high-resolution 3D images of both the permittivity and conductivity models. Considering the magnitude and sensitivity disparities of the model parameters in GPR inversion, this study proposes a 3D dual-parameter FWI algorithm for GPR with a composite model-constraint strategy. It balances the gradient updates of the permittivity and conductivity models by applying total variation (TV) regularization and minimum support gradient (MSG) regularization to different parameters during inversion. Numerical experiments show that TV regularization can optimize permittivity reconstruction, while MSG regularization is more suitable for conductivity inversion. The TV+MSG composite model-constraint strategy improves the accuracy and stability of dual-parameter inversion, providing a robust solution for 3D imaging of subsurface anomalies with high-contrast features. These outcomes offer researchers theoretical insights and a valuable reference for investigating high-contrast environments.

29 pages, 1138 KB  
Article
Regularized Kaczmarz Solvers for Robust Inverse Laplace Transforms
by Marta González-Lázaro, Eduardo Viciana, Víctor Valdivieso, Ignacio Fernández and Francisco Manuel Arrabal-Campos
Mathematics 2025, 13(13), 2166; https://doi.org/10.3390/math13132166 - 2 Jul 2025
Viewed by 296
Abstract
Inverse Laplace transforms (ILTs) are fundamental to a wide range of scientific and engineering applications—from diffusion NMR spectroscopy to medical imaging—yet their numerical inversion remains severely ill-posed, particularly in the presence of noise or sparse data. The primary objective of this study is to develop robust and efficient numerical methods that improve the stability and accuracy of ILT reconstructions under challenging conditions. In this work, we introduce a novel family of Kaczmarz-based ILT solvers that embed advanced regularization directly into the iterative projection framework. We propose three algorithmic variants—Tikhonov–Kaczmarz, total variation (TV)–Kaczmarz, and Wasserstein–Kaczmarz—each incorporating a distinct penalty to stabilize solutions and mitigate noise amplification. The Wasserstein–Kaczmarz method, in particular, leverages optimal transport theory to impose geometric priors, yielding enhanced robustness for multi-modal or highly overlapping distributions. We benchmark these methods against established ILT solvers—including CONTIN, maximum entropy (MaxEnt), TRAIn, ITAMeD, and PALMA—using synthetic single- and multi-modal diffusion distributions contaminated with 1% controlled noise. Quantitative evaluation via mean squared error (MSE), Wasserstein distance, total variation, peak signal-to-noise ratio (PSNR), and runtime demonstrates that Wasserstein–Kaczmarz attains an optimal balance of speed (0.53 s per inversion) and accuracy (MSE = 4.7×10⁻⁸), while TRAIn achieves the highest fidelity (MSE = 1.5×10⁻⁸) at a modest computational cost. These results elucidate the inherent trade-offs between computational efficiency and reconstruction precision and establish regularized Kaczmarz solvers as versatile, high-performance tools for ill-posed inverse problems.
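How a penalty can be folded into the Kaczmarz projection framework can be sketched with the Tikhonov variant: one standard construction sweeps Kaczmarz over the augmented system [A; sqrt(lam)·I]x = [b; 0], so that projecting onto the extra rows shrinks the iterate toward zero. This is an illustrative sketch, not the authors' solvers; the Laplace kernel, grids, lambda, and relaxation value below are assumed test settings.

```python
import numpy as np

def tikhonov_kaczmarz(A, b, lam=1e-3, sweeps=100, relax=0.5, seed=0):
    # Kaczmarz sweeps over the augmented system [A; sqrt(lam)*I] x = [b; 0]:
    # projections onto the appended rows pull coordinates toward zero, which
    # is how the Tikhonov penalty enters the row-action iteration. The
    # under-relaxation damps the cycling that plain Kaczmarz exhibits on
    # inconsistent (noisy) systems.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Aa = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    ba = np.concatenate([b, np.zeros(n)])
    x = np.zeros(n)
    order = np.arange(m + n)
    for _ in range(sweeps):
        rng.shuffle(order)
        for i in order:
            a = Aa[i]
            x += relax * (ba[i] - a @ x) / (a @ a) * a
    return x

# Toy ill-posed problem: discrete Laplace transform of a two-peak density.
n = 40
t = np.linspace(0.01, 2.0, 60)       # "time" samples
s = np.linspace(0.1, 10.0, n)        # decay-rate grid
A = np.exp(-np.outer(t, s))          # Laplace kernel (severely ill-conditioned)
x_true = np.exp(-0.5 * ((s - 2) / 0.3) ** 2) + 0.6 * np.exp(-0.5 * ((s - 6) / 0.5) ** 2)
b = A @ x_true + 1e-3 * np.random.default_rng(1).standard_normal(t.size)
x_rec = tikhonov_kaczmarz(A, b)
```

Swapping the appended rows' penalty is what distinguishes the paper's three variants; the TV and Wasserstein penalties do not reduce to extra linear rows and need proximal or transport machinery instead.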

22 pages, 2436 KB  
Article
An Efficient Sparse Synthetic Aperture Radar Imaging Method Based on L1-Norm and Total Variation Regularization
by Zhiqi Gao, Huiying Ma, Pingping Huang, Wei Xu, Weixian Tan and Zhixia Wu
Electronics 2025, 14(13), 2508; https://doi.org/10.3390/electronics14132508 - 20 Jun 2025
Viewed by 436
Abstract
The continuous progress of synthetic aperture radar (SAR) imaging has led to a growing emphasis on the challenges involved in data acquisition and processing. Traditional SAR imaging models are limited by their large demand for sampled data and slow image reconstruction, limitations that are particularly prominent in large-scale scene applications. To overcome them, this study proposes an innovative L1–Total Variation (TV) regularized sparse SAR imaging algorithm. The proposed algorithm constructs an imaging operator and an echo simulation operator to decouple the azimuth and range dimensions and to reduce the required sampling data. In addition, a Newton-accelerated iterative method is introduced into the optimization process to speed up image reconstruction. Comparative analysis and experimental validation indicate that the proposed sparse SAR imaging algorithm outperforms conventional methods in resolution, target localization, and clutter suppression. The results suggest strong potential for rapid scene reconstruction and real-time monitoring in complex environments.
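The L1 half of such a reconstruction can be sketched with plain ISTA on a generic compressed-sensing problem. This is a hedged illustration only: the paper couples the L1 term with TV regularization, SAR-specific imaging and echo operators, and Newton acceleration, all omitted here, and the random matrix below merely stands in for the imaging operator.

```python
import numpy as np

def soft(x, t):
    # Soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_l1(A, y, lam=0.05, n_iter=500):
    # Plain ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1: a gradient step on
    # the data term followed by soft-thresholding.
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

# Sparse scene of 3 reflectors observed through a random sensing matrix.
rng = np.random.default_rng(2)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -0.8, 0.6]
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = ista_l1(A, y)
```

FISTA-style momentum or the Newton acceleration mentioned in the abstract would speed up the same fixed-point iteration without changing the per-step structure.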

24 pages, 6467 KB  
Article
Combining Kronecker-Basis-Representation Tensor Decomposition and Total Variational Constraint for Spectral Computed Tomography Reconstruction
by Xuru Li, Kun Wang, Yan Chang, Yaqin Wu and Jing Liu
Photonics 2025, 12(5), 492; https://doi.org/10.3390/photonics12050492 - 15 May 2025
Viewed by 351
Abstract
Energy spectrum computed tomography (CT) based on photon-counting detectors has been widely used in applications such as lesion detection and material decomposition, but severe noise in the reconstructed images affects the accuracy of these applications. Tensor decomposition methods can effectively remove noise by exploiting the correlation between energy channels, but it is difficult for traditional tensor decompositions to describe tensor sparsity and the low-rank properties of all unfolding modes simultaneously. To address this issue, a spectral CT reconstruction algorithm for photon-counting detectors is proposed that combines Kronecker-Basis-Representation (KBR) tensor decomposition with total variation (TV) regularization (namely KBR-TV). The proposed algorithm uses KBR tensor decomposition to unify the sparse measures of traditional tensor spaces and constructs a third-order tensor cube through non-local image similarity matching. At the same time, a TV regularization term is introduced in the individual energy-channel image domain to strengthen the sparsity constraint on single-channel images, effectively reduce artifacts, and improve reconstruction accuracy. The resulting objective minimization model is solved using the split-Bregman algorithm. To evaluate the algorithm's performance, both numerical simulations and realistic preclinical mouse studies were conducted. The findings indicate that the KBR-TV method offers superior enhancement of spectral CT image quality in comparison to several existing methods.
(This article belongs to the Special Issue Biomedical Optics: Imaging, Sensing and Therapy)

19 pages, 1819 KB  
Article
Adaptive Optics Retinal Image Restoration Using Total Variation with Overlapping Group Sparsity
by Xiaotong Chen, Yurong Shi and Hongsun Fu
Symmetry 2025, 17(5), 660; https://doi.org/10.3390/sym17050660 - 26 Apr 2025
Viewed by 349
Abstract
Adaptive optics (AO)-corrected retinal flood-illumination imaging is widely used for investigating both structural and functional aspects of the retina. Given the inherently low contrast of the original retinal images, image restoration is necessary. Total variation (TV) regularization is an efficient technique for AO retinal image restoration, but a main shortcoming is its tendency to produce staircase effects, particularly in smooth regions of the image. To overcome this drawback, a new restoration model for AO retinal images is proposed that uses overlapping group sparse total variation (OGSTV) as the regularization term. Due to the structural characteristics of AO retinal images, only partial information about the PSF is known, so a more complicated myopic deconvolution problem must be solved. To address this computational challenge, we propose an ADMM-MM-LAP method to solve the proposed model. First, we apply the alternating direction method of multipliers (ADMM) as the outer-layer optimization method. Then, appropriate algorithms are employed to solve the ADMM subproblems based on their inherent structures. Specifically, the majorization–minimization (MM) method is applied to handle the asymmetric OGSTV regularization component, while a modified version of the linearize-and-project (LAP) method is adopted to address the tightly coupled subproblem. Theoretically, we establish a complexity analysis of the proposed method. Numerical results demonstrate that the proposed model outperforms the existing state-of-the-art TV model across several metrics.
(This article belongs to the Special Issue Computational Mathematics and Its Applications in Numerical Analysis)

15 pages, 7817 KB  
Article
Sparsity-Guided Phase Retrieval to Handle Concave- and Convex-Shaped Specimens in Inline Holography, Taking the Complexity Parameter into Account
by Yao Koffi, Jocelyne M. Bosson, Marius Ipo Gnetto and Jeremie T. Zoueu
Optics 2025, 6(2), 15; https://doi.org/10.3390/opt6020015 - 17 Apr 2025
Viewed by 642
Abstract
In this work, we explore an optimization idea for complexity guidance of a phase retrieval solution from a single acquired hologram. The method combines free-space backpropagation with the fast iterative shrinkage-thresholding algorithm (FISTA), which incorporates an improved total variation (TV) term to guide the complexity of the phase retrieval solution recovered from the complex diffracted field measurement. The developed procedure can provide excellent phase reconstruction using only a single acquired hologram.

16 pages, 5341 KB  
Article
A Sparse Representation-Based Reconstruction Method of Electrical Impedance Imaging for Grounding Grid
by Ke Zhu, Donghui Luo, Zhengzheng Fu, Zhihang Xue and Xianghang Bu
Energies 2024, 17(24), 6459; https://doi.org/10.3390/en17246459 - 22 Dec 2024
Cited by 2 | Viewed by 829
Abstract
As a non-invasive imaging method, electrical impedance tomography (EIT) has become a research focus for grounding grid corrosion diagnosis. However, existing algorithms have not produced ideal image reconstructions. This article proposes an electrical impedance imaging method based on sparse representation, which can noticeably improve the accuracy of reconstructed images. First, the basic principles of EIT are outlined and the limitations of existing reconstruction methods are analyzed. Then, an EIT reconstruction algorithm based on sparse representation is proposed to address these limitations. It constructs constraints using the sparsity of the conductivity distribution under a certain sparse basis and iterates with the accelerated Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), aiming to improve imaging quality and reconstruction accuracy. Finally, a grounding grid model is built in COMSOL simulation software to obtain voltage data, and the reconstruction results of the Tikhonov regularization algorithm, the total variation (TV) regularization algorithm, the one-step Newton algorithm (NOSER), and the proposed sparse reconstruction algorithm are compared in MATLAB. The voltage relative error is introduced to evaluate the reconstructed images. The results show that the reconstruction algorithm based on sparse representation is superior to the other methods in terms of reconstruction error and image quality, reducing the relative error of the reconstructed grounding grid image by an average of 12.54%.
(This article belongs to the Special Issue Simulation and Analysis of Electrical Power Systems)

13 pages, 3195 KB  
Article
Application and Optimization of a Fast Non-Local Means Noise Reduction Algorithm in Pediatric Abdominal Virtual Monoenergetic Images
by Hajin Kim, Juho Park, Jina Shim and Youngjin Lee
Electronics 2024, 13(23), 4684; https://doi.org/10.3390/electronics13234684 - 27 Nov 2024
Viewed by 1030
Abstract
In this study, we applied and optimized a fast non-local means (FNLM) algorithm to reduce noise in pediatric abdominal virtual monoenergetic images (VMIs). To analyze various contrast agent concentrations, we produced contrast agent concentration samples (20, 40, 60, 80, and 100%) and inserted them into a phantom model of a one-year-old pediatric patient. Single-energy computed tomography (SECT) and dual-energy computed tomography (DECT) images were acquired from the phantom, and a 40 kilo-electron-volt (keV) VMI was derived from the DECT images. For the 40 keV VMI, the smoothing factor of the FNLM algorithm was varied from 0.01 to 1.00 in increments of 0.01. We derived the optimal value of the FNLM smoothing factor based on quantitative evaluation and performed a comparative assessment against SECT, DECT, and a total variation (TV) algorithm. The analysis showed that the average contrast-to-noise ratio (CNR) and coefficient of variation (COV) at each concentration improved most at a smoothing factor of 0.02, which we therefore adopted as the optimized value. Comparative evaluation shows that the optimized FNLM algorithm improves the CNR and COV by approximately 3.14 and 2.45 times, respectively, compared with the DECT image, and the normalized noise power spectrum shows a 101 mm² improvement. The main contribution of this study is to demonstrate the effectiveness of an optimized FNLM algorithm in reducing noise in pediatric abdominal VMIs, allowing high-quality images to be acquired while reducing contrast dose. This advancement has significant implications for minimizing the risk of contrast-induced toxicity, especially in pediatric patients. Our approach addresses the problem of limited datasets in pediatric imaging by providing a computationally efficient noise reduction technique and highlights the clinical applicability of the FNLM algorithm. In addition, effective noise reduction enables high-contrast imaging with minimal radiation and contrast exposure, which is expected to be suitable for repeat CT examinations of pediatric liver cancer patients and other abdominal diseases.
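The study's optimization loop (sweep a smoothing factor, keep the value that maximizes CNR) can be sketched generically. This is an assumed stand-in, not the paper's pipeline: a simple separable box smoother replaces the FNLM filter, and the phantom, ROI positions, and factor grid are invented for the demo.

```python
import numpy as np

def box_smooth(img, strength):
    # Stand-in smoother: separable 3-tap kernel whose side weights grow with
    # `strength` (the study sweeps the smoothing factor of an FNLM filter
    # instead).
    k = np.array([strength, 1.0, strength])
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def cnr(img, roi, bg):
    # Contrast-to-noise ratio between an object ROI and a background region.
    return abs(img[roi].mean() - img[bg].mean()) / (img[bg].std() + 1e-12)

# Phantom: bright insert on a flat background, plus noise.
rng = np.random.default_rng(3)
img = np.full((64, 64), 0.3)
img[24:40, 24:40] = 0.7
img += 0.05 * rng.standard_normal(img.shape)
roi = (slice(28, 36), slice(28, 36))
bg = (slice(0, 16), slice(0, 16))

# Sweep the smoothing factor and keep the CNR-maximizing value, mirroring
# the paper's 0.01..1.00 search over the FNLM smoothing factor.
factors = np.round(np.arange(0.05, 1.01, 0.05), 2)
scores = {f: cnr(box_smooth(img, f), roi, bg) for f in factors}
best = max(scores, key=scores.get)
```

With a real FNLM filter the sweep would look identical; only `box_smooth` changes, and additional metrics (COV, NNPS) can be folded into the selection criterion the same way.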

15 pages, 7377 KB  
Article
Application of Adaptive Search Window-Based Nonlocal Total Variation Filter in Low-Dose Computed Tomography Images: A Phantom Study
by Hajin Kim, Bo Kyung Cha, Kyuseok Kim and Youngjin Lee
Appl. Sci. 2024, 14(23), 10886; https://doi.org/10.3390/app142310886 - 24 Nov 2024
Cited by 1 | Viewed by 986
Abstract
Computed tomography (CT) imaging at low radiation dose effectively reduces radiation exposure; however, it amplifies noise in the resulting image. This study models an adaptive nonlocal total variation (NL-TV) algorithm that efficiently reduces noise in X-ray-based images and applies it to low-dose CT images. An AAPM CT performance phantom was used, and the resulting image was obtained by applying an annotation filter and a high-pitch protocol. The adaptive NL-TV filter was designed by applying the optimal window value, calculated from the difference between Gaussian filtering and the basic NL-TV approach. Contrast-to-noise ratio (CNR), coefficient of variation (COV), and the sigma value were used as quantitative image-quality parameters to confirm noise reduction effectiveness and spatial resolution. The CNR and COV values of low-dose CT images filtered with the optimized adaptive NL-TV filter improved by approximately 1.29 and 1.45 times, respectively, compared with conventional NL-TV. In addition, the adaptive NL-TV filter preserved spatial resolution similar to that of a CT image with no noise reduction applied. In conclusion, the proposed NL-TV filter is feasible and effective in improving the quality of low-dose CT images.
(This article belongs to the Special Issue Advances and Applications of Medical Imaging Physics)

24 pages, 60637 KB  
Article
SAR-NTV-YOLOv8: A Neural Network Aircraft Detection Method in SAR Images Based on Despeckling Preprocessing
by Xiaomeng Guo and Baoyi Xu
Remote Sens. 2024, 16(18), 3420; https://doi.org/10.3390/rs16183420 - 14 Sep 2024
Cited by 3 | Viewed by 2157
Abstract
Monitoring aircraft using synthetic aperture radar (SAR) images is a very important task. Given SAR's coherent imaging characteristics, there is a large amount of speckle interference in the image. This phenomenon masks the scattering information of aircraft targets, which is then easily confused with background scattering points, so automatic detection of aircraft targets in SAR images remains challenging. For this task, this paper proposes a framework that first applies speckle-reduction preprocessing to SAR images and then uses an improved deep learning method to detect aircraft. First, to address the artifacts and excessive smoothing that total variation (TV) speckle-reduction methods can introduce, this paper proposes a new nonconvex total variation (NTV) method, which aims to reduce speckle effectively while preserving as much of the original scattering information as possible. Next, we present an aircraft detection framework for SAR images based on You Only Look Once v8 (YOLOv8); the complete framework is therefore called SAR-NTV-YOLOv8. A high-resolution small-target feature head is proposed to mitigate the impact of scale changes and the loss of deep feature details on detection accuracy. An efficient multi-scale attention module is proposed to effectively establish short-term and long-term dependencies between feature groupings and multi-scale structures. In addition, a progressive feature pyramid network is chosen to avoid information loss or degradation during bottom-up multi-level feature extraction in the backbone. Extensive comparative, speckle reduction, and ablation experiments were conducted on the SAR-Aircraft-1.0 and SADD datasets. The results demonstrate the effectiveness of SAR-NTV-YOLOv8, which achieves state-of-the-art performance compared with other mainstream algorithms.

19 pages, 2523 KB  
Article
Hyperspectral Image Denoising by Pixel-Wise Noise Modeling and TV-Oriented Deep Image Prior
by Lixuan Yi, Qian Zhao and Zongben Xu
Remote Sens. 2024, 16(15), 2694; https://doi.org/10.3390/rs16152694 - 23 Jul 2024
Cited by 4 | Viewed by 2433
Abstract
Model-based hyperspectral image (HSI) denoising methods have attracted continuous attention over the past decades due to their effectiveness and interpretability. In this work, we aim to advance model-based HSI denoising through a careful investigation of both the fidelity and regularization terms, or correspondingly the noise and the prior, by virtue of several recently developed techniques. Specifically, we formulate a novel unified probabilistic model for the HSI denoising task, in which the noise is assumed to be pixel-wise non-independent and identically distributed (non-i.i.d.) Gaussian, predicted by a pre-trained neural network, and the prior for the HSI is designed by incorporating the deep image prior (DIP) with total variation (TV) and spatio-spectral TV. To solve the resulting maximum a posteriori (MAP) estimation problem, we design a Monte Carlo Expectation–Maximization (MCEM) algorithm, in which the stochastic gradient Langevin dynamics (SGLD) method computes the E-step and the alternating direction method of multipliers (ADMM) solves the optimization in the M-step. Experiments on both synthetic and real noisy HSI datasets verify the effectiveness of the proposed method.

13 pages, 2782 KB  
Article
Richardson–Lucy Iterative Blind Deconvolution with Gaussian Total Variation Constraints for Space Extended Object Images
by Shiping Guo, Yi Lu and Yibin Li
Photonics 2024, 11(6), 576; https://doi.org/10.3390/photonics11060576 - 20 Jun 2024
Viewed by 1812
Abstract
In ground-based astronomical observations or artificial space target detection, images obtained from a ground-based telescope are severely distorted by atmospheric turbulence. The distortion can be partially compensated by adaptive optics (pre-detection compensation), image restoration techniques (post-detection compensation), or a combination of both (hybrid compensation). This paper focuses on improving the most commonly used practical post-processing technique, Richardson–Lucy (R–L) iterative blind deconvolution, which is studied in detail and improved as follows. First, the total variation (TV) norm is redefined using the Gaussian gradient magnitude, and a scheme for regularization parameter selection is proposed. Second, this Gaussian TV constraint is imposed on the R–L algorithm. Finally, the Gaussian TV R–L (GRL) iterative blind deconvolution method is presented, which visibly increases restoration precision and considerably improves the convergence property. The performance of the proposed GRL method is tested on both simulation experiments and observed field data.
(This article belongs to the Special Issue Adaptive Optics: Methods and Applications)
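A TV-constrained R–L update can be sketched in its classical form, where the TV term enters as a multiplicative correction in the denominator of the R–L ratio. This is a simplified sketch, not the paper's GRL method: the PSF is known and symmetric here (blind estimation omitted), and the standard TV gradient is used rather than the Gaussian gradient magnitude the paper proposes; all scene and PSF parameters are invented.

```python
import numpy as np

def conv(img, psf):
    # Circular convolution via FFT; the PSF is centered, so ifftshift moves
    # its peak to the origin first.
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

def rl_tv(obs, psf, lam=0.002, n_iter=30):
    # Richardson-Lucy with a TV factor in the denominator (the classical
    # TV-regularized R-L form). The Gaussian PSF used below is symmetric,
    # so it serves as its own adjoint in the correlation step.
    u = np.full_like(obs, obs.mean())
    for _ in range(n_iter):
        ratio = obs / (conv(u, psf) + 1e-12)
        gx = np.gradient(u, axis=1)
        gy = np.gradient(u, axis=0)
        mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-12
        div = np.gradient(gx / mag, axis=1) + np.gradient(gy / mag, axis=0)
        u = u * conv(ratio, psf) / (1.0 - lam * div)
    return u

# Toy scene: two point sources blurred by a Gaussian PSF.
n = 64
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2.0 * 2.0 ** 2))
psf /= psf.sum()
scene = np.zeros((n, n))
scene[30, 28] = 1.0
scene[34, 38] = 0.7
obs = conv(scene, psf) + 1e-4   # small offset keeps intensities positive
restored = rl_tv(obs, psf)
```

The paper's two modifications slot into this loop directly: the Gaussian gradient magnitude would replace `mag`, and a blind variant would alternate a matching multiplicative update for the PSF itself.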

34 pages, 15812 KB  
Article
Exploring the Potential of PRISMA Satellite Hyperspectral Image for Estimating Soil Organic Carbon in Marvdasht Region, Southern Iran
by Mehdi Golkar Amoli, Mahdi Hasanlou, Ruhollah Taghizadeh Mehrjardi and Farhad Samadzadegan
Remote Sens. 2024, 16(12), 2149; https://doi.org/10.3390/rs16122149 - 13 Jun 2024
Cited by 1 | Viewed by 2735
Abstract
Soil organic carbon (SOC) is a crucial factor for soil fertility, directly impacting agricultural yields and ensuring food security. In recent years, remote sensing (RS) technology has been highly recommended as an efficient tool for producing SOC maps. The PRISMA hyperspectral satellite was used in this research to predict the SOC map in Fars province, located in southern Iran. The main purpose of this research is to investigate the capabilities of the PRISMA satellite in estimating SOC and to examine hyperspectral processing techniques for improving SOC estimation accuracy. To this end, denoising methods and a feature generation strategy were used. For denoising, three distinct algorithms were applied to the PRISMA image: Savitzky–Golay + first-order derivative (SG + FOD), VisuShrink, and total variation (TV). Their impact on SOC estimation was compared across four methods: Method One (reflectance bands without denoising, M#1), Method Two (denoised with SG + FOD, M#2), Method Three (denoised with VisuShrink, M#3), and Method Four (denoised with TV, M#4). Based on the results, the best denoising algorithm was TV (M#4), which increased the estimation accuracy by about 27% (from 40% to 67%). After TV, the VisuShrink and SG + FOD algorithms improved the accuracy by about 23% and 18%, respectively. In addition to denoising, a new feature generation strategy was proposed to further enhance accuracy. This strategy comprised two main steps: first, estimating the number of endmembers using the Harsanyi–Farrand–Chang (HFC) algorithm, and second, employing Principal Component Analysis (PCA) and Independent Component Analysis (ICA) transformations to generate high-level features based on that estimated number of endmembers. The feature generation strategy was evaluated in three scenarios to compare PCA and ICA transformation features: Scenario One (without adding any extra features, S#1), Scenario Two (incorporating PCA features, S#2), and Scenario Three (incorporating ICA features, S#3). Each scenario was repeated for each denoising method (M#1–4). After feature generation, the high-level features were added to the outputs of Methods One, Three, and Four. Subsequently, three machine learning algorithms (LightGBM, GBRT, RF) were employed for SOC modeling. The results showed the highest accuracy when features obtained from the PCA transformation were added to the results of the TV algorithm (Method Four–Scenario Two, or M#4–S#2), yielding an R2 of 81.74%. Overall, denoising and feature generation significantly enhanced SOC estimation accuracy, raising it from approximately 40% (M#1–S#1) to 82% (M#4–S#2). This underscores the remarkable potential of hyperspectral sensors in SOC studies.
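The PCA branch of the feature generation step amounts to projecting each pixel's spectrum onto the leading principal components, with the component count playing the role of the HFC endmember estimate. The sketch below uses assumed synthetic data (a made-up three-endmember cube); the HFC estimator itself and the ICA variant are omitted.

```python
import numpy as np

def pca_features(cube, n_components):
    # Project each pixel's centered spectrum onto the top principal
    # components, producing a feature cube with n_components bands.
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ Vt[:n_components].T).reshape(h, w, n_components)

# Synthetic hyperspectral cube: 3 endmembers mixed linearly, plus noise.
rng = np.random.default_rng(4)
E = rng.random((3, 30))                        # endmember spectra (3 x 30 bands)
a = rng.dirichlet(np.ones(3), size=(16, 16))   # per-pixel abundances, sum to 1
cube = a @ E + 0.01 * rng.standard_normal((16, 16, 30))
feats = pca_features(cube, n_components=3)     # 3 = assumed HFC estimate
```

In the paper's pipeline these component maps are stacked onto the denoised reflectance bands before the LightGBM/GBRT/RF regressors are trained.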
