Search Results (7)

Search Parameters:
Keywords = denoising diffusion bridge model

26 pages, 6518 KB  
Review
Diffusion Models at the Drug Discovery Frontier: A Review on Generating Small Molecules Versus Therapeutic Peptides
by Yiquan Wang, Yahui Ma, Yuhan Chang, Jiayao Yan, Jialin Zhang, Minnuo Cai and Kai Wei
Biology 2025, 14(12), 1665; https://doi.org/10.3390/biology14121665 - 24 Nov 2025
Viewed by 1062
Abstract
Diffusion models have emerged as a leading framework in generative modeling, poised to transform the traditionally slow and costly process of drug discovery. This review provides a systematic comparison of their application in designing two principal therapeutic modalities: small molecules and therapeutic peptides. We dissect how the unified framework of iterative denoising is adapted to the distinct molecular representations, chemical spaces, and design objectives of each modality. For small molecules, these models excel at structure-based design, generating novel, pocket-fitting ligands with desired physicochemical properties, yet face the critical hurdle of ensuring chemical synthesizability. Conversely, for therapeutic peptides, the focus shifts to generating functional sequences and designing de novo structures, where the primary challenges are achieving biological stability against proteolysis, ensuring proper folding, and minimizing immunogenicity. Despite these distinct challenges, both domains face shared hurdles: the scarcity of high-quality experimental data, the reliance on inaccurate scoring functions for validation, and the crucial need for experimental validation. We conclude that the full potential of diffusion models will be unlocked by bridging these modality-specific gaps and integrating them into automated, closed-loop Design-Build-Test-Learn (DBTL) platforms, thereby shifting the paradigm from mere chemical exploration to the on-demand engineering of novel therapeutics.
(This article belongs to the Section Medical Biology)
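The "unified framework of iterative denoising" that the review compares across modalities is, at its core, the standard DDPM-style reverse process. Below is a minimal NumPy sketch of that generic sampling loop; the noise predictor `eps_model` and the toy `toy_eps` stand-in are illustrative placeholders, not any model from the reviewed literature.

```python
import numpy as np

def ddpm_sample(eps_model, shape, T=1000, beta_start=1e-4, beta_end=0.02, seed=0):
    """Generic DDPM ancestral sampling: start from Gaussian noise and
    iteratively denoise with a learned noise predictor eps_model(x_t, t)."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(shape)              # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = eps_model(x, t)                   # predicted noise at step t
        coef = (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise    # x_{t-1}
    return x

# Toy usage with an untrained stand-in predictor (illustration only).
toy_eps = lambda x, t: np.zeros_like(x)
sample = ddpm_sample(toy_eps, shape=(16,), T=50)
```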

23 pages, 10215 KB  
Article
Robust Denoising of Structure Noise Through Dual-Diffusion Brownian Bridge Modeling and Coupled Sampling
by Long Chen, Changan Yuan, Huafu Xu, Ye He and Jianhui Jiang
Electronics 2025, 14(21), 4243; https://doi.org/10.3390/electronics14214243 - 30 Oct 2025
Viewed by 572
Abstract
Recent denoising methods based on diffusion models typically formulate the task as a conditional generation process initialized from a standard Gaussian distribution. However, such stochastic initialization often leads to redundant sampling steps and unstable results due to the neglect of structured noise characteristics. To address these limitations, we propose a novel framework that directly bridges the probabilistic distributions of noisy and clean images while jointly modeling structured noise. We introduce Dual-diffusion Brownian Bridge Coupled Sampling (DBBCS), the first framework to incorporate Brownian bridge diffusion into image denoising. DBBCS synchronously models the distributions of clean images and structural noise via two coupled diffusion processes. Unlike conventional diffusion models, our method starts sampling directly from noisy observations and jointly optimizes image reconstruction and noise estimation through a coupled posterior sampling scheme. This allows for dynamic refinement of intermediate states by adaptively updating the sampling gradients using residual feedback from both image and noise paths. Specifically, DBBCS employs two parallel Brownian bridge models to learn the distributions of clean images and noise. During inference, their respective residual processes regulate each other to progressively enhance both denoising and noise estimation. A consistency constraint is enforced among the estimated noise, the reconstructed image, and the original noisy input to ensure stable and physically coherent results. Extensive experiments on standard benchmarks demonstrate that DBBCS achieves superior performance in both visual fidelity and quantitative metrics, offering a robust and efficient solution to image denoising.
(This article belongs to the Special Issue Recent Advances in Efficient Image and Video Processing)
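For readers unfamiliar with the Brownian bridge construction that DBBCS builds on, the sketch below shows a generic bridge forward sample pinned at the clean image (t = 0) and the noisy observation (t = T), plus a simple endpoint-prediction loss for one of the two coupled denoisers. The variance schedule, the `bridge_sample`/`bridge_loss` helpers, and the stand-in denoiser are assumptions for illustration, not the authors' exact formulation.

```python
import torch

def bridge_sample(x0, y, t, T=1000, s=1.0):
    """Sample x_t from a Brownian bridge pinned at x0 (clean, t=0) and
    y (noisy observation, t=T): the mean interpolates linearly, the
    variance peaks mid-bridge and vanishes at both endpoints."""
    m = t.float() / T                              # (B,) in [0, 1]
    m = m.view(-1, *([1] * (x0.dim() - 1)))
    var = 2.0 * s * m * (1.0 - m)                  # assumed bridge variance schedule
    eps = torch.randn_like(x0)
    x_t = (1.0 - m) * x0 + m * y + var.sqrt() * eps
    return x_t, eps

def bridge_loss(denoiser, x0, y, T=1000):
    """One training step's loss: the network predicts the clean endpoint
    from the bridge state x_t and the step index t."""
    t = torch.randint(1, T, (x0.shape[0],), device=x0.device)
    x_t, _ = bridge_sample(x0, y, t, T)
    x0_hat = denoiser(x_t, t)
    return torch.mean((x0_hat - x0) ** 2)

# Toy usage with random tensors and a trivial stand-in denoiser.
x0 = torch.randn(4, 1, 32, 32)                     # "clean" images
y = x0 + 0.3 * torch.randn_like(x0)                # "noisy" observations
loss = bridge_loss(lambda x, t: x, x0, y)
```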

23 pages, 7046 KB  
Article
Atmospheric Scattering Prior Embedded Diffusion Model for Remote Sensing Image Dehazing
by Shanqin Wang and Miao Zhang
Atmosphere 2025, 16(9), 1065; https://doi.org/10.3390/atmos16091065 - 10 Sep 2025
Viewed by 1388
Abstract
Remote sensing image dehazing presents substantial challenges in balancing physical fidelity with generative flexibility, particularly under complex atmospheric conditions and sensor-specific degradation patterns. Traditional physics-based methods often struggle with nonlinear haze distributions, while purely data-driven approaches tend to lack interpretability and physical consistency. To bridge this gap, we propose the Atmospheric Scattering Prior embedded Diffusion Model (ASPDiff), a novel framework that seamlessly integrates atmospheric physics into the diffusion-based generative restoration process. ASPDiff establishes a closed-loop feedback mechanism by embedding the atmospheric scattering model as a physics-driven regularization throughout both the forward degradation simulation and the reverse denoising trajectory. The framework operates through the following three synergistic components: (1) an Atmospheric Prior Estimation Module that uses the Dark Channel Prior to generate initial estimates of the transmission map and global atmospheric light, which are then refined through learnable adjustment networks; (2) a Diffusion Process with Atmospheric Prior Embedding, where the refined priors serve as conditional guidance during the reverse diffusion sampling, ensuring physical plausibility; and (3) a Haze-Aware Refinement Module that adaptively enhances structural details and compensates for residual haze via frequency-aware decomposition and spatial attention. Extensive experiments on both synthetic and real-world remote sensing datasets demonstrate that ASPDiff significantly outperforms existing methods, achieving state-of-the-art performance while maintaining strong physical interpretability.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
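The Atmospheric Prior Estimation Module starts from the classic Dark Channel Prior before learnable refinement. A minimal NumPy/SciPy version of that initial estimate, deriving the atmospheric light A and the transmission map t(x) from the scattering model I = J·t + A·(1 − t), is sketched below; the patch size, ω factor, and clipping range are common defaults, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel: per-pixel min over RGB, then a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_priors(img, patch=15, omega=0.95, top_frac=0.001):
    """Initial atmospheric light A and transmission t(x) from the
    Dark Channel Prior; img is an HxWx3 float array in [0, 1]."""
    dc = dark_channel(img, patch)
    # Atmospheric light: brightest colors among the haziest dark-channel pixels.
    n_top = max(1, int(top_frac * dc.size))
    idx = np.argsort(dc.ravel())[-n_top:]
    A = np.maximum(img.reshape(-1, 3)[idx].max(axis=0), 1e-6)
    # Transmission from the scattering model I = J*t + A*(1 - t).
    t = 1.0 - omega * dark_channel(img / A[None, None, :], patch)
    return A, np.clip(t, 0.1, 1.0)

# Toy usage on a random stand-in "hazy" image.
hazy = np.random.rand(64, 64, 3).astype(np.float32)
A, t = estimate_priors(hazy)
```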

14 pages, 4750 KB  
Article
ADBM: Adversarial Diffusion Bridge Model for Denoising of 3D Point Cloud Data
by Changwoo Nam and Sang Jun Lee
Sensors 2025, 25(17), 5261; https://doi.org/10.3390/s25175261 - 24 Aug 2025
Viewed by 1177
Abstract
We address the task of point cloud denoising by leveraging a diffusion-based generative framework augmented with adversarial training. While recent diffusion models have demonstrated strong capabilities in learning complex data distributions, their effectiveness in recovering fine geometric details remains limited, especially under severe noise conditions. To mitigate this, we propose the Adversarial Diffusion Bridge Model (ADBM), a novel approach for denoising 3D point cloud data by integrating a diffusion bridge model with adversarial learning. ADBM incorporates a lightweight discriminator that guides the denoising process through adversarial supervision, encouraging sharper and more faithful reconstructions. The denoiser is trained using a denoising diffusion objective based on a Schrödinger Bridge, while the discriminator distinguishes between real, clean point clouds and generated outputs, promoting perceptual realism. Experiments are conducted on the PU-Net and PC-Net datasets, with performance evaluation employing the Chamfer distance and Point-to-Mesh metrics. The qualitative and quantitative results both highlight the effectiveness of adversarial supervision in enhancing local detail reconstruction, making our approach a promising direction for robust point cloud restoration.
(This article belongs to the Special Issue Short-Range Optical 3D Scanning and 3D Data Processing)
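The Chamfer distance used to score ADBM is the symmetric average of nearest-neighbour squared distances between the denoised and reference point clouds. The short NumPy/SciPy implementation below illustrates the metric itself; it is not the authors' evaluation code, and the random point sets are placeholders.

```python
import numpy as np
from scipy.spatial.distance import cdist

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N,3) and q (M,3):
    mean squared distance to the nearest neighbour, averaged both ways."""
    d = cdist(p, q, metric="sqeuclidean")   # (N, M) pairwise squared distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Toy usage: a clean cloud vs. a noisy copy of it.
clean = np.random.rand(2048, 3)
noisy = clean + 0.01 * np.random.randn(2048, 3)
print(chamfer_distance(noisy, clean))
```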

18 pages, 3691 KB  
Article
A Field Study on Sampling Strategy of Short-Term Pumping Tests for Hydraulic Tomography Based on the Successive Linear Estimator
by Xiaolan Hou, Rui Hu, Huiyang Qiu, Yukun Li, Minhui Xiao and Yang Song
Water 2025, 17(14), 2133; https://doi.org/10.3390/w17142133 - 17 Jul 2025
Viewed by 557
Abstract
Hydraulic tomography (HT) based on the successive linear estimator (SLE) offers the high-resolution characterization of aquifer heterogeneity but conventionally requires prolonged pumping to achieve steady-state conditions, limiting its applicability in contamination-sensitive or low-permeability settings. This study bridged theoretical and practical gaps (1) by identifying spatial periodicity (hole effect) as the mechanism underlying divergences in steady-state cross-correlation patterns between random finite element method (RFEM) and first-order analysis, modeled via an oscillatory covariance function, and (2) by validating a novel short-term sampling strategy for SLE-based HT using field experiments at the University of Göttingen test site. Utilizing early-time drawdown data, we reconstructed spatially congruent distributions of hydraulic conductivity, specific storage, and hydraulic diffusivity after rigorous wavelet denoising. The results demonstrate that the short-term sampling strategy achieves accuracy comparable to that of the long-term sampling strategy in characterizing aquifer heterogeneity. Critically, by decoupling SLE from steady-state requirements, this approach minimizes groundwater disturbance and time costs, expanding HT’s feasibility to challenging environments.
(This article belongs to the Special Issue Hydrogeophysical Methods and Hydrogeological Models)
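The "rigorous wavelet denoising" applied to the early-time drawdown records can be approximated with standard soft-threshold wavelet shrinkage. The PyWavelets sketch below uses a MAD noise estimate and the universal threshold; the wavelet family, decomposition level, and synthetic drawdown curve are illustrative assumptions, not the study's actual settings or data.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet shrinkage of a 1-D drawdown time series."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level from the finest-scale detail coefficients (MAD estimate).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))   # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Toy usage: a smooth drawdown curve corrupted with measurement noise.
t = np.linspace(0.01, 10.0, 512)
drawdown = 0.5 * np.log(t + 1.0) + 0.02 * np.random.randn(t.size)
clean = wavelet_denoise(drawdown)
```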

21 pages, 2568 KB  
Article
Improved Flood Insights: Diffusion-Based SAR-to-EO Image Translation
by Minseok Seo, Jinwook Jung and Dong-Geol Choi
Remote Sens. 2025, 17(13), 2260; https://doi.org/10.3390/rs17132260 - 1 Jul 2025
Cited by 2 | Viewed by 2513
Abstract
Floods, exacerbated by climate change, necessitate timely and accurate situational awareness to support effective disaster response. While electro-optical (EO) satellite imagery has been widely employed for flood assessment, its utility is significantly limited under conditions such as cloud cover or nighttime. Synthetic Aperture Radar (SAR) provides consistent imaging regardless of weather or lighting conditions, but it remains challenging for human analysts to interpret. To bridge this modality gap, we present diffusion-based SAR-to-EO image translation (DSE), a novel framework designed specifically for enhancing the interpretability of SAR imagery in flood scenarios. Unlike conventional GAN-based approaches, our DSE leverages the Brownian Bridge Diffusion Model to achieve stable and high-fidelity EO synthesis. Furthermore, it integrates a self-supervised SAR denoising module to effectively suppress SAR-specific speckle noise, thereby improving the quality of the translated outputs. Quantitative experiments on the SEN12-FLOOD dataset show that our method improves PSNR by 3.23 dB and SSIM by 0.10 over conventional SAR-to-EO baselines. Additionally, a user study with SAR experts revealed that flood segmentation performance using synthetic EO (SynEO) paired with SAR was nearly equivalent to using true EO–SAR pairs, with only a 0.0068 IoU difference. These results confirm the practicality of the DSE framework as an effective solution for EO image synthesis and flood interpretation in SAR-only environments.
(This article belongs to the Special Issue Deep Learning Innovations in Remote Sensing)
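The reported gains (+3.23 dB PSNR, +0.10 SSIM on SEN12-FLOOD) rely on standard full-reference image metrics. The scikit-image snippet below shows how such a SynEO-versus-true-EO comparison is typically computed; the random arrays and data range are placeholders, not the paper's evaluation pipeline.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder arrays standing in for a true EO tile and a SynEO translation.
eo_true = np.random.rand(256, 256, 3).astype(np.float32)
eo_syn = np.clip(eo_true + 0.05 * np.random.randn(256, 256, 3), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(eo_true, eo_syn, data_range=1.0)
ssim = structural_similarity(eo_true, eo_syn, data_range=1.0, channel_axis=-1)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```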

23 pages, 16837 KB  
Article
MapGen-Diff: An End-to-End Remote Sensing Image to Map Generator via Denoising Diffusion Bridge Model
by Jilong Tian, Jiangjiang Wu, Hao Chen and Mengyu Ma
Remote Sens. 2024, 16(19), 3716; https://doi.org/10.3390/rs16193716 - 6 Oct 2024
Cited by 4 | Viewed by 2284
Abstract
Online maps are of great importance in modern life, especially in commuting, traveling and urban planning. The accessibility of remote sensing (RS) images has contributed to the widespread practice of generating online maps based on RS images. Previous works leverage the idea of domain mapping to achieve end-to-end remote sensing image-to-map translation (RSMT). Although existing methods are effective and efficient for online map generation, generated online maps still suffer from ground feature distortion and boundary inaccuracy to a certain extent. Recently, the emergence of diffusion models has signaled a significant advance in high-fidelity image synthesis. Based on rigorous mathematical theories, denoising diffusion models can offer controllable generation in the sampling process, which makes them very suitable for end-to-end RSMT. Therefore, we design a novel end-to-end diffusion model to generate online maps directly from remote sensing images, called MapGen-Diff. We leverage a strategy inspired by Brownian motion to make a trade-off between the diversity and the accuracy of the generation process. Meanwhile, an image compression module is proposed to map the raw images into the latent space for capturing more perception features. In order to enhance the geometric accuracy of ground features, a consistency regularization is designed, which allows the model to generate maps with clearer boundaries and colorization. Compared to several state-of-the-art methods, the proposed MapGen-Diff achieves outstanding performance, especially a 5% RMSE and 7% SSIM improvement on the Los Angeles and Toronto datasets. The visualization results also demonstrate more accurate local details and higher quality.
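MapGen-Diff couples a Brownian-bridge-style denoising objective in a compressed latent space with a consistency regularization on the decoded map. The PyTorch sketch below indicates how such a combined loss might be assembled for one training step; the tiny encoder/decoder/denoiser modules, the bridge schedule, and the loss weighting are illustrative stand-ins, not the published architecture.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the compression module and latent denoiser.
encoder = nn.Conv2d(3, 8, 3, stride=2, padding=1)
decoder = nn.ConvTranspose2d(8, 3, 4, stride=2, padding=1)
denoiser = nn.Conv2d(8, 8, 3, padding=1)

def training_loss(rs_img, map_img, T=1000, lambda_cons=0.5):
    """Bridge denoising loss in latent space plus a pixel-space
    consistency term between the decoded prediction and the target map."""
    z_rs, z_map = encoder(rs_img), encoder(map_img)
    t = torch.randint(1, T, (rs_img.shape[0],)).float() / T
    m = t.view(-1, 1, 1, 1)
    # Brownian-bridge state between the map latent (t=0) and the RS latent (t=T).
    z_t = (1 - m) * z_map + m * z_rs + (2 * m * (1 - m)).sqrt() * torch.randn_like(z_map)
    z_hat = denoiser(z_t)                       # predict the clean map latent
    diff_loss = torch.mean((z_hat - z_map) ** 2)
    cons_loss = torch.mean(torch.abs(decoder(z_hat) - map_img))
    return diff_loss + lambda_cons * cons_loss

# Toy usage with random RS/map tensors.
loss = training_loss(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
loss.backward()
```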