Article

D3Fusion: Decomposition–Disentanglement–Dynamic Compensation Framework for Infrared-Visible Image Fusion in Extreme Low-Light

1 School of Mechanical Engineering, Sichuan University, Chengdu 610065, China
2 School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(16), 8918; https://doi.org/10.3390/app15168918
Submission received: 17 July 2025 / Revised: 9 August 2025 / Accepted: 10 August 2025 / Published: 13 August 2025

Abstract

Infrared-visible image fusion quality is critical for nighttime perception in autonomous driving and surveillance but suffers severe degradation under extreme low-light conditions, including irreversible texture loss in visible images, thermal boundary diffusion artifacts, and overexposure under dynamic non-uniform illumination. To address these challenges, a Decomposition–Disentanglement–Dynamic Compensation framework, D3Fusion, is proposed. Firstly, a Retinex-inspired Decomposition Illumination Net (DIN) decomposes inputs into enhanced images and degradative illumination maps for joint low-light recovery. Secondly, an illumination-guided encoder and a multi-scale differential compensation decoder dynamically balance cross-modal features. Finally, a progressive three-stage training paradigm from illumination correction through feature disentanglement to adaptive fusion resolves optimization conflicts. Compared to State-of-the-Art methods, on the LLVIP, TNO, MSRS, and RoadScene datasets, D3Fusion achieves an average improvement of 1.59% in standard deviation (SD), 6.9% in spatial frequency (SF), 2.59% in edge intensity (EI), and 1.99% in visual information fidelity (VIF), demonstrating superior performance in extreme low-light scenarios. The framework effectively suppresses thermal diffusion artifacts while mitigating exposure imbalance, adaptively brightening scenes while preserving texture details in shadowed regions. This significantly improves fusion quality for nighttime images by enhancing salient information, establishing a robust solution for multimodal perception under illumination-critical conditions.

1. Introduction

Infrared (IR) and visible (VIS) image fusion leverages their complementary characteristics to play a key role in complex scene perception, demonstrating significant application value in military reconnaissance, autonomous driving, nighttime surveillance, and other fields [1,2,3,4]. As illustrated in Figure 1, in autonomous driving perception systems, IR imaging captures thermal radiation features to detect heat-emitting targets (e.g., pedestrians, vehicles) under low-light or foggy conditions, while VIS imaging preserves reflectance-dependent texture details such as clothing patterns and road signs [3]. This physical complementarity makes their fusion a core solution for boosting visual system perception.
As applications such as autonomous driving expand to all-weather and all-terrain scenarios, ensuring stable fusion performance in extreme environments becomes critical. Specifically, extreme low-light conditions exacerbate three fundamental challenges. First, imaging degradation in low-light environments, stemming from insufficient photon capture and amplified sensor noise, reduces the VIS image signal-to-noise ratio (SNR) below 5 dB and submerges texture details in noise [1]. Second, dynamic non-uniform illumination (e.g., vehicle headlights, streetlights) creates coexisting overexposed and underexposed regions, making it difficult for traditional fusion methods to balance the dynamic range [5]. Third, the fundamental divergence between VIS (reflectance-based) and IR (thermal radiation-based) imaging causes systematic discrepancies in edge position and contrast for the same object [6], leading to thermal edge diffusion, texture blurring, and misalignment in fused images [2], which exacerbates cross-modal feature alignment difficulties. Current fusion methodologies address these challenges through three technical paradigms:
(i)
Retinex-based methods decompose an image into reflectance (R) and illumination (L) components to achieve low-light enhancement, establishing a physically interpretable framework for this restoration process [7,8,9]. Traditional Retinex-based methods have laid foundational groundwork for low-light image enhancement, though with notable limitations. Early implementations like Single-Scale Retinex (SSR) [10] and Multi-Scale Retinex (MSRCR) [11] leverage Gaussian filtering to smooth illumination maps, but this approach often introduces color distortion and over-enhancement artifacts. The reliance on manual parameter tuning further restricts their adaptability across diverse scenes. Deep learning advancements have enabled data-driven Retinex implementations. Jin et al. [4] employ a deep neural network to estimate the shading maps (analogous to illumination maps) and adjust lightness, while KinD [12] builds upon this by incorporating a reflection recovery network to suppress noise and color distortion. Retinexformer [13] integrates Retinex decomposition with a Transformer architecture, utilizing illumination-guided attention to model long-range dependencies. Additionally, classic Retinex models do not consider the noise in low-light images, while a robust Retinex model has been proposed to address this by adding a noise term and using novel regularization terms for illumination and reflectance [14]. These single-modality frameworks demonstrate efficacy in enhancing low-light images but lack explicit cross-modal feature integration, limiting their direct applicability to fusion tasks. In the context of multimodal fusion, Zhang et al. [15] pioneered the integration of Retinex decomposition into fusion workflows using UNet for illumination estimation. PIAFusion [3] does not explicitly employ Retinex theory, yet it introduces a progressive illumination-aware network to estimate lighting distributions, enabling adaptive fusion by leveraging illumination conditions. This approach inspires our model design by demonstrating the effectiveness of integrating illumination-aware mechanisms for dynamic feature fusion. DIVFusion [1] advances this by designing a Scene-Illumination Disentangled Network (SIDNet) to strip degradative illumination from visible images, thereby improving texture preservation in low-light scenarios, though it still faces challenges in dynamic illumination adaptation.
(ii)
Early frequency-domain methods, such as FCLFusion [16], decompose images using wavelet transforms to separate high-frequency texture details from low-frequency structural components. U2Fusion [17] advances this by introducing a unified unsupervised network with dense blocks for multi-scale feature integration, though its performance diminishes in extremely low-light scenarios. WaveletFormerNet [5] innovatively integrates wavelet transforms with Transformer-based modules, leveraging attention mechanisms to mitigate texture loss during downsampling. This approach demonstrates superior detail preservation in non-homogeneous fog conditions. However, the adoption of fixed frequency band division strategies makes it difficult to adapt to dynamic illumination in complex lighting environments.
(iii)
Transformer architectures and attention mechanisms have revolutionized infrared-visible image fusion by enabling explicit modeling of long-range dependencies and cross-modal feature interactions. CrossFuse [6] introduces a cross-attention mechanism (CAM) that leverages reversed softmax activation to prioritize complementary (uncorrelated) features, reducing redundant information while preserving thermal targets from infrared images and texture details from visible inputs. This design demonstrates that attention can effectively enhance modality-specific feature integration. SwinFusion [18] adapts the Swin Transformer to model cross-domain long-range interactions, outperforming CNN-based methods in global feature aggregation. Its hierarchical architecture with shifted windows allows efficient computation while capturing contextual dependencies, though it may incur higher computational costs in real-time scenarios. CDDFuse [19] proposes a correlation-driven dual-branch feature decomposition framework, using cross-modal differential attention to disentangle shared structural features and modality-specific details. This approach explicitly enhances feature complementarity, but the absence of illumination-aware guidance leads to significant loss of visible-light information under low-light conditions, degrading fusion quality.
Despite progress, three critical limitations hinder existing methods in extreme low-light environments: (i) Traditional fusion frameworks exhibit insufficient texture recovery capability for severely degraded visible images. Under extreme low-light conditions, the signal-to-noise ratio (SNR) of visible images drops significantly [1], accompanied by issues such as low contrast and prominent noise. This renders traditional fusion methods unable to balance thermal target saliency and texture detail preservation in nighttime scenes, leading to severely constrained fusion performance [3]. (ii) Direct fusion of cross-modal features triggers spectral distortion. Infrared and visible modalities exhibit distinct physical properties: infrared gradients primarily reflect thermal boundaries, while visible gradients characterize material textures [20,21]. Fusing their features without explicit decomposition generates artifacts, fundamentally because traditional methods under low-light conditions tend to over-rely on infrared thermal features [19]. This leads to the misamplification of thermal radiation features and suppression of high-frequency texture expression, thereby causing abnormal diffusion of thermal boundaries and blurring degradation of visible-light textures in shadow regions, which produces obvious artifacts. (iii) Complex dynamic illumination distributions in nighttime scenes significantly interfere with static fusion strategies, resulting in blurred target edges. Static fusion rules fail to adaptively balance multimodal features under varying lighting conditions [3].
To address the above issues, an illumination-aware progressive infrared-visible fusion network, D3Fusion, tailored for nighttime scenes, is proposed in this paper. The core of this approach lies in a unified framework that couples a Decomposition Illumination Net (DIN) with attention-guided feature disentanglement, enabling joint optimization of low-light enhancement and cross-modal integration. DIN decomposes degradative illumination components from visible images to achieve low-light enhancement, while the Disentangled Encoder and Reconstruction Decoder collaborate to disentangle and fuse cross-modal features—thereby preserving texture details while integrating thermal saliency. This framework dynamically separates illumination effects from reflectance-dependent texture features, preventing thermal boundary diffusion and noise amplification typical of traditional methods. Additionally, the proposed illumination-guided feature disentanglement encoder adjusts attention weights based on real-time illumination maps, while the multi-scale differential compensation decoder enhances complementary feature integration through bidirectional feature refinement and hierarchical attention gating. By explicitly modeling the physical divergence between thermal radiation and reflectance-based features, these components mitigate spectral artifacts and enable adaptive fusion under extreme low-light. To resolve the objective conflicts in end-to-end training, a three-stage progressive training paradigm is introduced: Stage I optimizes low-light enhancement to normalize illumination variations, Stage II refines cross-modal feature disentanglement to separate thermal saliency and texture cues, and Stage III conducts adaptive fusion with illumination-guided attention. This staged strategy ensures that the model sequentially adapts to illumination dynamics, feature disparities, and fusion objectives, overcoming the static rule limitations of traditional methods. In summary, the main contributions of the proposed method are as follows:
  • A unified illumination-feature optimization framework is proposed, integrating Retinex-based degradation separation with attention-guided feature disentanglement to achieve joint optimization of low-light enhancement and feature disentanglement, balancing thermal saliency and texture preservation.
  • An illumination-guided disentanglement encoder and a multi-scale differential compensation decoder are designed, wherein attention weights are adaptively adjusted using real-time illumination maps, while complementary feature extraction is enhanced through multi-scale differential compensation.
  • A progressive three-stage training paradigm is established to resolve end-to-end training conflicts and adapt to dynamic illumination:
    • Stage I: Illumination-aware enhancement
    • Stage II: Cross-modal feature disentanglement
    • Stage III: Dynamic complementary fusion
Experimental results demonstrate that our method achieves a balance between thermal targets and texture details in low-light scenarios, adaptively amplifying scene brightness while fusing complementary information according to dynamic illumination conditions. The framework delivers superior visual quality and leads on most quantitative metrics. Our work establishes a systematic Decomposition–Disentanglement–Dynamic Compensation (D3Fusion) framework for robust multimodal perception under illumination-critical conditions.

2. Methodology

2.1. Overview

Figure 2 illustrates the overall architecture of the proposed illumination-aware progressive fusion framework. To address the challenges of nighttime image fusion under extreme low-light conditions, the framework employs a three-stage pipeline:
Illumination Decomposition. The Decomposition Illumination Net (DIN) processes input visible ($vi \in \mathbb{R}^{H \times W \times 3}$) and infrared ($ir \in \mathbb{R}^{H \times W \times 1}$) image pairs. Inspired by Retinex theory, DIN decomposes them into enhanced visible ($vi_{en}$) and infrared ($ir_{en}$) images, along with a degraded illumination map ($l_d$). Crucially, DIN extracts illumination feature components ($F_{fe}^{vi}$, $F_{fe}^{ir}$) that guide subsequent attention mechanisms. This stage explicitly mitigates degradative lighting effects in low-light visible imagery.
Feature Disentanglement. Enhanced images are processed by a Cross-Modal Disentangled Encoder (EN). The pipeline begins with a Shared Encoder that extracts preliminary cross-modal features. Significantly, this encoder is modulated by illumination features ($F_{fe}^{vi}$, $F_{fe}^{ir}$) from DIN, enabling adaptive feature extraction under varying illumination conditions. Modality-specific encoders then decompose inputs into shared base features ($F_b^{vi}$, $F_b^{ir}$) representing structural information and modality-specific detail features ($F_d^{vi}$, $F_d^{ir}$) capturing visible textures and thermal signatures.
Differential-Aware Fusion. A Dynamic Compensation Decoder (DE), equipped with a multi-scale differential perception mechanism, dynamically fuses the disentangled base ($F_b^{vi}$, $F_b^{ir}$) and detail ($F_d^{vi}$, $F_d^{ir}$) features. It utilizes attention gating and differential feature compensation strategies to adaptively integrate the complementary information from both modalities, producing the final fused image ($F_u$).
Formally, given a visible image $vi \in \mathbb{R}^{H \times W \times 3}$ and an infrared image $ir \in \mathbb{R}^{H \times W \times 1}$, the pipeline is governed by the following:
$$vi_{en},\, ir_{en},\, l_d,\, F_{fe}^{vi},\, F_{fe}^{ir} = DIN(vi, ir) \quad (\text{Stage I})$$
$$F_b^{vi},\, F_b^{ir},\, F_d^{vi},\, F_d^{ir} = EN(vi_{en}, ir_{en}, F_{fe}^{vi}, F_{fe}^{ir}) \quad (\text{Stage II})$$
$$F_u = DE(F_b^{vi}, F_b^{ir}, F_d^{vi}, F_d^{ir}) \quad (\text{Stage III})$$
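As a reading aid, the staged data flow above can be summarized in the following minimal sketch. The three sub-networks are passed in as placeholder modules and the variable names mirror the formulation above; this is an illustration of the data flow only, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class D3FusionPipeline(nn.Module):
    """Minimal sketch of the three-stage forward pass; only the data flow is shown.
    din, encoder, and decoder are placeholder modules, not the authors' code."""
    def __init__(self, din: nn.Module, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.din, self.encoder, self.decoder = din, encoder, decoder

    def forward(self, vi: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        # Stage I: Retinex-inspired decomposition and joint low-light enhancement
        vi_en, ir_en, l_d, f_fe_vi, f_fe_ir = self.din(vi, ir)
        # Stage II: illumination-guided disentanglement into base/detail features
        f_b_vi, f_b_ir, f_d_vi, f_d_ir = self.encoder(vi_en, ir_en, f_fe_vi, f_fe_ir)
        # Stage III: dynamic compensation fusion into the final fused image
        return self.decoder(f_b_vi, f_b_ir, f_d_vi, f_d_ir)
```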

2.2. DIN (Decomposition Illumination Net)

The Decomposition Illumination Network (DIN) is inspired by Retinex theory [7] and improved from the Scene-Illumination Disentangled Network (SIDNet) [1]. As formalized in Equation (2), Retinex theory decomposes a low-light image $I$ into a reflectance component $R$ and an illumination component $L$:
$$I = R \odot L \quad (2)$$
where $\odot$ denotes element-wise multiplication. The visual degradation caused by low-light conditions primarily originates from the illumination component $L$, while the reflectance component $R$ represents the intrinsic attributes of the scene. Thus, the enhanced image under normal illumination can be derived by estimating $R$ from the degraded low-light image $I$.
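For intuition, a minimal numerical sketch of this recovery step is given below: once an illumination estimate is available (here simply passed in), the reflectance is obtained by element-wise division with a small floor. This illustrates the Retinex relation only; it is not the DIN module itself.

```python
import torch

def retinex_enhance(i_low: torch.Tensor, l_est: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Recover the reflectance R from I = R * L by element-wise division,
    flooring the illumination estimate to avoid division by zero."""
    return torch.clamp(i_low / torch.clamp(l_est, min=eps), 0.0, 1.0)

# Toy check: a mid-grey scene dimmed by a uniform 0.2 illumination field is restored to ~0.5.
scene = torch.full((1, 3, 4, 4), 0.5)
illum = torch.full((1, 3, 4, 4), 0.2)
restored = retinex_enhance(scene * illum, illum)
print(restored.mean())  # ~0.5
```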
Architecture of DIN. As shown in Figure 3, the DIN module adopts a triple-branch architecture to achieve illumination dissociation. The visible and infrared images are concatenated as input and processed through stacked 3 × 3 convolutions for preliminary feature extraction. Each branch incorporates a Channel Attention Module (CAM) to adaptively select multimodality-specific features. The CAM generates channel attention vectors via global average pooling to enhance visible features, degrade illumination features, and preserve infrared thermal characteristics. Three symmetric decoders with residual skip connections are then employed to reconstruct the outputs: the enhanced visible image $vi_{en}$, the enhanced infrared image $ir_{en}$, and the degraded illumination image $l_d$. The process is formulated as follows:
$$\{vi_{en},\, ir_{en},\, l_d\} = DIN(C(vi, ir))$$
where $C(\cdot)$ denotes channel-wise concatenation, and $vi$, $ir$ represent the input visible and infrared image pairs. Notably, the illumination features $F_{fe}^{vi}$ and $F_{fe}^{ir}$ extracted from the SAM blocks are leveraged for subsequent illumination-guided processing.
The learning of decomposed components is supervised through a multi-objective loss function (detailed in Section 2.5), which enforces consistency between reconstructed images and their corresponding physical properties.
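The triple-branch layout with channel-attention gating can be sketched roughly as below. The channel widths and layer counts are illustrative assumptions, and the residual skip connections and illumination-feature outputs are omitted for brevity, so this is not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """CAM sketch: a GAP-derived per-channel gate re-weights the branch features."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)

class DINSketch(nn.Module):
    """Triple-branch decomposition sketch: a shared stem on the concatenated vi/ir
    input, then three CAM-gated branches producing vi_en, ir_en, and l_d."""
    def __init__(self, width: int = 32):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(4, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True))

        def branch(out_ch: int) -> nn.Sequential:
            return nn.Sequential(
                ChannelAttention(width),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, out_ch, 3, padding=1), nn.Sigmoid())

        self.vi_branch, self.ir_branch, self.l_branch = branch(3), branch(1), branch(1)

    def forward(self, vi: torch.Tensor, ir: torch.Tensor):
        feat = self.stem(torch.cat([vi, ir], dim=1))   # DIN(C(vi, ir))
        return self.vi_branch(feat), self.ir_branch(feat), self.l_branch(feat)
```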

2.3. Disentanglement Encoder

As illustrated in Figure 4, the encoder comprises a Shared Encoder for cross-modal feature extraction and Modality-Specific Encoders for visible and infrared feature disentanglement.

2.3.1. Shared Encoder

The shared encoder employs dual parallel branches to extract shallow features from visible and infrared inputs. Three critical components are integrated into this encoder: the Illumination-Guided Attention Block (IGAB), Dilated Multi-Scale Dense Convolution (DMDC) module, and Adaptive Fusion (AF) module. The process is formally expressed as follows:
$$\{F_s^{vi},\, F_s^{ir}\} = SE(vi_{en}, ir_{en})$$
where $SE(\cdot)$ denotes the shared encoder, and $F_s^{vi}$, $F_s^{ir}$ represent the shallow features of the visible and infrared images.
Illumination-Guided Attention Block (IGAB). Conventional Transformers neglect illumination distribution when computing attention weights, thus limiting their robustness under complex and dynamic lighting conditions. To address this limitation, we introduce the Illumination-Guided Attention Block (IGAB) [13], which incorporates both decomposed visible and infrared illumination features $\{F_{fe}^{vi}, F_{fe}^{ir}\}$ from DIN as attention modulators. These features dynamically adjust cross-modal attention maps, facilitating physics-aware feature enhancement across varying illumination scenarios.
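A compact sketch of such illumination-guided attention, loosely following the IG-MSA design of Retinexformer [13], is shown below. Single-head channel-wise attention and matching feature/illumination channel counts are simplifying assumptions rather than the exact block used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IGABSketch(nn.Module):
    """Illumination-guided attention, condensed: the value path is modulated by
    the DIN illumination features before channel-wise (transposed) self-attention."""
    def __init__(self, channels: int):
        super().__init__()
        self.to_qkv = nn.Conv2d(channels, channels * 3, 1, bias=False)
        self.proj = nn.Conv2d(channels, channels, 1, bias=False)

    def forward(self, x: torch.Tensor, f_illum: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=1)
        v = v * f_illum                                       # illumination modulation
        q = F.normalize(q.flatten(2), dim=-1)                 # (b, c, h*w)
        k = F.normalize(k.flatten(2), dim=-1)
        attn = torch.softmax(q @ k.transpose(1, 2), dim=-1)   # channel attention (b, c, c)
        out = (attn @ v.flatten(2)).reshape(b, c, h, w)
        return x + self.proj(out)                             # residual connection
```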
Dilated Multi-Scale Dense Convolution (DMDC). To capture multi-scale contextual features, DMDC combines three convolutional kernels (3 × 3, 5 × 5, 7 × 7) with three dilation rates (1, 2, 4), forming nine parallel branches. The outputs are concatenated and weighted via global average pooling and Softmax to generate attention maps, adaptively highlighting critical regions.
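The nine-branch arrangement can be sketched as follows. The kernel sizes and dilation rates follow the description above, while the per-branch scoring head and channel widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DMDCSketch(nn.Module):
    """Dilated Multi-Scale Dense Convolution, condensed: 3 kernel sizes x 3
    dilation rates = 9 parallel branches, re-weighted by GAP + Softmax attention."""
    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList()
        for k in (3, 5, 7):
            for d in (1, 2, 4):
                pad = d * (k - 1) // 2                # keeps the spatial size unchanged
                self.branches.append(nn.Sequential(
                    nn.Conv2d(channels, channels, k, padding=pad, dilation=d),
                    nn.ReLU(inplace=True)))
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.score = nn.Conv2d(channels, 1, 1)        # one scalar score per branch
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outs = [branch(x) for branch in self.branches]                   # 9 x (B, C, H, W)
        scores = torch.cat([self.score(self.gap(o)) for o in outs], 1)   # (B, 9, 1, 1)
        weights = torch.softmax(scores, dim=1)                           # attention over branches
        fused = sum(weights[:, i:i + 1] * outs[i] for i in range(len(outs)))
        return self.fuse(fused)
```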
Adaptive Fusion Module (AF). A learnable weighted fusion mechanism replaces naive concatenation or averaging. For inputs $F_1$ and $F_2$, the fusion is expressed as follows:
$$F_{out} = \alpha F_1 + (1 - \alpha) F_2$$
where $\alpha \in [0, 1]$ is a trainable parameter optimized via a softmax constraint to balance the multimodal feature contributions.
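A minimal implementation of this learnable weighting is shown below; a sigmoid is used here to keep the scalar weight inside (0, 1), which is equivalent to a two-way softmax constraint up to parameterization.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Adaptive Fusion (AF) sketch: F_out = a * F1 + (1 - a) * F2 with a learnable a."""
    def __init__(self):
        super().__init__()
        self.logit = nn.Parameter(torch.zeros(1))      # unconstrained parameter

    def forward(self, f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
        alpha = torch.sigmoid(self.logit)              # keeps alpha in (0, 1); equivalent
        return alpha * f1 + (1.0 - alpha) * f2         # to a two-way softmax constraint
```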

2.3.2. Modality-Specific Feature Disentanglement

The modality-specific encoders integrate parallel Basic Feature Extraction (BFE) and Detail Feature Extraction (DFE) branches, conceptually extended from CDDFuse’s dual-branch decomposition framework [19]. Visible and infrared modalities undergo independent processing using dedicated BFE/DFE streams to extract modality-specific features. BFE captures structural representations through spatial self-attention mechanisms. DFE preserves textural details via inverted residual blocks. This configuration enables specialized feature learning while maintaining inter-modal compatibility during fusion.
Basic Feature Extraction (BFE). BFE extracts global structural features from shallow shared features:
$$F_b^{vi} = B(F_s^{vi}), \quad F_b^{ir} = B(F_s^{ir})$$
where $B(\cdot)$ denotes the BFE module. It integrates spatial multi-head self-attention and gated depth-wise convolutions to model long-range dependencies while maintaining computational efficiency.
Detail Feature Extraction (DFE). DFE focuses on local texture preservation:
$$F_d^{vi} = D(F_s^{vi}), \quad F_d^{ir} = D(F_s^{ir})$$
where $D(\cdot)$ represents the DFE module. To achieve lossless feature extraction, an inverted residual block (INN) with reflection padding and cross-channel interaction is employed. Each INN unit performs nonlinear feature coupling [22].
This hierarchical architecture explicitly disentangles modality-shared structural features and modality-specific detail features, providing a robust foundation for subsequent fusion.

2.4. Dynamic Compensation Decoder

As illustrated in Figure 5, after the encoder disentangles features, the decoupled base and detail features are fused. To integrate cross-modal base (or detail) features, we reuse the BFE (or DFE) to obtain fused base and detail components:
$$F_B = B(F_b^{vi} + F_b^{ir}), \quad F_D = D(F_d^{vi} + F_d^{ir})$$
where $F_B$ and $F_D$ represent the fused base and detail components, respectively.
For the fusion of global base and detail features, a fusion decoder (Dynamic Compensation Decoder) is designed. Although infrared and visible features have been disentangled into base and detail components, simple fusion strategies (e.g., weighted averaging) lack sensitivity to feature discrepancies and fail to adequately exploit cross-modal complementarity. To address this, we propose a Differentiation-Driven Dynamic Perception Module (DPM) that achieves complementary feature fusion through differential feature disentanglement, multi-scale attention enhancement, and bidirectional feature compensation.
In the DPM, difference features are first generated through mutual subtraction:
$$\Delta_{bd}^{i} = F_b^{i} \ominus F_d^{i}, \quad \Delta_{db}^{i} = F_d^{i} \ominus F_b^{i}$$
where $F_b^{i}$ and $F_d^{i}$ are the input components of the i-th layer, $\Delta_{bd}^{i}$ and $\Delta_{db}^{i}$ denote the differential features, and $\ominus$ represents element-wise subtraction, capturing modality-specific discrepancies.
Subsequently, multi-layer convolutions and ReLU activation functions are used to enhance the sparsity and separability of the differential features. Channel attention maps are then generated through Global Average Pooling (GAP) followed by dimension-reduction and dimension-expansion convolutions, enabling the network to focus on the complementary regions between modalities:
$$\Delta_{bd}^{2} = \Delta_{bd}^{i} \odot S\big(W_u(W_d(GAP(\Delta_{bd}^{i})))\big), \quad \Delta_{db}^{2} = \Delta_{db}^{i} \odot S\big(W_u(W_d(GAP(\Delta_{db}^{i})))\big)$$
where $W_d$ and $W_u$ denote the dimension-reduction and dimension-expansion convolutions, respectively, and $S(\cdot)$ is the Sigmoid function. In order not to lose global spatial correlation, a 7 × 7 large convolution kernel is then used:
$$\Delta_{bd}^{3} = \Delta_{bd}^{2} \odot S\big(W_{c7}(\Delta_{bd}^{2})\big), \quad \Delta_{db}^{3} = \Delta_{db}^{2} \odot S\big(W_{c7}(\Delta_{db}^{2})\big)$$
where $W_{c7}$ denotes the 7 × 7 convolution. After that, cascaded dilated convolutions (dilation rates = 1, 2, 3) are adopted to capture multi-scale context:
$$\Delta_{bd}^{4} = \Delta_{bd}^{3} \oplus W_{m3}\big(W_{m2}(W_{m1}(\Delta_{bd}^{3}))\big), \quad \Delta_{db}^{4} = \Delta_{db}^{3} \oplus W_{m3}\big(W_{m2}(W_{m1}(\Delta_{db}^{3}))\big)$$
where $W_{mr}$ denotes a 3 × 3 convolution with dilation rate $r$.
Finally, the enhanced differential features are added back crosswise to the base and detail components:
$$F_b^{i+1} = F_b^{i} \oplus \Delta_{db}^{4}, \quad F_d^{i+1} = F_d^{i} \oplus \Delta_{bd}^{4}$$
where $\oplus$ denotes element-wise addition. The output components $F_b^{i+1}$ and $F_d^{i+1}$ of the i-th layer are thus obtained.
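A condensed sketch of the DPM is given below. For brevity, the two difference paths share one set of weights and the channel reduction ratio is an assumption; the overall order (difference features, channel attention, 7 × 7 spatial attention, cascaded dilated convolutions, crosswise residual compensation) follows the description above.

```python
import torch
import torch.nn as nn

class DPMSketch(nn.Module):
    """Differentiation-Driven Dynamic Perception Module, condensed: difference
    features -> channel attention -> 7x7 spatial attention -> cascaded dilated
    convolutions -> crosswise residual compensation of base/detail components."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.sparsify = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.channel_attn = nn.Sequential(            # GAP + reduction/expansion convs + Sigmoid
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial_attn = nn.Sequential(            # 7x7 large-kernel spatial attention
            nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())
        self.multi_scale = nn.Sequential(             # cascaded dilation rates 1, 2, 3
            nn.Conv2d(channels, channels, 3, padding=1, dilation=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=3, dilation=3))

    def _enhance(self, delta: torch.Tensor) -> torch.Tensor:
        d = self.sparsify(delta)                      # sharpen/sparsify the difference
        d = d * self.channel_attn(d)                  # focus on complementary channels
        d = d * self.spatial_attn(d)                  # keep global spatial correlation
        return d + self.multi_scale(d)                # add multi-scale context

    def forward(self, f_b: torch.Tensor, f_d: torch.Tensor):
        delta_bd, delta_db = f_b - f_d, f_d - f_b     # mutual subtraction
        f_b_next = f_b + self._enhance(delta_db)      # crosswise compensation
        f_d_next = f_d + self._enhance(delta_bd)
        return f_b_next, f_d_next
```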
In the final upsampling process, Reflection Padding is adopted instead of the traditional zero padding. The artifacts at the image edges are suppressed by mirroring symmetrical boundary pixels, and a controllable negative slope is introduced by using the LeakyReLU activation function to alleviate the problem of gradient sparsity in deep networks.
The decoder participates in Stage II and Stage III training:
Stage II: $\hat{vi} = DE(F_b^{vi}, F_d^{vi}), \quad \hat{ir} = DE(F_b^{ir}, F_d^{ir})$
Stage III: $F_u = DE(F_B, F_D)$
where $DE(\cdot)$ denotes the fusion decoder. $\hat{vi}$ and $\hat{ir}$ represent the visible and infrared images reconstructed from the decoupled features, which are used in the training of Stage II. $F_u$ denotes the final fused image, which is used in the training of Stage III.

2.5. Progressive Three-Stage Training Strategy

Since the optimization directions of low-light enhancement (Stage I), feature decoupling (Stage II), and multimodal fusion (Stage III) are different, direct end-to-end training will cause the loss terms of different tasks to interfere with each other during gradient backpropagation, reducing the convergence stability. To solve the optimization conflicts among multiple tasks, we propose a progressive three-stage training framework with task-specific loss functions.

2.5.1. Stage I: Degradation-Aware Illumination Separation

In Stage I, we dissociate the degraded illumination features from the paired visible and infrared images $\{vi, ir\}$, obtaining the low-light-enhanced visible and infrared images $\{vi_{en}, ir_{en}\}$. The total loss function is defined as follows:
$$L_{stage\,I} = \lambda_1 L_{vi}^{recon\,I} + \lambda_2 L_{ir}^{recon\,I} + \lambda_3 L_{smooth} + \lambda_4 L_{mc} + \lambda_5 L_{per}$$
where:
Reconstruction Loss ($L_{vi}^{recon\,I}$, $L_{ir}^{recon\,I}$). Ensures fidelity of the enhanced images:
$$L_{vi}^{recon\,I} = \| vi - vi_{recon} \|_2^2 + \mu_1 \big(1 - SSIM(vi, vi_{recon})\big), \quad L_{ir}^{recon\,I} = \| ir - ir_{en} \|_2^2 + \mu_1 \big(1 - SSIM(ir, ir_{en})\big)$$
where $SSIM(\cdot,\cdot)$ denotes the structural similarity calculation, $\mu_1 = 2$, and $\| \cdot \|_2^2$ denotes the squared L2-norm. Let $vi_{recon}$ denote the reconstructed visible-light image. In accordance with the Retinex theory introduced above, $vi_{recon}$ is defined as the element-wise multiplication of $vi_{en}$ and $l_d$:
$$vi_{recon} = vi_{en} \odot l_d$$
Illumination Smoothness Loss ($L_{smooth}$). Constrains the spatial continuity of $l_d$:
$$L_{smooth} = \left\| \frac{\nabla l_d}{\max(\nabla vi_{recon}, \varepsilon)} \right\|$$
where $\varepsilon$ is a small constant used to prevent the denominator from being zero (0.01 in this paper), and $\nabla(\cdot)$ represents the Sobel operator in both the x and y directions.
Mutual Consistency Loss ($L_{mc}$). Enforces physical consistency between illumination and reflectance:
$$L_{mc} = \left\| \nabla l_d \cdot \exp(\mu_2 \cdot \nabla vi_{en}) \right\|$$
$L_{mc}$ introduces an illumination–reflectance mutual-regularization term. At prominent object edges (where $\nabla vi_{en}$ is relatively large), the weighting term $\exp(\mu_2 \cdot \nabla vi_{en})$ becomes small, so abrupt illumination changes are only weakly penalized there, better simulating the abrupt illumination changes at object edges in the real world. Here, $\mu_2$ is a constant, set to −10 in this paper.
Perceptual Loss ($L_{per}$). Aligns the enhanced image with a histogram-equalized reference using VGG-19 features:
$$L_{per} = \left\| VGG(vi_{hist}) - VGG(vi_{en}) \right\|_1$$
where $\| \cdot \|_1$ represents the L1-norm, $vi_{hist}$ represents $vi$ after histogram equalization, and $VGG(\cdot)$ represents the pre-trained VGG-19 model. We extract the features of its conv3_3 and conv4_3 layers for the similarity calculation.
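The two Retinex-specific terms can be sketched as below, with Sobel gradients standing in for the ∇ operator. The mean-reduced norm and the broadcasting between the single-channel illumination map and the RGB image are implementation assumptions, while ε = 0.01 and μ₂ = −10 follow the text.

```python
import torch
import torch.nn.functional as F

def sobel_grad(x: torch.Tensor) -> torch.Tensor:
    """Per-channel gradient magnitude using Sobel filters in x and y."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device, dtype=x.dtype)
    ky = kx.t().contiguous()
    c = x.shape[1]
    kx = kx.reshape(1, 1, 3, 3).repeat(c, 1, 1, 1)
    ky = ky.reshape(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(x, kx, padding=1, groups=c)
    gy = F.conv2d(x, ky, padding=1, groups=c)
    return gx.abs() + gy.abs()

def smooth_loss(l_d: torch.Tensor, vi_recon: torch.Tensor, eps: float = 0.01) -> torch.Tensor:
    """Illumination smoothness: penalize illumination gradients except where the
    reconstructed image itself has strong edges (eps avoids division by zero)."""
    return (sobel_grad(l_d) / torch.clamp(sobel_grad(vi_recon), min=eps)).mean()

def mutual_consistency_loss(l_d: torch.Tensor, vi_en: torch.Tensor,
                            mu2: float = -10.0) -> torch.Tensor:
    """Mutual consistency: the exp(mu2 * grad(vi_en)) weight shrinks at strong
    reflectance edges, so abrupt illumination changes are tolerated there."""
    return (sobel_grad(l_d) * torch.exp(mu2 * sobel_grad(vi_en))).mean()
```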

2.5.2. Stage II: Feature Disentanglement

In the second stage, the paired visible and infrared images $\{vi, ir\}$ are first enhanced via DIN. The enhanced visible and infrared images $\{vi_{en}, ir_{en}\}$ are then fed into the feature encoder to disentangle base features $\{F_b^{vi}, F_b^{ir}\}$ and detail features $\{F_d^{vi}, F_d^{ir}\}$. To generate supervisory images, the base and detail features of the visible image $\{F_b^{vi}, F_d^{vi}\}$ (or infrared image $\{F_b^{ir}, F_d^{ir}\}$) are combined and input into the fusion decoder to reconstruct the corresponding visible image $\hat{vi}$ (or infrared image $\hat{ir}$). This process trains the encoder's capability for feature disentanglement. The total loss function in Stage II is formulated as follows:
$$L_{stage\,II} = \alpha_1 L_{vi}^{recon\,II} + \alpha_2 L_{ir}^{recon\,II} + \alpha_3 L_{decomp} + \alpha_4 L_{grad}^{II}$$
where:
Reconstruction Loss ($L_{vi}^{recon\,II}$, $L_{ir}^{recon\,II}$). Supervises the decoder to reconstruct the visible/infrared images from the disentangled features ($\mu_3 = 5$):
$$L_{vi}^{recon\,II} = \| vi - \hat{vi} \|_2^2 + \mu_3 \big(1 - SSIM(vi, \hat{vi})\big), \quad L_{ir}^{recon\,II} = \| ir - \hat{ir} \|_2^2 + \mu_3 \big(1 - SSIM(ir, \hat{ir})\big)$$
Feature Disentanglement Loss ($L_{decomp}$). During feature disentanglement, we posit that the base features extracted from the visible and infrared images predominantly encode shared, modality-invariant information (highly correlated), while the detail features capture modality-specific characteristics (weakly correlated). To explicitly enforce this decomposition, a feature disentanglement loss is introduced:
$$L_{decomp} = \frac{(L_{CC}^{D})^2}{L_{CC}^{B}} = \frac{\big(CC(F_d^{vi}, F_d^{ir})\big)^2}{CC(F_b^{vi}, F_b^{ir}) + \epsilon}$$
where $CC(\cdot)$ computes the Pearson correlation coefficient and $\epsilon$ is set to 1.01 to ensure that this loss function is always positive.
Gradient Consistency Loss ($L_{grad}^{II}$). To further preserve high-frequency details, we introduce an additional gradient consistency loss:
$$L_{grad}^{II} = \left\| \nabla vi - \nabla \hat{vi} \right\|_1$$
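The correlation-driven disentanglement term can be sketched as follows. Flattening each feature map per sample and averaging the Pearson coefficients over the batch is an implementation assumption, while ε = 1.01 follows the text.

```python
import torch

def correlation(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Pearson correlation coefficient between two feature tensors,
    computed per sample over flattened features and averaged over the batch."""
    a = a.flatten(1)
    b = b.flatten(1)
    a = a - a.mean(dim=1, keepdim=True)
    b = b - b.mean(dim=1, keepdim=True)
    cc = (a * b).sum(dim=1) / (a.norm(dim=1) * b.norm(dim=1) + eps)
    return cc.mean()

def decomp_loss(fd_vi, fd_ir, fb_vi, fb_ir, eps: float = 1.01) -> torch.Tensor:
    """Squared detail correlation over (base correlation + eps): minimizing it
    decorrelates modality-specific detail features while keeping base features correlated."""
    return correlation(fd_vi, fd_ir) ** 2 / (correlation(fb_vi, fb_ir) + eps)
```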

2.5.3. Stage III: Cross-Modal Feature Fusion

The paired visible and infrared images $\{vi, ir\}$ are sequentially processed through DIN and the pre-trained encoder to obtain the decomposed features $\{F_b^{vi}, F_b^{ir}, F_d^{vi}, F_d^{ir}\}$. The cross-modal base features $\{F_b^{vi}, F_b^{ir}\}$ (or detail features $\{F_d^{vi}, F_d^{ir}\}$) are then fused via the BFE (or DFE), producing the fused features $\{F_B, F_D\}$. Finally, these fused features are fed into the fusion decoder to generate the fused image $F_u$. The total loss function in Stage III is formulated as follows:
$$L_{stage\,III} = \alpha_3 L_{decomp} + \alpha_5 L_{int} + \alpha_6 L_{grad}^{III}$$
where:
Feature Disentanglement Loss ($L_{decomp}$). Remains consistent with Stage II.
Intensity Consistency Loss ($L_{int}$). Ensures the fused image preserves the salient thermal and visible regions:
$$L_{int} = \frac{1}{HW} \left\| F_u - \max(vi, ir) \right\|_1$$
where $H$ and $W$ are the height and width of the image. Through the maximum-value operation in $L_{int}$, the fused image is guided to retain both the high-frequency textures of the visible image and the thermal radiation features of the infrared image.
Multimodal Gradient Loss ($L_{grad}^{III}$). Aligns the fused gradients with the optimal edges from both modalities:
$$L_{grad}^{III} = \frac{1}{HW} \left\| \nabla F_u - \max(\nabla vi, \nabla ir) \right\|_1$$
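A sketch of the two Stage III terms is given below. Mean-reduced L1 losses stand in for the 1/(HW) normalization, Sobel filters stand in for ∇, and broadcasting is relied on to align the three-channel visible and single-channel infrared inputs; all of these are assumptions rather than the exact released implementation.

```python
import torch
import torch.nn.functional as F

def sobel_grad(x: torch.Tensor) -> torch.Tensor:
    """Per-channel Sobel gradient magnitude (stand-in for the nabla operator)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device, dtype=x.dtype)
    ky = kx.t().contiguous()
    c = x.shape[1]
    kx = kx.reshape(1, 1, 3, 3).repeat(c, 1, 1, 1)
    ky = ky.reshape(1, 1, 3, 3).repeat(c, 1, 1, 1)
    return (F.conv2d(x, kx, padding=1, groups=c).abs()
            + F.conv2d(x, ky, padding=1, groups=c).abs())

def intensity_loss(fu: torch.Tensor, vi: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
    """Pull the fused image toward the pixel-wise maximum of the two sources."""
    return F.l1_loss(fu, torch.maximum(vi, ir))

def gradient_loss(fu: torch.Tensor, vi: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
    """Pull fused gradients toward the stronger of the two source gradients."""
    return F.l1_loss(sobel_grad(fu), torch.maximum(sobel_grad(vi), sobel_grad(ir)))
```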

3. Experiments and Results

3.1. Datasets and Experimental Settings

Datasets. Four datasets covering different scenarios are selected: LLVIP [23], TNO [24], MSRS [3], and RoadScene [25]. The LLVIP dataset is a traffic surveillance dataset containing 1200 pairs of street scene images for validating dynamic illumination adaptation. The TNO dataset contains infrared-visible image pairs of nighttime military scenes with a resolution of 640 × 480, including extreme low-light conditions (<5 lux). The MSRS dataset is a multispectral road scene dataset containing 2000 image pairs, covering complex light interference scenarios such as vehicle headlights and street lamps. The RoadScene dataset is an autonomous driving scenario dataset (800 image pairs) that includes harsh conditions like fog and rainy nights. This experiment aims to verify the model’s enhancement capability under extremely low-light conditions. Therefore, we randomly selected images with illumination < 10 lux from each dataset to form the test set. The training set includes both normally lit and low-light images to enhance the model’s generalization ability, while we employed random rotation and gamma transformation to simulate different low-light intensities and added Gaussian noise to improve robustness. Our results are compared against seven State-of-the-Art (SOTA) fusion methods: PIAFusion [3], MUFusion [26], SeAFusion [27], CMTFusion [28], CrossFuse [6], DIVFusion [1], and CDDFuse [19].
Evaluation Metrics. Eight widely recognized metrics are adopted to quantify fusion performance from multiple perspectives: entropy (EN), standard deviation (SD), spatial frequency (SF), average gradient (AG), edge intensity (EI), visual information fidelity (VIF), mutual information (MI), and QAB/F [29,30,31].
Implementation Details. The experiments are conducted on a machine equipped with an NVIDIA GeForce RTX 4090 GPU using the PyTorch 2.0 framework. During the preprocessing stage, training samples were randomly cropped into 128 × 128 patches. The training protocol consisted of 260 epochs with a three-stage progressive training scheme (Stage I: 100 epochs, Stage II: 40 epochs, Stage III: 120 epochs), with a batch size of 8. We employed the Adam optimizer (β1 = 0.9, β2 = 0.999) with an initial learning rate of 1 × 10⁻⁴ and a cosine decay strategy. For network hyperparameter configuration, λ1 to λ5 were set to 1000, 2000, 7, 9, and 40, respectively, while α1 to α6 were configured as 1, 1, 2, 10, 1, and 10.
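The reported optimization setup translates into roughly the following configuration sketch; whether each stage re-initializes its optimizer and scheduler is an assumption not stated in the text.

```python
import torch

# Sketch of the reported optimization setup; `params` would be the trainable
# parameters of the sub-networks active in the given stage.
def make_stage_optimizer(params, epochs: int):
    optimizer = torch.optim.Adam(params, lr=1e-4, betas=(0.9, 0.999))
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    return optimizer, scheduler

STAGE_EPOCHS = {"I": 100, "II": 40, "III": 120}   # 260 epochs in total
BATCH_SIZE, PATCH_SIZE = 8, 128                   # random 128 x 128 crops
LAMBDAS = (1000, 2000, 7, 9, 40)                  # lambda_1..lambda_5 (Stage I weights)
ALPHAS = (1, 1, 2, 10, 1, 10)                     # alpha_1..alpha_6 (Stage II/III weights)
```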

3.2. Comparative Studies

3.2.1. Qualitative Analysis

Visual comparisons across four benchmark datasets (Figure 6, Figure 7, Figure 8 and Figure 9) demonstrate our method's superior fusion quality through effective integration of infrared thermal radiation signatures and visible-light textural details. As evidenced in Figure 6 (LLVIP #34, #13) and Figure 7 (TNO #25), conventional methods exhibit critical limitations in shadowed regions: PIAFusion, MUFusion, CMTFusion, SeAFusion, and CDDFuse suffer from texture degradation (e.g., loss of ground brick patterns) because unconstrained thermal features dominate without illumination decomposition, resulting in over-smoothed low-light textures and significant detail loss.
In contrast, D3Fusion simultaneously enhances pedestrians in extremely low-light regions and preserves lane markings through degraded illumination separation via DIN decomposition combined with dynamic feature weighting using illumination-guided attention. This approach effectively mitigates the inherent conflict between thermal saliency retention and texture preservation through integrated degraded illumination separation (DIN decomposition) and dynamic feature weighting (IGAB), further enhanced by differential feature compensation. Furthermore, our framework suppresses thermal boundary diffusion artifacts and illumination-induced halo effects in Figure 7 (TNO #19, #25: human contours in vegetation and near streetlights) through multimodal feature equilibrium.
The dynamic adaptation capability is exemplified in complex lighting environments such as Figure 8 (MSRS #00943N: sky-forest boundaries) and Figure 9 (RoadScene #FLIR_08284: vehicle-light interactions). Here, the Illumination-Guided Attention Block (IGAB) adaptively balances multi-source lighting influences, effectively mitigating unnatural artifacts near light sources (e.g., color distortion) and exposure imbalance prevalent in DIVFusion and PIAFusion, thereby maintaining edge sharpness and spatial consistency under varying illumination conditions.

3.2.2. Quantitative Analysis

Eight metrics across four benchmark datasets (LLVIP, MSRS, TNO, and RoadScene) are used for evaluating the proposed method. To ensure reproducibility, three independent replicate fusion experiments were performed on each dataset using identical training weights and hyperparameters. The standard deviation of key evaluation metrics (i.e., EN, SD, SF, and VIF) across these replicates was found to be ≤1%, thus validating the stability of outputs under consistent experimental conditions. As shown in Table 1, quantitative comparisons demonstrate superior performance of our approach under low-light conditions. Significant advantages in information entropy (EN) and standard deviation (SD) confirm its exceptional detail preservation and contrast enhancement capabilities. Dominance in dynamic range metrics (SF, EI, AG) indicates remarkable enhancement in low-light textures and effective extraction of high-frequency features. Although our method does not achieve optimal results in structural fidelity metrics (MI, QAB/F), it maintains competitive performance. This outcome aligns with our fundamental insight: Low-light enhancement constitutes information reconstruction rather than simple replication. By prioritizing visual discriminability, the approach may reduce correlation with source images. Simultaneously, the proposed model actively suppresses thermal noise edges in infrared images to achieve clearer textures of dynamic targets. The results show our fused images exhibit enhanced clarity and more appropriate brightness in low-light scenarios, yielding superior visual quality. Therefore, the slight reductions in mutual information (MI) and edge preservation (QAB/F) represent an acceptable compromise.

3.3. Ablation Study

To validate the contribution of key components and training strategies within the proposed D3Fusion framework, comprehensive ablation experiments were conducted on the LLVIP test set, as detailed in Table 2.

3.3.1. Impact of Core Modules

Removing the Illumination-Guided Attention Block (IGAB) reduced performance in both entropy (EN) and standard deviation (SD). This degradation confirms IGAB's critical role in dynamically modulating cross-modal attention weights through real-time illumination maps, which enhances thermal target contrast in severely underexposed areas while simultaneously suppressing overexposure artifacts.
Omitting the Dilated Multi-Scale Dense Convolution (DMDC) module significantly impacted feature representation capabilities, with observable reductions in EN and SD metrics. This performance drop stems from DMDC's role in capturing multi-scale contextual features through parallel convolutional pathways with varying receptive fields.
Exclusion of the Differentiation-Driven Dynamic Perception Module (DPM) caused significant degradation in visual information fidelity (VIF). This verifies DPM's critical function in enabling bidirectional gradient compensation for high-frequency detail preservation.
The simultaneous removal of all three core components (IGAB + DMDC + DPM) resulted in severe performance collapse across all quantitative measures, establishing their synergistic operation within the illumination guidance → multi-scale encoding → differential compensation pipeline.

3.3.2. Training Strategy Analysis

The three-stage training paradigm proved indispensable for stable optimization. End-to-end training without progressive stages caused measurable performance degradation across multiple metrics. This confirms that sequential decoupling of illumination correction (Stage I), feature disentanglement (Stage II), and dynamic fusion (Stage III) resolves fundamental optimization conflicts arising from competing objectives. A critical incompatibility exists between low-light enhancement (Stage I) and feature disentanglement (Stage II): the illumination smoothness loss ( L s m o o t h ) constrains illumination continuity but simultaneously suppresses the detail separation capability of feature disentanglement loss ( L d e c o m p ) . This conflict manifests as compromised texture preservation when both losses are optimized concurrently.
Skipping the feature disentanglement stage (Stage II) significantly degraded multiple key metrics, indicating insufficient modality-specific feature separation by the encoder. This inadequacy substantially compromises model convergence stability during training and reduces overall robustness. These findings validate the necessity of explicit decomposition learning prior to fusion, confirming that staged optimization effectively prevents mutually inhibitory gradient updates.

3.3.3. Loss Function Contributions

Removal of L s m o o t h reduced visual information fidelity (VIF) and introduced visible artifacts in high-illumination regions due to non-smooth illumination estimation. This degradation confirms the loss term’s critical role in enforcing illumination map continuity through curvature constraints, preventing overexposure artifacts while preserving natural luminance transitions.
Elimination of L d e c o m p significantly impaired feature separation capability, evidenced by measurable degradation in mutual information (MI) metrics. The loss function’s design, enforcing high correlation between base features while minimizing detail feature correlation, proves essential for effective modality-specific feature extraction. This clean decoupling substantially enhances subsequent cross-modal fusion quality by reducing thermal-texture confusion in overlapping spectral regions.
These experiments conclusively demonstrate that each component synergistically contributes to robust fusion performance under extreme low-light conditions.

3.4. DIN Decomposition Visualization

Figure 10 illustrates the decomposition results of DIN. The illumination separation process effectively decouples degraded lighting ( l d ) from reflectance components, enhancing both visible ( v i e n ) and infrared ( i r e n ) modalities. Critical features obscured in raw inputs ( v i / i r ) become discernible post-enhancement, particularly in 0–5 lux regions (highlighted areas). This explicit separation provides a robust foundation for subsequent feature disentanglement and fusion.
These results collectively validate our framework’s superiority in addressing low-light degradation, modality conflicts, and dynamic illumination challenges inherent to nighttime fusion tasks.

3.5. Computational Efficiency

We tested the model on NVIDIA RTX 4090D (24 GB) using 256 × 256 image samples from the datasets. The average inference time per image is 41.3 ms (corresponding to 24.2 frames per second), with Total FLOPs per frame reaching 450.5 GFLOPs. These metrics reflect the model’s computational demands while demonstrating real-time inference potential on high-performance hardware. The model is implemented based on PyTorch, which supports deployment on various platforms. The lightweight design of key modules (e.g., the multi-scale differential compensation decoder) provides potential for compression and deployment on resource-constrained devices. In future work, we aim to further explore model compression techniques to reduce computational overhead, while conducting targeted research to adapt the model to specific application scenarios. This will involve hardware–software co-optimization and real-world environment testing to broaden its applicability across diverse deployment scenarios.

4. Conclusions and Discussion

This paper proposes D3Fusion, an illumination-aware progressive fusion framework for infrared-visible images in extreme low-light scenarios. By establishing a collaborative optimization mechanism between illumination correction and feature disentanglement, our framework effectively addresses the core challenges of texture degradation, modality conflicts, and dynamic illumination adaptability in extreme low-light conditions. The core innovations include:
  • A unified illumination-feature co-optimization architecture that integrates Retinex-based degradation separation with attention-guided feature fusion, enabling joint enhancement of low-light images and decoupling of cross-modal characteristics.
  • An illumination-guided disentanglement encoder and multi-scale differential compensation decoder, which dynamically modulate attention weights using real-time illumination maps while enhancing complementary feature extraction via multi-scale differential perception.
  • A three-stage progressive training paradigm (degradation recovery, feature disentanglement, and adaptive fusion) that resolves optimization conflicts inherent in end-to-end frameworks, achieving balanced enhancement in thermal saliency and texture preservation.
Extensive experiments on four benchmark datasets (TNO, LLVIP, MSRS, RoadScene) demonstrate that D3Fusion outperforms State-of-the-Art methods in both visual quality and objective metrics under low-light conditions. The proposed Decomposition–Disentanglement–Dynamic Compensation framework effectively suppresses spectral distortion artifacts while enhancing salient information, establishing a robust solution for multimodal perception in illumination-degraded environments with significant applicability for nighttime autonomous systems.
Despite its strong performance, D3Fusion exhibits limitations in certain challenging scenarios. One representative failure case occurs in environments with complex lighting and poor image quality: in extremely dark, shadowed regions, residual noise may be misidentified as object contours during the illumination recognition and correction process. This misinterpretation leads to inappropriate enhancement, where noise artifacts are erroneously amplified as structural edges. An example of this artifact is visualized in Figure 11, demonstrating the model’s sensitivity to severe noise in low-light shadows. Additionally, slight halo artifacts may emerge near intense glare sources due to overcompensation during illumination correction, accompanied by blurred object boundaries. Finally, the model’s computational efficiency (24 FPS on RTX 4090) restricts deployment on resource-constrained edge devices. These limitations will motivate future work on adaptive illumination adjustment, artifact suppression, and lightweight network optimization.

Author Contributions

Conceptualization, W.Y. and X.C.; Methodology, W.Y.; Software, W.Y.; Validation, W.Y.; Formal analysis, W.Y., Y.L. and X.C.; Investigation, W.Y.; Resources, W.Y.; Data curation, W.Y.; Writing—original draft, W.Y.; Writing—review & editing, W.Y., Y.L. and X.C.; Visualization, W.Y.; Supervision, Y.L. and X.C.; Project administration, Y.L. and X.C.; Funding acquisition, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (62203312) and the Natural Science Foundation of Sichuan Province of China (2024NSFSC1484).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. The datasets in this article are not readily available due to technical and time limitations. However, the data are available from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IR    Infrared Images
VIS    Visible Images
DIN    Decomposition Illumination Network
SNR    Signal-to-Noise Ratio
SSR    Single-Scale Retinex
MSR    Multi-Scale Retinex
SIDNet    Scene-Illumination Disentangled Network
ir    Infrared Input Images
vi    Visible Input Images
CAM    Channel Attention Modules
CNN    Convolutional Neural Networks
EN    Encoder
DE    Decoder
IGAB    Illumination-Guided Attention Block
DMDC    Dilated Multi-Scale Dense Convolution
SE    Shared Encoder
BFE    Base Feature Encoders
DFE    Detail Feature Encoders
AF    Adaptive Fusion
INN    Inverted Residual Block
DPM    Differential Perception Module
ReLU    Rectified Linear Unit
GAP    Global Average Pooling
LeakyReLU    Leaky Rectified Linear Unit
VGG    Visual Geometry Group

References

  1. Tang, L.; Xiang, X.; Zhang, H.; Gong, M.; Ma, J. DIVFusion: Darkness-free infrared and visible image fusion. Inf. Fusion 2023, 91, 477–493. [Google Scholar] [CrossRef]
  2. Zhang, X.; Demiris, Y. Visible and Infrared Image Fusion Using Deep Learning. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 10535–10554. [Google Scholar] [CrossRef]
  3. Tang, L.; Yuan, J.; Zhang, H.; Jiang, X.; Ma, J. PIAFusion: A progressive infrared and visible image fusion network based on illumination aware. Inf. Fusion 2022, 83, 79–92. [Google Scholar] [CrossRef]
  4. Jin, Y.; Yang, W.; Tan, R.T. Unsupervised Night Image Enhancement: When Layer Decomposition Meets Light-Effects Suppression. In Proceedings of the 17th European Conference on Computer Vision (ECCV), Tel Aviv, Israel, 23–27 October 2022; pp. 404–421. [Google Scholar]
  5. Zhang, S.; Tao, Z.; Lin, S. WaveletFormerNet: A Transformer-based wavelet network for real-world non-homogeneous and dense fog removal. Image Vis. Comput. 2024, 146, 105014. [Google Scholar] [CrossRef]
  6. Li, H.; Wu, X.-J. CrossFuse: A novel cross attention mechanism based infrared and visible image fusion approach. Inf. Fusion 2024, 103, 102147. [Google Scholar] [CrossRef]
  7. Land, E.H.; McCann, J.J. Lightness and Retinex Theory. J. Opt. Soc. Am. 1971, 61, 1–11. [Google Scholar] [CrossRef] [PubMed]
  8. Wang, S.H.; Zheng, J.; Hu, H.M.; Li, B. Naturalness Preserved Enhancement Algorithm for Non-Uniform Illumination Images. IEEE Trans. Image Process. 2013, 22, 3538–3548. [Google Scholar] [CrossRef]
  9. Fu, X.Y.; Zeng, D.L.; Huang, Y.; Zhang, X.P.; Ding, X.H. A weighted variational model for simultaneous reflectance and illumination estimation. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 27–30 June 2016; IEEE: New York, NY, USA, 2016; pp. 2782–2790. [Google Scholar]
  10. Jobson, D.J.; Rahman, Z.U.; Woodell, G.A. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462. [Google Scholar] [CrossRef] [PubMed]
  11. Jobson, D.J.; Rahman, Z.U.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar] [CrossRef]
  12. Zhang, Y.H.; Zhang, J.W.; Guo, X.J. Kindling the Darkness: A Practical Low-light Image Enhancer. In Proceedings of the 27th ACM International Conference on Multimedia (MM), Nice, France, 21–25 October 2019; ACM: New York, NY, USA, 2019; pp. 1632–1640. [Google Scholar]
  13. Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–6 October 2023; IEEE: New York, NY, USA, 2023; pp. 12470–12479. [Google Scholar]
  14. Li, M.D.; Liu, J.Y.; Yang, W.H.; Sun, X.Y.; Guo, Z.M. Structure-Revealing Low-Light Image Enhancement Via Robust Retinex Model. IEEE Trans. Image Process. 2018, 27, 2828–2841. [Google Scholar] [CrossRef]
  15. Zhang, Y.; Guo, X.; Ma, J.; Liu, W.; Zhang, J. Beyond Brightening Low-light Images. Int. J. Comput. Vis. 2021, 129, 1013–1037. [Google Scholar] [CrossRef]
  16. Wang, C.; Pu, Y.; Zhao, Z.; Nie, R.; Cao, J.; Xu, D. FCLFusion: A frequency-aware and collaborative learning for infrared and visible image fusion. Eng. Appl. Artif. Intell. 2024, 137, 109192. [Google Scholar] [CrossRef]
  17. Xu, H.; Ma, J.; Jiang, J.; Guo, X.; Ling, H. U2Fusion: A Unified Unsupervised Image Fusion Network. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 502–518. [Google Scholar] [CrossRef] [PubMed]
  18. Ma, J.Y.; Tang, L.F.; Fan, F.; Huang, J.; Mei, X.G.; Ma, Y. SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer. IEEE-CAA J. Autom. Sin. 2022, 9, 1200–1217. [Google Scholar] [CrossRef]
  19. Zhao, Z.; Bai, H.; Zhang, J.; Zhang, Y.; Xu, S.; Lin, Z.; Timofte, R.; Van Gool, L. CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; IEEE: New York, NY, USA, 2023; pp. 5906–5916. [Google Scholar]
  20. Ma, J.; Yu, W.; Liang, P.; Li, C.; Jiang, J. FusionGAN: A generative adversarial network for infrared and visible image fusion. Inf. Fusion 2019, 48, 11–26. [Google Scholar] [CrossRef]
  21. Wang, J.; Xi, X.; Li, D.; Li, F. FusionGRAM: An Infrared and Visible Image Fusion Framework Based on Gradient Residual and Attention Mechanism. IEEE Trans. Instrum. Meas. 2023, 72, 5005412. [Google Scholar] [CrossRef]
  22. Zhou, M.; Fu, X.Y.; Huang, J.; Zhao, F.; Liu, A.P.; Wang, R.J. Effective Pan-Sharpening With Transformer and Invertible Neural Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5406815. [Google Scholar] [CrossRef]
  23. Jia, X.Y.; Zhu, C.; Li, M.Z.; Tang, W.Q.; Zhou, W.L.; Soc, I.C. LLVIP: A Visible-infrared Paired Dataset for Low-light Vision. In Proceedings of the 18th IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 3489–3497. [Google Scholar]
  24. Toet, A.; Hogervorst, M.A. Progress in color night vision. Opt. Eng. 2012, 51, 0109010. [Google Scholar] [CrossRef]
  25. Xu, H.; Ma, J.Y.; Le, Z.L.; Jiang, J.J.; Guo, X.J.; Assoc Advancement Artificial, I. FusionDN: A Unified Densely Connected Network for Image Fusion. In Proceedings of the 34th AAAI Conference on Artificial Intelligence/32nd Innovative Applications of Artificial Intelligence Conference/10th AAAI Symposium on Educational Advances in Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 12484–12491. [Google Scholar]
  26. Cheng, C.; Xu, T.; Wu, X.-J. MUFusion: A general unsupervised image fusion network based on memory unit. Inf. Fusion 2023, 92, 80–92. [Google Scholar] [CrossRef]
  27. Tang, L.; Yuan, J.; Ma, J. Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network. Inf. Fusion 2022, 82, 28–42. [Google Scholar] [CrossRef]
  28. Park, S.; Vien, A.G.; Lee, C. Cross-Modal Transformers for Infrared and Visible Image Fusion. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 770–785. [Google Scholar] [CrossRef]
  29. Liu, J.Y.; Wu, G.Y.; Liu, Z.; Wang, D.; Jiang, Z.Y.; Ma, L.; Zhong, W.; Fan, X.; Liu, R.S. Infrared and Visible Image Fusion: From Data Compatibility to Task Adaption. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 2349–2369. [Google Scholar] [CrossRef] [PubMed]
  30. Ma, J.Y.; Ma, Y.; Li, C. Infrared and visible image fusion methods and applications: A survey. Inf. Fusion 2019, 45, 153–178. [Google Scholar] [CrossRef]
  31. Zhang, X. Deep Learning-Based Multi-Focus Image Fusion: A Survey and a Comparative Study. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 4819–4838. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Comparative visualization of nighttime street scenes: (a) Visible image (VIS); (b) Infrared image (IR); (c) Fused image (generated by the proposed D3Fusion). The fused image integrates texture details (from VIS) and thermal targets (from IR) for enhanced nighttime perception.
Figure 2. Overall architecture of the proposed illumination-aware progressive fusion network. The framework comprises three core components: (a) Decomposition Illumination Net (DIN) for joint low-light enhancement, (b) Cross-Modal Disentangled Encoder for feature decomposition, and (c) Differential Compensation Decoder.
Figure 3. Architecture of the Decomposition Illumination Net (DIN). The three-branch architecture processes concatenated visible-infrared inputs through parallel convolution streams with Channel Attention Modules (CAM).
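The caption above can be made concrete with a short sketch: three parallel convolution streams over the concatenated VIS+IR pair, each closed by a channel-attention module. The squeeze-and-excitation form of the CAM, the branch depth, and the three output heads (enhanced VIS, enhanced IR, illumination map) are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (one common CAM form; details assumed)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))

class DINSketch(nn.Module):
    """Three parallel conv streams with CAM over concatenated VIS+IR (Figure 3, simplified)."""
    def __init__(self, ch=16):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), ChannelAttention(ch),
            )
        self.branches = nn.ModuleList([branch() for _ in range(3)])
        # Output heads are assumptions: enhanced VIS, enhanced IR, and an illumination map.
        self.head_vis = nn.Conv2d(ch, 1, 3, padding=1)
        self.head_ir = nn.Conv2d(ch, 1, 3, padding=1)
        self.head_illum = nn.Sequential(nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, vis, ir):
        x = torch.cat([vis, ir], dim=1)            # concatenate single-channel inputs
        f1, f2, f3 = (b(x) for b in self.branches)
        return self.head_vis(f1), self.head_ir(f2), self.head_illum(f3)
```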
Figure 4. Architecture of the Disentanglement Encoder. The dual-path network combines a shared encoder with illumination-guided attention (IGAB) and multi-scale dilated convolution (DMDC), and modality-specific branches for base/detail feature extraction. Base Feature Encoders (BFE) employ spatial self-attention, while Detail Feature Encoders (DFE) use inverted residual blocks for edge-preserving decomposition.
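As a point of reference for the Detail Feature Encoder, the sketch below shows a generic inverted residual block (expand, depthwise filter, project, plus a skip connection). Only the block type is taken from the caption; the expansion ratio, normalization, and activation choices are assumptions.

```python
import torch.nn as nn

class InvertedResidual(nn.Module):
    """MobileNetV2-style inverted residual block, as used conceptually by the
    Detail Feature Encoder (expansion ratio and normalization are assumptions)."""
    def __init__(self, channels, expansion=2):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),            # expand
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1,
                      groups=hidden, bias=False),                  # depthwise spatial filtering
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),            # project back
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)  # residual path helps preserve fine edge detail
```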
Figure 5. Architecture of the Dynamic Compensation Decoder. Multi-Scale Differential Perception Module (DPM) enabling bidirectional feature refinement and attention-gated fusion.
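The differential-compensation idea in Figure 5 can be illustrated at a single scale: the cross-modal feature difference drives an attention gate that injects the information each branch is missing. The module below is a hedged single-scale reduction of the multi-scale DPM, with the gating form assumed rather than taken from the paper.

```python
import torch
import torch.nn as nn

class DifferentialCompensation(nn.Module):
    """Single-scale sketch of differential compensation: the cross-modal feature
    difference is gated and fed back into both branches (bidirectional refinement).
    The paper's DPM is multi-scale; this reduction is an assumption."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, f_vis, f_ir):
        diff = f_ir - f_vis              # what IR carries that VIS lacks (and vice versa)
        g = self.gate(torch.abs(diff))   # attention gate from the difference magnitude
        f_vis_comp = f_vis + g * diff    # compensate the VIS branch with gated IR residual
        f_ir_comp = f_ir - g * diff      # symmetrically refine the IR branch
        return f_vis_comp, f_ir_comp
```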
Figure 6. Visual comparison for “#34” and “#13” in the LLVIP dataset.
Figure 7. Visual comparison for “#25” and “#19” in the TNO dataset.
Figure 8. Visual comparison for “#00004N” and “#00943N” in the MSRS dataset.
Figure 9. Visual comparison for “#FLIR_07206” and “#FLIR_08284” in the RoadScene dataset.
Figure 10. DIN decomposition visualization.
Figure 11. Failure cases of “#00714N” and “#00947N” in the MSRS dataset. (a) VIS; (b) IR; (c) Fusion result from CDDFuse (without illumination correction); (d) Fusion result from DIVFusion (with illumination correction); (e) Ours (with illumination correction). During illumination correction, residual noise is misidentified as structural contours, resulting in erroneous enhancement in low-light regions.
Table 1. Quantitative results of the IVF task. Boldface and underline show the best and second-best values, respectively. ↑ indicates higher values are better.
Dataset: LLVIP Infrared-Visible Fusion Dataset

Method       EN ↑    SD ↑    SF ↑    AG ↑   EI ↑    VIF ↑   MI ↑   QAB/F
CDDFuse      7.26    50.64   17.29   4.78   49.73   0.98    3.12   0.53
DIVFusion    7.50    52.90   16.12   5.32   55.34   0.98    2.64   0.45
PIAFusion    7.24    49.55   17.72   5.24   54.58   0.99    2.92   0.60
MUFusion     7.07    47.46   13.48   4.62   50.53   0.76    2.38   0.46
SeAFusion    7.31    49.95   16.55   4.91   51.58   0.93    2.89   0.53
CrossFuse    6.62    31.48   12.42   3.40   35.42   0.83    2.51   0.49
CMTFusion    7.06    41.27   11.28   3.45   35.88   0.87    2.71   0.50
Ours         7.53    54.68   20.56   5.85   58.11   1.00    2.94   0.55

Dataset: MSRS Infrared-Visible Fusion Dataset

Method       EN ↑    SD ↑    SF ↑    AG ↑   EI ↑    VIF ↑   MI ↑   QAB/F
CDDFuse      6.39    41.89   9.50    2.90   31.77   1.06    3.94   0.54
DIVFusion    7.27    49.28   12.13   4.38   47.95   0.90    2.63   0.25
PIAFusion    6.25    41.32   9.54    2.91   31.93   1.04    3.65   0.51
MUFusion     5.67    30.84   8.22    2.46   27.54   0.67    1.86   0.37
SeAFusion    6.37    40.91   9.12    2.85   31.52   1.04    3.58   0.51
CrossFuse    6.22    35.18   7.84    2.25   25.07   0.91    2.93   0.46
CMTFusion    5.79    31.06   6.38    2.04   22.50   0.82    2.59   0.46
Ours         7.31    52.39   12.63   4.53   49.71   1.07    3.68   0.49

Dataset: TNO Infrared-Visible Fusion Dataset

Method       EN ↑    SD ↑    SF ↑    AG ↑   EI ↑    VIF ↑   MI ↑   QAB/F
CDDFuse      7.07    42.64   13.74   5.08   52.01   0.83    2.95   0.46
DIVFusion    7.36    49.53   16.46   6.76   68.65   0.71    2.40   0.33
PIAFusion    6.95    39.50   12.37   4.77   50.42   0.85    3.02   0.52
MUFusion     7.16    45.14   10.88   5.14   58.69   0.60    1.95   0.37
SeAFusion    7.04    41.78   12.66   5.04   54.02   0.74    2.78   0.43
CrossFuse    6.87    35.70   11.02   4.09   41.58   0.79    2.86   0.41
CMTFusion    6.90    34.66   10.75   4.16   42.83   0.72    2.32   0.42
Ours         7.29    47.18   17.36   6.80   69.89   0.87    2.83   0.47

Dataset: RoadScene Infrared-Visible Fusion Dataset

Method       EN ↑    SD ↑    SF ↑    AG ↑   EI ↑    VIF ↑   MI ↑   QAB/F
CDDFuse      7.48    55.39   12.71   4.93   56.80   0.84    3.30   0.52
DIVFusion    7.54    54.43   12.59   4.56   51.16   0.76    3.22   0.32
PIAFusion    7.14    43.42   11.94   4.52   52.67   0.82    3.07   0.50
MUFusion     7.21    45.24   10.90   4.77   57.02   0.71    2.23   0.40
SeAFusion    7.39    53.42   12.54   4.97   57.13   0.83    3.26   0.49
CrossFuse    7.05    39.48   8.62    3.18   36.85   0.73    3.21   0.37
CMTFusion    7.26    45.38   8.47    3.23   37.49   0.81    3.19   0.44
Ours         7.58    56.18   12.96   4.95   57.06   0.85    3.27   0.47
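The no-reference metrics in Table 1 follow widely used definitions; for readers reproducing the comparison, a minimal NumPy sketch of EN, SF, and AG is given below (SD is simply the standard deviation of the fused image). Normalization conventions differ slightly across evaluation toolkits, so absolute values may deviate somewhat from those reported.

```python
import numpy as np

def entropy(img):
    """Shannon entropy (EN) of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def spatial_frequency(img):
    """Spatial frequency (SF): combined row/column first-difference energy."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))   # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def average_gradient(img):
    """Average gradient (AG) from horizontal/vertical finite differences."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]                  # crop to a common shape
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

# SD is np.std(fused_image); EN/SF/AG above use the common textbook definitions.
```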
Table 2. Ablation experiment results in the test set of LLVIP. Bold indicates the best value.
Group       Configuration                  EN     SD      MI     VIF
Structure   w/o IGAB                       7.51   52.86   2.88   0.95
            w/o DMDC                       7.47   51.66   2.92   0.96
            w/o DPM                        7.49   52.77   2.90   0.94
            w/o DMDC + DPM                 7.44   50.68   2.89   0.93
            w/o IGAB + DMDC + DPM          7.40   48.79   2.86   0.91
Stage       w/o Stage I training           7.23   49.73   2.97   0.98
            w/o Stage II training          7.40   50.94   2.88   0.96
            w/o Stage I + II training      6.91   47.33   2.75   0.86
Loss        w/o L_smooth + L_mc            7.28   50.10   2.91   0.97
            w/o L_decomp                   7.45   51.96   2.90   0.96
Ours                                       7.53   54.68   2.94   1.00