Article

Bidirectional Dynamic Adaptation: Mutual Learning with Cross-Network Feature Rectification for Urban Segmentation

School of Intelligent Manufacturing and Energy Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(18), 10000; https://doi.org/10.3390/app151810000
Submission received: 31 July 2025 / Revised: 4 September 2025 / Accepted: 9 September 2025 / Published: 12 September 2025

Abstract

Semantic segmentation of urban scenes from red–green–blue and thermal infrared imagery enables per-pixel categorization, delivering precise environmental understanding for autonomous driving and urban planning. However, existing methods suffer from inefficient fusion and insufficient boundary accuracy due to modal differences. To address these challenges, we propose a bidirectional dynamic adaptation framework with two complementary networks. The modality-aware network uses dual attention and multi-scale feature integration to balance modal contributions adaptively, improving intra-class semantic consistency and reducing modal disparities. The edge-texture guidance network applies pixel-level and feature-level weighting with Sobel and Gabor filters to enhance inter-class boundary discrimination, improving detail and boundary precision. Furthermore, the framework redefines multi-modal synergy using an adaptive cross-modal mutual learning mechanism. This mechanism employs information-driven dynamic alignment and probability-guided semantic consistency to overcome the fixed constraints of traditional mutual learning. This cohesive orchestration enhances multi-modal fusion efficiency and boundary delineation accuracy. Extensive experiments on the MFNet and PST900 datasets demonstrate the framework’s superior performance in urban road, vehicle, and pedestrian segmentation, surpassing state-of-the-art approaches.

1. Introduction

Semantic segmentation of urban scenes, a core task in computer vision, involves delineating key objects such as vehicles, pedestrians, and buildings within urban environments. It plays a pivotal role in applications including autonomous driving, intelligent surveillance, and urban planning [1,2,3,4,5]. In recent years, deep learning architectures such as U-Net [6] have advanced semantic segmentation: they leverage encoder–decoder designs and multi-scale feature fusion to improve both accuracy and efficiency, collectively driving rapid progress in the field [7,8,9,10,11,12,13,14,15]. However, the single red–green–blue (RGB) modality has limited ability to capture discriminative cues in low-texture or visually blurred regions, which caps achievable segmentation quality. The red–green–blue and thermal (RGB-T) multimodal fusion paradigm has therefore been introduced, integrating RGB textures with illumination-robust thermal distributions to form complementary features and improve segmentation performance [1,2,7]. The RGB-T semantic segmentation results in Figure 1 expose two key defects in existing models. First, they fail to handle information redundancy and interference between the RGB and thermal infrared modalities, leading to target recognition failures. Second, they perform poorly in boundary refinement, making precise segmentation of target contours difficult.
To address these issues, this study designs two fusion modules: the Dynamic Feature Integration Module (DFIM) and the Unified Fusion Module (UFM), which tackle modal interference and boundary blur, respectively. However, directly combining the two modules for urban scene semantic segmentation tasks leads to feature allocation mismatch and conflicting optimization objectives [16,17]. More importantly, compared to single-module segmentation alone, the performance of this joint module segmentation significantly degrades. Traditional methods for resolving such module conflicts often require tedious architectural adjustments [18,19]. Inspired by mutual learning techniques [20], this study develops a mutual learning framework that enables two specialized networks to optimize collaboratively.
Specifically, this study designs a modality-aware network (MANet) that uses DFIM to fuse RGB and thermal infrared features adaptively. This enhances intra-class semantic consistency and reduces modal redundancy and interference. Additionally, this study also develops an Edge-Texture Guidance Network (EGNet) that uses UFM to apply pixel-level and feature-level weighting with Sobel and Gabor filters. This improves inter-class boundary discrimination and refines local details and boundary precision. Thirdly, to integrate the global semantics and boundary features from the networks, this study introduces an adaptive cross-modal mutual learning (ACML) mechanism that redefines multi-modal synergy. ACML overcomes the limitations of traditional mutual learning’s fixed constraints by dynamically aligning modal feature differences in the encoder stage and ensuring probability-guided consistency of predictive distributions in the decoder stage. This approach enhances intra-class semantic consistency through feature difference alignment and improves inter-class boundary discrimination via predictive distribution consistency, effectively addressing modal fusion and optimization conflicts [21,22,23,24]. By employing information-driven and probability-based optimization, ACML enables bidirectional knowledge transfer, achieving efficient multi-modal fusion and precise boundary delineation. The contributions of this study are as follows:
  • We propose MANet, focusing on global semantic representation, where DFIM addresses information redundancy and modal interference through adaptive weight allocation and dynamic enhancement mechanisms.
  • We design EGNet, focusing on detail capture and boundary refinement, where UFM enhances detection capability for object boundaries and details through pixel-level and feature-level weighting strategies combined with edge and texture filters.
  • We propose ACML, enhancing intra-class pixel semantic consistency and reducing inter-class boundary blurring through adaptive alignment based on feature differences and adaptive consistency based on prediction distribution entropy.
  • This study performs exhaustive experimental validation on two standard RGB-T semantic segmentation datasets, MFNet and PST900. The results demonstrate that the proposed method performs excellently in urban scene semantic segmentation tasks, proving the effectiveness and superiority of the dual-network collaborative framework based on mutual learning.
The rest of this paper is organized as follows: Section 2 introduces the current research status of RGB-T urban scene semantic segmentation and mutual learning; Section 3 elaborates on the proposed bidirectional dynamic adaptive mutual learning framework, including the design and implementation of MANet, EGNet, and ACML; Section 4 presents the experimental results and comparative analysis on the MFNet and PST900 datasets, and validates the effectiveness of each module through ablation experiments; Section 5 summarizes the paper and looks forward to future research directions.

2. Related Works

2.1. RGB-T Urban Scene Semantic Segmentation

RGB-T urban scene semantic segmentation achieves more robust semantic segmentation by fusing visible light RGB images and thermal infrared images. This technology utilizes the complementary strengths of two imaging modalities: RGB images provide rich texture details and color information, while thermal images capture temperature differences and ensure stable imaging quality in low-light conditions. With the development of deep learning technology, researchers have adopted end-to-end networks for RGB-T urban scene semantic segmentation [4,5]. Sun et al. [7] pioneered the RGB-Thermal Fusion Network, integrating feature information at multiple scales through designed fusion modules. Building on this foundation, Sun et al. [8] proposed the FuseSeg network, which focuses on urban scene semantic segmentation and uses improved fusion strategies to identify target details. Addressing modal imbalance issues, Wang et al. [9] proposed a semantic-guided fusion network that adaptively adjusts modality contribution weights, tackling issues of thermal infrared image redundancy or insufficient RGB image information. As technology develops, researchers increasingly focus on optimizing fusion strategies. Lv et al. [10] introduced a context-aware interaction network that improves information exchange between RGB and thermal infrared features. Zhou et al. [12] proposed a novel Mamba fusion module to address long-range modeling issues. Zhou et al. [14] developed an adaptive gated fusion network to tackle uneven fusion caused by modal differences. Guo et al. [11] proposed a contrastive learning-based knowledge distillation method, utilizing edge and distribution information to guide semantic decoding, thereby improving segmentation accuracy at class boundaries. Zhou et al. [25] introduced advanced feature integration modules to refine multimodal high-level features, thus providing detailed accuracy for target recognition. Guo et al. [13] proposed a memory-based contrastive learning network that utilizes cross-modal dual associations to fully fuse information from both RGB and thermal infrared modalities [15].
Current RGB-T semantic segmentation research focuses on multimodal information fusion. However, no unified framework has emerged to address fusion quality, segmentation details, and edge accuracy simultaneously. This stems from the difficulty of existing models to efficiently handle multiple tasks in parallel [16,17]. To address this, this study proposes two complementary specialized networks focusing on modal fusion and feature optimization, respectively, to improve urban scene semantic segmentation accuracy.

2.2. Mutual Learning

Mutual learning advances model performance through bidirectional knowledge exchange, where multiple models collaboratively optimize shared objectives to achieve robust generalization beyond the constraints of unidirectional methods. Mutual learning originates from knowledge distillation [26], which transfers soft labels from large to smaller models for compression. It overcomes the limitations of one-way transfer, such as restricted generalization and dependence on a single teacher model. Zhang et al. [20] introduced deep mutual learning, establishing a bidirectional paradigm that jointly optimizes supervised and collaborative losses, enhancing model robustness. Subsequent works have explored diverse mutual learning strategies for various challenges. Chen et al. [27] proposed a diversified peer online knowledge distillation method that tackles model homogenization by dynamically selecting learning partners, improving model complementarity. In terms of heterogeneous model collaboration, Shen et al. [28] proposed a knowledge fusion method that integrates knowledge from multiple expert models to address the limited generalization of single models, offering insights for cross-domain knowledge transfer. Addressing challenges in multimodal learning, Peng et al. [29] proposed a correlation consistency knowledge distillation method that uses feature correlation constraints to tackle semantic alignment issues, promoting information fusion across modalities. To enhance the theoretical foundation and practical application effectiveness of mutual learning, researchers have further explored optimization strategies for specific domains. Sun et al. [30] proposed a BERT-based knowledge distillation method that compresses large language models, reducing computational complexity while preserving performance. Tang et al. [31] applied mutual learning to recommendation systems, using sequence modeling to address data sparsity and improve algorithm performance. Heo et al. [32] proposed an activation boundary distillation method that captures hidden neuron activation patterns to address fine-grained knowledge representation transfer, improving distillation accuracy and efficiency.
However, in RGB-T urban scene semantic segmentation, traditional mutual learning methods struggle to resolve semantic conflicts caused by dynamic modal disparities between RGB and thermal infrared data, as well as the need for precise boundary delineation in complex urban scenarios. Existing approaches, such as those in [20,29], often prioritize feature alignment but fail to jointly optimize intra-class semantic consistency and inter-class boundary discrimination. To mitigate these challenges, our proposed ACML improves multi-modal integration by adaptively coordinating modal features and predictive consistency, combining MANet’s intra-class semantic coherence with EGNet’s inter-class boundary discrimination for robust and precise urban scene segmentation.

3. Methodology

3.1. Overview

As shown in Figure 2, the dual-network framework based on collaborative learning proposed in this study comprises three core components: MANet, EGNet, and ACML; their network architectures are shown in Figure 3, Figure 4 and Figure 5, respectively. During the training phase, MANet and EGNet achieve knowledge sharing and collaborative guidance through ACML. During the inference phase, the two networks can be deployed independently without relying on the ACML module. This design ensures collaborative efficiency during training while preserving flexibility and computational efficiency at test time. Drawing on relevant research on mutual learning [27,28,29,30,31,32], models with structural differences can extract features from distinct perspectives and thereby achieve complementary advantages through collaborative learning; we therefore adopt different backbone networks for the two networks to extract features.

3.2. MANet

Figure 3 shows the overall architecture of MANet. It adopts a typical U-Net structure, consisting of a MixTransformer-B2 backbone [33], the DFIM, and a hierarchical decoder. Here, {r1, r2, r3, r4} and {t1, t2, t3, t4} are the multi-scale features extracted by the backbone from the RGB and thermal images, respectively. These features are fed into the DFIM to obtain the fused features {m1, m2, m3, m4}, which are then processed by the hierarchical decoder to output the final prediction {p1}.
DFIM: In road semantic segmentation tasks, imbalanced utilization of multi-modal information causes information redundancy and inter-modal interference [34]. We therefore design the Dynamic Feature Integration Module (DFIM), which adaptively assigns weights via dual attention to highlight key feature dimensions and spatial positions, captures multi-scale contextual information using dilated convolutions, and adjusts feature responses through dynamic enhancement. DFIM improves fusion by selectively combining complementary information from the RGB and thermal modalities while suppressing redundant features, and by integrating fine-grained details and global context across multiple scales.
Specifically, the DFIM processes ri and ti through independent convolutional layers for dimension consistency, then applies cascaded dual attention for fusion to obtain fi. The dual attention includes feature dimension attention using global pooling and convolutional networks to highlight important dimensions, and position attention, generating spatial weight maps to emphasize key positions. These attention weights sequentially act on concatenated multi-modal features for comprehensive selection across both feature dimensions and spatial positions. The specific expressions are as follows:
$f_i = \mathrm{DualAttention}(\mathrm{Cat}(\mathrm{Conv}_3(r_i), \mathrm{Conv}_3(t_i)))$
where DualAttention(·) represents the dual attention operation, Cat(·) represents the concatenation operation, and Conv3(·) represents the 3 × 3 convolution operation.
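The following is a minimal sketch of this dual-attention fusion step, assuming a squeeze-and-excitation-style channel ("feature dimension") attention and a pooled spatial ("position") attention; the layer widths and class names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DualAttentionFusion(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Independent 3x3 convolutions bring r_i and t_i to a common dimension.
        self.conv_r = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv_t = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        # Feature-dimension (channel) attention: global pooling + small conv net.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * out_ch, out_ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // 4, 2 * out_ch, 1), nn.Sigmoid(),
        )
        # Position (spatial) attention: a spatial weight map from pooled statistics.
        self.spatial_att = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, 3, padding=1)

    def forward(self, r_i: torch.Tensor, t_i: torch.Tensor) -> torch.Tensor:
        x = torch.cat([self.conv_r(r_i), self.conv_t(t_i)], dim=1)  # Cat(Conv3(r), Conv3(t))
        x = x * self.channel_att(x)                                 # weight feature dimensions
        spatial_in = torch.cat([x.mean(1, keepdim=True),
                                x.max(1, keepdim=True).values], dim=1)
        x = x * self.spatial_att(spatial_in)                        # weight spatial positions
        return self.fuse(x)                                         # fused feature f_i

f_i = DualAttentionFusion(64, 64)(torch.randn(1, 64, 120, 160),
                                  torch.randn(1, 64, 120, 160))
```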
The fused feature fi undergoes multi-scale processing through four parallel dilated convolution branches that employ different dilation rates (1, 2, 4, and 8, following [35]). Each branch extracts 1/4 of the output channels, and concatenation followed by convolution fuses the features from the different dilation rates, yielding a feature si with rich receptive fields. This design enables the module to capture local details and global contextual information simultaneously, forming rich multi-scale feature representations. The specific expression is as follows:
$s_i = \mathrm{Conv}_3(\mathrm{Cat}(\mathrm{AtrousConv}(f_i)))$
where AtrousConv(·) represents dilated convolution.
Finally, si enters the Dynamic Enhancement Branch, which extracts the global information gi through global adaptive average pooling and generates a dynamic attention weight w and bias b from si. Parameterized convolution [36] is then applied to si, and the result is multiplied with gi to obtain the fused feature mi. This allows the module to adjust feature response intensity according to the input scene, improving feature discriminability. The specific expressions are as follows:
$g_i = \mathrm{DynamicEnhancement}(s_i)$
$m_i = \mathrm{ParaConv}(s_i, w, b) \cdot g_i$
where DynamicEnhancement(·) represents the dynamic enhancement branch, which consists of global average pooling, convolution, ReLU, and sigmoid activation. w and b denote learnable parameters, and ParaConv(·) represents Parameterized Convolution.
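Below is a sketch of the multi-scale dilated branches and the dynamic enhancement step, assuming four parallel dilation rates (1, 2, 4, 8) with each branch producing one quarter of the channels; the "parameterized convolution" is approximated here by modulating the features with the dynamically generated weight and bias, so the exact head designs are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleDynamic(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(ch, ch // 4, 3, padding=d, dilation=d) for d in (1, 2, 4, 8)
        ])
        self.merge = nn.Conv2d(ch, ch, 3, padding=1)
        # Dynamic enhancement: global pooling -> conv -> ReLU -> conv -> sigmoid
        # produces a per-channel weight w; a parallel head produces a bias b.
        self.weight_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1),
            nn.ReLU(inplace=True), nn.Conv2d(ch, ch, 1), nn.Sigmoid(),
        )
        self.bias_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1))

    def forward(self, f_i: torch.Tensor) -> torch.Tensor:
        s_i = self.merge(torch.cat([b(f_i) for b in self.branches], dim=1))
        w, b = self.weight_head(s_i), self.bias_head(s_i)  # dynamic weight / bias
        return s_i * w + b                                  # enhanced feature m_i

m_i = MultiScaleDynamic(64)(torch.randn(1, 64, 120, 160))
```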
Hierarchical Decoder: To fully utilize the fused features {m1, m2, m3, m4} output by DFIM, this study designs a hierarchical decoder that reconstructs high-resolution output through progressive upsampling and feature fusion. First, each fused feature mi undergoes convolution, batch normalization, ReLU activation, and upsampling operations to obtain preliminary reconstructed features gi. Then, multi-scale features are progressively integrated through a two-stage fusion mechanism: the first stage concatenates and convolutionally fuses features from adjacent scales to obtain intermediate fused features cj; the second stage further fuses these intermediate features to generate the final segmentation prediction map p1. This hierarchical decoding approach not only effectively recovers spatial resolution but also preserves semantic information from different scales, ensuring the accuracy and completeness of segmentation results. The specific formulas are as follows:
$g_i = \mathrm{CBRU}(m_i), \quad i = 1, 2, 3, 4$
$c_j = \mathrm{Conv}_3(\mathrm{Cat}(g_{j+1}, g_j)), \quad j = 1, 2, 3$
$z_y = \mathrm{Conv}_3(\mathrm{Cat}(c_{y+1}, c_y)), \quad y = 1, 2$
$p_1 = \mathrm{Conv}_3(\mathrm{Cat}(z_1, z_2))$
where CBRU(·) consists of convolution, batch normalization, ReLU, and upsampling.
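A minimal sketch of this hierarchical decoder is given below, following the equations above; for brevity every CBRU output is interpolated to a single common resolution before the two-stage fusion, which simplifies the progressive upsampling described in the paper, and the channel widths and class count are assumed values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cbr(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class HierarchicalDecoder(nn.Module):
    def __init__(self, chs=(64, 128, 320, 512), mid=64, n_cls=9):
        super().__init__()
        self.cbru = nn.ModuleList([cbr(c, mid) for c in chs])              # CBRU per scale
        self.stage1 = nn.ModuleList([cbr(2 * mid, mid) for _ in range(3)])  # c_1..c_3
        self.stage2 = nn.ModuleList([cbr(2 * mid, mid) for _ in range(2)])  # z_1, z_2
        self.head = nn.Conv2d(2 * mid, n_cls, 3, padding=1)                 # p_1

    def forward(self, feats):
        size = feats[0].shape[-2:]
        g = [F.interpolate(m(f), size, mode="bilinear", align_corners=False)
             for m, f in zip(self.cbru, feats)]
        c = [s(torch.cat([g[j + 1], g[j]], 1)) for j, s in enumerate(self.stage1)]
        z = [s(torch.cat([c[y + 1], c[y]], 1)) for y, s in enumerate(self.stage2)]
        return self.head(torch.cat([z[0], z[1]], 1))

feats = [torch.randn(1, c, 120 // 2 ** i, 160 // 2 ** i)
         for i, c in enumerate((64, 128, 320, 512))]
p1 = HierarchicalDecoder()(feats)  # segmentation logits at the finest feature scale
```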

3.3. EGNet

Figure 4 shows the overall architecture of EGNet. The network first feeds the RGB and thermal data into a pre-trained DFormer-Base [37] encoder to capture modality-specific, hierarchical semantic features, denoted {R1, R2, R3, R4} and {T1, T2, T3, T4}, respectively. {R1} and {T1} are fed into the first UFM for cross-modal interaction and enhancement, yielding the fused representation {f1}. The deeper features {R2, R3, R4} and {T2, T3, T4} are element-wise added to the output of the preceding UFM and then progressively fed into the subsequent UFMs to generate {f2, f3, f4}. Finally, {f1, f2, f3, f4} are passed to the hierarchical decoder to generate the corresponding segmentation masks {s1, s2, s3, s4}. Note that the decoder is identical to the hierarchical decoder of MANet and is therefore not described again.
UFM: In RGB-T semantic segmentation, convolutional networks often struggle to capture fine-grained structural details (e.g., edges and textures) in complex urban scenes. To address this limitation, inspired by hybrid approaches in medical image boundary detection and remote sensing [19,33], we propose the Unified Feature Module (UFM), which integrates Sobel and Gabor filters to introduce deterministic edge and texture priors, effectively enhancing the representation capability of structural details in RGB-T feature fusion.
For semantic alignment, interaction, and refinement of RGB features Ri and thermal features Ti extracted by the backbone network, UFM first processes the input features Ri and Ti through a downsampling convolution module, which contains a 3 × 3 convolution layer, batch normalization, and LeakyReLU activation function:
$\bar{R}_i = \mathrm{Conv}_3(R_i)$
$\bar{T}_i = \mathrm{Conv}_3(T_i)$
Subsequently, cross-modal feature representation is generated through element-wise multiplication to compute the interaction between RGB and thermal features:
$M_{RD} = \bar{R}_i \cdot \bar{T}_i$
Next, a pixel-level weighting strategy (PW) processes the cross-modal interaction feature MRD to generate a pixel-level weighting map, which focuses on key regions of the cross-modal interaction through convolution and sigmoid activation. Building on this pixel-level enhancement, the feature-level weighting module (FW) generates feature-level weights through global average pooling and fully connected layers, achieving cross-modal feature alignment and enhancing the RGB features through the following operations, with the enhanced feature denoted R_i^mod:
$R_{S,i} = \bar{R}_i \cdot \mathrm{PW}(M_{RD}) + \bar{R}_i$
$R_i^{mod} = R_{S,i} \cdot \mathrm{FW}(R_{S,i})$
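The sketch below illustrates this pixel-level and feature-level weighting, assuming the downsampling block, PW, and FW designs described above; the exact layer widths and the class name are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class CrossModalWeighting(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.down_r = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                    nn.BatchNorm2d(ch), nn.LeakyReLU(inplace=True))
        self.down_t = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                    nn.BatchNorm2d(ch), nn.LeakyReLU(inplace=True))
        self.pw = nn.Sequential(nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())  # pixel weights
        self.fw = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(ch, ch), nn.Sigmoid())               # feature weights

    def forward(self, R_i: torch.Tensor, T_i: torch.Tensor) -> torch.Tensor:
        R_bar, T_bar = self.down_r(R_i), self.down_t(T_i)
        M_rd = R_bar * T_bar                      # cross-modal interaction feature
        R_s = R_bar * self.pw(M_rd) + R_bar       # pixel-level enhancement with residual
        w = self.fw(R_s).unsqueeze(-1).unsqueeze(-1)
        return R_s * w                            # feature-level enhanced R_i^mod

R_mod = CrossModalWeighting(64)(torch.randn(1, 64, 120, 160),
                                torch.randn(1, 64, 120, 160))
```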
To further strengthen the model’s ability to detect object boundaries and details, this study integrates classical image priors (edges and textures) into the UFM. The Sobel operator provides principled gradient computation, matching the inherent advantage of thermal imaging in boundary representation, while the frequency and directional selectivity of Gabor filters precisely captures the rich textural characteristics of RGB images. Specifically, fixed 3 × 3 Sobel filters are applied to R_i^mod in the x and y directions to compute the edge intensity gradients Gi, and the edge-enhanced features Ei are then obtained through a 1 × 1 convolution, sigmoid gating, and a residual connection:
$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \quad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$
$G_i = \sqrt{(G_x * R_i^{mod})^2 + (G_y * R_i^{mod})^2}$
$E_i = R_i^{mod} \times \sigma(\mathrm{Conv}_1(G_i)) + R_i^{mod}$
where σ represents the sigmoid operation, and Conv1(·) represents the 1 × 1 convolution operation.
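A compact sketch of this Sobel edge-enhancement step follows: fixed 3 × 3 kernels are applied depthwise, and the gradient magnitude gates the features through a 1 × 1 convolution, sigmoid, and residual connection. The depthwise application per channel is an assumption for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelEdgeEnhance(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = torch.tensor([[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]])
        # Register fixed (non-trainable) depthwise Sobel kernels.
        self.register_buffer("gx", gx.view(1, 1, 3, 3).repeat(ch, 1, 1, 1))
        self.register_buffer("gy", gy.view(1, 1, 3, 3).repeat(ch, 1, 1, 1))
        self.gate = nn.Conv2d(ch, ch, 1)
        self.ch = ch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        dx = F.conv2d(x, self.gx, padding=1, groups=self.ch)
        dy = F.conv2d(x, self.gy, padding=1, groups=self.ch)
        grad = torch.sqrt(dx ** 2 + dy ** 2 + 1e-6)       # edge intensity G_i
        return x * torch.sigmoid(self.gate(grad)) + x     # edge-enhanced E_i

E_i = SobelEdgeEnhance(64)(torch.randn(1, 64, 120, 160))
```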
Next, Gabor filtering is applied to the edge-enhanced features Ei along the channel dimension, capturing complex texture patterns through multi-directional texture analysis. Then, the texture features from all directions are concatenated and processed through 1 × 1 convolution to obtain texture-enhanced features Texturei:
$\mathrm{Gabor}(x, y; \lambda, \theta, \delta, \gamma) = \exp\!\left(-\frac{x'^2 + \gamma^2 y'^2}{2\delta^2}\right)\cos\!\left(\frac{2\pi x'}{\lambda}\right)$
$\mathrm{Texture}_i = \mathrm{ConvCat}(\mathrm{Gabor}_i(E_i))$
where x′ = x cos θ + y sin θ, y′ = −x sin θ + y cos θ; λ represents the wavelength, θ the direction, δ the scale, and γ the spatial aspect ratio. ConvCat(·) denotes the joint operation of concatenation and convolution, while Cat denotes the standalone concatenation operation.
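The sketch below shows one way to realize the multi-directional Gabor texture branch: a small bank of fixed Gabor kernels (four orientations here, an assumed number) filters Ei depthwise, and the concatenated responses are fused by a 1 × 1 convolution into Texturei. The kernel size and parameter values are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def gabor_kernel(ksize=7, lam=4.0, theta=0.0, delta=2.0, gamma=0.5):
    half = ksize // 2
    ys, xs = torch.meshgrid(torch.arange(-half, half + 1, dtype=torch.float32),
                            torch.arange(-half, half + 1, dtype=torch.float32),
                            indexing="ij")
    x_p = xs * math.cos(theta) + ys * math.sin(theta)
    y_p = -xs * math.sin(theta) + ys * math.cos(theta)
    return torch.exp(-(x_p ** 2 + gamma ** 2 * y_p ** 2) / (2 * delta ** 2)) \
        * torch.cos(2 * math.pi * x_p / lam)

class GaborTexture(nn.Module):
    def __init__(self, ch: int, n_dirs: int = 4):
        super().__init__()
        bank = torch.stack([gabor_kernel(theta=i * math.pi / n_dirs)
                            for i in range(n_dirs)])          # (n_dirs, k, k)
        # One fixed kernel per (direction, channel) pair for depthwise filtering.
        self.register_buffer("bank",
                             bank.unsqueeze(1).repeat_interleave(ch, dim=0))
        self.fuse = nn.Conv2d(ch * n_dirs, ch, 1)
        self.ch, self.n_dirs = ch, n_dirs

    def forward(self, e_i: torch.Tensor) -> torch.Tensor:
        # Filter every channel with every orientation, then fuse channel-wise.
        resp = F.conv2d(e_i.repeat(1, self.n_dirs, 1, 1), self.bank,
                        padding=self.bank.shape[-1] // 2,
                        groups=self.ch * self.n_dirs)
        return self.fuse(resp)                                # Texture_i

tex = GaborTexture(64)(torch.randn(1, 64, 120, 160))
```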
Finally, the obtained edge Ei and texture features Texturei are concatenated with the cross-modal aggregated features R i m o d , and then processed through multiple parallel dilated convolutions (with dilation rates of 1, 2, and 3, respectively) to expand their receptive field:
$f_i = \mathrm{ConvCat}(\mathrm{AtrousConv}(\mathrm{Cat}(E_i, \mathrm{Texture}_i, R_i^{mod})))$
This hybrid strategy enables comprehensive exploitation of boundary-discriminative information from thermal infrared modality and texture-discriminative information from RGB modality, thereby achieving more precise cross-modal semantic alignment at the feature level. Although the incorporation of fixed filters introduces additional computational overhead during training, it significantly improves segmentation accuracy and generalization capability, striking an effective balance between performance gains and computational costs.

3.4. ACML

To address the challenges of modal disparities and the need for effective integration of global semantics and local boundary features in RGB-T urban scene semantic segmentation, this study proposes the ACML framework that redefines multi-modal optimization. Unlike traditional approaches reliant on fixed alignment modules or extensive hyperparameter tuning, ACML leverages dynamic, difference-based mutual learning to enable bidirectional knowledge transfer without parameter dependencies.
As shown in Figure 5, in the ACML mutual learning framework, MANet and EGNet receive the same RGB and thermal inputs. Their encoders extract the multi-scale features {r1, r2, r3, r4}, {t1, t2, t3, t4} (MANet) and {R1, R2, R3, R4}, {T1, T2, T3, T4} (EGNet), and their decoders output the prediction maps p1 and p̃1, respectively.
First, this study proposes an adaptive alignment mechanism based on feature differences. By quantifying the differences between the feature distributions of MANet and EGNet at the encoder stage, a dynamic modal complementarity optimization process is constructed. From an information-theoretic perspective, this strategy uses norm differences in feature distributions to dynamically enhance intra-class pixel semantic cohesion, suppressing the representation drift caused by modal heterogeneity and resolving the inconsistent intra-class pixel semantics of traditional fusion methods. Concretely, this study first computes the element-wise differences between MANet’s RGB and thermal features ri and ti and EGNet’s corresponding features Ri and Ti, compresses the difference features to reduce computational complexity while preserving intra-class semantic patterns and inter-class boundary information, and enhances the spatial correlation of the difference feature space. To dynamically balance modal contributions, the L2 norms nr,i and nt,i of the compressed differences are then calculated:
$\hat{d}_{r,i} = \mathrm{Conv}_3(r_i - R_i), \quad \hat{d}_{t,i} = \mathrm{Conv}_3(t_i - T_i)$
$n_{r,i} = \sqrt{\frac{1}{(C/4)HW}\sum_{c,h,w}\hat{d}_{r,i}(c,h,w)^2}, \quad n_{t,i} = \sqrt{\frac{1}{(C/4)HW}\sum_{c,h,w}\hat{d}_{t,i}(c,h,w)^2}$
where B, C, H, and W denote the batch size, number of channels, height, and width, and b, c, h, and w are the corresponding indices.
Then, preliminary weights are generated through sigmoid activation, and the norms quantify the difference intensity, ensuring that pixel-level semantic consistency prevails over modal interference. The weights are then normalized, and the difference intensity is penalized through a weighted mean squared error to obtain the feature difference loss Lossfeat:
$\omega_{r,i} = \frac{n_{r,i}}{n_{r,i} + n_{t,i} + \varepsilon}, \quad \omega_{t,i} = \frac{n_{t,i}}{n_{r,i} + n_{t,i} + \varepsilon}$
$\mathrm{Loss}_{feat} = \sum_{i=1}^{4}\left(\omega_{r,i}\,\frac{1}{B(C/4)HW}\sum_{b,c,h,w}\big(\hat{d}_{r,i}\,\sigma(n_{r,i})\big)^2 + \omega_{t,i}\,\frac{1}{B(C/4)HW}\sum_{b,c,h,w}\big(\hat{d}_{t,i}\,\sigma(n_{t,i})\big)^2\right)$
where ε is used to prevent division-by-zero errors.
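A minimal sketch of this feature-difference alignment loss is shown below, assuming MANet features (ri, ti) and EGNet features (Ri, Ti) at four scales; the 3 × 3 compression convolutions (C to C/4 channels), the channel widths, and the epsilon value are illustrative choices rather than the authors' exact settings.

```python
import torch
import torch.nn as nn

class FeatureAlignLoss(nn.Module):
    def __init__(self, chs=(64, 128, 320, 512), eps: float = 1e-6):
        super().__init__()
        # One compression conv per scale and per modality (C -> C/4 channels).
        self.comp_r = nn.ModuleList([nn.Conv2d(c, c // 4, 3, padding=1) for c in chs])
        self.comp_t = nn.ModuleList([nn.Conv2d(c, c // 4, 3, padding=1) for c in chs])
        self.eps = eps

    def forward(self, man_feats, eg_feats):
        # man_feats / eg_feats: lists of (rgb_feature, thermal_feature) tuples at 4 scales.
        loss = 0.0
        for i, ((r, t), (R, T)) in enumerate(zip(man_feats, eg_feats)):
            d_r = self.comp_r[i](r - R)                  # compressed RGB difference
            d_t = self.comp_t[i](t - T)                  # compressed thermal difference
            n_r = d_r.pow(2).mean(dim=(1, 2, 3)).sqrt()  # per-sample L2-style norm
            n_t = d_t.pow(2).mean(dim=(1, 2, 3)).sqrt()
            w_r = n_r / (n_r + n_t + self.eps)           # dynamic modality weights
            w_t = n_t / (n_r + n_t + self.eps)
            term_r = (d_r * torch.sigmoid(n_r).view(-1, 1, 1, 1)).pow(2).mean()
            term_t = (d_t * torch.sigmoid(n_t).view(-1, 1, 1, 1)).pow(2).mean()
            loss = loss + (w_r.mean() * term_r + w_t.mean() * term_t)
        return loss
```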
Secondly, this study designs an adaptive consistency operation based on the entropy of the prediction distribution. This mechanism dynamically adjusts the alignment weights between MANet’s and EGNet’s prediction maps (p1 and p̃1) using entropy to optimize consistency. High entropy indicates uncertainty at object boundaries or semantic transitions, so strict alignment is suppressed to prevent error propagation; conversely, low entropy reflects confident, robust predictions, so reliable alignment is encouraged. Specifically, the prediction map p1 of MANet and the corresponding prediction map p̃1 of EGNet are converted into soft prediction distributions through a temperature-scaled softmax:
$q_1 = \mathrm{Softmax}(p_1 / T), \quad \tilde{q}_1 = \mathrm{Softmax}(\tilde{p}_1 / T)$
$q_1(b,k,h,w) = \frac{\exp(p_1(b,k,h,w)/T)}{\sum_{k'=1}^{K}\exp(p_1(b,k',h,w)/T)}, \quad \tilde{q}_1(b,k,h,w) = \frac{\exp(\tilde{p}_1(b,k,h,w)/T)}{\sum_{k'=1}^{K}\exp(\tilde{p}_1(b,k',h,w)/T)}$
where Softmax(·) denotes the softmax activation function, the temperature is set to T = 2 following established practice in mutual learning frameworks [29,30], and K is the number of classes.
Then, the mean entropies H(p1) and H(p̃1) of the two networks’ predicted distributions are calculated to reflect prediction uncertainty; a higher entropy receives a greater weight to guide the learning process. The entropy-based adaptive weight ωpred is then used to balance the reliability of the predictions and enhance intra-class semantic consistency:
$H(p_1) = -\frac{1}{BHW}\sum_{b,h,w}\sum_{k=1}^{K} q_1(b,k,h,w)\log q_1(b,k,h,w)$
$H(\tilde{p}_1) = -\frac{1}{BHW}\sum_{b,h,w}\sum_{k=1}^{K} \tilde{q}_1(b,k,h,w)\log \tilde{q}_1(b,k,h,w)$
$\omega_{pred} = \frac{H(p_1)}{H(p_1) + H(\tilde{p}_1) + \varepsilon}$
After that, the Kullback–Leibler (KL) divergences between the prediction distributions of the two networks are calculated in both directions, so that the inter-class boundary probabilities converge and boundary clarity improves. Finally, the terms are weighted to obtain the adaptive prediction-distribution-entropy consistency loss Losspred:
$\mathrm{KL}(q_1 \,\|\, \tilde{q}_1) = \frac{1}{BHW}\sum_{b,h,w}\sum_{k=1}^{K} q_1(b,k,h,w)\log\frac{q_1(b,k,h,w)}{\tilde{q}_1(b,k,h,w)}$
$\mathrm{KL}(\tilde{q}_1 \,\|\, q_1) = \frac{1}{BHW}\sum_{b,h,w}\sum_{k=1}^{K} \tilde{q}_1(b,k,h,w)\log\frac{\tilde{q}_1(b,k,h,w)}{q_1(b,k,h,w)}$
$\mathrm{Loss}_{pred} = \omega_{pred}\,\mathrm{KL}(q_1 \,\|\, \tilde{q}_1) + (1 - \omega_{pred})\,\mathrm{KL}(\tilde{q}_1 \,\|\, q_1)$
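The following sketch implements this entropy-weighted prediction-consistency loss with the temperature-scaled softmax, mean entropies, and bidirectional KL terms above (T = 2); the epsilon clamp and the function name are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def prediction_consistency_loss(p1: torch.Tensor, p1_tilde: torch.Tensor,
                                T: float = 2.0, eps: float = 1e-6) -> torch.Tensor:
    # Soft prediction distributions over K classes (dim=1).
    q1 = F.softmax(p1 / T, dim=1)
    q2 = F.softmax(p1_tilde / T, dim=1)
    log_q1, log_q2 = q1.clamp_min(eps).log(), q2.clamp_min(eps).log()

    # Mean per-pixel entropies of the two networks' predictions.
    h1 = -(q1 * log_q1).sum(dim=1).mean()
    h2 = -(q2 * log_q2).sum(dim=1).mean()
    w_pred = h1 / (h1 + h2 + eps)                 # entropy-based adaptive weight

    # Bidirectional KL divergences averaged over batch and spatial positions.
    kl_12 = (q1 * (log_q1 - log_q2)).sum(dim=1).mean()
    kl_21 = (q2 * (log_q2 - log_q1)).sum(dim=1).mean()
    return w_pred * kl_12 + (1.0 - w_pred) * kl_21

loss_pred = prediction_consistency_loss(torch.randn(2, 9, 120, 160),
                                        torch.randn(2, 9, 120, 160))
```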

3.5. Theoretical Analysis

When directly combining DFIM and UFM, optimization conflicts arise due to their distinct processing mechanisms [16,17]. DFIM employs dynamic feature selection through learnable attention mechanisms, adapting to input-dependent feature distributions [38], while UFM utilizes fixed image priors derived from Sobel and Gabor filters with cross-modal weighting strategies [39]. This architectural difference creates fundamental conflicts in their gradient optimization pathways, similar to those observed in multi-task learning scenarios [40].
The conflict manifests mathematically in their gradient optimization. Let LossD and LossU represent DFIM’s and UFM’s losses, respectively. Direct combination creates Lcombined = LossD + LossU, where ∇LossD optimizes for adaptive feature weighting while ∇LossU optimizes for fixed prior integration. These contradictory gradient directions create what is known as “gradient interference” [41,42], leading to unstable training dynamics and performance degradation.
Our mutual learning framework resolves this by separating conflicting modules into specialized networks while enabling knowledge exchange through ACML. This approach follows the principle of “divide-and-conquer” optimization [43], eliminating direct parameter conflicts while maintaining collaborative learning through feature-level knowledge distillation [44]. The framework ensures stable convergence by preserving the distinct optimization characteristics of each module while facilitating cross-network collaboration.

3.6. Total Loss

The training objective consists of the supervised losses of the two networks and the mutual learning losses. Both MANet and EGNet are primarily supervised with the cross-entropy (CE) loss [17]: MANet’s predicted segmentation map {p1} is supervised directly by the ground-truth (GT) map to form LossD, while EGNet’s predicted segmentation maps {s1, s2, s3, s4} are supervised by the GT map to form LossU:
$\mathrm{Loss}_D = \mathrm{CE}(GT, p_1)$
$\mathrm{Loss}_U = \mathrm{CE}(GT, s_i), \quad i = 1, 2, 3, 4$
The total loss in the final training stage consists of the main loss of the two networks and the mutual learning loss:
$\mathrm{Loss}_{Total} = \mathrm{Loss}_D + \mathrm{Loss}_U + \mathrm{Loss}_{feat} + \mathrm{Loss}_{pred}$
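Below is a sketch of how one training step could combine these terms, assuming the four EGNet outputs si are upsampled to the ground-truth resolution and summed; the tensors and the loss_feat/loss_pred values here are placeholders, and the helpers from the earlier sketches would supply them in practice.

```python
import torch
import torch.nn.functional as F

def total_loss(p1, gt, eg_preds, loss_feat, loss_pred):
    """p1: MANet logits; eg_preds: EGNet logits s_1..s_4 (already at GT resolution)."""
    loss_d = F.cross_entropy(p1, gt)                          # Loss_D
    loss_u = sum(F.cross_entropy(s, gt) for s in eg_preds)    # Loss_U over s_1..s_4
    return loss_d + loss_u + loss_feat + loss_pred            # Loss_Total

gt = torch.randint(0, 9, (2, 120, 160))
p1 = torch.randn(2, 9, 120, 160)
eg_preds = [torch.randn(2, 9, 120, 160) for _ in range(4)]
loss = total_loss(p1, gt, eg_preds,
                  loss_feat=torch.tensor(0.1), loss_pred=torch.tensor(0.05))
```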

4. Experiments and Results

4.1. Experimental Protocol

4.1.1. Datasets

The proposed model in this study was comprehensively validated on two standard datasets: MFNet and PST900. The MFNet dataset was used as the primary experimental dataset for detailed ablation studies. The MFNet dataset contains 1569 pairs of RGB-thermal infrared images with a resolution of 640 × 480, divided into 820 pairs of daytime images and 749 pairs of nighttime images based on capture time. The dataset includes annotations for eight semantic categories. The dataset was split using the standard protocol, with 50% of the daytime and nighttime images used for training, 25% for validation, and the remaining 25% for testing and evaluation [1]. The PST900 dataset contains 894 pairs of synchronously captured and geometrically calibrated RGB-thermal infrared images with a resolution of 1280 × 640 pixels. The dataset provides pixel-level, precise annotations and covers four specific target categories from the DARPA Subterranean Challenge. According to the standard split, 597 pairs of images were used for model training, and 297 pairs were used for the final testing [2].

4.1.2. Evaluation Metrics

To comprehensively assess the segmentation performance of all semantic categories, this study adopted two complementary metrics as the primary evaluation criteria. The mean Intersection over Union (mIoU) was used as the main metric, calculated by averaging the IoU scores of all categories. Each individual IoU represents the ratio of the intersection area to the union area between the predicted mask and the ground-truth mask. This metric provides a robust assessment of pixel-level accuracy as it equally penalizes false positives and false negatives. In addition, the mean F1 score (mF1) was used as a secondary evaluation metric, representing the harmonic mean of precision and recall across all categories. The F1 score effectively balances the trade-off between precision (correctly predicted positives) and recall (actual positives correctly identified), making it particularly valuable for assessing segmentation quality in scenarios where both over-segmentation and under-segmentation errors are critical issues [1,12].
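For reference, the sketch below computes mIoU and mF1 from a confusion matrix in line with the definitions above; the class count and the absence of an ignore label are simplifying assumptions.

```python
import numpy as np

def miou_mf1(conf: np.ndarray):
    """conf[i, j] = number of pixels of ground-truth class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1e-10)        # per-class IoU
    precision = tp / np.maximum(tp + fp, 1e-10)
    recall = tp / np.maximum(tp + fn, 1e-10)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-10)
    return iou.mean(), f1.mean()                      # mIoU, mF1

conf = np.random.randint(0, 100, size=(9, 9))         # dummy 9-class confusion matrix
miou, mf1 = miou_mf1(conf)
```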

4.1.3. Statistical Analysis Protocol

To ensure result reliability, all experiments were conducted 10 times using different random seeds (42–51). We report mean performance with standard deviation and assess statistical significance using paired t-tests against the strongest baseline (AGFNet [14]). This protocol addresses potential concerns about result variability and establishes the statistical validity of observed improvements.
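The statistical protocol can be reproduced with a paired t-test over matched seeds, as in the sketch below; the per-run mIoU arrays are placeholders for illustration, not the reported results.

```python
import numpy as np
from scipy import stats

# Placeholder per-run mIoU values for the proposed model and the strongest baseline.
ours = np.array([59.0, 59.3, 59.1, 59.4, 58.9, 59.2, 59.5, 59.0, 59.3, 59.2])
baseline = np.array([58.5, 58.7, 58.4, 58.8, 58.5, 58.6, 58.9, 58.4, 58.7, 58.6])

print(f"ours: {ours.mean():.2f} ± {ours.std(ddof=1):.2f}")
t_stat, p_value = stats.ttest_rel(ours, baseline)   # paired t-test on matched seeds
print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.4f}")
```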

4.1.4. Implementation Details

The model was implemented in the PyTorch framework under Python 3.10, and all experiments were conducted on a server with an NVIDIA GeForce GTX 1080Ti GPU for training and testing. To ensure consistent and fair comparison, a unified training strategy and hyperparameter settings were used across all comparative methods. For both the MFNet and PST900 datasets, training and testing were conducted at the original resolution to avoid information loss from resizing. Training ran for 200 epochs. Model optimization used the Ranger optimizer with a weight decay of 5 × 10−4 to prevent overfitting. The batch size was set to 2 and the initial learning rate to 1 × 10−4. The entire model was trained end-to-end with RGB-T image pairs as input. The two networks were jointly optimized during training but can operate independently at test time, ensuring modularity and deployment flexibility [15,21].
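A minimal sketch of this training configuration is shown below. The paper uses the Ranger optimizer; AdamW is substituted here so the sketch runs with stock PyTorch, and the `manet`/`egnet` modules are assumed to be defined elsewhere.

```python
import torch

def build_optimizer(manet: torch.nn.Module, egnet: torch.nn.Module):
    # Both networks are optimized jointly during training.
    params = list(manet.parameters()) + list(egnet.parameters())
    # Swap in Ranger (available from third-party packages) to match the paper's setup.
    return torch.optim.AdamW(params, lr=1e-4, weight_decay=5e-4)

EPOCHS, BATCH_SIZE = 200, 2   # training length and batch size reported above
```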

4.2. MFNet Dataset

As shown in Table 1, for comprehensive evaluation this study selected 11 mainstream state-of-the-art methods based on the following criteria: (1) temporal representation: methods spanning from early fusion approaches (MFNet [1], RTFNet [7]) to recent architectural innovations (AGFNet [14], LLE-Seg [15], KDSNet-S * [21]); (2) technical diversity: coverage of different fusion strategies, including feature-level fusion (FuseSeg [8]), attention-based fusion (SGFNet [9]), context-aware interaction (CAINet [10]), and knowledge distillation approaches (CLNet-S [11]).
In terms of overall performance, the proposed method achieved significant improvements. EGNet + ACML reached 59.2% mIoU, surpassing the current best method AGFNet (58.6%) by 0.6 percentage points. For the mF1 metric, EGNet + ACML achieved 70.7%, outperforming the best baseline method by 3.0 percentage points. This substantial improvement validates the effectiveness of the mutual learning framework in enhancing overall model performance. For specific categories, MANet achieved 87.3% IoU for the Car category, which was further improved to 88.7% after ACML mutual learning, surpassing all comparison methods, including AGFNet (88.0%).
Figure 6 presents a visual comparison of six models and our method across four daytime and four nighttime scenarios. Traditional methods such as RTFNet and FuseSeg suffer from blurred boundaries and incomplete segmentation; CLNet-S exhibits excessive smoothing, losing important structural details; SGFNet and CAINet produce discontinuous segmentation when addressing modal balance challenges, splitting continuous objects into isolated regions.
In contrast, the mutual learning mechanism enables effective knowledge transfer between the modality-aware network (focused on global semantic representation) and the edge texture-guided network (focused on boundary refinement), resulting in more accurate and coherent segmentation outcomes, thereby validating the effectiveness of combining specialised network design with collaborative optimisation.

4.3. PST900 Dataset

The effectiveness of the proposed network was further validated through benchmarking against state-of-the-art methods on the PST900 dataset. All methods are initialised with pre-trained weights to ensure fair comparison. As shown in Table 2, MANet + ACML and EGNet + ACML demonstrated excellent performance with mean Intersection over Union (mIoU) scores of 86.01 and 86.86, respectively, with EGNet + ACML achieving the best overall performance. Comparative analysis shows that the proposed framework outperforms existing methods on all evaluation metrics, with MANet + ACML improving by 1.79% over standalone MANet and EGNet + ACML improving by 2.78% over the best competing method, MCNet.
In terms of specific metrics, EGNet + ACML achieved the highest mIoU of 90.27 in the Backpack category, surpassing the second-best result by 0.22%; in the Extinguisher and Survivor categories, EGNet + ACML led with mIoU scores of 81.87 and 81.29, respectively, demonstrating that ACML can effectively transfer knowledge and enable mutual correction. Although LLE-Seg achieved the best performance in the hand drill category with 82.00 mIoU, our EGNet + ACML demonstrated more balanced and consistent performance across all categories, surpassing LLE-Seg’s 80.70 mIoU with an excellent overall score of 86.86 mIoU. These results collectively validate that our ACML can also achieve excellent segmentation performance in other RGB-T modality scenarios.

4.4. Statistical Significance Analysis

To address concerns regarding the reliability of our performance improvements and potential experimental variance, we conducted rigorous statistical validation across 10 independent experimental runs for both datasets. As shown in Table 3, on the MFNet dataset EGNet + ACML achieved a mean mIoU of 59.2 ± 0.30%, while MANet + ACML reached 58.6 ± 0.28%, with the 0.6% improvement being statistically significant (p = 0.047 < 0.05). As shown in Table 4, the PST900 results further strengthen these findings: both MANet + ACML (86.01 ± 0.39%, p = 0.018) and EGNet + ACML (86.86 ± 0.37%, p = 0.003) showed significant improvements over baseline methods. The consistently low standard deviations (Std ≤ 0.39%) across all experiments demonstrate the stability and reproducibility of our approach, while the paired t-test results establish that the observed improvements exceed experimental variance and represent genuine methodological advances.

4.5. Ablation Study

4.5.1. Internal Effectiveness Validation of MANet

To verify the effectiveness of each component within the MANet, this study conducted detailed ablation experiments. As shown in Table 5, when the DFIM was completely removed (w/o DFIM), the model performance significantly dropped to 55.4% mIoU and 67.1% mF1, a decrease of 1.3% and 1.0%, respectively, compared to the full version. This significant performance degradation validates the central role of DFIM in multimodal feature fusion, demonstrating its importance in effectively integrating the complementary information of RGB and thermal infrared images.
Further experiments on the internal mechanisms of DFIM showed that the dual-attention mechanism can optimize the selection of information from different modalities, achieving more effective cross-modal feature fusion. After removing the dual-attention mechanism (w/o DA), the model performance further deteriorated to 54.6% mIoU and 66.4% mF1, a decrease of 2.1% and 1.7%, respectively, compared to the full version. Removing the multi-scale dilated convolution (w/o MDC) led to a drop in model performance to 54.5% mIoU and 66.4% mF1, a decrease of 2.2% and 1.7%, respectively, compared to the full version, indicating the positive role of multi-scale dilated convolution in capturing feature receptive fields. When the dynamic enhancement branch was removed (w/o DEB), the mIoU and mF1 dropped to 54.8% and 66.8%, respectively, a decrease of 1.9% and 1.3% compared to the full version. This highlights the role of the dynamic enhancement mechanism in improving feature adaptation to complex urban scenes. The visualization results in Figure 7 further validate the effectiveness of the proposed method.

4.5.2. Internal Effectiveness Validation of EGNet

To verify the effectiveness of each component within EGNet, this study systematically conducted ablation experiments on the core components of the UFM. As shown in Table 5, when the UFM was completely removed (w/o UFM), the model performance dropped from an mIoU of 56.9% to 55.5%, and from an mF1 of 68.2% to 66.9%.
This clearly validates the core role of the UFM in capturing feature details and refining boundaries. Removing the pixel-wise and feature-wise weighting strategies (w/o PW and FW) led to a significant performance drop to 54.6% mIoU and 66.2% mF1, a decrease of 2.3% and 2.0%, respectively, compared to the full version. This indicates that the pixel-wise and feature-wise weighting strategies mutually guide and enhance each other, promoting alignment between modalities. After removing the Sobel filter (w/o Sobel), the model performance further declined to 54.4% mIoU and 65.9% mF1. This proves that the Sobel filter significantly enhances the model’s boundary perception in complex scenes by extracting sharp edge cues. Removing the Gabor filter resulted in a performance drop to 54.7% mIoU and 65.8% mF1, demonstrating that the Gabor filter captures the texture representation of features. The visualization results in Figure 8 further validate the effectiveness of the proposed method.

4.5.3. Validation of ACML Effectiveness

To verify the effectiveness of the two core mechanisms in the ACML framework, this study separately removed the adaptive alignment mechanism based on feature differences (feat) and the adaptive consistency mechanism based on the entropy of prediction distributions (pred). As shown in Table 6, when only the feature difference alignment loss was applied (+Loss feat), the mIoU of MANet increased from 56.7% to 57.8%, and that of EGNet increased from 56.9% to 57.9%.
This mechanism effectively enhanced the semantic consistency of intra-class pixels by quantifying the differences in the encoder feature distributions of the two networks. When only the prediction distribution entropy consistency loss was applied (+Loss pred), the mIoU of MANet increased to 57.9%, and that of EGNet increased to 58.3%. It is worth noting that the performance improvement of EGNet (1.4%) was significantly larger than that of MANet (1.2%). This is because EGNet focuses on boundary refinement and detail capture, and its UFM enhances boundary perception through Sobel and Gabor filters, making it more sensitive to the uncertainty of prediction distributions.
The prediction distribution entropy mechanism, which quantifies prediction uncertainty and dynamically adjusts alignment weights to improve boundary clarity, fits well with the design features of EGNet, thus achieving a more significant performance improvement. When both core mechanisms were applied simultaneously (MANet + ACML, EGNet + ACML), both networks achieved their best performance. The mIoU of MANet reached 58.6%, and that of EGNet reached 59.2%, representing improvements of 1.9% and 2.3%, respectively, compared to the baselines. The results indicate that the ACML framework achieves an effective synergistic effect through joint optimization at both the feature and prediction levels. The visualization results in Figure 9 and Figure 10 further validate the effectiveness of the proposed method.

4.5.4. Validation of Direct Combination Ineffectiveness

To verify the ineffectiveness of directly combining the DFIM and UFM within a single network, this study designed comparative experiments to demonstrate the necessity of the mutual learning approach. As shown in Table 7, when the two modules were directly combined, there was a significant drop in performance.
Specifically, the MixTransformer + UFM + MAFE architecture achieved only 54.9% mIoU and 66.2% mF1, compared with the standalone MANet (MixTransformer + MAFE) at 56.7% mIoU and 68.1% mF1, a decrease of 1.8% and 1.9%, respectively. Similarly, the DFormer + MAFE + UFM architecture achieved 55.3% mIoU and 66.9% mF1, compared with the standalone EGNet (DFormer + UFM) at 56.9% mIoU and 68.2% mF1, a decrease of 1.6% and 1.3%, respectively. This phenomenon confirms the core issue raised in the introduction: two modules that work well individually actually degrade performance when directly combined.
The main reasons are as follows: (1) Feature mismatch: There are distribution differences in the feature representation space between the DFIM and UFM. (2) Conflicting optimization goals: There is an inherent conflict between the optimization directions of multimodal fusion and boundary refinement, leading to mutual interference during the training process. This experiment fully demonstrates the ineffectiveness of the direct combination strategy, thereby supporting the necessity and rationality of the proposed mutual learning dual-network framework in this study.

4.5.5. Computational Efficiency and Cost–Benefit Analysis

As shown in Table 8, we evaluate computational complexity through model parameters (Params), floating-point operations (FLOPs), and frames per second (FPS). Unlike single-network architectures that rely on uniform parameter scaling, our method employs strategic resource allocation through dual specialized networks (modal-aware and edge-texture guided) combined with an adaptive cross-modal collaborative learning mechanism, enabling each network to focus on specific tasks while learning respective advantageous features. Compared to parameter-intensive methods (such as SGFNet with 125.25 million parameters) and lightweight solutions (such as CANet with 13.8 million parameters), our framework occupies a strategic middle ground. This parameter configuration not only significantly surpasses the performance of lightweight solutions but also demonstrates superior efficiency compared to parameter-intensive methods, achieving precise computational resource investment. While training two complete networks may appear computationally expensive, the superior performance of the dual-network architecture in segmentation quality and practical efficiency fully validates the effectiveness of our design philosophy. The ACML mechanism further enhances feature extraction and generalization capabilities through cross-network mutual learning, with the key advantage that no additional parameter overhead is introduced during inference. Therefore, compared to single-network models of equivalent accuracy, our framework achieves a superior balance between precision and computational cost, making it more suitable for practical application scenarios with stringent segmentation accuracy requirements.

5. Conclusions

This study addresses the critical challenges of “insufficient fusion” and “imprecise boundary delineation” in RGB-Thermal urban scene semantic segmentation by proposing a novel bidirectional dynamic adaptation framework based on mutual learning. The framework consists of two complementary networks: a Modality-Aware Network (MANet), which enhances intra-class semantic consistency through dynamic feature integration, and an Edge-Texture Guidance Network (EGNet), which improves inter-class boundary discrimination using unified feature modulation with embedded Sobel and Gabor priors. To synergize these specialized networks, we introduce an Adaptive Cross-modal Mutual Learning (ACML) mechanism that facilitates bidirectional knowledge transfer through feature difference alignment and prediction entropy consistency, without introducing additional inference-time parameters. Experiments on MFNet and PST900 datasets demonstrate that our method outperforms existing mainstream approaches in both mIoU and mF1 metrics, validating the effectiveness of the dual-network collaborative framework. This study replaces the “one-size-fits-all network” with a “specialized network + collaborative optimization” paradigm, providing a reusable design template for multimodal semantic segmentation. Future research will achieve model lightweighting through network pruning and quantization techniques to meet real-time inference requirements on edge devices, while constructing larger-scale and more diverse RGB-T datasets. Additionally, we will explore advanced strategies, including focal loss, data augmentation, and sample reweighting techniques, combined with expert system integration, to systematically address class imbalance issues in urban scene semantic segmentation, promoting the continuous development of this field.

Author Contributions

Software, J.Z.; formal analysis, J.Z.; data curation, J.Z.; writing—original draft preparation, J.Z.; writing—review and editing, N.C. and J.Z.; visualization, J.Z.; project administration, N.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available at https://github.com/wdqqggs/MlwcNet (accessed on 10 September 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ha, Q.; Watanabe, K.; Karasawa, T.; Ushiku, Y.; Harada, T. MFNet: Towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 5108–5115. [Google Scholar]
  2. Shivakumar, S.S.; Rodrigues, N.; Zhou, A.; Miller, I.D.; Kumar, V.; Taylor, C.J. Pst900: Rgb-thermal calibration, dataset and segmentation network. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–4 June 2020; pp. 9441–9447. [Google Scholar]
  3. Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder–decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 833–851. [Google Scholar]
  4. Zhao, X.; Zhang, L.; Pang, Y.; Lu, H.; Zhang, L. A single stream network for robust and real-time RGB-D salient object detection. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 646–662. [Google Scholar]
  5. Sun, K.; Xiao, B.; Liu, D.; Wang, J. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 5686–5696. [Google Scholar]
  6. Niu, X.; Li, E.; Liu, J.; Wang, Y.; Osadchy, M.; Fang, Y. Mind the Gap: Learning Modality-Agnostic Representations with a Cross-Modality UNet. IEEE Trans. Image Process. 2024, 33, 655–670. [Google Scholar] [CrossRef]
  7. Sun, Y.; Zuo, W.; Liu, M. RTFNet: RGB-Thermal Fusion Network for Semantic Segmentation of Urban Scenes. IEEE Robot. Autom. Lett. 2019, 4, 2576–2583. [Google Scholar] [CrossRef]
  8. Sun, Y.; Zuo, W.; Yun, P.; Wang, H.; Liu, M. FuseSeg: Semantic Segmentation of Urban Scenes Based on RGB and Thermal Data Fusion. IEEE Trans. Autom. Sci. Eng. 2021, 18, 1000–1011. [Google Scholar] [CrossRef]
  9. Wang, Y.; Li, G.; Liu, Z. SGFNet: Semantic-Guided Fusion Network for RGB-Thermal Semantic Segmentation. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 7737–7748. [Google Scholar] [CrossRef]
  10. Lv, Y.; Liu, Z.; Li, G. Context-Aware Interaction Network for RGB-T Semantic Segmentation. IEEE Trans. Multimed. 2024, 26, 6348–6360. [Google Scholar] [CrossRef]
  11. Guo, X.; Zhou, W.; Liu, T. Contrastive learning-based knowledge distillation for RGB-thermal urban scene semantic segmentation. Knowl. Based Syst. 2024, 292, 111588. [Google Scholar] [CrossRef]
  12. Zhou, W.; Wu, H.; Jiang, Q. MDNet: Mamba-Effective Diffusion-Distillation Network for RGB-Thermal Urban Dense Prediction. IEEE Trans. Circuits Syst. Video Technol. 2025, 35, 3222–3233. [Google Scholar] [CrossRef]
  13. Guo, X.; Liu, T.; Mou, Y.; Chai, S.; Ren, B.; Wang, J.; Shi, W.; Liu, S.; Zhou, W. Transferring Prior Thermal Knowledge for Snowy Urban Scene Semantic Segmentation. IEEE Trans. Intell. Transp. Syst. 2025, 26, 12474–12487. [Google Scholar] [CrossRef]
  14. Zhou, X.; Wu, X.; Bao, L.; Yin, H.; Jiang, Q.; Zhang, J. AGFNet: Adaptive Gated Fusion Network for RGB-T Semantic Segmentation. IEEE Trans. Intell. Transp. Syst. 2025, 26, 6477–6492. [Google Scholar] [CrossRef]
  15. Guo, X.; Liu, Y.; Xue, W.; Zhang, Z.; Zhuang, Y. Low-Light Enhancement and Global-Local Feature Interaction for RGB-T Semantic Segmentation. IEEE Trans. Instrum. Meas. 2025, 74, 1–13. [Google Scholar] [CrossRef]
  16. Chen, L.; Fu, Y.; Gu, L.; Yan, C.; Harada, T.; Huang, G. Frequency-Aware Feature Fusion for Dense Image Prediction. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 10763–10780. [Google Scholar] [CrossRef] [PubMed]
  17. Zhou, T.; Wang, W. Prototype-Based Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 6858–6872. [Google Scholar] [CrossRef]
  18. Zhang, J.; Yang, K.; Shi, H.; Reiß, S.; Peng, K.; Ma, C.; Fu, H.; Torr, P.H.S.; Wang, K.; Stiefelhagen, R. Behind Every Domain There is a Shift: Adapting Distortion-Aware Vision Transformers for Panoramic Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 8549–8567. [Google Scholar] [CrossRef]
  19. Shang, R.; Zhang, J.; Jiao, L.; Li, Y.; Marturi, N.; Stolkin, R. Multi-scale adaptive feature fusion network for semantic segmentation in remote sensing images. Remote Sens. 2020, 12, 872. [Google Scholar] [CrossRef]
  20. Zhang, Y.; Xiang, T.; Hospedales, T.M.; Lu, H. Deep Mutual Learning. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4320–4328. [Google Scholar]
  21. Xie, Z.; Cheng, S.; Fan, J.; Huang, P. Micro-expression Recognition Based on Deep Mutual Learning Network. In Proceedings of the 2022 34th Chinese Control and Decision Conference (CCDC), Hefei, China, 15–17 August 2022; pp. 751–756. [Google Scholar]
  22. Huo, T.; Fan, J.; Li, X.; Chen, H.; Gao, B.; Li, X. Traffic Sign Recognition Based on ResNet-20 and Deep Mutual Learning. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 4770–4774. [Google Scholar]
  23. Gao, Y.; Kuang, P.; He, M.; Duan, Q.; Liu, C. MM-GCN: Multi-Mutual Learning Networks of Graph Convolution for Node Classification. In Proceedings of the International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, China, 18–20 December 2020; pp. 97–100. [Google Scholar]
  24. Zhao, H.; Yang, G.; Wang, D.; Lu, H. Lightweight Deep Neural Network for Real-Time Visual Tracking with Mutual Learning. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 3063–3067. [Google Scholar] [CrossRef]
  25. Zhou, W.; Gong, T.; Yan, W. Knowledge Distillation SegFormer-Based Network for RGB-T Semantic Segmentation. IEEE Trans. Syst. Man Cybern. Syst. 2025, 55, 2170–2182. [Google Scholar] [CrossRef]
  26. Hinton, G.; Vinyals, O.; Dean, J. Distilling the Knowledge in a Neural Network. arXiv 2015, arXiv:1503.02531. [Google Scholar] [CrossRef]
  27. Guo, Q.; Wang, X.; Wu, Y.; Yu, Z.; Liang, D.; Hu, X.; Luo, P. Online Knowledge Distillation via Collaborative Learning. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11017–11026. [Google Scholar]
  28. Shen, C.; Wang, X.; Song, J.; Sun, L.; Song, M. Amalgamating knowledge towards comprehensive classification. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; pp. 3068–3075. [Google Scholar]
  29. Peng, B.; Jin, X.; Li, D.; Zhou, S.; Wu, Y.; Liu, J.; Zhang, Z.; Liu, Y. Correlation Congruence for Knowledge Distillation. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5006–5015. [Google Scholar] [CrossRef]
  30. Sun, S.; Cheng, Y.; Gan, Z.; Liu, J. Patient knowledge distillation for BERT model compression. arXiv 2019, arXiv:1908.09355. [Google Scholar] [CrossRef]
  31. Tang, J.; Wang, K. Personalized top-n sequential recommendation via convolutional sequence embedding. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, Los Angeles, CA, USA, 5–9 February 2018; pp. 565–573. [Google Scholar]
  32. Heo, B.; Lee, M.; Yun, S.; Choi, J.Y. Knowledge transfer via distillation of activation boundaries formed by hidden neurons. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; pp. 3779–3787. [Google Scholar]
  33. Chen, J.-N.; Sun, S.; He, J.; Torr, P.; Yuille, A.; Bai, S. TransMix: Attend to Mix for Vision Transformers. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 12125–12134. [Google Scholar]
  34. Pegia, M.-E.; Jónsson, B.Þ.; Moumtzidou, A.; Gialampoukidis, I.; Vrochidis, S.; Kompatsiaris, I. Comparative Analysis of Learning-Based Approaches for Change Detection in Satellite Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 3766–3781. [Google Scholar] [CrossRef]
  35. Wang, Y.; Zhou, W.; Qian, X. Transmission Line Detection Through Auxiliary Feature Registration With Knowledge Distillation. IEEE Trans. Autom. Sci. Eng. 2025, 22, 9413–9425. [Google Scholar] [CrossRef]
  36. De Silva, D.D.N.; Vithanage, H.W.M.K.; Xavier, S.A.; Piyatilake, I.T.S.; Fernando, S. Parameterized Wavelets for Convolutional Neural Networks. In Proceedings of the 2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Ponta Delgada, Portugal, 15–17 April 2020; pp. 170–176. [Google Scholar]
  37. Yin, B.; Zhang, X.; Li, Z.; Liu, L.; Cheng, M.M.; Hou, Q. DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation. In Proceedings of the International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024. [Google Scholar]
  38. Shi, W.; Zheng, B. Conflict-Alleviated Gradient Descent for Adaptive Object Detection. In Proceedings of the 33rd International Joint Conference on Artificial Intelligence (IJCAI-24), Jeju, Republic of Korea, 3–9 August 2024; pp. 1236–1244. [Google Scholar]
  39. Zhang, Z.; Shen, J.; Cao, C.; Dai, G.; Zhou, S.; Zhang, Q.; Zhang, S.; Shutova, E. Proactive Gradient Conflict Mitigation in Multi-Task Learning: A Sparse Training Perspective. arXiv 2024, arXiv:2411.18615. [Google Scholar] [CrossRef]
  40. Sun, Y.; Xu, X.; Li, J.; Hu, X.; Shi, Y.; Zeng, L.L. Learning Task-preferred Inference Routes for Gradient De-conflict in Multi-output DNNs. arXiv 2025, arXiv:2305.19844. [Google Scholar]
  41. Gu, X.; Xia, Y.; Zhang, J. Multimodal medical image fusion based on interval gradients and convolutional neural networks. BMC Med. Imaging 2024, 24, 232. [Google Scholar] [CrossRef]
  42. Cinemre, I.; Mehmood, K.; Kralevska, K.; Mahmoodi, T. Gradient-Based Optimization for Intent Conflict Resolution. Electronics 2024, 13, 864. [Google Scholar] [CrossRef]
  43. Yang, L.; Shen, D.; Cai, C.; Yang, F.; Gao, T.; Zhang, D.; Li, X. Solving Token Gradient Conflict in Mixture-of-Experts for Large Vision-Language Model. arXiv 2025, arXiv:2406.19905v3. [Google Scholar]
  44. Hoang, T.; Rana, S.; Gupta, S.; Venkatesh, S. Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024. [Google Scholar]
Figure 1. Recognition limitations of the mainstream semantic segmentation models MFNet [1] and CLNet [11]. In the first row, MFNet segments the bicycle poorly within the red box and misses the traffic cones within the blue box, whereas CLNet identifies the traffic cones but loses fine detail. The red-box regions in the second and third rows confirm the same issue: both models produce incomplete targets or capture insufficient detail.
Figure 2. Overall framework. Green and red arrows indicate the direction of knowledge transfer between MANet and EGNet, while black arrows indicate the RGB and thermal inputs.
Figure 3. Overall framework of MANet.
Figure 4. EGNet framework.
Figure 5. ACML mutual learning framework.
Figure 6. Qualitative visual comparison of RGB-T methods. Note the differences in the red boxes.
Figure 7. Visualization of the MANet ablation experiment.
Figure 8. Visualization of the EGNet ablation experiment.
Figure 9. Ablation experiments of MANet with ACML.
Figure 10. Ablation experiments of EGNet with ACML.
Table 1. Comparison experiments on the MFNet dataset; red values are the optimal results of the corresponding columns. Per-class scores are IoU (%).

| Method | Car | Person | Bike | Curve | Car Stop | Guardrail | Color Cone | Bump | mIoU | mF1 |
|---|---|---|---|---|---|---|---|---|---|---|
| MFNet17 [1] | 65.9 | 58.9 | 42.9 | 29.9 | 9.9 | 8.5 | 25.2 | 27.7 | 39.7 | - |
| RTFNet19 [7] | 86.3 | 67.8 | 58.2 | 43.7 | 24.3 | 3.6 | 26.0 | 57.2 | 51.7 | 63.0 |
| FuseSeg21 [8] | 87.9 | 71.7 | 64.6 | 44.8 | 22.7 | 6.4 | 46.9 | 47.9 | 54.5 | - |
| SGFNet23 [9] | 88.4 | 77.6 | 64.3 | 45.8 | 31.0 | 6.0 | 57.1 | 55.0 | 57.6 | - |
| CAINet24 [10] | 88.5 | 66.3 | 68.7 | 55.4 | 31.5 | 9.0 | 48.9 | 60.7 | 58.6 | - |
| CLNet-S24 [11] | 88.3 | 65.3 | 60.8 | 42.8 | 28.6 | 1.8 | 47.9 | 51.4 | 53.8 | 65.3 |
| MDNet24 [12] | 86.6 | 63.9 | 68.7 | 49.7 | 37.0 | 17.6 | 48.9 | 60.4 | 58.9 | - |
| MCNet-S*25 [13] | 86.1 | 59.6 | 62.6 | 33.9 | 40.3 | 12.0 | 52.4 | 51.7 | 55.1 | 67.7 |
| AGFNet25 [14] | 88.0 | 73.7 | 64.6 | 40.9 | 64.9 | 13.8 | 57.0 | 52.5 | 58.6 | - |
| LLE-Seg25 [15] | 88.6 | 73.2 | 64.8 | 46.8 | 30.0 | 8.8 | 52.5 | 62.4 | 58.4 | - |
| KDSNet-S*25 [25] | 87.1 | 69.5 | 59.5 | 42.3 | 35.5 | 11.3 | 52.9 | 50.4 | 56.3 | - |
| MANet | 87.3 | 71.5 | 65.0 | 45.3 | 35.7 | 5.3 | 45.4 | 56.6 | 56.7 | 68.5 |
| EGNet | 88.2 | 72.3 | 63.3 | 45.3 | 32.8 | 4.8 | 53.0 | 54.2 | 56.9 | 68.2 |
| MANet + ACML | 88.7 | 72.9 | 62.9 | 47.1 | 38.7 | 10.3 | 50.9 | 57.6 | 58.6 | 70.4 |
| EGNet + ACML | 88.6 | 72.9 | 64.6 | 47.6 | 41.9 | 7.6 | 56.9 | 54.3 | 59.2 | 70.7 |
Table 2. Comparison experiments on the PST900 dataset; red values are the optimal results of the corresponding columns. Per-class scores are IoU (%).

| Method | Hand-Drill | Backpack | Extinguisher | Survivor | mIoU | mF1 |
|---|---|---|---|---|---|---|
| MFNet17 [1] | 41.13 | 64.27 | 60.35 | 20.70 | 57.02 | - |
| RTFNet19 [7] | 52.24 | 67.91 | 54.46 | 54.11 | 65.52 | 59.50 |
| PSTNet20 [2] | 53.60 | 69.20 | 70.12 | 50.03 | 68.36 | 60.70 |
| CLNet24 [11] | 70.38 | 85.59 | 64.40 | 71.98 | 78.41 | - |
| MDNet24 [12] | 76.75 | 89.41 | 69.46 | 80.17 | 82.97 | - |
| MCNet25 [13] | 75.43 | 89.29 | 79.76 | 79.37 | 84.68 | 91.47 |
| AGFNet25 [14] | 80.30 | 85.30 | 80.20 | 78.70 | 84.80 | - |
| LLE-Seg25 [15] | 82.00 | 79.00 | 75.60 | 67.61 | 80.70 | - |
| MANet | 75.21 | 88.38 | 79.12 | 79.06 | 84.22 | 91.03 |
| EGNet | 78.05 | 88.78 | 80.32 | 79.22 | 85.16 | 91.79 |
| MANet + ACML | 78.19 | 90.05 | 81.43 | 80.76 | 86.01 | 93.22 |
| EGNet + ACML | 81.21 | 90.27 | 81.87 | 81.29 | 86.86 | 94.07 |
Table 3. Statistical validation of results on the MFNet dataset.

| Method | mIoU (Mean ± Std) | p-Value |
|---|---|---|
| AGFNet | 58.6 ± 0.31 | - |
| MANet + ACML | 58.6 ± 0.28 | 0.036 |
| EGNet + ACML | 59.2 ± 0.30 | 0.047 |
Table 4. Statistical validation of results on the PST900 dataset.

| Method | mIoU (Mean ± Std) | p-Value |
|---|---|---|
| AGFNet | 84.80 ± 0.41 | - |
| MANet + ACML | 86.01 ± 0.39 | 0.018 |
| EGNet + ACML | 86.86 ± 0.37 | 0.003 |
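Neither Table 3 nor Table 4 states the exact statistical procedure behind the mean ± std and p-values, so the following minimal Python sketch only illustrates one common way such figures are produced: train each model several times with different random seeds and compare the per-run mIoU scores of the proposed model against the baseline. The per-run values, the run count, and the choice of Welch's two-sample t-test are illustrative assumptions, not the authors' protocol.

```python
# Minimal sketch (not the authors' code) of per-run significance testing.
import numpy as np
from scipy import stats

# Hypothetical per-run mIoU values (%) on PST900; real values would come from
# repeated training runs with different random seeds.
baseline_miou = np.array([84.4, 85.1, 84.7, 84.9, 84.8])  # e.g., the AGFNet baseline
proposed_miou = np.array([85.6, 86.3, 86.0, 86.2, 85.9])  # e.g., EGNet + ACML

# Mean ± sample standard deviation, as reported in Tables 3 and 4.
print(f"baseline: {baseline_miou.mean():.2f} ± {baseline_miou.std(ddof=1):.2f}")
print(f"proposed: {proposed_miou.mean():.2f} ± {proposed_miou.std(ddof=1):.2f}")

# Welch's t-test (unequal variances); p < 0.05 would indicate a statistically
# significant improvement over the baseline.
t_stat, p_value = stats.ttest_ind(proposed_miou, baseline_miou, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```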
Table 5. Effectiveness verification of the MANet and EGNet components.

| Method | mIoU | mF1 |
|---|---|---|
| MANet (w/o DFIM) | 55.4 | 67.1 |
| DFIM (w/o DA) | 54.6 | 66.4 |
| DFIM (w/o MDC) | 54.5 | 66.4 |
| DFIM (w/o DEB) | 54.8 | 66.8 |
| EGNet (w/o UFM) | 55.5 | 66.9 |
| UFM (w/o PW and FW) | 54.6 | 66.2 |
| UFM (w/o Sobel) | 54.4 | 65.9 |
| UFM (w/o Gabor) | 54.7 | 65.8 |
| MANet | 56.7 | 68.1 |
| EGNet | 56.9 | 68.2 |
Table 6. Effectiveness verification of ACML.

| Method | mIoU | mF1 |
|---|---|---|
| MANet + Loss feat | 57.8 | 69.1 |
| EGNet + Loss feat | 57.9 | 69.5 |
| MANet + Loss pred | 57.9 | 69.2 |
| EGNet + Loss pred | 58.3 | 69.8 |
| MANet + ACML | 58.6 | 70.4 |
| EGNet + ACML | 59.2 | 70.7 |
Table 7. Verification that mismatched module–backbone combinations are less effective.

| Method | mIoU | mF1 |
|---|---|---|
| MixTransformer + UFM + DFIM | 54.9 | 66.2 |
| DFormer + DFIM + UFM | 55.3 | 66.9 |
| MANet (MixTransformer + DFIM) | 56.7 | 68.1 |
| EGNet (DFormer + UFM) | 56.9 | 68.2 |
Table 8. Model parameter and efficiency analysis.

| Method | Params (M) ↓ | FLOPs (G) ↓ | FPS ↑ |
|---|---|---|---|
| RTFNet19 [7] | 254.51 | 337.04 | 6.83 |
| FuseSeg21 [8] | 141.52 | 193.40 | 16.88 |
| SGFNet23 [9] | 125.25 | 144.83 | 5.95 |
| CAINet24 [10] | 13.8 | 4.69 | 25.68 |
| CLNet-S24 [11] | 33.35 | 150.53 | 22.43 |
| MDNet24 [12] | 29.96 | 21.88 | 27.21 |
| MCNet-S*25 [13] | 19.40 | 49.19 | 17.66 |
| AGFNet25 [14] | 37.22 | 50.32 | 19.84 |
| MANet + ACML | 48.23 | 97.32 | 9.86 |
| EGNet + ACML | 74.8 | 117.41 | 12.70 |
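Table 8 does not state how the complexity figures were measured; the short PyTorch sketch below shows one standard way to obtain the parameter-count and FPS columns for any segmentation model. The 480 × 640 input size, the 4-channel stacked RGB-T input, and the warm-up/iteration counts are assumptions for illustration, and FLOPs are usually counted with a separate profiler, so they are not computed here.

```python
# Minimal benchmarking sketch (assumptions, not the authors' measurement code).
import time
import torch

def count_params_m(model: torch.nn.Module) -> float:
    """Total number of learnable parameters, in millions."""
    return sum(p.numel() for p in model.parameters()) / 1e6

@torch.no_grad()
def measure_fps(model: torch.nn.Module, input_shape=(1, 4, 480, 640),
                warmup: int = 20, iters: int = 100) -> float:
    """Average forward-pass throughput in frames per second."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.eval().to(device)
    # RGB and thermal stacked as a 4-channel tensor is an assumption; some
    # models instead take two separate inputs.
    x = torch.randn(*input_shape, device=device)
    for _ in range(warmup):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    return iters / (time.perf_counter() - start)

# Usage with any network exposing a standard forward(x) interface:
# print(f"Params: {count_params_m(net):.2f} M, FPS: {measure_fps(net):.2f}")
```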
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
