Figure 1.
Feature information under different perspectives. (a,b) show the feature information at different scale perspectives; (c,d) show the feature information from the low-frequency and high-frequency perspectives.
Figure 2.
The overall structure of MPIFNet. The U-Net network structure is used, with a Double-Branch Haar Wavelet Transform applied in the skip connection part. After each decoder, Global and Local Mamba Block Self-Attention is introduced.
Figure 3.
Illustration of Block Self-Attention. The module first divides three components—Global Mamba, Local Mamba, and original features—into blocks. After performing self-attention computations, it aggregates all processed features.
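The partition-attend-aggregate flow described in the Figure 3 caption can be sketched as follows. This is an illustrative NumPy sketch under our own assumptions: the function names, the single-head formulation, and the fixed block size are not taken from the paper's implementation, which attends across the Global Mamba, Local Mamba, and original-feature branches.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def block_self_attention(q_feat, k_feat, v_feat, block=4):
    """Illustrative block-wise attention over (H, W, C) feature maps.

    The three maps (e.g. one per branch) are split into non-overlapping
    block x block windows; scaled dot-product attention is computed
    within each window, and the windows are stitched back together.
    Assumes H and W are divisible by `block`.
    """
    H, W, C = q_feat.shape
    out = np.empty_like(v_feat)
    for i in range(0, H, block):
        for j in range(0, W, block):
            # Flatten each window to (block*block, C) token matrices.
            q = q_feat[i:i + block, j:j + block].reshape(-1, C)
            k = k_feat[i:i + block, j:j + block].reshape(-1, C)
            v = v_feat[i:i + block, j:j + block].reshape(-1, C)
            attn = softmax(q @ k.T / np.sqrt(C))  # rows sum to 1
            out[i:i + block, j:j + block] = (attn @ v).reshape(block, block, C)
    return out
```

Because attention is confined to each window, the cost grows linearly in the number of windows rather than quadratically in H × W, which is the usual motivation for block-based attention.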
Figure 4.
Illustration of Double-Branch Haar Wavelet Transform. The double-branch structure originates from different skip connection layers. Each branch processes four frequency components extracted via Haar wavelet transform, with final feature integration achieved through additive fusion followed by channel concatenation.
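The single-level 2D Haar decomposition that produces the four frequency components each branch processes can be sketched as follows. This is a minimal NumPy sketch; the orthonormal scaling and the LL/LH/HL/HH sub-band naming are standard wavelet conventions, not details taken from the paper.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar DWT of a (H, W) array with even H and W.

    Returns the four sub-bands (LL, LH, HL, HH), each of shape (H/2, W/2):
    LL is the low-frequency approximation; LH, HL, HH carry horizontal,
    vertical, and diagonal high-frequency detail.
    """
    a = x[0::2, 0::2]  # top-left of each 2x2 patch
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # local average (approximation)
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh
```

With the 1/2 scaling the transform is orthonormal, so the total signal energy is preserved across the four sub-bands, and a smooth region contributes almost entirely to LL while edges show up in the detail bands.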
Figure 5.
Visualization of segmentation results on the Vaihingen dataset, where GT represents ground truth. The red square marks the area to be compared.
Figure 6.
Comparison with the ground-truth label: (a) an image from the Vaihingen dataset; (b) an image from the Potsdam dataset. The red square marks the region for comparison, and the arrow points to its enlarged view. The colored blocks represent different semantic classes: white for impervious surface, blue for building, cyan for low vegetation, green for tree, yellow for car, and red for background.
Figure 7.
Overall segmentation maps for ID 35 and ID 38.
Figure 8.
Visualization of segmentation results on the Potsdam dataset, where GT represents ground truth. The red square marks the area to be compared.
Figure 9.
Overall segmentation maps for ID and ID are shown in the figure.
Figure 10.
Visualization of segmentation results on the LoveDA dataset, where GT represents ground truth. The red and blue squares mark the areas to be compared.
Figure 11.
Effect of Mamba Depths and Chunk Sizes on Model Performance.
Figure 12.
Visualization of Wavelet Low-Frequency Components.
Figure 13.
MPIFNet segmentation results on a real-world remote sensing image, illustrating overall and local structure. The colored blocks represent different semantic classes: white for impervious surface, blue for building, cyan for low vegetation, green for tree, yellow for car, and red for background.
Table 1.
Comparison of quantitative results on the Vaihingen dataset between our method and SOTA approaches. The best values in each column are highlighted in bold and underlined, while the second-best values are highlighted in bold. All metrics are reported as percentages (%).
| Method | Imp.Surf. (F1/IoU) | Building (F1/IoU) | Low Veg. (F1/IoU) | Tree (F1/IoU) | Car (F1/IoU) | mF1 | OA | mIoU |
|---|---|---|---|---|---|---|---|---|
| U-Net [8] | 95.79/91.93 | 92.77/86.52 | 81.91/69.36 | 88.69/79.68 | 82.40/70.06 | 88.31 | 91.51 | 79.51 |
| UNetFormer [42] | 96.58/93.38 | 95.10/90.66 | 84.17/72.66 | 90.03/81.87 | 85.77/75.09 | 90.33 | 92.93 | 82.73 |
| FT-UNetFormer [42] | 97.04/94.25 | 96.17/92.63 | 84.96/73.85 | 90.42/82.51 | 87.81/78.28 | 91.28 | 93.62 | 84.30 |
| A2-FPN [43] | 96.78/93.76 | 95.56/91.51 | 84.63/73.36 | 90.08/81.95 | 88.47/79.32 | 91.10 | 93.26 | 83.98 |
| MANet [44] | 96.74/93.68 | 95.46/91.32 | 84.61/73.32 | 89.72/81.36 | 88.74/79.77 | 91.05 | 93.07 | 83.89 |
| EIGNet [45] | 96.59/93.42 | 95.26/90.95 | 84.17/72.67 | 89.84/81.55 | 87.36/77.57 | 90.64 | 92.94 | 83.23 |
| CMTFNet [17] | 96.61/93.45 | 95.15/90.76 | 84.93/73.80 | 89.87/81.61 | 86.36/76.00 | 90.58 | 93.04 | 83.12 |
| CGGLNet [46] | 96.87/93.93 | 95.53/91.45 | 84.25/72.79 | 90.26/82.26 | 88.31/79.07 | 91.04 | 93.28 | 83.90 |
| ConvLSR-Net [41] | 97.02/94.22 | 96.00/92.31 | 85.31/74.38 | 90.72/83.02 | 88.82/79.88 | 91.57 | 93.62 | 84.76 |
| SFFNet [26] | 97.06/94.30 | 95.75/91.84 | 85.06/74.00 | 90.14/82.05 | 90.33/82.36 | 91.67 | 93.51 | 84.91 |
| E-PyramidMamba [47] | 96.82/93.84 | 95.44/91.29 | 84.80/73.62 | 90.24/82.21 | 86.93/76.89 | 90.85 | 93.29 | 83.57 |
| RS3Mamba [40] | 96.83/93.86 | 95.79/91.92 | 85.04/73.98 | 90.28/82.29 | 87.22/77.35 | 91.03 | 93.42 | 83.88 |
| PPMamba [24] | 97.00/94.19 | 96.08/92.47 | 85.54/74.73 | 90.48/82.62 | 88.77/79.81 | 91.57 | 93.68 | 84.76 |
| CPSSNet [23] | 97.00/94.18 | 95.84/92.03 | 84.86/73.70 | 90.39/82.47 | 89.68/81.29 | 91.55 | 93.51 | 84.73 |
| MPIFNet (ours) | 97.35/94.84 | 96.49/93.21 | 85.89/75.27 | 90.79/83.14 | 91.11/83.58 | 92.32 | 94.05 | 86.03 |
Table 2.
Comparison of the parameters and computation of different networks. The minimum values of Params, FLOPs, and Inference Time are highlighted in bold and underlined, as smaller values indicate better efficiency.
| Method | Backbone | Params (M) | FLOPs (G) | Inference Time (ms) | FPS | mIoU |
|---|---|---|---|---|---|---|
| U-Net [8] | ResNet18 | 31.0 | 193.4 | 16.0 | 62.3 | 79.51 |
| UNetFormer [42] | ResNet18 | 11.7 | 12.0 | 5.9 | 170.3 | 82.73 |
| FT-UNetFormer [42] | Swin-S | 96.0 | 128.4 | 26.1 | 38.2 | 84.30 |
| A2-FPN [43] | ResNet18 | 12.2 | 13.6 | 3.4 | 295.0 | 83.98 |
| MANet [44] | ResNet18 | 35.9 | 54.5 | 10.7 | 93.4 | 83.89 |
| EIGNet [45] | ResNet18 | 34.5 | 45.1 | 17.4 | 57.6 | 83.23 |
| CMTFNet [17] | ResNet50 | 30.1 | 34.6 | 10.9 | 92.1 | 83.12 |
| CGGLNet [46] | ResNet50 | 36.9 | 88.5 | 21.1 | 47.3 | 83.90 |
| ConvLSR-Net [41] | ConvNeXt-S | 68.1 | 71.1 | 32.3 | 30.9 | 84.76 |
| SFFNet [26] | ConvNeXt-T | 34.2 | 52.1 | 16.3 | 59.9 | 84.91 |
| E-PyramidMamba [47] | ResNet18 | 28.8 | 19.1 | 3.8 | 264.3 | 83.57 |
| RS3Mamba [40] | ResNet18 | 43.3 | 39.6 | 24.7 | 40.5 | 83.88 |
| PPMamba [24] | ResNet18 | 21.7 | 23.1 | 22.1 | 45.3 | 84.76 |
| CPSSNet [23] | ConvNeXt-T | 31.9 | 28.8 | 77.1 | 13.0 | 84.73 |
| MPIFNet (ours) | ConvNeXt-S | 62.9 | 88.3 | 57.6 | 17.4 | 86.03 |
Table 3.
Comparison of quantitative results on the Potsdam dataset between our method and SOTA approaches. The best values in each column are highlighted in bold and underlined, while the second-best values are highlighted in bold. All metrics are reported as percentages (%).
| Method | Imp.Surf. (F1/IoU) | Building (F1/IoU) | Low Veg. (F1/IoU) | Tree (F1/IoU) | Car (F1/IoU) | mF1 | OA | mIoU |
|---|---|---|---|---|---|---|---|---|
| U-Net [8] | 92.25/85.61 | 94.09/88.84 | 86.27/75.86 | 87.34/77.53 | 95.71/91.77 | 91.13 | 89.38 | 83.92 |
| UNetFormer [42] | 94.05/88.77 | 96.58/93.40 | 87.45/77.70 | 88.82/79.89 | 96.79/93.78 | 92.74 | 91.39 | 86.71 |
| FT-UNetFormer [42] | 94.43/89.44 | 96.96/94.11 | 88.41/79.24 | 89.77/81.43 | 96.49/93.23 | 93.21 | 91.97 | 87.49 |
| A2-FPN [43] | 93.97/88.62 | 96.37/93.00 | 87.49/77.76 | 88.62/79.56 | 96.21/92.71 | 92.53 | 91.31 | 86.33 |
| MANet [44] | 93.77/88.28 | 96.44/93.14 | 87.63/77.99 | 89.16/80.44 | 96.40/93.06 | 92.68 | 91.42 | 86.58 |
| EIGNet [45] | 93.68/88.11 | 96.35/92.97 | 88.29/79.03 | 89.30/80.66 | 96.71/93.64 | 92.86 | 91.41 | 86.88 |
| CMTFNet [17] | 93.86/88.43 | 96.15/92.58 | 88.13/78.78 | 89.13/80.39 | 96.63/93.48 | 92.78 | 91.49 | 86.73 |
| CGGLNet [46] | 94.21/89.06 | 96.90/93.99 | 88.08/78.70 | 89.21/80.52 | 96.73/93.66 | 93.02 | 91.80 | 87.19 |
| ConvLSR-Net [41] | 94.86/90.22 | 97.18/94.52 | 88.21/78.91 | 89.45/80.92 | 97.00/94.17 | 93.34 | 92.16 | 87.75 |
| SFFNet [26] | 94.62/89.80 | 96.90/93.99 | 88.24/78.95 | 89.22/80.54 | 96.93/94.05 | 93.18 | 91.97 | 87.47 |
| E-PyramidMamba [47] | 93.83/88.37 | 96.20/92.68 | 87.90/78.42 | 89.16/80.44 | 96.47/93.19 | 92.71 | 91.43 | 86.62 |
| RS3Mamba [40] | 94.49/89.56 | 96.62/93.46 | 87.90/78.41 | 89.17/80.46 | 96.65/93.52 | 92.97 | 91.79 | 87.08 |
| PPMamba [24] | 94.54/89.65 | 96.95/94.08 | 88.29/79.03 | 89.48/80.97 | 96.47/93.18 | 93.15 | 92.06 | 87.38 |
| CPSSNet [23] | 94.31/89.24 | 96.71/93.63 | 88.02/78.61 | 89.53/81.04 | 96.85/93.89 | 93.08 | 91.74 | 87.28 |
| MPIFNet (ours) | 95.09/90.64 | 97.49/95.11 | 88.77/79.80 | 90.05/81.90 | 97.08/94.33 | 93.69 | 92.53 | 88.36 |
Table 4.
Comparison of quantitative results on the LoveDA dataset between our method and SOTA approaches. The best values in each column are highlighted in bold and underlined, while the second-best values are highlighted in bold. All metrics are reported as percentages (%).
| Method | Background (IoU) | Building (IoU) | Road (IoU) | Water (IoU) | Barren (IoU) | Forest (IoU) | Agriculture (IoU) | mF1 | OA | mIoU |
|---|---|---|---|---|---|---|---|---|---|---|
| U-Net [8] | 50.50 | 56.72 | 47.80 | 52.63 | 30.43 | 40.66 | 45.69 | 62.90 | 65.14 | 46.35 |
| UNetFormer [42] | 50.33 | 55.66 | 55.47 | 68.03 | 24.91 | 42.08 | 50.31 | 65.27 | 67.17 | 49.54 |
| FT-UNetFormer [42] | 55.48 | 62.10 | 56.03 | 72.26 | 33.34 | 43.12 | 57.00 | 69.51 | 72.00 | 54.19 |
| A2-FPN [43] | 52.25 | 56.15 | 52.04 | 70.14 | 34.08 | 42.43 | 51.60 | 67.13 | 68.89 | 51.24 |
| MANet [44] | 51.41 | 60.12 | 54.91 | 54.44 | 29.12 | 41.06 | 49.55 | 64.83 | 66.98 | 48.66 |
| EIGNet [45] | 52.39 | 58.29 | 52.40 | 62.85 | 38.83 | 38.40 | 48.89 | 66.49 | 67.96 | 50.29 |
| CMTFNet [17] | 53.79 | 62.67 | 54.89 | 70.14 | 30.80 | 41.34 | 53.86 | 67.99 | 70.39 | 52.50 |
| CGGLNet [46] | 54.33 | 65.95 | 54.08 | 69.00 | 31.74 | 38.90 | 52.10 | 67.78 | 70.35 | 52.30 |
| ConvLSR-Net [41] | 54.49 | 64.03 | 57.32 | 72.14 | 34.12 | 44.18 | 56.63 | 69.97 | 71.81 | 54.70 |
| SFFNet [26] | 54.37 | 64.22 | 56.56 | 68.68 | 32.95 | 44.66 | 52.02 | 68.87 | 70.71 | 53.35 |
| E-PyramidMamba [47] | 54.37 | 60.54 | 50.92 | 63.21 | 33.42 | 39.92 | 55.01 | 66.99 | 70.07 | 51.06 |
| RS3Mamba [40] | 53.31 | 57.73 | 52.50 | 66.40 | 37.39 | 38.31 | 53.62 | 67.29 | 69.45 | 51.33 |
| PPMamba [24] | 54.15 | 63.28 | 55.25 | 68.36 | 34.96 | 40.86 | 51.30 | 68.25 | 69.95 | 52.59 |
| CPSSNet [23] | 54.62 | 64.79 | 57.58 | 70.08 | 35.86 | 43.74 | 54.85 | 69.90 | 71.51 | 54.50 |
| MPIFNet (ours) | 55.56 | 66.37 | 56.89 | 71.99 | 37.04 | 46.38 | 56.11 | 70.97 | 72.43 | 55.76 |
Table 5.
Ablation experiments on the Vaihingen and Potsdam datasets. The best values in each column are shown in bold and underlined. All scores are expressed as percentages (%).
| Dataset | Method | Params (M) | FLOPs (G) | mF1 | OA | mIoU |
|---|---|---|---|---|---|---|
| Vaihingen | Baseline | 58.4 | 68.9 | 90.95 | 93.36 | 83.76 |
| Vaihingen | Baseline + DBHWT | 59.6 | 71.9 | 91.88 | 93.85 | 85.27 |
| Vaihingen | Baseline + GLMBSA | 61.7 | 85.3 | 91.95 | 93.82 | 85.38 |
| Vaihingen | Baseline + GLMBSA + DBHWT | 62.9 | 88.3 | 92.32 | 94.05 | 86.03 |
| Potsdam | Baseline | 58.4 | 68.9 | 92.69 | 91.28 | 86.61 |
| Potsdam | Baseline + DBHWT | 59.6 | 71.9 | 93.24 | 91.88 | 87.56 |
| Potsdam | Baseline + GLMBSA | 61.7 | 85.3 | 93.65 | 92.36 | 87.67 |
| Potsdam | Baseline + GLMBSA + DBHWT | 62.9 | 88.3 | 93.69 | 92.53 | 88.36 |
Table 6.
Effectiveness of the GLMBSA module. Best values are shown in bold and underlined.
| Method | Params (M) | FLOPs (G) | Low Veg. | Car | mIoU |
|---|---|---|---|---|---|
| Baseline + GMamba | 61.7 | 76.8 | 74.56 | 80.19 | 84.74 |
| Baseline + LMamba | 61.7 | 76.8 | 73.92 | 81.23 | 84.90 |
| Baseline + GLMamba | 61.7 | 84.7 | 75.15 | 80.82 | 85.13 |
| Baseline + GLMBSA | 61.7 | 85.3 | 75.24 | 81.66 | 85.38 |
Table 7.
Effectiveness of the DBHWT module. Best values are shown in bold and underlined.
| Method | Params (M) | FLOPs (G) | mF1 | OA | mIoU |
|---|---|---|---|---|---|
| Baseline | 58.4 | 68.9 | 90.95 | 93.36 | 83.76 |
| Baseline + DB | 59.6 | 69.1 | 91.57 | 93.60 | 84.75 |
| Baseline + OBHWT | 59.2 | 70.7 | 91.61 | 93.69 | 84.83 |
| Baseline + DBHWT | 59.6 | 71.9 | 91.88 | 93.85 | 85.27 |
Table 8.
Impact of GLMBSA embedding placement on model performance: encoder, decoder, and combined stages. Best values are shown in bold and underlined. A check mark indicates the module is included; a hyphen indicates it is excluded.
| Encoder | Decoder | Params (M) | FLOPs (G) | mIoU |
|---|---|---|---|---|
| - | - | 58.4 | 68.9 | 83.76 |
| ✓ | - | 70.6 | 89.7 | 84.78 |
| - | ✓ | 61.7 | 85.3 | 85.38 |
| ✓ | ✓ | 73.9 | 106.1 | 84.58 |
Table 9.
Comparison of computation for Mamba blocks of different depths. Best values are shown in bold and underlined.
| Depths | Params (M) | FLOPs (G) | mF1 | OA | mIoU |
|---|---|---|---|---|---|
| Mamba × 0 | 59.6 | 72.4 | 91.99 | 93.88 | 85.45 |
| Mamba × 1 | 61.2 | 80.4 | 92.18 | 93.95 | 85.76 |
| Mamba × 2 | 62.9 | 88.3 | 92.32 | 94.05 | 86.03 |
| Mamba × 4 | 66.2 | 104.2 | 92.10 | 93.90 | 85.65 |
| Mamba × 8 | 72.7 | 135.9 | 92.16 | 93.91 | 85.75 |
Table 10.
Comparison of computation with different self-attention chunk sizes. Best values are shown in bold and underlined.
| Chunk Size | Params (M) | FLOPs (G) | mF1 | OA | mIoU |
|---|---|---|---|---|---|
| H | 62.9 | 88.3 | 92.32 | 94.05 | 86.03 |
| H × 2 | 62.9 | 88.8 | 92.14 | 93.97 | 85.72 |
| H × 4 | 62.9 | 89.9 | 92.12 | 93.84 | 85.69 |
| H × 8 | 62.9 | 92.0 | 92.07 | 93.88 | 85.60 |
| H × 16 | 62.9 | 96.2 | 92.03 | 93.94 | 85.53 |
Table 11.
Comparison of different Query/Key/Value combinations in the self-attention computation. Best values are shown in bold and underlined.
| Query | Key | Value | mIoU |
|---|---|---|---|
| X | LMamba | GMamba | 85.72 |
| X | GMamba | LMamba | 85.48 |
| GMamba | X | LMamba | 85.65 |
| LMamba | X | GMamba | 85.54 |
| GMamba | LMamba | X | 85.61 |
| LMamba | GMamba | X | 86.03 |
Table 12.
Comparison of computation with different wavelet bases. Best values are shown in bold and underlined.
| Wavelet Basis | Params (M) | FLOPs (G) | mF1 | OA | mIoU |
|---|---|---|---|---|---|
| Daubechies | 62.9 | 88.3 | 92.16 | 93.86 | 85.76 |
| Symlet | 62.9 | 88.3 | 92.13 | 93.92 | 85.69 |
| Coiflet | 62.9 | 88.3 | 92.18 | 93.93 | 85.78 |
| Biorthogonal | 62.9 | 88.3 | 92.08 | 93.84 | 85.62 |
| Haar | 62.9 | 88.3 | 92.32 | 94.05 | 86.03 |
Table 13.
Comparison of computation with different backbones. Best values are shown in bold and underlined.
| Backbone | Params (M) | FLOPs (G) | mF1 | OA | mIoU |
|---|---|---|---|---|---|
| ResNet18 | 18.6 | 52.5 | 91.45 | 93.55 | 84.58 |
| ResNet50 | 31.3 | 66.6 | 92.07 | 93.72 | 85.57 |
| Swin-Tiny | 40.9 | 68.4 | 91.99 | 93.82 | 85.46 |
| Swin-Small | 62.2 | 92.2 | 92.00 | 93.87 | 85.47 |
| ConvNeXt-Tiny | 41.2 | 66.2 | 92.14 | 93.91 | 85.71 |
| ConvNeXt-Small | 62.9 | 88.3 | 92.32 | 94.05 | 86.03 |
Table 14.
Comparison of Computational Efficiency and Performance under Different Local Mamba Partitioning Strategies. Best values are shown in bold and underlined.
| Partitioning Strategy | Window | Stride | Params (M) | FLOPs (G) | mF1 | OA | mIoU |
|---|---|---|---|---|---|---|---|
| Sliding Window | 32 × 32 | 12 | 69.2 | 97.7 | 92.21 | 93.94 | 85.85 |
| Sliding Window | 32 × 32 | 16 | 69.2 | 88.3 | 92.18 | 93.93 | 85.79 |
| Sliding Window | 64 × 64 | 16 | 69.2 | 101.8 | 92.10 | 93.89 | 85.65 |
| Sliding Window | 64 × 64 | 32 | 69.2 | 89.7 | 92.15 | 93.99 | 85.73 |
| Sliding Window | 64 × 64 | 64 | 69.2 | 85.9 | 92.19 | 94.01 | 85.80 |
| Sliding Window | 128 × 128 | 64 | 69.2 | 83.4 | 92.18 | 93.99 | 85.79 |
| Block-based | - | - | 62.9 | 88.3 | 92.32 | 94.05 | 86.03 |
Table 15.
Comparison of performance under different noise levels. Best values are shown in bold and underlined.
| Method | Noise | Params (M) | FLOPs (G) | mF1 | OA | mIoU |
|---|---|---|---|---|---|---|
| MPIFNet (ours) | 0.00 | 62.9 | 88.3 | 92.32 | 94.05 | 86.03 |
| MPIFNet (ours) | 0.01 | 62.9 | 88.3 | 85.56 | 89.34 | 75.48 |
| MPIFNet (ours) | 0.03 | 62.9 | 88.3 | 85.54 | 89.93 | 75.52 |
| MPIFNet (ours) | 0.05 | 62.9 | 88.3 | 84.63 | 88.85 | 74.15 |
| ConvLSR-Net | 0.00 | 68.1 | 71.1 | 91.57 | 93.62 | 84.76 |
| ConvLSR-Net | 0.01 | 68.1 | 71.1 | 83.62 | 87.54 | 72.61 |
| ConvLSR-Net | 0.03 | 68.1 | 71.1 | 83.99 | 89.00 | 73.40 |
| ConvLSR-Net | 0.05 | 68.1 | 71.1 | 82.60 | 87.34 | 71.34 |
| CPSSNet | 0.00 | 31.9 | 28.8 | 91.55 | 93.51 | 84.73 |
| CPSSNet | 0.01 | 31.9 | 28.8 | 83.73 | 87.15 | 72.51 |
| CPSSNet | 0.03 | 31.9 | 28.8 | 83.88 | 87.24 | 72.95 |
| CPSSNet | 0.05 | 31.9 | 28.8 | 82.83 | 87.65 | 71.55 |