Figure 1.
Typical LVDG structure and limitations of measurement capability. Measurement capability is concentrated in substations, leaving most of the LVDG unmonitored.
Figure 2.
Flowchart of the proposed adaptive working condition-based fault location framework.
Figure 3.
Topology of the 400 V LVDG test network with 23 buses, showing ICT deployment locations (B2, B10, B16), fault simulation modules (B3, B7, B9, B11, B17, B22), and distributed PV units (yellow symbols at B6, B9, B14, B21, B23).
Figure 4.
Annual operational profiles of the test LVDG system: (a) Normalized load variation curve, (b) Normalized PV generation variation curve, (c) Maximum node voltage variation curve, (d) Minimum node voltage variation curve.
Figure 5.
Temporal distribution of the six identified working conditions (WC) throughout the year, displayed as a 12 × 24 matrix (months × hours) with color-coded clusters: WC1 (17.9%), WC2 (17.8%), WC3 (1.3%, edge condition), WC4 (30.9%, dominant condition), WC5 (19.3%), and WC6 (12.8%).
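As context for Figure 5, working conditions of this kind can be obtained by clustering hourly operating snapshots. The sketch below is a minimal illustration assuming a K-means-style clustering with k = 6 over normalized features like those in Table 4; the clustering algorithm and the random placeholder feature matrix are assumptions, not the paper's data or code.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder feature matrix: one row per hour of the year (8760 rows),
# columns such as normalized load, PV output, and max/min node voltages.
rng = np.random.default_rng(0)
X = rng.random((8760, 5))

# Six working conditions, matching the WC1-WC6 labels of Figure 5.
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)

# Occupancy of each cluster (cf. the percentages quoted in the caption).
for wc in range(6):
    print(f"WC{wc + 1}: {np.mean(labels == wc) * 100:.1f}% of hours")

# The 12 x 24 (months x hours) view of Figure 5 is then the modal label
# per (calendar month, hour-of-day) cell.
```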
Figure 6.
Spatial voltage distribution heatmaps across the six working conditions in the test network, showing per-unit voltage magnitudes at each bus location. Color gradients from blue (0.8 p.u.) to red (1.05 p.u.) represent voltage levels.
Figure 7.
Comparison of A-phase ground-fault signatures at buses B17 (Sample 5) and B22 (Sample 6) observed from the ICT at bus B16: (A,B) original voltage waveforms, (C,D) STFT time–frequency maps, (E,F) CWT time–frequency maps, (G,H) S-transform time–frequency maps.
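To reproduce time–frequency maps like panels (G,H) of Figure 7, the discrete Stockwell (S-) transform can be computed from the signal's FFT. The sketch below is a minimal textbook implementation, not the authors' code; the 4 kHz sampling rate and the test waveform are illustrative assumptions. For the STFT and CWT panels, `scipy.signal.stft` and `pywt.cwt` serve the same role.

```python
import numpy as np

def s_transform(x):
    """Minimal discrete S-transform (Stockwell et al., 1996) of a real signal.

    Returns an (N//2 + 1) x N complex matrix: rows are frequency bins,
    columns are time samples.
    """
    N = len(x)
    X = np.fft.fft(x)
    XX = np.concatenate([X, X])            # allows circular spectrum shifts
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = np.mean(x)                   # zero-frequency row = signal mean
    m = np.arange(N)
    m = np.where(m <= N // 2, m, m - N)    # signed frequency offsets
    for n in range(1, N // 2 + 1):
        gauss = np.exp(-2.0 * np.pi**2 * m**2 / n**2)   # freq-domain Gaussian
        S[n, :] = np.fft.ifft(XX[n:n + N] * gauss)
    return S

# Illustrative waveform: one 50 Hz cycle plus a decaying high-frequency
# transient, 80 samples at an assumed 4 kHz sampling rate (20 ms window).
t = np.arange(80) / 4000.0
v = np.sin(2 * np.pi * 50 * t) + 0.5 * np.exp(-t / 0.005) * np.sin(2 * np.pi * 800 * t)
amplitude_map = np.abs(s_transform(v))     # visualize with e.g. plt.pcolormesh
```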
Figure 8.
CNN training performance without S-transform preprocessing: (a) Training curves for fault type classification showing accuracy (red) and loss (blue) evolution; (b) Training curves for fault location showing accuracy (red) and loss (blue) evolution; (c) Confusion matrix for fault type classification; (d) Confusion matrix for fault location.
Figure 9.
CNN training performance with S-transform enhancement: (a) Training curves for fault type classification showing accuracy (red) and loss (blue) evolution; (b) Training curves for fault location showing accuracy (red) and loss (blue) evolution; (c) Confusion matrix for fault type classification; (d) Confusion matrix for fault location.
Figure 10.
Comparative performance analysis across different models.
Figure 11.
Performance comparison between direct training and progressive transfer learning across different working conditions with varying training sample sizes. Four representative conditions are evaluated: (a) WC1 (17.9% samples); (b) WC3 (1.3% samples, edge condition); (c) WC5 (19.3% samples); (d) WC6 (12.8% samples).
Figure 12.
Comparative analysis of fault location accuracy between direct training without working condition matching (blue bars) and transfer learning with working condition matching (red bars) across six working conditions.
Table 1.
Comparison of representative fault-location approaches with the proposed method.
| Method | Feature Type | Measurement Requirement | Key Feature | Ref. |
|---|---|---|---|---|
| Classical ML | Time–amplitude | Multi-node waveform recorders | Sensitive to feature selection; poor generalization under sparse sensing | [14] |
| DL (e.g., CNN) | Raw waveform | Multi-node waveform recorders | High data requirement | [24] |
| DL + Transfer learning | Raw waveform | Multi-node waveform recorders | Partial adaptation; assumes high-quality data | [31] |
| GAN/Data synthesis | Simulated waveforms | Multi-node waveform recorders | Possible label noise and distribution distortion | [26] |
| Proposed method | Time–frequency–energy | Feeder-head measurement only | Handles sparse sensing and imbalanced data; robust across year-round conditions | - |
Table 2.
Transfer Learning Strategy Selection.
| Data Volume | Domain Similarity | Strategy | Trainable Layers |
|---|---|---|---|
| Limited | High | Fine-tuning | Fully connected layers |
| Limited | Low | Medium-tuning | A few convolutional layers + fully connected layers |
| Sufficient | Low | Medium-tuning | A few convolutional layers + fully connected layers |
| Sufficient | High | Fine-tuning | Fully connected layers |
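A minimal sketch of how the rules in Table 2 could map onto layer freezing in PyTorch; the attribute names `conv_blocks` and `classifier` are hypothetical conventions, not the authors' implementation.

```python
import torch.nn as nn

def select_trainable_layers(model: nn.Module, domain_similarity: str) -> None:
    """Freeze/unfreeze parameters per Table 2 (hypothetical helper).

    "High" similarity -> fine-tuning: only the fully connected head trains.
    "Low" similarity  -> medium-tuning: the last conv blocks also train.
    Assumes the model exposes `conv_blocks` (an indexable container of
    blocks) and `classifier` (the fully connected head); adapt as needed.
    """
    for p in model.parameters():                 # start fully frozen
        p.requires_grad = False
    for p in model.classifier.parameters():      # FC layers always trainable
        p.requires_grad = True
    if domain_similarity.lower() == "low":       # medium-tuning
        for block in model.conv_blocks[-2:]:     # a few top conv blocks
            for p in block.parameters():
                p.requires_grad = True
```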
Table 3.
2D-CNN Architecture. Input: 80 (time) × W (buses × 3) × 24 (S-transform channels). Each Conv Block includes BatchNorm, ReLU, and Dropout (0.25–0.5).
| Layer | Type | Parameters | Output Shape |
|---|---|---|---|
| Input | - | - | (80, W, 24) |
| Conv Block 1 | 2 × Conv2D + Pool | 32 filters, 3 × 3 | (40, W/2, 32) |
| Conv Block 2 | 2 × Conv2D + Pool | 64 filters, 3 × 3 | (40, W/2, 64) |
| Conv Block 3 | 2 × Conv2D + Pool | 128 filters, 3 × 3 | (40, W/4, 128) |
| Conv Block 4 | 2 × Conv2D + Pool | 256 filters, 3 × 3 | (40, W/8, 256) |
| Global Pool | - | - | (256) |
| FC1 | Dense + Dropout | - | (512) |
| FC2 | Dense + Dropout | - | (256) |
| Output | Dense + Softmax | num_classes/num_location | (num_classes/num_location) |
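One plausible PyTorch rendering of the Table 3 backbone (a sketch under assumptions, not the authors' implementation): the per-block pooling placement is inferred from the listed output shapes, and softmax is deferred to the loss function.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out, pool):
    """Two 3x3 convolutions with BatchNorm/ReLU, optional pooling, Dropout."""
    layers = []
    for cin, cout in [(c_in, c_out), (c_out, c_out)]:
        layers += [nn.Conv2d(cin, cout, 3, padding=1),
                   nn.BatchNorm2d(cout), nn.ReLU(inplace=True)]
    if pool is not None:
        layers.append(nn.MaxPool2d(pool))
    layers.append(nn.Dropout(0.25))
    return nn.Sequential(*layers)

class FaultCNN(nn.Module):
    """Sketch of the Table 3 backbone; input is (batch, 24, 80, W)."""
    def __init__(self, num_outputs: int):
        super().__init__()
        self.conv_blocks = nn.ModuleList([
            conv_block(24, 32, (2, 2)),    # (80, W) -> (40, W/2)
            conv_block(32, 64, None),      # channels only, per Table 3
            conv_block(64, 128, (1, 2)),   # width halved: W/2 -> W/4
            conv_block(128, 256, (1, 2)),  # width halved: W/4 -> W/8
        ])
        self.classifier = nn.Sequential(
            nn.Linear(256, 512), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(512, 256), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(256, num_outputs),   # softmax applied via the loss
        )

    def forward(self, x):
        for block in self.conv_blocks:
            x = block(x)
        x = x.mean(dim=(2, 3))             # global average pool -> (batch, 256)
        return self.classifier(x)
```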
Table 4.
Working Condition Cluster Centroids (Normalized Values).
| Features | WC1 | WC2 | WC3 | WC4 | WC5 | WC6 |
|---|---|---|---|---|---|---|
| Avg. Load (f1) | 0.52 | 0.35 | 0.31 | 0.17 | 0.45 | 0.70 |
| Avg. PV Output (f8) | 0.06 | 0.07 | 0.78 | 0.01 | 0.48 | 0.10 |
| Avg. Max Voltage (f12) | 1.00 | 1.00 | 1.06 | 1.00 | 1.00 | 1.00 |
| Avg. Min Voltage (f13) | 0.90 | 0.94 | 1.00 | 0.97 | 0.98 | 0.86 |
| Avg. Voltage Violation Nodes (f17 + f18) | 5 | 3 | 12 | 0 | 0 | 11 |
| Sample Count | 1565 | 1559 | 115 | 2708 | 1691 | 1122 |
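Given the Table 4 centroids, matching a new operating snapshot to its working condition reduces to a nearest-centroid lookup. The sketch below uses only a two-feature subset (f1, f8) for illustration; a full implementation would use all clustering features.

```python
import numpy as np

# Centroid rows: (avg. load f1, avg. PV output f8) -- a two-feature subset
# of Table 4 used purely for illustration.
centroids = np.array([
    [0.52, 0.06],  # WC1
    [0.35, 0.07],  # WC2
    [0.31, 0.78],  # WC3
    [0.17, 0.01],  # WC4
    [0.45, 0.48],  # WC5
    [0.70, 0.10],  # WC6
])

def match_working_condition(snapshot: np.ndarray) -> int:
    """Return the 1-based WC index whose centroid is nearest (Euclidean)."""
    return int(np.argmin(np.linalg.norm(centroids - snapshot, axis=1))) + 1

print(match_working_condition(np.array([0.50, 0.45])))  # -> 5 (a WC5-like hour)
```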
Table 5.
Performance Comparison for Fault Type Classification With and Without S-Transform Enhancement.
| Metrics | Without S-Transform | With S-Transform | Improvement |
|---|---|---|---|
| Accuracy (%) | 99.43 | 99.72 | 0.29 |
| Recall (%) | 99.63 | 99.79 | 0.16 |
| F1-Score (%) | 99.53 | 99.74 | 0.21 |
Table 6.
Performance Comparison for Fault Location With and Without S-Transform Enhancement.
| Metrics | Without S-Transform | With S-Transform | Improvement |
|---|---|---|---|
| Accuracy (%) | 93.17 | 99.80 | 6.63 |
| Recall (%) | 92.46 | 99.81 | 7.35 |
| F1-Score (%) | 92.37 | 99.82 | 7.45 |
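The metrics in Tables 5 and 6 follow directly from the predicted and true labels. The sketch below assumes macro-averaged recall and F1 via scikit-learn; the averaging mode is an assumption, as the captions do not state it.

```python
from sklearn.metrics import accuracy_score, recall_score, f1_score

def report(y_true, y_pred):
    """Accuracy / recall / F1 in percent, laid out as in Tables 5 and 6."""
    return {
        "Accuracy (%)": 100 * accuracy_score(y_true, y_pred),
        "Recall (%)":   100 * recall_score(y_true, y_pred, average="macro"),
        "F1-Score (%)": 100 * f1_score(y_true, y_pred, average="macro"),
    }

# Toy usage with illustrative labels:
print(report([0, 1, 2, 2, 1], [0, 1, 2, 1, 1]))
```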