Article

Improved Generative Adversarial Power Data Super-Resolution Perception Model

1 State Grid Shanghai Electric Power Research Institute, Shanghai 200437, China
2 College of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(16), 3222; https://doi.org/10.3390/electronics14163222
Submission received: 23 June 2025 / Revised: 6 August 2025 / Accepted: 8 August 2025 / Published: 14 August 2025

Abstract

Power data collection and transmission often yield low-resolution and incomplete data, and existing power data super-resolution algorithms struggle to recover fine detail. To address these problems, this paper proposes a generative adversarial network super-resolution perception model based on a linear attention mechanism. It uses the adversarial training of a generator and a discriminator to restore high-resolution power data from low-resolution power data. In the generator, a deep residual network structure is innovatively combined with a multi-scale linear attention mechanism, together with a dynamically learnable linear rectifier unit, to improve the model's ability to extract power data features. The discriminator employs a multi-scale architecture embedded with a dual-attention module, integrating both global and local features to enhance the model's ability to capture fine details. Experiments were conducted on a dataset of multiple monitoring points in a city in East China. The results indicate that the proposed Lmla-GAN delivers an overall average SSIM improvement of approximately 6.7% over four baseline models: Bicubic, SRCNN, SubPixelCNN, and VDSR.

1. Introduction

The interactive development of the digital revolution and the energy revolution has led to the widespread application of big data in the power system, primarily in areas such as prediction, evaluation, pattern recognition, model building, and decision analysis. Advances in monitoring and acquisition technology have generated vast amounts of data for the power system. These data contain rich information about the system's status, and mining and analyzing them supports the stable operation of the power system [1]. The high proportion of grid-connected renewable energy has changed the operating mode of the power system, so the collected power data differ from the data generated by the traditional grid. Wind and solar power generation exhibit strong uncertainty and volatility, and their data characteristics are influenced by climate, environment, and season. When analyzing these data, high-frequency power data contain more characteristic information [2,3].
Although power quality monitoring equipment can achieve high resolution at the acquisition stage, the high sampling frequency, the large number of acquisition points, and the resulting data volume make high-frequency transmission and storage impractical for reasons of cost, bandwidth, and management [4]. The steady-state data of the power quality monitoring system are statistics with a granularity of minutes. As renewable energy is connected to the grid, system uncertainty and randomness increase; data at this time scale cannot meet many emerging challenges and cannot reflect the time-varying nature of harmonic sources. When a steady-state harmonic anomaly occurs and the cause of the fault cannot be determined, staff are sent to the fault point to record the fault on-site. However, this is inherently post-hoc processing and cannot reflect the actual state of the system at the moment of the fault. When harmonic power quality problems arise, higher-precision harmonic data are needed as a basis for decision-making. Refining the granularity of harmonic statistics would require increasing the system transmission frequency, expanding transmission links, and updating the power quality management software, all of which entail large cost and management expenditures. Deep learning-based methods can instead directly learn the latent mapping between low-frequency and high-frequency data and reconstruct the feature information lost in the low-frequency data, which facilitates accurate situational awareness and provides analytical and decision support for the safe and stable operation of the power system.
Super-resolution reconstruction technology mainly comprises three classes of methods: interpolation-based, reconstruction-based, and learning-based [5,6,7]; learning-based methods are further divided into shallow learning and deep learning [8,9]. Interpolation-based methods improve resolution by adding pixels to the enlarged image, but they tend to blur image edges and details. The classical reconstruction-based and shallow learning algorithms proposed later effectively enhance the quality of reconstructed images, but they require substantial prior knowledge to guide the reconstruction process, and their frameworks demand rich theoretical support [10,11,12]. With the rapid development of deep learning and its widespread application to SR tasks, reconstruction quality far exceeding that of traditional algorithms can now be achieved with little prior knowledge.
In 2023, Wang et al. [13] proposed a semantic calibration network for cross-layer knowledge distillation (SemCKD), which effectively solves the semantic mismatch problem in cross-layer knowledge distillation by introducing an attention mechanism and allowing each student layer to draw knowledge from multiple teacher layers. In the same year, Fang et al. [14] proposed a cross-knowledge distillation framework that cascades the student network into the teacher network and directly uses the teacher's parameters to supervise the student's training, reducing the difficulty of optimizing the student network. In 2024, Yasir et al. [15] proposed a depthwise channel attention network (DWCAN). DWCAN introduces shallow residual blocks with depthwise separable convolutions, which reduces model parameters and computational load while maintaining performance, and combines a channel attention mechanism to improve the perception and extraction of image features; however, this lightweight design may limit the model's representational capacity and thus its performance in complex scenarios. In applying super-resolution reconstruction to power data, Liu et al. [16] proposed a U-Net model that converts missing load data into images for completion, effectively restoring the missing values of power images; however, the model's generalization may be limited when the data distribution shifts or the data characteristics are not pronounced. Liang et al. [17] proposed a super-resolution perception network that integrates convolutional neural networks (CNNs) with residual networks to enhance the accuracy of power data reconstruction and applied it to state estimation in smart grids to improve data completeness. In 2023, Lu et al. [18] proposed an Enhanced Channel Attention Residual Network (ECARN). Built upon RCAN, ECARN incorporates Discrete Cosine Transform-based global pooling into the channel attention mechanism, enabling the network to capture salient frequency-domain information more accurately. Xue et al. [19] proposed an improved diffusion super-resolution model that combines a diffusion model with a U-Net, significantly improving the super-resolution reconstruction accuracy of power data; however, its training time is long, and the time cost becomes substantial on large datasets.
In summary, this paper addresses the shortcomings and limitations of the existing methods and proposes a more efficient and accurate generative adversarial method for reconstructing power quality data. The contributions of this work are as follows:
(1)
Converting one-dimensional power data into two-dimensional grayscale images enables the generative adversarial network to efficiently learn the trends in power data.
(2)
Combining the deep residual network structure with the multi-scale linear attention mechanism can effectively solve the gradient vanishing problem of the deep network while paying attention to the different scale characteristics of power data to improve the accuracy of reconstruction.
(3)
Incorporating a dynamically learnable linear rectifier unit enables the network to converge quickly at a very small learning rate and to adaptively learn the characteristics of power data, improving the efficiency and quality of super-resolution reconstruction.
The rest of this paper is organized in the following order. Section 2 introduces the initial application of the generative adversarial network in the field of super-resolution. Section 3 proposes a generative adversarial network super-resolution perception model based on the linear attention mechanism. Section 4 experimentally verifies the feasibility of the proposed method. Section 5 is the conclusion of this paper.

2. Generative Adversarial Networks

Ledig et al. [20] first introduced the idea of the generative adversarial network (GAN) into super-resolution reconstruction and proposed the SRGAN network. Under its adversarial training mechanism, the generator and the discriminator compete during training: the generator is continuously optimized to produce high-resolution data closer to the real image, while the discriminator strives to improve its ability to distinguish real images from generated ones. A pre-trained VGG19 network is introduced to compute a content loss on extracted image features, so that the model pays more attention to high-frequency image details. The overall training objective consists of three parts: adversarial loss, perceptual loss, and content loss. The adversarial loss is
$$L_{\mathrm{adv}} = -\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(G(x))\right] \tag{1}$$
where $x \sim p_{\mathrm{data}}$ indicates that $x$ is sampled from the real data distribution $p_{\mathrm{data}}$, $G(x)$ is the high-resolution image generated from the low-resolution image $x$, and $D(G(x))$ is the probability that the discriminator judges the generated image to be real. Through adversarial training between the generator and the discriminator, the generator's output moves closer to the real high-resolution samples. The perceptual loss is
$$L_{\mathrm{perceptual}} = \mathbb{E}_{x,y}\left[\frac{1}{C_j H_j W_j}\,\big\|\phi_j(y) - \phi_j(G(x))\big\|_2^2\right] \tag{2}$$
where $\phi_j(y)$ and $\phi_j(G(x))$ are the feature maps of the $j$-th layer of the pre-trained VGG19 network, and $C_j$, $H_j$, and $W_j$ are the number of channels, height, and width of the feature map. Measuring the difference between the generated and real images in feature space keeps the semantic content of the generated image consistent with the real image. The content loss is
$$L_{\mathrm{MSE}} = \mathbb{E}_{x,y}\left[\big\|y - G(x)\big\|_2^2\right] \tag{3}$$
This loss directly constrains the pixel-level similarity between the generated and real images, but it is given a low weight to keep the generated image from becoming overly smooth. Combining the three losses above gives the total objective of SRGAN:
$$L_{\mathrm{total}} = L_{\mathrm{perceptual}} + \lambda L_{\mathrm{adv}} + \eta L_{\mathrm{MSE}} \tag{4}$$
where λ is the adversarial loss weight, and η is the pixel-level loss weight.
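As an illustration, the following minimal PyTorch sketch assembles this total objective; the weight values, the chosen VGG19 feature layer, and the helper names are illustrative assumptions rather than SRGAN's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class SRGANLoss(nn.Module):
    """Total SRGAN objective of Eq. (4): perceptual + lambda*adversarial + eta*MSE."""

    def __init__(self, lam=1e-3, eta=1e-2, feature_layer=35):
        super().__init__()
        # Frozen VGG19 feature extractor for the perceptual term of Eq. (2);
        # MSELoss with 'mean' reduction performs the 1/(Cj*Hj*Wj) averaging.
        vgg = vgg19(weights="IMAGENET1K_V1").features[:feature_layer].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.lam, self.eta = lam, eta
        self.mse = nn.MSELoss()

    def forward(self, sr, hr, d_sr):
        # sr: generated image G(x); hr: ground-truth image y;
        # d_sr: discriminator probability D(G(x)) in (0, 1).
        l_perc = self.mse(self.vgg(sr), self.vgg(hr))   # perceptual loss, Eq. (2)
        l_adv = -torch.log(d_sr + 1e-8).mean()          # adversarial loss, Eq. (1)
        l_mse = self.mse(sr, hr)                        # content (pixel) loss, Eq. (3)
        return l_perc + self.lam * l_adv + self.eta * l_mse
```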

3. Lmla-GAN Model Architecture

3.1. Overall Model Structure

The Lmla-GAN network consists of two sub-networks: a generator and a discriminator. The generator takes color images converted from power data as input and produces high-resolution color images, while the discriminator compares real high-resolution power-data images with the generated ones. The two networks learn adversarially to produce image samples that are closer to the real power data. Figure 1 shows the generator network structure, and Figure 2 shows the discriminator network structure. In the figures, k, n, and s denote the convolution kernel size, the number of output channels, and the stride, respectively.
The generator network structure combines the deep residual network structure with the multi-scale linear attention mechanism to extract low-frequency and high-frequency information in the power data, avoiding the gradient vanishing problem in the deep network structure. At the same time, it combines the dynamically learnable linear rectifier unit to dynamically learn the power data features and adapt to different types of power quality data. A dual-attention module (DAM) is embedded deep within the discriminator network; by focusing on the features of critical regions, it enables the discriminator to more precisely discern subtle discrepancies between real and fake samples, thereby enhancing its discriminative power.

3.2. Multi-Scale Linear Attention Mechanism

High-resolution reconstruction is now widely applied; however, the reconstruction accuracy of existing generative adversarial super-resolution models remains low. This paper therefore introduces a multi-scale linear attention mechanism into the generative adversarial model. The method replaces the traditional softmax attention with a ReLU linear attention mechanism, which enlarges the model's global receptive field and allows it to capture complex contextual information in high-resolution data without being restricted to local regions. At the same time, contextual information is aggregated through small-kernel convolutions to generate multi-scale information, on which ReLU linear attention is then performed. Combining the global receptive field with multi-scale learning enhances the model's overall understanding and representation of the image and further improves high-resolution dense prediction.
The module structure is shown in Figure 3 below. The input first passes through a linear layer, whose linear transformation produces the three vectors Q, K, and V; these then enter three branches feeding the global attention modules. One branch passes the tokens through unchanged, one applies a 3 × 3 depthwise separable convolution (DWConv) followed by a 1 × 1 grouped convolution (1 × 1 GConv), and one applies a 5 × 5 depthwise separable convolution followed by a 1 × 1 grouped convolution; aggregating nearby tokens in this way yields multi-scale Q/K/V tokens. In each global attention module, the similarity between Q and K is computed to obtain attention weights, which are then used to form a weighted sum of V as the attention output. The outputs of the three global attention modules are fused at the feature level: the attention features at different scales are spliced through a concatenation operation into a comprehensive feature representation. Finally, a linear layer applies a last linear transformation to produce the module's output.
The multi-scale token aggregation process is shown in Figure 4 below. Q, K, and V in each head are aggregated independently, using only small-kernel convolutions, which reduces training time. Here $Q = xW_Q$, $K = xW_K$, and $V = xW_V$, where $W_Q$, $W_K$, and $W_V$ are learnable linear projection matrices, and the self-attention mechanism is
$$O_i = \frac{\sum_{j=1}^{N} \mathrm{Sim}(Q_i, K_j)\, V_j}{\sum_{j=1}^{N} \mathrm{Sim}(Q_i, K_j)} \tag{5}$$
Here, ReLU global attention realizes a global receptive field with linear computational cost, with the similarity function defined as
$$\mathrm{Sim}(Q, K) = \mathrm{ReLU}(Q)\,\mathrm{ReLU}(K)^{T} \tag{6}$$
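Because the similarity in Equation (6) factorizes, the weighted sum in Equation (5) can be reordered by associativity and computed in linear rather than quadratic time in the token count. The sketch below is a minimal PyTorch illustration of this reordering under an assumed (batch, heads, tokens, dim) tensor layout; it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def relu_linear_attention(q, k, v, eps=1e-6):
    """ReLU linear attention of Eqs. (5)-(6) for tensors shaped
    (batch, heads, tokens, dim). Since Sim(Q, K) = ReLU(Q) ReLU(K)^T,
    ReLU(K)^T V is computed once, giving O(N) cost in the token count N
    instead of the O(N^2) of softmax attention."""
    q, k = F.relu(q), F.relu(k)
    kv = torch.einsum("bhnd,bhne->bhde", k, v)        # ReLU(K)^T V
    num = torch.einsum("bhnd,bhde->bhne", q, kv)      # numerator of Eq. (5)
    den = torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2))
    return num / (den.unsqueeze(-1) + eps)            # normalized output O
```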

3.3. Dynamically Learnable Linear Rectifier Unit

To enable the network to learn more features during training, the Enhanced Local Self-Attention (ELSA) mechanism [21] is combined with the ReLU function to form a dynamically learnable linear rectifier unit, ensuring that the network converges quickly at a very small learning rate. ELSA is a fine-grained attention mechanism, given by
$$s_i = \phi(x_i, \theta) = \begin{cases} \sigma(\beta)\, x_i, & x_i \ge 0 \\ C(\alpha)\, x_i, & x_i < 0 \end{cases} \tag{7}$$
where $x_i$ is the input element, $\theta = \{\alpha, \beta\}$ are the learnable parameters, $C(\cdot)$ clamps its argument to the range [0.01, 0.99], and $\sigma$ is the sigmoid activation function.
The ReLU activation function formula is as follows:
$$R(x_i) = \begin{cases} x_i, & x_i \ge 0 \\ 0, & x_i < 0 \end{cases} \tag{8}$$
Combining Equation (7) with Equation (8) yields the learnable activation function AReLU:
$$F(x_i, \alpha, \beta) = s_i + R(x_i) = \begin{cases} (1 + \sigma(\beta))\, x_i, & x_i \ge 0 \\ C(\alpha)\, x_i, & x_i < 0 \end{cases} \tag{9}$$
Differentiating Equation (9) yields Equation (10). The AReLU function scales gradients asymmetrically: it amplifies the gradient for positive inputs, the region traditionally considered "activated", while attenuating the gradient for negative inputs, the region usually regarded as "inactivated". This directional gradient modulation allows the network to preserve salient information while suppressing redundant parameter fluctuations. Consequently, even at a low learning rate, the network converges rapidly and stably, accelerating the overall training process.
$$\frac{\partial F(x_i, \alpha, \beta)}{\partial x_i} = \begin{cases} 1 + \sigma(\beta), & x_i \ge 0 \\ C(\alpha), & x_i < 0 \end{cases} \tag{10}$$
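A compact PyTorch sketch of this activation, following Equation (9), is given below; the initial values of alpha and beta are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AReLU(nn.Module):
    """Dynamically learnable rectifier of Eq. (9): positive inputs are scaled
    by 1 + sigmoid(beta), negative inputs by C(alpha) = clamp(alpha, 0.01, 0.99)."""

    def __init__(self, alpha=0.9, beta=2.0):  # initial values are assumptions
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha))
        self.beta = nn.Parameter(torch.tensor(beta))

    def forward(self, x):
        pos = (1.0 + torch.sigmoid(self.beta)) * torch.relu(x)              # x_i >= 0
        neg = torch.clamp(self.alpha, 0.01, 0.99) * torch.clamp(x, max=0.0)  # x_i < 0
        return pos + neg
```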

3.4. Dual Attention Module

In the discriminator, we propose a lightweight dual-attention module that combines Self-Attention (SA) [22] and Channel-Enhanced Attention (CEA). This design reduces memory footprint and computational complexity while effectively enlarging the receptive field and establishing long-range dependencies to capture both channel-wise and spatial features of power-equipment data.
As illustrated in Figure 5, self-attention first enables each spatial location to interact directly with global information, modeling long-distance dependencies. Subsequently, channel-enhanced attention employs a 1 × 1 convolution to expand and then compress the feature channels, extracting rich texture details and simultaneously reducing the parameter count. Average and max pooling are further applied to aggregate spatial statistics, capturing high-frequency information for a more refined channel-wise attention map. The outputs of both branches are concatenated and fused by a 1 × 1 convolution, yielding an enlarged receptive field without a notable increase in computation while preserving spatial details and channel correlations for effective discrimination of power-equipment features.
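The sketch below illustrates one plausible PyTorch realization of such a dual-attention module under the description above; the head count, expansion ratio, and pooling choices are assumptions rather than the exact design of Figure 5.

```python
import torch
import torch.nn as nn

class DualAttentionModule(nn.Module):
    """Sketch of a dual-attention module (DAM): a self-attention branch for
    long-range spatial dependencies plus a channel-enhanced attention branch.
    `channels` must be divisible by `heads`."""

    def __init__(self, channels, heads=4, expand=2):
        super().__init__()
        self.sa = nn.MultiheadAttention(channels, heads, batch_first=True)
        # 1x1 expand-then-compress convolutions of the channel-enhanced branch.
        self.cea = nn.Sequential(
            nn.Conv2d(channels, channels * expand, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels * expand, channels, 1),
        )
        # Channel-attention map from pooled spatial statistics.
        self.gate = nn.Sequential(nn.Linear(2 * channels, channels), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * channels, channels, 1)  # 1x1 fusion of branches

    def forward(self, x):
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)                 # (B, HW, C) tokens
        sa, _ = self.sa(seq, seq, seq)                     # global self-attention
        sa = sa.transpose(1, 2).reshape(b, c, h, w)
        feat = self.cea(x)                                 # texture-rich features
        stats = torch.cat([feat.mean(dim=(2, 3)), feat.amax(dim=(2, 3))], dim=1)
        ca = feat * self.gate(stats).view(b, c, 1, 1)      # channel reweighting
        return self.fuse(torch.cat([sa, ca], dim=1))       # concatenate and fuse
```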

4. Experimental Verification

4.1. Experimental Preparation

This experiment uses 7000 harmonic voltage data points, 7000 harmonic distortion rate data points, and 2215 negative sequence imbalance data points from multiple substations in a city in East China, divided into a training set and a test set at a ratio of 8:2. Training uses paired high-resolution and low-resolution data. The high-resolution data are collected by power quality detection equipment at three-minute intervals; each day forms one group, giving 480 data points per day. The low-resolution data are obtained through the AMI system at 15 min intervals, giving 96 data points per day. The upsampling rate K for super-resolution reconstruction is therefore 5, and the batch size is 16 groups. The Adam optimizer is configured according to the parameter counts of the generator and discriminator, with an initial learning rate of 0.001 and a total of 200 training epochs. The model is implemented in PyTorch 2.1.1 and executed on an NVIDIA GeForce RTX 3050 GPU; the host CPU runs at 3.20 GHz with 32 GB of RAM.
Li et al. [23] introduced the concept of "power-data imaging." Exploiting the periodicity inherent in raw power time-series signals, the measured data are reorganized as follows. For the $i$-th measurement type ($i = 1, 2, \ldots, m$), let $L_i = [I_{i1}, I_{i2}, \ldots, I_{in}, \ldots, I_{in^2}]$ be a sequence of length $n^2$. Each sequence is first partitioned into non-overlapping segments of length $n$ and reshaped into an $n \times n$ matrix, yielding a single-channel image for that measurement type. The $m$ single-channel images obtained from the $m$ measurement types are then concatenated along the depth axis, normalized via min-max scaling, and finally converted into a two-dimensional grayscale array. This produces a unified power image that integrates multi-source electrical information. Because the min-max normalization parameters are stored, the original 1-D sequence can be reconstructed without loss. The complete construction pipeline is illustrated in Figure 6.
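A minimal NumPy sketch of this imaging step and its lossless inverse is shown below; the function names and the generic (rows, cols) shape argument are illustrative (the experiments in this section use, e.g., a 20 × 24 layout for a 480-point day).

```python
import numpy as np

def series_to_image(seq, shape):
    """Min-max normalize a 1-D power series and reshape it into a 2-D
    grayscale array (e.g., shape=(n, n) as described above, or (20, 24)
    for a 480-point day). Returns the scaling parameters so the original
    sequence can be recovered without loss."""
    seq = np.asarray(seq, dtype=np.float64)
    lo, hi = seq.min(), seq.max()
    img = (seq - lo) / (hi - lo + 1e-12)   # scale into [0, 1]
    return img.reshape(shape), (lo, hi)

def image_to_series(img, lo, hi):
    """Inverse transform: undo the min-max scaling and flatten back to 1-D."""
    return img.reshape(-1) * (hi - lo + 1e-12) + lo
```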
The experimental results use Root Mean Squared Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM) as evaluation indicators of data super-resolution reconstruction quality.
RMSE is the root mean square of the difference between corresponding sample points in power quality data, and the formula is:
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i^{\mathrm{HR}} - x_i^{\mathrm{SR}}\right)^2} \tag{11}$$
where $N$ is the length of a data group, and $x_i^{\mathrm{HR}}$ and $x_i^{\mathrm{SR}}$ are the values of the original high-resolution signal and the reconstructed super-resolution signal, respectively.
PSNR expresses, in decibels, the ratio of the signal peak to the noise (RMSE):
$$\mathrm{PSNR} = 10 \log_{10}\left(\frac{MAX^2}{\mathrm{RMSE}^2}\right) \tag{12}$$
where MAX is the maximum possible value of the data.
SSIM takes into account the local structural characteristics of the signal and is closer to human perception of signal morphology than MSE/PSNR. The closer its value is to 1, the better the reconstruction effect.
$$\mathrm{SSIM} = \frac{\sigma\!\left(x_i^{\mathrm{HR}}, x_i^{\mathrm{SR}}\right) + C_1}{\sqrt{\sigma\!\left(x_i^{\mathrm{HR}}, x_i^{\mathrm{HR}}\right)\sigma\!\left(x_i^{\mathrm{SR}}, x_i^{\mathrm{SR}}\right)} + C_1} \tag{13}$$
where $\sigma(\cdot,\cdot)$ denotes the covariance, and $C_1$ is a small constant that keeps the denominator from being zero.
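For reference, a short NumPy sketch of these three indicators as written in Equations (11)-(13) follows; the stabilizing constant is an assumed value, and common library SSIM implementations (which use windowed means and variances) differ from this covariance form.

```python
import numpy as np

def rmse(hr, sr):
    """Eq. (11): root mean squared error between HR and reconstructed SR signals."""
    return np.sqrt(np.mean((np.asarray(hr) - np.asarray(sr)) ** 2))

def psnr(hr, sr, max_val=1.0):
    """Eq. (12): peak signal-to-noise ratio in dB; max_val is the data peak
    (1.0 for min-max-normalized power images)."""
    return 10.0 * np.log10(max_val ** 2 / rmse(hr, sr) ** 2)

def ssim_cov(hr, sr, c1=1e-4):
    """Covariance form of Eq. (13); c1 is an assumed stabilizing constant."""
    cov = np.cov(np.ravel(hr), np.ravel(sr))  # 2x2 covariance matrix
    return (cov[0, 1] + c1) / (np.sqrt(cov[0, 0] * cov[1, 1]) + c1)
```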

4.2. Comparative Experiment

To verify the effectiveness of the improved generative adversarial power data super-resolution model proposed in this paper, it is compared with four other algorithms: bicubic interpolation [24], SRCNN [25], SubPixelCNN [26], and VDSR [27]. Bicubic interpolation fits the distribution of image pixels with a cubic polynomial function and maintains image smoothness well under enlargement; SRCNN, an early deep learning-based super-resolution method, learns deeper image features; SubPixelCNN converts high-dimensional feature maps into feature maps of higher spatial resolution, effectively avoiding the blurring that traditional interpolation can introduce during enlargement; VDSR is a very deep convolutional neural network that learns more complex feature patterns and restores image details more accurately. Comparative experiments against these methods therefore comprehensively demonstrate the advantages of the improved generative adversarial super-resolution network proposed here. To visualize the reconstruction effect, the 480 data points are normalized, converted into a 20 × 24 grayscale array, and rendered as a color image using the Jet color map, as shown in Table 1 below. The HR entries are the real high-resolution images, and the remaining entries are the reconstruction results of each model. Figure 7 compares the harmonic voltage super-resolution reconstructions.
As shown in Table 1 above, the Lmla-GAN model presented in this paper exhibits the highest similarity to the real high-resolution image (HR) in terms of visual effect compared to other classic super-resolution reconstruction models and effectively restores low-resolution images to high-resolution images. As can be seen from Figure 7, the model presented in this paper excels in the super-resolution reconstruction of harmonic voltage data, particularly in terms of detail restoration.
Table 2 presents the mean results of super-resolution reconstruction indicators for the test sets of three types of datasets.
In various types of datasets, the Lmla-GAN model proposed in this paper outperforms comparison models, including bicubic interpolation, SRCNN, SubPixelCNN, and VDSR, in terms of PSNR, RMSE, and SSIM. In the harmonic voltage dataset, the PSNR of Lmla-GAN is improved by 8.43% compared to the best comparison model (VDSR), the RMSE is reduced by 50.00%, and the SSIM is improved by 1.89%. In the harmonic distortion rate scenario, PSNR is improved by 5.50%, RMSE is reduced by 36.78%, and SSIM is improved by 1.68%; in the negative sequence imbalance scenario, PSNR is improved by 3.45%, RMSE is reduced by 22.94%, and SSIM is improved by 2.25%. Experimental results show that Lmla-GAN exhibits stronger effectiveness and robustness in image reconstruction quality.
To assess the complexity of the Lmla-GAN model, the experiment employs three evaluation metrics: total memory consumption, training time, and testing time. The model is trained on the harmonic voltage dataset for 100 epochs, and testing time is measured on 10 randomly sampled sequences from the test set. The corresponding results are presented in Table 3.
As shown in the table, Lmla-GAN achieves higher accuracy while requiring training times comparable to those of the baseline models; its testing time also differs only marginally from the others. Although it occupies more memory, the overhead remains within an acceptable range.

4.3. Ablation Experiment

To explore the impact of the multi-scale linear attention mechanism and the dynamically learnable linear rectifier unit on the super-resolution reconstruction efficiency of the model, this paper performs a unified ablation study on the harmonic voltage dataset by controlling variables and testing the model’s reconstruction results after removing different modules. The results are shown in Table 4 below.
Ablation studies demonstrate that the multi-scale linear module, AReLU, and dual-attention module (DAM) are all crucial to model performance. Removing the multi-scale linear module degrades PSNR and SSIM by 3.10% and 1.12%, respectively, while increasing RMSE by 45.79%. Eliminating AReLU further reduces PSNR and SSIM by 0.70% and 0.57% and raises RMSE by 33.64%. Likewise, dropping DAM lowers PSNR and SSIM by 2.19% and 0.76% and elevates RMSE by 38.79%. These results confirm the effectiveness of each component in enhancing super-resolution quality. This subsection verifies the necessity of the multi-scale linear attention, AReLU, and dual-attention module through controlled experiments; the next section will synthesize these findings to systematically summarize the proposed method and discuss its broader implications for real-world power-grid applications.

5. Conclusions

To address the low reconstruction accuracy of existing power data super-resolution perception models, this paper proposes a generative adversarial network-based super-resolution perception model built on a linear attention mechanism. The model is applied to datasets from multiple detection points in a city in East China to verify the subjective and objective quality of its reconstructed images. In particular, we discuss the model architecture, the loss function used for training, and the evaluation indicators applied to the model, including PSNR, RMSE, and SSIM. The experimental results demonstrate that, compared with the best-performing baseline (VDSR), Lmla-GAN achieves an average PSNR increase of 5.9%, an RMSE reduction of 33.3%, and an SSIM improvement of 1.9% across the three datasets. Ablation experiments show that removing the multi-scale linear attention, AReLU, or DAM degrades the average metrics by 0.83 dB in PSNR, increases RMSE by 39.4%, and drops SSIM by 0.83%. Leveraging the reconstructed high-precision power quality data, the model can be embedded into three key scenarios: real-time substation monitoring, harmonic-source tracing in distribution networks, and state estimation across bulk power systems, thereby markedly enhancing the grid's perception of transient disturbances, its localization of harmonic pollution, and its fine-grained control of overall operating status. However, when training data are scarce or there are large geographical discrepancies between source and target domains, super-resolution performance may degrade, making domain-adaptive transfer learning strategies necessary to bridge the gap.

Author Contributions

Conceptualization, P.Z. and L.P.; methodology, L.P.; software, C.X.; formal analysis, W.W.; investigation, H.W.; data curation, P.Z.; writing—original draft preparation, P.Z.; writing—review and editing, P.Z.; project administration, P.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Science and Technology Project of State Grid Corporation of China (No. 52094024000L).

Data Availability Statement

Restrictions apply to the availability of these data. Data were obtained from State Grid Shanghai Electric Power Research Institute and are available from the authors with the permission of State Grid Shanghai Electric Power Research Institute.

Acknowledgments

The authors greatly appreciate the reviews, valuable suggestions, and editors’ encouragement.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, M.C.; Tsai, C.F.; Lin, W.C. Towards Missing Electric Power Data Imputation for Energy Management Systems. Expert Syst. Appl. 2021, 174, 114743. [Google Scholar] [CrossRef]
  2. Khan, Z.A.; Ullah, A.; Haq, I.U.; Hamdy, M.; Mauro, G.M.; Muhammad, K.; Baik, S.W. Efficient short-term electricity load forecasting for effective energy management. Sustain. Energy Technol. Assess. 2022, 53, 102337. [Google Scholar] [CrossRef]
  3. Yang, X.; Liu, X.; Li, Z.; Xiao, G.; Wang, P. Resilience-oriented proactive operation strategy of coupled transportation power systems under exogenous and endogenous uncertainties. Reliab. Eng. Syst. Saf. 2025, 262, 111161. [Google Scholar] [CrossRef]
  4. Song, L.; Li, Y.; Lu, N. ProfileSR-GAN: A GAN based super-resolution method for generating high-resolution load profiles. IEEE Trans. Smart Grid 2022, 13, 3278–3289. [Google Scholar] [CrossRef]
  5. Du, W.; Tian, S. Transformer and GAN-based super-resolution reconstruction network for medical images. J. Tsinghua Univ. (Sci. Technol.) 2023, 29, 197–206. [Google Scholar] [CrossRef]
  6. Wang, P.; Sertel, E. Multi-frame super-resolution of remote sensing images using attention-based GAN models. Knowl.-Based Syst. 2023, 266, 110387. [Google Scholar] [CrossRef]
  7. Zhu, J.; Lin, Z.; Wen, B. TM-GAN: A Transformer-Based Multi-Modal Generative Adversarial Network for Guided Depth Image Super-Resolution. IEEE J. Em. Sel. Top. C 2024, 14, 3394495. [Google Scholar] [CrossRef]
  8. Wang, Y.; Lian, H.B.; Wang, Y.C. Thermal imaging super-resolution reconstruction of power equipment based on improved edge attention generation countermeasure network. Power Syst. Prot. Control 2024, 52, 119–127. (In Chinese) [Google Scholar]
  9. Li, J.X.; Zhao, Y.X.; Wang, J.H. Overview of single image super-resolution reconstruction algorithm based on deep learning. Acta Autom. Sin. 2021, 47, 2341–2363. [Google Scholar]
  10. Tan, R.; Yuan, Y.; Huang, R.; Luo, J. Video super-resolution with spatial-temporal transformer encoder. In Proceedings of the 2022 IEEE International Conference on Multimedia and Expo (ICME), Taipei, China, 18–22 July 2022; pp. 1–6. [Google Scholar]
  11. Han, J.; Wang, C. SSR-TVD: Spatial super-resolution for time-varying data analysis and visualization. IEEE Trans. Vis. Comput. Graph. 2020, 28, 2445–2456. [Google Scholar] [CrossRef]
  12. Li, H.; Yang, Y.; Chang, M. Srdiff: Single image super-resolution with diffusion probabilistic models. Neurocomputing 2022, 479, 47–59. [Google Scholar] [CrossRef]
  13. Wang, C.; Chen, D.; Mei, J.P. SemCKD: Semantic calibration for cross-layer knowledge distillation. IEEE Trans. Knowl. Data Eng. 2023, 35, 6305–6319. [Google Scholar] [CrossRef]
  14. Fang, H.X.; Long, Y.W.; Hu, X.Y.; Ou, Y.T.; Huang, Y.J.; Hu, H.J. Dual cross knowledge distillation for image super-resolution. J. Visual Commun. Image Represent. 2023, 95, 103858. [Google Scholar] [CrossRef]
  15. Yasir, M.; Ullah, I.; Choi, C. Depthwise channel attention network (DWCAN): An efficient and lightweight model for single image super-resolution and metaverse gaming. Expert Syst. 2024, 41, e13516. [Google Scholar] [CrossRef]
  16. Liu, L.; Liu, Y. Load image inpainting: An improved U-Net based load missing data recovery method. Appl. Energ. 2022, 327, 119988–120001. [Google Scholar] [CrossRef]
  17. Liang, G.; Liu, G.; Zhao, J.; Liu, Y.; Gu, J.; Sun, G.; Dong, Z. Super resolution perception for improving data completeness in smart grid state estimation. Engineering 2020, 6, 789–800. [Google Scholar] [CrossRef]
  18. Lu, J.; Luo, X. Image super-resolution with enhanced channel attention residual network. In Proceedings of the 2023 8th International Conference on Intelligent Computing and Signal Processing, Xi’an, China, 21–23 April 2023; pp. 16501–16504. [Google Scholar]
  19. Xue, T.D.; Wang, H.; Qi, L.H.; Yan, J.Y.; Jiang, M.J.; Tao, S. Super resolution reconstruction technology of power data based on improved diffusion model. Autom. Electr. Power Syst. 2025, 49, 214–223. [Google Scholar]
  20. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21 July 2017; pp. 4681–4690. [Google Scholar]
  21. Zhou, J.; Wang, P.; Wang, F.; Liu, Q.; Li, H.; Jin, R. Elsa: Enhanced local self-attention for vision transformer. arXiv 2021, arXiv:2112.12786. [Google Scholar]
  22. Rahman, A.U.; Alsenani, Y.; Zafar, A.; Ullah, K.; Rabie, K.; Shongwe, T. Enhancing heart disease prediction using a self-attention-based transformer model. Sci. Rep. 2024, 14, 514. [Google Scholar] [CrossRef]
  23. Li, F.S.; Lin, D.; Yu, T. Improved generative adversarial network-based super resolution reconstruction for low-frequency measurement of smart grid. IEEE Access 2020, 8, 85257–85270. [Google Scholar] [CrossRef]
  24. Zhang, W.; Jia, S.; Zhu, H. Single pixel imaging based on bicubic interpolation walsh transform matrix. IEEE Access 2024, 12, 138575–138581. [Google Scholar] [CrossRef]
  25. Lv, Y.; Ma, H. Improved SRCNN for super-resolution reconstruction of retinal images. In Proceedings of the 2021 6th International Conference on Intelligent Computing and Signal Processing (ICSP), Xi’an, China, 9–11 April 2021; pp. 595–598. [Google Scholar]
  26. Xu, M.; Du, X.; Wang, D. Super-resolution restoration of single vehicle image based on ESPCN-VISR model. IOP Conf. Ser. Mater. Sci. Eng. 2020, 790, 012107. [Google Scholar] [CrossRef]
  27. He, L.; Zheng, X.; Ying, Z.; Sun, Y. Super-Resolution Algorithm for Sonar Images Based on VDSR Technology. In Proceedings of the 2025 5th International Conference on Neural Networks, Information and Communication Engineering (NNICE), Guangzhou, China, 10–12 January 2025; pp. 388–391. [Google Scholar]
Figure 1. Generator network structure.
Figure 2. Discriminator network structure.
Figure 3. Multi-scale linear attention.
Figure 4. Multi-scale token aggregation process.
Figure 5. Dual attention module.
Figure 6. Power-data image construction method.
Figure 7. Harmonic voltage line comparison chart.
Table 1. Color map of model reconstruction effect. [Image grid: rows Bicubic, SRCNN, SubPixelCNN, VDSR, Lmla-GAN, and HR; columns Harmonic Voltage, Harmonic Distortion, and Negative Sequence Imbalance.]
Table 2. Super-resolution reconstruction indicators.

              Harmonic Voltage           Harmonic Distortion Rate    Negative Sequence Unbalance
              PSNR     RMSE     SSIM     PSNR     RMSE     SSIM      PSNR     RMSE    SSIM
Bicubic       36.8419  0.0544   0.9249   35.1957  0.07785  0.9054    33.8954  0.1081  0.8827
SRCNN         38.1935  0.0430   0.9669   36.8461  0.0550   0.9403    34.5873  0.0959  0.9243
SubPixelCNN   38.0784  0.0438   0.9653   37.2706  0.0508   0.9648    34.5463  0.0952  0.9154
VDSR          38.2225  0.0428   0.9683   37.3162  0.0503   0.9662    35.1868  0.0863  0.9349
Lmla-GAN      41.4432  0.0214   0.9866   39.3689  0.0318   0.9824    36.4000  0.0665  0.9559
Table 3. Model complexity comparison.

              Total Memory   Training Duration/h   Testing Duration/ms
Bicubic       10 MB          /                     419.5
SRCNN         450 MB         1.3                   478.3
SubPixelCNN   753 MB         1                     517
VDSR          1.3 GB         1.4                   715
Lmla-GAN      1.7 GB         1.5                   613
Table 4. Ablation results.

                        PSNR     RMSE    SSIM
Lmla-GAN                41.4432  0.0214  0.9866
-Multiscale linearity   40.1564  0.0312  0.9756
-AReLU                  41.1524  0.0286  0.9810
-DAM                    40.5375  0.0297  0.9791
