Article

A Frequency Identification Method for Differential Frequency-Hopping Signals Based on the Super-Resolution Reconstruction of Time–Frequency Images

1 School of Information Science and Engineering, Shenyang Ligong University, Shenyang 100159, China
2 Beijing Incom Information Technology Corporation, Beijing 100080, China
* Author to whom correspondence should be addressed.
Electronics 2026, 15(10), 2070; https://doi.org/10.3390/electronics15102070
Submission received: 13 April 2026 / Revised: 7 May 2026 / Accepted: 8 May 2026 / Published: 12 May 2026
(This article belongs to the Special Issue AI-Driven Signal Processing in Communications)

Abstract

The frequency identification technology of differential frequency-hopping (DFH) signals is the key to decoding at the receiver. Aiming to improve frequency identification accuracy under low signal-to-noise ratio (SNR) conditions, a method based on super-resolution image reconstruction technology is proposed for the instantaneous frequency identification of DFH signals. Firstly, the time–frequency image of the DFH signal is obtained using short-time Fourier transform (STFT). Then, a U-Net neural network with an attention mechanism is designed to suppress noise and interference components in the time–frequency image and reconstruct a super-resolution time–frequency image. Furthermore, based on the correlation between adjacent hop signals in accordance with the frequency transfer function, a ResNet neural network is designed to identify frequencies from the super-resolution time–frequency image of DFH signals. Simulation results demonstrate that the designed U-Net neural network can effectively suppress noise and interference components and reconstruct high-quality super-resolution time–frequency images. Comparative experimental results show that the proposed ResNet neural network can significantly improve the identification accuracy of DFH signals under low-SNR conditions. Specifically, the identification accuracy can reach more than 90% when the SNR is not lower than −10 dB, which is a significant improvement compared with other methods. Ablation experiment results indicate that the attention mechanism can improve model performance by 3.74%.

1. Introduction

Differential frequency hopping is a unique spread-spectrum communication technique that has faster carrier frequency switching and better anti-jamming ability. Therefore, DFH is widely used in the field of anti-jamming communication [1]. In a DFH communication system, the current hop frequency is determined by a frequency transfer function called the G function that relies only on the current data symbol and previous hop frequency. The DFH signal has better anti-interception ability because the current hop frequency is controlled by the transmitted data. Meanwhile, it also increases the difficulty of the DFH signal-receiving process. In the receiver, it is necessary to detect the total hopping bandwidth. The current hop frequency is difficult to detect effectively under low-SNR conditions, especially in the presence of fixed-frequency interference signals, broadband interference signals, chirped interference signals, etc. [2].
In recent years, to improve signal-detection performance under low-SNR conditions, algorithms based on deep learning have been widely studied. In [3], a noise learning method based on a multilevel residual convolutional autoencoder network is proposed, which consists of a deep convolutional autoencoder network and multilevel residuals to eliminate noise interference as much as possible. The multilevel residual structure is designed to address gradient vanishing and improve training speed. By replacing traditional clean signal learning with noise learning, the noise component is obviously suppressed. In [4], a multiscale frequency-division denoising network model is proposed to suppress noise components under complex working environments and severe noise interference. By designing a multiscale convolutional neural network and stacking convolutional pooling layers, abundant global features are learned. By constructing frequency-division denoising layers and establishing a sub-network based on an attention mechanism, the noise component is effectively reduced. To improve performance during target-state feature extraction, a deep learning method for signal denoising is proposed in [5]. By designing an autoencoder network to remove unstructured noise components, the original signal is recovered. These methods share a common approach: the sampled time-domain signal is input into neural networks to extract signal features. However, the sampled time-domain signal fails to encompass all signal features, which may mean the neural network is unable to learn the correct signal features under low-SNR conditions and thus fail to work properly.
In [6], a framework based on a deep convolutional autoencoder is designed to reduce noise in mine microseismic signals. Using the short-time Fourier transform, the normalized signal is converted into the time–frequency domain. Then, a denoising network is used to separate signal components from noise components. Because the characteristics of microseismic signals are similar to those of DFH signals, it is appropriate to convert the time-domain sampled DFH signal into the time–frequency domain.
In recent years, with the development of artificial intelligence technology, deep learning algorithms can be used not only for denoising tasks but also for frequency identification tasks. In [7], a novel frequency-estimation algorithm based on deep learning is presented. The instantaneous frequency and linear frequency-hopping slope of the LFM signal are estimated using a convolutional neural network. A feature fusion algorithm based on a deep residual network is proposed for radio frequency recognition in [8]. In [9], a multi-head ProbSparse self-attention network based on a convolutional neural network and transformer encoder module is designed for receiving frequency-hopping signals. Using this network, the performance of the receiver under unknown frequency-hopping sequence conditions is the same as that under known frequency-hopping sequence conditions. Currently, proposing deep learning algorithms for signal recognition has become a new research hotspot.
In this paper, to improve the frequency identification accuracy of differential frequency-hopping signals under low-SNR conditions, we propose an instantaneous frequency identification method based on super-resolution image reconstruction technology. Using the STFT and a purpose-built CU-Net network, a super-resolution time–frequency image is reconstructed and the noise component is reduced. Furthermore, the frequency-hopping sequence is recognized using the constructed ResNet network. The recognized frequency-hopping sequence can then be used for decoding.
In summary, the contributions of this paper are as follows:
  • We propose a CU-Net network for reconstructing super-resolution time–frequency images, which can reduce noise components.
  • We design a ResNet structure for recognizing frequency-hopping sequences.
  • Using various datasets, we demonstrate that frequency-hopping sequences can be effectively identified under low-SNR conditions.
The rest of the paper is organized as follows. Section 2 introduces the model of the DFH communication system, the principles of the G function, and the difficulties in detecting signals under low-SNR conditions. Section 3 proposes the frequency identification method for DFH signals based on super-resolution reconstruction. Section 4 presents the results of experimental evaluation. Finally, Section 5 concludes the paper.

2. Models of the DFH Communication System

In this section, we provide an overview of the differential frequency-hopping communication system. Specifically, Section 2.1 elaborates on the theoretical model of DFH communication; Section 2.2 delves into the working principles of the G function; and Section 2.3 further analyzes the key challenges encountered in frequency identification under low-SNR conditions.

2.1. Theory Models of the DFH Communication System

The block diagram of the DFH communication system is illustrated in Figure 1. For the k-th hop, the transmitted data is first mapped into symbol Dk. Subsequently, the instantaneous frequency fk of the k-th hop signal is calculated via the G function, which can be expressed as Equation (1):
fk = G(fk−1, Dk)
where fk−1 denotes the instantaneous frequency of the (k − 1)-th hop.
Furthermore, the k-th hop DFH signal S(t) is generated by a frequency synthesizer and transmitted through the communication channel, which is mathematically expressed as Equation (2):
S(t) = A · cos(2πfkt + φk), t ∈ [kTh, (k + 1)Th]
where A is the signal amplitude, φk is the initial phase of the k-th hop signal, and Th represents the hop duration.
At the receiver, the received DFH signal contaminated by noise is processed using the Fast Fourier Transform (FFT). By detecting the FFT results of the received signal, the instantaneous frequency of each hop is identified, and the transmitted symbols are then decoded through the inverse G function (G−1), which is given by Equation (3):
Dk = G−1(fk−1, fk)
Finally, the original transmitted data is retrieved by mapping the decoded symbols back to the source bits.
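To make the transmit/receive chain of Equations (1)–(3) concrete, the following Python sketch synthesizes a short DFH signal and decodes it with the inverse mapping. The modular `g_function`, its coefficients, and the uniform frequency grid are purely illustrative assumptions (a real G function is system-specific and secret); only the structure of the equations and the Section 4 parameters (100 MHz sampling, 10 μs hops, 10.2–22.8 MHz, 64 frequencies) are taken from the text.

```python
import numpy as np

# Hypothetical G function: the next frequency index depends only on the
# previous index and the current 2-bit symbol (BPH = 2, fan-out N_b = 2**2 = 4).
# This modular rule is an illustrative stand-in, not the paper's G function.
N_FREQ = 64                                   # number of hop frequencies

def g_function(prev_idx, symbol):
    return (5 * prev_idx + symbol + 1) % N_FREQ

def g_inverse(prev_idx, curr_idx):
    # Recover the symbol from two consecutive frequency indices (Eq. (3)).
    return (curr_idx - 5 * prev_idx - 1) % 4

fs = 100e6                                    # sampling rate, 100 MHz
Th = 10e-6                                    # hop duration, 10 us
freqs = np.linspace(10.2e6, 22.8e6, N_FREQ)   # hop frequency set

def dfh_signal(symbols, f0_idx=0, A=1.0):
    """Synthesize S(t) = A*cos(2*pi*f_k*t + phi_k) hop by hop (Eq. (2))."""
    t = np.arange(int(fs * Th)) / fs
    idx, hops = f0_idx, []
    for Dk in symbols:
        idx = g_function(idx, Dk)
        phik = np.random.uniform(0, 2 * np.pi)
        hops.append(A * np.cos(2 * np.pi * freqs[idx] * t + phik))
    return np.concatenate(hops)

symbols = [2, 0, 3, 1, 2]                     # five 2-bit symbols -> five hops
s = dfh_signal(symbols)
```

Running `g_inverse` over consecutive frequency indices recovers the transmitted symbol sequence, mirroring the receiver-side decoding step described above.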

2.2. The Principles of G Function

In the DFH communication system, frequency-hopping sequences are generated by the G function, which is closely associated with the transmitted data [10]. Specifically, the transmitted data is embedded in the frequency variation between adjacent hops of the signal. Taking the scenario where two bits of data are transmitted per hop as an example, the frequency variation between adjacent hops is depicted in Figure 2.
In Figure 2, the serial number of the frequency-hopping point is represented by node fN. Each node has multiple branches, which can be expressed as Equation (4):
Nb = 2^BPH
where Nb denotes the fan-out factor, and BPH is the number of bits transmitted per hop.
Theoretical analysis indicates that the instantaneous frequency of the DFH signal varies with time, meaning that the DFH signal is non-stationary. Figure 3 illustrates the time-domain waveform, frequency-domain spectrum, and time–frequency distribution of the five-hop DFH signal. The sampling frequency is 100 MHz, the hopping rate is 100,000 hops per second, and the frequency range of the hopping signal is from 10.2 MHz to 22.8 MHz.
As shown in Figure 3a, the signal period changes over time, but the frequency of each hop cannot be clearly distinguished. In Figure 3b, five hopping frequencies are observed within the frequency window, yet their temporal order remains unclear. In contrast, Figure 3c clearly illustrates the time-varying characteristics of the frequency and the sequential order of these variations.
The DFH signal transmits information through frequency variations, which implies a fixed G-function mapping relationship between adjacent hops. The time–frequency image within a five-hops window exhibits multiple signals with different frequencies arranged sequentially in the time domain, and the transition from one frequency to the next adheres to the mapping rule of the G function. Figure 3c not only demonstrates the temporal evolution of the signal frequency but also clarifies the signal decoding process, making it intuitive and easy to understand.

2.3. Analysis of the Limitation of Frequency Recognition of DFH Signals in Low-SNR Conditions

The DFH technology integrates carrier modulation, frequency generation, and demodulation into a single process via the G and G−1 functions. The frequency-hopping sequence of the DFH signal is non-repetitive and exhibits strong randomness, which can be verified through the analysis of the working mechanism of DFH communication and the operating principle of the G function. Due to the non-repetitive frequency variation characteristic of the DFH signal, the current frequency of the received DFH signal cannot be predicted, and the receiver must search the entire hopping bandwidth to determine the hopping rate and further identify the current signal frequency.
In most practical applications, the hopping bandwidth of a frequency-hopping system is much wider than the frequency bandwidth occupied by a single hop. Under low-SNR conditions, the hopping bandwidth may contain noise and other interfering signals, leading to the degradation of the received signal quality. The received signal R(t) can be mathematically modeled as Equation (5):
R(t) = S(t) + J(t) + n(t)
where R(t) is the received signal at the receiver, S(t) is the original DFH signal, J(t) represents the interference signal, and n(t) denotes the additive white Gaussian noise (AWGN).
Equation (5) reveals that the time–frequency distribution of the received signal is affected by various factors, including the type of interference, the SNR level, and the interference duration. In this section, three typical types of interference signals are investigated, namely fixed-frequency interference, broadband interference, and chirp interference, and their impacts are tested under different SNR levels. The mathematical expressions of the fixed-frequency interference, broadband interference, and chirp interference signals are given by Equations (6), (7), and (8), respectively.
JF(t) = A·cos(2πfFt + φ)
where fF is the frequency of fixed-frequency interference.
JB(t) = n(t) ∗ h(t)
where h(t) denotes the impulse response of the broadband filter.
JC(t) = A·cos(2πfC0t + πkt2 + φ), 0 ≤ t ≤ T
where fC0 denotes the starting frequency and k = B/T represents the chirp rate, in which B is the bandwidth and T is the pulse width.
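The three interference models of Equations (6)–(8) can be generated in numpy as follows. The band edges, the crude windowed-sinc band-pass filter used for h(t), and all parameter values are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

fs = 100e6                               # sampling rate (Section 4)
T = 200e-6                               # observation window
t = np.arange(int(fs * T)) / fs
rng = np.random.default_rng(0)

# Fixed-frequency interference, Eq. (6).
def fixed_freq_jam(fF, A=1.0, phi=0.0):
    return A * np.cos(2 * np.pi * fF * t + phi)

# Broadband interference, Eq. (7): white noise convolved with a band-pass
# impulse response h(t). The FIR design below (difference of two windowed
# sinc low-pass prototypes) is only an illustrative choice of h.
def broadband_jam(f_lo=10.2e6, f_hi=22.8e6, ntaps=101):
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = (2 * f_hi / fs) * np.sinc(2 * f_hi * n / fs) \
      - (2 * f_lo / fs) * np.sinc(2 * f_lo * n / fs)
    h *= np.hamming(ntaps)
    return np.convolve(rng.standard_normal(t.size), h, mode="same")

# Chirp interference, Eq. (8): k = B/T is the chirp rate.
def chirp_jam(fC0=10.2e6, B=12.6e6, A=1.0, phi=0.0):
    k = B / T
    return A * np.cos(2 * np.pi * fC0 * t + np.pi * k * t ** 2 + phi)
```

Summing any of these with the DFH signal and white Gaussian noise reproduces the received-signal model R(t) of Equation (5).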
Figure 4 plots the STFT-based time–frequency distribution of the received signal, which is contaminated by fixed-frequency, broadband, and chirp interference. All interferences fully cover the frequency-hopping bandwidth of the DFH signal, with each interference lasting from one hop to multiple hops.
Figure 4 illustrates the time–frequency distribution of the received signals under different SNR levels. Different colors represent the signal’s energy at each time–frequency bin. Warm tones, such as red, orange, and yellow, correspond to high energy and large amplitude, while cold tones including blue, green, and black represent low energy and small amplitude. In Figure 4a (SNR = 20 dB), the DFH signal’s frequency-hopping process over time is clearly displayed in the time–frequency distribution, and the time and frequency ranges of the interference can be easily identified. As the SNR decreases, the noise intensity increases significantly. Consequently, the DFH signal is gradually submerged in the noise and becomes difficult to distinguish in Figure 4b,c.
Due to the unique data transmission mechanism of the DFH system, the receiver must scan the entire hopping bandwidth to decode the signal. Under low-SNR conditions, noise and interference are prone to cause decoding errors. Unlike conventional frequency-hopping systems, the randomness of the DFH hopping sequence prevents the receiver from converting the received signal into a fixed-frequency signal or suppressing noise using a band-pass filter. Therefore, how to effectively eliminate the impacts of noise and interference signals under low-SNR conditions is a critical issue that must be addressed by the DFH receiver.

3. Frequency Recognition Method for DFH Signals Based on Super-Resolution Reconstruction of Time–Frequency Images (TFISrR)

3.1. Method Motivation

A comparative analysis of references [6,9] verifies that denoising algorithms based on time–frequency images achieve superior noise-suppression performance. At the receiver, the received signal is first transformed into the time–frequency domain, where the time–frequency image quality is determined by the window length and the FFT size (NFFT). Increasing the sampling window length improves the time-domain estimation accuracy of frequency hopping, while a larger NFFT size enhances frequency-domain estimation accuracy. Although both strategies can improve time–frequency image resolution, they inevitably increase the data volume, thereby imposing a heavier computational burden and higher processing latency on the system.
The super-resolution (SR) reconstruction of time–frequency distribution images is a typical task in computer vision and image processing. Starting from a low-resolution (LR) image obtained from the input signal, it aims to recover indistinct fine structures and enhance image details. The core objective is to reconstruct a high-resolution (HR) image from a given LR input image [11], thereby improving image quality and detail fidelity.
Most existing deep learning-based image super-resolution methods rely on convolutional neural networks (CNNs) and generative adversarial networks (GANs). An enhanced Swin Transformer with U-Net generative adversarial network (ESTUGAN) method was proposed for remote-sensing image super-resolution, which employs a U-Net-based discriminator to explicitly guide GAN training, enabling better recovery of detailed and complex textural regions in SR outputs [12]. Jiaxin Li proposed an X-shaped interactive autoencoder named XINet for hyperspectral image super-resolution [13]. This framework performs joint learning from hyperspectral and multispectral data channels using two parameter-sharing U-Net branches, each focusing on distinct detail scales. Such a design enriches spatial-spectral feature extraction and boosts hyperspectral image SR performance. An improved radar-image SR model based on SR3 was developed for weather radar imagery, which integrates an RA module into the U-Net structure to strengthen feature extraction and generate higher-resolution weather radar images [14]. Refs. [12,13,14] all adopt the U-Net architecture shown in Figure 5, with lightweight modifications such as GAN integration or parameter-sharing designs. These improvements enhance the network’s capability to capture fine image details and effectively improve image resolution.
In this work, the time–frequency image of DFH signal is initially processed by the U-Net network for super-resolution reconstruction. A comparison between the original time–frequency image and the reconstructed HR image is presented in Figure 6. As observed, the reconstructed image yields a substantially higher resolution, which facilitates more accurate DFH signal frequency recognition. Motivated by this, an attention mechanism is incorporated to construct a CU-Net-based autoencoder, which not only improves image resolution but also suppresses signal noise.
References [8,15] adopt residual network (ResNet) structures to expand the receptive field and preserve low-level features, which benefits accurate DFH signal frequency recognition. In this paper, after obtaining the SR image, the correlation between adjacent hop frequencies derived from the G function transform is taken as input. A ResNet model is constructed to learn the nonlinear frequency mapping in the reconstructed image, enabling the correct recognition of the DFH signal frequency sequence.

3.2. Method Structure

The overall pipeline of the proposed TFISrR method is illustrated in Figure 7. First, time–frequency analysis is conducted on the received signal to generate a time–frequency image. Next, a denoising and super-resolution reconstruction network is applied to produce the SR image. Finally, a signal-frequency-recognition network extracts the DFH signal frequency sequence from the SR image, realizing precise DFH signal frequency identification.

3.2.1. Time–Frequency Analysis

Time–frequency analysis methods for non-stationary signals, including STFT [16], Wigner–Ville distribution (WVD) [17], Wavelet transform, and Hilbert–Huang transform [18], have been widely applied in radar and communication signal processing, speech/audio processing, and biomedical engineering [19]. In recent years, deep learning-based approaches commonly use time–frequency images as input, constructing diverse network architectures for signal recognition and feature extraction [20,21,22].
In the proposed TFISrR method, the signal time–frequency distribution image is processed by a customized CU-Net for joint denoising and super-resolution reconstruction within a unified framework. To reduce the overall processing latency, STFT is adopted for time–frequency analysis, which accelerates computation while satisfying performance requirements. The STFT is formulated as
STFTR(τ, f) = ∫−∞+∞ R(t) · w(t − τ) · e^(−j2πft) dt
where R(t) denotes the received time-domain sampled signal, w(t − τ) represents the analysis window function, τ is the time-shift parameter, and f is the frequency variable.
Figure 4a,b shows the time–frequency images, which clearly depict the frequency-hopping pattern of the DFH signal. The variation in color intensity and frequency position over time facilitates intuitive observation of hopping behavior, while interference components also exhibit distinguishable time–frequency signatures. To accelerate the training of the denoising and super-resolution network, the STFT time–frequency image—as the input Xinput to the noise reduction and super-resolution reconstruction network—is converted from a three-channel RGB format to a single-channel grayscale format and resized to 320 × 320.
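As a rough sketch of this preprocessing step, the snippet below computes an STFT magnitude image with scipy and downsamples it to a 320 × 320 single-channel grayscale input. The toy single-tone signal, the window/NFFT sizes, and the nearest-neighbor index resize are stand-ins for the paper's actual settings.

```python
import numpy as np
from scipy.signal import stft

fs = 100e6
rng = np.random.default_rng(1)
# Toy received signal: one 15 MHz tone plus white noise stands in for R(t).
t = np.arange(int(fs * 200e-6)) / fs
r = np.cos(2 * np.pi * 15e6 * t) + 0.5 * rng.standard_normal(t.size)

# STFT magnitude as in Eq. (9); nperseg/nfft are illustrative choices.
f, tau, Z = stft(r, fs=fs, nperseg=256, noverlap=192, nfft=1024)
img = np.abs(Z)

# Normalize to [0, 1] single-channel "grayscale" and resize to 320 x 320 by
# nearest-neighbor index sampling (a stand-in for a proper image resize).
img = (img - img.min()) / (img.max() - img.min() + 1e-12)
rows = np.linspace(0, img.shape[0] - 1, 320).astype(int)
cols = np.linspace(0, img.shape[1] - 1, 320).astype(int)
x_input = img[np.ix_(rows, cols)]            # shape (320, 320)
```

The resulting `x_input` plays the role of Xinput fed to the denoising and super-resolution network.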

3.2.2. Noise Reduction and Super-Resolution Reconstruction Network

To achieve an accurate regression estimation of the frequency characteristics for DFH signals, a dedicated noise reduction and super-resolution reconstruction network is proposed. This network, denoted as CU-Net, synergistically integrates a U-Net backbone with an autoencoder architecture and dense skip connections, while embedding the convolutional block attention module (CBAM) to enhance feature representation. The architecture of CU-Net is illustrated in Figure 8.
The proposed CU-Net fully captures the time–frequency distribution representation, local sparsity, and multi-scale contextual dependencies of DFH signals. Built on the U-Net backbone, the network adopts channel upsampling at the input to construct a high-dimensional feature space, project multi-scale features into independent channels, and provide a decoupled feature foundation for subsequent downstream processing. Furthermore, considering the inherent sparsity and locality of DFH signals in the time–frequency domain, we embed the CBAM attention mechanism into the network. The channel attention module adaptively weights feature channels sensitive to critical signal properties such as signal power and transient characteristics. Meanwhile, the spatial attention module focuses on valid signal regions in time–frequency maps, thereby suppressing the pervasive interference caused by background noise and irrelevant components. With the above designs, DFH signal components can be effectively separated from low-SNR, complex time–frequency maps and reconstructed via super-resolution. This offers high-quality and high-reliability input features for subsequent ResNet-based frequency sequence recognition.
  • Network structure.
In the time–frequency image of the DFH signal, the energy distribution, dwell time, and bandwidth of individual frequency components exhibit high similarity. Before encoder downsampling, a channel upscaling operation is performed on the input Xinput to map low-dimensional channel features to multi-channel representations, yielding feature layer X0. This layer is then convolved with the CBAM to produce feature layer X1, which serves as the encoder input. This strategy promotes comprehensive multi-view feature extraction and effective information representation.
In the encoder, downsampling is implemented via convolutional layers, while the decoder adopts transposed convolution for upsampling. Since DFH signal features are distributed across both time and frequency dimensions, the network must capture multi-receptive-field characteristics. To satisfy this requirement, multi-scale skip connections are deployed, stacking encoder downsampling feature maps with decoder upsampling feature maps across multiple skip paths. This structure supplies rich multi-scale contextual information during decoding, better preserving the time–frequency characteristics of the DFH signal.
Following the decoder upsampling stage, the output feature layer X1′ is processed by the CBAM and a final convolutional layer to generate the ultimate super-resolution reconstructed image Xoutput.
  • Encoder and decoder.
After channel upscaling, the enhanced feature maps are sequentially processed by the encoder and decoder for feature extraction and signal reconstruction, respectively. The encoder and decoder are constructed in a symmetric hierarchical manner. The encoder is composed of the first, second, …, (M − 1)th layers in a top-down fashion, while the decoder consists of the (M − 1)th, (M − 2)th, …, and first layers in a bottom-up manner. The mathematical formulations of the encoder and decoder at the (M − 1)th layer are given by Equations (10) and (11), respectively:
XM = fM−1(wM−1 × fM−2(… f1(w1 × X1 + b1) …) + bM−1)
X1′ = h1(ŵ1 × h2(… hM−1(ŵM−1 × XM′ + b̂M−1) …) + b̂1)
where wM−1, bM−1, ŵM−1, and b̂M−1 denote the weight matrices and bias terms of the encoder and decoder at the (M − 1)th layer, respectively, and fM−1 and hM−1 represent the nonlinear activation functions of the encoder and decoder, with XM′ = XM.
In CU-Net, the encoder projects high-dimensional feature layer X1 maps into a compact low-dimensional layer XM to extract salient time–frequency features of DFH signals. Conversely, the decoder reconstructs the low-dimensional latent-features layer XM back to the high-dimensional feature layer X1′, approximating the distribution of the original feature maps.
The formulations of the encoder and decoder at the N-th layer can be expressed as Equations (12) and (13). Similarly, XM′ = XM is satisfied when N = M.
XN = fN−1(wN−1 × XN−1 + bN−1), N ∈ [2, M]
XN′ = hN(ŵN × XN+1′ + b̂N), N ∈ [1, M − 1]
In CU-Net, several skip connections are used. The feature layer fed into the N-th layer of the decoder is the fused feature layer YN′ from the skip connections, which can be written as Equation (14):
YN′ = XN + XN′, N ∈ [1, M − 1]
With the skip connections, Equation (13) is modified to Equation (15), where YM′ = XM′ when N = M − 1:
XN′ = hN(ŵN × YN+1′ + b̂N), N ∈ [1, M − 1]
Accordingly, Equation (11) is modified to Equation (16):
X1′ = h1(ŵ1 × h2(… hM−1(ŵM−1 × YM′ + b̂M−1) …) + b̂1)
  • CBAM.
The received time–frequency images are inevitably contaminated by noise and interference, which degrade the precision of DFH signal feature extraction and reconstruction. Given the time-varying frequency characteristics of DFH signals, the CBAM is embedded into the network to suppress interference and highlight informative features.
CBAM comprises two complementary sub-modules: the channel attention module (CAM) and the spatial attention module (SAM) [23]. The feature layer XG is weighted along the channel and spatial dimensions using attention, and a convolution is then applied to produce the output feature layer XG+1. The CBAM can be expressed as Equation (17), and the channel attention module and the spatial attention module can be expressed as Equations (18) and (19):
XG+1 = MS(MC(XG) × XG) × (MC(XG) × XG), XG ∈ {X0, X1′}
MC(XG) = σ(MLP(AvgPool(XG)) + MLP(MaxPool(XG)))
MS(XG) = σ(f^(7×7)([AvgPool(XG); MaxPool(XG)]))
where σ(·) denotes Sigmoid activation function, MLP denotes multi-layer perceptron, and f7×7 represents the 7 × 7 convolutional operation.
By integrating CBAM, the network achieves stronger feature representation and generalization capabilities for DFH signals, simplifies the feature learning process, and improves the accuracy of component separation and signal reconstruction.
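A minimal PyTorch rendering of Equations (17)–(19) might look as follows. This is the standard CBAM formulation rather than the authors' exact code, and the reduction ratio is an assumed hyperparameter.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """M_C of Eq. (18): a shared MLP over avg- and max-pooled descriptors."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))        # AvgPool branch
        mx = self.mlp(x.amax(dim=(2, 3)))         # MaxPool branch
        return torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)

class SpatialAttention(nn.Module):
    """M_S of Eq. (19): 7x7 conv over stacked avg/max channel maps."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return torch.sigmoid(self.conv(pooled))

class CBAM(nn.Module):
    """Channel-then-spatial weighting as in Eq. (17)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()
    def forward(self, x):
        x = self.ca(x) * x
        return self.sa(x) * x
```

Applied to a feature layer such as X0 or X1′, the module reweights channels sensitive to signal power and transients, then masks time–frequency regions dominated by noise.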
  • Network parameter configuration.
Performing time–frequency image super-resolution reconstruction requires selecting a proper scaling factor for CU-Net. The scaling factor is determined by balancing computational cost, reconstruction quality and recognition accuracy. Common options include ×2, ×3, ×4, and ×8. Larger scaling factors substantially increase computational complexity and tend to introduce reconstruction artifacts, while smaller ones yield limited resolution improvement. Therefore, this paper adopts the conventional ×4 scaling factor, which achieves a satisfactory trade-off between computational complexity and recognition performance and conforms to mainstream research configurations.
In CU-Net, all convolution layers adopt a 3 × 3 kernel size except for the final output layer. Each convolution is followed by a batch normalization (BN) layer and a ReLU activation function. The input Xinput with dimensions 1 × 320 × 320 is first fed into a channel upscaling convolutional layer with a 1 × 1 stride, generating a 16 × 320 × 320 feature layer X0. This feature layer X0 is then processed by a CBAM-integrated convolutional layer to produce X1, which serves as the encoder input.
The encoder performs five downsampling stages, each consisting of a convolution with a 2 × 2 stride and a 1 × 1 convolution, ultimately yielding a 512 × 10 × 10 feature layer, X6. In the decoding path, multi-scale encoder features XN are fused with decoder features via symmetric skip connections. The decoder executes five upsampling stages; after each stage, the upsampled feature layer XN′ is fused with the corresponding skip-connected encoder feature layer XN to form YN′, which serves as the input for the next stage.
After five upsampling stages, a 16 × 320 × 320 reconstructed feature layer X1′ is obtained. This layer is further processed by a 1 × 1 convolution with CBAM and a Sigmoid activation to generate the final super-resolved image Xoutput of size 1 × 320 × 320. The detailed layer-wise parameter settings are summarized in Table 1.
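The shape bookkeeping of this pipeline can be checked with a stripped-down PyTorch sketch (no BN, no CBAM, additive skip fusion per Equation (14)). The kernel sizes and channel widths here are assumptions chosen only to reproduce the 1 × 320 × 320 → 512 × 10 × 10 → 1 × 320 × 320 path, not Table 1's exact configuration.

```python
import torch
import torch.nn as nn

class MiniCUNet(nn.Module):
    """Shape-level sketch of the CU-Net encoder/decoder path.
    Channels: 16 -> 32 -> 64 -> 128 -> 256 -> 512; spatial 320 -> ... -> 10."""
    def __init__(self):
        super().__init__()
        chans = [16, 32, 64, 128, 256, 512]
        self.up_ch = nn.Conv2d(1, 16, 3, padding=1)   # channel upscaling
        self.enc = nn.ModuleList(
            nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1)
            for i in range(5))
        self.dec = nn.ModuleList(
            nn.ConvTranspose2d(chans[i + 1], chans[i], 4, stride=2, padding=1)
            for i in reversed(range(5)))
        self.out = nn.Conv2d(16, 1, 1)
    def forward(self, x):
        x = torch.relu(self.up_ch(x))
        skips = []
        for conv in self.enc:                         # five downsampling stages
            skips.append(x)
            x = torch.relu(conv(x))
        for deconv, skip in zip(self.dec, reversed(skips)):
            x = torch.relu(deconv(x)) + skip          # Eq. (14) skip fusion
        return torch.sigmoid(self.out(x))             # super-resolved output
```

Each stride-2 convolution halves the 320 × 320 map down to 10 × 10 at the bottleneck, and each transposed convolution doubles it back, so the output matches the 1 × 320 × 320 input size.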

3.2.3. Signal Frequency Recognition Network

  • Network architecture.
The time–frequency images processed by CU-Net exhibit suppressed noise, reduced interference, and enhanced resolution, which significantly facilitate subsequent frequency recognition. On this basis, a lightweight residual network, ResNet, with deep residual learning and dense residual connections is constructed for DFH signal frequency identification, termed the Signal Frequency Recognition Network. The architecture of the proposed ResNet is depicted in Figure 9.
In Figure 9, the input to the ResNet is the super-resolved image Xoutput generated by CU-Net. To mitigate gradient vanishing and explosion during network training, residual blocks (Res Blocks) are utilized for hierarchical feature extraction. After global pooling, the compact features are fed into a fully connected (FC) layer followed by a SoftMax activation function, yielding the final classification results of DFH signal frequency sequences.
  • Residual block.
The structure of the Res Block is illustrated in Figure 10.
Given an input feature layer x, two successive 3 × 3 convolutional layers with ReLU activation are applied to compute the residual feature layer F(x). The output x′ is obtained by the element-wise addition of the input and the residual feature, as formulated in Equation (20):
x′ = F(x) + x
Each residual layer contains two cascaded Res Blocks for feature refinement. After processing, the N-th feature layer XN is transformed into the subsequent feature layer XN+1, as expressed in Equation (21).
X_{N+1} = F(F(X_N) + X_N) + F(X_N) + X_N
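Equations (20) and (21) can be checked with a minimal numpy sketch. The residual branch F below is a stand-in (one linear map plus ReLU) rather than the paper's two trained 3 × 3 convolutions; it only serves to verify the cascade algebra.

```python
import numpy as np

# Minimal check of Equations (20)/(21): x' = F(x) + x for one block, and
# X_{N+1} = F(F(X_N) + X_N) + F(X_N) + X_N for two cascaded blocks.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) * 0.1

def F(x):
    # placeholder residual branch: linear map + ReLU instead of two 3x3 convs
    return np.maximum(W @ x, 0.0)

def res_block(x):
    return F(x) + x                  # Equation (20)

def res_layer(x):
    return res_block(res_block(x))   # two cascaded Res Blocks, Equation (21)

x = rng.standard_normal(8)
y = res_block(x)
z = res_layer(x)
# expanding the cascade gives exactly the Equation (21) form
assert np.allclose(z, F(F(x) + x) + F(x) + x)
```

The identity path guarantees the input propagates unchanged alongside the learned residual, which is what mitigates vanishing gradients in deep stacks.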
  • Network parameter configuration.
In ResNet, the 1 × 320 × 320 super-resolution reconstructed image is first processed by a 7 × 7 convolutional layer with a 2 × 2 stride, producing a 64 × 160 × 160 feature layer X1. A max-pooling layer is then applied to downsample X1 to a 64 × 80 × 80 feature layer X2. Three residual layers are sequentially stacked to extract deep features, generating a 512 × 10 × 10 feature layer X6. An average pooling (AvgPool) layer compresses X6 into a 512 × 1 × 1 vector. Finally, a fully connected layer with SoftMax activation outputs the classification results of DFH signal frequency sequences. The detailed layer-wise parameter configurations are listed in Table 2.
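The ResNet shape pipeline just described can likewise be traced. The sketch assumes the stem convolution and max-pooling each halve the spatial size, and each residual layer halves it again while doubling the channels, matching Table 2; names are illustrative.

```python
# Trace the ResNet feature-map shapes (channels, height, width) from Table 2.

def resnet_shapes(in_shape=(1, 320, 320)):
    shapes = [in_shape]
    c, h, w = 64, in_shape[1] // 2, in_shape[2] // 2
    shapes.append((c, h, w))          # X1: 7x7 conv, stride 2
    h, w = h // 2, w // 2
    shapes.append((c, h, w))          # X2: max pooling
    for _ in range(3):                # three residual layers
        c, h, w = c * 2, h // 2, w // 2
        shapes.append((c, h, w))
    shapes.append((c, 1, 1))          # AvgPool compresses to a 512-d vector
    return shapes

r = resnet_shapes()
print(r[1])    # (64, 160, 160)
print(r[-1])   # (512, 1, 1)
```

The trace reproduces the 64 × 160 × 160 stem output and the final 512 × 1 × 1 pooled vector from the text.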

4. Experimental Section

To validate the effectiveness of the proposed TFISrR framework for DFH signal frequency detection, comprehensive numerical simulations are conducted on a dedicated dataset. The dataset consists of 10,000 signal samples with a sampling rate of 100 MHz, a frequency-hopping range from 10.2 MHz to 22.8 MHz, 64 discrete frequency points, a hopping dwell time of 10 μs, and an observation time window of 200 μs. The generated time–frequency images are single-channel grayscale images with a dimension of 1 × 320 × 320.
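The dataset parameters above imply 1000 samples per hop and 20 hops per observation window. The sketch below generates one illustrative DFH realization under those parameters; the hop sequence is drawn at random here as a stand-in for the G-function, so it is not the authors' data generator.

```python
import numpy as np

fs = 100e6                                   # sampling rate: 100 MHz
dwell = 10e-6                                # hop dwell time: 10 us
window = 200e-6                              # observation window: 200 us
freqs = np.linspace(10.2e6, 22.8e6, 64)      # 64 discrete frequency points

samples_per_hop = round(fs * dwell)          # 1000 samples per hop
n_hops = round(window / dwell)               # 20 hops per observation

rng = np.random.default_rng(1)
hop_idx = rng.integers(0, 64, size=n_hops)   # stand-in for the G-function output
t = np.arange(samples_per_hop) / fs
signal = np.concatenate([np.cos(2 * np.pi * freqs[k] * t) for k in hop_idx])

print(samples_per_hop, n_hops, signal.size)  # 1000 20 20000
```

An STFT of `signal` with an appropriate window length would then yield the 1 × 320 × 320 grayscale time–frequency image used as network input.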

4.1. Performance Evaluation of CU-Net

In the training phase of CU-Net, 10,000 noisy TF images of size 1 × 320 × 320 were employed as network inputs, while the corresponding clean high-resolution TF images of DFH signals served as ground-truth labels. The dataset was randomly partitioned into training and test subsets with a ratio of 8:2. The mean squared error (MSE) was adopted as the loss function, and the Adam optimizer was utilized for network optimization with an initial learning rate of 0.001 and a batch size of 32.
The denoising and super-resolution reconstruction performance of CU-Net was evaluated under three typical interference scenarios: fixed-frequency interference, wideband interference, and chirp interference, with a signal-to-interference ratio (SIR) of 1.0. Tests were performed at SNR levels of 0 dB, −10 dB, and −20 dB.
For quantitative comparison, the proposed CU-Net was benchmarked against four state-of-the-art methods: LN-MRSCAE [3], MFDDN [4], RIFDAE [5], and conventional U-Net [11]. Performance was evaluated under varying SIR levels and interference types, with MSE as the core metric.
The MSE loss, widely used in regression tasks, measured the averaged squared deviation between the reconstructed and ground-truth images, as defined in Equation (22):
Loss_MSE = (1/n) Σ_{i=1}^{n} (y_true,i − y_pred,i)²
where n denotes the number of test samples, y_true,i is the ground-truth value of the i-th sample, and y_pred,i denotes the reconstructed value predicted by the network.
As a critical indicator of anti-interference capability, SIR was defined as the power ratio between the desired signal PSignal and the interference component PInterference, as formulated in Equation (23):
SIR = P_Signal / P_Interference
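Both evaluation quantities, Equations (22) and (23), reduce to a few lines of numpy. The helper names below are illustrative; SIR is computed here as a linear power ratio, consistent with the SIR = 1.0 setting used in the experiments.

```python
import numpy as np

def mse_loss(y_true, y_pred):
    # Equation (22): mean squared deviation between images
    return np.mean((y_true - y_pred) ** 2)

def sir(signal, interference):
    # Equation (23): signal power over interference power (linear ratio)
    p_signal = np.mean(signal ** 2)
    p_interf = np.mean(interference ** 2)
    return p_signal / p_interf

rng = np.random.default_rng(2)
clean = rng.random((320, 320))
recon = clean + 0.01 * rng.standard_normal((320, 320))
print(float(mse_loss(clean, recon)))   # small but nonzero reconstruction error

s = np.sin(np.linspace(0, 100, 10000))
j = np.sin(np.linspace(0, 130, 10000))
print(float(sir(s, j)))                # roughly 1 for two equal-amplitude tones
```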
Figure 11 depicts the MSE loss curves of different networks under three interference types with SIR = 1.0. The MSE loss monotonically decreases with an increasing SNR, indicating improved reconstruction quality at higher SNR levels. Benefiting from the embedded CBAM, CU-Net achieves the lowest MSE among all compared methods and exhibits stable convergence when the SNR exceeds −14 dB.
Deep learning methods relying on one-dimensional (1D) time-domain data, such as LN-MRSCAE and RIFDAE, exhibit significant performance degradation under extremely low-SNR conditions. The reason is that these methods directly operate on raw 1D time-domain samples, which provide extremely limited feature dimensions and contextual information when subjected to heavy noise contamination. Consequently, neural networks struggle to learn the unique G-function-dependent nonlinear frequency-mapping relationship inherent in DFH signals and fail to discriminate the subtle time-domain discrepancies between instantaneous frequency-hopping transitions and ambient background noise.
Although transforming the received one-dimensional signals into two-dimensional (2D) time–frequency images prior to processing constitutes a viable solution, the sole employment of generic image processing networks (e.g., conventional U-Net) still presents substantial challenges. This is due to the distinct high locality and sparsity of the fine-grained time–frequency structural features of DFH signals. Specifically, these features manifest as well-localized, energy-concentrated, and short-duration hopping stripes in the time–frequency domain. Unfortunately, generic image processing architectures lack dedicated mechanisms to focus on and preserve these critical local features, ultimately resulting in the loss of delicate time–frequency structural characteristics during processing.
Figure 12 illustrates the MSE variation of CU-Net under different interference types with SIR = 1.0. Consistent with the above trend, MSE decreases as SNR rises. CU-Net is most severely affected by fixed-frequency interference, whereas Gaussian noise imposes the mildest degradation. The network tends to converge steadily when the SNR is above −14 dB.
As shown in Figure 13, the MSE performance of CU-Net under three interference types was evaluated across different SIR conditions. The MSE value decreases with both elevated SNR and SIR. When the SNR exceeds −14 dB, the reconstruction loss tends to be stable, demonstrating the strong robustness of CU-Net against composite noise and interference.

4.2. Performance Evaluation of ResNet

In the training of the frequency recognition network, 10,000 super-resolved TF images reconstructed by CU-Net were used as inputs, and the corresponding DFH frequency sequences were labeled as ground truth. The dataset was split into training and test sets at an 8:2 ratio. Cross-entropy loss was adopted for classification training, and the Adam optimizer was applied with an initial learning rate of 0.001 and a batch size of 32.
Given a 200 μs time window and 10 μs dwell time, two cases exist in DFH sequences:
  • A complete frame containing 19 valid hops and 2 incomplete hops, resulting in 21 frequency labels;
  • A full 20-hop sequence with 64 optional frequency points per hop.
To maintain label consistency, all frequency sequences were unified to 21 dimensions. For 20-hop samples, the 21st label was filled with 65 to indicate a non-existent hop. For unreliably recognized frequency points, the value 66 was used to mark invalid frequency status.
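The label-unification rule above is easy to sketch: pad every sequence to 21 entries with the sentinel 65 for a non-existent hop, and overwrite unreliable positions with 66. The function names are illustrative, not from the authors' code.

```python
# Label scheme: 21 entries per sample; 65 = non-existent 21st hop,
# 66 = unreliably recognized (invalid) frequency point.
NO_HOP, INVALID = 65, 66

def unify_labels(hops, length=21):
    labels = list(hops)[:length]
    labels += [NO_HOP] * (length - len(labels))   # pad 20-hop samples to 21
    return labels

def mark_invalid(labels, bad_positions):
    out = list(labels)
    for i in bad_positions:
        out[i] = INVALID                          # flag unreliable recognition
    return out

seq20 = list(range(20))          # a full 20-hop sequence (frequency indices 0..63)
labels = unify_labels(seq20)
print(len(labels), labels[-1])   # 21 65
```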
Two groups of comparative simulations were conducted to verify the recognition performance:
  • The super-resolved outputs from LN-MRSCAE, MFDDN, RIFDAE, U-Net, and CU-Net were fed into the same ResNet for recognition comparison.
  • The CU-Net reconstructed images were fed into ResNet, ResNet34 [15], DarkNet20 [24], DenseNet [25], and GoogleNet [26] for classifier comparison.
Performance was measured by cross-entropy loss (for training stability) and frequency recognition accuracy (for task validity).
Supposing there are C categories, the model outputs a probability distribution p = [p1, p2, …, pC], and the true label is represented by the one-hot vector y = [y1, y2, …, yC]. Cross-entropy loss is standard for multi-class classification, calculated as Equation (24):
Loss_Cross-Entropy = −Σ_{i=1}^{C} y_i log(p_i)
where y_i is the probability value of the i-th category in the true label, and p_i is the predicted probability value of the i-th category.
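Equation (24) in numpy form, with a small clipping constant to guard against log(0); the helper name and example values are illustrative.

```python
import numpy as np

def cross_entropy(y, p, eps=1e-12):
    # Equation (24): -sum(y_i * log(p_i)) over C categories
    p = np.clip(p, eps, 1.0)          # avoid log(0)
    return -np.sum(y * np.log(p))

C = 4
y = np.zeros(C); y[2] = 1.0           # one-hot true label: category index 2
p = np.array([0.1, 0.1, 0.7, 0.1])    # predicted distribution
print(round(float(cross_entropy(y, p)), 4))  # -log(0.7) ≈ 0.3567
```

For a one-hot label the sum collapses to the negative log-probability assigned to the true category, so the loss falls as the network grows more confident in the correct frequency.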
The recognition accuracy was computed by element-wise matching between the predicted hopping sequence F′ and the ground-truth frequency set F. The accuracy for each batch is the ratio of correctly identified frequency points to the total number of hopping points, and the overall accuracy is the average over all batches, as defined in Equation (25):
Accuracy = (Σ_{n=1}^{N} Σ_b Indicator(F_b′ = F_b)) / (N × 21 × B) × 100%
where N is the number of batches, n indexes the n-th batch, B is the batch size, b indexes the frequency-hop points within a batch (each batch contains B samples of 21 hops), F_b′ and F_b are the predicted and true frequencies, respectively, and Indicator(F_b′ = F_b) is an indicator function equal to 1 for correct matches and 0 otherwise.
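Equation (25) amounts to element-wise matching of 21-point label sequences. The sketch below takes N as the number of batches and B as the batch size, as in the definition above; the function name is illustrative.

```python
import numpy as np

def recognition_accuracy(pred, true):
    # Equation (25): fraction of correctly identified frequency points, in percent.
    # pred, true: integer label arrays of shape (N, B, 21)
    N, B, hops = true.shape
    correct = np.sum(pred == true)
    return correct / (N * B * hops) * 100.0

rng = np.random.default_rng(3)
true = rng.integers(0, 64, size=(5, 32, 21))
pred = true.copy()
pred[:, :, 0] = (pred[:, :, 0] + 1) % 64      # corrupt the first hop of every sample
print(round(float(recognition_accuracy(pred, true)), 2))  # 20/21 correct ≈ 95.24
```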
Frequency recognition tests were carried out under fixed-frequency, wideband, and chirp interferences with SIR = 1.0. The cross-entropy loss and recognition accuracy curves are illustrated in Figure 14, Figure 15, Figure 16 and Figure 17.
Figure 14 shows the cross-entropy loss of ResNet using inputs from different reconstruction networks. MFDDN and RIFDAE provide poor reconstruction quality, leading to failed recognition when SNR < −6 dB. Although the loss decreases after SNR exceeds −4 dB, it remains at a high level. By contrast, LN-MRSCAE, U-Net, and CU-Net yield steadily declining losses as SNR increases. Notably, the proposed CU-Net achieves the lowest loss even at −20 dB and converges at approximately −12 dB.
As presented in Figure 15, the recognition accuracy of ResNet exhibits consistent trends. Reconstructed images from MFDDN and RIFDAE cannot support reliable recognition below −6 dB. Conversely, inputs from LN-MRSCAE, U-Net, and CU-Net maintain accuracy above 70% at SNR = −14 dB. CU-Net achieves the best performance, with accuracy exceeding 90% when SNR > −12 dB.
Figure 16 illustrates the cross-entropy loss of different recognition networks using CU-Net reconstructed inputs. All models show reduced loss with increasing SNR. The proposed lightweight ResNet achieves the fastest convergence and lowest loss, stabilizing at SNR = −12 dB.
As shown in Figure 17, the recognition accuracy of all classifiers improves with elevated SNR. The proposed ResNet achieves the highest accuracy (over 90% when SNR > −12 dB) and strong robustness under low-SNR and strong interference conditions.
To validate the generalization capability of our proposed method, we established an indoor over-the-air (OTA) transmission and reception testbed based on a software-defined radio (SDR), as illustrated in Figure 18. The platform employs a host PC (marked as (1) in Figure 18) to control the USRP-HMX310 (2) for transmitting DFH and jamming signals, and another USRP-HMX310 (3) for capturing OTA waveforms and forwarding them to the host PC for signal processing and data analysis.
It is worth noting that although the DFH and jamming signals adopted in the experiment share identical parameter settings with those in simulations, the practical OTA transceiving process introduces inherent non-ideal factors that cannot be fully modeled by synthetic AWGN and Rician channels. These factors include quantization noise and real multipath fading induced by indoor reflection paths. Consequently, the constructed OTA platform is capable of validating the generalization performance of TFISrR under realistic physical propagation conditions.
Figure 19 compares the recognition accuracy on real-world and synthetic datasets. It can be observed that the proposed method exhibits excellent generalization capability. Due to inherent environmental interference and quantization noise in the real-world data, the achieved accuracy is slightly lower than that on the synthetic dataset.

5. Conclusions and Future Work

To address the challenge of accurate instantaneous frequency recognition for differential frequency-hopping signals in low-SNR and complex electromagnetic environments, this paper proposes a TFISrR framework based on time–frequency image super-resolution reconstruction and deep learning recognition. The proposed CU-Net effectively suppresses noise and interference while enhancing the resolution of TF images via an attention-augmented symmetric encoder–decoder structure. The lightweight ResNet further achieves robust frequency-sequence classification using the refined super-resolved TF representations. Extensive simulation results demonstrate that the proposed modules outperform existing benchmark methods in both reconstruction and recognition tasks. The complete TFISrR framework exhibits strong anti-interference ability and low-SNR robustness, making it suitable for practical DFH signal monitoring and recognition applications.
Future work will focus on further optimizing the proposed algorithm, particularly the signal frequency recognition network. A wider range of attention mechanisms and feature extraction strategies will be systematically explored and integrated into the architecture to capture valid target-signal features with higher accuracy and efficiency, while more effectively suppressing residual background noise and interfering signals in complex electromagnetic environments, thereby improving instantaneous frequency recognition accuracy and overall robustness under ultra-low-SNR conditions. Subsequent research will also pursue lightweight optimization of the network model to reduce its computational complexity, a critical step toward meeting the real-time processing requirements of practical DFH signal monitoring and reconnaissance systems. Finally, future investigations may extend validation of the proposed framework to real-world measured DFH signals, further demonstrating its applicability and reliability in practical engineering scenarios.

Author Contributions

Conceptualization, methodology and writing—original draft: P.Y. and B.Q. Software and formal analysis: B.M. Writing—review and editing: M.Q. Resources: H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Liaoning Province, grant number 2024-MS-113, the Liaoning Provincial Department of Education Science and Technology Innovation Team Project, grant number LJ222510144001, and Xingliao Talents Plan, grant number XLYC2202013.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author Bingzhen Mu was employed by Beijing Incom Information Technology Corporation. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.

References

  1. Chen, R.; Shi, J.; Yang, L.-L.; Li, Z.; Guan, L. High-Security Sequence Design for Differential Frequency Hopping Systems. IEEE Syst. J. 2021, 15, 4895–4906. [Google Scholar] [CrossRef]
  2. Mansour, A.; Osswald, C. Interception of Signals with Very Weak SINR. IEEE Access 2024, 12, 97346–97373. [Google Scholar] [CrossRef]
  3. Du, W.; Yang, L.; Wang, H.; Gong, X.; Zhang, L.; Li, C.; Ji, L. LN-MRSCAE: A novel deep learning based denoising method for mechanical vibration signals. J. Vib. Control 2023, 30, 459–471. [Google Scholar] [CrossRef]
  4. Wang, Y.-M.; Cao, G.-Q. A multiscale convolution neural network for bearing fault diagnosis based on frequency division denoising under complex noise conditions. Complex. Intell. Syst. 2023, 9, 4263–4285. [Google Scholar] [CrossRef]
  5. Xie, H.-H.; Yuan, Y.; Zeng, S.-Y. Radar Complex Intermediate Frequency Signal Denoising Based on Convolutional Auto-Encoder Network. IEEE Access 2023, 11, 93090–93097. [Google Scholar] [CrossRef]
  6. Hu, T.; Xu, B.; Wang, Y.; Zhu, J.; Zhou, J.; Wan, Z. Mine Microseismic Signal Denoising Based on a Deep Convolutional Autoencoder. Shock Vib. 2023, 2023, 6225923. [Google Scholar] [CrossRef]
  7. Razzaq, H.; Hussain, Z. Instantaneous Frequency Estimation of FM Signals under Gaussian and Symmetric α-Stable Noise: Deep Learning versus Time-Frequency Analysis. Information 2022, 14, 18. [Google Scholar] [CrossRef]
  8. Li, M.-D.; Xie, J.; Yang, H.-J.; Geng, M.-J.; Liu, J.-C. Specific Emitter Identification of Frequency Hopping Signals Based on Feature Extraction and Deep Residual Network. IEEE Access 2022, 10, 119084–119094. [Google Scholar] [CrossRef]
  9. Yuan, Z.; Zhao, Z.-J.; Zhang, Y.-P.; Zheng, S.-L.; Dai, S.-G. Intelligent Reception of Frequency Hopping Signals Based on CVDP. Appl. Sci. 2023, 13, 7604. [Google Scholar] [CrossRef]
  10. Feng, Y.-X.; Su, J.-K.; Qian, B. A Construction Method for the Random Factor-Based G Function. Appl. Sci. 2024, 14, 10478. [Google Scholar] [CrossRef]
  11. Fan, K.-C.; Hu, M.; Zhao, M.-C.; Qi, L.; Xie, W.-J.; Zou, H.-Y.; Wu, B.; Zhao, S.-S.; Wang, X.-W. RMSRGAN: A Real Multispectral Imagery Super-Resolution Reconstruction for Enhancing Ginkgo Biloba Yield Prediction. Forests 2024, 15, 859. [Google Scholar] [CrossRef]
  12. Yu, C.-H.; Hong, L.-Y.; Pan, T.-P.; Li, Y.-F.; Li, T.-T. ESTUGAN: Enhanced Swin Transformer with U-Net Discriminator for Remote Sensing Image Super-Resolution. Electronics 2023, 12, 4235. [Google Scholar] [CrossRef]
  13. Li, J.-X.; Zheng, K.; Li, Z.; Gao, L.-R.; Jia, X.-P. X-Shaped Interactive Autoencoders with Cross-Modality Mutual Learning for Unsupervised Hyperspectral Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–17. [Google Scholar] [CrossRef]
  14. Shi, Z.-P.; Geng, H.T.; Wu, F.-L.; Geng, L.-C.; Zhuang, X.-R. Radar-SR3: A Weather Radar Image Super-Resolution Generation Model Based on SR3. Atmosphere 2024, 15, 40. [Google Scholar] [CrossRef]
  15. Lv, Y.-N.; Lin, X.; Wang, P.; Li, H.-D. Analysis of paper types based on three dimensional fluorescence spectroscopy combined with Resnet34. Anal. Methods 2025, 17, 7320–7326. [Google Scholar] [CrossRef]
  16. Mohammad-Alikhani, A.; Jamshidpour, E.; Dhale, S.; Akrami, M.; Pardhan, S.; Nahid-Mobarakeh, B. Fault Diagnosis of Electric Motors by a Channel-Wise Regulated CNN and Differential of STFT. IEEE Trans. Ind. Appl. 2025, 61, 3066–3077. [Google Scholar] [CrossRef]
  17. Faisal, K.; Mir, H.; Sharma, R. Human Activity Recognition from FMCW Radar Signals Utilizing Cross-Terms Free WVD. IEEE Internet Things J. 2024, 11, 14383–14394. [Google Scholar] [CrossRef]
  18. Zhu, C.-Y.; Cao, T.-Y.; Zhao, X.-Q.; Yang, Y.-C.; Xu, Z.-W. A Time-Frequency Domain Detection Method for Measurement Data of Non-Stationary Signals Based on Optimized Hilbert-Huang Transform. IEEE Instrum. Meas. Mag. 2023, 26, 29–39. [Google Scholar] [CrossRef]
  19. Xu, Z.-M.; Wu, Q.-H.; Ai, X.-F.; Liu, X.-B.; Wu, J.; Zhao, F. Micromotion Frequency Estimation of Multiple Space Targets Based on RCD Spectrum. IEEE Trans. Aerosp. Electron. Syst. 2025, 61, 14019–14030. [Google Scholar] [CrossRef]
  20. Wang, Y.-Y.; He, S.-R.; Wang, C.-R.; Li, Z.; Li, J.; Dai, H.-J.; Xie, J.-L. Detection and parameter estimation of frequency hopping signal based on the deep neural network. Int. J. Electron. 2021, 109, 520–536. [Google Scholar] [CrossRef]
  21. Lin, M.-Y.; Tian, Y.; Zhang, X.-X.; Huang, Y.-H. Parameter Estimation of Frequency-Hopping Signal in UCA Based on Deep Learning and Spatial Time–Frequency Distribution. IEEE Sens. J. 2023, 23, 7460–7474. [Google Scholar] [CrossRef]
  22. Chen, Z.-Y.; Shi, Y.-W.; Wang, Y.-W.; Li, X.-B.; Yu, X.-H.; Shi, Y.-R. Unlocking Signal Processing with Image Detection: A Frequency Hopping Detection Scheme for Complex EMI Environments Using STFT and CenterNet. IEEE Access 2023, 11, 46004–46014. [Google Scholar] [CrossRef]
  23. Sun, B.; Hu, W.-T.; Wang, H.; Wang, L.; Deng, C.-Y. Remaining Useful Life Prediction of Rolling Bearings Based on CBAM- CNN-LSTM. Sensors 2025, 25, 554. [Google Scholar] [CrossRef]
  24. Betti, A.; Tucci, M. YOLO-S: A Lightweight and Accurate YOLO-like Network for Small Target Detection in Aerial Imagery. Sensors 2023, 23, 1865. [Google Scholar] [CrossRef]
  25. Liu, S.-Z.; Adil, M.; Ma, L.; Mazhar, S.; Qiao, G. DenseNet-Based Robust Channel Estimation in OFDM for Improving Underwater Acoustic Communication. IEEE J. Ocean. Eng. 2025, 50, 1518–1537. [Google Scholar] [CrossRef]
  26. Subba, A.; Sunaniya, A. Computationally optimized brain tumor classification using attention based GoogLeNet-style CNN. Expert. Syst. Appl. 2025, 260, 125443. [Google Scholar] [CrossRef]
Figure 1. Block diagram of DFH communication system.
Figure 2. The frequency variation between adjacent hops of the DFH signal.
Figure 3. Images of the DFH signal in different domains. (a) Time-domain waveform. (b) Frequency domain spectrum. (c) Time–frequency distribution.
Figure 4. Time–frequency images of the DFH signal under different SNR conditions. (a) SNR = 20 dB. (b) SNR = 10 dB. (c) SNR = −20 dB.
Figure 5. U-Net network architecture.
Figure 6. Comparison between the input time–frequency distribution image and super-resolution reconstructed image. (a) Input time–frequency distribution image. (b) Super-resolution reconstructed image.
Figure 7. Architecture of the TFISrR method.
Figure 8. CU-Net architecture.
Figure 9. ResNet architecture.
Figure 10. Residual block.
Figure 11. Variation of MSE loss values for different networks.
Figure 12. Variation of MSE loss values with different interference signals.
Figure 13. Variation of MSE loss values in different SIR conditions.
Figure 14. The cross-entropy loss values from different reconstruction networks.
Figure 15. Recognition accuracy of ResNet using different network inputs.
Figure 16. The cross-entropy loss of different recognition networks.
Figure 17. Recognition accuracy of different networks using CU-Net inputs.
Figure 18. Wireless transmission scenario diagram.
Figure 19. Recognition accuracy of different networks on the real-world and synthetic datasets. (a) Recognition accuracy of ResNet using different network inputs. (b) Recognition accuracy of different networks using the CU-Net input.
Table 1. CU-Net parameter setting.

Index | Layers | Structure | Shape
0 | Xinput | - | 1 × 320 × 320
1 | X0 | Conv-BN-ReLU | 16 × 320 × 320
2 | X1 | CBAM-Conv-BN-ReLU | 16 × 320 × 320
3 | X2 | Downsampling | 16 × 160 × 160 → 32 × 160 × 160
4 | X3 | Downsampling | 32 × 80 × 80 → 64 × 80 × 80
5 | X4 | Downsampling | 64 × 40 × 40 → 128 × 40 × 40
6 | X5 | Downsampling | 128 × 20 × 20 → 256 × 20 × 20
7 | X6 | Downsampling | 256 × 10 × 10 → 512 × 10 × 10
8 | X5 | Upsampling | 512 × 20 × 20 → 256 × 20 × 20
9 | X4 | Upsampling | 256 × 40 × 40 → 128 × 40 × 40
10 | X3 | Upsampling | 128 × 80 × 80 → 64 × 80 × 80
11 | X2 | Upsampling | 64 × 160 × 160 → 32 × 160 × 160
12 | X1 | Upsampling | 32 × 320 × 320 → 16 × 320 × 320
13 | Xoutput | CBAM-Conv-BN-ReLU + Sigmoid | 1 × 320 × 320

Downsampling contains 2 × (Conv-BN-ReLU); Upsampling contains 2 × (TransConv-BN-ReLU). Conv: convolutional layer; TransConv: transposed convolutional layer; BN: batch normalization; ReLU and Sigmoid: activation functions.
Table 2. ResNet parameter setting.

Index | Layers | Structure | Shape
0 | Xinput | - | 1 × 320 × 320
1 | X1 | Conv-BN-ReLU | 64 × 160 × 160
2 | X2 | MaxPool | 64 × 80 × 80
3 | X3 | 2 × Res Block | 64 × 80 × 80 → 128 × 40 × 40
4 | X4 | 2 × Res Block | 128 × 40 × 40 → 256 × 20 × 20
5 | X5 | 2 × Res Block | 256 × 20 × 20 → 512 × 10 × 10
6 | X6 | AvgPool | 512 × 1 × 1
7 | Result | FC + SoftMax | 21

Conv: convolutional layer; BN: batch normalization; ReLU and SoftMax: activation functions; MaxPool/AvgPool: max/average pooling; FC: fully connected layer.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Yang, P.; Qian, B.; Mu, B.; Qi, M.; Wang, H. A Frequency Identification Method for Differential Frequency-Hopping Signals Based on the Super-Resolution Reconstruction of Time–Frequency Images. Electronics 2026, 15, 2070. https://doi.org/10.3390/electronics15102070
