Article

Improved Convolutional Neural Network for Wideband Space-Time Beamforming

School of Electronic and Optical Engineering, Nanjing University of Science and Technology Zijin College, Nanjing 210023, China
* Authors to whom correspondence should be addressed.
Electronics 2024, 13(13), 2492; https://doi.org/10.3390/electronics13132492
Submission received: 13 June 2024 / Revised: 23 June 2024 / Accepted: 24 June 2024 / Published: 26 June 2024

Abstract

Wideband beamforming technology is an effective solution in millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems to compensate for severe path loss through beamforming gain. However, traditional adaptive wideband digital beamforming (AWDBF) algorithms suffer serious performance degradation when signal snapshots are insufficient, and the training process of existing neural network-based wideband beamforming networks is slow and unstable. To address these issues, an AWDBF method based on the convolutional neural network (CNN) structure, called the improved wideband beamforming prediction network (IWBPNet), is proposed. The proposed method increases the network's feature extraction capability for array signals through deep convolutional layers, thus alleviating the problem of insufficient network feature extraction. In addition, pooling layers are introduced into the IWBPNet to address the problem that the fully connected layer of the existing neural network-based wideband beamforming algorithm is too large, which results in slow network training; the pooling operation also improves the generalization ability of the network. Furthermore, the IWBPNet achieves good wideband beamforming performance with few signal snapshots, including beam pattern performance and output signal-to-interference-plus-noise ratio (SINR) performance. The simulation results show that the proposed algorithm outperforms the traditional wideband beamformer with low signal snapshots. Compared with the existing neural network-based wideband beamforming algorithm, the training time of the IWBPNet is only 10.6% of that of the original neural network-based wideband beamformer, while the beamforming performance is slightly improved. Simulations and numerical analyses demonstrate the effectiveness and superiority of the proposed wideband beamformer.

1. Introduction

Beamforming is a spatial filtering technique used to enhance signals in a specific direction while suppressing signals from other directions [1,2,3]. Based on the bandwidth of the processed signal, beamforming techniques can be categorized into two groups, namely narrowband beamforming and wideband beamforming [4,5,6]. Traditional beamforming studies mainly focus on the narrowband field. As the signal propagation environment becomes increasingly complex, adaptive wideband digital beamforming (AWDBF) has been broadly utilized in many fields such as radio communications, radar, sonar, microphone arrays, seismology, astronomy, and medical imaging [7,8,9].
Based on the processing domain of the received signal, adaptive wideband beamforming algorithms can be categorized into spatial-temporal and spatial-frequency beamforming [10,11,12]. The Frost beamformer (FB) is a well-known wideband space-time beamformer that employs a series of tapped delay lines (TDLs) for frequency-dependent weighting [13]. The FB utilizes the linearly constrained minimum variance (LCMV) criterion and can effectively achieve real-time wideband beamforming. To achieve accurate beam pointing and an undistorted response to the signals of interest (SOIs), pre-steering delay structures are utilized to correct the misalignment between the SOI orientation and the array geometry. However, this structure often suffers from time-delay errors in practical applications, resulting in serious performance deterioration [6,14]. Therefore, convolution constraints were introduced to remove the pre-steering delay structures [15], although this algorithm adds computational complexity. Subsequently, a frequency constraint matrix was introduced to eliminate the pre-steering delay structures, yielding the frequency-constrained FB (FCFB) [16]. This method eliminates the pre-steering delay structures without increasing computational complexity and demonstrates good performance. Nonetheless, the FCFB still requires multiple adaptive weights to achieve a satisfactory output signal-to-interference-plus-noise ratio (SINR), resulting in poor real-time performance. Moreover, the traditional AWDBF algorithm cannot effectively generate a gain for the SOI and suppress jamming sources when there are insufficient signal snapshots, leading to significant performance degradation.
Deep learning (DL) has experienced rapid development in recent years, finding widespread applications across various domains, such as target recognition and image classification [17,18,19]. Among the crucial models within DL, the convolutional neural network (CNN) stands out. It boasts essential features such as local connections, weight sharing, and multi-level feature learning. This network model significantly reduces the number of parameters, rendering it a potent instrument for tackling numerous intricate problems. CNNs have been utilized in the field of beamforming to obtain better beamforming performance. In [20], a radial basis function (RBF)-based method using a two-stage CNN is proposed, which can perform beamforming without requiring model parameters and has good accuracy and robust performance. A novel robust adaptive beamforming approach is proposed in [21], where the signal covariance matrix is utilized as training data, and it achieves the purpose of calculating the beamforming weights directly from the signal covariance matrix. However, the process of solving the signal covariance matrix still has some complexity when compared with directly obtaining information from the received array signal. In [22], a CNN is applied at the transmitter and trained in a supervised manner considering both uplink and downlink transmissions. However, the above-mentioned methods of applying neural networks to beamforming to improve the performance of traditional beamformers focus on narrowband beamforming. Model establishment in the field of wideband beamforming is more complex, and the number of parameters is larger. Applying deep learning to wideband beamforming to reduce complexity is therefore meaningful. A frequency-constraint wideband beamforming prediction network (WBPNet) based on the CNN is proposed in [23], which maintains good beam pointing with low signal snapshots.
However, that method only preliminarily explores the possibility of a wideband beamforming algorithm based on deep learning. The network structure utilized is just a simple stack of three convolutional layers, and the large fully connected layer makes the network difficult to train. A neural network-based wideband beamformer using progressive learning to train the network is proposed in [24]. Although the training time is reduced, it still takes about one hour.
Pooling layers (PLs) [25,26,27,28] can significantly reduce the spatial dimension of the input and prevent overfitting of the network, thus reducing computational costs and improving network performance [29]. Moreover, the pooling technology can significantly and effectively reduce model training time, making beamforming more economical [26].
Based on the WBPNet, a new network structure for wideband beamforming weight generation is explored, named the improved WBPNet (IWBPNet). The strong points of the proposed method are as follows. Firstly, to solve the problem of insufficient feature extraction capability in the WBPNet, network layers are added to improve the network's ability to extract features from array signals. Moreover, to address the excessive number of parameters in the WBPNet, which leads to long training times and insufficient generalization ability, PLs are introduced into the IWBPNet to reduce the number of parameters that need to be trained, which makes the network easier to train and improves its generalization ability. Through the above methods, the proposed IWBPNet shows superior performance compared with the WBPNet. In terms of network training speed, the training time of the IWBPNet is 0.3683 h, only 10.6% of the WBPNet training time. In terms of beamforming performance, while maintaining good beam pattern performance, the output SINRs of the IWBPNet show a small improvement. Compared with the method in [24], the proposed IWBPNet saves 64.1% of the training time while ensuring beamforming performance.
The rest of the paper is organized as follows: Section 2 introduces the conventional wideband beamforming method, FB and FCFB. In Section 3, a more economical network structure for the adaptive spatial-temporal wideband beamforming is presented, which greatly improves the problem of long network training time. In Section 4, the simulation results and analyses are given to show the correctness and effectiveness of the proposed algorithm. Finally, the brief conclusions can be found in Section 5.

2. Traditional Space-Time Wideband Beamformers

In wideband beamformers, new dimensions are usually added on top of the spatial dimension to achieve a frequency-dependent response to wideband signals. Among them, the well-known FB achieves this response by adding TDLs after each array element, thereby introducing the time dimension. In this section, we provide a brief overview of traditional wideband beamformers, including the key structures and implementation methods of the FB and the FCFB.

2.1. Wideband FB with Pre-Steering Delay Structures

Assume that there is a one-dimensional uniform linear array (ULA) consisting of a total of J array elements, with each array element being equipped with K delay structures. The spacing between adjacent array elements is set as d, which corresponds to half of the wavelength $\lambda$ at the highest frequency, so as to avoid the occurrence of grating lobes. The incoming signal approaches the array from the far field at the speed of light, denoted as c, and can be treated as a parallel plane wave.
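As a quick numerical check, the half-wavelength element spacing at the highest frequency can be computed as follows (the value $f_{max}$ = 1.1 GHz is taken from the simulation setup in Section 4):

```python
# Half-wavelength spacing d = lambda_min / 2 = c / (2 * f_max), chosen so that
# no grating lobes appear at the highest signal frequency.
c = 3e8          # speed of light (m/s)
f_max = 1.1e9    # highest frequency (Hz); value taken from Section 4
d = c / (2 * f_max)
print(round(d, 4))   # 0.1364 (metres)
```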
FB can adjust the sensor array in real time to respond to the SOI and suppress noise signals. The basic idea is to dynamically adjust the weights of a wideband sensor array, thereby minimizing the noise power of the array output while maintaining a satisfactory response in the direction of the SOI [13]. In order to ensure that the signal from the desired direction is coherent at the output, pre-steering delay structures are utilized in the FB. The wideband FB with pre-steering delays is shown in Figure 1. Here, $T_j(\theta_0) = \frac{(j-1)d\sin\theta_0}{c}$, $j = 1, 2, \ldots, J$, are the pre-steering delays, and $R_k(z)$ is given as
$$R_1(z) = 1, \qquad R_k(z) = z^{-1}, \quad k = 2, 3, \ldots, K.$$
Then, the signal of each tap can be written as
$$x_{j,k}(n) = x_j\left[n - (k-1)\tau\right],$$
where $j = 1, 2, \ldots, J$, $k = 1, 2, \ldots, K$, and $\tau$ denotes the time delay between adjacent taps.
At the n-th snapshot, the $JK \times 1$ signal vector $\mathbf{x}(n)$ can be expressed as
$$\mathbf{x}(n) = \left[ x_{1,1}(n), \ldots, x_{J,1}(n), x_{1,2}(n), \ldots, x_{J,2}(n), \ldots, x_{1,K}(n), \ldots, x_{J,K}(n) \right]^T.$$
The $JK \times 1$ adaptive weight vector $\mathbf{w}$ is
$$\mathbf{w} = \left[ w_{1,1}, \ldots, w_{J,1}, w_{1,2}, \ldots, w_{J,2}, \ldots, w_{1,K}, \ldots, w_{J,K} \right]^T.$$
Finally, the output of the beamformer can be represented as
$$y(n) = \mathbf{w}^H \mathbf{x}(n),$$
where $(\cdot)^H$ represents the conjugate transpose.
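The space-time snapshot stacking of Equations (2)-(5) can be sketched in NumPy as follows; the random element signals and weights are stand-ins for illustration, and the tap delay $\tau$ is taken as one sample:

```python
import numpy as np

rng = np.random.default_rng(0)
J, K, N = 16, 18, 400   # elements, taps, snapshots (Section 4 values)

# Hypothetical complex element signals x_j(n); real data would come from the array.
x_elem = rng.standard_normal((J, N)) + 1j * rng.standard_normal((J, N))

def st_snapshot(x_elem, n, K):
    # Tap k holds the element signals delayed by (k - 1) samples; the JK x 1
    # vector stacks all J elements of tap 1, then tap 2, and so on.
    taps = [x_elem[:, n - k] if n - k >= 0 else np.zeros(x_elem.shape[0], complex)
            for k in range(K)]
    return np.concatenate(taps)

w = rng.standard_normal(J * K) + 1j * rng.standard_normal(J * K)  # arbitrary weights
y_n = np.vdot(w, st_snapshot(x_elem, 50, K))   # y(n) = w^H x(n), a complex scalar
print(st_snapshot(x_elem, 50, K).shape)        # (288,)
```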

2.2. Wideband FB Based on Frequency Constraints

The pre-steering delay structures in Section 2.1 may produce time-delay errors and the FB needs K degrees of freedom (DOFs) to satisfy the constraints, weakening anti-interference ability [6]. To effectively address this problem, one viable solution is to introduce an additional set of frequency domain constraints, i.e., the FCFB.
Different from the beamformer presented in Figure 1, consider a signal whose frequencies $f_i$, $i = 1, 2, \ldots, I$, lie within the band $[f_{min}, f_{max}]$, where $f_{min}$ and $f_{max}$ denote the minimum and maximum frequencies, respectively. The wideband signal is evenly divided into I sub-bands, and the frequency-domain constraint matrix is expressed as
$$\mathbf{C}_F = \left[ \mathbf{c}_F(f_1), \mathbf{c}_F(f_2), \ldots, \mathbf{c}_F(f_i), \ldots, \mathbf{c}_F(f_I) \right] \in \mathbb{C}^{JK \times I},$$
where $\mathbf{c}_F(f_i)$ is a $JK \times 1$ column vector expressed as
$$\mathbf{c}_F(f_i) = \mathbf{c}_{F,T_s}(f_i) \otimes \mathbf{c}_{F,T}(f_i),$$
where ⊗ is the Kronecker product,
$$\mathbf{c}_{F,T_s}(f_i) = \left[ 1, e^{-j2\pi f_i T_s}, \ldots, e^{-j2\pi f_i (K-1)T_s} \right]^T,$$
and
$$\mathbf{c}_{F,T}(f_i) = \left[ e^{-j2\pi f_i T_1(\theta_0)}, e^{-j2\pi f_i T_2(\theta_0)}, \ldots, e^{-j2\pi f_i T_J(\theta_0)} \right]^T.$$
The response vector corresponding to $\mathbf{C}_F$ is $\mathbf{f}_F$, which can be expressed as
$$\mathbf{f}_F = \left[ e^{-j2\pi f_1 (K-1)T_s/2}, e^{-j2\pi f_2 (K-1)T_s/2}, \ldots, e^{-j2\pi f_I (K-1)T_s/2} \right]^T.$$
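A possible NumPy construction of the constraint matrix and response vector in Equations (7)-(11) is sketched below; the tap delay $T_s$ and the sub-band frequency grid are assumptions made for illustration:

```python
import numpy as np

J, K, I = 16, 18, 10              # elements, taps, sub-bands (Section 4 values)
c = 3e8
d = c / (2 * 1.1e9)               # half-wavelength spacing at f_max (assumed)
Ts = 1 / (2 * 1.1e9)              # tap delay, assumed one Nyquist-rate sample
theta0 = np.deg2rad(5.0)          # desired direction (Section 4)
f = np.linspace(0.8e9, 1.1e9, I)  # assumed sub-band frequencies

def c_F(fi):
    # Temporal part over the K taps, Eq. (9)
    c_Ts = np.exp(-2j * np.pi * fi * np.arange(K) * Ts)
    # Spatial part over the pre-steering delays T_j(theta0) = (j-1) d sin(theta0) / c, Eq. (10)
    c_T = np.exp(-2j * np.pi * fi * np.arange(J) * d * np.sin(theta0) / c)
    return np.kron(c_Ts, c_T)     # JK x 1 constraint vector, Eq. (8)

C_F = np.column_stack([c_F(fi) for fi in f])        # JK x I matrix, Eq. (7)
f_F = np.exp(-2j * np.pi * f * (K - 1) * Ts / 2)    # response vector, Eq. (11)
print(C_F.shape)    # (288, 10)
```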
According to Equation (3), the correlation matrix of the signal can be denoted as
$$\mathbf{R}_{xx} = E\{\mathbf{x}(n)\mathbf{x}^H(n)\},$$
where $E\{\cdot\}$ denotes the expectation operation.
Consider a radar system with an intermittent period. During the radar intermittent period, the signal received by the array contains only noise and interference, so $\mathbf{R}_{xx}$ is the covariance matrix of noise plus interference. Under the LCMV criterion, the AWDBF problem can be formulated as the following optimization problem
$$\min_{\mathbf{w}} \; \mathbf{w}^H \mathbf{R}_{i+n} \mathbf{w} \quad \text{s.t.} \quad \mathbf{C}_F^H \mathbf{w} = \mathbf{f}_F,$$
where R i + n represents the covariance matrix of interference signals plus noise. According to the Lagrange criterion, the optimization problem in Equation (12) can be transformed into an elegant expression [16]
$$\mathbf{w}_{opt} = \mathbf{R}_{i+n}^{-1} \mathbf{C}_F \left( \mathbf{C}_F^H \mathbf{R}_{i+n}^{-1} \mathbf{C}_F \right)^{-1} \mathbf{f}_F.$$
Then, the output of the FCFB can be obtained from Equation (5).
Output SINR is an important indicator for evaluating beamforming performance, which reflects the effect of the beamformer on enhancing desired signal and suppressing interference signals and noise. It can be calculated by
$$\mathrm{SINR} = \frac{\mathbf{w}_{opt}^H \mathbf{R}_s \mathbf{w}_{opt}}{\mathbf{w}_{opt}^H \mathbf{R}_{i+n} \mathbf{w}_{opt}},$$
where $\mathbf{R}_s$ is the covariance matrix of the desired signal.
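The closed-form weights of Equation (13) and the SINR of Equation (14) can be evaluated numerically; the covariance matrices and constraints below are random stand-ins, since the actual ones depend on the signal scenario:

```python
import numpy as np

rng = np.random.default_rng(1)
JK, I = 288, 10    # JK = 16 * 18 space-time weights, I frequency constraints

# Random stand-ins for C_F, f_F and the covariance matrices in Eqs. (13)-(14).
C_F = rng.standard_normal((JK, I)) + 1j * rng.standard_normal((JK, I))
f_F = rng.standard_normal(I) + 1j * rng.standard_normal(I)
A = rng.standard_normal((JK, JK)) + 1j * rng.standard_normal((JK, JK))
R_in = A @ A.conj().T + 1e-3 * np.eye(JK)    # Hermitian positive-definite R_{i+n}
B = rng.standard_normal((JK, JK)) + 1j * rng.standard_normal((JK, JK))
R_s = B @ B.conj().T                         # desired-signal covariance stand-in

# Eq. (13): w_opt = R^{-1} C_F (C_F^H R^{-1} C_F)^{-1} f_F, via linear solves
Rinv_C = np.linalg.solve(R_in, C_F)
w_opt = Rinv_C @ np.linalg.solve(C_F.conj().T @ Rinv_C, f_F)

# The constraints C_F^H w = f_F must hold exactly for the optimal weights.
assert np.allclose(C_F.conj().T @ w_opt, f_F, atol=1e-6)

# Eq. (14): output SINR as a ratio of quadratic forms
sinr = np.real(w_opt.conj() @ R_s @ w_opt) / np.real(w_opt.conj() @ R_in @ w_opt)
print(sinr > 0)    # True
```

Using `np.linalg.solve` instead of explicit matrix inverses is numerically preferable for the two inversions in Equation (13).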

3. The Improved Adaptive Wideband Beamforming Method Based on Machine Learning

The beamforming weights of traditional algorithms require the covariance matrix of interference plus noise. In actual situations, the covariance matrix can only be estimated based on a finite number of signal snapshots, and the accuracy of the estimated covariance matrix is closely related to the number of snapshots. When the number of snapshots is insufficient, the performance of traditional wideband beamforming will deteriorate severely. To solve this problem, the neural network-based wideband beamformer has emerged. However, the wideband beamforming algorithm cannot be regarded as a simple superposition of the narrowband beamforming algorithm. It usually introduces a new dimension for frequency correlation of the wideband signal, which is a complex process. The wideband beamforming algorithm based on neural networks is in the early stages of research, and has problems such as unstable network training and long network training time. In this section, an improved neural network-based wideband beamforming algorithm is introduced to solve the problem of long training time.

3.1. The Structure of the Improved Neural Network

CNNs possess unique attributes that enhance their efficiency and effectiveness in various tasks. One of the key strengths is local awareness, which allows them to concentrate on specific local regions within the input, thus reducing computational complexity. Furthermore, CNNs utilize parameter sharing, which allows the same set of weights to process different regions of the input. This feature reduces the number of parameters in the network and mitigates overfitting.
Neural networks constructed with multiple convolutional layers exhibit a hierarchical learning capability. Each layer is capable of learning features at different levels of abstraction. This hierarchical learning empowers the network to automatically extract features ranging from low-level details to high-level patterns, thereby enhancing its overall performance.
In this section, an improved neural network-based wideband beamforming method is proposed to solve the problems of long training time in existing neural network-based methods caused by too many network parameters. The improved neural network structure is depicted in Figure 2.
The input feature map has a size of 16 by 400 with 2 channels. Here, the values 16 and 400 correspond to the width and height of the feature map, while 2 signifies the number of channels. In array signal processing, the received data and the beamforming weights are usually complex-valued. Therefore, dual channels are used for data transmission, where one channel carries the real part and the other carries the imaginary part of the complex-valued data.
Then, the data enter the convolutional layer, which processes the information of each small area, reducing the number of parameters while maintaining the continuity of the information. The process can be expressed as
$$\mathrm{Output} = \sum_i w_i x_i + b,$$
where $x_i$ and $w_i$ denote the input data and the weights of the convolutional kernel, respectively, and $b$ is the bias term.
Figure 3 illustrates a simple schematic diagram of the convolutional process. All of the information in the diagram is hypothetical and intended to illustrate the convolution process, and the bias is set as zero.
As can be seen from the diagram, when the input is convolved according to Equation (15), the convolution kernel remains the same across positions. This feature reduces the number of parameters that need to be updated during the network training process. In this paper, the size of the convolution kernel is set as $3 \times 3$, and the padding and stride are both set as 1.
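A sketch of the stated kernel settings in PyTorch, showing that a 3 × 3 kernel with stride 1 and padding 1 preserves the spatial size (the output channel count of 16 is an assumed value for illustration):

```python
import torch
import torch.nn as nn

# 3x3 kernel, stride 1, padding 1 -> spatial dimensions are preserved.
conv = nn.Conv2d(in_channels=2, out_channels=16, kernel_size=3, stride=1, padding=1)
x = torch.randn(1, 2, 16, 400)   # one 2-channel 16 x 400 input feature map
print(conv(x).shape)             # torch.Size([1, 16, 16, 400])
```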
Then, the batch normalization [30] layers are utilized, which can accelerate the convergence speed during model training, enhance the stability of the training process, prevent issues such as gradient explosions or vanishing, and introduce a beneficial regularization effect. Subsequently, max pooling operations are used in the first three layers of the network to reduce feature dimensions, model complexity, and mitigate the risk of overfitting. The schematic diagram of the max pooling operation is shown in Figure 4.
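The dimension reduction from max pooling can be illustrated as follows; the 2 × 2 window with stride 2, which halves both spatial dimensions, is an assumed configuration:

```python
import torch
import torch.nn as nn

# A 2x2 max pooling with stride 2 keeps only the largest value in each window,
# halving both spatial dimensions of the feature map.
pool = nn.MaxPool2d(kernel_size=2, stride=2)
x = torch.randn(1, 16, 16, 400)
print(pool(x).shape)             # torch.Size([1, 16, 8, 200])
```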
The Leaky Rectified Linear Unit (LeakyReLU) [31] is utilized as the activation function, and can be expressed as
$$\mathrm{LeakyReLU}(x) = \begin{cases} x, & x \geq 0, \\ \alpha x, & x < 0, \end{cases}$$
where $\alpha$ is a coefficient, typically assigned a value of 0.01.
Finally, the data are flattened and then passed through the fully connected (FC) layer to produce the network output.
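Putting the pieces together, a minimal PyTorch sketch of a network of the kind described (convolution, batch normalization, max pooling in the first three stages, LeakyReLU, then flatten and an FC layer) might look as follows. The channel widths and the number of stages are assumptions, since they are not listed here; the output size $2JK = 576$ carries the real and imaginary parts of the $JK = 288$ beamforming weights:

```python
import torch
import torch.nn as nn

class WidebandBeamformerNet(nn.Module):
    """Hypothetical Conv-BN-(MaxPool)-LeakyReLU stack with an FC output head."""
    def __init__(self, J=16, K=18):
        super().__init__()
        def stage(c_in, c_out, pool):
            layers = [nn.Conv2d(c_in, c_out, 3, stride=1, padding=1),
                      nn.BatchNorm2d(c_out)]
            if pool:                       # pooling only in the first three stages
                layers.append(nn.MaxPool2d(2))
            layers.append(nn.LeakyReLU(0.01))
            return layers
        self.features = nn.Sequential(
            *stage(2, 16, True), *stage(16, 32, True),
            *stage(32, 64, True), *stage(64, 64, False))
        # 16 x 400 input -> 2 x 50 after three 2x2 poolings
        self.fc = nn.Linear(64 * 2 * 50, 2 * J * K)
    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

net = WidebandBeamformerNet()
out = net(torch.randn(4, 2, 16, 400))     # a batch of four 2-channel inputs
print(out.shape)                          # torch.Size([4, 576])
```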

3.2. Training Process of the Improved Network

First, in the case of low snapshots, 500 sets of training data and 10 sets of test data are generated for neural network training. The dataset consists directly of the array received signals. Then, under high-snapshot scenarios, according to Equation (13), the analytical solution of the optimal weights for the traditional wideband beamforming method, the FCFB, is utilized to generate the weights $\mathbf{w}_{opt}$ required for wideband beamforming. These weights also serve as the target labels that the neural network aims to fit during the training process.
Moreover, the Adam algorithm [32] is utilized to adaptively adjust the learning rate. This aids in achieving faster convergence during the early stages of training and ensures that the learning rate decreases as the optimal solution is approached, thus preventing the possibility of skipping over the optimal solution. The initial learning rate used in training is 0.0006, the adaptive attenuation factor is 0.7, and the attenuation step is 20. The batch size is set as 16. The signal with low snapshots and the beamforming weights obtained with high snapshots are used as the input and label of the network, respectively, to achieve the mapping between the low-snapshot signal and the wideband beamforming weights. The loss function used during network training is expressed as follows:
$$Loss = \frac{1}{M} \sum_{m=1}^{M} \left\| y_{train}^{m} - y^{m} \right\|^2,$$
where $M$ is the number of outputs, and $y_{train}$ and $y$ denote the label and the output of the network, respectively. The goal of training is to minimize the loss function $L(y_{train}, y)$.
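The loss of Equation (17) is a mean squared error between the network output and the FCFB weight labels; PyTorch's `MSELoss` computes the same quantity up to the normalization constant (it averages over all elements rather than only over the M outputs):

```python
import torch
import torch.nn as nn

y_train = torch.randn(16, 576)   # a batch of flattened weight labels (stand-ins)
y = torch.randn(16, 576)         # network outputs (stand-ins)

mse = nn.MSELoss()(y, y_train)           # elementwise average of squared error
manual = ((y_train - y) ** 2).mean()     # the same average, written out
print(torch.allclose(mse, manual))       # True
```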
After successful training, the network can map input signals to corresponding wideband beamforming weights at low snapshots. This means that we only need to feed low-snapshot signals into the network to quickly obtain the corresponding weights, significantly reducing the waiting time during system sampling.
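With the hyperparameters stated in the training setup above (initial learning rate 0.0006, decay factor 0.7, decay step 20), the optimizer and schedule could be set up as follows; the step decay is mapped onto `torch.optim.lr_scheduler.StepLR`, and the linear model is only a placeholder:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 576)    # placeholder model for illustration
optimizer = torch.optim.Adam(model.parameters(), lr=0.0006)
# Decay the learning rate by a factor of 0.7 every 20 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.7)

for epoch in range(40):       # 40 epochs -> two decay steps
    optimizer.step()          # a real loop would compute a loss and backpropagate first
    scheduler.step()
print(round(optimizer.param_groups[0]['lr'], 6))   # 0.000294 = 0.0006 * 0.7**2
```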

4. Simulation Results

In this section, we compare the IWBPNet with the WBPNet and the traditional FCFB in terms of beam pattern performance and output SINRs. Moreover, the proposed method is compared with the method in [24], the broadband beamforming weight generation network (BWGN). All simulations in this article are conducted under the PyTorch framework, version 2.0.0, with Python version 3.8. A total of 40 Monte Carlo simulations are carried out for all results.
The simulation is carried out under the following conditions. The number of ULA antennas J and the number of delay structures after each antenna element K are 16 and 18, respectively. The number of sub-bands, I, is set as 10. Assume that a wideband desired signal comes from $\theta_0 = 5^\circ$, and the interference-plus-noise signals come from directions $\theta_j \in [-60^\circ, -30^\circ]$.
For simplicity and clarity of description, we define some abbreviations. The traditional beamforming method, the FCFB, under sufficient signal snapshots is defined as FCFB-SS, while that under insufficient signal snapshots is defined as FCFB-IS. The WBPNet under insufficient signal snapshots is defined as WBPNet-IS. Similarly, the improved CNN structure for AWDBF under insufficient snapshot scenarios is defined as IWBPNet-IS, and the BWGN with insufficient snapshots is defined as BWGN-IS. The numbers of sufficient and insufficient snapshots are set as 4000 and 400, respectively. In addition, $f_{min}$ and $f_{max}$ are set as 0.8 GHz and 1.1 GHz, respectively. Gaussian white noise with a mean of 0 and a variance of 1 is added to simulate the environment. The interference-to-noise ratio (INR) and signal-to-noise ratio (SNR) are set as 40 dB and 0 dB, respectively.
Without loss of generality, the following three interference angles are randomly selected to evaluate the beam pattern performance: $\theta_{j1}$, $\theta_{j2}$, and $\theta_{j3}$, at $-60^\circ$, $-40^\circ$, and $-30^\circ$, respectively. Figure 5 shows the beam patterns of FCFB-SS and FCFB-IS for the desired and interference signals from the following direction pairs: $(\theta_0, \theta_{j1})$, $(\theta_0, \theta_{j2})$, and $(\theta_0, \theta_{j3})$, where $(\theta_0, \theta_{j1})$ denotes that the desired signal comes from $\theta_0$ and the interference signal comes from $\theta_{j1}$. The beam patterns of WBPNet-IS, BWGN-IS, and IWBPNet-IS are illustrated in Figure 6.
The beam patterns presented in Figure 5 show that the traditional beamformer, FCFB, has good performance under high snapshots, but when the snapshots are insufficient, the performance deteriorates seriously. In contrast, from Figure 6, the WBPNet, BWGN and IWBPNet all have a high gain in the SOI direction and a deep null in the interference direction, although the number of signal snapshots is insufficient. This is due to the powerful nonlinear fitting ability of neural networks, which can fully learn the intrinsic relationship between the input data and output weights used for wideband beamforming with limited snapshots. After the network is successfully trained, we only need to provide the network with insufficient snapshot data, and the network can quickly generate good weights required for wideband beamforming, which significantly reduces the sampling time.
However, due to the large fully connected layer in the WBPNet and the lack of necessary network optimization (the network is just a stack of three convolutional layers), the WBPNet takes a long time to train and is unstable. Due to the two-dimensional complex network structure of the BWGN, its training process is relatively complicated, which affects the training speed. The WBPNet, BWGN, and IWBPNet are all trained under the same parameters for 300 epochs. The training process of the WBPNet takes 3.4821 h and that of the BWGN takes 1.0262 h, while the IWBPNet spends just 0.3683 h on training, saving 89.4% of the time compared with the WBPNet and 64.1% compared with the BWGN, which is a huge improvement. The results show that, while maintaining good wideband beamforming performance, the training time of the IWBPNet is significantly reduced compared with the other two networks. Considering that neural network-based wideband beamformers require online training in real-time systems, reducing the network training time while maintaining good performance is of great significance; it is also the core of this research.
Furthermore, the output SINRs of the FCFB with 400 and 4000 signal snapshots, and the output SINRs of the WBPNet, BWGN, and IWBPNet with 400 signal snapshots at different interference angles, are shown in Table 1. The results show that the FCFB has good SINR performance when the signal snapshots are sufficient, because the covariance matrix of the array received signals is then estimated approximately accurately. However, when the number of signal snapshots is insufficient, the estimated covariance matrix is inaccurate, resulting in severe degradation of SINR performance. In contrast, the SINRs of WBPNet-IS, BWGN-IS, and IWBPNet-IS demonstrate that the neural network-based wideband beamformers retain good SINR performance even though the number of signal snapshots is insufficient. This is because the neural network has a strong fitting ability and can extract the characteristics of the array signals from a limited number of snapshots. Compared with the WBPNet, the IWBPNet demonstrates better SINR performance with insufficient snapshots. This is attributed to the deeper network structure of the IWBPNet, which increases the network's ability to extract signal features. In addition, the PLs reduce the risk of overfitting and greatly improve the generalization ability of the network. Compared with the BWGN, the SINR performance of the IWBPNet is also slightly improved due to its stronger generalization ability, and the training time of the IWBPNet is only 35.9% of that of the BWGN, demonstrating the great time advantage of the proposed method. The simulation results of beam patterns, training time, and SINRs for the FCFB, WBPNet, BWGN, and IWBPNet manifest the effectiveness and superiority of the proposed IWBPNet compared with the traditional wideband beamformer and the existing neural network-based wideband beamformers.

5. Conclusions

In this paper, for neural network-based wideband beamformers, we start from the perspective that real-time training is required in actual systems due to environmental changes, and we are committed to reducing the network training time while maintaining wideband beamforming performance. An improved neural network-based wideband beamformer, the IWBPNet, is proposed to address the long training time of existing neural network-based wideband beamforming algorithms. The improved structure utilizes a neural network to express the nonlinear mapping between the input signal and the weights used for AWDBF, and it increases the number of network layers compared with the WBPNet, thereby improving the network's ability to extract signal features. In addition, pooling layers are added to reduce the dimension of the data transmitted through the network, which increases the generalization ability of the network and solves the problem of the overly large fully connected layer in the WBPNet. The proposed algorithm has great advantages in training time. Simulation results demonstrate that the training time of the proposed algorithm is 0.3683 h, saving 89.4% of the time compared with the WBPNet and 64.1% compared with the BWGN. Moreover, due to the better fitting and generalization capabilities of the improved network, the proposed algorithm has better SINR performance.

Author Contributions

Writing—original draft preparation, M.G.; investigation, M.G. and Z.S.; writing—review and editing, Y.Z. and S.L.; project administration, M.G. and S.L.; supervision, M.G.; funding acquisition, M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by Basic Science Research Project of Jiangsu Province for Colleges and Universities under Grant 22KJD510008.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data underlying the results are available as part of the article and no additional source data are required.

Acknowledgments

The authors thank the reviewers for their great help on the article during its review process.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, W.; Weiss, S. Wideband Beamforming: Concepts and Techniques; John Wiley & Sons: Hoboken, NJ, USA, 2010. [Google Scholar]
  2. Hussain, K.; Oh, I.Y. Joint Radar, Communication, and Integration of Beamforming Technology. Electronics 2024, 13, 1531. [Google Scholar] [CrossRef]
  3. Wu, J.; Shi, C.; Zhang, W.; Zhou, J. Joint Beamforming Design and Power Control Game for a MIMO Radar System in the Presence of Multiple Jammers. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 759–773. [Google Scholar] [CrossRef]
Figure 1. The wideband FB with pre-steering delays.
Figure 2. The structure diagram of improved wideband beamforming prediction network.
Figure 3. A simple schematic diagram of the convolutional process.
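As the schematic suggests, each output element of a convolutional layer is the sum of an elementwise product between a kernel and the input patch it overlaps. A minimal NumPy sketch of this "valid" 2-D convolution (the array values and kernel here are illustrative, not taken from the paper):

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 2-D 'valid' convolution (cross-correlation, as CNN layers compute it)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output element: elementwise product of kernel and patch, summed.
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 input feature map
k = np.ones((2, 2))                           # toy 2x2 kernel
print(conv2d_valid(x, k))                     # 3x3 output, first element 0+1+4+5 = 10
```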
Figure 4. Schematic diagram of max pooling.
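Max pooling, shown schematically above, keeps only the largest value in each pooling window, shrinking the feature maps passed to the fully connected layer and helping generalization. A minimal sketch of 2x2 max pooling with stride 2 (the window size is illustrative; the paper's exact pooling parameters are not restated here):

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 over a 2-D feature map (illustrative)."""
    h, w = x.shape
    x = x[: h - h % 2, : w - w % 2]  # trim odd edges so blocks tile exactly
    # Reshape into (row_block, 2, col_block, 2) and take the max of each 2x2 block.
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

feature_map = np.array([[1, 3, 2, 4],
                        [5, 6, 7, 8],
                        [3, 2, 1, 0],
                        [1, 2, 3, 4]])
print(max_pool_2x2(feature_map))  # each 2x2 block collapses to its maximum
```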
Figure 5. Beam patterns of traditional FCFB with 4000 snapshots and 400 snapshots for interferences from θj1, θj2 and θj3. (a) FCFB-SS (θ0, θj1). (b) FCFB-SS (θ0, θj2). (c) FCFB-SS (θ0, θj3). (d) FCFB-IS (θ0, θj1). (e) FCFB-IS (θ0, θj2). (f) FCFB-IS (θ0, θj3).
Figure 6. Beam patterns of WBPNet, BWGN and IWBPNet with 400 snapshots for interferences from θj1, θj2 and θj3. (a) WBPNet-IS (θ0, θj1). (b) WBPNet-IS (θ0, θj2). (c) WBPNet-IS (θ0, θj3). (d) BWGN-IS (θ0, θj1). (e) BWGN-IS (θ0, θj2). (f) BWGN-IS (θ0, θj3). (g) IWBPNet-IS (θ0, θj1). (h) IWBPNet-IS (θ0, θj2). (i) IWBPNet-IS (θ0, θj3).
Table 1. The output SINRs of FCFB with 400 and 4000 signal snapshots, and output SINRs of WBPNet, BWGN and IWBPNet with 400 signal snapshots at different interference angles.
| Direction | FCFB-SS (N = 4000) | FCFB-IS (N = 400) | WBPNet-IS (N = 400) | BWGN-IS (N = 400) | IWBPNet-IS (Proposed, N = 400) |
|---|---|---|---|---|---|
| (θ0, θj1) | 4.4021 | −14.4238 | 4.2688 | 4.2921 | 4.2998 |
| (θ0, θj2) | 4.3948 | −13.8246 | 4.2156 | 4.3466 | 4.3720 |
| (θ0, θj3) | 4.3663 | −14.2101 | 4.2697 | 4.2058 | 4.3178 |
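For reference, the output SINR reported in Table 1 is the ratio of desired-signal power to interference-plus-noise power at the beamformer output, SINR = (w^H R_s w) / (w^H R_in w). A minimal narrowband illustration with hypothetical steering vectors and powers (the paper's wideband space-time case stacks tapped-delay-line samples, which this sketch omits):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8  # illustrative number of array weights

# Hypothetical steering vectors for the desired signal and one interferer.
a_s = rng.standard_normal(M) + 1j * rng.standard_normal(M)
a_j = rng.standard_normal(M) + 1j * rng.standard_normal(M)
p_s, p_j, p_n = 1.0, 100.0, 1.0  # assumed signal, interference, noise powers

# Signal and interference-plus-noise covariance matrices.
R_s = p_s * np.outer(a_s, a_s.conj())
R_in = p_j * np.outer(a_j, a_j.conj()) + p_n * np.eye(M)

# MVDR-style weights (one common choice; not the paper's network output).
w = np.linalg.solve(R_in, a_s)

# Output SINR: desired-signal power over interference-plus-noise power.
sinr = np.real(w.conj() @ R_s @ w) / np.real(w.conj() @ R_in @ w)
print(10 * np.log10(sinr), "dB")
```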
Guo, M.; Shen, Z.; Zhou, Y.; Li, S. Improved Convolutional Neural Network for Wideband Space-Time Beamforming. Electronics 2024, 13, 2492. https://doi.org/10.3390/electronics13132492