Article

ANN-Assisted Beampattern Optimization of Semi-Coprime Array for Beam-Steering Applications

Waseem Khan, Saleem Shahid, Ali Naeem Chaudhry and Ahsan Sarwar Rana
1 Department of Electrical and Computer Engineering, Air University, Islamabad 44000, Pakistan
2 Institute of Microwave and Photonic Engineering, Graz University of Technology, 8010 Graz, Austria
* Author to whom correspondence should be addressed.
Sensors 2024, 24(22), 7260; https://doi.org/10.3390/s24227260
Submission received: 16 September 2024 / Revised: 6 November 2024 / Accepted: 11 November 2024 / Published: 13 November 2024
(This article belongs to the Section Communications)

Abstract

In this paper, an artificial neural network (ANN) is proposed to estimate the required values of the adjustable parameters of the recently proposed semi-coprime array with staggered steering (SCASS). By adjusting the amount of staggering and the sidelobe attenuation (SLA) factor of the Chebyshev weights, SCASS can achieve a small half-power beamwidth (HPBW) and a high peak-to-side-lobe ratio (PSLR), even when the beam is steered away from the broadside direction. However, HPBW and PSLR cannot be improved simultaneously; there is always a trade-off between the two performance metrics. Therefore, in this paper, a mechanism is introduced to minimize HPBW for a desired PSLR. The proposed ANN takes the array architectural parameters, the required steering angle, and the desired performance metric, i.e., PSLR, as inputs and suggests the values of the adjustable parameters that promise the minimum HPBW for the desired PSLR and steering angle. To train the ANN, we developed a dataset in Matlab by calculating HPBW and PSLR from the beampatterns generated for a large number of combinations of all the variable parameters. It is shown in this work that the trained ANN can suggest the optimum values of the adjustable parameters that promise the minimum HPBW for the given steering angle, PSLR, and array architectural parameters. The trained ANN can suggest the required adjustable parameters for the desired performance with a mean absolute error within just 0.83%.

1. Introduction

Reducing the half-power beamwidth (HPBW) and increasing the peak-to-side-lobe ratio (PSLR) of an antenna or antenna system has been a popular area of research for more than two decades. Researchers have tried to optimize the two performance metrics either by adjusting the array shape and geometry, including the inter-element spacing [1,2,3,4,5] and the relative pattern of the excitation currents of the elements [6,7], or by optimizing the antenna elements themselves [8,9,10]. A major part of the past research has focused on single-aperture arrays, in which a set of antenna elements is considered a single array. However, in the last decade, the Coprime Sensing Array (CSA) [11] and the Semi-Coprime Array (SCA) [12] were proposed, in which an array of sensors or antenna elements is split into subarrays such that the subarrays have only a few elements in common. The beampatterns of the subarrays are combined to form a single beampattern using different techniques, such as the minimum [13,14] or product [15] of the beampatterns or a combination of minimum and product [16].
Investigation into CSA and SCA was triggered by the quest for a beampattern with a smaller HPBW and a higher PSLR using fewer sensors than conventional linear arrays. In the initial works, all the subarrays of CSA and SCA were beam-steered toward the same direction, i.e., the direction of interest. In contrast, staggered steering of the subarrays has recently been proposed [17], in which each subarray is beam-steered in a different direction. If the desired steering angle is θ0, the subarrays are steered towards θ0 + Δ and θ0 − Δ, where Δ is the adjustable parameter that controls the staggering. Along with staggered beam-steering, Chebyshev weights were employed in [17] to suppress the sidelobes. It was shown there that the proposed strategy can outperform the existing split-aperture and unified-aperture arrays in terms of HPBW, PSLR, and directivity if the staggering parameter Δ and the sidelobe attenuation (SLA) of the Chebyshev weights are controlled appropriately. However, no mechanism was devised in [17] to control the two parameters, Δ and SLA. It was shown, though, that an increase in Δ improves HPBW but may simultaneously worsen PSLR. Similarly, an increase in SLA can improve PSLR but degrade HPBW. The results in [17] also reveal that the performance metrics HPBW and PSLR have no linear relationship with the adjustable parameters Δ and SLA. Therefore, there is no analytical method to find the values of these parameters that ensure a given set of performance metrics. Such problems can, however, be solved using machine-learning techniques such as artificial neural networks (ANNs), which are very good at capturing non-linear relationships between inputs and outputs.
In the past, ANNs have been employed in the domain of antenna arrays for various objectives, such as optimizing antenna arrays [18,19,20] and their beampatterns [21,22,23], robust beamforming [24], mutual coupling reduction in antenna arrays [25], identification of defective array elements [26,27], estimation of the direction of arrival [28], etc. In this paper, we employ an ANN to find a suitable combination of Δ and SLA that ensures the optimum HPBW for the desired PSLR at a given steering angle. To this end, the ANN has been trained on a self-generated dataset. ANNs with different numbers of layers and neurons per layer have been trained and evaluated for estimation accuracy. Based on this analysis, two networks with different numbers of hidden layers and neurons have been selected: one for the best performance and the other for the best trade-off between performance and computational cost. The trained networks have also been tested on input feature values not present in the training dataset to assess the capability of the network to interpolate and extrapolate.
In this work, the effect of mutual coupling among the antenna elements is not considered. The coupling effect of nearby elements is very critical, and the spacing between the elements is usually kept at λ/2 to reduce it [29]. Increasing the spacing beyond λ/2 has very little additional effect on the coupling, but it increases the size of the array. Since the minimum inter-element spacing in the proposed work is λ/2, we have assumed that mutual coupling is negligibly small. This work is proposed for beam-steering applications, and the design works best for a single frequency. Therefore, it cannot be used for frequencies far apart. However, to work at different nearby frequencies, stretchable, temperature-dependent, or voltage-dependent dielectric materials can be used, whose relative permittivity changes with variation in material length, temperature, or voltage, respectively [30]. This may allow the necessary changes in the inter-element spacing of the array to work effectively at multiple frequencies.
The rest of the paper is organized as follows. Section 2 gives an overview of SCASS and illustrates the objective problem of the adjustment of Δ and SLA that has been addressed in this paper. Section 3 gives an overview of ANNs, elaborates on the dataset generation, and describes the architecture and training of the proposed ANN. Section 4 presents and discusses the results, and finally, Section 5 concludes the paper.

2. An Overview of SCASS and the Associated Challenges

The concept of SCASS is based on the fundamental architecture of SCA proposed in [12] and reproduced in Figure 1 for the reader’s convenience. SCA comprises three subarrays (SAs), known as SA1, SA2, and SA3, having PM, PN, and Q elements with inter-element spacings of QNλ/2, QMλ/2, and λ/2, respectively. Here, P, M, N, and Q are integers, and M and N are coprime. Physically, it is a single linear array of PM + PN + Q − P − 1 non-uniformly spaced elements, as depicted at the bottom of Figure 1. However, its elements are grouped into three subarrays, solely for beamforming, so that three different beampatterns can be generated. It is noteworthy that only one element is common to all three subarrays, whereas SAs 1 and 2 have P elements in common. Figure 1 shows a sample arrangement of SCA with a specific combination of (M, N, P, Q); other combinations have also been considered in this paper, as mentioned in Section 3.2. In SCA, the three subarrays are beam-steered towards θ0, which is the desired steering angle of the whole array. In contrast, in SCASS, a modified version of SCA, the three subarrays are beam-steered towards θ0 + Δ, θ0 − Δ, and θ0, respectively, where θ0 is the desired steering angle of the whole array and Δ is an adjustable parameter that controls the deviation of the steering angles of SAs 1 and 2. The overall beampattern of SCASS is given by
B(\theta) = \min\left\{ B_1(\theta),\; B_2(\theta),\; B_3(\theta) \right\}
where B_i(θ) is the beampattern of the ith subarray. SCASS has been investigated with uniform (SCASS-U) as well as Chebyshev (SCASS-C) weights. In SCASS-C, Chebyshev weights are applied to SAs 1 and 2, while uniform weights are applied to SA3 [17]. In this paper, only SCASS-C is considered; therefore, from this point onward, the term SCASS refers to SCASS-C. As shown in [17], SCASS is optimized in terms of HPBW and PSLR when the values of the SLA factor of the Chebyshev weights and Δ are selected appropriately. It has also been shown that SLA and Δ can be adjusted to achieve an HPBW smaller than that of other linear arrays along with a reasonably high PSLR. However, the two performance metrics cannot be improved simultaneously by adjusting SLA and Δ: improving HPBW worsens PSLR and vice versa. To give a clear picture of the effect of the variables Δ and SLA on the performance metrics, we have simulated SCASS to augment the results presented in [17]. We have simulated SCASS, in Matlab, with (M, N, P, Q) = (3, 2, 3, 3) for all combinations of θ0 ∈ {0°, 30°, 60°}, Δ ∈ {0.5°, 1°}, and SLA ∈ {20 dB, 23 dB}. The simulation results are plotted in Figure 2.
This figure clearly shows the non-linear relationship of HPBW and PSLR with Δ and SLA. It is also obvious from the figure that an increase in PSLR obtained by adjusting SLA is accompanied by an increase in HPBW. Therefore, it is not possible to attain the smallest possible HPBW along with the highest possible PSLR. In this work, we have therefore focused on minimizing the HPBW for the desired PSLR for a given set of (M, N, P, Q, θ0). For this purpose, we have implemented and trained an ANN that takes (M, N, P, Q, θ0, PSLR) as inputs and suggests, as output, a combination of SLA and Δ that promises the minimum HPBW for the given set of input values. Details of the ANN are discussed in the following section.
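To make the min-processor beampattern defined above concrete, the following minimal Python/NumPy sketch reproduces the computation described in this section. The authors generated their results in Matlab; the function and variable names here (e.g., scass_pattern) are illustrative assumptions, not code from the paper. SA1 and SA2 receive Chebyshev weights with the chosen SLA and are steered to θ0 + Δ and θ0 − Δ, SA3 receives uniform weights and is steered to θ0, and the three normalized patterns are combined by an element-wise minimum.

```python
# Illustrative sketch (assumed implementation) of the SCASS-C beampattern:
# the min of three sub-array beampatterns with Chebyshev weights on SA1/SA2,
# uniform weights on SA3, and staggered steering.
import numpy as np
from scipy.signal.windows import chebwin  # note: scipy warns for attenuation < 45 dB but still returns the window

def subarray_pattern(positions_wl, weights, theta, theta_steer):
    """|weighted array factor| of a linear array; element positions in wavelengths."""
    phase = 2 * np.pi * np.outer(np.sin(np.radians(theta))
                                 - np.sin(np.radians(theta_steer)), positions_wl)
    af = np.abs(np.exp(1j * phase) @ weights)
    return af / af.max()                      # normalize each sub-array to its own peak

def scass_pattern(M, N, P, Q, theta0, delta, sla_db, theta=np.linspace(-90, 90, 18001)):
    pos1 = np.arange(P * M) * Q * N / 2       # SA1: PM elements, spacing QN*lambda/2
    pos2 = np.arange(P * N) * Q * M / 2       # SA2: PN elements, spacing QM*lambda/2
    pos3 = np.arange(Q) / 2                   # SA3: Q elements,  spacing lambda/2
    b1 = subarray_pattern(pos1, chebwin(P * M, at=sla_db), theta, theta0 + delta)
    b2 = subarray_pattern(pos2, chebwin(P * N, at=sla_db), theta, theta0 - delta)
    b3 = subarray_pattern(pos3, np.ones(Q), theta, theta0)
    return theta, np.minimum(np.minimum(b1, b2), b3)   # min processor

# Example call with parameter values taken from Table 2 (illustrative only).
theta, B = scass_pattern(3, 2, 3, 3, theta0=30, delta=0.65, sla_db=22.25)
```

From the resulting pattern B, HPBW and PSLR can be read off to populate the dataset described in Section 3.2.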

3. Artificial Neural Network: Introduction and Application to the Problem

In this section, we first introduce the basic architecture of an ANN, then describe the self-generated dataset which has been used to train the proposed network. At the end of this section, we illustrate the architectural configuration and training of the network.

3.1. Basic Architecture of an ANN

An artificial neural network (ANN) is a machine-learning model inspired by the human brain that can be trained to perform tasks such as classification and regression [31]. Similar to a human brain, an ANN consists of a network of neurons, where each neuron receives multiple inputs and generates an output that may act as one of the inputs to other neurons. An artificial neuron is a mathematical function that calculates the weighted sum of its inputs and adds an offset, also known as a bias. It then passes the calculated sum through a function, usually non-linear, known as an activation function. An ANN comprises an input layer of neurons, an output layer, and one or more hidden layers between them [32], as shown in Figure 3. The number of neurons in the input layer is equal to the number of inputs (Ni), whereas the number of neurons in the output layer is equal to the number of outputs (No). In general, the output of the nth neuron in the mth layer is expressed mathematically as follows:
y_{m,n} = g\left( \mathbf{w}_{m,n}^{T} \mathbf{x}_{m-1} + b_{m,n} \right), \quad n = 1, \ldots, K_m, \quad m = 2, \ldots, U
where w_{m,n} and b_{m,n} are the weight vector and the bias associated with the nth neuron of the mth layer, respectively, and x_{m−1} is the output vector of the (m−1)th layer, which is also the input to each neuron of the mth layer. Here, K_m is the number of neurons in the mth layer, U is the number of layers, and g(·) is the activation function, which may be sigmoid, linear, ReLU, etc. Note that the input of the first layer is the input of the network; therefore, K_1 = Ni and K_U = No. According to inter-layer connectivity, ANNs can be categorized into fully and partially connected networks. A network containing all possible connections between the neurons of adjacent layers is termed a fully connected network (FCN), and a network containing only a subset of the possible connections is called a partially connected network (PCN). A large number of neurons in the hidden layers allows a neural network to learn complex features and patterns. However, a large number of neurons also means a high computational cost and the risk of overfitting the training data rather than generalizing from it.
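As an illustration of the neuron computation given above, the short NumPy sketch below (a generic, assumed implementation, not the authors' code) propagates an input vector through a fully connected network by repeatedly applying y = g(Wx + b), with ReLU in the hidden layer and a linear output layer.

```python
# Minimal forward pass through a fully connected network: each layer computes
# a weighted sum of the previous layer's output, adds a bias, and applies an activation.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, layers):
    """layers: list of (W, b, activation) tuples; x: input vector of length Ni."""
    for W, b, g in layers:
        x = g(W @ x + b)      # weighted sum plus bias, then activation
    return x                  # output vector of length No

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(32, 6)), np.zeros(32), relu),        # hidden layer, Km = 32
          (rng.normal(size=(3, 32)), np.zeros(3), lambda z: z)]  # linear output, No = 3
y = forward(np.array([3, 2, 3, 3, 30.0, 21.0]), layers)          # hypothetical input vector
```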
An ANN is trained by feeding it input values from a set of training examples for which the actual output values are known. During training, the ANN involves two processes, namely forward propagation and back propagation. In forward propagation, the inputs from the training dataset pass from the input layer into the hidden layers, where each neuron processes its inputs and produces an output that represents an insight, pattern, or feature of the inputs. This array of features then moves to the subsequent layers, which process these features further to extract more complex features and nuanced patterns; this is what makes neural networks powerful. Finally, an output is generated for each training example as the output layer processes the propagated features. The discrepancy between the output values estimated by the ANN and the actual output values is calculated using a loss function, e.g., the mean square error (MSE), the mean absolute error (MAE), the Huber loss [33], etc. To train the ANN to give more accurate estimates, the back propagation algorithm is employed, in which the gradient of the loss function with respect to each weight in the network is calculated. This gradient tells us how much the loss will be affected if a particular weight is changed slightly. Each of the weight vectors is then updated iteratively according to the following relationship:
\mathbf{w}_{m,n}^{(i+1)} = \mathbf{w}_{m,n}^{(i)} - \alpha \, \frac{\partial L}{\partial \mathbf{w}_{m,n}}
where L represents the loss function, i indicates the iteration number, and α is the learning rate. The output of the network depends not only on the weights in the output layer but also on the weights of the neurons in the input and hidden layers, to which the outputs are not directly connected. Therefore, the gradient of the loss function is calculated using the chain rule, which allows us to backtrack and see how each weight affects the overall error. Further details about back propagation can be found in [32]. The loss function is minimized iteratively by adjusting the weights and biases of the neurons using optimization algorithms such as gradient descent. There are different types of neural networks, each specialized to handle specific tasks. For instance, Convolutional Neural Networks (CNNs) are used for image recognition, whereas Recurrent Neural Networks (RNNs) handle sequential data, such as text, well [34].
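The weight-update rule above can be illustrated with a toy example. The sketch below (a hypothetical, self-contained example, not taken from the paper) fits a single linear neuron to synthetic data with gradient descent; for a deep network, the gradient would instead be obtained through back propagation via the chain rule.

```python
# Toy gradient-descent update for a single linear neuron with an MSE loss;
# the gradient dL/dw is computed analytically here.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                 # 200 training examples, 4 inputs
w_true = np.array([0.5, -1.2, 2.0, 0.3])
y = X @ w_true + 0.01 * rng.normal(size=200)  # synthetic targets

w, alpha = np.zeros(4), 0.05                  # initial weights, learning rate
for i in range(500):
    err = X @ w - y                           # forward pass
    grad = 2 * X.T @ err / len(y)             # dL/dw for L = mean((Xw - y)^2)
    w = w - alpha * grad                      # w_(i+1) = w_i - alpha * dL/dw
```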

3.2. The Dataset Generation

As mentioned in Section 2, the proposed ANN takes the parameters (M, N, P, Q, θ0, PSLR) as inputs and suggests, as output, a suitable combination of Δ and SLA that ensures the minimum HPBW for the given set of parameters. To train the network, a dataset was generated in Matlab, considering several possible combinations of input values. Four combinations of (M, N, P, Q) were considered, and θ0, Δ, and SLA were swept over a range with a fixed step size, as listed in Table 1.
In Matlab, the beampattern was generated for each set of (M, N, P, Q) for all combinations of θ0, Δ, and SLA, and the resulting HPBW and PSLR were noted. The dataset generated in this way contained all combinations of SLA and Δ, including ones not offering the minimum HPBW for a specific PSLR. Therefore, in a second step, the dataset was refined by eliminating the training examples with an HPBW larger than the minimum possible for a specific PSLR. To illustrate the refining process, a subset of raw training examples is shown in Table 2. This table shows that, for (M, N, P, Q) = (3, 2, 3, 3) and a specific θ0, there are multiple training examples with the same PSLR, i.e., 21.29 dB, but different HPBW. Since our objective is to train the network to suggest the adjustable parameters for the minimum HPBW, we discarded all entries in the dataset in which HPBW was larger than the minimum for a specific set of (M, N, P, Q, θ0, PSLR). As an example, highlighted in the table, HPBW is minimum in the 4th row among all entries with PSLR = 21.29 dB for (M, N, P, Q, θ0) = (3, 2, 3, 3, 14°). Similarly, for (M, N, P, Q, θ0) = (3, 2, 3, 3, 14.5°) and the same PSLR, the minimum HPBW is found in the last row. Hence, the 4th and the last rows were retained, and the rest were discarded. A similar process was repeated for PSLR ranging from 17 to 25 dB for all combinations of (M, N, P, Q, θ0). The refined version of the dataset enables the ANN, after training, to suggest a suitable combination of Δ and SLA that offers the minimum possible HPBW for a specified PSLR and a given set of (M, N, P, Q, θ0). The trained network also gives an estimate of the HPBW that would be achieved in practice if the suggested values of Δ and SLA are used for the given set of (M, N, P, Q, θ0).
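The refinement step described above amounts to keeping, for every combination of (M, N, P, Q, θ0, PSLR), only the row with the smallest HPBW. A possible pandas sketch is shown below; the file name and column names are assumptions for illustration, not artifacts from the paper.

```python
# Sketch of the dataset refinement: group the raw sweep by the ANN inputs and
# retain only the row whose HPBW is minimal, so the ANN learns the (SLA, Delta)
# pair that yields the narrowest beam for each achievable PSLR.
import pandas as pd

raw = pd.read_csv("scass_sweep.csv")          # hypothetical export of the Matlab sweep
keys = ["M", "N", "P", "Q", "theta0", "PSLR"]
idx = raw.groupby(keys)["HPBW"].idxmin()      # index of the minimum-HPBW row per group
refined = raw.loc[idx].reset_index(drop=True) # retained rows, e.g. the 4th and last rows of Table 2
```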

3.3. Training and Architectural Configuration of the Proposed ANN

Choosing the architecture of a neural network for a specific application is an iterative process involving different combinations of hyperparameters. Generally, a complex mapping from inputs to outputs requires a deeper or wider network than a simple mapping. A shallow neural network is often preferable because of its low time and space complexity, whereas deep neural networks bring their own set of challenges, such as exploding and vanishing gradients, high computational complexity, training instability, and overfitting [34]. Therefore, it is advisable to begin with a simpler network and gradually increase the complexity while trying a range of hyperparameters such as the learning rate, regularization coefficients, batch size, etc. After that, the network with the best trade-off between performance and complexity is chosen. In this work, the performance of the network with different numbers of layers and neurons per layer is depicted in Figure 4. This figure shows that the ANN with 3 hidden layers and 32 neurons per layer performed best, while the ANN with a single hidden layer and 8 neurons per layer performed worst. The other ANNs, with 1 or 2 hidden layers and 16, 24, or 32 neurons per layer, performed only slightly worse than the best one. The selection of the network model therefore depends on whether performance or computational cost is to be prioritized. If it is desirable to minimize the computational cost while retaining reasonable performance, the ANN with 1 hidden layer and 16 neurons per layer, referred to as ANN1, is the best option; for the best performance, the ANN with 3 hidden layers and 32 neurons per layer, referred to as ANN2, is selected. Other specifications of the two networks are shown in Table 3.
The dataset included almost 500,000 training examples, split into roughly 70% training set, 20% test set, and 10% validation set. The final models were trained with a batch size of 128 and the Adam optimization algorithm with otherwise default parameters, using a learning rate of 0.1 decayed by 5% after every epoch, for a total of 100 epochs. The Mean Absolute Error (MAE) was employed as the loss function during training, which is given by [35]:
\mathrm{MAE} = \frac{1}{T} \sum_{i=1}^{T} \left| y_i - \hat{y}_i \right|
where y_i and ŷ_i are the actual and the ANN-estimated output values of the ith training example, respectively, and T is the number of training examples.
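For illustration, a possible Keras realization of ANN2 and the training settings just described is sketched below. This is an assumed implementation (the paper does not specify the software framework), and the placeholder arrays stand in for the refined dataset of Section 3.2; the six inputs are (M, N, P, Q, θ0, PSLR) and the three outputs are (SLA, Δ, HPBW).

```python
import numpy as np
import tensorflow as tf

# Placeholder arrays standing in for the refined dataset of Section 3.2.
X_train, y_train = np.random.rand(1000, 6), np.random.rand(1000, 3)
X_val, y_val = np.random.rand(200, 6), np.random.rand(200, 3)

# ANN2 as specified in Table 3: 3 hidden layers of 32 ReLU neurons, linear output.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(6,)),             # M, N, P, Q, theta0, PSLR
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="linear"), # SLA, Delta, HPBW
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.1), loss="mae")

# Learning rate decayed by 5% after every epoch; batch size 128; 100 epochs.
decay = tf.keras.callbacks.LearningRateScheduler(lambda epoch, lr: lr * 0.95)
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          batch_size=128, epochs=100, callbacks=[decay])
```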

4. Results and Discussion

The ANN described in Section 3 has been trained on the refined dataset discussed in Section 3.2. The trained network was then fed a test dataset carefully generated in Matlab. This dataset was generated in the same way as the training dataset, but the values of θ0 chosen for it are not included in the training dataset. This helps to assess the capability of the network to interpolate. A sample of the test examples is shown in Table 4.
The ANN-estimated values of Δ, SLA, and HPBW are plotted against their actual values in Figure 5 for comparison. It is obvious from the figure that the ground-truth and estimated values are in close agreement. The performance of the two networks, in terms of the mean absolute percentage error of the three estimated parameters, is also shown in Table 5. The results show that the largest mean absolute percentage errors offered by ANN1 and ANN2 are 1.82% and 0.83%, respectively. It is noteworthy that the proposed ANNs can suggest the optimum Δ and SLA only for reasonable PSLR input values. If the desired PSLR is beyond the capability of SCASS, the ANN-suggested values of Δ and SLA may not guarantee the desired performance. The maximum PSLR that SCASS can offer depends on the array parameters and the steering angle, as is obvious from Figure 2. An increase in SLA means more suppression of the sidelobes, but it does not necessarily mean an increased PSLR. In fact, PSLR increases with SLA only up to a certain limit, which depends on the array architectural parameters (M, N, P, Q) as well as on Δ and θ0. For all the combinations of (M, N, P, Q) considered in this work, the highest achievable PSLR is 23.5 dB, which is reasonably high for most applications. However, a higher PSLR is accompanied by a widened beam, i.e., an increased value of the minimum possible HPBW.
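The error metric reported in Table 5 can be computed per output as a mean absolute percentage error; a minimal sketch is given below (the exact evaluation script is not provided in the paper, so this formulation is an assumption).

```python
import numpy as np

def mape(y_true, y_pred):
    """Column-wise mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true), axis=0)

# y_test and y_hat are (num_examples, 3) arrays with columns [SLA, Delta, HPBW];
# for ANN2, Table 5 reports roughly [0.20, 0.83, 0.70] percent.
# errors = mape(y_test, y_hat)
```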

5. Conclusions

In this paper, we extended the idea of staggered beam-steering of subarrays in a semi-coprime array and proposed a mechanism to optimize the extent of staggering (Δ) and the sidelobe attenuation (SLA) of the Chebyshev weighting technique. We generated, in Matlab, a dataset of numerous combinations of the array parameters and the corresponding performance metrics, i.e., HPBW and PSLR. This dataset has been used to train an ANN to suggest a suitable combination of the adjustable parameters Δ and SLA that promises the minimum HPBW for the given steering angle and PSLR. We have investigated multiple neural networks with different numbers of hidden layers and neurons per layer. Among the investigated architectures, the best performer was the one with 3 hidden layers of 32 neurons each, which offered a mean absolute error within 0.83%. Another architecture, with only a single hidden layer of 16 neurons, offered a mean absolute error within 1.82%, only slightly higher than the best performer.

Author Contributions

Conceptualization and Simulation, W.K. and A.N.C.; Writing, W.K., A.N.C. and S.S.; Formatting and Organization, W.K., S.S. and A.S.R.; Review: S.S. and A.S.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All the data is available within the article.

Acknowledgments

This work was supported by the TU Graz Open Access Publishing Fund.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kenane, E.; Djahli, F. Optimum design of non-uniform symmetrical linear antenna arrays using a novel modified invasive weeds optimization. Arch. Electr. Eng. 2016, 65, 5–18. [Google Scholar] [CrossRef]
  2. Khalilpour, J.; Ranjbar, J.; Karami, P. A novel algorithm in a linear phased array system for side lobe and grating lobe level reduction with large element spacing. Analog. Integr. Circuits Signal Process. 2020, 104, 265–275. [Google Scholar] [CrossRef]
  3. Khalaj-Amirhosseini, M. To control the beamwidth of antenna arrays by virtually changing inter-distances. Int. J. Microw.-Comput.-Aided Eng. 2019, 29, e21754. [Google Scholar] [CrossRef]
  4. Oraizi, H.; Fallahpour, M. Nonuniformly spaced linear array design for the specified beamwidth/sidelobe level or specified directivity/sidelobe level with coupling consideration. Prog. Electromagn. Res. 2008, 4, 185–209. [Google Scholar] [CrossRef]
  5. Liang, S.; Fang, Z.; Sun, G.; Liu, Y.; Qu, G.; Zhang, Y. Sidelobe reductions of antenna arrays via an improved chicken swarm optimization approach. IEEE Access 2020, 8, 37664–37683. [Google Scholar] [CrossRef]
  6. Zhang, Y.; Wang, W.; Wang, R.; Deng, Y.; Jin, G.; Long, Y. A novel NLFM waveform with low sidelobes based on modified Chebyshev window. IEEE Geosci. Remote Sens. Lett. 2020, 17, 814–818. [Google Scholar] [CrossRef]
  7. Li, M.; Zhang, Z.; Tang, M.C.; Yi, D.; Ziolkowski, R.W. Compact series-fed microstrip patch arrays excited with Dolph–Chebyshev distributions realized with slow wave transmission line feed networks. IEEE Trans. Antennas Propag. 2020, 68, 7905–7915. [Google Scholar] [CrossRef]
  8. Elwi, T.A.; Taher, F.; Virdee, B.S.; Alibakhshikenari, M.; Zuazola, I.J.G.; Krasniqi, A.; Kamel, A.S.; Tokan, N.T.; Khan, S.; Parchin, N.O.; et al. On the performance of a photonic reconfigurable electromagnetic band gap antenna array for 5G applications. IEEE Access 2024, 12, 60849–60862. [Google Scholar] [CrossRef]
  9. Zidour, A.; Ayad, M.; Alibakhshikenari, M.; See, C.H.; Lai, Y.X.; Ma, Y.; Guenad, B.; Livreri, P.; Khan, S.; Pau, G.; et al. Wideband endfire antenna array for 5G mmWave mobile terminals. IEEE Access 2024, 12, 39926–39935. [Google Scholar] [CrossRef]
  10. Zakeri, H.; Azizpour, R.; Khoddami, P.; Moradi, G.; Alibakhshikenari, M.; Hwang See, C.; Denidni, T.A.; Falcone, F.; Koziel, S.; Limiti, E. Low-cost multiband four-port phased array antenna for sub-6 GHz 5G applications with enhanced gain methodology in radio-over-fiber systems using modulation instability. IEEE Access 2024, 12, 117787–117799. [Google Scholar] [CrossRef]
  11. Vaidyanathan, P.P.; Pal, P. Sparse sensing with co-prime samplers and arrays. IEEE Trans. Signal Process. 2010, 59, 573–586. [Google Scholar] [CrossRef]
  12. Adhikari, K. Beamforming with semi-coprime arrays. J. Acoust. Soc. Am. 2019, 145, 2841–2850. [Google Scholar] [CrossRef] [PubMed]
  13. Adhikari, K.; Drozdenko, B. Design and Statistical Analysis of Tapered Coprime and Nested Arrays for the Min Processor. IEEE Access 2019, 7, 139601–139615. [Google Scholar] [CrossRef]
  14. Liu, Y.; Buck, J.R. Detecting Gaussian signals in the presence of interferers using the coprime sensor arrays with the min processor. In Proceedings of the 2015 49th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 8–11 November 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 370–374. [Google Scholar]
  15. Adhikari, K.; Buck, J.R.; Wage, K.E. Extending coprime sensor arrays to achieve the peak side lobe height of a full uniform linear array. EURASIP J. Adv. Signal Process. 2014, 2014, 148. [Google Scholar] [CrossRef]
  16. Moghadam, G.S.; Shirazi, A.B. Novel method for digital beamforming in co-prime sensor arrays using product and min processors. IET Signal Process. 2019, 13, 614–623. [Google Scholar] [CrossRef]
  17. Khan, W.; Shahid, S.; Iqbal, W.; Rana, A.S.; Zahra, H.; Alathbah, M.; Abbas, S.M. Semi-Coprime Array with Staggered Beam-Steering of Sub-Arrays. Sensors 2023, 23, 5484. [Google Scholar] [CrossRef] [PubMed]
  18. Rawat, A.; Yadav, R.; Shrivastava, S. Neural network applications in smart antenna arrays: A review. AEU-Int. J. Electron. Commun. 2012, 66, 903–912. [Google Scholar] [CrossRef]
  19. Ayestarán, R.G.; Las-Heras, F.; Martínez, J.A. Non Uniform-Antenna Array Synthesis Using Neural Networks. J. Electromagn. Waves Appl. 2007, 21, 1001–1011. [Google Scholar] [CrossRef]
  20. Al-Bajari, M.; Ahmed, J.M.; Ayoob, M.B. Performance Evaluation of an Artificial Neural Network-Based Adaptive Antenna Array System. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2010; pp. 11–20. [Google Scholar] [CrossRef]
  21. Zooghby, A.; Christodoulou, C.; Georgiopoulos, M. Neural network-based adaptive beamforming for one- and two-dimensional antenna arrays. IEEE Trans. Antennas Propag. 1998, 46, 1891–1893. [Google Scholar] [CrossRef]
  22. Wu, X.; Luo, J.; Li, G.; Zhang, S.; Sheng, W. Fast Wideband Beamforming Using Convolutional Neural Network. Remote Sens. 2023, 15, 712. [Google Scholar] [CrossRef]
  23. Al Kassir, H.; Zaharis, Z.D.; Lazaridis, P.I.; Kantartzis, N.V.; Yioultsis, T.V.; Chochliouros, I.P.; Mihovska, A.; Xenos, T.D. Antenna Array Beamforming Based on Deep Learning Neural Network Architectures. In Proceedings of the 2022 3rd URSI Atlantic and Asia Pacific Radio Science Meeting (AT-AP-RASC), Gran Canaria, Spain, 29 May–3 June 2022; IEEE: Piscataway, NJ, USA, 2022. [Google Scholar] [CrossRef]
  24. Liao, Z.; Duan, K.; He, J.; Qiu, Z.; Li, B. Robust Adaptive Beamforming Based on a Convolutional Neural Network. Electronics 2023, 12, 2751. [Google Scholar] [CrossRef]
  25. Roshani, S.; Koziel, S.; Yahya, S.I.; Chaudhary, M.A.; Ghadi, Y.Y.; Roshani, S.; Golunski, L. Mutual Coupling Reduction in Antenna Arrays Using Artificial Intelligence Approach and Inverse Neural Network Surrogates. Sensors 2023, 23, 7089. [Google Scholar] [CrossRef] [PubMed]
  26. Patnaik, A.; Christodoulou, C. Finding failed element positions in linear antenna arrays using neural networks. In Proceedings of the 2006 IEEE Antennas and Propagation Society International Symposium, Albuquerque, NM, USA, 9–14 July 2006; IEEE: Piscataway, NJ, USA, 2006. [Google Scholar] [CrossRef]
  27. Patnaik, A.; Choudhury, B.; Pradhan, P.; Mishra, R.K.; Christodoulou, C. An ANN Application for Fault Finding in Antenna Arrays. IEEE Trans. Antennas Propag. 2007, 55, 775–777. [Google Scholar] [CrossRef]
  28. Southall, H.; Simmers, J.; O’Donnell, T. Direction finding in phased arrays with a neural network beamformer. IEEE Trans. Antennas Propag. 1995, 43, 1369–1374. [Google Scholar] [CrossRef]
  29. Kumar, A.; Ansari, A.Q.; Kanaujia, B.K.; Kishor, J.; Matekovits, L. A review on different techniques of mutual coupling reduction between elements of any MIMO antenna. Part 1: DGSs and parasitic structures. Radio Sci. 2021, 56. [Google Scholar] [CrossRef]
  30. Kirtania, S.G.; Elger, A.W.; Hasan, M.R.; Wisniewska, A.; Sekhar, K.; Karacolak, T.; Sekhar, P.K. Flexible antennas: A review. Micromachines 2020, 11, 847. [Google Scholar] [CrossRef]
  31. Matloff, N. Statistical Regression and Classification From Linear Models to Machine Learning; Taylor and Francis Group: Abingdon, UK, 2017. [Google Scholar]
  32. Haykin, S. Neural Networks: A Comprehensive Foundation; Prentice Hall PTR: Hoboken, NJ, USA, 1998. [Google Scholar]
  33. Huber, P.J. Robust Estimation of a Location Parameter. Ann. Math. Stat. 1964, 35, 73–101. [Google Scholar] [CrossRef]
  34. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learn; MIT Press: Cambridge, MA, USA, 2016; p. 800. [Google Scholar]
  35. Qi, J.; Du, J.; Siniscalchi, S.M.; Ma, X.; Lee, C.H. On Mean Absolute Error for Deep Neural Network Based Vector-to-Vector Regression. IEEE Signal Process. Lett. 2020, 27, 1485–1489. [Google Scholar] [CrossRef]
Figure 1. A typical arrangement of SCA with M = 3, N = 2, P = 2, Q = 2.
Figure 2. Effect of change in Δ and SLA on HPBW and PSLR for ( M , N , P , Q ) = (3, 2, 3, 3).
Figure 3. A general architecture of an artificial neural network.
Figure 4. Performance of ANN with different numbers of hidden layers and numbers of neurons per layer.
Figure 5. Actual vs. estimated values: (a–c) ANN1; (d–f) ANN2.
Table 1. Set of Values of the Parameters for the Dataset Generation.

Parameter | Values
(M, N, P, Q) | (3, 2, 3, 3), (4, 3, 3, 2), (5, 2, 3, 3), (6, 5, 3, 2)
θ0 | 0° to 70°, step size = 0.5°
SLA | 17 to 25 dB, step size = 0.1 dB
Δ | 0° to 2°, step size = 0.1°
Table 2. A sample of raw training examples with PSLR = 21.29 dB for (M, N, P, Q) = (3, 2, 3, 3), θ0 = 14°, 14.5°. The first six columns are the inputs of the ANN; the last three are the outputs.

M | N | P | Q | θ0 (°) | PSLR (dB) | SLA (dB) | Δ (°) | HPBW (°)
3 | 2 | 3 | 3 | 14 | 21.29 | 21.5 | 0.3 | 1.701
3 | 2 | 3 | 3 | 14 | 21.29 | 21.75 | 0.45 | 1.498
3 | 2 | 3 | 3 | 14 | 21.29 | 22.25 | 0.65 | 1.267
3 | 2 | 3 | 3 | 14 | 21.29 | 23.5 | 0.7 | 1.253
3 | 2 | 3 | 3 | 14 | 21.29 | 24 | 0.67 | 1.295
3 | 2 | 3 | 3 | 14.5 | 21.29 | 21.5 | 0.3 | 1.708
3 | 2 | 3 | 3 | 14.5 | 21.29 | 22.25 | 0.65 | 1.274
Table 3. Neural Network Architectural Parameters.

Network Specification | ANN1 | ANN2
Connectivity Type | Fully connected | Fully connected
Number of Hidden Layers | 1 | 3
Neurons per Hidden Layer | 16 | 32
Activation Function (Input/Output Layers) | Linear | Linear
Activation Function (Hidden Layers) | ReLU | ReLU
Loss Function | Mean Absolute Error | Mean Absolute Error
Table 4. A sample of the test examples for (M, N, P, Q) = (3, 2, 3, 3).

M | N | P | Q | θ0 (°) | PSLR (dB) | SLA (dB) | Δ (°) | HPBW (°)
3 | 2 | 3 | 3 | 5.2 | 17.4 | 19.3 | 0.84 | 0.938
3 | 2 | 3 | 3 | 5.2 | 19.7 | 21.3 | 0.81 | 1.036
3 | 2 | 3 | 3 | 5.2 | 22 | 23.1 | 0.68 | 1.211
3 | 2 | 3 | 3 | 13.15 | 15.9 | 17.8 | 0.84 | 0.938
3 | 2 | 3 | 3 | 13.15 | 22.2 | 23.2 | 0.68 | 1.26
3 | 2 | 3 | 3 | 28.87 | 15.5 | 17.5 | 0.95 | 1.015
3 | 2 | 3 | 3 | 28.87 | 17.7 | 19.6 | 0.96 | 1.085
3 | 2 | 3 | 3 | 28.87 | 21.7 | 22.8 | 0.79 | 1.358
3 | 2 | 3 | 3 | 37.19 | 17 | 18.9 | 1.05 | 1.169
3 | 2 | 3 | 3 | 37.19 | 19.9 | 21.5 | 1.01 | 1.302
3 | 2 | 3 | 3 | 37.19 | 22.4 | 23.4 | 0.82 | 1.561
3 | 2 | 3 | 3 | 49.21 | 15.9 | 17.8 | 1.26 | 1.386
3 | 2 | 3 | 3 | 49.21 | 18.5 | 20.3 | 1.28 | 1.491
3 | 2 | 3 | 3 | 49.21 | 20.9 | 22.2 | 1.13 | 1.722
3 | 2 | 3 | 3 | 49.21 | 23 | 23.9 | 0.94 | 1.995
3 | 2 | 3 | 3 | 58.35 | 16.9 | 18.8 | 1.59 | 1.771
3 | 2 | 3 | 3 | 58.35 | 21.4 | 22.6 | 1.36 | 2.212
3 | 2 | 3 | 3 | 58.35 | 23 | 24 | 1.17 | 2.492
Table 5. Estimation Error of the Proposed ANNs.

Mean Absolute Percentage Error | SLA | Δ | HPBW
ANN1 | 0.21% | 1.82% | 1.53%
ANN2 | 0.20% | 0.83% | 0.70%
