Article

Prediction Enhancement of Metasurface Absorber Design Using Adaptive Cascaded Deep Learning (ACDL) Model

by Haitham Al Ajmi 1, Mohammed M. Bait-Suwailam 1,2,*, Lazhar Khriji 1 and Hassan Al-Lawati 1

1 Department of Electrical and Computer Engineering, Sultan Qaboos University, Muscat 123, Oman
2 Communication and Information Research Center, Sultan Qaboos University, Muscat 123, Oman
* Author to whom correspondence should be addressed.
Electronics 2024, 13(5), 822; https://doi.org/10.3390/electronics13050822
Submission received: 7 January 2024 / Revised: 24 January 2024 / Accepted: 26 January 2024 / Published: 20 February 2024
(This article belongs to the Section Artificial Intelligence Circuits and Systems (AICAS))

Abstract

This paper presents a customized adaptive cascaded deep learning (ACDL) model for the design and performance prediction of metasurface absorbers. A multi-resonant metasurface absorber structure is introduced, with 10 target-driven design parameters. The proposed deep learning model takes advantage of cascading several sub-deep neural network (DNN) layers with forward noise mitigation capabilities. The inherent appearance of sparse data is dealt with in this work by proposing a trained data-adaptive selection technique. On the basis of the findings, the prediction response is fast and accurate enough to retrieve the design parameters of the studied metasurface absorber with two datasets of 4000 and 7000 samples. The training loss taken from the second DNN of our proposed model showed mean squared logarithmic errors of 0.033 and 0.039 when using Keras and the adaptive method, respectively, with the 4000-sample dataset. For the 7000-sample dataset, the errors were 0.049 with Keras and 0.045 with the adaptive method. The validation loss was evaluated using the mean squared error (MSE), which resulted in a loss of 0.044 for the 4000-sample dataset split with the Keras method, while this was 0.020 with the adaptive method. When extending the dataset to 7000 samples, the validation loss with the Keras splitting method was 0.0073, while it improved to 0.006 with the proposed adaptive method, which achieved a prediction accuracy of 94%. This proposed deep learning model can be deployed in the design process and synthesis of multi-resonant metasurface absorber structures. The proposed model makes the design process more efficient: it handles sparse datasets, offers an efficient approach to multi-resonance metasurface data pre-processing, is less time consuming, and is computationally economical.

1. Introduction

Energy harvesting is the process of capturing wasted energy from ambient energy sources within the surrounding environment and then using such energy for recharging purposes. Over the past few years, there has been great demand for the deployment and use of wireless sensors in many engineering applications, including the Internet of Things [1,2]. Due to the nature of wireless connectivity with such devices, wireless sensors need to be self-powered [3]. In principle, the design approach of electromagnetic energy harvesting systems could take one of three directions: analytical/empirical formulations, full-wave numerical simulations, or trial-and-error approaches. Unfortunately, all three approaches can be time consuming and computationally expensive while trying to achieve the optimal design. Thus, it is crucial to reconsider more efficient alternatives to aid in the design process, among which is the deployment and integration of intelligent algorithms during the design process.
In the last two decades, metamaterials and their 2D counterparts, metasurfaces, have attracted a lot of attention from academia and the industry due to their unique properties, including peculiar constitutive parameter responses and potential applications, which are not realizable in most natural materials. Such properties are realized by subwavelength resonant engineered inclusions that are assembled periodically or aperiodically in a host medium. The recent evolution of metasurfaces and metamaterials has marked a transformative shift, motivated primarily by the demand for advanced capabilities in the optical and electromagnetic domains [4]. These engineered materials have the potential to revolutionize diverse applications, offering the prospect of downsized optics, unparalleled control over wavefronts, and customized wave manipulation. Such advances have begun to reshape the landscape of communication, sensing, and optical technologies [5,6,7]. However, it is vital to acknowledge that the path to these innovations has been fraught with challenges. The intricate process of designing and synthesizing these materials has imposed substantial hurdles, necessitating extensive simulations, formidable computational resources, and iterative design procedures [8,9,10,11,12,13].
Deep learning models have found widespread application in various engineering domains, including the design of microwave and photonic structures. Among these models, multilayer perceptron (MLP) is commonly used for feature extraction in tasks involving large datasets, such as image processing [14]. However, despite its utility, MLP has limitations, particularly in dealing with complex parameterization processes associated with inverse photonic problems. To address these challenges, bidirectional MLP has been utilized for the design of nanoscale photonic structures [5]. In addition to MLP, convolutional neural networks (CNNs) and Transpose CNNs have emerged as powerful tools for feature extraction and dimensionality reduction in large image datasets, such as those representing photonic structures [14]. The application involves pixelating the structure image, convoluting it with a feature extraction matrix, and forming a feature map. Subsequently, a pooled feature map is generated through pooling processes such as maximum, sum, or average pooling. This pooled feature map is then flattened and serves as input for the deep neural network [15]. In particular, CNNs have been used effectively to design anisotropic digital coding metasurfaces, demonstrating superior precision compared to numerical analysis methods [16]. Another study [17] highlights the training of a model to understand the relationship between metamaterial geometry and absorption properties, demonstrating the efficacy of machine learning in the design of metamaterials for high-temperature applications. Collectively, these publications underscore the efficiency of machine learning algorithms in expediting the design process for desired metamaterial or metasurface structures tailored to specific applications.
Various optimization techniques have been extensively explored in the design of metasurfaces, with studies such as [16,18,19,20,21,22] delving into their impact on performance. State-of-the-art multiobjective optimization algorithms, as evidenced in [23], have been used to achieve metasurfaces that meet multiple design goals, excelling in reflection, transmission, polarization, angular, and frequency-dependent properties. Custom state-of-the-art multi-objective and surrogate-assisted optimization algorithms, as outlined in [24], have been employed to navigate the solution space and design metasurface topologies that meet various user-specified performance criteria. Furthermore, statistical learning optimization techniques, as demonstrated in [25], have proven to be effective in optimizing highly efficient and robust metasurface designs, accommodating single- and multifunctional devices while considering manufacturing imperfections. It should be noted that the choice of optimizer plays a crucial role and affects the convergence criteria and the efficiency of finding the optimum point; gradient-based optimizers, as indicated in [26], generally outperform population-based methods in terms of function evaluations.
The integration of artificial intelligence (AI) into metasurface design introduces a spectrum of challenges. Initiated by the fundamental concern of “Data Hunger” in AI, which is particularly pronounced in resource-intensive data acquisition domains, as highlighted in [27], this challenge underscores the need for inventive approaches to harness and manage data in the metasurface design process. Subsequently, the vulnerability of deep learning models to “Black-Box Attacks” is articulated in [28], emphasizing the imperative to fortify model robustness and security measures against external manipulation. This discussion logically progresses to the crucial consideration of translating AI-designed metasurfaces into practicable structures [29], highlighting the ongoing need for innovative implementation methodologies. Further challenges emerge, including the intricate demands of “Nanofabrication Techniques,” as elucidated in [30], necessitating advancements for feasible implementation. The imperative to manage “Noise and Uncertainty” in data modeling, addressed in [31], introduces a strategy involving machine learning and domain knowledge, emphasizing the incorporation of domain-specific knowledge into AI methodologies. The innovative approach of meta-learning to predict the performance of noise filters in noisy data identification tasks, presented in [32], contributes to addressing the challenges in optimizing noise filters in diverse datasets. Exploring the interplay between “Prediction Uncertainty and Robustness to Noisy Data,” as detailed in [33], involves adjusting the temperature parameter in the softmax function, showcasing the potential for enhancing model robustness through parameter tuning. Concluding the sequence, the challenge of “Noisy and/or Missing Data” in high-dimensional sparse linear regression is addressed with a proposed stochastic optimization method, outlined in [34], featuring the Sparse Linear Regression with Missing Data (SLRM) method. This method, coupled with stochastic optimization, demonstrates effectiveness in overcoming challenges related to data quality and completeness in metasurface design.
In light of this background and to alleviate some of the challenges in the design of metasurface absorbers and extend the work in [35], the main contributions of this research are outlined as follows:
  • Optimal dataset size determination: We thoroughly investigated the requisite dataset size to attain a high level of accepted accuracy in deep neural network (DNN) models for metasurface design. Our numerical experiments reveal that a dataset comprising 4000 samples is adequate to establish a robust DNN model for the rapid design and synthesis of metasurface absorbers with an accuracy greater than 90%.
  • Sparse data handling with cascaded DNN: We addressed the challenge of handling datasets that are characterized by a high prevalence of sparse data. In addition, we examined the effectiveness of cascaded DNN models in refining prediction values. Our findings indicate that, while cascaded DNNs are effective, careful hyperparameter tuning of the optimizer is essential to mitigate numerical instability. Furthermore, we determined that a two-layer cascaded neural network is sufficient to achieve the desired accuracy in the design of multi-resonant metasurface absorbers. The impacts of two other data sorting and selection techniques, namely the ascending data sorting method and the bootstrap method, were also investigated and compared with the proposed adaptive descending data sorting method.
  • Dataset arrangement impact analysis: We conducted a systematic investigation that addressed the impact of different set arrangements on prediction accuracy, which to the best of our knowledge has not been thoroughly explored. Our study demonstrates that there is relatively limited influence on prediction accuracy when datasets are randomly organized or arranged using an alternative method, which we refer to here as the adaptive cascaded DL (ACDL) model. This approach involves aggregating response values for specific cases and subsequently arranging them in descending order, contributing to our understanding of dataset arrangement strategies for metasurface design through AI.
The remainder of this paper is organized as follows. Section 2 outlines the contribution of this work through the proposed adaptive cascaded deep learning model. Section 3 presents a unit cell of a rotational concentric split-ring resonator (R-SRR) as the main building block of the electromagnetic metasurface absorber. The datasets are then generated through full-wave electromagnetic simulations. Moreover, the results are presented and discussed in Section 4. Finally, a summary of the findings from this research study is given in Section 5.

2. Proposed Customized DL Model Methodology

2.1. Model Processing Environment

In this study, the computational tasks were performed on a Windows 10 Pro system with a four-core Intel Core i7-7700K CPU running at 4.2 GHz and 32 GB of random-access memory. The computational aspects of the study were primarily facilitated by the PyCharm Professional 2023.3.2 integrated development environment (IDE). The model was implemented in Python 3.9 within the PyCharm environment using pandas, matplotlib, Keras, and other libraries.

2.2. Proposed ACDL Model Setting and Training

In the scientific framework of deep neural network learning algorithms, four primary components are involved. These begin with data preprocessing, a pivotal step aimed at standardizing and normalizing the input data in order to mitigate issues associated with sparse and dense datasets [36,37]. The two most commonly adopted approaches for this step, normalization and standardization, are calculated as [38]:
$$x_{\mathrm{normalized}} = \frac{x_{\mathrm{input}} - x_{\min}}{x_{\max} - x_{\min}},\qquad(1)$$
$$x_{\mathrm{standardized}} = \frac{x_{\mathrm{input}} - \bar{x}}{\sigma_x},\qquad(2)$$
where $x_{\min}$, $x_{\max}$, $\bar{x}$, and $\sigma_x$ are the minimum, maximum, mean, and standard deviation values of the input vector $x$, respectively.
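For illustration, Equations (1) and (2) can be sketched in a few lines of NumPy; applying the statistics column-wise (per design parameter) and the placeholder data shape are our assumptions, since the paper does not state the axis:

```python
# A minimal sketch of Equations (1) and (2); column-wise statistics assumed.
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    # Equation (1): scale each feature column to [0, 1].
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

def standardize(x: np.ndarray) -> np.ndarray:
    # Equation (2): zero mean and unit standard deviation per column.
    return (x - x.mean(axis=0)) / x.std(axis=0)

X = np.random.rand(100, 10)   # placeholder: 100 samples, 10 design parameters
X_scaled = standardize(X)     # the proposed model adopts standardization
```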
Our proposed model utilizes the standardization method to prepare the dataset. Subsequently, the scaled dataset is divided into two subsets: a training dataset consisting of 95% of the data and a test dataset consisting of 5% of the data. The second stage involves feedforward propagation, a process in which the input data are weighted, combined with biases, and passed through an activation function ($\sigma$) to produce a neuron-specific output, as given in Equations (3) and (4) [36]; the activation function introduces nonlinearity into the model [39]:
$$z_i = \sum_i \left( w_i a_i + b_i \right),\qquad(3)$$
$$y_i = \sigma(z_i),\qquad(4)$$
where $z_i$ is the weighted sum, $w_i$ is the weight of the input $a_i$, $b_i$ is the bias of neuron $i$, and $y_i$ is the predicted output after applying the activation function to the weighted sum, $\sigma(z_i)$. In principle, there are several types of activation functions, and their deployment depends mainly on the type of problem (a detailed explanation can be found in [40,41]).
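As a toy example of Equations (3) and (4), a single neuron with a LeakyReLU activation (the activation used later in the hidden layers) can be evaluated as follows; the inputs, weights, and bias are arbitrary placeholders:

```python
# A toy illustration of Equations (3) and (4) for one neuron.
import numpy as np

def leaky_relu(z: float, alpha: float = 0.01) -> float:
    # LeakyReLU: pass positive values, scale negative ones by alpha.
    return z if z > 0 else alpha * z

a = np.array([0.5, -1.2, 0.3])   # inputs to the neuron (placeholders)
w = np.array([0.8, 0.1, -0.4])   # weights (placeholders)
b = 0.05                         # bias (placeholder)

z = float(np.dot(w, a)) + b      # Equation (3): weighted sum plus bias
y = leaky_relu(z)                # Equation (4): apply the activation
```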
The simple process is illustrated in Figure 1. This process continues until the accuracy of the model produces satisfactory results from the prediction cycles and the number of epochs is completed.
In this proposed DL model, the neural network architecture includes 10 input neurons in the input layer, followed by three hidden layers with 1000, 900, and 1000 neurons, respectively. The output layer comprises 1001 neurons. LeakyReLU activations are used in the hidden layers and a linear activation in the output layer. The training and evaluation losses are computed using
$$\mathrm{MSLE} = \frac{1}{n} \sum_{i=1}^{n} \left( \log\left(y_{\mathrm{true},i} + 1\right) - \log\left(y_{\mathrm{pred},i} + 1\right) \right)^{2},\qquad(5)$$
$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_{\mathrm{true},i} - y_{\mathrm{pred},i} \right)^{2},\qquad(6)$$
where MSLE represents the mean squared logarithmic error, $y_{\mathrm{true},i}$ is the true value, $y_{\mathrm{pred},i}$ is the network-predicted value, $n$ is the batch size, and MSE is the mean squared error.
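This architecture maps directly onto a few lines of Keras. The sketch below is a minimal reading of the description above; the layer sizes, activations, and losses follow the text, while the default Adam settings are an assumption (the optimizer is discussed later in this section):

```python
# A sketch of the described architecture in Keras, under stated assumptions.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LeakyReLU, Input
from tensorflow.keras.optimizers import Adam

model = Sequential([
    Input(shape=(10,)),                     # 10 design parameters
    Dense(1000), LeakyReLU(),
    Dense(900), LeakyReLU(),
    Dense(1000), LeakyReLU(),
    Dense(1001, activation="linear"),       # 1001 response points
])

model.compile(
    optimizer=Adam(),                        # Adam optimizer (Section 2.2)
    loss="mean_squared_logarithmic_error",   # MSLE, Equation (5)
    metrics=["mean_squared_error"],          # MSE, Equation (6)
)
```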
Figure 2 shows a forward flow chart summarizing the overall procedure of the proposed ACDL model for the design of the metasurface absorber. The first stage of this model involves the intake of a structured dataset that includes critical attributes such as structural dimensions and the response of the target absorbance.
Following the ingestion of the dataset, a pivotal division occurs, separating the dataset into two distinct methodologies: Keras and the proposed mechanism of adaptive data splitting. Each of those two distinct mechanisms follows a unique path, leading to an intricate process of deep learning through cascaded DNNs. This cascaded architecture is strategically designed to enhance the predictive accuracy of the model and capture potential relationships within the dataset. We emphasize here that the main focus is on assessing the performance of the developed model, which is characterized by continuous monitoring of validation and training loss metrics. This iterative assessment aims to determine the optimal number of cascaded DNN layers, ensuring that the model achieves the required predictive capability. In the second and last stage, the predictions of the model from each DNN are rigorously compared against a distinct test dataset, facilitating a comprehensive assessment of the model's performance and its ability to generalize beyond the training data. Figure 2 serves as a foundational visual representation of this procedure, encapsulating the complete path of prediction and evaluation.
In our methodological approach, we constructed a cascaded deep neural network (DNN) architecture while rigorously determining the optimal number of cascaded DNN layers needed for further prediction enhancement. Figure 3 visually illustrates this cascaded DNN structure, which comprises multiple layers that progressively refine the predictions for improved accuracy. The initial DNN takes the structural parameters and the absorber's scattering-parameter response as input and generates preliminary predictions, which are further refined in the second DNN layer. This iterative process culminates in the third DNN layer, which employs these refined predictions for comprehensive training using both training and testing datasets.
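A rough sketch of one plausible reading of this cascade is given below: a first DNN maps the design parameters to a preliminary response, and a second DNN is trained to refine that preliminary prediction. The exact wiring between stages (e.g., whether later stages also see the design parameters), the placeholder data, and the single training epoch shown are assumptions for illustration (the paper trains for 150 epochs; see Tables 1 and 2):

```python
# A sketch of a two-stage cascade; the wiring is our assumption.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LeakyReLU, Input

def build_dnn(input_dim: int) -> Sequential:
    # Same topology as sketched earlier in Section 2.2.
    layers = [Input(shape=(input_dim,))]
    for width in (1000, 900, 1000):
        layers += [Dense(width), LeakyReLU()]
    layers.append(Dense(1001, activation="linear"))
    model = Sequential(layers)
    model.compile(optimizer="adam", loss="mean_squared_logarithmic_error")
    return model

# Placeholder data with the shapes described in the paper.
X_train = np.random.rand(3800, 10)
Y_train = np.random.rand(3800, 1001)

dnn1 = build_dnn(input_dim=10)                # stage 1: parameters -> response
dnn1.fit(X_train, Y_train, epochs=1, verbose=0)

stage1 = dnn1.predict(X_train, verbose=0)     # preliminary predictions
dnn2 = build_dnn(input_dim=1001)              # stage 2 refines stage 1
dnn2.fit(stage1, Y_train, epochs=1, verbose=0)
```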
Upon completion of the forward propagation, the focus is on loss reduction via the backpropagation stage, which optimizes the weights and biases using an optimizer, here the Adam optimizer, in order to minimize the cost function [42]. Note that the backpropagation flow involves calculating the derivative of the loss function (in our case, MSLE; see Equation (5)) over each batch and applying the chain rule to compute the rate of change of the cost function with respect to the weights and biases. More details about this process can be found in [43,44]. Note that throughout the numerical experimentation, the mean squared logarithmic error (MSLE) is used as the cost function, while the mean squared error (MSE), Equation (6), is used to evaluate the overall performance of the system.
In order to examine the influence of the dataset size, we generated two distinct dataset sizes: one with 4000 samples and another with 7000 samples. We also investigated the effect of the organization of the dataset using two mechanisms (Keras and adaptive data splitting). The first method involved random data arrangement, followed by dataset splitting with a 95% to 5% ratio, to assess the impact of data randomness on the performance of the trained model. The second data management mechanism, which is one of our contributions in this research work, organized the scattering and absorbance response values of the metasurface absorbing structure in descending order and was designed to assess the influence of the data arrangement, particularly for sparse datasets. Note that the design datasets were generated through comprehensive electromagnetic full-wave simulations, ensuring an accurate representation of metasurface behavior.
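As we read it (cf. Figure 5 in Section 4), the adaptive mechanism ranks each design case by the sum of its sampled response values before splitting. A minimal sketch, with variable names as assumptions:

```python
# A minimal sketch of the adaptive descending-order split, as we read it.
import numpy as np

def adaptive_split(X, Y, train_ratio=0.95):
    # Y has shape (n_cases, 1001): one sampled response curve per case.
    order = np.argsort(Y.sum(axis=1))[::-1]   # descending by summed response
    X_s, Y_s = X[order], Y[order]
    n_train = int(train_ratio * len(X_s))     # 95%/5% split, per the text
    return X_s[:n_train], Y_s[:n_train], X_s[n_train:], Y_s[n_train:]
```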

3. Metasurface Absorber Structure Model

Figure 4 shows the proposed metasurface unit cell of two split-ring resonators (SRRs) as part of the absorbing structure. It consists of two edge-coupled concentric circular metallic rings with two gap cuts (slits). The square-shaped structure is mounted on a Rogers RT5880 substrate ($\epsilon_r = 2.2$, $\tan\delta = 0.0009$) with a thickness of $t_s = 1.57$ mm and is backed by a metallic ground layer. The metallic parts of the structure are made of copper. It is important to note that this proposed metasurface structure differs from classical edge-coupled SRRs due to the dependence of the two opposite cuts on the angular positions $\theta_1$ and $\theta_2$. In other words, the two concentric rings are rotational because of their dependence on these angles. Thus, the deployed unit cell results in an asymmetric SRR structure, which has not been explored in deep learning studies concerning such a metasurface absorbing structure.
Within the 3D numerical structure model, periodic unit cell boundary conditions are enforced in the x and y directions, while open-space boundary conditions are applied along the z direction. To excite the metasurface absorber structure, Floquet ports were assigned along the two walls in the ±z directions.
This developed metasurface structure was used to generate, train, and test datasets through the full-wave simulation software Ansys Electronics Desktop 17.2. The design variables were as follows: (a) the unit cell period, $L$, in the x and y directions; (b) the radii of the outer and inner rings, $R_1$ and $R_2$, respectively; (c) the widths of the rings, $W_1$ and $W_2$; (d) the two slit cuts ($Cu_1$, $Cu_2$); and (e) their angular positions, $\theta_1$ and $\theta_2$. Thus, a total of 10 design variables were considered in this research in order to generate 4000 random samples of the reflection coefficient $S_{11}$ and absorbance responses, with 1001 data points each, which, together with the design parameters, served as inputs to the DL model. In another case study, a dataset of 7000 samples was also generated to study the effect of increasing the dataset size on the prediction precision of the DL model. The dataset was increased to 7000 by including the change in lattice size, $L$, with respect to $R_1$ and $R_2$.
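The resulting dataset layout, as we read it, pairs the 10 design variables of each case with its 1001-point sampled response. A sketch of loading it follows; the file name and storage format are hypothetical assumptions, not the simulator's actual export format:

```python
# A sketch of the dataset layout implied by the text.
import pandas as pd

df = pd.read_csv("metasurface_sweep.csv")   # hypothetical simulator export
X = df.iloc[:, :10].to_numpy()              # 10 design variables per case
Y = df.iloc[:, 10:].to_numpy()              # 1001 sampled response points
assert Y.shape == (len(df), 1001)
```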

4. Results and Discussions

From the perspective of metasurface absorber design, two important responses are required in the generation of the dataset: the scattering reflection coefficient, $|S_{11}|$, and the absorption strength, $A$. This absorbance strength is expressed as
$$A = 1 - |S_{11}|^{2} - |S_{21}|^{2},\qquad(7)$$
where $A$ corresponds to the absorbance strength of the metasurface absorber. Note that, since the proposed absorbing structure is backed by a metallic ground layer, $|S_{21}| = 0$. Thus, there is a direct relationship between $|S_{11}|$ and the absorbance response.
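For reference, Equation (7) with $|S_{21}| = 0$ reduces to a one-line computation; whether the simulator exports $S_{11}$ in dB or as a linear magnitude is an assumption handled by a flag here:

```python
# Absorbance from Equation (7) for the ground-backed absorber (|S21| = 0).
import numpy as np

def absorbance(s11: np.ndarray, s11_in_db: bool = True) -> np.ndarray:
    # Convert from dB to linear magnitude if needed (assumption: dB export).
    mag = 10 ** (s11 / 20) if s11_in_db else np.abs(s11)
    return 1.0 - mag ** 2
```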
In this section, we present the results of the trained ACDL model. For convenience, the initial dataset was uniformly distributed at random. To partition this dataset into training and testing subsets, we used the Keras split function, adhering to a predefined split ratio of 95% for training and 5% for testing. This approach gave us valuable information on the effects of random dataset organization and its influence on the model's performance. The second method of dataset management followed our proposed adaptive data-splitting mechanism, which is tailored to address the specific challenges posed by sparse datasets. In this method, we summed the discrete values of the $S_{11}$ response of each structure arrangement. Then, we sorted the dataset in two orders, descending and ascending, to compare the effect of the data arrangement on the prediction accuracy of the cascaded network. Figure 5 illustrates the proposed concept for the descending-order approach.
Figure 6 shows the significance of the proposed adaptive splitting method in this analysis. One standout observation is the consistent superiority of the adaptive method in terms of validation errors compared to the Keras method. As illustrated in Table 1, the adaptive method consistently delivered lower validation losses across all DNN layers, with DNN1 and DNN2 exhibiting significantly lower validation losses of 0.024 and 0.020, respectively, while the Keras method trailed behind with validation losses of 0.049 and 0.044, respectively. This sustained advantage in reducing validation errors accentuates the ability of the adaptive method to ensure the reliability and precision of deep neural network models. Furthermore, it should be noted that the training loss under the adaptive method was marginally higher, as shown in Table 1, with a training loss of 0.15 compared to 0.032 with the Keras method for the third DNN layer. However, given the importance of validation accuracy in real-world applications, this minor increase in training loss was outweighed by the significant gain in validation performance. This emphasizes the importance of carefully weighing the trade-offs when selecting the appropriate dataset splitting method. From the aforementioned results and discussion, it is apparent that employing a cascaded neural network with only two layers is the optimal choice for designing multi-resonant metasurface absorbers, offering a balance between performance and efficiency. This finding is especially significant, as it sets the stage for further investigation when the dataset is increased from 4000 to 7000 samples, where the scalability and robustness of this approach can be explored.
Figure 7 and Figure 8 present a comparison between the actual design responses and the predicted responses from each layer with the Keras splitting method, in terms of $|S_{11}|$ and the absorbance strength, respectively. Clearly, a significant amount of noise was generated by the first layer of the DNN model with Keras splitting. This noise was reduced by refining the data within the second layer, due to the change in the data input to the model and enhancements in the gradient calculation. Regarding the hyperparameters of the second DNN layer, due to the small values of the gradient vector at this layer, the Adam parameter $\beta_1$ was changed from 0.9 to 0.95 in order to make the model more numerically stable.
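In Keras, this stabilization amounts to overriding the optimizer's default first-moment decay; the learning rate value shown here is an assumption:

```python
# Sketch of the hyperparameter adjustment described above: beta_1 raised
# from its 0.9 default to 0.95 for the second DNN.
from tensorflow.keras.optimizers import Adam

opt_dnn2 = Adam(learning_rate=1e-4, beta_1=0.95)  # learning rate assumed
```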
Figure 9 and Figure 10 present a comparison between the actual design responses and the predicted responses from each layer with the proposed splitting method, in terms of $|S_{11}|$ and the absorbance strength. Interestingly, the prediction of the actual performance was greatly improved in terms of both the absorption response and the noise produced by the DL model, specifically with two cascaded layers of the deep neural network.
On careful examination of the prediction samples (Figure 8 and Figure 10), a clear distinction in prediction quality emerges. Specifically, when assessing predictive accuracy in terms of generated noise, where values exceeding zero dB are of particular significance, the dataset organized through the adaptive data sorting method exhibits a slight advantage. This advantage becomes notably pronounced when two cascaded deep neural network layers are employed. In essence, the adaptive data sorting method demonstrates superior predictive stability and response accuracy in scenarios where minimizing prediction noise is a crucial criterion.
Next, we investigated the impact of the dataset size by increasing the dataset from 4000 to 7000 samples. As shown in Figure 11 and the findings in Table 2, significant reductions in validation errors can be seen with the adaptive splitting method as compared to the Keras method across all DNN layers.
From Table 2, we can see that the validation losses with the adaptive method were 0.0067 and 0.006 for the DNN1 and DNN2 layers, respectively, while higher values of 0.077 and 0.0073 resulted from the Keras splitting method for DNN1 and DNN2, respectively. From these results, the proposed splitting mechanism showed higher precision and accuracy as the dataset population increased. Moreover, deploying only two layers within the DNN model turned out to provide the optimal performance in terms of training and validation losses, whether the dataset size was 7000 or 4000.
In order to verify the results of the suggested descending-order data sorting mechanism, a comparison was made with ascending-order sorting. For convenience, the same dataset of 4000 cases was used. Figure 12 shows the predicted absorbance responses from the descending-order data sorting and compares them to those of the ascending order. Clearly, the prediction accuracy of ascending-order sorting was lower than that of the proposed descending sorting, since not all resonance peaks of the absorber were detected. This was expected, since ascending order places large pools of sparse data first.
On the other hand, the predicted absorbance peaks when sorting the data in descending order were all comparable to the actual absorbance response (see the red curve). Another important metric is the prediction accuracy of the model under ascending- or descending-order data sorting. Figure 13 illustrates that sorting the data in ascending order reduces the prediction accuracy of the trained DL model by almost 20% compared to descending order.
Next, we investigated the prediction performance and precision of the developed DL model with another splitting technique, the bootstrap method. Figure 14 shows the prediction responses from the bootstrap method and from our proposed descending data-sorting order, alongside the actual response of the absorber. Interestingly, the predicted response from the bootstrap method showed robustness in predicting the absorber performance, with the absorber's resonances well predicted. Comparing the accuracy of the DL model trained with the bootstrap method against our proposed data sorting (Figure 14), the bootstrap method achieved a prediction accuracy almost 5% lower than the proposed data sorting technique but almost 15% higher than the ascending data order approach.
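For context, a textbook bootstrap split draws the training set with replacement and uses the out-of-bag cases for testing; the paper does not detail its exact bootstrap variant, so the sketch below is an assumption:

```python
# A minimal sketch of a bootstrap split (textbook variant, assumed).
import numpy as np

def bootstrap_split(X, Y, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    train_idx = rng.integers(0, n, size=n)           # sample with replacement
    oob_idx = np.setdiff1d(np.arange(n), train_idx)  # out-of-bag cases
    return X[train_idx], Y[train_idx], X[oob_idx], Y[oob_idx]
```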
To empirically assess the performance of the cascaded DNN model trained on the 7000-sample dataset, we executed a series of numerical experiments employing the two distinct data-splitting methodologies, generating comprehensive test samples tailored to evaluate the effectiveness of each method. Upon augmenting the dataset from 4000 to 7000 samples, the evaluation of structural absorbance was extended through the consistent application of data partitioning and the three-tier cascaded neural network framework. The comparative visualizations presented in Figure 15 and Figure 16 illustrate the actual absorbance alongside the predicted absorbance values across the three cascaded layers. For instance, the predicted absorbance from the Keras method provided unrealistic absorption strength values (either above one or even negative; see Figure 15) as compared to the case of our proposed adaptive splitting in Figure 16. Moreover, the deployment of two DNN layers along with adaptive splitting resulted in the best prediction of the absorbance response.
Lastly, we present the prediction accuracy of the proposed DL model with adaptive data splitting and compare it against the case with Keras splitting for the second DNN layer, as shown in Figure 17. We can observe that the proposed model reached a prediction accuracy of 79%, while the accuracy with the Keras data-splitting mechanism was around 54% for the dataset population of 4000. Moreover, the prediction accuracy improved further when increasing the size of the dataset to 7000, where the accuracy reached 94% for our proposed model as compared to 92% with Keras splitting. Hence, the proposed adaptive splitting method showed better prediction accuracy than the model with the Keras splitting feature.
Table 3 provides a comparison between various state-of-the-art artificial intelligence-based models for the design and synthesis of metasurface absorbers at different frequency regimes. In terms of structural geometry, our proposed structure has not been explored in previous studies, where the generated dataset depends on the angular locations of the SRRs' split gaps, i.e., it is a function of the angles $\theta_1$ and $\theta_2$. Moreover, our model explored a mechanism for data pre-processing via an adaptive data-splitting mechanism, as illustrated in Figure 5. This technique showed significant convergence and accuracy for the trained and validated dataset while considering only two deep neural network layers, as evident from Table 1 and Table 2, where an accuracy of 94% was achieved for the 7000-sample dataset with the proposed DL model and its adaptive data-splitting and sorting mechanism.

5. Conclusions

In this paper, we developed a deep neural network model with a novel data-splitting technique, termed the adaptive cascaded deep learning model. We compared it rigorously against the conventional Keras and bootstrap split functions. The results clearly show that our proposed data-splitting method significantly improves the quality of metasurface response predictions, as demonstrated through graphical representations and numerical results, including the prediction accuracy.
Furthermore, we explored the impact of dataset size on the design and performance prediction of multi-resonance metasurface absorbers. We found that a dataset of 4000 samples strikes an optimal balance, achieving high accuracy in metasurface response predictions. Enlarging the dataset, however, may introduce unwanted noise due to the nature of multi-resonance metasurface absorber responses.
Additionally, we investigated the use of cascaded neural networks to enhance the quality of scattering and absorption response predictions. While a valuable enhancement, it requires careful attention due to the challenge posed by small gradients, which can lead to numerical instability. We highlighted the importance of fine-tuning the optimizer hyperparameters, such as the learning rate and β 1 within the Adam optimizer.
Importantly, we also thoroughly explored the impact of sparse data on the trained DL model by assessing its performance with the proposed descending data sorting, achieving higher prediction accuracy compared to ascending order data sorting and the bootstrap method.
Lastly, our ACDL model achieved an impressive 94% prediction accuracy for a sufficient dataset of 7000 samples. We believe that our proposed data-splitting technique can be integrated into various artificial intelligence models to further enhance the design process and improve the predictions of metasurface absorbers.

Author Contributions

Conceptualization, H.A.A. and M.M.B.-S.; methodology, H.A.A., M.M.B.-S., L.K. and H.A.-L.; software, H.A.A.; validation, H.A.A.; formal analysis, H.A.A. and M.M.B.-S.; investigation, H.A.A.; resources, H.A.A., M.M.B.-S., L.K. and H.A.-L.; data curation, H.A.A. and M.M.B.-S.; writing, H.A.A.; writing—review and editing, H.A.A., M.M.B.-S., L.K. and H.A.-L.; visualization, H.A.A., M.M.B.-S., L.K. and H.A.-L.; supervision, M.M.B.-S., L.K. and H.A.-L.; project administration, M.M.B.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tan, T.; Yan, Z.; Zou, H.; Ma, K.; Liu, F.; Zhao, L.; Peng, Z.; Zhang, W. Renewable energy harvesting and absorbing via multi-scale metamaterial systems for Internet of things. Appl. Energy 2019, 254, 113717.
  2. Sabban, A. Wearable circular polarized antennas for health care, 5G, energy harvesting, and IoT systems. Electronics 2022, 11, 427.
  3. Kjellby, R.A.; Cenkeramaddi, L.R.; Frøytlog, A.; Lozano, B.B.; Soumya, J.; Bhange, M. Long-range & self-powered IoT devices for agriculture & aquaponics based on multi-hop topology. In Proceedings of the 2019 IEEE 5th World Forum on Internet of Things (WF-IoT), Limerick, Ireland, 15–18 April 2019; pp. 545–549.
  4. Ma, W.; Cheng, F.; Liu, Y. Deep-learning-enabled on-demand design of chiral metamaterials. ACS Nano 2018, 12, 6326–6334.
  5. Malkiel, I.; Mrejen, M.; Nagler, A.; Arieli, U.; Wolf, L.; Suchowski, H. Plasmonic nanostructure design and characterization via deep learning. Light Sci. Appl. 2018, 7, 60.
  6. Nadell, C.C.; Huang, B.; Malof, J.M.; Padilla, W.J. Deep learning for accelerated all-dielectric metasurface design. Opt. Express 2019, 27, 27523–27535.
  7. Jiang, J.; Sell, D.; Hoyer, S.; Hickey, J.; Yang, J.; Fan, J.A. Free-form diffractive metagrating design based on generative adversarial networks. ACS Nano 2019, 13, 8872–8878.
  8. An, S.; Fowler, C.; Zheng, B.; Shalaginov, M.Y.; Tang, H.; Li, H.; Zhou, L.; Ding, J.; Agarwal, A.M.; Rivero-Baleine, C.; et al. A deep learning approach for objective-driven all-dielectric metasurface design. ACS Photonics 2019, 6, 3196–3207.
  9. Shalaginov, M.Y.; Campbell, S.D.; An, S.; Zhang, Y.; Ríos, C.; Whiting, E.B.; Wu, Y.; Kang, L.; Zheng, B.; Fowler, C.; et al. Design for quality: Reconfigurable flat optics based on active metasurfaces. Nanophotonics 2020, 9, 3505–3534.
  10. An, S.; Zheng, B.; Tang, H.; Shalaginov, M.Y.; Zhou, L.; Li, H.; Kang, M.; Richardson, K.A.; Gu, T.; Hu, J.; et al. Multifunctional metasurface design with a generative adversarial network. Adv. Opt. Mater. 2021, 9, 2001433.
  11. Fang, Z.; Zhan, J. Deep physical informed neural networks for metamaterial design. IEEE Access 2019, 8, 24506–24513.
  12. Zhelyeznyakov, M.V.; Brunton, S.; Majumdar, A. Deep learning to accelerate scatterer-to-field mapping for inverse design of dielectric metasurfaces. ACS Photonics 2021, 8, 481–488.
  13. Tanriover, I.; Hadibrata, W.; Aydin, K. Physics-based approach for a neural networks enabled design of all-dielectric metasurfaces. ACS Photonics 2020, 7, 1957–1964.
  14. Ma, W.; Liu, Z.; Kudyshev, Z.A.; Boltasseva, A.; Cai, W.; Liu, Y. Deep learning for the design of photonic structures. Nat. Photonics 2021, 15, 77–90.
  15. Sajedian, I.; Kim, J.; Rho, J. Finding the optical properties of plasmonic structures by image processing using a combination of convolutional neural networks and recurrent neural networks. Microsyst. Nanoeng. 2019, 5, 27.
  16. Zhang, Q.; Liu, C.; Wan, X.; Zhang, L.; Liu, S.; Yang, Y.; Cui, T.J. Machine-learning designs of anisotropic digital coding metasurfaces. Adv. Theory Simul. 2019, 2, 1800132.
  17. Ding, W.; Chen, J.; Li, X.M.; Xi, X.; Ye, K.P.; Wu, H.B.; Fan, D.G.; Wu, R.X. Deep learning assisted heat-resistant metamaterial absorber design. In Proceedings of the 2021 International Conference on Microwave and Millimeter Wave Technology (ICMMT), Nanjing, China, 23–26 May 2021; pp. 1–3.
  18. Liu, Z.; Raju, L.; Zhu, D.; Cai, W. A hybrid strategy for the discovery and design of photonic structures. IEEE J. Emerg. Sel. Top. Circuits Syst. 2020, 10, 126–135.
  19. Donda, K.; Zhu, Y.; Merkel, A.; Fan, S.W.; Cao, L.; Wan, S.; Assouar, B. Ultrathin acoustic absorbing metasurface based on deep learning approach. Smart Mater. Struct. 2021, 30, 085003.
  20. Qiu, T.; Shi, X.; Wang, J.; Li, Y.; Qu, S.; Cheng, Q.; Cui, T.; Sui, S. Deep learning: A rapid and efficient route to automatic metasurface design. Adv. Sci. 2019, 6, 1900128.
  21. Ghorbani, F.; Beyraghi, S.; Shabanpour, J.; Oraizi, H.; Soleimani, H.; Soleimani, M. Deep neural network-based automatic metasurface design with a wide frequency range. Sci. Rep. 2021, 11, 7102.
  22. Niu, C.; Phaneuf, M.; Qiu, T.; Mojabi, P. A deep learning based approach to design metasurfaces from desired far-field specifications. IEEE Open J. Antennas Propag. 2023, 4, 641–653.
  23. Mansouree, M.; Arbabi, A. Metasurface design using level-set and gradient descent optimization techniques. In Proceedings of the 2019 International Applied Computational Electromagnetics Society Symposium (ACES), Miami, FL, USA, 14–19 April 2019; pp. 1–2.
  24. Campbell, S.D.; Whiting, E.B.; Werner, D.H.; Werner, P.L. High-Performance Metasurfaces Synthesized via Multi-Objective Optimization. In Proceedings of the 2019 International Applied Computational Electromagnetics Society Symposium (ACES), Miami, FL, USA, 14–19 April 2019; pp. 1–2.
  25. Campbell, S.D.; Zhu, D.Z.; Whiting, E.B.; Nagar, J.; Werner, D.H.; Werner, P.L. Advanced multi-objective and surrogate-assisted optimization of topologically diverse metasurface architectures. Metamat. Metadev. Metasyst. 2018, 10719, 43–48.
  26. Elsawy, M.; Gobé, A.; Leroy, G.; Lanteri, S.; Genevet, P. Advanced computational framework for the design of ultimate performance metasurfaces. Smart Photonic Optoelectron. Integr. Circuits 2023, 12425, 34–37.
  27. Adadi, A. A survey on data-efficient algorithms in big data era. J. Big Data 2021, 8, 24.
  28. Papernot, N.; McDaniel, P.; Goodfellow, I.; Jha, S.; Celik, Z.B.; Swami, A. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, Abu Dhabi, United Arab Emirates, 2–6 April 2017; pp. 506–519.
  29. Jiang, J.; Sell, D.; Hoyer, S.; Hickey, J.; Yang, J.; Fan, J.A. Data-driven metasurface discovery. arXiv 2018, arXiv:1811.12436.
  30. Yoon, G.; Tanaka, T.; Zentgraf, T.; Rho, J. Recent progress on metasurfaces: Applications and fabrication. J. Phys. D Appl. Phys. 2021, 54, 383002.
  31. Liu, X.; Cheng, G.; Wu, J.X. Noise and uncertainty management in intelligent data modeling. In Proceedings of the Twelfth AAAI National Conference on Artificial Intelligence, Seattle, WA, USA, 31 July–4 August 1994; pp. 263–268.
  32. Garcia, L.P.F.; de Leon Ferreira de Carvalho, A.C.P.; Lorena, A.C. Noise detection in the meta-learning level. Neurocomputing 2016, 176, 14–25.
  33. Khan, H.; Wang, X.; Liu, H. A study on relationship between prediction uncertainty and robustness to noisy data. Int. J. Syst. Sci. 2023, 54, 1243–1258.
  34. Karimi, B.; Wai, H.T.; Moulines, É.; Li, P. Minimization by Incremental Stochastic Surrogate Optimization for Large Scale Nonconvex Problems. In Proceedings of the International Conference on Algorithmic Learning Theory, Paris, France, 29 March–1 April 2022.
  35. Al Ajmi, H.; Bait-Suwailam, M.M.; Khriji, L. A Comparison Study of Deep Learning Algorithms for Metasurface Harvester Designs. In Proceedings of the 2023 International Conference on Intelligent Computing, Communication, Networking and Services (ICCNS), Valencia, Spain, 19–22 June 2023; pp. 74–78.
  36. Heaton, J. Ian Goodfellow, Yoshua Bengio, and Aaron Courville: Deep learning: The MIT Press, 2016, 800 pp, ISBN: 0262035618. Genet. Program. Evolvable Mach. 2018, 19, 305–307.
  37. Ahsan, M.M.; Mahmud, M.P.; Saha, P.K.; Gupta, K.D.; Siddique, Z. Effect of data scaling methods on machine learning algorithms and model performance. Technologies 2021, 9, 52.
  38. Raju, V.G.; Lakshmi, K.P.; Jain, V.M.; Kalidindi, A.; Padma, V. Study the influence of normalization/transformation process on the accuracy of supervised classification. In Proceedings of the 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 20–22 August 2020; pp. 729–735.
  39. Nwankpa, C.; Ijomah, W.; Gachagan, A.; Marshall, S. Activation functions: Comparison of trends in practice and research for deep learning. arXiv 2018, arXiv:1811.03378.
  40. Dietterich, T.G. Ensemble methods in machine learning. In Proceedings of the International Workshop on Multiple Classifier Systems, Cagliari, Italy, 21–23 June 2000; pp. 1–15.
  41. Apicella, A.; Donnarumma, F.; Isgrò, F.; Prevete, R. A survey on modern trainable activation functions. Neural Netw. 2021, 138, 14–32.
  42. Sun, S.; Cao, Z.; Zhu, H.; Zhao, J. A survey of optimization methods from a machine learning perspective. IEEE Trans. Cybern. 2019, 50, 3668–3681.
  43. Fahlman, S. Faster-Learning Variations on Back-Propagation: An Empirical Study. In Proceedings of the 1988 Connectionist Models Summer School. Available online: https://api.semanticscholar.org/CorpusID:238073001 (accessed on 6 January 2024).
  44. Chauvin, Y.; Rumelhart, D.E. Backpropagation: Theory, Architectures, and Applications; Psychology Press: London, UK, 2013.
Figure 1. Neural network feedforward principle with a single neuron.
Figure 2. A flowchart diagram of the proposed adaptive cascaded deep learning model.
Figure 3. A comprehensive flowchart diagram showing the forward flow of the cascaded neural network under study after the data-splitting mechanism.
Figure 4. The proposed metasurface energy absorber unit cell structure under study with its geometrical parameters. Note that light-blue areas represent metallization layers.
Figure 5. Proposed technique for the adaptive data-splitting method. Note that the data sorting is based solely on the maximum sum of all sample values of each case.
Figure 6. Validation loss for the 4000-sample dataset, with data splitting using the Keras split function and our proposed adaptive data merging technique.
Figure 7. Comparing the actual response of $S_{11}$ and the predicted responses of the three cascaded DNNs with the Keras split function and the 4000-sample dataset. The actual metasurface absorber design parameters were as follows: $W_1 = W_2 = 1.4$ mm, $\theta_1 = 6.02$°, $\theta_2 = 46.0$°, $Cu_1 = 0.13$ mm, $Cu_2 = 0.9$ mm, $R_1 = 14$ mm, $R_2 = 8$ mm, and $L = 30$ mm.
Figure 8. Comparing the actual absorbance and predicted absorbance of the three cascaded DNNs with the Keras split function and the 4000-sample dataset. The actual metasurface absorber design parameters were as follows: $W_1 = W_2 = 1.4$ mm, $\theta_1 = 6.02$°, $\theta_2 = 46.0$°, $Cu_1 = 0.13$ mm, $Cu_2 = 0.9$ mm, $R_1 = 14$ mm, $R_2 = 8$ mm, and $L = 30$ mm.
Figure 9. Comparing the actual response of $S_{11}$ and the predicted responses of the three cascaded DNNs with our proposed adaptive data merging technique and the 4000-sample dataset. The actual metasurface absorber design parameters were as follows: $W_1 = W_2 = 1.4$ mm, $\theta_1 = 6.02$°, $\theta_2 = 46.0$°, $Cu_1 = 0.6$ mm, $Cu_2 = 0.2$ mm, $R_1 = 14$ mm, $R_2 = 8$ mm, and $L = 30$ mm.
Figure 10. Comparing the actual absorbance and the predicted absorbance of the three cascaded DNNs with the adaptive split function and the 4000-sample dataset. The actual metasurface absorber design parameters were as follows: $W_1 = W_2 = 1.4$ mm, $\theta_1 = 6.02$°, $\theta_2 = 46.0$°, $Cu_1 = 0.13$ mm, $Cu_2 = 0.9$ mm, $R_1 = 14$ mm, $R_2 = 8$ mm, and $L = 30$ mm.
Figure 11. Validation loss for the 7000-sample dataset, with data split using the Keras split function and our proposed adaptive data merging technique.
Figure 12. Comparing the actual absorbance and predicted absorbance, between descending and ascending order, of the second DNN. The actual metasurface absorber design parameters were as follows: $W_1 = W_2 = 1.4$ mm, $\theta_1 = 6.02$°, $\theta_2 = 46.0$°, $Cu_1 = 0.13$ mm, $Cu_2 = 0.9$ mm, $R_1 = 14$ mm, $R_2 = 8$ mm, and $L = 30$ mm.
Figure 13. Comparing the prediction accuracy between the trained DL models with descending- and ascending-order data sorting.
Figure 14. Comparing the actual absorbance and predicted absorbance, between descending-order data sorting and the bootstrap method, of the second DNN. The actual metasurface absorber design parameters were as follows: $W_1 = W_2 = 1.4$ mm, $\theta_1 = 6.02$°, $\theta_2 = 46.0$°, $Cu_1 = 0.13$ mm, $Cu_2 = 0.9$ mm, $R_1 = 14$ mm, $R_2 = 8$ mm, and $L = 30$ mm.
Figure 15. Comparing the actual absorbance and predicted absorbance of the three cascaded DNNs with the Keras split function and the 7000-sample dataset. The actual metasurface absorber design parameters were as follows: $W_1 = 1.33$ mm, $W_2 = 1.47$ mm, $\theta_1 = 1$°, $\theta_2 = 123.56$°, $Cu_1 = 0.63$ mm, $Cu_2 = 0.92$ mm, $R_1 = 15.71$ mm, $R_2 = 13.71$ mm, and $L = 30$ mm.
Figure 16. Comparing the actual absorbance and predicted absorbance of the three cascaded DNNs with our proposed adaptive data merging technique and the 7000-sample dataset. The actual metasurface absorber design parameters were as follows: $W_1 = 1.33$ mm, $W_2 = 1.47$ mm, $\theta_1 = 1$°, $\theta_2 = 82.71$°, $Cu_1 = 0.63$ mm, $Cu_2 = 0.92$ mm, $R_1 = 15.71$ mm, $R_2 = 13.71$ mm, and $L = 30$ mm.
Figure 17. Accuracy of the cascaded neural network with two layers as a function of epochs, with and without the adaptive data splitting.
Table 1. Training (MSLE) and validation (MSE) losses with the 4000-sample dataset using the proposed ACDL model after 150 epochs.

Dataset Splitting Method | Training Loss (DNN1 / DNN2 / DNN3) | Validation Loss (DNN1 / DNN2 / DNN3)
Keras                    | 0.031 / 0.033 / 0.032              | 0.049 / 0.044 / 0.046
Adaptive                 | 0.037 / 0.039 / 0.15               | 0.024 / 0.020 / 0.25
Table 2. Training (MSLE) and validation (MSE) losses with the 7000-sample dataset using the proposed ACDL model after 150 epochs.

Dataset Splitting Method | Training Loss (DNN1 / DNN2 / DNN3) | Validation Loss (DNN1 / DNN2 / DNN3)
Keras                    | 0.051 / 0.049 / 0.047              | 0.077 / 0.0073 / 0.0095
Adaptive                 | 0.049 / 0.045 / 0.05               | 0.0067 / 0.006 / 0.081
Table 3. Comparison of state-of-the-art artificial intelligence models for metasurface absorber designs.

Reference      | Structure                                        | Machine Learning Model | Accuracy                   | Model Complexity
[18]           | Different shapes                                 | GAN                    | 95%                        | Complex structure; complex dataset preparation (based on GAN); large dataset requirement
[19]           | Acoustic metasurface                             | CNN                    | -                          | Complex dataset preparation (based on CNN)
[16]           | Pixelated metasurface                            | CNN                    | 90.5%                      | Complex structure; complex dataset preparation (based on CNN)
[20]           | Pixelated metasurface                            | CNN                    | 90%                        | Requires significant data preprocessing
[21]           | Eight-ring-pattern metasurface                   | CNN                    | 90%                        | Complex structure; complex dataset preparation (based on CNN)
[22]           | Dipole antenna based on metasurfaces             | GAN                    | -                          | Complex structure; complex dataset preparation (based on GAN)
Proposed model | Edge-coupled SRR with automated cut gap position | DNN                    | 94% (7000-sample dataset)  | Straightforward dataset management mechanism; ease of integration with postprocessing data from EM simulators; simple design structure to implement