Article
Peer-Review Record

Effective Feature Selection Method for Deep Learning-Based Automatic Modulation Classification Scheme Using Higher-Order Statistics

Appl. Sci. 2020, 10(2), 588; https://doi.org/10.3390/app10020588
by Sang Hoon Lee, Kwang-Yul Kim and Yoan Shin *
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 30 June 2019 / Revised: 3 January 2020 / Accepted: 8 January 2020 / Published: 13 January 2020

Round 1

Reviewer 1 Report

The paper describes the interesting idea of using higher-order cumulant features for AMC using neural network-based algorithms (which is not totally new, since a related idea was previously presented in [17]), but the paper has some basic flaws in the presentation, in the description of the novelty of the results, and in how the results compare to previous methods. The idea is described clearly in the Introduction, but on page 3 the authors suddenly start summarizing the results from [16,18] (in Table 1) without defining the parameters C_(m,n), which makes the reading very confusing. Also, in Tables 3, 4, 5, 8, 10 the C_(m,n) parameters are written as a function of the SNR, but the captions specify neither this nor that the SNR is measured in dB (and this is confusing as well).

Furthermore, it is not very clear how much novelty is contained in the current method with respect to the methods mentioned in the references.

Last, the simulation results could be commented on in more detail and compared to previous results, noting where the numerical results are better and by how much.

 

Author Response

We truly appreciate the reviewers’ valuable comments. We did our best to reflect the comments, which have doubtlessly improved our manuscript. We also thoroughly checked and corrected the manuscript. Our detailed responses to the reviewers’ comments are given below. We have reflected all the answers in the revised manuscript, using the "Track Changes" function of Microsoft Word.


<Reviewer #1>

Comment #1. The idea is described clearly in the Introduction, but on page 3 the authors suddenly start summarizing the results from [16,18] (in Table 1) without defining the parameters C_(m,n), which makes the reading very confusing.

 

Response #1. In order to clarify the meaning of Table 1, we changed the caption of Table 1 and added the definition of the cumulants to the revised manuscript as follows.

 

- Page 3, Section 2.1, line 105

 

Table 1 summarizes the expressions for the second-, fourth- and sixth-order cumulants [16,18] used in this paper.

 

Table 1. The expressions for various cumulants

 

- Page 3, Section 2.1, line 110

 

Here, C_(m,n) is the (m,n)-th order cumulant and M_(m,n) is the (m,n)-th order moment of the received signal r[n].
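For readers less familiar with these statistics, the following is a minimal sketch (ours, not the manuscript's code) of how such moment and cumulant estimates are typically computed from received samples. The closed-form relations shown (e.g., for C_(4,0) and C_(4,2)) are the standard ones for complex baseband signals; the paper's Table 1 should be taken as authoritative for the exact set it uses.

    import numpy as np

    def moment(r, m, n):
        # Sample estimate of M_(m,n) = E[ r^(m-n) * conj(r)^n ]
        return np.mean(r**(m - n) * np.conj(r)**n)

    def cumulants(r):
        # Second- and fourth-order cumulants from their moment expressions
        m20, m21 = moment(r, 2, 0), moment(r, 2, 1)
        m40, m41, m42 = moment(r, 4, 0), moment(r, 4, 1), moment(r, 4, 2)
        return {
            "C20": m20,                      # = M_(2,0)
            "C21": m21,                      # = M_(2,1), the signal power
            "C40": m40 - 3 * m20**2,
            "C41": m41 - 3 * m20 * m21,
            "C42": m42 - np.abs(m20)**2 - 2 * m21**2,
        }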

 

Comment #2. Also, in Tables 3, 4, 5, 8, 10 the C_(m,n) parameters are written as a function of the SNR, but the caption does not specify it, neither does it specify that the SNR is measured in dB (and this is confusing as well).

 

Response #2. In order to clarify the meaning, we modified the captions in Tables 3, 4, 5, 8, 10 and the expressions for the SNR scale according to the reviewer’s comment.

 

Comment #3. Last, the simulation results could be commented on in more detail and compared to previous results, noting where the numerical results are better and by how much.

 

Response #3. In this paper, we propose an efficient feature extraction method to reduce the training time while maintaining the AMC performance. In order to evaluate the proposed method, the representative value was selected from various cumulants by using each method, and two sets of simulations were conducted. In the first set of simulations, in order to find the effective feature values, we ranked the cumulants based on the calculation from each method. Then, we sequentially measured the classification performance by excluding the feature values one by one. The simulation results show that the proposed method is very effective at identifying the significant feature values. Thus, we conclude that the proposed method can extract the representative feature values more efficiently than the conventional methods. In the second set of simulations, in order to measure the classification performance according to the group, the cumulants were divided into three groups (top, middle, and bottom) based on the ranking obtained from the efficient feature extraction method. The cumulants in each group were used as the input data of the DNN algorithm to measure the classification performance. From the simulation results, since the top group cannot always obtain the best performance in all SNR ranges, the conventional methods seem unstable. However, the proposed method shows not only stable but also better performance in the wide SNR range of -2 dB to 10 dB. Therefore, we conclude that the proposed method is efficient at extracting the input data group. In order to reflect the reviewer’s comment, we added the following sentences in the revised manuscript.
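To make the two procedures above concrete, here is a minimal sketch (ours, not the authors' code). Here features is assumed to be a NumPy array of shape (samples, cumulants), train_dnn and accuracy are hypothetical stand-ins for the paper's DNN training and scoring pipeline, and ranking is assumed to list feature indices from most to least important.

    def sequential_exclusion(features, labels, ranking, train_dnn, accuracy):
        # First set of simulations: re-train and re-score while dropping
        # the least important remaining feature one at a time.
        scores = []
        order = list(ranking)              # most to least important
        while order:
            model = train_dnn(features[:, order], labels)
            scores.append(accuracy(model))
            order.pop()                    # exclude the lowest-ranked feature
        return scores

    def split_groups(ranking, size=5):
        # Second set of simulations: top / middle / bottom feature groups
        # (five cumulants each in the paper).
        return ranking[:size], ranking[size:2 * size], ranking[-size:]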

 

- Page 9, Section 4.2, line 265

 

From the simulation results, since the top group cannot always obtain the best performance in all SNR ranges, the conventional methods seem unstable. However, the proposed method shows not only stable but also better performance in the wide SNR range of -2 dB to 10 dB. Even if the same amount of data is exploited, there is a large difference in performance depending on the feature values used. Also, using the highest feature group in a low SNR environment can yield higher performance than using the lowest feature group in a high SNR environment. Therefore, we conclude that the proposed method is very effective at extracting the input data group.

 

- Page 10, Section 5, line 296

 

It is expected that the AMC with high classification performance can be realized with a small amount of computation by extracting the efficient feature values using the proposed method. Thus, we conclude that the proposed method can be considered as a means to improve the performance of the AMC for military communication systems, AMC-based jamming systems, and automatic coding and modulation for commercial wireless communication systems.

Many thanks again for the reviewer’s constructive comments to improve the quality of the revised manuscript.

Please also see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Overall the method and results are soundly conducted and presented. The two main issues I have with the paper are the following:

1. The article needs to be edited more closely for English writing. There were quite a few minor issues throughout the paper that detract a bit from the overall quality of the article.
2. The conclusions are not all that impactful. When I finished the article I was left asking myself "so what?". The significance of the study is lacking just a bit in my opinion. If more on the significance of the proposed method can be presented, this article and study will be much stronger.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

The idea to improve AMC by searching the input data for the most meaningful and informative properties is a good one, and the work could make for an interesting read. That said, there are considerable issues with the written language acting to obfuscate the work.

 * Sentences such as 60-61, 76, 82-84, 202-204, 222, 225-226 require re-writing as they are either nonsensical, misleading or lacking depth.
 * General grammar and sentence structure issues throughout the document make understanding the content challenging.
 * Table and figure captions need to be more thorough. For example, 'Table 7. DNN Parameter' doesn't actually tell the reader very much about what the table represents - especially when there are multiple DNNs mentioned! As another example, in Table 1 it would be good to have a brief description of what M and C are in the caption. The meaning behind the data in Tables 3-5 may be unclear to any reader who does not regularly work on AMC systems.
 * Reference on line 56 appears missing [X].
 * Below Eq. 1 it states that cor(X,Y) is the covariance of X and Y. This does not agree with the equation, where cor(X,Y) appears to be the correlation and c(x,y) appears to be the covariance.
 * All plots and figures are hard to distinguish when printed in black and white.
 * Figure 1 appears to need an arrow from 'Create input data group' to 'DNN algorithm training'. Also, spelling mistakes in figure.
 * Line 222 says 'As shown in Table 7...', wrong table.

Conceptually I like the work; however, I do also wonder how many different trained DNNs were used with each method to produce the data in Figure 3. Training is a somewhat random process, and it is therefore risky to conclude network- or algorithm-scale performance metrics from a single instance of a trained network! Can you show that it wasn't just better for the one DNN you tested it on?

Author Response

Comment #1. Sentences such as 60-61, 76, 82-84, 202-204, 222, 225-226 require re-writing as they are either nonsensical, misleading or lacking depth.

 

Response #1. After a careful examination of the manuscript, since lines 60, 61 and 76 might be unnecessary for the overall flow of this manuscript, we deleted them. Moreover, since the remaining sentences were insufficient and misleading, as pointed out by the reviewer, we changed them in the revised manuscript as follows.

 

- Page 2, Section 1, line 82

 

In the second set of simulations, in order to measure the classification performance according to the group, the cumulants were divided into three groups (top, middle, and bottom) based on the ranking obtained from the efficient feature extraction method. The following is a summary of how each group is divided.

 

Top group: the five most important representative values of each method
Middle group: the five moderately important representative values of each method
Bottom group: the five least important representative values of each method

 

- Page 7, Section 4.1, line 215

 

Table 6 shows the nonlinear activation functions considered in this paper. Since the Softmax function can produce the output in terms of probability, we can calculate the accuracy for each classified signal.
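As a small illustration of why the Softmax output can be read as a probability (this snippet is ours, not the manuscript's): it maps the output layer's scores to non-negative values that sum to one.

    import numpy as np

    def softmax(z):
        e = np.exp(z - np.max(z))   # subtract the max for numerical stability
        return e / e.sum()

    # e.g. scores for three modulation classes -> [0.659, 0.242, 0.099]
    probs = softmax(np.array([2.0, 1.0, 0.1]))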

 

- Page 8, Section 4.2, line 247

 

As shown in Table 8, it is observed that the proposed method shows better performance in extracting the efficient features compared to the conventional techniques.

 

- Page 8, Section 4.2, line 248

 

In [14], only the second and the fourth cumulants are considered, and the variation of characteristic values is small even in low SNR environments.

 

Comment #2. General grammar and sentence structure issues throughout the document make understanding the content challenging.

 

Response #2. In accordance with the reviewer’s comment, we did our best to thoroughly proofread the manuscript and refine the expression.

 

Comment #3. Table and figure captions need to be more thorough. For example 'Table 7. DNN Parameter' doesn't actually tell the reader very much about what the table represents - especially when there are multiple DNNs mentioned.

 

Response #3. In accordance with the reviewer's comment, we changed the figure and table captions. Since Table 7 summarizes the parameters for the feature extraction, we modified the caption of Table 7 to “The DNN parameters for optimal feature extraction and performance verification”. Table 9 summarizes the parameters for the feature group extraction, so we modified the caption of Table 9 to “The DNN parameters for optimal feature group extraction and performance verification.”

 

Comment #4. In Table 1 it would be good to have a brief description of what M and C are in the caption.

 

Response #4. C_(m,n) is the cumulant used as the feature value in this paper, and M_(m,n) is the higher-order moment required to extract C_(m,n). The second-, fourth-, and sixth-order cumulant expressions are shown in Table 1. In order to reflect the reviewer’s comment, we added the following sentence under Table 1.

 

- Page 3, Section 2.1, line 110

 

Here, C_(m,n) is the (m,n)-th order cumulant and M_(m,n) is the (m,n)-th order moment of the received signal r[n].

 

Comment #5. Below Eq. 1 it states that cor(X,Y) is the covariance of X and Y. This does not agree with the equation, where cor(X,Y) appears to be the correlation and c(x,y) appears to be the covariance.

 

Response #5. According to the reviewer’s comment, we corrected the relevant sentence as follows.

 

- Page 4, Section 2.2, line 125

 

where c(X,Y) is the covariance of the variables X and Y. Thus, (1) can be expressed as

 

cor(X,Y) = c(X,Y) / (σ_X σ_Y),                           (2)
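A quick numerical check of the corrected relation (our sketch, using NumPy): the correlation is the covariance normalized by the two standard deviations.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=1000)
    Y = 0.5 * X + rng.normal(size=1000)

    c = np.cov(X, Y)[0, 1]                              # covariance c(X, Y)
    cor = c / (np.std(X, ddof=1) * np.std(Y, ddof=1))   # correlation cor(X, Y)
    assert np.isclose(cor, np.corrcoef(X, Y)[0, 1])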

 

Comment #6. Figure 1 appears to need an arrow from 'Create input data group' to 'DNN algorithm training'. Also, spelling mistakes in figure. Line 222 says 'As shown in Table 7...', wrong table.

 

Response #6. In accordance with the reviewer's comment, we added an arrow to Figure 1 and corrected the typo.

 

Comment #7. Conceptually, I like the work; however, I do also wonder how many different trained DNNs were used with each method to produce the data in Figure 3. Training is a somewhat random process, and it is therefore risky to conclude network- or algorithm-scale performance metrics from a single instance of a trained network! Can you show that it wasn't just better for the one DNN you tested it on?

 

Response #7. In this paper, we adopted the standard procedures in deep learning research to optimize the DNN algorithm. We set up the hyperparameters and trained the DNN algorithms through the following procedures. In the first step, we adjusted the hyperparameters using the backpropagation algorithm based on gradient descent and applied batch normalization to prevent overfitting during the training. Next, the validation errors were counted and the training was stopped to prevent overfitting when the validation error started to increase. Also, when the validation errors did not decrease anymore, we continued to train the system by cutting the learning rate in half. 20% of the data were utilized for validation. In this way, we set and trained the parameters of the DNN. In order to reflect the reviewer’s comment, we added the following sentences in the revised manuscript.

 

- Page 7, Section 4.1, line 219

 

In both sets of simulations, we trained the DNN algorithm several times for hyperparameter optimization. Since the DNN has a very complex structure, it is difficult to find the optimal weighting coefficients in one calculation. Therefore, in this paper we set up the hyperparameters and trained the DNN algorithms through the following standard procedures. In the first step, we adjusted the hyperparameters using the backpropagation algorithm based on gradient descent and applied batch normalization to prevent overfitting during the training. Next, the validation errors were counted and the training was stopped to prevent overfitting when the validation error started to increase. Also, when the validation errors did not decrease anymore, we continued to train by cutting the learning rate in half. 20% of the data were utilized for validation.
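The manuscript does not name a framework, so the following is only a sketch of the stated procedure in Keras: batch normalization in the hidden layers, early stopping on the validation error, halving the learning rate on a plateau, and a 20% validation split. The layer sizes, class count, and placeholder data are assumptions for illustration only.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, callbacks

    x_train = np.random.randn(1000, 15).astype("float32")  # placeholder features
    y_train = np.random.randint(0, 8, size=1000)           # placeholder labels

    model = tf.keras.Sequential([
        layers.Input(shape=(15,)),             # e.g. one input per cumulant
        layers.Dense(128), layers.BatchNormalization(), layers.ReLU(),
        layers.Dense(64), layers.BatchNormalization(), layers.ReLU(),
        layers.Dense(8, activation="softmax"), # one output per modulation class
    ])
    model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=200, validation_split=0.2, callbacks=[
        callbacks.EarlyStopping(monitor="val_loss", patience=10),
        callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5),  # halve LR
    ])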

 

Many thanks again for the reviewer’s constructive comments to improve the quality of the revised manuscript.

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

In this paper, the authors discuss an approach by which to reduce the features utilized by a machine learning algorithm to improve computation time while minimizing the impact on the performance. While the approach has merit, the paper should be significantly polished to better highlight the trade space of the proposed approach against prior work in the area.

Major edits:

 * What are the assumptions made on the received signal for these calculations? What prior knowledge is required (e.g. channel effects, synchronization, etc.)

 * It is mentioned on page 6 line 189 that the DNN algorithm collects signal data and extracts features. This must be elaborated on. Are these meta features of the cumulants? The input to the network is itself features not the raw data? This is not clear from the text.

 * How were the neural network parameters defined in Tables 7 and 9 determined? If through brute search over hyperparameters, how did the authors prevent bias against the training dataset? Was there a validation dataset used in the training process to avoid overfitting? Was any regularization used?

 * It isn't clear from the text what the second neural network described by Table 9 is doing within the approach. What are the goals of each neural network and how are they trained?

 * It isn't overly clear from the performance analysis the benefit of the provided approach. The gains seem minor for the increased architecture needed. The tradeoffs between the proposed approach and prior approaches should be better defined.

Minor edits:

 * Missing reference in text "[X]"

 * Define r[n]. Is this the samples of the received signal or the received symbols?

 * Typo in Figure 1: "traing"

 * Grammar in Table 9: "secondly"

Author Response

 

Comment #1. What are the assumptions made on the received signal for these calculations? What prior knowledge is required (e.g. channel effects, synchronization, etc.)

 

Response #1. In this paper, we consider the received signals in the AWGN channel, and it is assumed that the receiver knows the center frequency, time synchronization, and the transmission bandwidth in advance to detect the signals. In order to reflect the reviewer’s comment, the following sentence was added in the revised manuscript.

 

- Page 6, Section 3.3, line 201

 

First, the modulated signals to be classified are generated in the AWGN channel, and the cumulants are extracted for each signal.

 

Comment #2. It is mentioned on page 6 line 189 that the DNN algorithm collects signal data and extracts features. This must be elaborated on. Are these meta features of the cumulants? The input to the network is itself features not the raw data? This is not clear from the text.

 

Response #2. In this paper, we use the second-, fourth-, and sixth-order cumulants as the input features to the DNN algorithm. We added the following sentences in the revised manuscript to clarify this fact.

- Page 3, Section 2.1, line 101

 

The cumulant is one of the typical statistical features used in the hierarchical AMC scheme [17]. In this paper, the higher-order cumulants for the baseband received signal samples r[n] generated in the additive white Gaussian noise (AWGN) channel are extracted as representative features and used as the inputs to the DNN algorithm. Since the proposed method exploits the correlation characteristics, we consider the higher-order cumulants as the feature values.

 

- Page 6, Section 3.3, line 199

 

First, the modulated signals to be classified are generated in the AWGN channel, and the cumulants are extracted for each signal. The extracted cumulants are represented by one representative value through the proposed method, as shown in Equation (6).
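A hedged sketch of this first pipeline step (ours, with QPSK chosen only as an example modulation): generate symbols, add AWGN at a target SNR, then hand r[n] to a cumulant extractor such as the one sketched earlier.

    import numpy as np

    def qpsk_in_awgn(n_symbols, snr_db, rng=np.random.default_rng()):
        # Unit-power QPSK symbols
        bits = rng.integers(0, 4, n_symbols)
        symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
        # Complex AWGN scaled for the requested SNR
        noise_var = 10 ** (-snr_db / 10)
        noise = np.sqrt(noise_var / 2) * (rng.normal(size=n_symbols)
                                          + 1j * rng.normal(size=n_symbols))
        return symbols + noise

    r = qpsk_in_awgn(50_000, snr_db=0)   # received samples r[n] at 0 dB SNR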

 

Comment #3. How were the neural network parameters defined in Tables 7 and 9 determined? If through brute search over hyperparameters, how did the authors prevent bias against the training dataset? Was there a validation dataset used in the training process to avoid overfitting? Was any regularization used? It isn't clear from the text what the second neural network described by Table 9 is doing within the approach. What are the goals of each neural network and how are they trained?

 

Response #3. In this paper, we adopted the standard procedures in deep learning research to optimize the DNN algorithm. We set up the hyperparameters and trained the DNN algorithms through the following procedures. In the first step, we adjusted the hyperparameters using the backpropagation algorithm based on gradient descent and applied batch normalization to prevent overfitting during the training. Next, the validation errors were counted and the training was stopped to prevent overfitting when the validation error started to increase. Also, when the validation errors did not decrease anymore, we continued to train the system by cutting the learning rate in half. 20% of the data were utilized for validation. In this way, we set and trained the parameters of the DNN.

Then, in order to evaluate the proposed method, the representative value was selected from various cumulants by using each method, and two sets of simulations were conducted. In the first set of simulations, in order to find the effective feature values, we ranked the cumulants based on the calculation from each method. Then, we sequentially measured the classification performance by excluding the feature values one by one. In the second set of simulations, in order to measure the classification performance according to the group, the cumulants were divided into three groups (top, middle, and bottom) based on the ranking obtained from the efficient feature extraction method. The cumulants in each group were used as the input data of the DNN algorithm to measure the classification performance. In order to reflect the reviewer’s comment, the following sentences were added in the revised manuscript.

 

- Page 7, Section 4.1, line 219

 

In both sets of simulations, we trained the DNN algorithm several times for hyperparameter optimization. Since the DNN is a very complex structure, it is difficult to find the optimal weighting coefficients in one calculation. Therefore, in this paper we set up the hyperparameters and trained the DNN algorithms through the following standard procedures. In the first step, we adjusted the hyperparameters using the backpropagation algorithm based on the gradient descent and applied the batch normalization to prevent the overfitting during the training. Next, the validation errors were counted and the training was stopped to prevent the overfitting when the validation error started to increase. Also, when the validation errors did not decrease anymore, we continued to train by cutting the learning rate in half. The 20% of the data were utilized for the validation.

 

- Page 7, Section 4.2, line 237

 

In order to evaluate the proposed method, the representative value was selected from various cumulants by using each method, and two sets of simulations were conducted. In the first set of simulations, in order to find the effective feature values, we ranked the cumulants based on the calculation from each method. Then, we sequentially measured the classification performance by excluding the feature values one by one.

 

- Page 8, Section 4.2, line 258

 

In the second set of simulations, in order to measure the classification performance according to the group, the cumulants were divided into three groups (top, middle, and bottom) based on the ranking obtained from the efficient feature extraction method. The cumulants in each group were used as the inputs to the DNN algorithm to measure the classification performance.

 

- Page 9, Section 4.2, line 265

 

From the simulation results, since the top group cannot always obtain the best performance in all SNR ranges, the conventional methods seem unstable. However, the proposed method shows not only stable but also better performance in the wide SNR range of -2 dB to 10 dB. Even if the same amount of data is exploited, there is a large difference in performance depending on the feature values used. Also, using the highest feature group in a low SNR environment can yield higher performance than using the lowest feature group in a high SNR environment. Therefore, we conclude that the proposed method is very effective at extracting the input data group.

Comment #4. Minor edits:

 * Missing reference in text "[X]"

 * Define r[n]. Is this the samples of the received signal or the received symbols?

 * Typo in Figure 1: "traing"

 * Grammar in Table 9: "secondly"

 

Response #4. In accordance with the reviewer’s comment, we corrected all of the minor issues. The signal r[n] is the baseband received signal generated in the AWGN channel. In order to reflect the reviewer’s comment, we added the following sentences in the revised manuscript.

 

- Page 3, Section 2.1, line 102

 

In this paper, the higher-order cumulants for the baseband received signal r[n] generated in the additive white Gaussian noise (AWGN) channel are extracted as the representative features and used as the inputs to the DNN algorithm.

Many thanks again for the reviewer’s constructive comments to improve the quality of the revised manuscript.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

There have been considerable improvements in the general communication of this work, including corrections to English and grammar. However, some changes must still be made before this can be published.

 

1) All abbreviations should have capital letters in their longer-form descriptor, matching the letters of the abbreviation itself. E.g. Automatic Modulation Classification (AMC) schemes or Convolutional Neural Networks (CNNs).

2) Line 52, starting ‘Assuming that the training data is similar…’ still does not make sense. The part that says ‘if the computation amount is lower than the likelihood method’ in particular. I am unclear if you are trying to say if your method is better when the available computational power is lacking, or if it should instead read ‘despite the computational complexity being lower than the likelihood method.’ Alternatively, you may be trying to say that it could yield promising results if the computational complexity proves lower than conventional methods. As you can see, at the moment it is proving difficult to understand.

3) Line 59, ‘… and a Recurrent NN (RNN) [10] for the AMC that have all been studied.’

4) Lines 69-75, ‘feature’ -> ‘features’ (correction required in all use cases here, but also in some other places throughout the document)

5) Lines 75-78, starting ‘Based on this, through the correlation…’ requires a reword. The sentence is unclear and repeats itself mid-way through.

6) Line 82, correct ‘…correlation coefficients, and the three methods…’ to ‘…correlation coefficients, and three methods…’

7) All figure and table captions are still lacking detail. In fact, table 7 and 9 currently have identical captions despite containing different information. It should be possible to understand the significance of any tables or figures using the captions alone. At the moment that is not possible. For example, Tables 3, 4 and 5 are all very similar looking, but line 280 suggests that in table 5 smaller values are better while in the other two larger values are better. All the captions need improvement but here are two examples:

Table 5: Effective correlation values of each cumulant in various SNR environments for the proposed method. Smaller values indicate strong influence on the classification performance, meaning the associated cumulants are effective features for AMC systems.

Table 6: Definitions for the non-linear activation functions used in the DNN. ReLU was used for all hidden layers, while Softmax was used on the output layer.

8) Table 5, C63, 0dB has a listed correlation of 1027. I am guessing that should be 10.27?

9) There is no reference to Tables 3, 4 and 5 anywhere in the body of the paper. If the tables are not needed then they should not be there, if they are then why are they not discussed at all?

10) Line 322, ‘… we trained the DNN algorithm several times…’ How many times was this process performed? Did you test this on multiple runs of training and testing neural networks? Are these finding statistically significant as a result? Neural network training is non-deterministic in its very nature due to the random starting position/initial operating condition, so I would expect any experiment claiming better performance using neural networks to show that this finding is repeatable.

11) Line 330, ‘The 20% of the data were utilized for the validation.’ Do you actually mean validation? As far as I can tell, you split the data into two groups, training and test data. Validation data is a very specific thing in machine learning, and would require the data be split into three groups. If you had validation data, this is not described or shown in table 7. If you used only training and test data then that is ok, but I strongly suggest that you avoid the phrase validation as it carries a significant meaning when discussing Neural Network performance.

12) It is unclear where the 50,000 data and the 450,000 data were used. Tables 7 and 9 are unclear on this front too. Why do we seem to have two differently sized groups of generated data here?

13) Line 362: ‘As shown in Table 8, it is observed that the proposed method shows better performance in extracting…’. How is this shown in table 8? This needs expanding. Remember that as readers we do not have a background understanding of your experiment, so you cannot expect us to look at a table full of numbers and intuitively draw meaningful conclusions from them. You have to walk us through the meaning and conclusion. At the moment you just assert that your method is better, and there is seemingly no comparative information on conventional performance provided to allow the reader to agree with this.

14) Lines 373-376: ‘…were divided into three groups (top, middle and bottom) based on the ranking obtained from…’ Please list which five cumulants fell into each of the groups. Don’t ask the reader to sit and work it out for themselves.

15) This work is interesting, but by the end of this paper I was still left asking ‘so what?’. The conclusion requires a rework to drive home how much of an improvement your proposed system is. Maybe add some quantitative results/comparisons with the conventional methods so that we can actually get a feel for how much difference this technique makes? At the moment the qualitative conclusion is a little lackluster.

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 4 Report

The authors have done a good job of addressing the major questions concerning the machine learning aspects of the work. However, some of the assumptions made in the feature estimation process are still unclear from the work. First, it is still not clear from the text what the assumptions are on the received signal for calculation of the cumulant features. Is synchronization required? Knowledge of the channel effects? Are the assumptions made realistic down to -10 dB SNR where the performance of this work is measured? With this in mind, is an accuracy of 86% at -10 dB realistic for a real-world scenario using cumulants due to non-realistic assumptions?

Additionally, it is unclear how many samples of the received signal are used to estimate each cumulant. Is the stated number of 450,000 data how much data was used to calculate the cumulants? If so, is this a realistic observation window for the considered applications?

Finally, it would be useful to understand the statistics of the cumulant estimation error based on the authors' estimation process.

From the previous review, the authors also did not address the following two issues:

 * It isn't clear from the text what the second neural network described by Table 9 is doing within the approach. What are the goals of each neural network and how are they trained?

 * It isn't overly clear from the performance analysis the benefit of the provided approach. The gains seem minor for the increased architecture needed. The tradeoffs between the proposed approach and prior approaches should be better defined.

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx
