Design of Self-Optimizing Polynomial Neural Networks with Temporal Feature Enhancement for Time Series Classification
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
This is a very interesting and important paper, which makes a valuable contribution to the field. With the revisions detailed below, it would be suitable for publication.

Notation: Whereas x was initially used to represent multidimensional variables, it later takes on other meanings in certain contexts. Such notational inconsistencies can confuse the reader and should be addressed through clear definitions of all notation used and strict adherence to those definitions throughout the paper.

Data Availability and Links: One of the dataset links provided in the paper, https://timeseriesclassification.com/dataset, is broken. Please replace it or add additional sources. Datasets should always be available and accessible to ensure reproducibility.

Statistical Significance: The results from the comparisons made in the paper seem promising, and the new approach looks as if it will surpass the existing ones. However, it is not clear whether the improvement is statistically significant. Adding a statistical test to this section, for example the Hassani-Silva KS test (an implementation is available as a package), would address this. Significance testing would put the comparison of the proposed method against existing approaches on firmer ground and make the results more credible.

Terminology: In the text, you refer to the "amount of data" in the table. If you are referring to the time series length, please make this explicit; it is currently ambiguous and could lead to misunderstandings. If it instead refers to the number of samples or datasets, please correct the terminology accordingly.

Equation (32): In Equation (32), if the capital X is a matrix representing multidimensional variables, the corresponding results (such as regression outcomes) should reflect this multidimensional nature. Even though the formula itself appears correct, it needs further elaboration, with clearer definitions of all mathematical notation, so that readers get the context right.

Explanation of Results in Table 6: Table 6 presents successful and failed cases of the proposed methodology. I view this as an essential point of the paper, and one that needs greater expansion. Describing the circumstances under which the method works or fails will make its strengths and limitations clear to readers. Such details are what readers can take home when deciding in which practical situations to apply the method.
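The significance check the reviewer suggests can be sketched as follows. The Hassani-Silva test itself is distributed as an R package; as an illustrative stand-in, this computes the classical two-sample Kolmogorov-Smirnov statistic in plain Python. The per-dataset accuracies below are hypothetical placeholders, not results from the paper.

```python
def ks_statistic(a, b):
    """Two-sample KS statistic: the maximum absolute difference
    between the two empirical CDFs, evaluated at every data point."""
    def ecdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

proposed = [0.91, 0.88, 0.95, 0.83, 0.90]   # hypothetical accuracies
baseline = [0.87, 0.85, 0.93, 0.80, 0.86]   # hypothetical accuracies

d = ks_statistic(proposed, baseline)
print(f"KS statistic D = {d:.2f}")
```

In practice one would obtain a p-value from D (e.g., via `scipy.stats.ks_2samp` or the dedicated R package); for accuracies paired per dataset, a paired test such as the Wilcoxon signed-rank test may also be appropriate.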
Author Response
Please see the attachment, thank you.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
The manuscript presents a novel self-optimizing polynomial neural network with temporal feature enhancement (OPNN-T) aimed at addressing challenges in time series classification. Overall, the paper is well-structured and flows logically from the problem statement to the proposed approach, experimental setup, and concluding remarks. The authors provide a clear rationale for combining temporal feature enhancement (through LSTM components) with polynomial neural networks (PNNs) and then incorporating particle swarm optimization for self-optimization. This approach is interesting, and the results demonstrate promising improvements over comparable classification methods on multiple datasets.
Comment 1. While the manuscript clearly describes the motivation for combining LSTM-based temporal feature extraction with polynomial neural networks, it would be beneficial to elaborate further on how these two modules complement each other at a conceptual level.
Comment 2. Although the paper reports that the proposed method converges quickly (due to LSE in the polynomial network and use of BPTT for LSTM), it would be beneficial to provide more explicit information or experiments on runtime complexity and memory usage.
Comment 3. The paper presents fixed values for PSO parameters (e.g., swarm size, inertia weight), LSTM learning rate, etc. While referencing prior works helps justify these defaults, some discussion on how sensitive the final performance is to these hyperparameters would add clarity. Providing recommendations for tuning might help practitioners apply OPNN-T to new domains.
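The sensitivity check suggested in Comment 3 can be sketched with a toy sweep: run a minimal PSO with several inertia weights and compare the best objective values found. Everything here is illustrative, not the authors' setup; the sphere function stands in for the actual classification fitness, and the parameter values are assumptions for the sketch.

```python
import random

def pso(f, dim, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5,
        bound=5.0, seed=0):
    """Minimal PSO minimizer; returns the best objective value found."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v = (w * vel[i][d]
                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                     + c2 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-bound, min(bound, v))  # clamp velocity
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest_val

def sphere(x):                    # toy objective standing in for
    return sum(v * v for v in x)  # the real classification fitness

for w in (0.4, 0.7, 0.9):         # sweep the inertia weight
    print(f"inertia w={w}: best objective {pso(sphere, dim=3, w=w):.3g}")
```

A table of such sweeps (over swarm size, inertia weight, learning rate) is the kind of evidence that would let practitioners judge how carefully these defaults must be tuned on a new domain.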
Comment 4. Emphasize how the proposed method might be integrated into practical applications, and briefly acknowledge any constraints on the model's applicability, such as scalability concerns or performance variations.
Comment 5. (Optional) One of the potential strengths of polynomial neural networks is their interpretability: the coefficients in the polynomial expansions can offer insight into feature interactions. The manuscript could consider adding a small example to strengthen this interpretability aspect.
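The interpretability point can be illustrated with a small, self-contained example (not taken from the manuscript): fit a polynomial model over the feature map (x1, x2, x1*x2) by least squares and read off which coefficients are large. The synthetic target here is y = 2*x1 + 3*x1*x2, so a near-zero coefficient on x2 and a large coefficient on x1*x2 correctly flag the interaction.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def fit_poly(samples):
    """Least squares over the feature map phi(x1, x2) = (x1, x2, x1*x2)."""
    Phi = [[x1, x2, x1 * x2] for x1, x2, _ in samples]
    y = [t for _, _, t in samples]
    # Normal equations: (Phi^T Phi) w = Phi^T y
    G = [[sum(a[i] * a[j] for a in Phi) for j in range(3)] for i in range(3)]
    r = [sum(a[i] * t for a, t in zip(Phi, y)) for i in range(3)]
    return solve(G, r)

# Synthetic target: y = 2*x1 + 3*x1*x2 (x2 alone has no effect)
data = [(x1, x2, 2 * x1 + 3 * x1 * x2)
        for x1 in (-2, -1, 1, 2) for x2 in (-1, 0, 1, 2)]
w = fit_poly(data)
print([round(v, 6) for v in w])  # large third coefficient flags x1*x2
```

The same reading-off of expansion coefficients is what makes polynomial neurons more transparent than generic dense layers.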
Comment 6. (Optional) The experiments compare OPNN-T to several existing methods. However, the paper would benefit from a more thorough examination of how each component within OPNN-T contributes to overall performance. For instance, how much performance benefit is gained from:
1. Using LSTM-based temporal feature extraction (versus simpler feature extraction or none at all),
2. Employing three different polynomial neuron types (versus a single polynomial type),
3. Adopting the sub-dataset generator approach (versus training on the entire dataset at once).
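The component-wise examination requested above amounts to an ablation study: toggle one component at a time and measure the accuracy delta. A toy sketch of that logic, using a nearest-centroid classifier on synthetic series rather than OPNN-T (all data, names, and choices here are illustrative), where the "temporal feature" is a crude slope estimate:

```python
import random

def make_series(label, n=20, noise=0.05, rng=None):
    """Class 0: flat line; class 1: zero-mean ramp. Both have mean ~0,
    so only a temporal feature (slope) separates them."""
    rng = rng or random
    if label == 0:
        base = [0.0] * n
    else:
        base = [-1.0 + 2.0 * t / (n - 1) for t in range(n)]
    return [b + rng.gauss(0, noise) for b in base]

def features(series, use_slope):
    mean = sum(series) / len(series)
    f = [mean]
    if use_slope:  # crude temporal feature: average first difference
        f.append((series[-1] - series[0]) / (len(series) - 1))
    return f

def nearest_centroid_accuracy(train, test, use_slope):
    cents = {}
    for lbl in (0, 1):
        fs = [features(s, use_slope) for s, l in train if l == lbl]
        cents[lbl] = [sum(col) / len(col) for col in zip(*fs)]
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    hits = 0
    for s, l in test:
        f = features(s, use_slope)
        hits += min(cents, key=lambda lbl: dist(f, cents[lbl])) == l
    return hits / len(test)

rng = random.Random(0)
data = [(make_series(l, rng=rng), l) for _ in range(50) for l in (0, 1)]
train, test = data[:60], data[60:]
for variant, use_slope in (("mean only", False), ("mean + slope", True)):
    print(variant, nearest_centroid_accuracy(train, test, use_slope))
```

In the manuscript the analogous table would hold one row per disabled component (no LSTM features, one neuron type, no sub-dataset generator), each compared against the full model.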
Author Response
Please see the attachment, thank you.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
The manuscript addresses a critical problem in data mining: enhancing classification accuracy for time series data. It introduces an innovative self-optimizing polynomial neural network with temporal feature enhancement (OPNN-T). The topic is highly relevant to the journal and fits within its scope, given the manuscript's focus on advanced neural network architectures and their application to time series classification tasks.

The paper is well-structured and presents comprehensive experiments demonstrating the advantages of OPNN-T over traditional and state-of-the-art classification techniques. The proposed OPNN-T integrates polynomial neural networks (PNNs) with LSTM-based temporal feature extraction and self-optimization strategies. This combination is innovative and addresses challenges in modeling complex time series data. The authors validate their approach on both machine learning and time series datasets, comparing it with baseline and advanced models. The results highlight the competitive performance of OPNN-T. By focusing on improving classification accuracy and adaptability across datasets, the work has potential applicability in various domains, including medical diagnostics and anomaly detection.
However, several areas require improvement to enhance reproducibility and impact.

The self-optimization strategy, particularly the use of the PSO algorithm, lacks a detailed discussion of parameter tuning and termination criteria. More explanation of how these values were selected would enhance reproducibility.

Were the diverse datasets chosen for specific characteristics (e.g., variability, number of features)? For some datasets (e.g., ItalyPowerDemand), the performance is suboptimal. The authors should explore and discuss why the model struggles with these datasets.

Although the paper compares OPNN-T with existing methods, the discussion does not delve deeply into why it outperforms competitors such as AFFNet in most cases. Highlighting the architectural or algorithmic advantages would add depth. A comparison of the training times or resource utilization of OPNN-T with those of simpler models would provide practical insights.

Ensure all references are complete and consistently formatted; for instance, references to public datasets could include persistent links or DOIs. There are occasional grammatical errors and typographical issues (e.g., inconsistent capitalization in section titles), so a thorough proofreading is recommended.
In my opinion, the manuscript makes a valuable contribution to the field of time series classification and is suitable for publication in Electronics after revisions. Addressing the issues outlined above will significantly enhance the impact of the work.
Author Response
Please see the attachment, thank you.
Author Response File: Author Response.pdf
Reviewer 4 Report
Comments and Suggestions for Authors
The paper introduces a self-optimizing polynomial neural network with temporal feature enhancement (OPNN-T) for time series classification. To evaluate its performance, the authors apply it to publicly available datasets and compare the results with those of other prevalent classification models. The paper's topic is interesting and novel, demonstrating a robust methodological approach. Furthermore, it is generally well-written and well-structured, from both a methodological and an application point of view. The authors effectively describe the research gap and the contributions of their approach to the existing body of literature.
However, several minor issues must be addressed before the paper can be considered for publication in MDPI's Electronics journal. These issues primarily involve better reflecting the paper’s contributions to the existing literature, more effectively demonstrating the proposed approach's effectiveness compared to alternative models, and improving readability. Consequently, I recommend a minor revision for the paper to allow the authors to address the following comments, which are detailed below.
— Keywords should include terms not already present in the title to improve discoverability.
— Lines 10-14: This part of the text could either be rewritten as regular text or, if kept as points, each point should be simplified, while ensuring consistency and conciseness. For example, point 3 seems somewhat disconnected from the first two points, as it uses a different presentation style.
— Some paragraphs in the Introduction could be reorganized for improved flow. For instance, long paragraphs could be split into smaller ones (e.g., lines 64-86), whereas others could be combined for better readability (e.g., 87-101). Moreover, the research gap that the paper tries to cover could be better described. The authors mention that ‘there remains substantial room for improvement in the context of time series classification.’ The authors could elaborate on this, with real-world applications where this improvement is needed. Without such context, even if their approach improves performance, it would still suggest an ongoing need for further enhancements. Furthermore, the authors should briefly discuss the application of their proposed model in the Introduction.
— The authors should compare their results with similar studies in the literature. Are there other approaches designed to improve classification rates? If so, how much improvement did those achieve?
— The authors should comment on whether the difference between their results and those of competing models is statistically significant. They should also discuss how the characteristics of the utilized datasets might affect the results. For example, does the proposed model have inherent advantages on these datasets compared to others? Put differently, if the proposed model were applied to a different dataset, would it still outperform other classification models?
— The authors should expand the conclusions section, incorporating deeper insights about the results. For example, they should provide a clearer discussion of how their approach compares to existing classification models. They should also address the limitations of their study, connecting them with the proposed directions for future research.
Author Response
Please see the attachment, thank you.
Author Response File: Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
No further comments.
Reviewer 3 Report
Comments and Suggestions for Authors
I have reviewed the responses provided by the authors regarding my comments on the manuscript, and I appreciate their thoughtful and comprehensive approach to addressing the concerns raised.
In my opinion, the authors have adequately expanded the discussion on parameter tuning for the PSO algorithm, including specific experiments to justify their choices. The explanation of termination criteria, particularly the use of fitness improvement thresholds, adds clarity and improves reproducibility. Moreover, the added explanation regarding the lack of long-term temporal dependencies and low variability in some datasets provides valuable context for interpreting the results. The inclusion of Table 8 and a detailed analysis of performance factors, such as temporal feature enhancement and the use of higher-order polynomials, effectively demonstrates the strengths of the proposed approach.
The authors have addressed all my comments comprehensively and provided detailed revisions that significantly improve the quality of the manuscript. Therefore, I recommend accepting the revised manuscript for publication.