Signal and Data Analysis
A section of Entropy (ISSN 1099-4300).
In 1972, John W. Tukey stated, “Data analysis has to analyze real data. Most real data call for data investigation, while almost all statistical theory is concerned with data processing. This can be borne, in part because large segments of data investigation are, by themselves, data processing”. Every domain today generates huge amounts of real data, from medicine to aeronautics and from education to agriculture. There is no sector or activity in which data are not being generated.
Data analysis refers to the entire process from raw data harvest to its conversion into knowledge. CRISP-DM (the Cross-Industry Standard Process for Data Mining) is the de facto standard for modelling this process, which moves from business understanding through data preparation to modelling, where algorithms from artificial intelligence (AI), statistics, or other fields are applied to search for patterns.
Special attention must be paid to the data understanding and preparation phases of the process. Data are rarely collected for the purpose of analysis; rather, they are the by-product of other operational processes and arrive in many different formats and types.
Most of the information generated today is collected in unstructured formats, such as text, images, audio, and video. All these data and signals require special preparation before modelling techniques can be applied.
This Section focuses on original research results in the broad and fascinating field of data analysis. Manuscripts are therefore solicited on data cleansing, data wrangling, data modelling, signal processing, text processing, and data mining, and on their applications either to traditional sectors such as marketing and finance or to novel sectors such as health, manufacturing, agriculture, and space. Submissions providing critical up-to-date reviews are also welcome.
Special emphasis is placed on methods for the analysis of complex data grounded in probability theory, information theory, or entropy. Of particular interest are the following, together with the exploration of their relationship to probability theory, information theory, and entropy:
- Statistical methods (basic statistics, PCA, correlation, regression, etc.);
- Network analysis (entropy, Granger causality, etc.);
- Visualization;
- Traditional AI methods (decision trees, random forests, support vector machines (SVM), boosting, bagging, etc.);
- Stream analysis methods;
- Database approaches;
- Clustering approaches (hierarchical, distance-based, etc.);
- Knowledge graph analysis and ontologies for enriching semantics;
- Deep learning.
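As a minimal illustration of the entropy-centred methods mentioned above, the following sketch estimates the Shannon entropy of a signal from its empirical histogram, a common first step in entropy-based signal analysis. The function name, bin count, and test signal are illustrative choices, not part of any specific method discussed here.

```python
import math
from collections import Counter

def shannon_entropy(samples, n_bins=16):
    """Estimate the Shannon entropy (in bits) of a signal via a histogram."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0  # avoid zero width for constant signals
    # Assign each sample to a bin index, clamping the maximum value into the last bin.
    bins = Counter(min(int((x - lo) / width), n_bins - 1) for x in samples)
    n = len(samples)
    # H = -sum p_i * log2(p_i) over the non-empty bins.
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# A uniform ramp spreads probability mass evenly over all 16 bins,
# so its entropy reaches the maximum log2(16) = 4 bits.
ramp = [i / 255 for i in range(256)]
print(shannon_entropy(ramp, n_bins=16))  # -> 4.0
```

A constant signal concentrates all mass in a single bin and yields zero entropy, which is why such histogram-based estimates are often used to quantify signal complexity or irregularity.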
The following Special Issues within this Section are currently open for submissions:
- Entropy and Nonlinear Signal Processing in Cardiovascular Applications (Deadline: 22 December 2023)
- The Application of Information Theory in Fault Detection and Diagnosis (Deadline: 30 December 2023)
- Methods in Artificial Intelligence and Information Processing II (Deadline: 31 December 2023)
- Entropy Methods for Physiological Signal Analysis (Deadline: 29 February 2024)
- Big Data Analytics and Information Science for Business and Biomedical Applications: Third Edition (Deadline: 30 March 2024)
- Information Theory and Nonlinear Signal Processing (Deadline: 15 May 2024)
- Entropy Applications in Condition Monitoring and Fault Diagnosis (Deadline: 20 July 2024)