Reprint

Data Science: Measuring Uncertainties

Edited by
June 2021
256 pages
  • ISBN 978-3-0365-0792-7 (Hardback)
  • ISBN 978-3-0365-0793-4 (PDF)

This book is a reprint of the Special Issue Data Science: Measuring Uncertainties that was published in:
  • Chemistry & Materials Science
  • Computer Science & Mathematics
  • Physical Sciences
Summary

With the increase in data processing and storage capacity, a large amount of data is available, but data without analysis has little value. The demand for data analysis therefore grows daily, and with it the number of related jobs and published articles. Data science has emerged as a multidisciplinary field to support data-driven activities, integrating and developing ideas, methods, and processes to extract information from data. It draws on methods from several knowledge areas: Statistics, Computer Science, Mathematics, Physics, Information Science, and Engineering. It is this mixture of areas that we call Data Science. As large volumes of data are generated, new problems appear rapidly, and new solutions to them multiply just as fast. Current and future challenges require greater care in creating solutions that are well founded for each type of problem. Labels such as Big Data, Data Science, Machine Learning, Statistical Learning, and Artificial Intelligence demand more sophistication in their foundations and in how they are applied, which highlights the importance of building the foundations of Data Science. This book is dedicated to solutions and discussions of measuring uncertainties in data analysis problems.

Format
  • Hardback
License
© 2022 by the authors; CC BY-NC-ND license
Keywords
model-based clustering; mixture model; EM algorithm; integrated approach; density estimation; distribution free; non-parametric statistical test; decoy distributions; size invariance; scaled quantile residual; maximum entropy method; scoring function; outlier detection; overfitting detection; time series of counts; Bayesian hierarchical modeling; Bayesian nonparametrics; Pitman–Yor process; prior sensitivity; clustering; Bayesian forecasting; singular spectrum analysis; robust singular spectrum analysis; time series forecasting; mutual investment funds; relative entropy; cross-entropy; uncertain reasoning; inductive logic; confirmation measure; semantic information; medical test; raven paradox; Markov random fields; probabilistic graphical models; multilayer networks; objective Bayesian inference; intrinsic prior; variational inference; binary probit regression; mean-field approximation; multi-attribute emergency decision-making; intuitionistic fuzzy cross-entropy; grey correlation analysis; earthquake shelters; attribute weights; time series; Bayesian inference; hypothesis testing; unit root; cointegration; Rényi entropy; discrete Kalman filter; continuous Kalman filter; algebraic Riccati equation; nonlinear differential Riccati equation; cloud model; fuzzy time series; stock trend; Heikin–Ashi candlestick; water resources; channel; mathematical entropy model; bank profile shape; gene expression programming (GEP); entropy; genetic programming; artificial intelligence; data science; big data