Re-sampling Methods for Statistical Inference of the 2020s
A special issue of Stats (ISSN 2571-905X).
Deadline for manuscript submissions: closed (31 March 2022)
Special Issue Editor
Special Issue Information
Dear Colleagues,
Re-sampling methods are fellows of the previous century, with the Bootstrap, no doubt, the most popular among them, now well into its forties. Following the rocketing evolution of technology, computational power and data sharing, statistical inference as we knew it at the beginning of the twenty-first century has extraordinarily expanded its scope and applications, and the global COVID-19 crisis appears to have further accelerated the process. With this Special Issue I am soliciting contributions, advancements and critical reviews on Bootstrap and re-sampling methods that address the statistical needs of the 2020s and envision future research directions. Manuscripts covering, though not limited to, topics in data science, statistical learning, statistical modelling, epidemiology, observational studies, circular economy, sustainable development and inequalities are particularly welcome.
I look forward to receiving your submissions.
Sincerely,
Prof. Dr. Fulvia Mecatti
Guest Editor
Message from Prof. Bradley Efron:
This is a propitious moment for Stats' special issue on resampling methods. Data sets are bigger than ever, computation is faster and cheaper than ever, and the demand for statistical analysis seems to multiply every year. Computer-intensive statistical methods (the substitution of computational power for routine and tedious paper-and-pencil calculations) are a growth industry in the current scientific environment, particularly as the complexity of our estimators and tests has outpaced theory.
Resampling plans were the original computer-intensive statistical methodology (so named in Efron and Diaconis' 1983 Scientific American article). Their widespread adoption encouraged other computer-based success stories, Markov Chain Monte Carlo being particularly notable. The term "resampling", in its current sense, seems to have been introduced in the title of my 1982 monograph "The jackknife, the bootstrap, and other resampling plans". Besides the jackknife and the bootstrap, several older resampling methods were discussed there: cross-validation, half-sampling, typical value theory, the infinitesimal jackknife, and balanced repeated replications.
The resampling story of the last forty years has been one of new uses more than new methods. The original, modest goal of computationally attaching standard errors to statistical estimators was expanded to bootstrap confidence intervals (paralleling an ambitious theoretical development of likelihood-based intervals). Bootstrap smoothing, aka "bagging" or "bootstrap aggregation", aimed at improving unsmooth estimators such as those obtained from model selection, and became central to Leo Breiman's popular machine learning package "random forests". Massive prediction algorithms, especially "deep learning", required new cross-validation techniques carried out at Herculean scales. As we will see in this volume, applications of resampling have spread to an enormous variety of scientific studies, generating a diversity of specialized techniques as well as an improved theoretical understanding of how the methods perform.
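To make the original use case mentioned above concrete, here is a minimal, hypothetical sketch (in Python with NumPy; not taken from the Special Issue itself) of the nonparametric bootstrap used to attach a standard error and a simple percentile confidence interval to an estimator. The function name and defaults are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

def bootstrap_se_and_ci(data, estimator, n_boot=2000, alpha=0.05, seed=0):
    """Nonparametric bootstrap: resample the data with replacement,
    recompute the estimator on each resample, and summarize the
    replicates with a standard error and a percentile interval."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = data.shape[0]
    replicates = np.array([
        estimator(data[rng.integers(0, n, size=n)])  # one bootstrap resample
        for _ in range(n_boot)
    ])
    se = replicates.std(ddof=1)  # bootstrap standard error
    lo, hi = np.percentile(replicates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return se, (lo, hi)

# Illustrative use: standard error and 95% percentile interval for the sample median.
x = np.random.default_rng(1).exponential(scale=2.0, size=100)
se, ci = bootstrap_se_and_ci(x, np.median)
print(f"bootstrap SE of the median: {se:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```

The percentile interval shown here is only the simplest of the bootstrap confidence interval constructions referred to above; more refined variants exist but are beyond the scope of this illustration.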
To say that this is a good time for publishing a resampling issue doesn't mean it's easy to do so. I'm grateful to Professor Fulvia Mecatti for conceiving, organizing, and carrying out the task so successfully.
Bradley Efron
Stanford
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Stats is an international peer-reviewed open access quarterly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- estimation
- big data
- public health
- data integration
- sampling statistics
- statistical assessment
- model selection
- empirical Bayes
- population studies
Benefits of Publishing in a Special Issue
- Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
- Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
- Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
- External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
- e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.
Further information on MDPI's Special Issue policies can be found here.