Search Results (5)

Search Parameters:
Keywords = metascience

21 pages, 360 KB  
Article
Statistics in Service of Metascience: Measuring Replication Distance with Reproducibility Rate
by Erkan O. Buzbas and Berna Devezer
Entropy 2024, 26(10), 842; https://doi.org/10.3390/e26100842 - 5 Oct 2024
Viewed by 2007
Abstract
Motivated by the recent putative reproducibility crisis, we discuss the relationship between the replicability of scientific studies, the reproducibility of results obtained in these replications, and the philosophy of statistics. Our approach focuses on challenges in specifying scientific studies for scientific inference via statistical inference and is complementary to classical discussions in the philosophy of statistics. We particularly consider the challenges in replicating studies exactly, using the notion of the idealized experiment. We argue against treating reproducibility as an inherently desirable property of scientific results, and in favor of viewing it as a tool to measure the distance between an original study and its replications. To sensibly study the implications of replicability and results reproducibility for inference, such a measure of replication distance is needed. We present an effort to delineate such a framework here, addressing some challenges in capturing the components of scientific studies while identifying others as ongoing issues. We illustrate our measure of replication distance by simulations using a toy example. Rather than replications, we present purposefully planned modifications as an appropriate tool to inform scientific inquiry. Our ability to measure replication distance serves scientists in their search for replication-ready studies. We believe that likelihood-based and evidential approaches may play a critical role in building statistics that effectively serve the practical needs of science.
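
The replication-distance idea lends itself to a quick simulation. The sketch below is a minimal toy illustration inspired by the abstract, not the authors' actual framework: it treats the reproducibility rate as the fraction of replications whose result (reject/fail to reject) agrees with the original study, and shows how that rate changes as the replication drifts away from the original design, with a shifted population mean standing in for "replication distance". All names and modeling choices here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def study_result(mu, n=30):
    """Run one idealized study: one-sided t-test of H0: mean <= 0; report reject/not."""
    x = rng.normal(mu, 1.0, size=n)
    t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
    return t > 1.699  # one-sided 5% critical value for df = 29

def reproducibility_rate(mu_orig, shift, n_reps=2000):
    """Fraction of replications, run at a shifted mean, agreeing with the original result."""
    original = study_result(mu_orig)
    agree = sum(study_result(mu_orig + shift) == original for _ in range(n_reps))
    return agree / n_reps

# Reproducibility rate as a function of how far the replication drifts
for shift in [0.0, 0.2, 0.5, 1.0]:
    print(f"shift={shift:.1f}  rate={reproducibility_rate(0.5, shift):.3f}")
```
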
5 pages, 212 KB  
Editorial
The Quest for Psychiatric Advancement through Theory, beyond Serendipity
by Robert E. Kelly, Anthony O. Ahmed, Matthew J. Hoptman, Anika F. Alix and George S. Alexopoulos
Brain Sci. 2022, 12(1), 72; https://doi.org/10.3390/brainsci12010072 - 31 Dec 2021
Cited by 2 | Viewed by 2158
Abstract
Over the past century, advancements in psychiatric treatments have freed countless individuals from the burden of life-long, incapacitating mental illness. These treatments have largely been discovered by chance. Theory has driven advancement in the natural sciences and other branches of medicine, but psychiatry remains a field in its “infancy”. The targets for healing in psychiatry lie within the realm of the mind’s subjective experience and thought, which we cannot yet describe in terms of their biological underpinnings in the brain. Our technology is sufficiently advanced to study brain neurons and their interactions on an electrophysiological and molecular level, but we cannot say how these form a single feeling or thought. While psychiatry waits for its “Copernican Revolution”, we continue the work in developing theories and associated experiments based on our existing diagnostic systems, for example, the Diagnostic and Statistical Manual of Mental Disorders (DSM), International Classification of Diseases (ICD), or the more newly introduced Research Domain Criteria (RDoC) framework. Understanding the subjective reality of the mind in biological terms would doubtless lead to huge advances in psychiatry, as well as to ethical dilemmas, from which we are spared for the time being.
(This article belongs to the Section Neuropsychiatry)
24 pages, 1188 KB  
Article
Effect Sizes, Power, and Biases in Intelligence Research: A Meta-Meta-Analysis
by Michèle B. Nuijten, Marcel A. L. M. van Assen, Hilde E. M. Augusteijn, Elise A. V. Crompvoets and Jelte M. Wicherts
J. Intell. 2020, 8(4), 36; https://doi.org/10.3390/jintelligence8040036 - 2 Oct 2020
Cited by 24 | Viewed by 11141
Abstract
In this meta-study, we analyzed 2442 effect sizes from 131 meta-analyses in intelligence research, published from 1984 to 2014, to estimate the average effect size, median power, and evidence for bias. We found that the average effect size in intelligence research was a Pearson’s correlation of 0.26, and the median sample size was 60. Furthermore, across primary studies, we found a median power of 11.9% to detect a small effect, 54.5% to detect a medium effect, and 93.9% to detect a large effect. We documented differences in average effect size and median estimated power between different types of intelligence studies (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics). On average, across all meta-analyses (but not in every meta-analysis), we found evidence for small-study effects, potentially indicating publication bias and overestimated effects. We found no differences in small-study effects between different study types. We also found no convincing evidence for the decline effect, US effect, or citation bias across meta-analyses. We concluded that intelligence research does show signs of low power and publication bias, but that these problems seem less severe than in many other scientific fields.
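
Power figures of this kind can be approximated with a standard calculation for a two-sided test of a Pearson correlation via the Fisher z transformation. The sketch below is a back-of-the-envelope illustration, not a reproduction of the meta-study's numbers (those are medians across heterogeneous primary studies); the conventional small/medium/large benchmarks r = 0.1, 0.3, 0.5 and the median sample size of 60 are taken from the abstract.

```python
from math import atanh, sqrt
from statistics import NormalDist

def correlation_power(r, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0 via Fisher's z."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    noncentrality = atanh(r) * sqrt(n - 3)  # standardized effect under H1
    return z.cdf(noncentrality - z_crit) + z.cdf(-noncentrality - z_crit)

# Cohen's benchmarks at the abstract's median sample size of n = 60
for label, r in [("small", 0.1), ("medium", 0.3), ("large", 0.5)]:
    print(f"{label:6s} r={r}: power = {correlation_power(r, 60):.2f}")
```
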
20 pages, 1109 KB  
Article
Comfortably Numb? Researchers’ Satisfaction with the Publication System and a Proposal for Radical Change
by Hans van Dijk and Marino van Zelst
Publications 2020, 8(1), 14; https://doi.org/10.3390/publications8010014 - 2 Mar 2020
Cited by 2 | Viewed by 4231
Abstract
In this preregistered study we evaluate current attitudes towards, and experiences with, publishing research and propose an alternative system of publishing. Our main hypothesis is that researchers tend to become institutionalized, such that they are generally discontent with the current publication system, but that this dissatisfaction fades over time as they become tenured. A survey was distributed to the first authors of papers published in four recent issues of top-15 Work and Organizational Psychology (WOP) journals. Even among this positively biased sample, we found that the time it takes to publish a manuscript is negatively associated with whether authors perceive this time to be justifiable and worthwhile relative to the amount their manuscript has changed. Review quality and tenure buffer the negative relationship with perceived justifiability, but not the relationship with perceived worth. The findings suggest that untenured (WOP) researchers are dissatisfied with the publishing times of academic journals, which adds to the pile of criticisms of the journal-based publication system. Since publishing times are inherent to the journal-based publication system, we suggest that incremental improvements may not sufficiently address the problems associated with publishing times. We therefore propose the adoption of a modular publication system to improve (WOP) publishing experiences.
11 pages, 496 KB  
Article
Contributorship, Not Authorship: Use CRediT to Indicate Who Did What
by Alex O. Holcombe
Publications 2019, 7(3), 48; https://doi.org/10.3390/publications7030048 - 2 Jul 2019
Cited by 90 | Viewed by 20850
Abstract
Participation in the writing or revising of a manuscript is, according to many journal guidelines, necessary to be listed as an author of the resulting article. This is the traditional concept of authorship. But there are good reasons to shift to a contributorship model, under which it is not necessary to contribute to the writing or revision of a manuscript, and all those who make substantial contributions to a project are credited. Many journals and publishers have already taken steps in this direction, and further adoption will have several benefits. This article makes the case for continuing to move down that path. Use of a contributorship model should improve the ability of universities and funders to identify effective individual researchers and improve their ability to identify the right mix of researchers needed to advance modern science. Other benefits should include facilitating the formation of productive collaborations and the creation of important scientific tools and software. The CRediT (Contributor Roles Taxonomy) standard is machine-readable, is already incorporated into some journal management systems, and allows an incremental transition toward contributorship.
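
Because CRediT is designed to be machine-readable, a contribution statement can be represented as plain structured data and validated automatically. The sketch below is a hypothetical illustration (the contributor names and role assignments are invented); the fourteen role names are the actual CRediT taxonomy.

```python
# The 14 contributor roles defined by the CRediT taxonomy
CREDIT_ROLES = {
    "Conceptualization", "Data curation", "Formal analysis",
    "Funding acquisition", "Investigation", "Methodology",
    "Project administration", "Resources", "Software", "Supervision",
    "Validation", "Visualization", "Writing - original draft",
    "Writing - review & editing",
}

# Hypothetical contributorship record for a two-person project
contributions = {
    "A. Researcher": {"Conceptualization", "Formal analysis",
                      "Writing - original draft"},
    "B. Developer": {"Software", "Validation",
                     "Writing - review & editing"},
}

# Check that every declared role is a recognized CRediT role
for person, roles in contributions.items():
    unknown = roles - CREDIT_ROLES
    assert not unknown, f"{person}: unrecognized roles {unknown}"
    print(f"{person}: {', '.join(sorted(roles))}")
```
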