Psych, Volume 3, Issue 3 (September 2021) – 16 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
11 pages, 489 KiB  
Article
Robust Chi-Square in Extreme and Boundary Conditions: Comments on Jak et al. (2021)
by Tihomir Asparouhov and Bengt Muthén
Psych 2021, 3(3), 542-551; https://doi.org/10.3390/psych3030035 - 10 Sep 2021
Cited by 4 | Viewed by 3071
Abstract
In this article, we describe a modification of the robust chi-square test of fit that yields more accurate type I error rates when the estimated model is at the boundary of the admissible space.
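For orientation, here is a sketch (recalled general background, not the authors' modification) of the standard mean-adjusted, Satorra-Bentler-type scaled chi-square that such boundary corrections build on: the maximum likelihood statistic is divided by an estimated scaling correction.

```latex
% Background only: T_{ML} is the uncorrected ML test statistic, d the degrees of
% freedom, \hat{U} the residual weight matrix, and \hat{\Gamma} the asymptotic
% covariance matrix of the sample statistics.
T_{SB} = \frac{T_{ML}}{\hat{c}}, \qquad
\hat{c} = \frac{\operatorname{tr}(\hat{U}\hat{\Gamma})}{d}
```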

21 pages, 465 KiB  
Tutorial
Analysis of Categorical Data with the R Package confreq
by Jörg-Henrik Heine and Mark Stemmler
Psych 2021, 3(3), 522-541; https://doi.org/10.3390/psych3030034 - 7 Sep 2021
Cited by 2 | Viewed by 3589
Abstract
The person-centered approach in categorical data analysis is introduced as a complementary approach to the variable-centered approach. The former uses persons, animals, or objects on the basis of their combination of characteristics, which can be displayed in multiway contingency tables. Configural Frequency Analysis (CFA) and log-linear modeling (LLM) are the two most prominent (and related) statistical methods. Both compare observed frequencies (f_o,ik) with expected frequencies (f_e,ik). While LLM uses primarily a model-fitting approach, CFA analyzes residuals of non-fitting models. Residuals with significantly more observed than expected frequencies (f_o,ik > f_e,ik) are called types, while residuals with significantly fewer observed than expected frequencies (f_o,ik < f_e,ik) are called antitypes. The R package confreq is presented and its use is demonstrated with several data examples. Results of contingency table analyses can be displayed in tables but also in graphics representing the size and type of residual. The expected frequencies represent the null hypothesis, and different null hypotheses result in different expected frequencies. Different kinds of CFAs are presented: the first-order CFA based on the null hypothesis of independence, CFA with covariates, and the two-sample CFA. The calculation of the expected frequencies can be controlled through the design matrix, which can be easily handled in confreq.
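As a minimal sketch of the workflow described in the abstract (the function names dat2fre() and CFA() are recalled from the confreq documentation and should be checked against ?CFA; the data are purely illustrative):

```r
# First-order CFA sketch with confreq: tabulate configurations, then test each
# cell's observed frequency against the expectation under independence.
library(confreq)

set.seed(1)
dat <- data.frame(A = rbinom(200, 1, 0.5),   # three dichotomous variables
                  B = rbinom(200, 1, 0.4),
                  C = rbinom(200, 1, 0.6))

pattern_freq <- dat2fre(dat)   # aggregate raw data into configuration frequencies
cfa_res <- CFA(pattern_freq)   # first-order CFA (null hypothesis: independence)
summary(cfa_res)               # flags types (f_o > f_e) and antitypes (f_o < f_e)
```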

21 pages, 3237 KiB  
Article
Modelling Norm Scores with the cNORM Package in R
by Sebastian Gary, Wolfgang Lenhard and Alexandra Lenhard
Psych 2021, 3(3), 501-521; https://doi.org/10.3390/psych3030033 - 30 Aug 2021
Cited by 11 | Viewed by 3940
Abstract
In this article, we explain and demonstrate how to model norm scores with the cNORM package in R. This package is designed specifically to determine norm scores when the latent ability to be measured covaries with age or other explanatory variables such as grade level. The mathematical method used in this package draws on polynomial regression to model a three-dimensional hyperplane that smoothly and continuously captures the relation between raw scores, norm scores and the explanatory variable. By doing so, it overcomes the typical problems of classical norming methods, such as overly large age intervals, missing norm scores, large amounts of sampling error in the subsamples or huge requirements with regard to the sample size. After a brief introduction to the mathematics of the model, we describe the individual methods of the package. We close the article with a practical example using data from a real reading comprehension test. Full article
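A minimal sketch of the package's quick-start workflow, assuming the cnorm() wrapper and the bundled 'elfe' reading-comprehension sample as recalled from the package documentation:

```r
# Continuous norming sketch with cNORM: ranking, power terms, and model
# selection are handled by the cnorm() wrapper in one call.
library(cNORM)

model <- cnorm(raw = elfe$raw, group = elfe$group)  # raw scores modeled against the grouping variable
summary(model)                                      # inspect the selected polynomial regression model
# Percentile plots and norm-table export are available through the package's
# plotting and table functions; see the cNORM vignette for the exact calls.
```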

22 pages, 524 KiB  
Article
Estimating the Stability of Psychological Dimensions via Bootstrap Exploratory Graph Analysis: A Monte Carlo Simulation and Tutorial
by Alexander P. Christensen and Hudson Golino
Psych 2021, 3(3), 479-500; https://doi.org/10.3390/psych3030032 - 27 Aug 2021
Cited by 112 | Viewed by 8424
Abstract
Exploratory Graph Analysis (EGA) has emerged as a popular approach for estimating the dimensionality of multivariate data using psychometric networks. Sampling variability, however, has made reproducibility and generalizability a key issue in network psychometrics. To address this issue, we have developed a novel bootstrap approach called Bootstrap Exploratory Graph Analysis (bootEGA). bootEGA generates a sampling distribution of EGA results from which several statistics can be computed. Descriptive statistics (median, standard error, and dimension frequency) provide researchers with a general sense of the stability of their empirical EGA dimensions. Structural consistency estimates how often dimensions are replicated exactly across the bootstrap replicates. Item stability statistics provide information about whether dimensions are unstable due to misallocation (e.g., an item placed in the wrong dimension), multidimensionality (e.g., an item belonging to more than one dimension), or item redundancy (e.g., similar semantic content). Using a Monte Carlo simulation, we determine guidelines for acceptable item stability. Afterward, we provide an empirical example that demonstrates how bootEGA can be used to identify structural consistency issues (including a fully reproducible R tutorial). In sum, we demonstrate that bootEGA is a robust approach for assessing the stability and robustness of dimensionality in multivariate data.
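A minimal sketch of the bootEGA workflow with the EGAnet package (argument names as recalled from its documentation; the random data are illustrative only, so the resulting dimensions are not meaningful):

```r
# bootEGA sketch: bootstrap EGA, then summarize item and dimension stability.
library(EGAnet)

set.seed(1)
items <- as.data.frame(matrix(sample(1:5, 300 * 12, replace = TRUE), ncol = 12))

boot <- bootEGA(data = items, iter = 500, type = "resampling")  # nonparametric bootstrap EGA
itemStability(boot)        # how consistently each item is placed in its empirical dimension
dimensionStability(boot)   # structural consistency of each empirical dimension
```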

32 pages, 1412 KiB  
Tutorial
Flexible Item Response Modeling in R with the flexmet Package
by Leah Feuerstahler
Psych 2021, 3(3), 447-478; https://doi.org/10.3390/psych3030031 - 16 Aug 2021
Cited by 7 | Viewed by 2360
Abstract
The filtered monotonic polynomial (FMP) model is a semi-parametric item response model that allows flexible response function shapes but also includes traditional item response models as special cases. The flexmet package for R facilitates the routine use of the FMP model in real data analysis and simulation studies. This tutorial provides several code examples illustrating how the flexmet package may be used to simulate FMP model parameters and data (for both dichotomously and polytomously scored items), estimate FMP model parameters, transform traditional item response models to different metrics, and more. This tutorial serves as both an introduction to the unique features of the FMP model and a practical guide to its implementation in R via the flexmet package.
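For readers new to the model, a sketch of the FMP item response function as usually written (recalled from the FMP literature, not copied from the package manual):

```latex
% FMP sketch: m_i is a monotonic polynomial of odd degree 2k_i + 1;
% k_i = 0 reduces item i to a two-parameter logistic item.
P(X_{ij} = 1 \mid \theta_j) = \frac{1}{1 + \exp\{-m_i(\theta_j)\}}, \qquad
m_i(\theta) = b_{0i} + b_{1i}\theta + b_{2i}\theta^{2} + \dots + b_{2k_i+1,\,i}\,\theta^{2k_i+1}
```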

25 pages, 3122 KiB  
Article
shinyReCoR: A Shiny Application for Automatically Coding Text Responses Using R
by Nico Andersen and Fabian Zehner
Psych 2021, 3(3), 422-446; https://doi.org/10.3390/psych3030030 - 16 Aug 2021
Cited by 11 | Viewed by 4463
Abstract
In this paper, we introduce shinyReCoR: a new app that utilizes a cluster-based method for automatically coding open-ended text responses. Reliable coding of text responses from educational or psychological assessments requires substantial organizational and human effort. The coding of natural language in responses to tests depends on the texts’ complexity, corresponding coding guides, and the guides’ quality. Manual coding is thus not only expensive but also error-prone. With shinyReCoR, we provide a more efficient alternative. The use of natural language processing makes texts utilizable for statistical methods. shinyReCoR is a Shiny app deployed as an R-package that allows users with varying technical affinity to create automatic response classifiers through a graphical user interface based on annotated data. The present paper describes the underlying methodology, including machine learning, as well as peculiarities of the processing of language in the assessment context. The app guides users through the workflow with steps like text corpus compilation, semantic space building, preprocessing of the text data, and clustering. Users can adjust each step according to their needs. Finally, users are provided with an automatic response classifier, which can be evaluated and tested within the process. Full article
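To make the cluster-based idea concrete, here is a generic base-R sketch of the underlying approach (embed responses, cluster them, label clusters by the majority human code); this is not shinyReCoR's own API, and all object names are illustrative:

```r
# Cluster-based coding sketch: responses live in a semantic space, clusters
# inherit the most frequent human code, new responses get their nearest
# cluster's code.
set.seed(42)
emb   <- matrix(rnorm(200 * 10), nrow = 200)   # stand-in for response embeddings
codes <- sample(c(0, 1), 200, replace = TRUE)  # stand-in for human-annotated codes

cl <- kmeans(emb, centers = 8, nstart = 20)    # cluster the semantic space

# label each cluster with the most frequent human code among its members
cluster_label <- tapply(codes, cl$cluster,
                        function(x) as.numeric(names(which.max(table(x)))))

# a new response is coded with the label of its nearest cluster centroid
predict_code <- function(x) {
  d <- colSums((t(cl$centers) - x)^2)
  cluster_label[which.min(d)]
}
predict_code(emb[1, ])   # example: code the first (already embedded) response
```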

18 pages, 360 KiB  
Article
Between-Item Multidimensional IRT: How Far Can the Estimation Methods Go?
by Mauricio Garnier-Villarreal, Edgar C. Merkle and Brooke E. Magnus
Psych 2021, 3(3), 404-421; https://doi.org/10.3390/psych3030029 - 9 Aug 2021
Cited by 9 | Viewed by 4092
Abstract
Multidimensional item response models are known to be difficult to estimate, with a variety of estimation and modeling strategies being proposed to handle the difficulties. While some previous studies have considered the performance of these estimation methods, they typically include only one or two methods, or a small number of factors. In this paper, we report on a large simulation study of between-item multidimensional IRT estimation methods, considering five different methods, a variety of sample sizes, and up to eight factors. This study provides a comprehensive picture of the methods’ relative performance, as well as each individual method’s strengths and weaknesses. The study results lead us to make recommendations for applied research, related to which estimation methods should be used under various scenarios. Full article
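Purely as an illustration of what a between-item structure looks like in code (using the mirt package, which is not necessarily one of the five implementations compared in the article; data are simulated):

```r
# Between-item two-factor 2PL: each item loads on exactly one factor.
library(mirt)

set.seed(1)
a <- cbind(c(rlnorm(10, 0.2, 0.2), rep(0, 10)),   # items 1-10 load on F1 only
           c(rep(0, 10), rlnorm(10, 0.2, 0.2)))   # items 11-20 load on F2 only
resp <- simdata(a = a, d = rnorm(20), N = 1000, itemtype = "dich",
                sigma = matrix(c(1, 0.5, 0.5, 1), 2))

spec <- mirt.model("
  F1 = 1-10
  F2 = 11-20
  COV = F1*F2
")
fit_qmc  <- mirt(resp, spec, itemtype = "2PL", method = "QMCEM")  # quasi-Monte Carlo EM
fit_mhrm <- mirt(resp, spec, itemtype = "2PL", method = "MHRM")   # stochastic Robbins-Monro
```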

18 pages, 1961 KiB  
Article
cdcatR: An R Package for Cognitive Diagnostic Computerized Adaptive Testing
by Miguel A. Sorrel, Pablo Nájera and Francisco J. Abad
Psych 2021, 3(3), 386-403; https://doi.org/10.3390/psych3030028 - 9 Aug 2021
Cited by 4 | Viewed by 3390
Abstract
Cognitive diagnosis models (CDMs) are confirmatory latent class models that provide fine-grained information about skills and cognitive processes. These models have gained attention in the last few years because of their usefulness in educational and psychological settings. Recently, numerous developments have been made to allow for the implementation of cognitive diagnosis computerized adaptive testing (CD-CAT). Despite methodological advances, CD-CAT applications are still scarce. To facilitate research and the emergence of empirical applications in this area, we have developed the cdcatR package for R software. The purpose of this document is to illustrate the different functions included in this package. The package includes functionalities for data generation, model selection based on relative fit information, implementation of several item selection rules (including item exposure control), and CD-CAT performance evaluation in terms of classification accuracy, item exposure, and test length. In conclusion, an R package is made available to researchers and practitioners that allows for an easy implementation of CD-CAT in both simulation and applied studies. Ultimately, this is expected to facilitate the development of empirical applications in this area. Full article
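A heavily hedged sketch of a CD-CAT run: the GDINA calibration step uses the GDINA package's documented interface, but the cdcat() call and its argument names are assumptions based on the package description and must be checked against ?cdcat:

```r
# CD-CAT sketch: calibrate an item bank with GDINA, then run an adaptive
# administration with cdcatR.
library(GDINA)
library(cdcatR)

dat <- sim30GDINA$simdat   # example responses shipped with GDINA
Q   <- sim30GDINA$simQ     # example Q-matrix shipped with GDINA
fit <- GDINA(dat = dat, Q = Q, model = "GDINA")   # calibrate the bank

# Assumed interface: item selection rule and maximum test length.
res <- cdcat(fit = fit, dat = dat, itemSelect = "GDI", MAXJ = 20)
```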

26 pages, 1101 KiB  
Article
Predicting Differences in Model Parameters with Individual Parameter Contribution Regression Using the R Package ipcr
by Manuel Arnold, Andreas M. Brandmaier and Manuel C. Voelkle
Psych 2021, 3(3), 360-385; https://doi.org/10.3390/psych3030027 - 6 Aug 2021
Cited by 4 | Viewed by 3334
Abstract
Unmodeled differences between individuals or groups can bias parameter estimates and may lead to false-positive or false-negative findings. Such instances of heterogeneity can often be detected and predicted with additional covariates. However, predicting differences with covariates can be challenging or even infeasible, depending on the modeling framework and type of parameter. Here, we demonstrate how the individual parameter contribution (IPC) regression framework, as implemented in the R package ipcr, can be leveraged to predict differences in any parameter across a wide range of parametric models. First and foremost, IPC regression is an exploratory analysis technique to determine if and how the parameters of a fitted model vary as a linear function of covariates. After introducing the theoretical foundation of IPC regression, we use an empirical data set to demonstrate how parameter differences in a structural equation model can be predicted with the ipcr package. Then, we analyze the performance of IPC regression in comparison to alternative methods for modeling parameter heterogeneity in a Monte Carlo simulation. Full article
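A sketch of how IPC regression might be called on a fitted lavaan model; the ipcr() interface (fitted model plus a data frame of person-level covariates) is an assumption based on the package description and should be checked against ?ipcr:

```r
# IPC regression sketch: fit a CFA, then ask whether its parameters vary as a
# linear function of covariates.
library(lavaan)
library(ipcr)

model <- ' visual =~ x1 + x2 + x3 '
fit   <- cfa(model, data = HolzingerSwineford1939)

covs <- HolzingerSwineford1939[, c("sex", "ageyr")]
ipc  <- ipcr(fit, covariates = covs)   # assumed call: regress parameter contributions on covariates
summary(ipc)
```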

12 pages, 278 KiB  
Article
The Sociotype of Dermatological Patients: Assessing the Social Burden of Skin Disease
by Servando E. Marron, Lucia Tomas-Aragones, Pedro C. Marijuan, Pablo Y. Mendivil-Nasarre and Jorge Navarro
Psych 2021, 3(3), 348-359; https://doi.org/10.3390/psych3030026 - 3 Aug 2021
Viewed by 2321
Abstract
Skin diseases can be the cause of a significant psychosocial burden. However, tools to screen for social interaction difficulties and diminished social networks that affect the wellbeing and mental health of the individual have not been sufficiently developed. This study is based on the sociotype approach, which has recently been proposed as a new theoretical construct and implemented in the form of an ad hoc questionnaire that examines social bonding structures and relational factors. A pilot study was conducted in Alcañiz Hospital (Spain), with a study population of 159 dermatology patients. The results showed that in both subjective estimates concerning family, friends, work, and acquaintances, and in quantitative aspects, such as social contacts, duration of conversations, and moments of laughter, there were significant differences within the sample according to diagnostic severity, dermatological disease, and gender. The sociotype questionnaire (SOCQ) is a useful tool to screen for social difficulties in dermatological patients.
(This article belongs to the Topic Psychodermatology)
12 pages, 397 KiB  
Article
Using the Effective Sample Size as the Stopping Criterion in Markov Chain Monte Carlo with the Bayes Module in Mplus
by Steffen Zitzmann, Sebastian Weirich and Martin Hecht
Psych 2021, 3(3), 336-347; https://doi.org/10.3390/psych3030025 - 30 Jul 2021
Cited by 11 | Viewed by 3886
Abstract
Bayesian modeling using Markov chain Monte Carlo (MCMC) estimation requires researchers to decide not only whether estimation has converged but also whether the Bayesian estimates are well approximated by summary statistics from the chain. However, in software such as the Bayes module in Mplus, which helps researchers check whether convergence has been achieved by comparing the potential scale reduction (PSR) with a prespecified maximum PSR, the size of the MCMC error, or, equivalently, the effective sample size (ESS), is not monitored. Zitzmann and Hecht (2019) proposed a method that can be used to check whether a minimum ESS has been reached in Mplus. In this article, we evaluated this method with a computer simulation. Specifically, we fit a multilevel structural equation model to a large number of simulated data sets and compared different prespecified minimum ESS values with the actual (empirical) ESS values. The empirical values were approximately equal to or larger than the prespecified minimum ones, thus indicating the validity of the method.
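Independently of Mplus, the ESS idea is easy to make concrete with the coda package (a real, documented function; the autocorrelated chain is simulated only for illustration):

```r
# Effective sample size of an autocorrelated chain: far fewer "effective"
# draws than the nominal chain length.
library(coda)

set.seed(1)
chain <- as.mcmc(as.numeric(arima.sim(model = list(ar = 0.8), n = 5000)))
effectiveSize(chain)   # roughly n * (1 - 0.8) / (1 + 0.8), i.e., far below 5000
```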

14 pages, 490 KiB  
Article
Testing and Interpreting Latent Variable Interactions Using the semTools Package
by Alexander M. Schoemann and Terrence D. Jorgensen
Psych 2021, 3(3), 322-335; https://doi.org/10.3390/psych3030024 - 30 Jul 2021
Cited by 33 | Viewed by 9486
Abstract
Examining interactions among predictors is an important part of a developing research program. Estimating interactions using latent variables provides additional power to detect effects over testing interactions in regression. However, when predictors are modeled as latent variables, estimating and testing interactions requires additional steps beyond the models used for regression. We review methods of estimating and testing latent variable interactions with a focus on product indicator methods. Product indicator methods of examining latent interactions provide an accurate method to estimate and test latent interactions and can be implemented in any latent variable modeling software package. Significant latent interactions require additional steps (plotting and probing) to interpret interaction effects. We demonstrate how these methods can be easily implemented using functions in the semTools package with models fit using the lavaan package in R, and we illustrate how these methods work using an applied example concerning teacher stress and testing. Full article
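A minimal sketch of the product-indicator workflow with semTools and lavaan; the indProd() argument names and the default product-indicator names (x1.z1, etc.) are recalled from the semTools documentation and should be verified with ?indProd:

```r
# Double-mean-centered product indicators, then a latent interaction model.
library(lavaan)
library(semTools)

set.seed(7)
n <- 500
X <- rnorm(n); Z <- rnorm(n); Y <- 0.3 * X + 0.3 * Z + 0.2 * X * Z + rnorm(n)
dat <- data.frame(x1 = X + rnorm(n), x2 = X + rnorm(n), x3 = X + rnorm(n),
                  z1 = Z + rnorm(n), z2 = Z + rnorm(n), z3 = Z + rnorm(n),
                  y1 = Y + rnorm(n), y2 = Y + rnorm(n), y3 = Y + rnorm(n))

dat2 <- indProd(dat, var1 = c("x1", "x2", "x3"), var2 = c("z1", "z2", "z3"),
                match = TRUE, doubleMC = TRUE)   # matched, double-mean-centered products

model <- '
  X  =~ x1 + x2 + x3
  Z  =~ z1 + z2 + z3
  XZ =~ x1.z1 + x2.z2 + x3.z3   # product-indicator names assumed; check names(dat2)
  Y  =~ y1 + y2 + y3
  Y  ~ X + Z + XZ
'
fit <- sem(model, data = dat2)
summary(fit)
```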

14 pages, 3345 KiB  
Article
Estimating Explanatory Extensions of Dichotomous and Polytomous Rasch Models: The eirm Package in R
by Okan Bulut, Guher Gorgun and Seyma Nur Yildirim-Erbasli
Psych 2021, 3(3), 308-321; https://doi.org/10.3390/psych3030023 - 29 Jul 2021
Cited by 19 | Viewed by 4622
Abstract
Explanatory item response modeling (EIRM) enables researchers and practitioners to incorporate item and person properties into item response theory (IRT) models. Unlike traditional IRT models, explanatory IRT models can explain common variability stemming from the shared variance among item clusters and person groups. In this tutorial, we present the R package eirm, which provides a simple and easy-to-use set of tools for preparing data, estimating explanatory IRT models based on the Rasch family, extracting model output, and visualizing model results. We describe how functions in the eirm package can be used for estimating traditional IRT models (e.g., Rasch model, Partial Credit Model, and Rating Scale Model), item-explanatory models (i.e., Linear Logistic Test Model), and person-explanatory models (i.e., latent regression models) for both dichotomous and polytomous responses. In addition to demonstrating the general functionality of the eirm package, we also provide real-data examples with annotated R codes based on the Rosenberg Self-Esteem Scale. Full article
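A minimal sketch of the eirm interface as described above; the string-formula style (lme4-type terms) is recalled from the package description and should be checked against ?eirm, and the data are illustrative:

```r
# Rasch model via eirm: item easiness fixed effects, random person intercepts.
library(eirm)

set.seed(3)
long_data <- expand.grid(person = factor(1:100), item = factor(paste0("i", 1:5)))
long_data$resp <- rbinom(nrow(long_data), 1, 0.5)

fit_rasch <- eirm(formula = "resp ~ -1 + item + (1|person)", data = long_data)
# For an item-explanatory (LLTM-type) model, item properties would replace the
# 'item' factor on the right-hand side of the formula.
```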

29 pages, 1009 KiB  
Article
Item Parameter Estimation in Multistage Designs: A Comparison of Different Estimation Approaches for the Rasch Model
by Jan Steinfeld and Alexander Robitzsch
Psych 2021, 3(3), 279-307; https://doi.org/10.3390/psych3030022 - 8 Jul 2021
Cited by 8 | Viewed by 4132
Abstract
There is some debate in the psychometric literature about item parameter estimation in multistage designs. It is occasionally argued that the conditional maximum likelihood (CML) method is superior to the marginal maximum likelihood (MML) method because no assumptions have to be made about the trait distribution. However, CML estimation in its original formulation leads to biased item parameter estimates. Zwitser and Maris (2015, Psychometrika) proposed a modified conditional maximum likelihood estimation method for multistage designs that provides practically unbiased item parameter estimates. In this article, the differences between the estimation approaches for multistage designs were investigated in a simulation study. Four estimation conditions (CML, CML estimation that takes the respective MST design into account, MML assuming a normal trait distribution, and MML with log-linear smoothing) were examined, considering different multistage designs, numbers of items, sample sizes, and trait distributions. The results showed that in the case of a substantial violation of the normal distribution, the CML method seemed preferable to MML estimation employing a misspecified normal trait distribution, especially as the number of items and the sample size increased. However, MML estimation using log-linear smoothing led to results that were very similar to those of the CML method that takes the respective MST design into account.
(This article belongs to the Section Psychometrics and Educational Measurement)
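For readers unfamiliar with CML, a sketch of the standard conditional likelihood for the Rasch model; the multistage adjustment of Zwitser and Maris modifies the combinatorial term to respect the routing rules and is not reproduced here:

```latex
% beta_i: item difficulties; r_v: person v's sum score; gamma_r: elementary
% symmetric function of order r of exp(-beta_1), ..., exp(-beta_I).
L_C(\beta) = \prod_{v} \frac{\exp\left(-\sum_{i} x_{vi}\,\beta_i\right)}{\gamma_{r_v}(\beta)},
\qquad
\gamma_r(\beta) = \sum_{\{x\,:\,\sum_i x_i = r\}} \exp\left(-\sum_i x_i\,\beta_i\right)
```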

10 pages, 256 KiB  
Review
Tattoos in Psychodermatology
by İlknur Kıvanç Altunay, Sibel Mercan and Ezgi Özkur
Psych 2021, 3(3), 269-278; https://doi.org/10.3390/psych3030021 - 6 Jul 2021
Cited by 5 | Viewed by 7315
Abstract
Tattooing is a permanent form of body art applied onto the skin with a decorative ink, and it has been practiced from antiquity until today. The number of tattooed people is steadily increasing as tattoos have become popular all over the world, especially in Western countries. Tattoos display distinctive designs and images, from protective totems and tribal symbols to the names of loved or lost persons or strange figures, which are used as a means of self-expression. They are worn on the skin as a lifelong commitment, and everyone has their own reasons to become tattooed, whether they be simply esthetic or a proclamation of group identity. Tattoos are representations of one’s feelings, unconscious conflicts, and inner life onto the skin. The skin plays a major role in this representation and is involved in different ways in this process. This article aims to review the historical and psychoanalytical aspects of tattoos, the reasons for and against tattooing, medical and dermatological implications of the practice, and emotional reflections from a psychodermatological perspective. Full article
(This article belongs to the Topic Psychodermatology)
10 pages, 230 KiB  
Article
Social Support and Appearance Satisfaction Can Predict Changes in the Psychopathology Levels of Patients with Acne, Psoriasis and Eczema, before Dermatological Treatment and in a Six-Month Follow-up Phase
by Charalambos Costeris, Maria Petridou and Yianna Ioannou
Psych 2021, 3(3), 259-268; https://doi.org/10.3390/psych3030020 - 1 Jul 2021
Cited by 2 | Viewed by 3003
Abstract
This was a cross-sectional study which assessed the factors that predicted changes in the levels of psychopathological symptomatology of patients with acne, psoriasis and eczema, both before dermatological treatment and in a six-month follow-up phase. One hundred and eight dermatological patients (18–35 years) participated in the study; 54 with visible facial cystic acne (Group A), and 54 with non-visible psoriasis/eczema (Group B). A battery of self-report questionnaires was administered to all patients before their dermatological treatment and in a six-month follow-up phase, and included the Symptom Checklist-90-Revised (SCL-90-R), the Interpersonal Support Evaluation List (ISEL-40), the Multidimensional Body–Self Relations Questionnaire (MBSRQ–AS) and the Rosenberg Self-Esteem Scale. Multiple regression analyses revealed that patients’ overall perceived social support and overall appearance satisfaction appeared to be strong predictors of the maintenance of patients’ psychopathology levels, even six months after they began their dermatological treatment. Psychosocial factors such as patients’ social support and appearance satisfaction could influence their psychopathology levels and the way they experienced their skin condition, before treatment and after a six-month period. The psychological assessment of the aforementioned factors could detect patients who would benefit from psychotherapeutic interventions to help them adapt to the extra burden which accompanies dermatological disorders.
(This article belongs to the Topic Psychodermatology)