Exploring New Frontiers in Psychometrics: Advancing Measurement of Skills and Behaviors

A special issue of Behavioral Sciences (ISSN 2076-328X). This special issue belongs to the section "Psychiatric, Emotional and Behavioral Disorders".

Deadline for manuscript submissions: 30 September 2025

Special Issue Editors


Guest Editor: Dr. Ioannis Tsaousis
Department of Psychology, National and Kapodistrian University of Athens, 15784 Athens, Greece
Interests: psychometric theory; measurement theory; measurement of individual differences; test development; item response theory; latent variable models; computerized adaptive testing (CAT)

Guest Editor: Dr. Georgios Sideridis

Special Issue Information

Dear Colleagues,

This Special Issue aims to delve into the cutting-edge developments in psychometric assessment methods tailored specifically to measure skills and behaviors. The primary goal is to showcase innovative approaches and techniques that extend beyond conventional methodologies, enabling a more accurate and comprehensive evaluation of individual capabilities and tendencies. Through this exploration, this Special Issue seeks to identify emerging trends, address existing challenges, and propose novel solutions in the field of psychometrics. Moreover, it aims to foster interdisciplinary collaboration among researchers, practitioners, and experts from diverse domains to facilitate the exchange of ideas and promote the adoption of advanced assessment strategies. Ultimately, the objectives include advancing the theoretical foundations of psychometrics, enhancing the validity and reliability of measurement tools, and contributing to a deeper understanding of human skills and behaviors in various contexts.

Dr. Ioannis Tsaousis
Dr. Georgios Sideridis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Behavioral Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • psychometrics
  • measurement theory
  • advanced measurement approaches
  • skill measurement advancements
  • advancements in behavioral assessment

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

21 pages, 2029 KiB  
Article
Comparing Frequentist and Bayesian Methods for Factorial Invariance with Latent Distribution Heterogeneity
by Xinya Liang, Ji Li, Mauricio Garnier-Villarreal and Jihong Zhang
Behav. Sci. 2025, 15(4), 482; https://doi.org/10.3390/bs15040482 - 7 Apr 2025
Abstract
Factorial invariance is critical for ensuring consistent relationships between measured variables and latent constructs across groups or time, enabling valid comparisons in social science research. Detecting factorial invariance becomes challenging when varying degrees of heterogeneity are present in the distribution of latent factors. This simulation study examined how changes in latent means and variances between groups influence the detection of noninvariance, comparing Bayesian and maximum likelihood fit measures. The design factors included sample size, noninvariance levels, and latent factor distributions. Results indicated that differences in factor variance have a stronger impact on measurement invariance than differences in factor means, with heterogeneity in latent variances more strongly affecting scalar invariance testing than metric invariance testing. Among model selection methods, goodness-of-fit indices generally exhibited lower power compared to likelihood ratio tests (LRTs), information criteria (ICs; except BIC), and leave-one-out cross-validation (LOO), which achieved a good balance between false and true positive rates.
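To make the simulation design concrete, the sketch below generates two-group data from a single-factor model in which the groups differ in latent mean and variance and one indicator carries loading and intercept noninvariance. It is a minimal illustration of the kind of design described in the abstract, not the authors' code; the simulate_group helper and all parameter values are hypothetical.

```python
# Minimal sketch: two-group single-factor data with latent distribution
# heterogeneity plus metric/scalar noninvariance on one indicator.
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, latent_mean, latent_var, loadings, intercepts, resid_sd=0.6):
    """Draw n observations from x = intercepts + loadings * eta + e."""
    eta = rng.normal(latent_mean, np.sqrt(latent_var), size=n)   # latent factor scores
    e = rng.normal(0.0, resid_sd, size=(n, len(loadings)))       # unique factors
    return intercepts + np.outer(eta, loadings) + e

# Reference group: standard latent distribution, fully invariant parameters.
loadings_ref = np.array([0.8, 0.7, 0.75, 0.65, 0.7])
intercepts_ref = np.zeros(5)
group1 = simulate_group(500, latent_mean=0.0, latent_var=1.0,
                        loadings=loadings_ref, intercepts=intercepts_ref)

# Focal group: shifted latent mean and inflated latent variance (the
# "latent distribution heterogeneity" condition), plus loading and
# intercept noninvariance on the first indicator.
loadings_foc = loadings_ref.copy()
loadings_foc[0] -= 0.25          # metric (loading) noninvariance
intercepts_foc = intercepts_ref.copy()
intercepts_foc[0] += 0.30        # scalar (intercept) noninvariance
group2 = simulate_group(500, latent_mean=0.5, latent_var=1.5,
                        loadings=loadings_foc, intercepts=intercepts_foc)

print(group1.mean(axis=0).round(2), group2.mean(axis=0).round(2))
```

In a full simulation, data sets like these would be fit with configural, metric, and scalar models and compared via LRTs, ICs, or Bayesian fit measures, as the study describes.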

21 pages, 1730 KiB  
Article
A Machine Learning-Based Method for Developing the Chinese Symptom Checklist-11 (CSCL-11)
by Xuanyi Cai, Yunan Zhang, Meng Su, Fan Chang, Lei Quan, Yixing Liu and Bei Wang
Behav. Sci. 2025, 15(4), 459; https://doi.org/10.3390/bs15040459 - 2 Apr 2025
Abstract
The Chinese version of the Symptom Checklist-90 (SCL-90) is excessively lengthy, resulting in extended completion time and reduced respondent compliance. This study aimed to utilize a condensed subset of items from the Chinese SCL-90 to identify individuals at high risk for psychological disorders based on machine learning methods, forming a concise and efficient preliminary psychopathological screening instrument for the Chinese general population. Analyzing data collected from 4808 SCL-90 psychological surveys, this study applied variable clustering to select the most representative items, resulting in an 11-item scale: the Chinese Symptom Checklist-11 (CSCL-11). The CSCL-11 demonstrated high internal consistency (Cronbach’s α = 0.84). The results of factor analysis supported a single-factor model for the CSCL-11, demonstrating an acceptable fit (SRMR = 0.035, RMSEA = 0.064, CFI = 0.935, and TLI = 0.919). The CSCL-11 demonstrated strong predictive performance for the Global Severity Index (GSI; RMSE = 0.11, R2 = 0.92, Pearson’s r = 0.96) and various subscale scores (RMSE < 0.25, R2 > 0.70, Pearson’s r > 0.85). Additionally, it achieved a 96% accuracy rate in identifying individuals at high risk for psychological disorders. The comparison results indicated that the CSCL-11 outperformed SCL-14, SCL-K11, and SCL-K-9 in predicting GSI scores. In identifying high-risk groups, CSCL-11 demonstrated performance similar to that of SCL-14 and surpassed both SCL-K11 and SCL-K-9. The CSCL-11 retains most of the critical information from the original Chinese SCL-90 and serves as a preliminary psychopathological screening tool for the Chinese general population.
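A rough sketch of the general idea of variable clustering for scale shortening, assuming placeholder Likert-type responses: items are clustered by their correlations, one representative item is kept per cluster, and the internal consistency of the short form is checked with Cronbach's alpha. This is not the CSCL-11 development pipeline; the clustering rule, number of clusters, and data below are illustrative.

```python
# Illustrative sketch: cluster items by correlation and keep one
# representative per cluster, then compute Cronbach's alpha for the short form.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
n_items, n_resp = 90, 1000
responses = rng.integers(1, 6, size=(n_resp, n_items)).astype(float)  # placeholder 1-5 ratings

# Distance = 1 - |correlation|, then average-linkage clustering into 11 clusters.
corr = np.corrcoef(responses, rowvar=False)
dist = 1.0 - np.abs(corr)
Z = linkage(dist[np.triu_indices(n_items, k=1)], method="average")
clusters = fcluster(Z, t=11, criterion="maxclust")

# Keep the item with the highest mean within-cluster correlation as the representative.
selected = []
for c in np.unique(clusters):
    idx = np.where(clusters == c)[0]
    rep = idx[np.argmax(np.abs(corr[np.ix_(idx, idx)]).mean(axis=1))]
    selected.append(rep)

def cronbach_alpha(items):
    """(k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

short_form = responses[:, selected]
print(sorted(selected), round(cronbach_alpha(short_form), 3))
```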

27 pages, 3458 KiB  
Article
Predicting Leadership Status Through Trait Emotional Intelligence and Cognitive Ability
by Bogdan S. Zadorozhny, K. V. Petrides, Yongtian Cheng, Stephen Cuppello and Dimitri van der Linden
Behav. Sci. 2025, 15(3), 345; https://doi.org/10.3390/bs15030345 - 11 Mar 2025
Abstract
Many interconnected factors have been implicated in the prediction of whether a given individual occupies a managerial role. These include an assortment of demographic variables, such as age and gender, as well as trait emotional intelligence (trait EI) and cognitive ability. In order to disentangle their respective effects on formal leadership position, the present study compares a traditional linear approach in the form of a logistic regression with the results of a set of supervised machine learning (SML) algorithms. Beyond extending the analysis past linear effects, a series of techniques was incorporated to apply the ML approaches in practice and interpret their results, including feature importance and interactions. The results demonstrated the superior predictive strength of trait EI over cognitive ability, especially of its sociability factor, and supported the predictive utility of the random forest (RF) algorithm in this context. We thereby hope to contribute to and support a developing trend of acknowledging the genuine complexity of real-world contexts such as leadership and to provide direction for future investigations, including more sophisticated ML approaches.
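A minimal sketch of the random forest workflow the abstract refers to, using synthetic data and hypothetical feature names: a binary leadership indicator is predicted from trait-EI facets and cognitive ability, and permutation importance is used to compare predictors. The generating process and effect sizes below are invented for illustration and do not reflect the study's data.

```python
# Illustrative sketch (synthetic data): random forest + permutation importance
# for comparing trait-EI facets and cognitive ability as predictors of leadership status.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 5))
feature_names = ["EI_wellbeing", "EI_self_control", "EI_emotionality",
                 "EI_sociability", "cognitive_ability"]
# Hypothetical generating process: sociability dominates, with a nonlinear interaction.
logit = 1.2 * X[:, 3] + 0.4 * X[:, 4] + 0.5 * X[:, 0] * X[:, 3] - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Permutation importance on held-out data captures each feature's contribution,
# including contributions through nonlinear effects and interactions.
imp = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0)
for name, mean_imp in sorted(zip(feature_names, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>18s}: {mean_imp:.3f}")
```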

20 pages, 724 KiB  
Article
A Machine-Learning-Based Approach to Informing Student Admission Decisions
by Tuo Liu, Cosima Schenk, Stephan Braun and Andreas Frey
Behav. Sci. 2025, 15(3), 330; https://doi.org/10.3390/bs15030330 - 7 Mar 2025
Abstract
University resources are limited, and strategic admission management is required in certain fields that have high application volumes but limited available study places. Student admission processes need to select an appropriate number of applicants to ensure optimal enrollment while avoiding over- or underenrollment. The traditional approach often relies on the enrollment yields from previous years, assuming fixed admission probabilities for all applicants and ignoring statistical uncertainty, which can lead to suboptimal decisions. In this study, we propose a novel machine-learning-based approach to improving student admission decisions. Trained on historical application data, this approach predicts the number of enrolled applicants conditional on the number of admitted applicants, incorporates the statistical uncertainty of these predictions, and derives the probability of the number of enrolled applicants being larger or smaller than the available study places. The application of this approach is illustrated using empirical application data from a German university. In this illustration, several machine learning models were first trained and compared, and the best model was selected. It was then applied to applicant data for the next year to estimate the individual enrollment probabilities, which were aggregated to predict the number of applicants enrolled and the probability of this number being larger or smaller than the available study places. When this approach was compared with the traditional approach using fixed enrollment yields, the results showed that the proposed approach enables data-driven adjustments to the number of admitted applicants, ensuring controlled risk of over- and underenrollment.
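The aggregation step described above can be sketched as follows, assuming per-applicant enrollment probabilities have already been obtained from some fitted classifier: the distribution of the number of enrollees (a Poisson-binomial distribution) is approximated by Monte Carlo, giving the over- and underenrollment risks for a given number of admitted applicants. The capacity and probabilities below are placeholders, not values from the study.

```python
# Illustrative sketch of aggregating per-applicant enrollment probabilities into
# the risk of exceeding (or falling short of) the available study places.
import numpy as np

rng = np.random.default_rng(7)
capacity = 120                               # available study places (hypothetical)
n_admitted = 300                             # candidate number of admission offers
p_enroll = rng.beta(2, 4, size=n_admitted)   # stand-in for predicted enrollment probabilities

# Monte Carlo approximation of the Poisson-binomial distribution of enrollees.
draws = rng.random((10_000, n_admitted)) < p_enroll
n_enrolled = draws.sum(axis=1)

print("expected enrollees :", n_enrolled.mean().round(1))
print("P(overenrollment)  :", (n_enrolled > capacity).mean().round(3))
print("P(underenrollment) :", (n_enrolled < capacity).mean().round(3))
```

Repeating this calculation for different candidate numbers of admitted applicants is what allows the data-driven, risk-controlled adjustment described in the abstract.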

18 pages, 304 KiB  
Article
Estimating the Minimum Sample Size for Neural Network Model Fitting—A Monte Carlo Simulation Study
by Yongtian Cheng, Konstantinos Vassilis Petrides and Johnson Li
Behav. Sci. 2025, 15(2), 211; https://doi.org/10.3390/bs15020211 - 14 Feb 2025
Abstract
In the era of machine learning, many psychological studies use machine learning methods. Specifically, neural networks, a set of machine learning methods that exhibit exceptional performance in various tasks, have been used on psychometric datasets for supervised model fitting. From the computer scientist's perspective, psychometric independent variables are typically ordinal and low-dimensional, characteristics that can significantly impact model performance. To our knowledge, there is no sample-planning guidance for this task. Therefore, we conducted a simulation study to test the performance of neural networks across different sample sizes under simulated linear and nonlinear relationships. We propose a minimum sample size for neural network model fitting based on two criteria: the performance of 95% of the models is close to the theoretical maximum, and 80% of the models can outperform the linear model. The findings of this simulation study show that the performance of neural networks can be unstable with ordinal variables as independent variables, and we suggest that neural networks should not be used with ordinal independent variables, at least under the nonlinear relationships commonly encountered in psychology. Further suggestions and research directions are also provided.
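One Monte Carlo replication of the kind of comparison described above might look like the sketch below: a small neural network and a linear model are fit to ordinal (Likert-type) predictors with a nonlinear true relationship, and their out-of-sample R² values are compared. The architecture, sample size, and generating function are illustrative assumptions, not the study's settings.

```python
# Illustrative sketch: one replication of an NN-versus-linear-model comparison
# on ordinal predictors with a nonlinear true relationship.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 500                                             # training sample size under study
X = rng.integers(1, 6, size=(n, 6)).astype(float)   # six 5-point ordinal predictors
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + rng.normal(0, 1, n)  # nonlinear truth

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
nn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X_tr, y_tr)
lin = LinearRegression().fit(X_tr, y_tr)

print("NN R^2    :", round(r2_score(y_te, nn.predict(X_te)), 3))
print("Linear R^2:", round(r2_score(y_te, lin.predict(X_te)), 3))
# Repeating this over many replications and sample sizes yields the performance
# curves from which a minimum sample size can be read off.
```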
13 pages, 618 KiB  
Article
Development of a Forced-Choice Personality Inventory via Thurstonian Item Response Theory (TIRT)
by Ioannis Tsaousis and Amjed Al-Owidha
Behav. Sci. 2024, 14(12), 1118; https://doi.org/10.3390/bs14121118 - 21 Nov 2024
Abstract
This study had two purposes: (1) to develop a forced-choice personality inventory to assess student personality characteristics based on the five-factor model (FFM) of personality and (2) to examine its factor structure via the Thurstonian Item Response Theory (TIRT) approach based on Thurstone’s law of comparative judgment. A total of 200 items were generated to represent the five dimensions, and through Principal Axis Factoring and the composite reliability index, a final pool of 75 items was selected. These items were then organized into 25 blocks, each containing three statements (triplets) designed to balance social desirability across the blocks. The study involved two samples: the first sample of 1484 students was used to refine the item pool, and the second sample of 823 university students was used to examine the factorial structure of the forced-choice inventory. After re-coding the responses into a binary format, the data were analyzed within a standard structural equation modeling (SEM) framework. Then, the TIRT model was applied to evaluate the factorial structure of the forced-choice inventory, with the results indicating an adequate fit. Suggestions for future studies to further establish the scale’s reliability (e.g., test–retest) and validity (e.g., concurrent, convergent, and divergent) are provided.
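The binary re-coding step mentioned in the abstract can be illustrated as follows: each forced-choice triplet is expanded into three pairwise comparisons, coded 1 when the first statement of the pair is preferred. The input format (within-block ranks) and statement identifiers are hypothetical, and the subsequent TIRT/SEM estimation is not shown.

```python
# Illustrative sketch: expand one forced-choice triplet into binary pairwise outcomes
# (A vs B, A vs C, B vs C), as required before Thurstonian IRT modeling.
from itertools import combinations

def triplet_to_binary(ranks):
    """ranks: dict mapping statement id -> rank within one block (1 = most like me)."""
    pairs = {}
    for a, b in combinations(sorted(ranks), 2):
        pairs[f"{a}>{b}"] = int(ranks[a] < ranks[b])   # 1 if statement a is preferred over b
    return pairs

# One respondent's answer to a block: statement s2 most preferred, s3 least preferred.
print(triplet_to_binary({"s1": 2, "s2": 1, "s3": 3}))
# -> {'s1>s2': 0, 's1>s3': 1, 's2>s3': 1}
```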
