Search Results (2,658)

Search Parameters:
Keywords = language model applications

23 pages, 1706 KB  
Review
Contextual Integrity in Large Language Models: A Review
by Ahmad Hassanpour and Bian Yang
J. Cybersecur. Priv. 2026, 6(2), 74; https://doi.org/10.3390/jcp6020074 - 15 Apr 2026
Abstract
The rapid advancements in large language models (LLMs) have transformed natural language processing, enabling their application in diverse domains such as conversational agents and decision-support systems in sensitive areas like healthcare, finance, and eldercare. However, as LLMs are increasingly integrated into real-world contexts, concerns about their adherence to ethical principles, privacy norms, and contextual expectations have become critical. Privacy preservation is particularly pressing in interactions involving personal or sensitive data, where ensuring that LLMs align with societal norms while mitigating risks of information leakage is essential to fostering trust and ensuring responsible deployment. Contextual integrity (CI) provides a robust framework to address these challenges, emphasizing that information flows should adhere to context-specific social norms. This principle is especially vital in sensitive applications, where LLMs must evaluate roles, information attributes, and transmission principles to maintain ethical behavior. Despite their linguistic proficiency, LLMs often fail to recognize and adapt to nuanced contextual norms, a limitation exacerbated by their probabilistic nature and the biases in their training data, which can lead to inappropriate or harmful outputs. Addressing these shortcomings requires rigorous evaluation methodologies and fine-tuning strategies that embed societal and contextual norms into the models. Full article
(This article belongs to the Section Privacy)
28 pages, 411 KB  
Article
Optimal Binary Locally Repairable Codes with Locality and Availability from Latin Squares
by Nanyuan Cao, Yu Zhang, Xiangqiong Zeng and Li Zhang
Mathematics 2026, 14(8), 1321; https://doi.org/10.3390/math14081321 - 15 Apr 2026
Abstract
The rapid development of machine learning, large language models, and related technologies has greatly increased the demand for data storage capacity. Therefore, the role of distributed storage systems in such applications has become more prominent. However, it is inevitable that a single node fails in a distributed storage system during long-term use. Being able to repair failed nodes in a timely manner is extremely important for the stable operation of distributed storage systems, and a specific encoding scheme is required to meet the needs of efficiently repairing failed nodes. This research presents a novel family of binary locally repairable codes (LRCs) developed using multiple disjoint recovery sets constructed based on mutually orthogonal Latin squares (MOLS). The proposed constructions achieve distance optimality under the Singleton-like bound for LRCs with availability. Specifically, the codes are parameterized as (n = r^2 + tr, k = r^2, r, t) and (n = r^m + tm, k = r^m, r, t), where n is the block length, k is the dimension, r is the locality, and t is the availability. These codes achieve minimum distance d = t + 1, guaranteeing efficient recovery with t disjoint repair sets, each of size r. Compared to existing constructions, the proposed codes offer significant improvements in terms of code rate R = r/(r + t), support for larger block lengths, and reduced finite field size requirements (field size q = 2). Additionally, a method is introduced to improve the minimum distance of codes with even availability t, constructing codes with parameters (n + 1, k, r, t) and d = t + 2, while preserving optimality. These properties make the proposed codes particularly suitable for distributed storage systems, where efficient and parallel repair of failed nodes is critical. Full article
(This article belongs to the Special Issue Coding Theory and the Impact of AI)
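As a rough illustration of the combinatorial core of such MOLS-based constructions (not the authors' actual code), the sketch below builds mutually orthogonal Latin squares of prime order r and checks that the induced parity groups give every information symbol t pairwise-disjoint recovery sets; the function names and the restriction to prime r are our own simplifications.

```python
def mols(r):
    """r - 1 mutually orthogonal Latin squares of prime order r:
    L_a[i][j] = (a*i + j) mod r, for a = 1, ..., r - 1."""
    return [[[(a * i + j) % r for j in range(r)] for i in range(r)]
            for a in range(1, r)]

def recovery_sets(r, t):
    """Index the k = r^2 information symbols as cells (i, j) of an
    r x r array. A symbol's t recovery sets come from t parity groups
    containing it: its row, its column, and the cells sharing its entry
    in t - 2 of the Latin squares. Adding each group's parity symbol
    gives repair sets of size r, matching locality r."""
    squares = mols(r)[:t - 2]
    sets = {}
    for i in range(r):
        for j in range(r):
            groups = [[(i, y) for y in range(r) if y != j],      # row
                      [(x, j) for x in range(r) if x != i]]      # column
            for L in squares:                                    # Latin squares
                s = L[i][j]
                groups.append([(x, y) for x in range(r) for y in range(r)
                               if L[x][y] == s and (x, y) != (i, j)])
            sets[(i, j)] = groups
    return sets

# Orthogonality guarantees the t groups of each symbol are pairwise disjoint.
r, t = 5, 4
rs = recovery_sets(r, t)
for groups in rs.values():
    assert all(len(g) == r - 1 for g in groups)
    for a in range(t):
        for b in range(a + 1, t):
            assert not set(groups[a]) & set(groups[b])
```

Disjointness follows because a row and a column meet only at the cell itself, each symbol appears once per row and column of a Latin square, and orthogonal squares agree on a symbol pair in exactly one cell.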
36 pages, 1146 KB  
Article
Authenticity and Cultural Appropriation in Saudi Fashion: Consumer Ethnocentrism and Ethical Evaluation
by Badrea Al-Oraini
World 2026, 7(4), 67; https://doi.org/10.3390/world7040067 - 15 Apr 2026
Abstract
This study examines how Saudi consumers evaluate the commodification of cultural symbols in fashion amid intensified heritage branding and symbolic market expansion. It addresses a gap in the literature on internal cultural commodification, where tensions surrounding authenticity, legitimacy, and commercialization emerge within the same cultural community rather than across clearly separate cultural groups. Drawing on a culturally grounded application of the Theory of Planned Behavior and related literature on consumer ethnocentrism and moral evaluation, the study investigates how perceived authenticity, perceived cultural appropriation, ethical sense, and consumer ethnocentrism shape attitudes toward cultural commodification and purchase intention in the Saudi fashion context. Data were collected through an Arabic-language questionnaire-based survey of Saudi consumers (N = 552) using a non-probability purposive sampling approach. The measurement model employed reflective scales adapted from prior literature and was assessed for reliability and validity. To strengthen methodological rigor, the analysis also considered common method bias diagnostics. The proposed relationships were tested using partial least squares structural equation modeling (PLS-SEM) with bootstrapping. The findings indicate that perceived authenticity is positively associated with attitudes toward cultural commodification and relates to purchase intention primarily through attitudes. Perceived cultural appropriation is negatively associated with both attitudes and purchase intention, suggesting both a direct deterrent effect and an indirect pathway via attitudes. Consumer ethnocentrism shows a negative association with purchase intention and a weaker negative association with attitudes, while its moderating role appears statistically significant but limited in magnitude. Ethical sense displays a more complex pattern, combining negative indirect effects through evaluative pathways with a positive direct association with intention, consistent with qualified rather than purely restrictive participation in symbolic consumption. The study contributes to the literature by clarifying how consumer responses to heritage-based fashion commercialization are shaped by representational, ethical, and normative evaluations in a non-Western setting. Practically, it suggests that fashion brands operating in Saudi heritage markets should manage authenticity claims, symbolic legitimacy, and appropriation risk with greater cultural and ethical sensitivity. Full article
32 pages, 35110 KB  
Article
Semi-Automated Programming of Industrial Robotic Systems Using Large Language Models and Standardized Data Model
by Daniel Syniawa, Levin Droste and Bernd Kuhlenkötter
Robotics 2026, 15(4), 79; https://doi.org/10.3390/robotics15040079 - 15 Apr 2026
Abstract
The increasing application of industrial robots in modern production systems contrasts with a persistently high programming complexity that requires specialized know-how and creates substantial entry barriers. This work addresses this problem by introducing a systematic approach to robot programming based on Large Language Models (LLMs) that automatically translates natural language task descriptions into executable robot programs. The solution follows a two-stage pipeline: in Stage 1, the LLM structures the input into coherent process steps, and in Stage 2 these process steps are transformed into C++ code using a high-level function library. The performance is evaluated in simulation for the automated electrical cabinet assembly use case with terminal blocks, which is a significant element of various production processes. The architecture, based on the Robot Operating System 2 (ROS2) and MoveIt2, further integrates a standardized AutomationML-based configuration management for dynamic parameter handling and persistent state storage. A graphical user interface visualizes intermediate results, enables manual interventions, and allows simple operation by potential users without programming experience. The evaluation of the presented approach shows a success rate of up to 95% for interpreting natural language instructions and generating code in the considered application scenario. The system reliably recognizes object attributes and correctly executes complex assembly instructions. In general, this work demonstrates how modern LLMs can bridge the semantic gap between human intent and robotic code for industrial applications. The developed high-level abstraction makes the system usable for non-programmers, highlights the potential for intuitive robot programming, and simultaneously identifies concrete technical challenges. Full article
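A two-stage pipeline of this kind can be sketched as follows. The prompts, the `llm` callable, and the `ProcessStep` structure are hypothetical placeholders of our own, not the authors' implementation; a real system would call an actual LLM client and a concrete C++ function library.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessStep:
    action: str        # e.g. "pick", "place", "snap_on_rail" (assumed names)
    target: str        # object named in the task description
    params: dict = field(default_factory=dict)

# Hypothetical prompt templates for the two stages.
STAGE1_PROMPT = ("Decompose the following assembly instruction into ordered "
                 "process steps, one 'action; target' pair per line:\n{task}")
STAGE2_PROMPT = ("Translate this process step into a call to the high-level "
                 "C++ function library:\n{step}")

def structure_task(task: str, llm) -> list[ProcessStep]:
    """Stage 1: natural language -> coherent process steps."""
    steps = []
    for line in llm(STAGE1_PROMPT.format(task=task)).splitlines():
        action, target = (part.strip() for part in line.split(";", 1))
        steps.append(ProcessStep(action, target))
    return steps

def generate_code(steps: list[ProcessStep], llm) -> str:
    """Stage 2: process steps -> C++ calls against the function library."""
    return "\n".join(llm(STAGE2_PROMPT.format(step=s)) for s in steps)
```

Passing the `llm` callable explicitly keeps the two stages testable with a stub before any model or robot is attached.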
20 pages, 10822 KB  
Article
T-CASP: Time-Aware Continual Aspect Semantic-Driven Incremental Pretraining
by Shuai Feng, Pan Su, Zaishan Qi and Liran Yang
Appl. Sci. 2026, 16(8), 3837; https://doi.org/10.3390/app16083837 - 15 Apr 2026
Abstract
With the rapid advancement of medical foundation models, their deployment in clinical practice is increasingly required. However, privacy constraints of hospital-specific data make large-scale retraining infeasible, limiting model adaptability. To address this issue, we propose a Continual Aspect Semantic-driven Incremental Pretraining (CASP) framework, which enables efficient adaptation of foundation models to private data, and the pre-trained models can be effectively applied to downstream tasks. In this paper, we focus on fundus fluorescein angiography (FFA) in ophthalmology as a representative application scenario to validate the proposed approach. FFA is a critical imaging modality for retinal disease diagnosis, as it is able to capture dynamic vascular changes across multiple angiographic phases. However, most existing learning-based methods treat FFA images statically and independently, failing to exploit the rich temporal and phase-specific semantics that are essential for accurate diagnosis. In this paper, a Time-aware Continual Aspect Semantic-driven incremental Pretraining (T-CASP) framework is proposed for FFA sequences. To compensate for limited temporal descriptions in clinical reports, large language models are first used to construct a temporal disease knowledge dictionary with phase-specific semantic descriptions. Based on this dictionary, a disease correlation matrix is injected into contrastive learning to guide fine-grained image–text alignment. A multi-layer gated residual adapter is further designed to capture phase-level semantics and enable knowledge transfer across learning stages through phase-wise continual pretraining. Extensive experiments demonstrate that T-CASP effectively models dynamic temporal semantics in FFA sequences, yielding consistent performance improvements over time-unaware and static baselines in retinal disease recognition. By explicitly integrating phase-wise temporal knowledge and continual semantic refinement, T-CASP provides a clinically consistent and effective solution for temporal FFA analysis, enhancing robustness and diagnostic accuracy in ophthalmic multimodal learning. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
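One simple reading of "a disease correlation matrix is injected into contrastive learning to guide fine-grained image–text alignment" is to soften the one-hot targets of a CLIP-style loss with row-normalized label correlations. The sketch below is our illustration of that idea under assumed shapes, not the T-CASP implementation; `correlation_weighted_clip_loss` and its arguments are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def correlation_weighted_clip_loss(img_emb, txt_emb, corr, tau=0.07):
    """CLIP-style alignment loss whose one-hot targets are softened by a
    disease correlation matrix.

    img_emb, txt_emb: (B, D) embeddings of paired images and report texts
    corr:             (B, B) nonnegative label correlation (diagonal = 1)
    """
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / tau                      # (B, B) cosine / temperature
    t_i2t = corr / corr.sum(axis=1, keepdims=True)  # soft targets per row
    t_t2i = corr.T / corr.T.sum(axis=1, keepdims=True)
    ce = lambda lg, tg: -(tg * np.log(softmax(lg) + 1e-12)).sum(axis=1).mean()
    return 0.5 * (ce(logits, t_i2t) + ce(logits.T, t_t2i))
```

With `corr` set to the identity this reduces to the standard symmetric InfoNCE objective; off-diagonal correlations let related diseases share probability mass instead of being treated as pure negatives.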
15 pages, 311 KB  
Communication
Perspectives on Artificial Intelligence in Dermatology: An International Cross-Sectional Study
by Emmanouil Karampinis, Christina-Marina Zoumpourli, Aimilios Lallas, Zoe Apalla, John Paoli, Bengü Nisa Akay, Cristian Navarette-Dechent, Behera Biswanath, Nkechi Enechuwku, Peter Chai, Jie Liu, Olga Toli, Christina Kontogianni, Dimitrios Sgouros, Alexander Katoulis, Christofer Tzermias, Paweł Pietkiewicz and Enzo Errichetti
Medicina 2026, 62(4), 759; https://doi.org/10.3390/medicina62040759 - 15 Apr 2026
Abstract
Background and Objectives: Artificial intelligence (AI) has become an integral part of dermatology in only a few years, yet perceptions of its use vary widely, reflecting diverse hopes, concerns, and perceived clinical utility. Materials and Methods: In this study, 300 dermatologists from 13 countries, representing a range of experience levels and AI usage statuses, were surveyed regarding the characteristics and applications of AI in dermatology. Results: Among respondents, 61.33% reported having used AI tools in clinical practice. Adoption of AI was observed across all age groups, countries, and experience levels. Analysis of the types of AI tools used revealed a strong reliance on general-purpose large language models (LLMs), with chatbots being the most frequently cited category, utilized by 58.15% of users. Younger clinicians demonstrated a significant preference for chatbots (p < 0.05). Country-specific patterns in AI adoption were also noted. The most highly rated expected benefit of AI in dermatology was improved diagnostic accuracy, while the primary concern centered on regulatory and ethical limitations, suggesting that the “AI revolution” in dermatology is currently constrained less by technical barriers and more by regulatory considerations. Mandatory use of consent forms when AI is employed was more frequently reported by dermatologists who had never used AI, reflecting heightened caution among non-users (p = 0.03). Additionally, 75% of respondents agreed that formal training in AI is necessary, highlighting a significant gap in traditional medical education regarding emerging technologies. Full article
(This article belongs to the Section Dermatology)
21 pages, 3068 KB  
Editorial
Artificial Intelligence in Participatory Environments: Technologies, Ethics, and Literacy Aspects
by Theodora Saridou and Charalampos A. Dimoulas
Societies 2026, 16(4), 127; https://doi.org/10.3390/soc16040127 - 15 Apr 2026
Abstract
While Artificial Intelligence (AI) approaches date back more than 60 years, there is no doubt that in the last 4 years, we have entered the era of AI. The advanced capabilities of Generative AI (GenAI) and Large Language Models (LLMs) have noticeably reshaped multiple sectors, becoming a driving force in participatory environments. Recent developments in Machine/Deep Learning (ML/DL) and Natural Language Processing (NLP) have enabled the introduction of tools and applications integrated into various professional fields. Areas ranging from education and media to art, tourism, and food science incorporate AI technologies to optimize established workflows, facilitate change, enhance creativity, and foster interaction. The current Special Issue includes nineteen multidisciplinary research works exploring AI in participatory environments, primarily focusing on technologies, ethics, and literacy aspects. Employing diverse methodologies, the research identifies various uses of AI along with the critical ethical and legal risks and challenges they entail. Concerns about inaccuracy, algorithmic bias, data infringements, and the potential erosion of transparency and interpretability need to be addressed in every phase of the design and implementation of AI technologies. Co-creative human-in-the-loop processes and human judgment need to be further strengthened and supported through digital/AI literacy initiatives. In this regard, effective regulatory frameworks, inclusive institutional strategies, and targeted training programs can ensure responsible and trustworthy AI use with a balance between technological evolution and human oversight. Full article
15 pages, 3008 KB  
Article
Leveraging LLMs for Collaborative Ontology Engineering in Parkinson Disease Monitoring and Alerting
by Georgios Bouchouras, Dimitrios Doumanas, Andreas Soularidis, Konstantinos Kotis and George Vouros
AI 2026, 7(4), 139; https://doi.org/10.3390/ai7040139 - 14 Apr 2026
Abstract
Ontology engineering plays a critical role in clinical decision support systems for Parkinson’s Disease (PD) monitoring and alerting. While Large Language Models (LLMs) have shown promise in knowledge modeling tasks, their effectiveness in autonomously constructing comprehensive ontologies for complex clinical domains remains unclear. This study investigates four ontology engineering methodologies for PD monitoring and alerting: One-shot (OS) prompting, Decomposed Sequential Prompting (DSP), X-HCOME, and SimX-HCOME+. Multiple LLMs were evaluated across these methodologies. Generated ontologies were assessed against a reference PD ontology using structural evaluation metrics focused on classes and object properties. Expert review was additionally conducted to analyze knowledge extensions beyond the gold standard. LLMs were able to autonomously generate syntactically valid and semantically meaningful ontologies using OS and DSP prompting; however, these ontologies exhibited limited conceptual coverage. Incorporating human expertise through X-HCOME significantly improved ontology completeness and evaluation metrics. Expert review further validated clinically relevant concepts absent from the reference ontology. SimX-HCOME+ demonstrated that iterative, supervised collaboration supports ontology refinement, although challenges persisted in natural language-to-rule formalization. The findings suggest that LLMs are more effective as collaborative assistants than as standalone ontology engineers in the PD domain. Structured human–LLM collaboration is associated with improved ontology coverage and facilitates the identification of potential knowledge extensions in clinical monitoring applications. While the present evaluation focuses primarily on structural ontology elements, the proposed methodologies provide useful insights for LLM-assisted ontology engineering in complex healthcare domains. Full article
25 pages, 1445 KB  
Systematic Review
Deep Learning in the Architecture, Engineering, and Construction (AEC) Industry: Methods, Challenges, and Emerging Opportunities
by Muhammad Imran Khan, Abdul Waheed, Ehsan Harirchian and Bilal Manzoor
Buildings 2026, 16(8), 1546; https://doi.org/10.3390/buildings16081546 - 14 Apr 2026
Abstract
In recent years, deep learning (DL) has emerged as a transformative technology with significant potential to advance the Architecture, Engineering, and Construction (AEC) industry. DL enables automation, intelligent decision-making, and predictive analytics across various phases of construction, including design, site monitoring, safety management, and facility operations. Despite its growing adoption, research on the comprehensive methods, practical challenges and emerging opportunities of DL in the AEC industry remains limited. This study presents a state-of-the-art review of DL applications in the AEC industry by focusing on key methods, challenges, emerging opportunities and future research directions. A systematic literature review (SLR) was conducted in this study. Three major DL methods applied in the AEC industry were examined: (i) data-driven computer vision, (ii) natural language processing (NLP), and (iii) generative and simulation-based methods. Key challenges were identified: (i) data scarcity issues, (ii) high computational requirements, (iii) limited generalization across projects, (iv) human factors and resistance to adoption, and (v) lack of standardization and interoperability. Additionally, emerging opportunities and future research directions are highlighted: (i) advanced construction site monitoring and safety management, (ii) automated design and generative modeling, (iii) predictive maintenance and facility management, (iv) integration with robotics and autonomous construction systems, and (v) smart project management and decision support systems. This study advances a holistic understanding of DL in the AEC industry by systematically synthesizing current methods, challenges, and emerging trends. It establishes a structured foundation for future research to overcome technical, practical, and organizational challenges, thereby supporting the scalable, intelligent, and sustainable transformation of construction practices. Full article
31 pages, 4371 KB  
Review
Optimization Strategies for Flexibility-Oriented Supply–Demand Matching in Industrial Park Integrated Energy Supply Systems: A Review of Modeling, Scheduling, and Flexibility Utilization
by Xueru Lin, Wei Zhong, Jing Li, Xingtao Tian, Hong Zhang and Xiaojie Lin
Energies 2026, 19(8), 1903; https://doi.org/10.3390/en19081903 - 14 Apr 2026
Abstract
The low-carbon transition of industrial parks is driving an increasing demand for advanced energy systems. Integrated energy supply systems (IESSs), which couple multiple energy forms, offer a critical pathway to alleviate the high-carbon intensity of energy structures and supply–demand imbalances in industrial parks by enhancing energy efficiency and reducing carbon emissions. The rapid advancement of energy storage technologies, multi-energy system modeling, and advanced energy management strategies has further propelled the research and application of IESSs. This review comprehensively delineates the distinctions between IESSs and traditional energy systems, highlighting the architecture and operational characteristics of IESSs to elucidate the impacts of multi-energy coupling and source–grid–load–storage interactions. We examine existing equipment and system modeling approaches and load modeling methods, and discuss modeling techniques for variable operating conditions. We analyze operational optimization methods for IESSs under deterministic, multi-time-scale, and uncertain conditions, and investigate the utilization mechanisms of flexibility resources across source–grid–load–storage links to illustrate how system flexibility supports dynamic supply–demand coordination. The review also identifies emerging trends in AI-driven IESS operation, highlighting the integration of physics-informed modeling, large language models, and multi-agent systems. This review establishes a unified analytical perspective for flexible supply–demand matching within IESSs, offering theoretical support for the development of future low-carbon industrial energy systems. Full article
21 pages, 1178 KB  
Article
Soft-Community Kernel Rényi Spectrum for Semantic Uncertainty Estimation in Large Language Models
by Zongkai Li and Junliang Du
Entropy 2026, 28(4), 442; https://doi.org/10.3390/e28040442 - 14 Apr 2026
Abstract
Uncertainty estimation is critical for deploying large language models (LLMs) in safety-sensitive and decision-critical applications. Recent approaches estimate semantic uncertainty by clustering multiple sampled responses into equivalence classes and measuring their diversity via entropy-based criteria. However, existing methods typically rely on greedy hard clustering and von Neumann entropy, which suffer from sensitivity to clustering order, noise in semantic equivalence judgments, and limited control over spectral contributions. In this work, we propose a principled information-theoretic framework for LLM semantic uncertainty estimation based on soft semantic communities and kernel Rényi entropy. Given multiple generations for a query, we construct a weighted semantic graph using pairwise semantic similarity scores and infer soft community assignments via weighted graph community detection. These soft assignments induce a positive semi-definite semantic kernel that captures the distribution of semantic modes without enforcing hard equivalence relations. Uncertainty is then quantified by the Rényi entropy of the kernel spectrum, yielding a tunable measure that interpolates between sensitivity to dominant semantic modes and long-tail semantic diversity. Compared to prior von Neumann entropy-based estimators, the proposed Rényi spectral uncertainty offers improved robustness to semantic noise, reduced dependence on clustering heuristics, and greater flexibility through its order parameter. Extensive experiments on question answering tasks demonstrate that our method provides more stable and discriminative uncertainty estimates, particularly under limited sampling budgets and noisy semantic judgments. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
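The general recipe described above (soft community assignments induce a positive semi-definite kernel whose spectrum is scored by Rényi entropy) can be sketched as follows. This is our illustration of the recipe, not the paper's estimator: the trace normalization, the zero-eigenvalue cutoff, and the α → 1 limit are assumptions of the sketch.

```python
import numpy as np

def renyi_spectral_uncertainty(soft_assign, alpha=2.0):
    """Kernel Rényi entropy of soft semantic-community assignments.

    soft_assign: (n, c) matrix whose row i gives the membership of
                 response i in each of c semantic communities
                 (rows sum to 1). K = A A^T is PSD by construction.
    alpha:       Rényi order; alpha -> 1 recovers the von Neumann
                 (Shannon) entropy of the spectrum.
    """
    A = np.asarray(soft_assign, float)
    K = A @ A.T
    K /= np.trace(K)                      # unit-trace, density-matrix-like
    lam = np.linalg.eigvalsh(K)
    lam = lam[lam > 1e-12]                # drop numerical zeros
    if abs(alpha - 1.0) < 1e-9:           # von Neumann limit
        return float(-(lam * np.log(lam)).sum())
    return float(np.log((lam ** alpha).sum()) / (1.0 - alpha))
```

When all sampled responses fall into one semantic community the entropy is 0; responses spread evenly over c communities give the maximal value log c, and the order α tunes how strongly dominant modes outweigh long-tail diversity.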
19 pages, 630 KB  
Article
Extending the CASO-N24 to Late Adolescence: Psychometric Properties and Measurement Equivalence in a Peruvian School Sample
by Haydee Mercedes Aguilar-Armas, Velia Graciela Vera-Calmet, Marco Agustín Arbulú Ballesteros, Lucy Angélica Yglesias-Alva, Hugo Martin Noé Grijalva and Milagros del Carmen Quispe Villarreal
Healthcare 2026, 14(8), 1029; https://doi.org/10.3390/healthcare14081029 - 14 Apr 2026
Abstract
Background: Social anxiety in adolescence is a prevalent mental health concern characterized by intense fear of negative evaluation in social situations. The Social Anxiety Questionnaire for Adolescents (CASO-N24) is a Spanish-language instrument requiring validation in Peruvian populations. Objective: This study aimed to validate the CASO-N24 in Peruvian adolescents aged 12–17 years, extending its application beyond the original 9–15-year range, and examine its psychometric properties, including factorial structure, measurement invariance, nomological validity, and internal consistency. Methods: A stratified probability sample of 710 adolescents (352 males, 358 females; M = 14.82 years, SD = 1.45) from four northern Peruvian educational centers completed the CASO-N24 and ASQ-14. Exploratory and confirmatory factor analyses, multigroup invariance testing by age and gender, nomological validity assessment, and reliability estimation (Cronbach’s α and McDonald’s ω) were conducted using polychoric correlations and robust estimation methods. Results: The six-factor structure was replicated, explaining 47.13% of variance with factor loadings ranging from 0.48 to 0.78. Model fit indices were excellent (GFI = 0.981, AGFI = 0.976, NFI = 0.971, SRMR = 0.046). Complete measurement invariance was achieved across age groups (12–15 vs. 16–17 years). Partial invariance by gender was observed, with differential item functioning identified in item 17. Nomological validity was confirmed through moderate-to-high correlations with ASQ-14 (males: r = 0.622; females: r = 0.604). Internal consistency was adequate (total scale ω = 0.95; subscales ω = 0.69–0.82). Conclusions: The CASO-N24 demonstrated robust psychometric properties for assessing social anxiety in Peruvian adolescents aged 12–17 years, supporting its multidimensional structure and utility for early detection in school settings while highlighting gender-specific response patterns warranting clinical consideration. Full article
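For context, the internal-consistency coefficient reported above follows a standard formula computable directly from an item-score matrix. The sketch below computes Cronbach's α on simulated 24-item Likert data (the sample size matches the study's N = 710, but the data are fabricated for illustration only, and the study's ω and polychoric-based estimates use different machinery).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    X = np.asarray(items, float)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

# Simulated 5-point Likert responses: one shared latent factor plus item noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=(710, 1))
items = np.clip(np.rint(3 + latent + 0.8 * rng.normal(size=(710, 24))), 1, 5)
```

Because every simulated item loads on the same latent factor, the resulting α is high; uncorrelated items would drive it toward zero.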

18 pages, 469 KB  
Review
Generative Artificial Intelligence Transitions Pharmaceutical Development from Empirical Screening to Predictive Molecular Design and Clinical Trial Optimization
by Ghaith K. Mansour and Hatouf H. Sukkarieh
Pharmaceuticals 2026, 19(4), 614; https://doi.org/10.3390/ph19040614 - 13 Apr 2026
Abstract
The traditional paradigm of pharmaceutical research is characterized by substantial inefficiency, requiring extensive timelines and billions of dollars while suffering from high clinical attrition rates. The integration of generative artificial intelligence (AI) is driving a paradigm shift from empirical experimentation toward predictive, data-driven innovation. This review evaluates state-of-the-art applications of these technologies across the drug discovery and development pipeline. By analyzing multi-omics data streams, AI models can elucidate complex disease mechanisms and identify novel therapeutic targets. Deep generative architectures facilitate the algorithmic creation of novel molecular entities, enabling the design of therapeutics with complex polypharmacological profiles. Furthermore, AI is enhancing the clinical testing phase through large language models (LLMs) that improve patient enrollment and through synthetic control arms (SCAs) that provide computational alternatives to traditional placebo groups. Despite these advances, the scientific community must address inherent algorithmic biases stemming from demographic underrepresentation and mitigate the risks of data hallucinations. Ultimately, realizing the full translational potential of generative AI in precision medicine may require the widespread adoption of explainable AI (XAI) frameworks and rigorous data standards. Full article
(This article belongs to the Section AI in Drug Development)

22 pages, 496 KB  
Systematic Review
Joint Modeling of Longitudinal and Survival Data in Public Health and Biomedical Research: A Systematic Review
by Weize Wang, Zoran Bursac and Nan Hu
Int. J. Environ. Res. Public Health 2026, 23(4), 492; https://doi.org/10.3390/ijerph23040492 - 13 Apr 2026
Abstract
We conducted a PRISMA-guided systematic review to summarize recent methodological advances in joint modeling. A PubMed search for English-language, peer-reviewed, full-text available articles published between 1 January 2019 and 30 January 2025 was conducted using the keywords “joint model”, “joint modeling”, “longitudinal and survival”, “longitudinal and time-to-event”, and “public health”, resulting in 70 methodological studies from 793 records after screening. Original studies proposing methodological innovations in joint modeling were eligible, while clinical applications, reviews, comparative or predictive studies, and articles without full text were excluded. The reviewed methods introduced advances in the longitudinal and/or survival sub-models, including generalized linear mixed models, functional or latent class approaches, and flexible survival models, such as frailty, accelerated failure time, B-spline, and competing risks models. In total, 49% of the studies focused on longitudinal sub-model adaptations. This review is subject to limitations, including potential omission of relevant studies due to database, search term, and language restrictions. These developments have enhanced the flexibility of joint models for analyzing complex data structures, particularly in cardiovascular and oncology research, as well as broader public health applications. Despite these advances, challenges remain, including handling high-dimensional sparse data, reducing computational burden, and the lack of standardized evaluation metrics. This research received no external funding. Full article
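The shared random-effects formulation that underlies much of this literature links the two sub-models through the subject-specific longitudinal trajectory. A standard textbook form (not taken from any single reviewed study) is:

```latex
% Longitudinal sub-model: observed marker y_i(t) equals the true
% subject-specific trajectory m_i(t) plus measurement error.
y_i(t) = m_i(t) + \varepsilon_i(t), \qquad
m_i(t) = \mathbf{x}_i^\top(t)\,\boldsymbol{\beta}
       + \mathbf{z}_i^\top(t)\,\mathbf{b}_i,
\quad \mathbf{b}_i \sim N(\mathbf{0}, \mathbf{D}),\;
\varepsilon_i(t) \sim N(0, \sigma^2)

% Survival sub-model: a proportional-hazards model in which the hazard
% depends on the current true marker value m_i(t) via association alpha.
h_i(t) = h_0(t)\,\exp\!\left\{\boldsymbol{\gamma}^\top \mathbf{w}_i
       + \alpha\, m_i(t)\right\}
```

The methodological variations surveyed above replace one or both pieces: e.g., a latent-class or functional longitudinal sub-model in place of the linear mixed model, or a frailty, accelerated failure time, B-spline, or competing-risks specification in place of the proportional-hazards sub-model.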
(This article belongs to the Special Issue Advances in Biostatistics for Cardiovascular and Cancer Research)

8 pages, 250 KB  
Perspective
From Levin’s Universal Search to Policy-Guided Tree Search
by Ming Li
Entropy 2026, 28(4), 434; https://doi.org/10.3390/e28040434 - 13 Apr 2026
Abstract
Levin’s universal search embodies a striking principle: when a solution is efficiently verifiable, one can allocate search effort across candidate procedures according to a prior and obtain performance competitive, up to a constant factor, with the best procedure in the reference class, a result first published in (Levin, 1972). The accompanying video and Levin’s paper (Levin 2023) recount the history of this seminal result from a first-person perspective. Andrey Kolmogorov passed away in 1987; his last student, Leonid Levin, systematically developed the theory of Kolmogorov complexity, including universal search. This perspective revisits the conceptual core of Levin-style universal search and traces its relationship to Solomonoff induction, a universal Bayesian framework for prediction that mixes over all computable hypotheses. Like Solomonoff induction, which serves as a spiritual foundation of large language models (LLMs), Levin’s universal search has found important applications in AI. In this paper, we follow one thread of this research: deterministic planning and reinforcement learning via policy-guided tree search. Full article
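The allocation principle described in this abstract — give each candidate procedure a time budget proportional to its prior weight, and keep doubling the total budget until an efficient verifier accepts some output — can be sketched in a few lines. The procedure names, prior weights, and toy verification task below are invented for illustration; real Levin search enumerates all programs under a universal prior.

```python
def levin_search(procedures, prior, verify, max_phase=20):
    """Toy Levin-style search.
    procedures: dict name -> zero-arg factory returning an iterator of candidates.
    prior: dict name -> weight (effort is allocated proportionally to it).
    verify: efficiently checkable predicate on candidate solutions."""
    gens = {name: factory() for name, factory in procedures.items()}
    for phase in range(1, max_phase + 1):
        for name, gen in gens.items():
            budget = int(prior[name] * 2 ** phase)  # effort ∝ prior, doubled each phase
            for _ in range(budget):
                try:
                    candidate = next(gen)  # resume where this procedure left off
                except StopIteration:
                    break
                if verify(candidate):
                    return candidate
    return None

# Toy problem: find an integer whose square is 144; verification is O(1).
procs = {
    "count_up": lambda: iter(range(10**6)),
    "count_down": lambda: iter(range(10**6, -1, -1)),
}
prior = {"count_up": 0.5, "count_down": 0.5}
print(levin_search(procs, prior, lambda x: x * x == 144))  # → 12
```

Because each generator resumes across phases, total work spent on procedure p after phase k is about prior(p) · 2^(k+1), which is the source of the "competitive up to a constant factor" guarantee: the overhead relative to running only the best procedure is bounded by a constant depending on that procedure's prior weight.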