Search Results (10,064)

Search Parameters:
Keywords = language modelling

27 pages, 1280 KB  
Article
Enhancing Causal Text Detection Using Uncertainty-Weighted Machine Learning Ensembles
by Sivachandra K B, Neethu Mohan, Mithun Kumar Kar, Sikha O K and Sachin Kumar S
Informatics 2026, 13(3), 37; https://doi.org/10.3390/informatics13030037 (registering DOI) - 2 Mar 2026
Abstract
Causal inference in text data has been a demanding objective in natural language processing, mainly due to the intrinsic ambiguity and context sensitivity of text data, which induce uncertainty. Diminishing this uncertainty is essential for identifying reliable causal connections and advancing predictive consistency. In this research, we introduce an uncertainty-aware ensemble architecture that combines multiple text embedding schemes with both linear and nonlinear classifiers to boost causal text detection. Both sparse and neural-level embeddings were employed and then combined with an ensemble weighting approach based on two uncertainty estimation techniques, namely entropy-based and KL divergence-based weighting. Unlike conventional ensemble methods with uniform or fixed voting strategies, our approach assigns weights inversely proportional to classifier uncertainty, ensuring that confident models exert greater influence on the final decision. Our results show that TF-IDF, through its effective word frequency weighting scheme, consistently outperforms other embedding techniques, achieving better performance across both linear and nonlinear classifiers on both datasets (News Corpus and CausalLM–Adjective group). The experimental results show that our uncertainty-aware ensemble approach improves both calibration and prediction confidence. Entropy-based weighting improves confidence for linear classifiers, with accuracy, F1-score, entropy, and prediction confidence values of 94.3%, 94.0%, 0.382, and 0.774, respectively, while for nonlinear classifiers KL divergence-based weighting achieves better performance, with an accuracy of 97.6%, an F1-score of 97.2%, a mean KL value of around 0.055, and a LogLoss of 0.221.
(This article belongs to the Section Machine Learning)
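The inverse-uncertainty weighting this abstract describes can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; `uncertainty_weighted_vote` and its inputs are invented names:

```python
import math

def entropy(probs):
    """Shannon entropy of one classifier's predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def uncertainty_weighted_vote(classifier_probs, eps=1e-9):
    """Fuse per-classifier class probabilities, weighting each classifier
    inversely to the entropy of its own prediction."""
    weights = [1.0 / (entropy(p) + eps) for p in classifier_probs]
    total = sum(weights)
    weights = [w / total for w in weights]
    n_classes = len(classifier_probs[0])
    fused = [sum(w * p[c] for w, p in zip(weights, classifier_probs))
             for c in range(n_classes)]
    return fused.index(max(fused)), fused

# The confident (low-entropy) classifier dominates the uncertain one.
label, fused = uncertainty_weighted_vote([[0.95, 0.05], [0.45, 0.55]])
```

KL divergence-based weighting would follow the same pattern, replacing `entropy` with a divergence of each prediction from a reference distribution.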

27 pages, 2619 KB  
Article
Defamiliarization Attack: Literary Theory Enabled Discussion of LLM Safety
by Bibin Babu, Iana Agafonova, Sebastian Biedermann and Ivan Yamshchikov
Electronics 2026, 15(5), 1047; https://doi.org/10.3390/electronics15051047 - 2 Mar 2026
Abstract
This paper introduces a multi-turn large language model (LLM) jailbreaking attack called Defamiliarization, in which malicious queries are embedded within ostensibly harmless narratives. By reframing requests in “unmarked” contexts, LLMs can be coerced into producing undesirable outputs. A range of scenarios is documented, from planning ethically dubious actions to selectively overlooking critical events in literary texts, thereby exposing the limitations of alignment strategies predicated on detecting trigger words or semantic cues. Rather than substituting vocabulary, defamiliarization manipulates context and presentation, highlighting vulnerabilities that cannot be addressed by token-level fixes alone. Beyond demonstrating the effectiveness of defamiliarization as an attack strategy, evidence is presented of a systematic relationship between model scale and susceptibility. Experiments reveal that smaller-parameter models are significantly easier to manipulate using defamiliarized prompts. This finding raises important concerns regarding the growing popularity of lightweight, locally hosted LLMs, which are favored for their lower computational requirements but may lack alignment safeguards. A more holistic approach to LLM safety is advocated, one that incorporates insights from literary theory, ethics, and user experience, treating these models as interpretive agents. By doing so, defenses against covert manipulations can be strengthened and AI systems can remain aligned with human values.

26 pages, 3226 KB  
Article
Assessing Street-Level Emotional Perception in Urban Regeneration Contexts Using Domain-Adapted CLIP
by Liyang Chu and Keting Zhou
Buildings 2026, 16(5), 980; https://doi.org/10.3390/buildings16050980 (registering DOI) - 2 Mar 2026
Abstract
As urban regeneration goals shift from physical improvement to pedestrian-level experience and emotional perception, existing assessment methods struggle to describe the emotional responses associated with renewed street environments. This paper proposes a framework for street-level emotional perception inference and analysis within the context of urban regeneration, enabling automatic semantic recognition based on Street View Images (SVIs) and a Vision-Language Model (VLM). The paper constructs a six-dimensional emotion perception framework encompassing Comfort, Vitality, Safety, Oppressiveness, Nostalgia, and Alienation and uses a lightweight domain-adapted Contrastive Language-Image Pre-training (CLIP) model to infer emotional perceptions from SVIs. Building upon this, a dual-axis evaluation framework is introduced to structure and interpret basic spatial experience and regeneration-related perception. Using the Yuyuan Road and Wuding Road areas in Shanghai as a case study, the paper combines emotional perception results with street-level spatial analysis, proposing a scalable and interpretable analytical method for diagnosing urban regeneration outcomes and supporting emotion-informed spatial interventions.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
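Zero-shot scoring with a CLIP-style model reduces to comparing an image embedding against one text-prompt embedding per emotion dimension and applying a softmax. The sketch below uses stand-in vectors in place of real CLIP embeddings; `emotion_scores` and the prompt set are assumptions, not the paper's code:

```python
import math

# The paper's six perceptual dimensions.
EMOTIONS = ["Comfort", "Vitality", "Safety", "Oppressiveness", "Nostalgia", "Alienation"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def emotion_scores(image_emb, prompt_embs, temperature=0.07):
    """Softmax over image-prompt cosine similarities, as in CLIP zero-shot
    classification; one prompt embedding per emotion dimension."""
    sims = [cosine(image_emb, e) / temperature for e in prompt_embs]
    m = max(sims)
    exps = [math.exp(s - m) for s in sims]
    z = sum(exps)
    return {emo: v / z for emo, v in zip(EMOTIONS, exps)}
```

In a real pipeline the image embedding would come from encoding an SVI and each prompt embedding from encoding a text prompt such as "a comfortable street".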

18 pages, 1182 KB  
Article
Co-MedGraphRAG: A Collaborative Large–Small Model Medical Question-Answering Framework Enhanced by Knowledge Graph Reasoning
by Sizhe Chen and Tao Chen
Information 2026, 17(3), 247; https://doi.org/10.3390/info17030247 - 2 Mar 2026
Abstract
Large language models (LLMs) have demonstrated significant capabilities in natural language processing (NLP), but they often encounter challenges in the medical domain. This can result in insufficient alignment between generated answers and user intent, as well as factual deviations. To address these issues, we propose Co-MedGraphRAG, a novel framework combining knowledge graph reasoning with large–small model collaboration, aimed at improving the structural grounding and interpretability of medical responses. The framework operates through a multi-stage collaborative mechanism to augment question answering. First, a large language model constructs a question-specific knowledge graph (KG) containing pending entities (denoted as “none”) to explicitly define known and unknown variables. Subsequently, a hybrid reasoning strategy is employed to populate the pending entities, thereby completing the question-specific knowledge graph. Finally, this graph serves as critical structured evidence, combined with the original question, to augment the large language model in generating the final answer, implemented using Qwen2.5-7B and GLM4-9B in this paper. To evaluate the generated answers, we introduce a larger-parameter LLM (GPT-4o) to assess performance across five dimensions and compute an overall score. Experiments on three medical datasets demonstrate that Co-MedGraphRAG achieves consistent improvements in relevance, practicality, and structured knowledge support compared with mainstream Retrieval-Augmented Generation (RAG) frameworks. This work serves as a reference for researchers and developers designing medical question-answering frameworks and exploring decision-support applications.
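The "pending entity" idea can be pictured with a toy triple store: the LLM emits triples whose unknown objects are marked "none", and a reasoning step fills them from available knowledge. A hypothetical sketch (the `complete_kg` helper and the dictionary lookup are invented; the paper's hybrid reasoning is far richer):

```python
def complete_kg(triples, kb):
    """Fill pending entities (object == "none") in a question-specific KG
    by looking up (subject, relation) in a knowledge source; triples the
    source cannot resolve stay pending for a fallback reasoner."""
    completed, pending = [], []
    for subj, rel, obj in triples:
        if obj == "none":
            filled = kb.get((subj, rel))
            if filled is None:
                pending.append((subj, rel, "none"))
                continue
            obj = filled
        completed.append((subj, rel, obj))
    return completed, pending
```

The completed graph would then be serialized alongside the question as structured evidence for the answer-generating LLM.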

24 pages, 1346 KB  
Systematic Review
Artificial Intelligence in Cadastre: A Systematic Review of Methods, Applications, and Trends
by Jingshu Chen, Majid Nazeer, Bo Sum Lee and Man Sing Wong
Land 2026, 15(3), 411; https://doi.org/10.3390/land15030411 - 2 Mar 2026
Abstract
Surveying and register administration are core to land administration, and accordingly, land surveying and registration are essential to socio-economic development, making their accuracy and efficiency critical. Until now, customary land surveying and registration have relied on human input, a situation that undermines efficiency and is prone to errors in data handling. During the last decade, the exponential growth of artificial intelligence (AI), in particular geospatial artificial intelligence (GeoAI), has provided new methodologies that can overcome these deficiencies. This review examines AI in cadastral management by analyzing technical solutions and trends across three areas: data collection, modeling, and common applications. It aims to provide a comprehensive survey of the current use of AI in cadastral management and to define future research avenues. Based on a comprehensive review of the literature, this study reaches the following three conclusions. (1) Automated extraction of parcel boundaries has been achieved through deep learning in data collection and processing, removing the bottlenecks of manual interpretation. Models such as convolutional neural networks (CNNs) and Transformers have been used for pixel-level semantic segmentation of high-resolution remote sensing images, leading to significant improvements in efficiency and accuracy. (2) Non-spatial data have been processed with natural language processing techniques to automatically extract information and construct relationships, thus overcoming the limitations of paper-based archives and traditional relational databases. (3) Deep learning models have been applied to automatically detect parcel changes and to enable integrated analysis of spatial and non-spatial data, which has supported the transition of cadastral management from two-dimensional to three-dimensional. However, several challenges remain, including differences in multi-temporal data processing, spatial semantic ambiguity, and the lack of large-scale, high-quality annotated data. Future research can focus on improving model generalization, advancing cross-modal data fusion, and providing recommendations for the development of a reliable and practical intelligent cadastral system.

20 pages, 1959 KB  
Article
The Development of a Large Language Model-Powered Chatbot to Advance Fairness in Machine Learning
by Pedro Henrique Ribeiro Santiago, Xiangqun Ju, Xavier Vasquez, Heidi Shen, Lisa Jamieson and Hawazin W. Elani
AI 2026, 7(3), 90; https://doi.org/10.3390/ai7030090 (registering DOI) - 2 Mar 2026
Abstract
Background: Machine learning (ML) has been widely adopted in decision-making, making fairness a central ethical and scientific priority. We developed the Themis chatbot, a Large Language Model (LLM) system designed to explain concepts of ML fairness in an accessible, conversational format. Methods: The development followed four stages: (1) curating a document corpus of 286 peer-reviewed publications on ML fairness; (2) developing Themis by combining a modern LLM (OpenAI’s GPT-4o) with Retrieval-Augmented Generation (RAG); (3) creating a 340-item benchmark dataset, the FairnessQA; and (4) evaluating performance against state-of-the-art non-augmented LLMs (DeepSeek R1, GPT-4o, GPT-5, and Grok 3). Results: For the multiple-choice questions, Themis achieved an accuracy of 96.7%, outperforming DeepSeek R1 (90.0%), GPT-4o (89.3%), GPT-5 (92.0%), and Grok 3 (86.7%), and the overall difference was statistically significant (χ2(4) = 10.1, p = 0.038). In the closed-ended questions, Themis achieved the highest accuracy (96.7%), while competing models ranged from 78.0% to 84.0%, and the overall difference was significant (χ2(4) = 23.9, p < 0.001). In the open-ended questions, Themis achieved the highest mean scores for correctness (M = 4.62), completeness (M = 4.59), and usefulness (M = 4.56), and the differences were statistically significant (correctness: F(4, 195) = 20.91, p < 0.001; completeness: F(4, 195) = 7.76, p < 0.001; usefulness: F(4, 195) = 2.90, p < 0.001). By consolidating scattered research into an interactive assistant, Themis makes fairness concepts more accessible to educators, researchers, and policymakers. This work demonstrates that retrieval-augmented systems can enhance the public understanding of machine learning fairness at scale.
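A retrieval-augmented pipeline like the one behind Themis pairs a retriever over the curated corpus with a prompt builder for the LLM. The sketch below substitutes token overlap for the embedding similarity a production RAG system would use; all function names are invented:

```python
def retrieve(query, corpus, k=2):
    """Rank corpus passages by token overlap with the query (a crude
    stand-in for embedding similarity) and return the top-k passages."""
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, corpus, k=2):
    """Assemble a grounded prompt: retrieved context first, question last."""
    context = "\n".join(retrieve(query, corpus, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The retrieved passages ground the model's answer in the document corpus, which is what lets a RAG system outperform the same base LLM without augmentation.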
42 pages, 2052 KB  
Article
GEMS: Gas-Enhanced Marine Search for Optimizing Fusion Mamba-Attention Networks for Fake Review Classification
by Sharon Roji Priya C., Deepalakshmi Perumalsamy and Rajermani Thinakaran
Future Internet 2026, 18(3), 132; https://doi.org/10.3390/fi18030132 - 2 Mar 2026
Abstract
The rise of fake reviews has become a major problem for trust in e-commerce sites. Traditional machine learning solutions fail to capture the nuanced language that separates real reviews from fake ones. In this work, we introduce GEMS (Gas-Enhanced Marine Search), a new hybrid metaheuristic algorithm that optimizes the Fusion Mamba-Attention Network (FMA-Net) for fake review detection. GEMS combines the exploration capabilities of the Enhanced Marine Predators Algorithm with the exploitation process of Henry Gas Solubility Optimization, yielding a dual-phase optimization design for FMA-Net's high-dimensional, asymmetric hyperparameter space. The GEMS-optimized FMA-Net achieves an accuracy of 96.8%, an F1-score of 95.4%, and an AUC-ROC of 97.2%, a 3–7% improvement over the current best models for fake review detection on the Yelp, Amazon, and Google Reviews datasets. GEMS also lowers the average cost of hyperparameter optimization for FMA-Net, achieving a 68% reduction in overall grid-search time and 42% lower complexity compared with genetic algorithms. The contributions of this work are a first hybrid metaheuristic of this kind for transformer-style networks, a mathematically formulated GEMS algorithm, and an extensive empirical study across multiple evaluation metrics.
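The dual-phase exploration/exploitation design that GEMS embodies can be illustrated on a toy one-dimensional objective. This is a generic two-phase random search under invented names, not the GEMS update equations:

```python
import random

def dual_phase_search(objective, bounds, n_explore=200, n_exploit=200, seed=0):
    """Two-phase minimizer: wide random exploration of the search space,
    then local exploitation around the incumbent with a shrinking step."""
    rng = random.Random(seed)
    lo, hi = bounds
    best_x = rng.uniform(lo, hi)
    best_f = objective(best_x)
    for _ in range(n_explore):          # exploration phase (global)
        x = rng.uniform(lo, hi)
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    step = (hi - lo) * 0.1
    for _ in range(n_exploit):          # exploitation phase (local)
        x = min(hi, max(lo, best_x + rng.gauss(0, step)))
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
        step *= 0.99                    # gradually narrow the search
    return best_x, best_f

# Toy use: minimize a quadratic with minimum at x = 3.21.
x, f = dual_phase_search(lambda x: (x - 3.21) ** 2, (-10, 10))
```

In hyperparameter optimization the scalar `x` would be replaced by a vector of network hyperparameters and `objective` by validation loss.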
18 pages, 5140 KB  
Article
BERT-Based Schema Matching for Integrating Heterogeneous Flood Data: A Case Study in Korea
by Taeyoung Choe, Mincheol Shin, Kwangyoung Kim, Myungseok Yang, Ka Lok Man and Mucheol Kim
Systems 2026, 14(3), 267; https://doi.org/10.3390/systems14030267 - 2 Mar 2026
Abstract
Integrating flood-response datasets across municipalities is often hindered by heterogeneous and non-standard variable names, a challenge amplified in Korean by local naming conventions and linguistic variation. This study addresses scalable schema alignment to standardize municipal flood datasets with reduced manual effort while maintaining semantic consistency for downstream modeling. We propose a BERT-based schema matching framework that augments standardized attribute names with paraphrases generated by a generative language model and filtered to reduce semantic drift. Both standardized and target variable names are encoded using a flood-domain-adapted Korean BERT model, and candidate correspondences are retrieved via cosine-similarity ranking to produce top-k match suggestions for automated or human-in-the-loop alignment. Experiments on real flood-related tables from Busan and Incheon, evaluated jointly to diversify variable expressions, show that augmentation substantially improves top-k retrieval accuracy. In the combined evaluation, Hit@5 improves from 0.71 to 0.95, supporting more reliable schema harmonization for simulation-ready inputs.
(This article belongs to the Section Supply Chain Management)
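The core retrieval step, ranking standardized attribute names by cosine similarity and scoring Hit@k, can be sketched directly. The embeddings here are stand-in vectors rather than outputs of the domain-adapted Korean BERT model, and the helper names are invented:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def top_k_matches(target_emb, standard_embs, k=5):
    """Rank standardized attribute names by cosine similarity to a target
    variable's embedding and return the top-k candidate names."""
    ranked = sorted(standard_embs,
                    key=lambda name: cosine(target_emb, standard_embs[name]),
                    reverse=True)
    return ranked[:k]

def hit_at_k(suggestions, gold):
    """1.0 if the gold standardized name appears in the suggestions."""
    return 1.0 if gold in suggestions else 0.0
```

Averaging `hit_at_k` over all target variables gives the Hit@5 figure the abstract reports.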

19 pages, 1473 KB  
Article
AI-Assisted Analysis of Future-Oriented Discourses: Institutional Narratives and Public Reactions on Social Media
by Galina V. Gradoselskaya, Inga V. Zheltikova, Maria Pilgun, Alexey N. Raskhodchikov and Andrey N. Yazykayev
Journal. Media 2026, 7(1), 49; https://doi.org/10.3390/journalmedia7010049 (registering DOI) - 2 Mar 2026
Abstract
This study explores how digital media ecosystems shape collective visions of the future under conditions of rapid technological innovation and the growing influence of artificial intelligence (AI). Drawing on a large corpus of social media content comprising 50,036,592 tokens, the research examines institutional narratives and user-generated responses through a hybrid methodological framework. This framework combines information-wave detection, network analysis, semantic and associative modeling (TextAnalyst 2.32), and interpretation supported by a large language model (GPT-5). The methodological contribution of the study lies in the integration of network-based and semantic algorithms with AI-driven analytical tools for the examination of large-scale textual data. The findings indicate that media discourses about the future operate as key mechanisms through which societies interpret the environmental, social, and economic consequences of technological change. Institutional actors promote multiple future-oriented models that often conflict with one another at both discursive and practical levels. In contrast, user-generated content reflects widespread fear, skepticism, and distrust. Prominent themes include nostalgia for the past, anxiety about socio-economic and environmental consequences, and concerns related to expanding forms of digital control. The analysis also reveals divergent perspectives on urban development. Positive narratives emphasize ecological balance, a comfortable urban environment, thoughtfully designed mixed-use development, and solutions to transportation challenges. Negative narratives, by contrast, focus on over-densification, environmental degradation, and the erosion of privacy in technologically saturated urban spaces.

11 pages, 1165 KB  
Perspective
Artificial Intelligence at the Intersection of Chemistry and Materials Science
by Tomas Gregan and Juraj Gregan
AI 2026, 7(3), 89; https://doi.org/10.3390/ai7030089 (registering DOI) - 2 Mar 2026
Abstract
Research on metal–organic frameworks (MOFs) bridges the fields of chemistry and materials science. MOFs consist of metal ions linked together by long organic molecules. These materials are known for their high porosity and large surface area, with numerous applications ranging from storage of various gases to medical uses. Recent developments show that artificial intelligence (AI) is revolutionizing the discovery and design of MOFs. Despite these advancements in AI-driven approaches in MOFs, many challenges remain in processes such as data quality assurance and experimental validation. In this perspective, we highlight recent progress in MOFs and discuss the role of AI in this truly interdisciplinary field.

25 pages, 1239 KB  
Article
Human–AI Collaboration in Programming Education: Student Perspectives on LLM-Based Coding Assistants
by Hebah Alquran and Shadi Banitaan
Computers 2026, 15(3), 154; https://doi.org/10.3390/computers15030154 - 2 Mar 2026
Abstract
The integration of large language models (LLMs) such as GitHub Copilot, ChatGPT, and DeepSeek into programming education has introduced a new form of human–AI collaboration. These tools provide real-time code suggestions, debugging assistance, and design support, yet their effects on learning, trust, productivity, and coding practices remain underexplored. We surveyed 248 students to examine relationships among these constructs, usage patterns by programming experience and academic level, the most frequently used assistants and programming languages, group differences in perceived learning and coding practices, and the extent to which learning, trust, and coding practices predict productivity. Students reported high adoption of ChatGPT and Python, generally positive perceptions of learning and productivity, and significant positive correlations among all constructs. Kruskal–Wallis tests indicated no significant differences in perceived learning across Basic, Intermediate, and Expert programmers, nor in coding practices across academic years (Years 1–4). Multiple regression showed that learning, trust, and coding practices jointly explained a substantial proportion of productivity variance (R2 = 0.628). These findings emphasize both opportunities and risks of AI integration and offer guidance for educators aiming to integrate AI tools while maintaining pedagogical rigor.

23 pages, 6390 KB  
Article
A Modular Framework for Automated Hypothesis Validation and Refinement in Scientific Research
by Chenhao Chen, Taiga Masuda, Tsubasa Hirakawa, Takayoshi Yamashita and Hironobu Fujiyoshi
Information 2026, 17(3), 244; https://doi.org/10.3390/info17030244 - 2 Mar 2026
Abstract
Scientific research typically follows an iterative cycle where hypotheses are proposed, validated against experimental conclusions, and refined accordingly. While recent advances in large language models (LLMs) have enabled significant progress in automating individual stages of this process, existing systems are typically developed as standalone solutions, making it difficult to coordinate multiple research activities within a coherent research workflow. In this study, we present a modular framework for automated hypothesis validation and refinement in scientific research. Rather than introducing new task-specific models, the framework integrates established techniques, including natural language inference (NLI)-based hypothesis validation, attribution-guided hypothesis refinement, and retrieval-augmented generation (RAG)-based external evidence retrieval, into a unified and controllable workflow. We evaluate the proposed framework on scientific texts in the chemistry domain to assess its applicability in practical scientific research scenarios. Extensive experiments demonstrate the effectiveness of the proposed framework and suggest that it produces reliable intermediate signals that enhance transparency and traceability throughout hypothesis validation and refinement. Our work offers a modular solution for deploying LLM-based systems in scientific research workflows.
(This article belongs to the Section Information Theory and Methodology)
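The validation stage of such a pipeline can be pictured as a thresholded entailment check that routes each hypothesis either to acceptance or to the refinement module. The sketch replaces the NLI model with a crude lexical-overlap stand-in; every name here is invented:

```python
def lexical_nli(hypothesis, conclusion):
    """Stand-in entailment score: fraction of hypothesis tokens found in
    the experimental conclusion. A real pipeline would call an NLI model."""
    h = set(hypothesis.lower().split())
    c = set(conclusion.lower().split())
    return len(h & c) / len(h)

def validate_and_flag(hypothesis, conclusion, threshold=0.5):
    """Validation stage: accept the hypothesis if the entailment score
    clears the threshold, otherwise flag it for the refinement module."""
    score = lexical_nli(hypothesis, conclusion)
    status = "supported" if score >= threshold else "needs_refinement"
    return {"score": score, "status": status}
```

Keeping the score as an explicit intermediate signal is what gives a modular workflow the transparency and traceability the abstract emphasizes.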

18 pages, 4834 KB  
Article
Syntax–Semantics–Numeracy Fusion for Improving Math Word Problem Representation and Solving
by Zihan Feng, Hao Ming and Xinguo Yu
Symmetry 2026, 18(3), 434; https://doi.org/10.3390/sym18030434 - 2 Mar 2026
Abstract
Most pre-trained language representation models are designed to encode contextualized semantic information for general language processing tasks. However, they are insufficient for math word problem (MWP) solving, which requires not only linguistic syntax and semantic understanding but also numerical reasoning. In this work, we introduce SSN4Solver, a deep neural solver that improves MWP-solving performance by symmetrically fusing syntax, semantics, and numeracy representations within its contextual encoder. Our approach jointly captures syntactic structures from dependency trees, semantic features from part-of-speech tags, and the attributes and relations of numerical entities. By treating these heterogeneous information sources in a balanced and aligned manner, SSN4Solver constructs a rich, multi-faceted representation for MWP solving without introducing substantial computational overhead, empowering human–computer interaction (HCI) applications such as adaptive educational interfaces and intelligent tutoring systems. Extensive experiments demonstrate that SSN4Solver outperforms existing baseline models. In addition, a visualization scheme is designed to elucidate how the three types of representations contribute to the solving process. SSN4Solver thus offers a scalable solution, contributing to the development of HCI systems that are both intelligent and mathematically effective.
(This article belongs to the Special Issue Symmetry and Asymmetry in Human-Computer Interaction)

33 pages, 900 KB  
Article
Limits of Computational Selection and Their Implications for Human–AI Divergence in Convergent Creativity
by Sungwook Jung and Ken Nah
Information 2026, 17(3), 243; https://doi.org/10.3390/info17030243 - 2 Mar 2026
Abstract
This study investigated whether humans and generative Large Language Models (LLMs) exhibit similar performance in divergent ideation but diverge in convergent selection. To address a critical oversight in current AI creativity research, which predominantly focuses on generative output, this study introduces an original conceptual framework, ‘Selection Alignment’, together with a novel dual-phase experimental protocol. This research transcends traditional generation-centric evaluations to establish a new paradigm for assessing the evaluative stage of creativity. A controlled experiment involved 240 design professionals (120 idea generators, 120 independent selectors) and two LLM agents (GPT-4o, Gemini 1.5 Pro). Participants and LLMs responded to identical divergent prompts, including 10 Alternative Uses Task-style prompts and 10 design problems. Both humans and LLMs generated candidate idea pools, then performed convergent selection by choosing the top five items per prompt. Idea generation was evaluated based on Fluency, Flexibility, and Semantic Breadth. Selection outcomes were compared using top-5 overlap rates derived from semantic clustering. The results indicated near-parity in generation metrics, showing no statistically significant differences between human and AI outputs. However, a substantial divergence was observed in convergent selection: the mean human–AI top-5 overlap was 19.2% for Model-A and 22.4% for Model-B, both significantly below permutation-based chance levels (null mean overlap ≈ 35%). AI selections were strongly predicted by embedding- and probability-based metrics, while human choices were better predicted by context- and experience-based criteria, highlighting a fundamental mechanistic divide. This suggests that convergent selection amplifies human–AI divergence, carrying significant implications for designing co-creative interfaces that integrate human experience into AI’s selection mechanisms.
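The top-5 overlap metric and its permutation-based chance level are easy to reproduce in miniature. This is an illustrative reconstruction (pool size and k are arbitrary here, not the study's data):

```python
import random

def overlap_rate(human_top, ai_top):
    """Fraction of one top-k selection also chosen by the other."""
    return len(set(human_top) & set(ai_top)) / len(human_top)

def permutation_null(pool, k=5, n_perm=2000, seed=0):
    """Chance-level overlap: expected top-k overlap of two independent
    random selections of k items from the same candidate pool."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_perm):
        a = rng.sample(pool, k)
        b = rng.sample(pool, k)
        total += overlap_rate(a, b)
    return total / n_perm
```

For a pool of n candidates the expected chance overlap is k/n, so an observed overlap well below `permutation_null` indicates that the two selectors disagree more than random choosers would.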

23 pages, 644 KB  
Article
A Deployment-Oriented Hybrid Semantic–QoS Framework for Web Service Selection: A Comparative Study of Transformer Encoders
by Vijayalakshmi Mahanra Rao, R Kanesaraj Ramasamy and Md Shohel Sayeed
Information 2026, 17(3), 242; https://doi.org/10.3390/info17030242 - 2 Mar 2026
Abstract
Transformer-based language models have been increasingly adopted to enhance semantic awareness in web service selection systems. However, the computational cost of large transformer encoders poses significant challenges for real-time and resource-constrained deployment scenarios. This study presents a deployment-oriented hybrid semantic–QoS framework that integrates transformer-based domain-level semantic signals with traditional Quality of Service (QoS) metrics to support scalable service selection pipelines. Rather than aiming to establish end-to-end ranking optimality, this work focuses on a comparative analysis of transformer encoders within a unified pipeline, emphasizing accuracy–latency trade-offs, resource utilization, and deployment feasibility. Four representative BERT family models—BERT, DistilBERT, RoBERTa, and ALBERT—are evaluated under identical experimental conditions. The semantic component operates at the level of domain relevance estimation, and its output is combined with QoS indicators using a controllable weighting mechanism to examine sensitivity to deployment priorities. The results reveal clear trade-offs between semantic expressiveness and computational efficiency, with lightweight models such as DistilBERT demonstrating favorable scalability and response-time characteristics despite reduced semantic capacity. The findings provide practical insights for selecting transformer encoders in QoS-aware service selection pipelines deployed in cloud, edge, or real-time environments. By framing evaluation around deployment feasibility rather than ranking optimality, this study offers guidance for balancing semantic enrichment with operational constraints in real-world service selection systems.
(This article belongs to the Section Information Systems)
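The controllable weighting mechanism that fuses the semantic signal with QoS indicators is, at its core, a convex combination. A minimal sketch, assuming both signals are normalized to [0, 1] (`hybrid_score` and `rank_services` are invented names):

```python
def hybrid_score(semantic, qos, alpha=0.6):
    """Convex combination of a [0, 1] semantic relevance signal and a
    [0, 1] normalized QoS score; alpha steers the trade-off."""
    return alpha * semantic + (1 - alpha) * qos

def rank_services(services, alpha=0.6):
    """services: list of (name, semantic, qos) tuples; best first."""
    return sorted(services,
                  key=lambda s: hybrid_score(s[1], s[2], alpha),
                  reverse=True)
```

Sweeping `alpha` is how a deployment can examine sensitivity to its priorities: a latency-critical edge setting would push weight toward QoS, a discovery-oriented setting toward semantics.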
