Search Results (252)

Search Parameters:
Keywords = textual descriptions

7 pages, 884 KB  
Proceeding Paper
Medical Specialty Classification: An Interactive Application with Iterative Improvement for Patient Triage
by Anas Chahid, Ismail Chahid, Mohamed Emharraf and Mohammed Ghaouth Belkasmi
Eng. Proc. 2025, 112(1), 64; https://doi.org/10.3390/engproc2025112064 - 4 Nov 2025
Viewed by 241
Abstract
The challenge of accurately identifying the appropriate medical specialty based on patient symptoms leads to delays in diagnosis and treatment. This paper presents an AI model developed to classify medical specialties from symptom descriptions. The model, implemented with BERT, hosted via a Python-based Flask API v3, and integrated with an interactive frontend application, allows users to input symptoms textually or interactively select affected body parts and answer multiple-choice questions. Following deployment, feedback from doctors and residents was collected and used to enhance the model’s performance, supplemented by additional data from online medical forums. This study demonstrates significant improvements in identifying the correct medical specialty, contributing to more efficient patient triage, reducing the time to diagnose and treat patients, and reducing the need for doctors to be present during initial triage, as they are often occupied in emergency departments. The use of generative AI and large language models, notably BERT, is highlighted as a key factor in the model’s success. Full article
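The symptom-to-specialty mapping this abstract describes can be sketched in miniature. The paper’s model is a fine-tuned BERT classifier served through a Flask API; the keyword-overlap scorer below is only an illustrative stand-in, and the specialty keyword lists are invented for the example.

```python
# Toy stand-in for the BERT-based specialty classifier described in the paper.
# The real system fine-tunes BERT and serves it via a Flask API; a keyword-
# overlap score illustrates the text-to-specialty mapping only.
# All keyword lists below are invented for the example.

SPECIALTY_KEYWORDS = {
    "cardiology": {"chest", "pain", "palpitations", "breath"},
    "dermatology": {"rash", "itching", "skin", "lesion"},
    "neurology": {"headache", "dizziness", "numbness", "seizure"},
}

def classify_symptoms(text: str) -> str:
    """Return the specialty whose keyword set best overlaps the symptom text."""
    tokens = set(text.lower().split())
    scores = {spec: len(tokens & kws) for spec, kws in SPECIALTY_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(classify_symptoms("sudden chest pain and shortness of breath"))  # cardiology
```

A real deployment would replace the scorer with a model call and expose it behind an HTTP endpoint, keeping the same text-in, label-out contract.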

25 pages, 2378 KB  
Article
Adaptive Graph Neural Networks with Semi-Supervised Multi-Modal Fusion for Few-Shot Steel Strip Defect Detection
by Qing-Yi Kong, Ye Rong, Guang-Long Wang, Zi-Qi Xu, Qian Zhang, Zhan-Shuai Guan and Yu-Hui Fan
Processes 2025, 13(11), 3520; https://doi.org/10.3390/pr13113520 - 3 Nov 2025
Viewed by 534
Abstract
In recent years, deep learning-based methods for surface defect detection in steel strips have advanced rapidly. Nevertheless, existing approaches still face several challenges in practical applications, such as insufficient dimensionality of feature information, inadequate representation capability for single-modal samples, poor adaptability to few-shot scenarios, and difficulties in cross-domain knowledge transfer. To overcome these limitations, this paper proposes a multi-modal fusion framework based on graph neural networks for few-shot classification and detection of surface defects. The proposed architecture consists of three core components: a multi-modal feature fusion module, a graph neural network module, and a cross-modal transfer learning module. By integrating heterogeneous data modalities—including visual images and textual descriptions—the method facilitates the construction of a more efficient and accurate defect classification and detection model. Experimental evaluations on steel strip surface defect datasets confirm the robustness and effectiveness of the proposed method under small-sample conditions. The results demonstrate that our approach provides a novel and reliable solution for automated quality inspection of surface defects in the steel industry. Full article

25 pages, 1179 KB  
Article
Quantifying Fire Risk Index in Chemical Industry Using Statistical Modeling Procedure
by Hyewon Jung, Seungil Ahn, Seungho Choi and Yeseul Jeon
Appl. Sci. 2025, 15(21), 11508; https://doi.org/10.3390/app152111508 - 28 Oct 2025
Viewed by 209
Abstract
Fire incident reports contain detailed textual narratives that capture causal factors often overlooked in structured records, while financial damage amounts provide measurable outcomes of these events. Integrating these two sources of information is essential for uncovering interpretable links between descriptive causes and their economic consequences. To this end, we develop a data-driven framework that constructs a composite Risk Index, enabling systematic quantification of how specific keywords relate to property damage amounts. This index facilitates both the identification of high-impact terms and the aggregation of risks across semantically related clusters, thereby offering a principled measure of fire-related financial risk. Using more than a decade of Korean fire investigation reports on the chemical industry classified as Special Buildings (2013–2024), we employ topic modeling and network-based embedding to estimate semantic similarities from interactions among words, and subsequently apply Lasso regression to quantify their associations with property damage amounts, thereby estimating the fire risk index. This approach enables us to assess fire risk not only at the level of individual terms, but also within their broader textual context, where highly interactive related words provide insights into collective patterns of hazard representation and their potential impact on expected losses. The analysis highlights several domains of risk, including hazardous chemical leakage, unsafe storage practices, equipment and facility malfunctions, and environmentally induced ignition. The results demonstrate that text-derived indices provide interpretable and practically relevant insights, bridging unstructured narratives with structured loss information and offering a basis for evidence-based fire risk assessment and management. 
The derived Risk Index provides practical reference data for both safety management and insurance underwriting by enabling the prioritization of preventive measures within industrial sites and offering quantitative guidance for assessing facility-specific risk levels in insurance decisions. An R implementation of the proposed framework is openly available for public use. Full article
(This article belongs to the Special Issue Advanced Methodology and Analysis in Fire Protection Science)
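The quantification step the abstract describes, relating keyword features to property damage through a Lasso penalty, can be sketched with a minimal coordinate-descent Lasso. The keyword-indicator matrix and damage figures below are invented, and this is a sketch rather than the authors’ R implementation (which also involves topic modeling and network-based embeddings).

```python
# Minimal coordinate-descent Lasso relating keyword indicators to damage
# amounts, in the spirit of the paper's risk-index step. The keyword matrix
# and damage figures are invented for illustration.

def soft_threshold(z, g):
    """Lasso shrinkage operator."""
    return z - g if z > g else z + g if z < -g else 0.0

def lasso_cd(X, y, lam, sweeps=200):
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(sweeps):
        for j in range(p):
            # residual with feature j's contribution removed
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / z if z else 0.0
    return w

# Rows: incident reports; columns: presence of "leakage", "storage", "weather".
X = [[1, 0, 0], [1, 1, 0], [0, 1, 0], [0, 0, 1], [1, 0, 1], [0, 0, 0]]
y = [5.0, 6.0, 1.0, 0.2, 5.2, 0.0]   # property damage (illustrative units)
risk_index = lasso_cd(X, y, lam=0.1)
```

Larger penalties zero out weakly associated keywords, which is what makes the resulting coefficients usable as a sparse, interpretable risk index.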

16 pages, 5440 KB  
Article
Pov9D: Point Cloud-Based Open-Vocabulary 9D Object Pose Estimation
by Tianfu Wang and Hongguang Wang
J. Imaging 2025, 11(11), 380; https://doi.org/10.3390/jimaging11110380 - 28 Oct 2025
Viewed by 337
Abstract
We propose a point cloud-based framework for open-vocabulary object pose estimation, called Pov9D. Existing approaches are predominantly RGB-based and often rely on texture or appearance cues, making them susceptible to pose ambiguities when objects are textureless or lack distinctive visual features. In contrast, Pov9D takes 3D point clouds as input, enabling direct access to geometric structures that are essential for accurate and robust pose estimation, especially in open-vocabulary settings. To bridge the gap between geometric observations and semantic understanding, Pov9D integrates category-level textual descriptions to guide the estimation process. To this end, we introduce a text-conditioned shape prior generator that predicts a normalized object shape from both the observed point cloud and the textual category description. This shape prior provides a consistent geometric reference, facilitating precise prediction of object translation, rotation, and size, even for unseen categories. Extensive experiments on the OO3D-9D benchmark demonstrate that Pov9D achieves state-of-the-art performance, improving Abs IoU@50 by 7.2% and Rel 10° 10 cm by 27.2% over OV9D. Full article
(This article belongs to the Special Issue 3D Image Processing: Progress and Challenges)

20 pages, 3084 KB  
Article
Decoding Construction Accident Causality: A Decade of Textual Reports Analyzed
by Yuelin Wang and Patrick X. W. Zou
Buildings 2025, 15(21), 3859; https://doi.org/10.3390/buildings15213859 - 25 Oct 2025
Viewed by 412
Abstract
Analyzing accident reports to absorb past experiences is crucial for construction site safety. Current methods of processing textual accident reports are time-consuming and labor-intensive. This research applied the LDA topic model to analyze construction accident reports, successfully identifying five main types of accidents: Falls from Height (23.5%), Struck-by and Contact Injuries (22.4%), Slips, Trips, and Falls (21.8%), Hot Work & Vehicle Hazards (18.1%), and Lifting and Machinery Accidents (14.2%). By mining the rich contextual details within unstructured textual descriptions, this research revealed that environmental factors constituted the most prevalent category of contributing causes, followed by human factors. Further analysis traced the root causes to deficiencies in management systems, particularly poor task planning and inadequate training. The LDA model demonstrated superior effectiveness in extracting interpretable topics directly mappable to engineering knowledge and uncovering these latent factors from large-scale, decade-spanning textual data at low computational cost. The findings offer transformative perspectives for improving construction site safety by prioritizing environmental control and management system enhancement. The main theoretical contributions of this research are threefold. First, it demonstrates the efficacy of LDA topic modeling as a powerful tool for extracting interpretable and actionable knowledge from large-scale, unstructured textual safety data, aligning with the growing interest in data-driven safety management in the construction sector. Second, it provides large-scale, empirical evidence that challenges the traditional dogma of “human factor dominance” by systematically quantifying the critical role of environmental and managerial root causes. Third, it presents a transparent, data-driven protocol for transitioning from topic identification to causal analysis, moving from assertion to evidence. 
Future work should focus on integrating multi-dimensional data for comprehensive accident analysis. Full article
(This article belongs to the Special Issue Digitization and Automation Applied to Construction Safety Management)
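The topic-modeling step can be illustrated with a minimal collapsed Gibbs sampler for LDA. The toy corpus, K = 2 topics, and hyperparameters below are invented; the paper’s preprocessing, corpus scale, and model selection are not reproduced.

```python
# Minimal collapsed Gibbs sampler for LDA, illustrating the kind of topic
# model applied to the accident reports. Corpus and settings are toy values.
import random

def lda_gibbs(docs, K=2, alpha=0.1, beta=0.01, iters=300, seed=0):
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    widx = {w: i for i, w in enumerate(vocab)}
    ndk = [[0] * K for _ in docs]        # doc-topic counts
    nkw = [[0] * V for _ in range(K)]    # topic-word counts
    nk = [0] * K                         # topic totals
    z = []
    for d, doc in enumerate(docs):       # random initial assignments
        zs = []
        for w in doc:
            t = rng.randrange(K)
            zs.append(t)
            ndk[d][t] += 1; nkw[t][widx[w]] += 1; nk[t] += 1
        z.append(zs)
    for _ in range(iters):               # Gibbs sweeps
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t, v = z[d][i], widx[w]
                ndk[d][t] -= 1; nkw[t][v] -= 1; nk[t] -= 1
                weights = [(ndk[d][k] + alpha) * (nkw[k][v] + beta) / (nk[k] + V * beta)
                           for k in range(K)]
                t = rng.choices(range(K), weights)[0]
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][v] += 1; nk[t] += 1
    theta = [[(ndk[d][k] + alpha) / (len(docs[d]) + K * alpha) for k in range(K)]
             for d in range(len(docs))]
    return theta, vocab, nkw

docs = [
    ["worker", "fell", "ladder", "height", "harness"],
    ["fell", "scaffold", "height", "edge", "harness"],
    ["crane", "load", "lifting", "swung", "struck"],
    ["lifting", "crane", "hook", "load", "struck"],
]
theta, vocab, nkw = lda_gibbs(docs)
```

Each row of `theta` is a document’s topic distribution; inspecting the top words per topic in `nkw` is how interpretable categories like “Falls from Height” emerge.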

26 pages, 18977 KB  
Article
Large Language Models for Structured Task Decomposition in Reinforcement Learning Problems with Sparse Rewards
by Unai Ruiz-Gonzalez, Alain Andres and Javier Del Ser
Mach. Learn. Knowl. Extr. 2025, 7(4), 126; https://doi.org/10.3390/make7040126 - 22 Oct 2025
Viewed by 695
Abstract
Reinforcement learning (RL) agents face significant challenges in sparse-reward environments, as insufficient exploration of the state space can result in inefficient training or incomplete policy learning. To address this challenge, this work proposes a teacher–student framework for RL that leverages the inherent knowledge of large language models (LLMs) to decompose complex tasks into manageable subgoals. The capabilities of LLMs to comprehend problem structure and objectives, based on textual descriptions, can be harnessed to generate subgoals, similar to the guidance a human supervisor would provide. For this purpose, we introduce the following three subgoal types: positional, representation-based, and language-based. Moreover, we propose an LLM surrogate model to reduce computational overhead and demonstrate that the supervisor can be decoupled once the policy has been learned, further lowering computational costs. Under this framework, we evaluate the performance of three open-source LLMs (namely, Llama, DeepSeek, and Qwen). Furthermore, we assess our teacher–student framework on the MiniGrid benchmark—a collection of procedurally generated environments that demand generalization to previously unseen tasks. Experimental results indicate that our teacher–student framework facilitates more efficient learning and encourages enhanced exploration in complex tasks, resulting in faster training convergence and outperforming recent teacher–student methods designed for sparse-reward environments. Full article
(This article belongs to the Section Learning)
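The positional-subgoal idea can be sketched as a reward-shaping wrapper: the teacher (an LLM in the paper) supplies an ordered list of subgoal positions, and the student earns a bonus each time it reaches the next one. The class name, bonus value, and positions below are illustrative assumptions, not the paper’s API.

```python
# Sketch of positional-subgoal reward shaping. In the paper an LLM teacher
# proposes subgoals from a textual task description; here they are given
# directly. Names and the bonus value are illustrative assumptions.

class SubgoalShaper:
    def __init__(self, subgoals, bonus=0.1):
        self.subgoals = list(subgoals)  # ordered subgoal positions from the teacher
        self.progress = 0               # index of the next unmet subgoal
        self.bonus = bonus

    def shape(self, position, env_reward):
        """Add a bonus when the agent reaches the next subgoal in order."""
        if (self.progress < len(self.subgoals)
                and position == self.subgoals[self.progress]):
            self.progress += 1
            return env_reward + self.bonus
        return env_reward

shaper = SubgoalShaper([(1, 1), (3, 2)])
rewards = [shaper.shape(pos, 0.0) for pos in [(0, 0), (1, 1), (2, 1), (3, 2)]]
```

Decoupling the teacher after training, as the paper proposes, amounts to dropping the wrapper once the policy no longer needs the shaped signal.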

17 pages, 631 KB  
Article
Women’s Perspectives on Vocalization in the First and Second Stages of Labor: A Qualitative Study
by Isabel Rute Pereira, Margarida Sim-Sim and Maria Otília Zangão
Women 2025, 5(4), 38; https://doi.org/10.3390/women5040038 - 13 Oct 2025
Viewed by 477
Abstract
Despite growing interest in humanized childbirth practices, there is still little qualitative research exploring women’s perspectives on vocalization during labor. The present study aims to analyze women’s experiences with the use of vocalization in the first and second stages of labor. A descriptive and exploratory qualitative study was conducted using semi-structured interviews with 16 women in the postpartum period between February and April 2024. Participants were recruited by convenience sampling, and data saturation was achieved when no new themes emerged from the interviews. Thematic analysis was performed using IRaMuTeQ (version 0.8 alpha 7) software. The textual corpus generated allowed classification into five thematic categories: Vocalization as an instinctive expression in natural childbirth; Functionality of vocalization during labor; Medicalized childbirth and natural childbirth; Fears during childbirth and their contributing factors; Typology of vocalization in labor. We conclude that many women reported that vocalization during labor is instinctive and functional, providing pain relief, but also serving as a means of communication, empowering women. Its expression can be strongly influenced by sociocultural, emotional, and contextual factors in each woman’s particular sphere. These findings, although limited to a specific population, suggest that healthcare professionals should consider vocalization as an individualized support tool, taking cultural differences into account. Full article

18 pages, 46866 KB  
Article
SATrack: Semantic-Aware Alignment Framework for Visual–Language Tracking
by Yangyang Tian, Liusen Xu, Zhe Li, Liang Jiang, Cen Chen and Huanlong Zhang
Electronics 2025, 14(19), 3935; https://doi.org/10.3390/electronics14193935 - 4 Oct 2025
Viewed by 530
Abstract
Visual–language tracking often faces challenges like target deformation and confusion caused by similar objects. These issues can disrupt the alignment between visual inputs and their textual descriptions, leading to cross-modal semantic drift and feature-matching errors. To address these issues, we propose SATrack, a Semantic-Aware Alignment framework for visual–language tracking. Specifically, we first propose the Semantically Aware Contrastive Alignment module, which leverages attention-guided semantic distance modeling to identify hard negative samples that are semantically similar but carry different labels. This helps the model better distinguish confusing instances and capture fine-grained cross-modal differences. Secondly, we design the Cross-Modal Token Filtering strategy, which leverages attention responses guided by both the visual template and the textual description to filter out irrelevant or weakly related tokens in the search region. This helps the model focus more precisely on the target. Finally, we propose a Confidence-Guided Template Memory mechanism, which evaluates the prediction quality of each frame using convolutional operations and confidence thresholding. High-confidence frames are stored to selectively update the template memory, enabling the model to adapt to appearance changes over time. Extensive experiments show that SATrack achieves a 65.8% success rate on the TNL2K benchmark, surpassing the previous state-of-the-art UVLTrack by 3.1% and demonstrating superior robustness and accuracy. Full article
(This article belongs to the Special Issue Deep Perception in Autonomous Driving, 2nd Edition)
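The hard-negative idea behind the Semantically Aware Contrastive Alignment module, selecting candidates that are semantically close but carry a different label, can be sketched as follows. Plain Euclidean distance and the toy embeddings are simplifying assumptions; the paper uses attention-guided semantic distance modeling.

```python
# Sketch of hard-negative selection: among candidates with a different label,
# pick the one closest to the anchor in embedding space. Euclidean distance
# and the toy embeddings/labels below are simplifying assumptions.
import math

def hardest_negative(anchor_emb, anchor_label, candidates):
    """candidates: list of (embedding, label). Returns the closest wrong-label embedding."""
    negatives = [(math.dist(anchor_emb, emb), emb)
                 for emb, label in candidates if label != anchor_label]
    return min(negatives)[1]

candidates = [
    ([0.9, 0.1], "zebra"),   # similar but differently labeled -> hard negative
    ([0.2, 0.8], "car"),     # easy, dissimilar negative
    ([1.0, 0.0], "horse"),   # same label: excluded
]
print(hardest_negative([1.0, 0.0], "horse", candidates))  # [0.9, 0.1]
```

Training the contrastive loss against such near-miss negatives is what pushes the model to resolve fine-grained cross-modal differences.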

25 pages, 1745 KB  
Article
On the Practical Philosophy of the Nuns’ Buddhist Academy at Mount Wutai Through “One-Week Intensive Buddha Retreats”
by Yong Li, Yi Zhang and Jing Wang
Religions 2025, 16(10), 1267; https://doi.org/10.3390/rel16101267 - 3 Oct 2025
Viewed by 774
Abstract
The educational philosophy of the Nuns’ Buddhist Academy at Pushou Monastery, Mount Wutai, is based on the principles of “Hua Yan as the foundation, precepts as the practice, and Pure Land as the destination.” This philosophy draws upon Buddhist scriptures, integrating descriptions of the Pure Land practice found in the Avatamsaka Sūtra and the Amitābha Sūtra. This approach translates the textual teachings of Buddhist classics into real-life practice, expressing the concept of “the non-obstruction of principle and phenomenon” in the tangible activities of practitioners. It also allows for the experiential understanding of the spiritual realms revealed in the scriptures during theoretical learning and practice. The philosophy of the Nuns’ Academy embodies the practical emphasis of Chinese Buddhism, guiding all aspects of learning and practice. This paper argues that Pure Land practice is a living tradition. Understanding it requires a comprehensive viewpoint: textual analysis to trace its roots in the Buddhist sūtras, a sociological method to investigate its manifestation in present-day society, and attention to the spiritual dimension for a full-scale study. In this sense, the Pure Land school remains alive today. Full article

15 pages, 4063 KB  
Article
Context-Aware Dynamic Integration for Scene Recognition
by Chan Ho Bae and Sangtae Ahn
Mathematics 2025, 13(19), 3102; https://doi.org/10.3390/math13193102 - 27 Sep 2025
Viewed by 428
Abstract
The identification of scenes poses a notable challenge within the realm of image processing. Unlike object recognition, which typically involves relatively consistent forms, scene images exhibit a broader spectrum of variability. This research introduces an approach that combines image and text data to improve scene recognition performance. A model for tagging images is employed to extract textual descriptions of objects within scene images, providing insights into the components present. Subsequently, a pre-trained encoder converts the text into a feature set that complements the visual information derived from the scene images. These features offer a comprehensive understanding of the scene’s content, and a dynamic integration network is designed to manage and prioritize information from both text and image data. The proposed framework can effectively identify crucial elements by adjusting its focus on either text or image features depending on the scene’s context. Consequently, the framework enhances scene recognition accuracy and provides a more holistic understanding of scene composition. By leveraging image tagging, this study improves the image model’s ability to analyze and interpret intricate scene elements. Furthermore, incorporating dynamic integration increases the accuracy and functionality of the scene recognition system. Full article
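The dynamic integration idea, a gate deciding how much weight to give text versus image features, can be sketched as scalar gated fusion. The gate weights would normally be learned end to end; the zero weights below are placeholders.

```python
# Scalar gated fusion of image and text features, a minimal sketch of the
# dynamic integration idea. Gate weights are placeholders, not learned values.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(img_feat, txt_feat, gate_w, gate_b=0.0):
    """Blend the two feature vectors with a gate computed from their concatenation."""
    g = sigmoid(sum(w * f for w, f in zip(gate_w, img_feat + txt_feat)) + gate_b)
    return [g * i + (1 - g) * t for i, t in zip(img_feat, txt_feat)]

# With zero gate weights the gate is 0.5 and the fusion is a plain average;
# training would shift g toward whichever modality is more informative.
fused = gated_fusion([1.0, 0.0], [0.0, 1.0], gate_w=[0.0] * 4)
```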

30 pages, 1770 KB  
Article
A Hybrid Numerical–Semantic Clustering Algorithm Based on Scalarized Optimization
by Ana-Maria Ifrim and Ionica Oncioiu
Algorithms 2025, 18(10), 607; https://doi.org/10.3390/a18100607 - 27 Sep 2025
Viewed by 527
Abstract
This paper addresses the challenge of segmenting consumer behavior in contexts characterized by both numerical regularities and semantic variability. Traditional models, such as RFM-based segmentation, capture the transactional dimension but neglect the implicit meanings expressed through product descriptions, reviews, and linguistic diversity. To overcome this gap, we propose a hybrid clustering algorithm that integrates numerical and semantic distances within a unified scalar framework. The central element is a scalar objective function that combines Euclidean distance in the RFM space with cosine dissimilarity in the semantic embedding space. A continuous parameter λ regulates the relative influence of each component, allowing the model to adapt granularity and balance interpretability across heterogeneous data. Optimization is performed through a dual strategy: gradient descent ensures convergence in the numerical subspace, while genetic operators enable a broader exploration of semantic structures. This combination supports both computational stability and semantic coherence. The method is validated on a large-scale multilingual dataset of transactional records, covering five culturally distinct markets. Results indicate systematic improvements over classical approaches, with higher Silhouette scores, lower Davies–Bouldin values, and stronger intra-cluster semantic consistency. Beyond numerical performance, the proposed framework produces intelligible and culturally adaptable clusters, confirming its relevance for personalized decision-making. The contribution lies in advancing a scalarized formulation and hybrid optimization strategy with wide applicability in scenarios where numerical and textual signals must be analyzed jointly. Full article
(This article belongs to the Special Issue Recent Advances in Numerical Algorithms and Their Applications)
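The scalarized objective reduces, per pair of customers, to a λ-weighted sum of Euclidean distance in RFM space and cosine dissimilarity in embedding space, which can be written directly. The vectors below are invented, and the paper’s gradient-descent and genetic optimization loop is not reproduced.

```python
# Hybrid per-pair distance combining a Euclidean (RFM) term and a cosine
# (semantic) term, mirroring the scalarized objective. Vectors are toy values.
import math

def hybrid_distance(rfm_a, rfm_b, emb_a, emb_b, lam=0.5):
    d_num = math.sqrt(sum((x - y) ** 2 for x, y in zip(rfm_a, rfm_b)))
    dot = sum(x * y for x, y in zip(emb_a, emb_b))
    norm = (math.sqrt(sum(x * x for x in emb_a))
            * math.sqrt(sum(x * x for x in emb_b)))
    d_sem = 1.0 - dot / norm if norm else 1.0
    return lam * d_num + (1.0 - lam) * d_sem

# lam=1 ignores semantics; lam=0 ignores the numerical RFM component.
d = hybrid_distance([3, 4], [0, 0], [1.0, 0.0], [0.0, 1.0], lam=0.5)
```

Any centroid- or medoid-based clustering algorithm can then consume this distance, with λ tuning the balance between transactional and semantic structure.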

19 pages, 895 KB  
Article
Checking Medical Process Conformance by Exploiting LLMs
by Giorgio Leonardi, Stefania Montani and Manuel Striani
Appl. Sci. 2025, 15(18), 10184; https://doi.org/10.3390/app151810184 - 18 Sep 2025
Viewed by 467
Abstract
Clinical guidelines, which represent the normative process models for healthcare organizations, are typically available in a textual, unstructured form. This issue hampers the application of classical conformance-checking algorithms to the medical domain, as these take as input a formalized, computer-interpretable description of the process. In this paper, (i) we propose overcoming this problem by taking advantage of a Large Language Model (LLM), in order to extract normative rules from textual guidelines; (ii) we then check and quantify the conformance of the patient event log with respect to such rules. Additionally, (iii) we adopt the approach as a means for evaluating the quality of the models mined by different process discovery algorithms from the event log, by comparing their conformance to the rules. We have tested our work in the domain of stroke. As regards conformance checking, we have verified the compliance of four Northern Italy hospitals with a general rule for diagnosis timing and with two rules that refer to thrombolysis treatment, and have identified some issues related to other rules, which involve the availability of magnetic resonance instruments. As regards process model discovery evaluation, we have assessed the superiority of Heuristic Miner with respect to other mining algorithms on our dataset. It is worth noting that the easy extraction of rules in our LLM-assisted approach would make it quickly applicable to other fields as well. Full article
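Once rules have been extracted, checking an event log against a single timing rule is mechanical. The sketch below tests a “B must occur within N minutes of A” rule over toy traces; the rule format and event names are illustrative assumptions, not the LLM-extracted stroke rules from the paper.

```python
# Sketch of checking one timing rule against an event log. Each trace is a
# list of (event, minutes-since-admission) pairs. Rule format and events are
# illustrative assumptions.

def conforms(trace, event_a, event_b, max_minutes):
    """True if event_b occurs within max_minutes after event_a in the trace."""
    times = {e: t for e, t in reversed(trace)}  # keep first occurrence of each event
    return (event_a in times and event_b in times
            and 0 <= times[event_b] - times[event_a] <= max_minutes)

log = [
    [("admission", 0), ("ct_scan", 20), ("thrombolysis", 50)],
    [("admission", 0), ("ct_scan", 40)],
]
# Fraction of traces conforming to "CT scan within 25 minutes of admission".
rate = sum(conforms(t, "admission", "ct_scan", 25) for t in log) / len(log)
```

Aggregating such per-rule rates over a log is one simple way to quantify conformance, both for hospitals and for discovered process models.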

20 pages, 667 KB  
Article
The Role of Campaign Descriptions and Visual Features in Crowdfunding Success: Evidence from Africa
by Lenny Phulong Mamaro, Athenia Bongani Sibindi and Daniel Makina
J. Risk Financial Manag. 2025, 18(9), 518; https://doi.org/10.3390/jrfm18090518 - 17 Sep 2025
Viewed by 786
Abstract
Crowdfunding has gained popularity among entrepreneurs who seek funding for their business projects on crowdfunding platforms. The success of these campaigns largely depends on the ability to attract and convince backers to support the fundraising initiative. Drawing on signalling, persuasion, and attribution theories, this study examines how campaign descriptions and visual features, specifically description length, spelling errors, images, frequently asked questions (FAQs), number of backers, funding target, flexible funding, and campaign duration, influence campaign success. The study utilised econometric techniques such as ordinary least squares and logistic regression models on a dataset consisting of 854 small and medium enterprises and entrepreneurial projects collected from Kickstarter, Indiegogo, and Fundraised databases. The probability of success is significantly increased by the length of project descriptions, the inclusion of images, and a larger number of backers. On the other hand, higher funding targets and flexible funding models decrease the probability of success. These results support the attribution and persuasion theories, indicating that detailed project descriptions can address information gaps and improve a project’s credibility and trustworthiness. This study contributes to the literature by providing an empirically grounded understanding of how textual and visual elements influence crowdfunding outcomes in the African context and offers practical guidance for entrepreneurs and investors on designing effective campaigns. Full article
(This article belongs to the Section Financial Markets)
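The regression step can be illustrated with a toy logistic model fitted by stochastic gradient descent. The features and outcomes below are invented and chosen to mimic the reported effect directions; the study itself applies OLS and logistic regression to real Kickstarter, Indiegogo, and Fundraised data.

```python
# Toy logistic regression linking campaign features to success, in the spirit
# of the study's models. Features and outcomes are invented for illustration.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=500):
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                      # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Columns: description length (normalized), has image, funding target (normalized).
X = [[0.9, 1, 0.2], [0.8, 1, 0.3], [0.7, 0, 0.4],
     [0.3, 0, 0.9], [0.2, 1, 0.8], [0.1, 0, 0.7]]
y = [1, 1, 1, 0, 0, 0]                        # 1 = successfully funded
w, b = fit_logistic(X, y)
```

On this constructed data the fitted signs match the abstract’s findings: longer descriptions help, higher targets hurt.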

21 pages, 588 KB  
Article
Research on an MOOC Recommendation Method Based on the Fusion of Behavioral Sequences and Textual Semantics
by Wenxin Zhao, Lei Zhao and Zhenbin Liu
Appl. Sci. 2025, 15(18), 10024; https://doi.org/10.3390/app151810024 - 13 Sep 2025
Viewed by 576
Abstract
To address the challenges of user behavior sparsity and insufficient utilization of course semantics on MOOC platforms, this paper proposes a personalized recommendation method that integrates user behavioral sequences with course textual semantic features. First, shallow word-level features from course titles are extracted using FastText, and deep contextual semantic representations from course descriptions are obtained via a fine-tuned BERT model. The two sets of semantic features are concatenated to form a multi-level semantic representation of course content. Next, the fused semantic features are mapped into the same vector space as course ID embeddings through a linear projection layer and combined with the original course ID embeddings via an additive fusion strategy, enhancing the model’s semantic perception of course content. Finally, the fused features are fed into an improved SASRec model, where a multi-head self-attention mechanism is employed to model the evolution of user interests, enabling collaborative recommendations across behavioral and semantic modalities. Experiments conducted on the MOOCCubeX dataset (1.26 million users, 632 courses) demonstrated that the proposed method achieved NDCG@10 and HR@10 scores of 0.524 and 0.818, respectively, outperforming SASRec and semantic single-modality baselines. This study offers an efficient yet semantically rich recommendation solution for MOOC scenarios. Full article
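The fusion step the abstract describes, projecting concatenated FastText and BERT features into the course-ID embedding space and adding them, can be sketched directly. The dimensions, projection matrix, and feature values below are invented; FastText/BERT extraction and the SASRec backbone are not reproduced.

```python
# Sketch of the semantic-fusion step: concatenate shallow (FastText-style) and
# deep (BERT-style) text features, project them into the course-ID embedding
# space, and add. All dimensions and values are invented for the example.

def project(vec, matrix):
    """Linear projection; matrix has shape (out_dim, in_dim)."""
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

def fuse(course_id_emb, fasttext_feat, bert_feat, proj_matrix):
    semantic = fasttext_feat + bert_feat          # multi-level semantic features
    return [a + b for a, b in zip(course_id_emb, project(semantic, proj_matrix))]

id_emb = [1.0, 0.0]
proj = [[0.5, 0.0, 0.0, 0.0],    # maps the 4-d semantic vector to 2-d
        [0.0, 0.0, 0.0, 0.5]]
fused = fuse(id_emb, [2.0, 0.0], [0.0, 4.0], proj)
```

The fused vectors then replace plain ID embeddings as input to the sequential self-attention model, giving it semantic awareness of course content.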

30 pages, 1729 KB  
Article
FiCT-O: Modelling Fictional Characters in Detective Fiction from the 19th to the 20th Century
by Enrica Bruno, Lorenzo Sabatino and Francesca Tomasi
Humanities 2025, 14(9), 180; https://doi.org/10.3390/h14090180 - 3 Sep 2025
Viewed by 915
Abstract
This paper proposes a formal descriptive model for understanding the evolution of characters in detective fiction from the 19th to the 20th century, using methodologies and technologies from the Semantic Web. The integration of Digital Humanities within the theory of comparative literature opens new paths of study that allow for a digital approach to the understanding of intertextuality through close reading techniques and ontological modelling. In this research area, the variety of possible textual relationships, the levels of analysis required to classify these connections, and the inherently referential nature of certain literary genres demand a structured taxonomy. This taxonomy should account for stylistic elements, narrative structures, and cultural recursiveness that are unique to literary texts. The detective figure, central to modern literature, provides an ideal lens for examining narrative intertextuality across the 19th and 20th centuries. The analysis concentrates on character traits and narrative functions, addressing various methods of rewriting within the evolving cultural and creative context of authorship. Through a comparative examination of a representative sample of detective fiction from the period under scrutiny, the research identifies mechanisms of (meta)narrative recurrence, transformation, and reworking within the canon. The outcome is a formal model for describing narrative structures and techniques, with a specific focus on character development, aimed at uncovering patterns of continuity and variation in diegetic content over time and across different works, adaptable to analogous cases of traditional reworking and narrative fluidity. Full article
