Search Results (158)

Search Parameters:
Keywords = textual semantic analysis

21 pages, 1311 KB  
Article
A Novel Dual-Layer Deep Learning Architecture for Phishing and Spam Email Detection
by Sarmad Rashed and Caner Ozcan
Electronics 2026, 15(3), 630; https://doi.org/10.3390/electronics15030630 - 2 Feb 2026
Abstract
Phishing and spam emails continue to pose a serious cybersecurity threat, leading to financial loss, information leakage, and reputational damage. Traditional email filtering approaches struggle to keep pace with increasingly sophisticated attack strategies, particularly those involving malicious content and deceptive attachments. This study proposes a dual-layer deep learning architecture designed to enhance email security by improving the detection of phishing and spam messages. The first layer employs deep learning models, including LSTM- and transformer-based classifiers, to analyze email content and structural features across legitimate, phishing, and spam emails. The second layer focuses on spam emails containing attachments and applies advanced transformer models, such as GPT-2 and XLM-RoBERTa, to assess contextual and semantic patterns associated with malicious attachments. By integrating textual analysis with attachment-level inspection, the proposed architecture overcomes limitations of single-layer approaches that rely solely on email body content. Experimental evaluation using accuracy and F1-score demonstrates that the dual-layer framework achieves a minimum F1-score of 98.75 percent in spam–ham classification and attains an attachment detection accuracy of up to 99.46 percent. These results indicate that the proposed approach offers a reliable and scalable solution for enhancing real-world email security systems. Full article

23 pages, 2605 KB  
Article
Depression Detection on Social Media Using Multi-Task Learning with BERT and Hierarchical Attention: A DSM-5-Guided Approach
by Haichao Jin and Lin Zhang
Electronics 2026, 15(3), 598; https://doi.org/10.3390/electronics15030598 - 29 Jan 2026
Viewed by 164
Abstract
Depression represents a major global health challenge, yet traditional clinical diagnosis faces limitations, including high costs, limited coverage, and low patient willingness. Social media platforms provide new opportunities for early depression screening through user-generated content. However, existing methods often lack systematic integration of clinical knowledge and fail to leverage multi-modal information comprehensively. We propose a DSM-5-guided methodology that systematically maps clinical diagnostic criteria to computable social media features across three modalities: textual semantics (BERT-based deep semantic extraction), behavioral patterns (temporal activity analysis), and topic distributions (LDA-based cognitive bias identification). We design a hierarchical architecture integrating BERT, Bi-LSTM, hierarchical attention, and multi-task learning to capture both character-level and post-level importance while jointly optimizing depression classification, symptom recognition, and severity assessment. Experiments on the WU3D dataset (32,570 users, 2.19 million posts) demonstrate that our model achieves 91.8% F1-score, significantly outperforming baseline methods (BERT: 85.6%, TextCNN: 78.6%, and SVM: 72.1%) and large language models (GPT-4 few-shot: 86.9%). Ablation studies confirm that each component contributes meaningfully with synergistic effects. The model provides interpretable predictions through attention visualization and outputs fine-grained symptom assessments aligned with DSM-5 criteria. With low computational cost (~50 ms inference time), local deployability, and superior privacy protection, our approach offers significant practical value for large-scale mental health screening applications. This work demonstrates that domain-specialized methods with explicit clinical knowledge integration remain highly competitive in the era of general-purpose large language models. Full article

28 pages, 1521 KB  
Article
Image–Text Sentiment Analysis Based on Dual-Path Interaction Network with Multi-Level Consistency Learning
by Zhi Ji, Chunlei Wu, Qinfu Xu and Yixiang Wu
Electronics 2026, 15(3), 581; https://doi.org/10.3390/electronics15030581 - 29 Jan 2026
Viewed by 116
Abstract
With the continuous evolution of social media, users are increasingly inclined to express their personal emotions on digital platforms by integrating information presented in multiple modalities. Within this context, research on image–text sentiment analysis has garnered significant attention. Prior research efforts have made notable progress by leveraging shared emotional concepts across visual and textual modalities. However, existing cross-modal sentiment analysis methods face two key challenges: Previous approaches often focus excessively on fusion, resulting in learned features that may not achieve emotional alignment; traditional fusion strategies are not optimized for sentiment tasks, leading to insufficient robustness in final sentiment discrimination. To address the aforementioned issues, this paper proposes a Dual-path Interaction Network with Multi-level Consistency Learning (DINMCL). It employs a multi-level feature representation module to decouple the global and local features of both text and image. These decoupled features are then fed into the Global Congruity Learning (GCL) and Local Crossing-Congruity Learning (LCL) modules, respectively. GCL models global semantic associations using Crossing Prompter, while LCL captures local consistency in fine-grained emotional cues across modalities through cross-modal attention mechanisms and adaptive prompt injection. Finally, a CLIP-based adaptive fusion layer integrates the multi-modal representations in a sentiment-oriented manner. Experiments on the MVSA_Single, MVSA_Multiple, and TumEmo datasets with baseline models such as CTMWA and CLMLF demonstrate that DINMCL significantly outperforms mainstream models in sentiment classification accuracy and F1-score and exhibits strong robustness when handling samples containing highly noisy symbols. Full article
(This article belongs to the Special Issue AI-Driven Image Processing: Theory, Methods, and Applications)

19 pages, 4487 KB  
Article
Research on Emerging Technology Identification Methods Based on a Knowledge Graph of High-Value Patents
by Chuan Zhan, Yang Zhou and Yanping Huang
Big Data Cogn. Comput. 2026, 10(2), 40; https://doi.org/10.3390/bdcc10020040 - 28 Jan 2026
Viewed by 104
Abstract
In the context of a new wave of scientific and technological revolution and industrial transformation, this study proposes an emerging technology identification framework that integrates a High-Value Patent Knowledge Graph with Social Network Analysis, aiming to systematically uncover the semantic and structural relationships embedded in patent data and to support national efforts to secure strategic technological advantages. First, patent textual feature scores are extracted using the Doc2Vec model, while indicator feature scores are calculated across the technical, legal, and economic dimensions using the CRITIC weighting method. These two types of scores are then integrated to derive a comprehensive patent value score, and high-value patents are screened according to the Pareto principle. Subsequently, a High-Value Patent Knowledge Graph is constructed based on entity extraction using the BERT-BiLSTM-CRF model and relationship matching techniques. Building upon this graph, centrality analysis is conducted on the nodes, and the results are combined with the rich semantic relationships represented in the knowledge graph to further identify emerging technologies. Taking the New Energy Vehicle domain as an empirical case, a High-Value Patent Knowledge Graph comprising seven types of entities, six types of relationships, and 25,611 triplets is developed, through which six key emerging sub-technology directions are identified. The empirical findings demonstrate the effectiveness and robustness of the proposed approach for emerging technology identification. Full article

23 pages, 1729 KB  
Article
Integrating Textual Features with Survival Analysis for Predicting Employee Turnover
by Qian Ke and Yongze Xu
Behav. Sci. 2026, 16(2), 174; https://doi.org/10.3390/bs16020174 - 26 Jan 2026
Viewed by 123
Abstract
This study presents a novel methodology that integrates Transformer-based textual analysis from professional networking platforms with traditional demographic variables within a survival analysis framework to predict turnover. Using a dataset comprising 4087 work events from Maimai (a leading professional networking platform in China) spanning 2020 to 2022, our approach combines sentiment analysis and deep learning semantic representations to enhance predictive accuracy and interpretability for HR decision-making. Methodologically, we adopt a hybrid feature-extraction strategy combining theory-driven methods (sentiment analysis and TF-IDF) with a data-driven Transformer-based technique. Survival analysis is then applied to model time-dependent turnover risks, and we compare multiple models to identify the most predictive feature sets. Results demonstrate that integrating textual and demographic features improves prediction performance, specifically increasing the C-index by 3.38% and the cumulative/dynamic AUC by 3.43%. The Transformer-based method outperformed traditional approaches in capturing nuanced employee sentiments. Survival analysis further boosts model adaptability by incorporating temporal dynamics and also provides interpretable risk factors for turnover, supporting data-driven HR strategy formulation. This research advances turnover prediction methodology by combining text analysis with survival modeling, offering small and medium-sized enterprises a practical, data-informed approach to workforce planning. The findings contribute to broader labor market insights and can inform both organizational talent retention strategies and related policy-making. Full article
(This article belongs to the Section Organizational Behaviors)

32 pages, 16166 KB  
Article
A Multimodal Ensemble-Based Framework for Detecting Fake News Using Visual and Textual Features
by Muhammad Abdullah, Hongying Zan, Arifa Javed, Muhammad Sohail, Orken Mamyrbayev, Zhanibek Turysbek, Hassan Eshkiki and Fabio Caraffini
Mathematics 2026, 14(2), 360; https://doi.org/10.3390/math14020360 - 21 Jan 2026
Viewed by 217
Abstract
Detecting fake news is essential in natural language processing to verify news authenticity and prevent misinformation-driven social, political, and economic disruptions targeting specific groups. A major challenge in multimodal fake news detection is effectively integrating textual and visual modalities, as semantic gaps and contextual variations between images and text complicate alignment, interpretation, and the detection of subtle or blatant inconsistencies. To enhance accuracy in fake news detection, this article introduces an ensemble-based framework that integrates textual and visual data using ViLBERT’s two-stream architecture, incorporates VADER sentiment analysis to detect emotional language, and uses Image–Text Contextual Similarity to identify mismatches between visual and textual elements. These features are processed through the Bi-GRU classifier, Transformer-XL, DistilBERT, and XLNet, combined via a stacked ensemble method with soft voting, culminating in a T5 metaclassifier that predicts the outcome for robustness. Results on the Fakeddit and Weibo benchmarking datasets show that our method outperforms state-of-the-art models, achieving up to 96% and 94% accuracy in fake news detection, respectively. This study highlights the necessity for advanced multimodal fake news detection systems to address the increasing complexity of misinformation and offers a promising solution. Full article

33 pages, 550 KB  
Article
Intelligent Information Processing for Corporate Performance Prediction: A Hybrid Natural Language Processing (NLP) and Deep Learning Approach
by Qidi Yu, Chen Xing, Yanjing He, Sunghee Ahn and Hyung Jong Na
Electronics 2026, 15(2), 443; https://doi.org/10.3390/electronics15020443 - 20 Jan 2026
Viewed by 195
Abstract
This study proposes a hybrid machine learning framework that integrates structured financial indicators and unstructured textual strategy disclosures to improve firm-level management performance prediction. Using corporate business reports from South Korean listed firms, strategic text was extracted and categorized under the Balanced Scorecard (BSC) framework into financial, customer, internal process, and learning and growth dimensions. Various machine learning and deep learning models—including k-nearest neighbors (KNNs), support vector machine (SVM), light gradient boosting machine (LightGBM), convolutional neural network (CNN), long short-term memory (LSTM), autoencoder, and transformer—were evaluated, with results showing that the inclusion of strategic textual data significantly enhanced prediction accuracy, precision, recall, area under the curve (AUC), and F1-score. Among individual models, the transformer architecture demonstrated superior performance in extracting context-rich semantic features. A soft-voting ensemble model combining autoencoder, LSTM, and transformer achieved the best overall performance, leading in accuracy and AUC, while the best single deep learning model (transformer) obtained a marginally higher F1 score, confirming the value of hybrid learning. Furthermore, analysis revealed that customer-oriented strategy disclosures were the most predictive among BSC dimensions. These findings highlight the value of integrating financial and narrative data using advanced NLP and artificial intelligence (AI) techniques to develop interpretable and robust corporate performance forecasting models. In addition, we operationalize information security narratives using a reproducible cybersecurity lexicon and derive security disclosure intensity and weight share features that are jointly evaluated with BSC-based strategic vectors. Full article
(This article belongs to the Special Issue Advances in Intelligent Information Processing)

35 pages, 830 KB  
Article
Predicting Financial Contagion: A Deep Learning-Enhanced Actuarial Model for Systemic Risk Assessment
by Khalid Jeaab, Youness Saoudi, Smaaine Ouaharahe and Moulay El Mehdi Falloul
J. Risk Financial Manag. 2026, 19(1), 72; https://doi.org/10.3390/jrfm19010072 - 16 Jan 2026
Viewed by 415
Abstract
Financial crises increasingly exhibit complex, interconnected patterns that traditional risk models fail to capture. The 2008 global financial crisis, 2020 pandemic shock, and recent banking sector stress events demonstrate how systemic risks propagate through multiple channels simultaneously—e.g., network contagion, extreme co-movements, and information cascades—creating a multidimensional phenomenon that exceeds the capabilities of conventional actuarial or econometric approaches alone. This paper addresses the fundamental challenge of modeling this multidimensional systemic risk phenomenon by proposing a mathematically formalized three-tier integration framework that achieves 19.2% accuracy improvement over traditional models through the following: (1) dynamic network-copula coupling that captures 35% more tail dependencies than static approaches, (2) semantic-temporal alignment of textual signals with network evolution, and (3) economically optimized threshold calibration reducing false positives by 35% while maintaining 85% crisis detection sensitivity. Empirical validation on historical data (2000–2023) demonstrates significant improvements over traditional models: 19.2% increase in predictive accuracy (R2 from 0.68 to 0.87), 2.7 months earlier crisis detection compared to Basel III credit-to-GDP indicators, and 35% reduction in false positive rates while maintaining 85% crisis detection sensitivity. Case studies of the 2008 crisis and 2020 market turbulence illustrate the model’s ability to identify subtle precursor signals through integrated analysis of network structure evolution and semantic changes in regulatory communications. These advances provide financial regulators and institutions with enhanced tools for macroprudential supervision and countercyclical capital buffer calibration, strengthening financial system resilience against multifaceted systemic risks. Full article
(This article belongs to the Special Issue Financial Regulation and Risk Management amid Global Uncertainty)

23 pages, 4481 KB  
Article
PathSelect: Dynamic Token Condensation and Hierarchical Attention for Accelerated T2I Diffusion
by Yan Fu, Gaolin Ye, Ou Ye, Ting Hou and Ruimin Dai
Electronics 2026, 15(2), 342; https://doi.org/10.3390/electronics15020342 - 13 Jan 2026
Viewed by 193
Abstract
Recent advancements in large language models (LLMs) have significantly improved text-to-image (T2I) generation, enabling systems to produce visually compelling and semantically meaningful images. However, preserving fine-grained semantic consistency in generated images, particularly in response to complex and region-specific textual prompts, remains a key challenge. In this work, we propose a context-aware hierarchical agent mechanism that integrates a semantic condensation strategy to enhance attention efficiency and maintain critical visual-textual alignment. By dynamically fusing contextual information, the method effectively balances computational efficiency and ensures semantic alignment with textual descriptions. Experimental results demonstrate improved visual coherence and semantic consistency across diverse prompts, validated through quantitative metrics and qualitative analysis. Our contributions include: (i) introducing a novel semantic condensation strategy that enhances attention efficiency while preserving critical feature information; (ii) developing a new hierarchical agent attention mechanism to enhance computation efficiency; (iii) designing an iterative feedback method based on CLIP Score to improve image diversity and overall quality. Full article
(This article belongs to the Section Artificial Intelligence)

27 pages, 1843 KB  
Article
AI-Driven Modeling of Near-Mid-Air Collisions Using Machine Learning and Natural Language Processing Techniques
by Dothang Truong
Aerospace 2026, 13(1), 80; https://doi.org/10.3390/aerospace13010080 - 12 Jan 2026
Viewed by 261
Abstract
As global airspace operations grow increasingly complex, the risk of near-mid-air collisions (NMACs) poses a persistent and critical challenge to aviation safety. Traditional collision-avoidance systems, while effective in many scenarios, are limited by rule-based logic and reliance on transponder data, particularly in environments featuring diverse aircraft types, unmanned aerial systems (UAS), and evolving urban air mobility platforms. This paper introduces a novel, integrative machine learning framework designed to analyze NMAC incidents using the rich, contextual information contained within the NASA Aviation Safety Reporting System (ASRS) database. The methodology is structured around three pillars: (1) natural language processing (NLP) techniques are applied to extract latent topics and semantic features from pilot and crew incident narratives; (2) cluster analysis is conducted on both textual and structured incident features to empirically define distinct typologies of NMAC events; and (3) supervised machine learning models are developed to predict pilot decision outcomes (evasive action vs. no action) based on integrated data sources. The analysis reveals seven operationally coherent topics that reflect communication demands, pattern geometry, visibility challenges, airspace transitions, and advisory-driven interactions. A four-cluster solution further distinguishes incident contexts ranging from tower-directed approaches to general aviation pattern and cruise operations. The Random Forest model produces the strongest predictive performance, with topic-based indicators, miss distance, altitude, and operating rule emerging as influential features. The results show that narrative semantics provide measurable signals of coordination load and acquisition difficulty, and that integrating text with structured variables enhances the prediction of maneuvering decisions in NMAC situations. 
These findings highlight opportunities to strengthen radio practice, manage pattern spacing, improve mixed equipage awareness, and refine alerting in short-range airport area encounters. Full article
(This article belongs to the Section Air Traffic and Transportation)

24 pages, 474 KB  
Article
Chinese Buddhist Canon Digitization: A Review and Prospects
by Xu Zhang
Religions 2026, 17(1), 52; https://doi.org/10.3390/rel17010052 - 3 Jan 2026
Viewed by 635
Abstract
The digitization of the Chinese Buddhist Canon represents a transformative shift in Buddhist textual scholarship, enabling unprecedented access to and analysis of one of East Asia’s most extensive scriptural collections. This review examines the evolution of digital platforms, with a focus on the Chinese Buddhist Electronic Text Association (CBETA) and the SAT Daizōkyō Text Database, which have become foundational resources in the field. It evaluates their respective methodological paradigms—CBETA’s critical edition model and SAT’s interoperable, ecosystem-based approach—while highlighting their shared reliance on the Taishō Tripiṭaka as a base text. The study identifies a persistent “Taishō bottleneck,” wherein the dominance of a single edition obscures the rich textual diversity inherent in the canon’s three major lineages: Central, Southern, and Northern. By surveying newly accessible image databases of key editions such as the Zhaocheng Jin Canon 趙城金藏, Sixi Canon 思溪藏, and Qidan Canon 契丹藏, the paper argues for a paradigm shift toward a multi-lineage collation framework. The integration of artificial intelligence—particularly in OCR, text–image alignment, and semantic analysis—is presented as essential for realizing a “Hybrid Digital Canon.” This model would harmonize genealogical, media, and methodological pluralism, fostering a more nuanced and historically grounded digital philology. Full article

32 pages, 2191 KB  
Article
Evaluating Color Perception in Indoor Cultural Display Spaces of Traditional Chinese Floral Arrangements: A Combined Semantic Differential and Eye-Tracking Study
by Kun Yuan, Pingfang Fan, Han Qin and Wei Gong
Buildings 2026, 16(1), 181; https://doi.org/10.3390/buildings16010181 - 31 Dec 2025
Viewed by 346
Abstract
The color design of architectural interior display spaces directly affects the effectiveness of cultural information communication and the visual cognitive experience of viewers. However, there is currently a lack of combined subjective and objective evaluation regarding how to scientifically translate and apply traditional color systems in modern contexts. This study takes the virtual display space of traditional Chinese floral arrangements as a case, aiming to construct an evaluation framework integrating the Semantic Differential Method and eye-tracking technology, to empirically examine how color schemes based on the translation of traditional aesthetics affect the subjective perception and objective visual attention behavior of modern viewers. Firstly, colors were extracted and translated from Song Dynasty paintings and literature, constructing five sets of culturally representative color combination samples, which were then applied to standardized virtual exhibition booths. Eye tracking data of 49 participants during free viewing were recorded via an eye-tracker, and their subjective ratings on four dimensions—cultural color atmosphere perception, color matching comfort level, artwork form clarity, and explanatory text clarity—were collected. Data analysis comprehensively employed linear mixed models, non-parametric tests, and Spearman’s rank correlation analysis. The results show that, regarding subjective perception, different color schemes exhibited significant differences in traditional feel, comfort, and text clarity, with Sample 4 and Sample 5 performing better on multiple indicators; a moderate-strength, significant positive correlation was found between traditional cultural atmosphere perception and color matching comfort. Regarding objective eye-tracking behavior, color significantly influenced the overall visual engagement duration and the processing depth of the text area. 
Among them, the color scheme of Sample 5 better promoted sustained reading of auxiliary textual information, while the total fixation duration obtained for Sample 4 was significantly shorter than that of other schemes. No direct correlation was found between subjective ratings and spontaneous eye-tracking behavior under the experimental conditions of this study; the depth of processing textual information was a key factor driving overall visual engagement. The research provides empirical evidence and design insights for the scientific application of color in spaces such as cultural heritage displays to optimize visual experience. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

27 pages, 2127 KB  
Article
Positive-Unlabeled Learning in Implicit Feedback from Data Missing-Not-At-Random Perspective
by Sichao Wang, Tianyu Xia and Lingxiao Yang
Entropy 2026, 28(1), 41; https://doi.org/10.3390/e28010041 - 29 Dec 2025
Viewed by 482
Abstract
The lack of explicit negative labels issue is a prevalent challenge in numerous domains, including CV, NLP, and Recommender Systems (RSs). To address this challenge, many negative sample completion methods are proposed, such as optimizing sample distribution through pseudo-negative sampling and confidence screening in CV, constructing reliable negative examples by leveraging textual semantics in NLP, and supplementing negative samples via sparsity analysis of user interaction behaviors and preference inference in RS for handling implicit feedback. However, most existing methods fail to adequately address the Missing-Not-At-Random (MNAR) nature of the data and the potential presence of unmeasured confounders, which compromise model robustness in practice. In this paper, we first formulate the prediction task in RS with implicit feedback as a positive-unlabeled (PU) learning problem. We then propose a two-phase debiasing framework consisting of exposure status imputation, followed by debiasing through the proposed doubly robust estimator. Moreover, our theoretical analysis shows that existing propensity-based approaches are biased in the presence of unmeasured confounders. To overcome this, we incorporate a robust deconfounding method in the debiasing phase to effectively mitigate the impact of unmeasured confounders. We conduct extensive experiments on three widely used real-world datasets to demonstrate the effectiveness and potential of the proposed methods. Full article
(This article belongs to the Special Issue Causal Inference in Recommender Systems)
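The two-phase idea in this abstract (impute, then correct) can be illustrated with a toy doubly robust (DR) estimator. This is a minimal sketch, not the authors' implementation: the function name `dr_estimate` and the four-pair example are hypothetical, and the sketch assumes the standard DR form, which averages the imputed error plus an inverse-propensity correction on observed pairs.

```python
import numpy as np

def dr_estimate(o, p, e_obs, e_imp):
    """Doubly robust estimate of the mean prediction error.

    o     : 0/1 exposure indicators (1 = interaction observed)
    p     : estimated exposure propensities P(o = 1)
    e_obs : prediction error, meaningful only where o == 1
    e_imp : imputed prediction error for every user-item pair
    """
    o, p = np.asarray(o, float), np.asarray(p, float)
    e_obs, e_imp = np.asarray(e_obs, float), np.asarray(e_imp, float)
    # Imputation term plus inverse-propensity correction on observed pairs.
    return float(np.mean(e_imp + o / p * (e_obs - e_imp)))

# "Doubly robust": if the imputation model is perfect, even a badly
# misspecified propensity model leaves the estimate exact.
true_err = np.array([0.2, 0.4, 0.6, 0.8])
o = np.array([1, 0, 1, 0])
bad_p = np.array([0.9, 0.5, 0.1, 0.5])            # wrong propensities
est = dr_estimate(o, bad_p, true_err * o, true_err)
# est equals the true mean error, 0.5, despite bad_p.
```

The correction term vanishes wherever the imputed error matches the observed error, which is why the estimator tolerates misspecification of either the imputation or the propensity model, though not both.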
27 pages, 478 KB  
Article
A Comparative Analysis of Woman Imagery in Imruʾ al-Qays’ Muʿallaqa and the Qurʾānic Depiction of al-Ḥūr al-ʿĪn
by Ahmed Ali Hussein Al-Ezzi, Soner Aksoy and Sakin Taş
Religions 2026, 17(1), 22; https://doi.org/10.3390/rel17010022 - 25 Dec 2025
Abstract
This study explores the Qurʾānic portrayal of al-ḥūr al-ʿīn in relation to pre-Islamic poetic traditions, with a particular focus on Imruʾ al-Qays’s Muʿallaqa—a foundational text in Arabic love poetry. It aims to examine how the Qurʾān reconfigures familiar expressions of female beauty—such as al-luʾluʾ al-maknūn, qāṣirātu al-ṭarf, kawāʿib atrāban, ʿuruban, and abkāran—within a spiritual and eschatological framework. The research problem centers on understanding the rhetorical and semantic shift from the sensual, body-centered depictions of women found in Imruʾ al-Qays’s verses to the morally elevated and symbolically charged representations presented in the Qurʾān. Using a comparative textual analysis method, the study draws on classical tafsīr literature and selected passages from the Muʿallaqa to trace the semantic transformation of key terms and metaphors. The findings demonstrate that while the Qurʾān retains the linguistic forms and imagery familiar to its audience—including poetic conventions of beauty from Imruʾ al-Qays—it redirects them toward a higher moral and theological purpose. Female beauty becomes not a site of fleeting desire but a symbol of divine reward, integrating physical perfection with spiritual purity. Ultimately, the research argues that the Qurʾān does not reject the aesthetic legacy of pre-Islamic poetry but absorbs and elevates it, establishing a new rhetorical paradigm grounded in revelation and ethical transcendence. This study encourages further comparative research between Qurʾānic discourse and early Arabic poetry to illuminate the cultural and expressive transformations shaped by Islam. Full article
23 pages, 3108 KB  
Article
Transformer-Based Memory Reverse Engineering for Malware Behavior Reconstruction
by Khaled Alrawashdeh
Computers 2026, 15(1), 8; https://doi.org/10.3390/computers15010008 - 24 Dec 2025
Abstract
Volatile memory provides the most direct and clear view into a system’s runtime behavior. Yet traditional forensic methods are error-prone and remain fragile against modern obfuscation and injection techniques. This paper introduces a textual-attention transformer framework that treats raw memory bytes as linguistic tokens, allowing the model to read memory as text and infer contextual relationships across disjoint regions. The proposed model aligns positional encodings with memory addresses and learns to associate scattered structures—such as injected stubs, PE headers, and decryption routines—within a unified semantic space. Experiments on two publicly verifiable datasets, CIC-MalMem-2022 (multi-class) and NIST CFReDS Basic Memory Images (binary), demonstrate that this approach reconstructs malware behavior with ≈97% accuracy, outperforming CNN and LSTM baselines. Attention heatmaps reveal interpretable forensic cues that identify malicious regions, bridging AI and digital forensics. The proposed concept of textual self-attention for memory opens a new paradigm in automated memory analysis—transforming volatile memory into a readable, interpretable sequence for malware behavior reconstruction. Full article
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)
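The bytes-as-tokens idea described in this abstract can be sketched as follows. This is a minimal illustration under assumptions, not the paper's model: the helper names (`tokenize`, `address_positional_encoding`), the toy embedding width, and the example addresses are all hypothetical. The key point it demonstrates is positional encodings indexed by absolute memory address rather than window offset, so two regions that are disjoint in the dump keep their true address relationship.

```python
import math

VOCAB_SIZE = 256          # one token per possible byte value
D_MODEL = 8               # toy embedding width for illustration

def tokenize(region: bytes) -> list[int]:
    """Map raw memory bytes to token ids (identity vocabulary, 0-255)."""
    return list(region)

def address_positional_encoding(base_addr: int, length: int,
                                d_model: int = D_MODEL) -> list[list[float]]:
    """Sinusoidal encodings indexed by absolute address, not window offset."""
    enc = []
    for addr in range(base_addr, base_addr + length):
        row = []
        for i in range(0, d_model, 2):
            angle = addr / (10000 ** (i / d_model))
            row.append(math.sin(angle))
            row.append(math.cos(angle))
        enc.append(row)
    return enc

# Two disjoint regions, e.g. a PE header and an injected stub, receive
# encodings consistent with their true (hypothetical) load addresses,
# even though they are far apart in the dump.
header = tokenize(b"MZ\x90\x00")                  # classic PE magic bytes
stub = tokenize(b"\xeb\xfe")                      # jmp $ (tight loop)
pe_header_enc = address_positional_encoding(0x00400000, len(header))
stub_enc = address_positional_encoding(0x7FFA0000, len(stub))
```

Because the encoding depends only on the address, the row for byte 0x400001 is identical whether it is computed inside a window starting at 0x400000 or as the first byte of its own window, which is what lets the model relate scattered structures.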