Search Results (5,961)

Search Parameters:
Keywords = semantic information

31 pages, 16965 KB  
Article
Visualising Relation Between Terminologies and HBIM Models for Historic Architecture
by Alberto Pettineo and Sandro Parrinello
Heritage 2026, 9(4), 140; https://doi.org/10.3390/heritage9040140 - 30 Mar 2026
Abstract
Moving beyond the limits of purely geometric or descriptive documentation, the study conceives the digital models as a structured information system capable of coherently and queryably organising both the formal-typological and the interpretative-historical dimensions of heritage. The methodology is developed within the framework of the European Horizon MSCA project Hephaestus, which investigates cross-border Cultural Heritage Routes (CHRs) and historic fortification systems in the Adriatic and Baltic basins. The paper focuses on Adriatic CHR, through the selection, organisation, and interrelation of a distributed corpus of fortified architectures, articulated according to historical phases, territorial clusters, typological classes, and multilevel relationships. The study adopts an approach centered on HBIM models and ontological frameworks, implemented through complementary top-down and bottom-up processes. The results show the possibility of structuring HBIM-derived data within an ontology-based framework capable of linking, within a single information system, architectural elements, fortified systems, and territorial entities across heterogeneous case studies. The application to differentiated contexts highlights the ability of the models to adapt to different scales and levels of complexity, supporting querying, comparison, and multi-level interpretation of heritage. The variety of sources and contexts enables the methodology to be tested across heterogeneous historical and typological scenarios, strengthening its applicability and robustness within a multiscalar information structure. Full article
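The ontology-based linking described above (architectural elements, fortified systems, and territorial entities connected by queryable relations) can be illustrated with a toy triple store. All entity and predicate names below are hypothetical; this is a sketch of pattern-based querying over heritage relations, not the project's actual ontology:

```python
# Hypothetical triples: (subject, predicate, object). Names are invented.
triples = {
    ("Fort_A", "partOf", "Adriatic_CHR"),
    ("Fort_B", "partOf", "Adriatic_CHR"),
    ("Bastion_1", "elementOf", "Fort_A"),
    ("Fort_A", "locatedIn", "Territorial_Cluster_1"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o) for (s, p, o) in sorted(triples)
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]
```

Multilevel relationships then fall out of chained pattern queries, e.g. first find the forts on a route, then the elements of each fort.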
42 pages, 899 KB  
Review
Bridging the Semantic Gap: A Review of Data Interoperability Challenges and Advanced Methodologies from BIM to LCA
by Yilong Jia, Peng Zhang and Qinjun Liu
Sustainability 2026, 18(7), 3352; https://doi.org/10.3390/su18073352 - 30 Mar 2026
Abstract
Building Information Modelling (BIM) offers a pivotal opportunity to automate Life Cycle Assessment (LCA) within the Architecture, Engineering, and Construction (AEC) industry. However, seamless integration is persistently hindered by a semantic gap, a critical misalignment between the object-oriented, geometric definitions of BIM and the process-based material data required by Life Cycle Inventory (LCI) databases. This paper presents a comprehensive review of data interoperability challenges and evaluates advanced methodologies designed to bridge this divide, moving beyond simple tool comparison to analyse structural integration barriers. Through a systematic review of 124 primary studies published between 2010 and 2025, this research inductively derives the BIM-LCA Interoperability Triad. This framework analyses causal dependencies across three dimensions, including Semantic and Ontological Structures, Workflow and Temporal Integration, and System Architecture and Interoperability. Furthermore, by establishing a comparative challenge–solution matrix, the analysis reveals a maturity paradox in current methodologies. While semi-automated commercial plugins dominate practice due to accessibility, they frequently function as opaque black boxes with limited transparency. Conversely, advanced approaches utilising Semantic Web technologies and Machine Learning demonstrate superior capability in resolving terminological mismatches but currently face significant barriers regarding infrastructure and expertise. This study contributes a novel theoretical model for understanding integration failures. It concludes that future research must pivot from static schema mapping towards AI-driven semantic healing, dynamic Digital Twins, and explicit system boundary harmonisation to achieve truly automated, context-aware environmental assessments and support whole-life circularity. Full article
17 pages, 1639 KB  
Article
SemantIC-Mamba: Enhancing Semantic Fidelity with Mamba-Based Global Context Modeling for 6G Communications
by Bora Yoon, Junghyun Kim and Hong-Yeop Song
Electronics 2026, 15(7), 1444; https://doi.org/10.3390/electronics15071444 - 30 Mar 2026
Abstract
This paper proposes semantic interference cancellation (SemantIC)-Mamba, a novel framework that enhances semantic reconstruction accuracy while stably maintaining channel decoding performance. The proposed model follows a turbo structure in which information is complementarily exchanged between the semantic domain and the signal domain, progressively refining the reconstruction quality through iterative processing. To effectively perform this iterative information refinement, the proposed framework adopts a semantic autoencoder composed of three key components: a Conv block that extracts local features, a Mamba block that efficiently models long-range dependencies to integrate global contextual information, and an UpConv block that restores low-resolution features to the original resolution. Experimental results demonstrate that SemantIC-Mamba consistently achieves improved PSNR and classification accuracy compared to conventional SemantIC and SemantIC++ while maintaining channel decoding performance at a level comparable to existing models. Full article
(This article belongs to the Special Issue Digital Signal Processing and Wireless Communication, Volume II)
21 pages, 18952 KB  
Article
Evaluating AI-Based Image Inpainting Techniques for Facial Components Restoration Using Semantic Masks
by Hussein Sharadga, Abdullah Hayajneh and Erchin Serpedin
AI 2026, 7(4), 119; https://doi.org/10.3390/ai7040119 - 30 Mar 2026
Abstract
This paper presents a comparative analysis of advanced AI-based techniques for human face inpainting using semantic masks that fully occlude targeted facial components. The primary objective is to evaluate the ability of image inpainting methods to accurately restore semantically meaningful facial features. Our results show that existing inpainting models face significant challenges when semantic masks completely obscure the underlying facial structures. In contrast to random masks, which leave partial visual cues, semantic masks remove all structural information, making reconstruction substantially more difficult. We assess the performance of generative adversarial networks (GANs), transformer-based models, and diffusion models in restoring fully occluded facial components. To address these challenges, we explore three retraining strategies: using semantic masks, using random masks, and a hybrid approach combining both. While the hybrid strategy leverages the complementary strengths of each mask type and improves contextual understanding, fully accurate reconstruction remains challenging. These findings demonstrate that inpainting under fully occluding semantic masks is a critical yet underexplored area, offering opportunities for developing new AI architectures and strategies for advanced facial reconstruction. Full article
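The fully occluding semantic masks discussed above can be mimicked with a few lines of array code: given a per-pixel label map, every pixel of a chosen class is blanked out, leaving no structural cue behind. This is a minimal illustration, not the authors' pipeline; the label values are made up:

```python
import numpy as np

def apply_semantic_mask(image, label_map, occlude_labels, fill=0):
    """Blank out every pixel belonging to the given semantic classes.

    Unlike a random mask, which leaves partial visual cues, this removes
    all structural information of a component (e.g. every pixel labelled
    as the nose class), which is what makes reconstruction hard.
    """
    out = image.copy()
    mask = np.isin(label_map, list(occlude_labels))
    out[mask] = fill
    return out, mask
```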
22 pages, 34338 KB  
Article
DSNet: Dynamic Segmentation Revolution for Remaining Useful Life Prediction in Mixed-Model Production
by Mingda Chen, Ruiyun Yu, Zhipeng Li and Peng Yang
Electronics 2026, 15(7), 1438; https://doi.org/10.3390/electronics15071438 - 30 Mar 2026
Abstract
Remaining useful life (RUL) prediction is essential for ensuring equipment reliability in smart manufacturing. However, mixed-model production introduces a significant challenge due to the discrepancy between the continuous nature of latent degradation and the abrupt, discrete transitions observed in sensor signals. These transitions are driven by the stochastic sequencing of product variants, which obscures the true health state of the equipment. Traditional RUL models are primarily designed for continuous and coherent evolutionary patterns, and consequently, they struggle to distinguish these observable, event-driven jumps from the hidden, underlying degradation trajectories. To resolve this, we propose the Dynamic Segmentation Network (DSNet), a framework designed to synchronize with discrete production rhythms while preserving the continuity of latent health indicators. Specifically, a segmentation loss integrating Proxy-NCA and information entropy is developed to guide the model in discerning discrete process boundaries and achieving semantically consistent partitioning. Furthermore, a hybrid encoding scheme integrates absolute and rotary positional information to capture multi-granularity temporal dependencies, which effectively bridges global degradation trends with local intra-segment structures. These innovations empower DSNet to extract highly discriminative features that are robust to process-induced fluctuations, thereby significantly enhancing RUL prediction performance. Extensive evaluations on 53 industrial welding guns from Bayerische Motoren Werke (BMW) Shenyang plants demonstrate that DSNet achieves reductions in MAE and RMSE by 12.29% and 10.66%, respectively. Consistent performance gains across three public benchmarks further validate the framework’s exceptional generalizability and robustness. Full article
(This article belongs to the Special Issue Intelligent Sensing Empowered by Artificial Intelligence)
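The Proxy-NCA component of DSNet's segmentation loss has a compact closed form: each embedding is scored against one proxy per class via a softmax over negative squared distances. The sketch below shows only that term (the paper additionally combines it with an information-entropy term), in plain NumPy with made-up shapes:

```python
import numpy as np

def proxy_nca_loss(embeddings, labels, proxies):
    """Proxy-NCA (sketch): pull each embedding toward its class proxy,
    push it away from every other proxy.

    embeddings: (N, d), labels: length-N class indices, proxies: (C, d).
    """
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    p = proxies / np.linalg.norm(proxies, axis=1, keepdims=True)
    sq_dist = ((e[:, None, :] - p[None, :, :]) ** 2).sum(axis=-1)  # (N, C)
    logits = -sq_dist
    # Negative log-likelihood of the correct proxy under the softmax.
    log_z = np.log(np.exp(logits).sum(axis=1))
    nll = log_z - logits[np.arange(len(labels)), labels]
    return float(nll.mean())
```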
17 pages, 1639 KB  
Article
Cascade Registration and Fusion for Unaligned Infrared and Visible Images in Autonomous Driving
by Long Xiao, Yidong Xie and Chengda Yao
Electronics 2026, 15(7), 1427; https://doi.org/10.3390/electronics15071427 - 30 Mar 2026
Abstract
Infrared and visible image fusion is a critical technology for enhancing the all-weather perception capabilities of autonomous driving systems. However, the inherent physical parallax of vehicle-mounted sensors combined with motion-induced vibrations makes it difficult to achieve strict alignment between the source images. Direct fusion of such misaligned pairs leads to ghosting artifacts, which significantly compromises driving safety. To address this challenge, this paper proposes a cascaded deep fusion framework tailored for autonomous driving scenarios. A dual-modal perception dataset is first constructed, incorporating realistic physical parallax and non-rigid deformations. Subsequently, a decoupled strategy is established, characterized by geometric correction followed by semantic fusion: the Static-Feature Recursive Registration (SFRR) network is utilized to explicitly correct the spatial misalignments caused by parallax, thereby establishing geometric consistency; then, the Hierarchical Invertible Block Fusion (HIBF) network achieves lossless integration of cross-modal features by combining spatial frequency separation with invertible interaction techniques. Experimental results demonstrate that the proposed method outperforms representative algorithms across several metrics, including Mutual Information (MI), Visual Information Fidelity (VIF), Structural Similarity (SSIM), and Correlation Coefficient (CC), producing high-quality fused images with clear structural definitions. Full article
24 pages, 4909 KB  
Article
UniTriM: Unified Text–Image–Video Retrieval via Multi-Granular Alignment and Feature Disentanglement
by Yangchen Wang, Yan Hua, Yingyun Yang and Wenhui Zhang
Electronics 2026, 15(7), 1424; https://doi.org/10.3390/electronics15071424 - 30 Mar 2026
Abstract
With the proliferation of multimodal content on social media, creators increasingly require tools that can retrieve both images and videos relevant to a single textual query. However, existing cross-modal retrieval methods are typically confined to binary (text–image or text–video) settings and struggle with fine-grained semantic alignment and spatiotemporal information imbalance. To address this issue, we propose UniTriM, a unified framework for text–image–video joint retrieval. First, UniTriM supports concurrent retrieval of semantically relevant images and videos given one textual input. To overcome the scarcity of text–image–video triplet data, we introduce a self-attention-based keyframe selection strategy that converts existing text–video datasets into triplet format. Second, we design a multi-granularity similarity alignment module that captures hierarchical semantics by modeling patch–frame–video and word–triple–sentence structures and jointly optimizes intra- and cross-granularity alignments to enhance fine-grained cross-modal correspondence. Third, to alleviate the inherent spatiotemporal information imbalance between static images and video-aligned text descriptions, we introduce a feature disentanglement module that disentangles spatial-related features from text and aligns them explicitly with image representations. Experiments conducted on three benchmark datasets MSR-VTT, MSVD, and DiDeMo demonstrate that UniTriM achieves state-of-the-art performance on joint retrieval tasks. Full article
(This article belongs to the Section Artificial Intelligence)
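A keyframe selector in the spirit of the self-attention strategy above can be approximated by scoring each frame by the attention mass it receives from all other frames. This is a rough stand-in using cosine-similarity attention, not the learned module from the paper:

```python
import numpy as np

def select_keyframe(frames, temperature=1.0):
    """Pick the frame that receives the most self-attention mass.

    frames: (T, d) array of per-frame feature vectors (assumed precomputed).
    Returns the index of the most 'central' frame, a cheap stand-in for a
    learned keyframe selector.
    """
    f = frames / (np.linalg.norm(frames, axis=1, keepdims=True) + 1e-8)
    logits = f @ f.T / temperature            # (T, T) pairwise similarities
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)   # row-wise softmax
    # Column sums measure how much attention each frame receives.
    return int(attn.sum(axis=0).argmax())
```

The selected frame then pairs with the video's caption to form a synthetic text-image-video triplet.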
29 pages, 6898 KB  
Article
MDE-UNet: A Physically Guided Asymmetric Fusion Network for Multi-Source Meteorological Data Lightning Identification
by Yihua Chen, Yuanpeng Han, Yujian Zhang, Yi Liu, Lin Song, Jialei Wang, Xinjue Wang and Qilin Zhang
Remote Sens. 2026, 18(7), 1027; https://doi.org/10.3390/rs18071027 - 29 Mar 2026
Abstract
Utilizing multi-source meteorological data for lightning identification is crucial for monitoring severe convective weather. However, several key challenges persist in this field: dimensional imbalance and modal competition among multi-source heterogeneous data, model training bias caused by the extreme sparsity of lightning samples, and an imbalance between false alarms and missed detections resulting from complex background noise. To address these challenges, this paper proposes a lightning identification network guided by physical priors and constrained by supervision. First, to tackle the issue of modal competition in fusing satellite (high-dimensional) and radar (low-dimensional) data, a physical prior-guided asymmetric radar information enhancement mechanism is introduced. This mechanism uses radar physical features as contextual guidance to selectively enhance the latent weak radar signatures. Second, at the architectural level, a multi-source multi-scale feature fusion module and a weighted sliding window–multilayer perceptron (MLP) enhanced decoding unit are constructed. The former achieves the coupling of multi-scale physical features at a 2 km grid scale through cross-level semantic alignment, building a highly consistent feature field that effectively improves the model’s ability to detect lightning signals. The latter leverages adaptive receptive fields and the nonlinear modeling capability of MLPs to effectively smooth spatially discrete noise, ensuring spatial continuity in the reconstructed results. Finally, to address the model bias caused by severe class imbalance between positive and negative samples—resulting from the extreme sparsity of lightning events—an asymmetrically weighted BCE-DICE loss function is designed. Its “asymmetric” characteristic is implemented by assigning different penalty weights to false-positive and false-negative predictions. This loss function balances pixel-level accuracy and inter-class equilibrium while imposing high-weight penalties on false-positive predictions, achieving synergistic optimization of feature enhancement and directional suppression. Experimental results show that the proposed method effectively increases the hit rate while substantially reducing the false alarm rate, enabling efficient utilization of multi-source data and high-precision identification of lightning strike areas. Full article
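The asymmetrically weighted BCE-DICE idea is easy to make concrete: the binary cross-entropy term gets separate penalty weights for false positives and false negatives, and a soft Dice term handles region-level class imbalance. The weights and mixing factor below are illustrative, not the paper's values:

```python
import numpy as np

def asymmetric_bce_dice(pred, target, w_fp=4.0, w_fn=1.0, dice_weight=0.5, eps=1e-7):
    """Asymmetrically weighted BCE-Dice loss (illustrative sketch).

    pred: predicted probabilities in (0, 1); target: binary labels.
    w_fp penalises false positives (target 0, pred high) more heavily
    than w_fn penalises false negatives, suppressing false alarms.
    """
    pred = np.clip(pred, eps, 1 - eps)
    # Per-pixel BCE with a different penalty weight on each error direction.
    bce = -(w_fn * target * np.log(pred) + w_fp * (1 - target) * np.log(1 - pred))
    # Soft Dice term addresses class imbalance at the region level.
    inter = (pred * target).sum()
    dice = 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    return (1 - dice_weight) * bce.mean() + dice_weight * dice
```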
23 pages, 10440 KB  
Article
MIFMNet: A Multimodal Interactions and Fusion Mamba for RGBT Tracking with UAV Platforms
by Runze Guo, Xiaoyong Sun, Bei Sun, Hanxiang Qian, Zhaoyang Dang, Peida Zhou, Feiyang Liu and Shaojing Su
Remote Sens. 2026, 18(7), 1026; https://doi.org/10.3390/rs18071026 - 29 Mar 2026
Abstract
RGBT tracking holds irreplaceable value in unmanned aerial vehicle (UAV) ground observation missions, effectively supporting scenarios such as nighttime monitoring and low-altitude reconnaissance. However, existing frameworks based on CNNs or Transformers face inherent trade-offs between interaction capabilities and computational efficiency. Furthermore, current methods perform poorly in challenging scenarios involving target scale variations and rapid motion from UAV perspectives. To address these issues, this paper proposes a novel multimodal interaction and fusion Mamba network (MIFMNet), which achieves fundamental innovations relative to existing RGB-T fusion trackers and recent Mamba-based tracking methods. Different from existing RGB-T trackers that rely on CNN’s local convolution or Transformer’s quadratic-complexity self-attention for cross-modal fusion, MIFMNet departs from these architectures and designs modality-adaptive interaction mechanisms based on Mamba, fully leveraging the complementary information while resolving the efficiency-accuracy trade-off. Specifically, this paper designs the scale differential enhanced Mamba (SDEM), which expands the receptive field through multiscale parallel convolutions while amplifying complementary information via differential strategies to enhance feature responses to scale-varying objects. Furthermore, we propose flow-guided multilayer interaction Mamba (FMIM), which integrates inter-frame motion information into scanning prediction. This enables the network to adaptively adjust interaction priorities between shallow texture and high-level semantic features based on motion intensity, mitigating early information forgetting and enhancing robustness in dynamic scenes. Extensive experiments on four major benchmarks demonstrate that MIFMNet achieves state-of-the-art performance on precision and success rate, particularly excelling in UAV scenarios involving occlusion, scale variations, and rapid motion. Simultaneously, it achieves an inference speed of 35.3 FPS, enabling efficient deployment on resource-constrained platforms, thereby providing robust support for UAV applications of RGBT tracking. Full article
(This article belongs to the Section Remote Sensing Image Processing)
15 pages, 2837 KB  
Article
Expectation Violation Influences Neural Responses to the Accessibility of Cognitions Related to Suicide and Life: A Simultaneous EEG-fNIRS Study
by Liu Bo, Wu Yuntena, Jin Tonglin and Lei Zeyu
Brain Sci. 2026, 16(4), 367; https://doi.org/10.3390/brainsci16040367 - 28 Mar 2026
Abstract
Background/Objectives: Increased accessibility of suicidal cognitions reflects the cognitive processes underlying the acquisition of suicidal thoughts. Previous research shows that expectation violation reduces the accessibility of life cognitions rather than increasing that of suicidal cognitions, but this may be due to a slowing effect masking an increase in suicidal cognitions. Methods: Beyond the reaction time task, the present study used simultaneous EEG-fNIRS to reveal how expectation violation differentially affects the accessibility of suicidal and life cognitions. In a trial-by-trial cognitive task, participants read sentences that were either semantically consistent (expectation confirmation) or anomalous (expectation violation), followed by a semantic judgment on suicide-related, neutral, and life-related words. Response times for each word type served as a measure of cognitive accessibility for that category. Results: Compared to expectation confirmation, expectation violation reduced the cognitive accessibility of life rather than increasing that of suicide in the reaction time task. However, in neural responses, it led to reduced N1 amplitude, increased P2 amplitude for suicide-related information, and greater hemodynamic response in the left frontopolar region. Conclusions: Expectation violation triggered distinct neural responses to suicidal information, reflecting an attentional bias that may explain how suicidal thoughts emerge within normative cognition. Full article
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)
34 pages, 393 KB  
Article
Symmetry-Aware Dual-Encoder Architecture for Context-Aware Grammatical Error Correction in Chinese Learner English: Toward a Spaced-Repetition Instructional Structure Sensitive to Individual Differences
by Jun Tian
Symmetry 2026, 18(4), 579; https://doi.org/10.3390/sym18040579 - 28 Mar 2026
Abstract
Grammatical error correction (GEC) for Chinese learner English is still dominated by sentence-level modeling, which limits discourse-level consistency and weakens adaptation to learner-specific error profiles. From an instructional perspective, these limitations also reduce the value of automated feedback as a basis for spaced-repetition instructional structures sensitive to individual differences. This study proposes a symmetry-aware dual-encoder architecture for context-aware GEC in Chinese learner English. A context encoder captures preceding-sentence information, while a source encoder integrates BERT-based semantic representations with Bi-GRU-based syntactic features for the current sentence. A gated decoder performs asymmetric fusion of local and contextual evidence. To better reflect corpus-level tendencies in Chinese learner English, a CLEC-informed augmentation strategy generates synthetic errors using empirical category frequencies as a coarse sampling prior. Experiments on CoNLL-2014, JFLEG, and CLEC show consistent improvements over strong neural baselines in F0.5 and GLEU under the current desktop-oriented implementation setting. Nevertheless, the integration of BERT, dual encoders, and gated decoding introduces non-negligible computational overhead, and the present system is therefore better suited to desktop writing-support scenarios than to strict real-time or large-scale online deployment. The proposed framework thus provides a practical technical basis for personalized grammar feedback and for future spaced-repetition instructional designs in ESL writing support. Full article
(This article belongs to the Special Issue Symmetry and Asymmetry in Natural Language Processing)
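The gated decoder's asymmetric fusion step can be sketched as a sigmoid gate, computed from both encodings, that interpolates per dimension between local (source) and contextual evidence. Shapes and the random weight matrix below are placeholders, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_fusion(h_src, h_ctx, W, b):
    """Gated fusion of source and context encodings (sketch).

    A sigmoid gate, conditioned on both representations, decides per
    dimension how much local versus contextual evidence reaches the
    decoder, so each fused value lies between the two inputs.
    """
    z = np.concatenate([h_src, h_ctx])
    gate = 1.0 / (1.0 + np.exp(-(W @ z + b)))   # elementwise, in (0, 1)
    return gate * h_src + (1.0 - gate) * h_ctx

d = 4
W = rng.normal(size=(d, 2 * d))
b = np.zeros(d)
h_src = rng.normal(size=d)   # stand-in for BERT + Bi-GRU source encoding
h_ctx = rng.normal(size=d)   # stand-in for preceding-sentence encoding
fused = gated_fusion(h_src, h_ctx, W, b)
```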
19 pages, 1666 KB  
Article
MTLL: A Novel Multi-Task Learning Approach for Lymphocytic Leukemia Classification and Nucleus Segmentation
by Cuisi Ou, Zhigang Hu, Xinzheng Wang, Kaiwen Cao and Yipei Wang
Electronics 2026, 15(7), 1419; https://doi.org/10.3390/electronics15071419 - 28 Mar 2026
Abstract
Bone marrow cell classification and nucleus segmentation in microscopic images are fundamental tasks for computer-aided diagnosis of lymphocytic leukemia. However, bone marrow cells from different subtypes exhibit high morphological similarity, and structural information is often constrained under optical microscopic imaging, posing challenges for stable and effective feature representation. To address this issue, we propose MTLL (Multitask Model on Lymphocytic Leukemia), a novel multitask approach that performs cell classification and nucleus segmentation within a unified network to exploit their complementary information. The model constructs a hybrid backbone for shared feature representation based on a CNN-Transformer architecture, in which Fuse-MBConv modules are tightly integrated with multilayer multi-scale transformers to enable deep fusion of local texture and global semantic information. For the segmentation branch, we design an AM (Atrous Multilayer Perceptron) decoder that combines atrous spatial pyramid pooling with multilayer perceptrons to fuse multi-scale information and accurately delineate nucleus boundaries. The classification branch incorporates prior knowledge of cell nuclei structures to capture subtle variations in cellular morphology and texture, thereby enhancing the model’s ability to distinguish between leukemia subtypes. Experimental results demonstrate that the MTLL model significantly outperforms existing advanced single-task and multi-task models in both lymphocytic leukemia classification and cell nucleus segmentation. These results validate the effectiveness of the multi-task feature-sharing strategy for lymphocytic leukemia diagnosis using bone marrow microscopic images. Full article
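The atrous (dilated) filtering underlying the AM decoder's spatial pyramid pooling is simple to show in one dimension: kernel taps are spaced `rate` samples apart, enlarging the receptive field without adding parameters. A minimal sketch, unrelated to the paper's actual decoder:

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1-D atrous (dilated) convolution, 'valid' padding.

    The kernel taps are placed `rate` samples apart, so a k-tap kernel
    spans (k - 1) * rate + 1 input samples: a wider receptive field for
    the same parameter count. Stacking several rates gives a pyramid.
    """
    k = len(kernel)
    span = (k - 1) * rate + 1
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out
```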
37 pages, 6776 KB  
Article
Semantic Mapping and Cross-Model Data Integration in BIM: A Lightweight and Scalable Schedule-Level Workflow
by Tianjiao Zhao and Ri Na
Buildings 2026, 16(7), 1347; https://doi.org/10.3390/buildings16071347 - 28 Mar 2026
Abstract
Despite the widespread adoption of BIM, information exchange across disciplines remains hindered by heterogeneous structures at the tabular data level, particularly when integrating data across multiple discipline-specific models. Manual mapping, rigid templates, or one-off programming scripts are labor-intensive and difficult to scale, limiting automated querying, cross-model aggregation, and schedule-level analytics. This study proposes a lightweight, workflow-driven approach for semantic normalization and cross-model integration of BIM schedule data, with optional script-supported workflow configuration used only to assist the configuration of deterministic, rule-guided mapping logic, rather than serving as a core analytical method. By introducing a customizable subcategory layer, the workflow enables fine-grained semantic alignment and efficient normalization across diverse schedule datasets, implemented through lightweight Python scripting and rule-guided semantic matching used solely as a supporting mechanism for deterministic field mapping. Using structural, architectural, and HVAC models, we demonstrate a stepwise process including data cleaning, hierarchical classification, consistency checking, batch analytics, and automated computation of cross-model metrics such as opening-to-wall ratios. Sample-based validation confirms the workflow’s reliability, achieving semantic mapping agreement rates above 95% and reducing manual processing time by more than 85%. The workflow is readily extensible to other disciplines and modeling conventions, supporting high-throughput data integration for tasks such as design coordination, semantic alignment, RFI reduction, accelerated design reviews, and data-driven decision making. 
Overall, rather than introducing a new algorithm, the contribution of this work lies in formalizing a reusable, schedule-level workflow abstraction that enables consistent semantic alignment and automated cross-model aggregation without relying on rigid ontologies or training-intensive learning-based models. Any optional tooling used during workflow configuration is auxiliary and does not constitute a standalone learning-based method requiring model training or performance benchmarking. This provides a reusable methodological foundation for scalable, schedule-level BIM data integration and cross-model analytics. Full article
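The deterministic, rule-guided field mapping with a customizable subcategory layer described in this abstract can be sketched as follows. This is a minimal illustrative sketch only: the field names, regex rules, and the `FIELD_RULES`/`SUBCATEGORY_RULES` tables are assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch of rule-guided schedule-header normalization with a
# subcategory refinement layer. All rules and names here are illustrative.
import re

# Deterministic mapping rules: canonical field name -> regex patterns that
# match discipline-specific column headers found in exported schedules.
FIELD_RULES = {
    "element_type": [r"(?i)^type$", r"(?i)family.*type"],
    "level":        [r"(?i)^level$", r"(?i)storey"],
    "area_m2":      [r"(?i)area", r"(?i)surface"],
}

# Customizable subcategory layer: refine broad categories into finer classes.
SUBCATEGORY_RULES = {
    "Wall": [("ExteriorWall", r"(?i)exterior|ext\."),
             ("InteriorWall", r"(?i)interior|int\.")],
}

def normalize_headers(headers):
    """Map raw schedule headers onto canonical field names; unmapped
    headers are returned unchanged so nothing is silently dropped."""
    mapped = {}
    for h in headers:
        for canon, patterns in FIELD_RULES.items():
            if any(re.search(p, h) for p in patterns):
                mapped[h] = canon
                break
        else:
            mapped[h] = h
    return mapped

def subcategory(category, name):
    """Assign a fine-grained subcategory from an element's name,
    falling back to the broad category when no rule matches."""
    for sub, pattern in SUBCATEGORY_RULES.get(category, []):
        if re.search(pattern, name):
            return sub
    return category

print(normalize_headers(["Family and Type", "Level", "Area"]))
print(subcategory("Wall", "Basic Wall: Exterior - Brick"))
```

Because every rule is an explicit regex rather than a learned model, the mapping is reproducible and auditable, which matches the abstract's emphasis on deterministic logic over training-intensive methods.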
20 pages, 1191 KB  
Article
Bridging the Semantic Gap in 5G: A Hybrid RAG Framework for Dual-Domain Understanding of O-RAN Standards and srsRAN Implementation
by Yedil Nurakhov, Nurislam Kassymbek, Duman Marlambekov, Aksultan Mukhanbet and Timur Imankulov
Appl. Sci. 2026, 16(7), 3275; https://doi.org/10.3390/app16073275 - 28 Mar 2026
Abstract
The rapid evolution of the Open Radio Access Network (O-RAN) architecture and the exponential growth in specification complexity create significant barriers for researchers translating 5G standards into practical implementations. Existing evaluation frameworks for large language models, such as ORAN-Bench-13K, focus predominantly on the theoretical comprehension of regulatory documents while neglecting the critical aspect of software execution. This disparity results in a profound semantic gap, defined here as the structural and conceptual misalignment between abstract normative requirements and their concrete realization in the source code of open platforms like srsRAN. To bridge this divide and enable advanced cognitive reasoning, this paper presents a Hybrid Retrieval-Augmented Generation (RAG) framework designed to unify two heterogeneous knowledge domains: the O-RAN/3GPP specification corpus and the srsRAN C++ codebase. The proposed architecture leverages a hierarchical Parent–Child Chunking strategy to preserve the structural integrity of complex code and normative protocols. Additionally, it introduces a probabilistic Semantic Query Routing mechanism that dynamically selects the relevant context domain based on query intent. This routing actively mitigates semantic interference—a phenomenon where merging conflicting cross-domain terminology introduces informational noise, which our baseline tests showed degrades response accuracy by 4.7%. Empirical evaluation demonstrates that the hybrid approach successfully overcomes this, achieving an overall accuracy of 76.70% and outperforming the standard RAG baseline of 72.00%. Furthermore, system performance analysis reveals that effective context filtering reduces the average response generation latency to 3.47 s, compared to 3.73 s for traditional RAG methods, rendering the framework highly suitable for real-time telecommunications engineering tasks. Full article
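The Semantic Query Routing mechanism described above can be sketched in miniature. This is a hedged illustration, assuming a simple cue-word scorer; the actual framework would presumably use embedding-based intent classification, and the domain names and cue lists below are assumptions, not taken from the paper.

```python
# Illustrative sketch of probabilistic query routing between two knowledge
# domains (specification corpus vs. codebase). Cue sets are hypothetical.
CODE_CUES = {"srsran", "c++", "function", "class", "implementation", "compile"}
SPEC_CUES = {"o-ran", "3gpp", "specification", "interface", "requirement", "clause"}

def route(query, threshold=0.65):
    """Score the query against each domain's cues and route it to the
    dominant domain; fall back to retrieving from both domains when
    confidence is low. Routing to a single domain avoids the semantic
    interference that arises from always merging cross-domain contexts."""
    tokens = set(query.lower().split())
    code = len(tokens & CODE_CUES)
    spec = len(tokens & SPEC_CUES)
    total = code + spec
    if total == 0:
        return "both"
    p_code = code / total
    if p_code >= threshold:
        return "code"
    if p_code <= 1 - threshold:
        return "spec"
    return "both"

print(route("How does srsRAN compile the scheduler"))
print(route("Which 3GPP clause defines the interface"))
```

The `threshold` parameter trades recall (retrieving from both domains) against precision (filtering to one domain); the abstract's latency result suggests that such filtering also shrinks the retrieved context and speeds up generation.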
(This article belongs to the Section Computing and Artificial Intelligence)
19 pages, 1357 KB  
Article
Clinically Aligned Long-Context Transformers for Cross-Platform Mental Health Risk Detection
by Aditya Tekale and Mohammad Masum
Electronics 2026, 15(7), 1403; https://doi.org/10.3390/electronics15071403 - 27 Mar 2026
Abstract
Social media platforms contain rich but noisy narratives of psychological distress, creating opportunities for early mental health risk detection. However, existing datasets capture heterogeneous constructs such as suicide risk severity, depression diagnosis, and DSM-5 symptom presence, and most prior models are trained and evaluated on a single corpus, limiting their clinical alignment and cross-dataset generalizability. In this study, we fine-tune a domain-specific long-document transformer, AIMH/Mental-Longformer-base-4096, for binary mental health risk detection (risk vs. no risk) using two clinically aligned Reddit datasets: the C-SSRS Reddit corpus and the eRisk 2025 depression dataset. To handle long user histories, we introduce an LLM-based summarization pipeline that compresses posts exceeding 2000 tokens while preserving mental health-relevant information. We also conduct a seven-configuration ablation study across combinations of three corpora (C-SSRS, eRisk, and ReDSM5) to examine how dataset semantics influence model performance. On a held-out C-SSRS + eRisk test set (n = 279), the proposed model achieves a mean balanced accuracy of 0.89 ± 0.01 across five random seeds, with a best run of 0.90 and a 5.74 percentage point improvement over the strongest baseline (TF-IDF + Random Forest). The model also shows strong cross-platform generalization, achieving BA = 0.78 on the depression-reddit-cleaned dataset (n = 7731) and BA = 0.85 (ROC-AUC = 0.92) on a Twitter suicidal-intention dataset (n = 9119) without additional fine-tuning. The ablation analysis shows that although a three-dataset configuration (C-SSRS + eRisk + ReDSM5) maximizes aggregate performance, the ReDSM5 labels encode symptom presence rather than clinical risk, creating a semantic mismatch. This finding highlights the importance of label compatibility when combining heterogeneous mental health corpora. 
Explainability analysis using Integrated Gradients and attention visualization shows that the model focuses on clinically meaningful expressions such as therapy references, diagnosis, and hopelessness rather than isolated keywords. These results demonstrate that clinically aligned long-context transformers can provide accurate and interpretable mental health risk detection from social media while emphasizing the critical role of dataset semantics in multi-corpus training. Full article
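The long-history handling described above (compressing posts that exceed 2000 tokens before they reach the Longformer) can be sketched as a simple gating step. This is a minimal sketch under stated assumptions: `summarize` is a hypothetical stand-in for the actual LLM summarization call, and whitespace splitting stands in for the real tokenizer.

```python
# Hedged sketch of token-length-gated summarization for long user histories.
# `summarize` is a placeholder for the LLM call described in the abstract.

def summarize(text, max_tokens=512):
    # Placeholder: a real pipeline would prompt an LLM to compress the text
    # while preserving mental-health-relevant information.
    return " ".join(text.split()[:max_tokens])

def prepare_history(posts, limit=2000):
    """Concatenate a user's posts, compressing the history only when it
    exceeds the model's practical input budget (2000 tokens here)."""
    history = " ".join(posts)
    n_tokens = len(history.split())  # crude whitespace tokenization
    if n_tokens > limit:
        return summarize(history)
    return history

short = prepare_history(["I feel better after therapy."])
long = prepare_history(["word"] * 3000)
print(len(short.split()), len(long.split()))
```

Gating on length means short histories pass through untouched, so the summarizer only risks information loss on inputs that would otherwise be truncated by the model's context window anyway.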
(This article belongs to the Special Issue Role of Artificial Intelligence in Natural Language Processing)