Search Results (11,003)

Search Parameters:
Keywords = semantic

20 pages, 7030 KiB  
Article
Integrating HBIM and GIS Through Object-Relational Databases for the Conservation of Rammed Earth Heritage: A Multiscale Approach
by F. Javier Chorro-Domínguez, Paula Redweik and José Juan Sanjosé-Blasco
Heritage 2025, 8(8), 336; https://doi.org/10.3390/heritage8080336 - 16 Aug 2025
Abstract
Historic earthen architecture—particularly rammed earth—is underrepresented in digital heritage initiatives despite its widespread historical use and vulnerability to degradation. This paper presents a novel methodology for integrating semantic, geometric, and geospatial information from earthen heritage into a unified digital environment, bridging Heritage Building Information Modeling (HBIM) and Geographic Information Systems (GIS) through an object-relational database. The proposed workflow enables automated and bidirectional data exchange between Revit (via Dynamo scripts) and open-source GIS tools (QGIS and PostgreSQL/PostGIS), supporting semantic alignment and spatial coherence. The method was tested on seven fortified rammed-earth sites in the southwestern Iberian Peninsula, chosen for their typological and territorial diversity. Results demonstrate the feasibility of multiscale documentation and analysis, supported by a structured database populated with geometric, semantic, diagnostic, and environmental information, enabling enriched interpretations of construction techniques, material variability, and conservation status. The approach also facilitates the integration of HBIM datasets into broader territorial management frameworks. This work contributes to the development of scalable, open-source digital tools tailored to vernacular heritage, offering a replicable strategy for bridging the gap between building-scale and landscape-scale documentation in cultural heritage management.
(This article belongs to the Section Architectural Heritage)
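The database side of such a workflow can be pictured with a small sketch: one PostGIS table carrying both semantic attributes and geometry for an HBIM element, so the same record is reachable from the Revit/Dynamo side and from QGIS. All table, column, and CRS choices below are illustrative assumptions, not the paper's schema.

```python
# Hypothetical sketch: storing an HBIM element with semantic attributes and a
# PostGIS geometry. Table/column names and the CRS are invented for illustration.
import psycopg2

DDL = """
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE TABLE IF NOT EXISTS hbim_element (
    id          SERIAL PRIMARY KEY,
    revit_guid  TEXT UNIQUE,              -- link back to the Revit/Dynamo side
    material    TEXT,                     -- e.g. 'rammed earth'
    condition   TEXT,                     -- diagnostic state
    footprint   geometry(Polygon, 25829)  -- ETRS89 / UTM 29N (assumed CRS)
);
"""

def upsert_element(conn, guid, material, condition, wkt_polygon):
    """Insert or update one element; geometry arrives as WKT from the BIM export."""
    with conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO hbim_element (revit_guid, material, condition, footprint)
            VALUES (%s, %s, %s, ST_GeomFromText(%s, 25829))
            ON CONFLICT (revit_guid) DO UPDATE
            SET material = EXCLUDED.material,
                condition = EXCLUDED.condition,
                footprint = EXCLUDED.footprint;
            """,
            (guid, material, condition, wkt_polygon),
        )
    conn.commit()

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=heritage user=postgres")  # placeholder DSN
    with conn.cursor() as cur:
        cur.execute(DDL)
    conn.commit()
    upsert_element(conn, "a1b2-c3", "rammed earth", "surface erosion",
                   "POLYGON((0 0, 10 0, 10 2, 0 2, 0 0))")
```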
22 pages, 5692 KiB  
Article
RiceStageSeg: A Multimodal Benchmark Dataset for Semantic Segmentation of Rice Growth Stages
by Jianping Zhang, Tailai Chen, Yizhe Li, Qi Meng, Yanying Chen, Jie Deng and Enhong Sun
Remote Sens. 2025, 17(16), 2858; https://doi.org/10.3390/rs17162858 - 16 Aug 2025
Abstract
The accurate identification of rice growth stages is critical for precision agriculture, crop management, and yield estimation. Remote sensing technologies, particularly multimodal approaches that integrate high spatial and hyperspectral resolution imagery, have demonstrated great potential in large-scale crop monitoring. Multimodal data fusion offers complementary and enriched spectral–spatial information, providing novel pathways for crop growth stage recognition in complex agricultural scenarios. However, the lack of publicly available multimodal datasets specifically designed for rice growth stage identification remains a significant bottleneck that limits the development and evaluation of relevant methods. To address this gap, we present RiceStageSeg, a multimodal benchmark dataset captured by unmanned aerial vehicles (UAVs), designed to support the development and assessment of segmentation models for rice growth monitoring. RiceStageSeg contains paired centimeter-level RGB and 10-band multispectral (MS) images acquired during several critical rice growth stages, including jointing and heading. Each image is accompanied by fine-grained, pixel-level annotations that distinguish between the different growth stages. We establish baseline experiments using several state-of-the-art semantic segmentation models under both unimodal (RGB-only, MS-only) and multimodal (RGB + MS fusion) settings. The experimental results demonstrate that multimodal feature-level fusion outperforms unimodal approaches in segmentation accuracy. RiceStageSeg offers a standardized benchmark to advance future research in multimodal semantic segmentation for agricultural remote sensing. The dataset (v0.11.0) will be made publicly available on GitHub.
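A minimal sketch of what "feature-level RGB + MS fusion" can mean in practice: two small encoders whose feature maps are concatenated before a per-pixel classification head. The toy network below is an assumption for illustration, not one of the paper's baselines.

```python
# Minimal PyTorch sketch of feature-level RGB + multispectral fusion for
# semantic segmentation; layer sizes and the concat design are illustrative.
import torch
import torch.nn as nn

class TinyFusionSeg(nn.Module):
    def __init__(self, n_classes: int, ms_bands: int = 10):
        super().__init__()
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.ms_enc  = nn.Sequential(nn.Conv2d(ms_bands, 16, 3, padding=1), nn.ReLU())
        # Fuse by channel concatenation, then predict per-pixel class logits.
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, rgb, ms):
        f = torch.cat([self.rgb_enc(rgb), self.ms_enc(ms)], dim=1)
        return self.head(f)

model = TinyFusionSeg(n_classes=4)   # e.g. background + three growth stages
rgb = torch.randn(2, 3, 64, 64)      # paired, co-registered inputs
ms  = torch.randn(2, 10, 64, 64)
logits = model(rgb, ms)              # (2, 4, 64, 64)
print(logits.shape)
```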

19 pages, 3172 KiB  
Article
RASD: Relation Aware Spectral Decoupling Attention Network for Knowledge Graph Reasoning
by Zheng Wang, Taiyu Li and Zengzhao Chen
Appl. Sci. 2025, 15(16), 9049; https://doi.org/10.3390/app15169049 - 16 Aug 2025
Abstract
Knowledge Graph Reasoning (KGR) aims to deduce missing or novel knowledge by learning structured information and semantic relationships within Knowledge Graphs (KGs). Despite significant advances achieved by deep neural networks in recent years, existing models typically extract non-linear representations from explicit features in a relatively simplistic manner and fail to fully exploit the semantic heterogeneity of relation types and entity co-occurrence frequencies. Consequently, these models struggle to capture critical predictive cues embedded in various entities and relations. To address these limitations, this paper proposes a relation-aware spectral decoupling attention network for KGR (RASD). First, a spectral decoupling attention network module projects joint embeddings of entities and relations into the frequency domain, extracting features across different frequency bands and adaptively allocating attention at the global level to model frequency-specific information. Next, a relation-aware learning module employs relation-aware filters and an augmentation mechanism to preserve distinct relational properties and suppress redundant features, thereby enhancing the representation of heterogeneous relations. Experimental results demonstrate that RASD achieves significant and consistent improvements over multiple leading baseline models on link prediction tasks across five public benchmark datasets.
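One plausible reading of "attention in the frequency domain" is: transform the joint embedding with a real FFT, learn attention weights over frequency bins, re-weight, and invert. The sketch below is only that reading under stated assumptions; it is not the RASD architecture.

```python
# Hedged sketch of frequency-band attention over an embedding vector.
import torch
import torch.nn as nn

class SpectralAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        n_freq = dim // 2 + 1                        # length of the rfft output
        self.score = nn.Linear(2 * n_freq, n_freq)   # attends over frequency bins

    def forward(self, x):                            # x: (batch, dim) joint embedding
        spec = torch.fft.rfft(x, dim=-1)             # complex spectrum
        feats = torch.cat([spec.real, spec.imag], dim=-1)
        attn = torch.softmax(self.score(feats), dim=-1)
        spec = spec * attn                           # re-weight each frequency band
        return torch.fft.irfft(spec, n=x.shape[-1], dim=-1)

emb = torch.randn(8, 200)                            # toy entity+relation embedding
out = SpectralAttention(200)(emb)
print(out.shape)                                     # torch.Size([8, 200])
```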

19 pages, 825 KiB  
Article
Self-Supervised Cloud Classification with Patch Rotation Tasks (SSCC-PR)
by Wuyang Yan, Xiong Xiong, Xinyuan Xia, Yanchao Zhang and Xiaojie Guo
Appl. Sci. 2025, 15(16), 9051; https://doi.org/10.3390/app15169051 - 16 Aug 2025
Abstract
Solar irradiance, which is closely influenced by cloud cover, significantly affects photovoltaic (PV) power generation efficiency. To improve cloud type recognition without relying on labeled data, this paper proposes a self-supervised cloud classification method based on patch rotation prediction. In the pre-training stage, unlabeled ground-based cloud images are augmented through blockwise rotation, and high-level semantic representations are learned via a Swin Transformer encoder. In the fine-tuning stage, these representations are adapted to the cloud classification task using labeled data. Experimental results show that our method achieves 96.61% accuracy on the RCCD dataset and 90.18% on the SWIMCAT dataset, outperforming existing supervised and self-supervised baselines by a clear margin. These results demonstrate the effectiveness and robustness of the proposed approach, especially in data-scarce scenarios. This research provides valuable technical support for improving the prediction of solar irradiance and optimizing PV power generation efficiency.
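The pretext task itself is easy to picture: cut an image into patches, rotate each by a random multiple of 90 degrees, and train the encoder to predict the rotation index. Patch size and layout below are assumptions, not the paper's settings.

```python
# Toy patch-rotation pretext task: returns rotated patches plus rotation labels.
import torch

def rotate_patches(img: torch.Tensor, patch: int = 8):
    """img: (C, H, W) with H, W divisible by `patch`. Returns (patches, labels)."""
    c, h, w = img.shape
    patches, labels = [], []
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            p = img[:, top:top + patch, left:left + patch]
            k = int(torch.randint(0, 4, ()))          # 0/90/180/270 degrees
            patches.append(torch.rot90(p, k, dims=(1, 2)))
            labels.append(k)
    return torch.stack(patches), torch.tensor(labels)

img = torch.randn(3, 32, 32)
patches, labels = rotate_patches(img)
print(patches.shape, labels.shape)                    # (16, 3, 8, 8), (16,)
```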
27 pages, 18762 KiB  
Article
From Data to Decision: A Semantic and Network-Centric Approach to Urban Green Space Planning
by Elisavet Parisi and Charalampos Bratsas
Information 2025, 16(8), 695; https://doi.org/10.3390/info16080695 - 16 Aug 2025
Abstract
Urban sustainability poses a deeply interdisciplinary challenge, spanning technical fields like data science and environmental science, design-oriented disciplines like architecture and spatial planning, and domains such as economics, policy, and social studies. While numerous advanced tools are used in these domains, ranging from geospatial systems to AI and network analysis, they often remain fragmented, domain-specific, and difficult to integrate. This paper introduces a semantic framework that aims not to replace existing analytical methods, but to interlink their outputs and datasets within a unified, queryable knowledge graph. Leveraging semantic web technologies, the framework enables the integration of heterogeneous urban data, including spatial, network, and regulatory information, permitting advanced querying and pattern discovery across formats. Applying the methodology to two urban contexts—Thessaloniki (Greece) as a full implementation and Marine Parade GRC (Singapore) as a secondary test—we demonstrate its flexibility and potential to support more informed decision-making in diverse planning environments. The methodology reveals both opportunities and constraints shaped by accessibility, connectivity, and legal zoning, offering a reusable approach for urban interventions in other contexts. More broadly, the work illustrates how semantic technologies can foster interoperability among tools and disciplines, creating the conditions for truly data-driven, collaborative urban planning.
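The "unified, queryable knowledge graph" idea can be shown at toy scale: put facts from different layers (regulatory, network analysis) into one RDF graph and answer a cross-domain question with SPARQL. The vocabulary below (ex:walkMinutes, ex:zonedAs) is invented for illustration, not the paper's ontology.

```python
# Small rdflib sketch: heterogeneous facts about a candidate green-space site,
# queried across layers with SPARQL.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/ugs#")
g = Graph()
g.bind("ex", EX)

site = EX.parcel_17
g.add((site, RDF.type, EX.CandidateGreenSpace))
g.add((site, EX.zonedAs, Literal("residential")))   # regulatory layer
g.add((site, EX.walkMinutes, Literal(6)))           # network-analysis output

rows = g.query("""
    PREFIX ex: <http://example.org/ugs#>
    SELECT ?s WHERE {
        ?s a ex:CandidateGreenSpace ;
           ex:zonedAs "residential" ;
           ex:walkMinutes ?m .
        FILTER(?m <= 10)
    }
""")
for (s,) in rows:
    print(s)   # parcels in residential zones reachable within 10 minutes
```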

21 pages, 3126 KiB  
Article
CViT Weakly Supervised Network Fusing Dual-Branch Local-Global Features for Hyperspectral Image Classification
by Wentao Fu, Xiyan Sun, Xiuhua Zhang, Yuanfa Ji and Jiayuan Zhang
Entropy 2025, 27(8), 869; https://doi.org/10.3390/e27080869 - 15 Aug 2025
Abstract
In hyperspectral image (HSI) classification, feature learning and label accuracy play a crucial role. In actual hyperspectral scenes, however, noisy labels are unavoidable and seriously impact the performance of methods. While deep learning has achieved remarkable results in HSI classification tasks, its noise-resistant performance usually comes at the cost of feature representation capability. High-dimensional, deep convolutions can capture rich semantic features, but at the cost of high complexity and resource consumption. To deal with these problems, we propose a CViT Weakly Supervised Network (CWSN) for HSI classification. Specifically, a lightweight 1D-2D two-branch network is used for local generalization and enhancement of spatial–spectral features. Then, the fusion and characterization of local and global features are achieved through the CNN-Vision Transformer (CViT) cascade strategy. Experimental results on four benchmark HSI datasets show that CWSN has good anti-noise ability and remains robust and versatile on both clean and noisy training sets. Compared to other methods, CWSN achieves better classification accuracy.
(This article belongs to the Section Signal and Data Analysis)

24 pages, 2703 KiB  
Article
Unsupervised Person Re-Identification via Deep Attribute Learning
by Shun Zhang, Yaohui Xu, Xuebin Zhang, Boyang Cheng and Ke Wang
Future Internet 2025, 17(8), 371; https://doi.org/10.3390/fi17080371 - 15 Aug 2025
Abstract
Driven by growing public security demands and the advancement of intelligent surveillance systems, person re-identification (ReID) has emerged as a prominent research focus in the field of computer vision. The primary objective of person ReID is to retrieve individuals with the same identity across different camera views. However, this task presents challenges due to its high sensitivity to variations in visual appearance caused by factors such as body pose and camera parameters. Although deep learning-based methods have achieved marked progress in ReID, the high cost of annotation remains a challenge that cannot be overlooked. To address this, we propose an unsupervised attribute learning framework that eliminates the need for costly manual annotations while maintaining high accuracy. The framework learns mid-level human attributes (such as clothing type and gender) that are robust to substantial visual appearance variations and can hence boost attribute accuracy with a small amount of labeled data. To carry out our framework, we present a part-based convolutional neural network (CNN) architecture, which consists of two components: image and body attribute learning at a global level, and upper- and lower-body image and attribute learning at a local level. The proposed architecture is trained to learn attribute-semantic and identity-discriminative feature representations simultaneously. For model learning, we first train our part-based network using a supervised approach on a labeled attribute dataset. Then, we apply an unsupervised clustering method to assign pseudo-labels to unlabeled images in a target dataset using our trained network. To improve feature compatibility, we introduce an attribute consistency scheme for unsupervised domain adaptation on this unlabeled target data. During training on the target dataset, we alternately perform three steps: extracting features with the updated model, assigning pseudo-labels to unlabeled images, and fine-tuning the model. Through a unified framework that fuses complementary attribute-label and identity-label information, our approach achieves considerable improvements of 10.6% and 3.91% mAP on the Market-1501→DukeMTMC-ReID and DukeMTMC-ReID→Market-1501 unsupervised domain adaptation tasks, respectively.
(This article belongs to the Special Issue Advances in Deep Learning and Next-Generation Internet Technologies)
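The alternating three-step scheme in the abstract (extract features, assign pseudo-labels, fine-tune) has a compact skeleton. Everything below is a stand-in: the feature extractor is a stub, the clustering choice (DBSCAN) is an assumption, and the fine-tuning step is left as a comment.

```python
# Skeleton of the extract -> cluster -> fine-tune loop for unsupervised ReID.
import numpy as np
from sklearn.cluster import DBSCAN

def extract_features(model, images):
    """Placeholder: run the current network over unlabeled target images."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(images), 2))      # (N, feat_dim) stub

def assign_pseudo_labels(feats):
    labels = DBSCAN(eps=0.5, min_samples=4).fit_predict(feats)
    return labels                                 # -1 marks outliers (discarded)

images, model = list(range(200)), None            # stand-ins
for round_idx in range(3):                        # alternate the three steps
    feats = extract_features(model, images)
    pseudo = assign_pseudo_labels(feats)
    keep = pseudo != -1
    # fine_tune(model, [images[i] for i in np.where(keep)[0]], pseudo[keep])
    print(round_idx, "clusters:", len(set(pseudo[keep])))
```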
18 pages, 2055 KiB  
Article
Language-Driven Cross-Attention for Visible–Infrared Image Fusion Using CLIP
by Xue Wang, Jiatong Wu, Pengfei Zhang and Zhongjun Yu
Sensors 2025, 25(16), 5083; https://doi.org/10.3390/s25165083 - 15 Aug 2025
Abstract
Language-guided multimodal fusion, which integrates information from both visible and infrared images, has shown strong performance in image fusion tasks. In low-light or complex environments, a single modality often fails to fully capture scene features, whereas fused images enable robots to obtain multidimensional scene understanding for navigation, localization, and environmental perception. This capability is particularly important in applications such as autonomous driving, intelligent surveillance, and search-and-rescue operations, where accurate recognition and efficient decision-making are critical. To enhance the effectiveness of multimodal fusion, we propose a text-guided infrared and visible image fusion network. The framework consists of two key components: an image fusion branch, which employs a cross-domain attention mechanism to merge multimodal features, and a text-guided module, which leverages the CLIP model to extract semantic cues from image descriptions containing visible content. These semantic parameters are then used to guide the feature modulation process during fusion. By integrating visual and linguistic information, our framework is capable of generating high-quality color-fused images that not only enhance visual detail but also enrich semantic understanding. On benchmark datasets, our method achieves strong quantitative performance: SF = 2.1381, Qab/f = 0.6329, MI = 14.2305, SD = 45.1842, VIF = 0.8527 on LLVIP, and SF = 1.3149, Qab/f = 0.5863, MI = 13.9676, SD = 94.7203, VIF = 0.7746 on TNO. These results highlight the robustness and scalability of our model, making it a promising solution for real-world multimodal perception applications.
(This article belongs to the Section Sensors and Robotics)
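Text-guided feature modulation is often realized FiLM-style: map a text embedding to per-channel scale and shift applied to the fused image features. The sketch below assumes that reading; the CLIP call is replaced by a stub tensor so the snippet runs standalone, and none of the names come from the paper.

```python
# FiLM-style text-guided modulation of fused features.
import torch
import torch.nn as nn

class TextFiLM(nn.Module):
    def __init__(self, text_dim: int = 512, channels: int = 64):
        super().__init__()
        self.to_scale_shift = nn.Linear(text_dim, 2 * channels)

    def forward(self, feat, text_emb):            # feat: (B, C, H, W)
        gamma, beta = self.to_scale_shift(text_emb).chunk(2, dim=-1)
        return feat * (1 + gamma[:, :, None, None]) + beta[:, :, None, None]

feat = torch.randn(2, 64, 32, 32)                 # fused IR/visible features
text_emb = torch.randn(2, 512)                    # stand-in for a CLIP text embedding
out = TextFiLM()(feat, text_emb)
print(out.shape)                                  # torch.Size([2, 64, 32, 32])
```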

22 pages, 2050 KiB  
Article
A Trustworthy Dataset for APT Intelligence with an Auto-Annotation Framework
by Rui Qi, Ga Xiang, Yangsen Zhang, Qunsheng Yang, Mingyue Cheng, Haoyang Zhang, Mingming Ma, Lu Sun and Zhixing Ma
Electronics 2025, 14(16), 3251; https://doi.org/10.3390/electronics14163251 - 15 Aug 2025
Abstract
Advanced Persistent Threats (APTs) pose significant cybersecurity challenges due to their multi-stage complexity. Knowledge graphs (KGs) effectively model APT attack processes through node-link architectures; however, the scarcity of high-quality, annotated datasets limits research progress. The primary challenge lies in balancing annotation cost and quality, particularly due to the lack of quality assessment methods for graph annotation data. This study addresses these issues by extending existing APT ontology definitions and developing a dynamic, trustworthy annotation framework for APT knowledge graphs. The framework introduces a self-verification mechanism utilizing large language model (LLM) annotation consistency and establishes a comprehensive graph data metric system for problem localization in annotated data. This metric system, based on structural properties, logical consistency, and APT attack chain characteristics, comprehensively evaluates annotation quality across representation, syntax and semantics, and topological structure. Experimental results show that this framework significantly reduces annotation costs while maintaining quality. Using this framework, we constructed LAPTKG, a reliable dataset containing over 10,000 entities and relations. Baseline evaluations show substantial improvements in entity and relation extraction performance after metric correction, validating the framework’s effectiveness in reliable APT knowledge graph dataset construction.
(This article belongs to the Special Issue Advances in Information Processing and Network Security)
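Self-verification via annotation consistency can be illustrated with a voting toy: run the annotator several times and keep only triples that recur in a majority of runs. The `annotate_report` function below is a placeholder for the LLM call, and the threshold is an assumption.

```python
# Toy consistency voting over repeated (simulated) LLM annotation runs.
from collections import Counter

def annotate_report(text: str, seed: int):
    """Placeholder LLM pass returning (head, relation, tail) triples."""
    base = [("APT29", "uses", "spearphishing"), ("APT29", "targets", "gov")]
    return base if seed % 3 else base[:1]          # simulate run-to-run variance

def consistent_triples(text: str, runs: int = 5, threshold: float = 0.6):
    counts = Counter(t for s in range(runs) for t in annotate_report(text, s))
    return [t for t, c in counts.items() if c / runs >= threshold]

print(consistent_triples("...APT report text..."))
```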
18 pages, 3275 KiB  
Article
Equity Evaluation of Street-Level Greenery Based on Green View Index from Street View Images: A Case Study of Hangzhou, China
by Jinting Zhang, Cheng Liu, Min Xu and Sheng Zheng
Land 2025, 14(8), 1653; https://doi.org/10.3390/land14081653 - 15 Aug 2025
Abstract
Equity in urban greenery is essential to improving residents’ well-being and contributing to environmental justice. Research on equity in street-scale urban greenery remains limited; this study addresses the gap by employing the green view index (GVI), a widely recognized indicator for assessing green space quality from a pedestrian perspective, using semantic segmentation methods and Baidu Street View (BSV) images to quantify street-level greenery. Through spatial clustering and hot spot analysis, the visibility and spatial distribution of street greenery in Hangzhou’s central urban area were examined. Furthermore, the Lorenz curve, Gini coefficient, and location entropy were applied to evaluate disparities in green visibility across urban spaces. The results show that the average GVI values at the sample-point, road, and district levels in the study area are 0.167, 0.142, and 0.177, respectively. Meanwhile, the spatial heterogeneity of the GVI is highly pronounced, with distinct clustering characteristics. The Gini coefficient of street greenery visibility is 0.384, indicating a moderate level of inequality in the distribution of greenery resources. Notably, a higher GVI does not necessarily correspond to better internal greenery equity, highlighting disparities in the distribution of urban greenery. This study offers a more precise and refined quantification of urban greenery equity, providing critical insights for addressing spatial disparities and informing urban planning strategies aimed at promoting equitable green infrastructure.
(This article belongs to the Special Issue Land Space Optimization and Governance)
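The Gini-over-GVI computation behind such an equity reading is short enough to show directly; the numbers below are random stand-ins, not the Hangzhou data.

```python
# Gini coefficient from the Lorenz curve of sample-point GVI values.
import numpy as np

def gini(values: np.ndarray) -> float:
    """Gini index of sorted non-negative values via Lorenz-curve ordinates."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    cum = np.cumsum(v)
    # Discrete form: G = (n + 1 - 2 * sum of Lorenz ordinates) / n
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

gvi_samples = np.random.default_rng(1).beta(2.0, 8.0, size=1000)  # fake GVI in [0, 1]
print(round(gini(gvi_samples), 3))
```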

22 pages, 1792 KiB  
Article
Automatic Scribble Annotations Based Semantic Segmentation Model for Seedling-Stage Maize Images
by Zhaoyang Li, Xin Liu, Hanbing Deng, Yuncheng Zhou and Teng Miao
Agronomy 2025, 15(8), 1972; https://doi.org/10.3390/agronomy15081972 - 15 Aug 2025
Abstract
Canopy coverage is a key indicator for judging maize growth and predicting production during the seedling stage. Researchers usually use deep learning methods to estimate canopy coverage from maize images, but fully supervised models need pixel-level annotations, which require substantial manual labor. To overcome this problem, we propose ASLNet (Automatic Scribble Labeling-based Semantic Segmentation Network), a weakly supervised model for image semantic segmentation. We designed a module that automatically generates scribble labels for maize plants in an image. Accordingly, ASLNet was constructed using a collaborative mechanism composed of scribble label generation, pseudo-label guided training, and double-loss joint optimization; its cross-scale contrastive regularization realizes semantic segmentation without manual labels. We evaluated the model for label quality and segmentation accuracy. The results showed that ASLNet generated high-quality scribble labels with stable segmentation performance across different scribble densities. Compared to Scribble4All, ASLNet improved mIoU by 3.15% and outperformed fully and weakly supervised models by 6.6% and 15.28% in segmentation accuracy, respectively. Our work shows that ASLNet can be trained with pseudo-labels and offers a cost-effective approach to canopy coverage estimation at the maize seedling stage, enabling early assessment of maize growth conditions and yield prediction.
(This article belongs to the Section Precision and Digital Agriculture)
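A double-loss objective of this kind can be sketched as supervised cross-entropy on the (generated) scribble pixels plus a cross-scale consistency term. The weighting and the specific consistency choice below are assumptions, not ASLNet's exact losses.

```python
# Hedged sketch of a scribble CE loss plus a cross-scale consistency term.
import torch
import torch.nn.functional as F

def double_loss(logits, scribble_mask, scribble_labels, logits_lowres, w=0.5):
    """logits: (B, K, H, W); scribble_mask: (B, H, W) bool; labels: (B, H, W)."""
    ce = F.cross_entropy(
        logits.permute(0, 2, 3, 1)[scribble_mask],   # only annotated pixels
        scribble_labels[scribble_mask],
    )
    up = F.interpolate(logits_lowres, size=logits.shape[-2:],
                       mode="bilinear", align_corners=False)
    consistency = F.mse_loss(logits.softmax(1), up.softmax(1))
    return ce + w * consistency

logits = torch.randn(2, 2, 64, 64, requires_grad=True)
low = torch.randn(2, 2, 32, 32)
mask = torch.rand(2, 64, 64) < 0.05                  # sparse scribble pixels
labels = torch.randint(0, 2, (2, 64, 64))
print(double_loss(logits, mask, labels, low).item())
```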
21 pages, 6933 KiB  
Article
DECC-Net: A Maize Tassel Segmentation Model Based on UAV-Captured Imagery
by Yinchuan Liu, Lili He, Yuying Cao, Xinyue Gao, Shoutian Dong and Yinjiang Jia
Agriculture 2025, 15(16), 1751; https://doi.org/10.3390/agriculture15161751 - 15 Aug 2025
Abstract
The male flower of the maize plant, known as the tassel, is a strong indicator of the growth, development, and reproductive stages of maize crops. Monitoring maize tassels under natural conditions is important for maize breeding, management, and yield estimation. Unmanned aerial vehicle (UAV) remote sensing combined with deep learning-based semantic segmentation offers a novel approach for monitoring maize tassel phenotypic traits. The morphological and size variations in maize tassels, together with numerous similar-looking interference factors in the farmland environment (such as leaf veins and female ears), pose significant challenges to the accurate segmentation of tassels. To address these challenges, we propose DECC-Net, a novel segmentation model designed to accurately extract maize tassels from complex farmland environments. DECC-Net integrates the Dynamic Kernel Feature Extraction (DKE) module to comprehensively capture semantic features of tassels, along with the Lightweight Channel Cross Transformer (LCCT) and Adaptive Feature Channel Enhancement (AFE) modules to guide effective fusion of multi-stage encoder features while mitigating semantic gaps. Experimental results demonstrate that DECC-Net achieves advanced performance, with IoU and Dice scores of 83.3% and 90.9%, respectively, outperforming existing segmentation models while exhibiting robust generalization across diverse scenarios. This work provides valuable insights for maize varietal selection, yield estimation, and field management operations.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
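For reference, the two scores reported above are computed from binary masks as follows; the arrays here are random stand-ins for a predicted and a ground-truth tassel mask.

```python
# IoU and Dice for a binary segmentation mask.
import numpy as np

def iou_dice(pred: np.ndarray, gt: np.ndarray):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union else 1.0
    total = pred.sum() + gt.sum()
    dice = 2 * inter / total if total else 1.0
    return iou, dice

rng = np.random.default_rng(0)
pred, gt = rng.random((256, 256)) > 0.5, rng.random((256, 256)) > 0.5
print(iou_dice(pred, gt))
```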

20 pages, 2386 KiB  
Article
Personalized Federated Learning Based on Dynamic Parameter Fusion and Prototype Alignment
by Ying Chen, Jing Wen, Shaoling Liang, Zhaofa Chen and Baohua Huang
Sensors 2025, 25(16), 5076; https://doi.org/10.3390/s25165076 - 15 Aug 2025
Abstract
To address the limited generalization of federated learning under non-independent and identically distributed (Non-IID) data, we propose FedDFPA, a personalized federated learning framework that integrates dynamic parameter fusion and prototype alignment. We design a class-wise dynamic parameter fusion mechanism that adaptively fuses global and local classifier parameters at the class level. It enables each client to preserve its reliable local knowledge while selectively incorporating beneficial global information for personalized classification. We also introduce a prototype alignment mechanism based on both global and historical information. By aligning current local features with global prototypes and historical local prototypes, it improves cross-client semantic consistency and enhances the stability of local features. To evaluate the effectiveness of FedDFPA, we conduct extensive experiments across various Non-IID settings and client participation rates. Compared to the average performance of state-of-the-art algorithms, FedDFPA improves the average test accuracy by 3.59% and 4.71% under practical and pathological heterogeneous settings, respectively. These results confirm the effectiveness of our dual-mechanism design in achieving a better balance between personalization and collaboration in federated learning.
(This article belongs to the Section Intelligent Sensors)
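Class-wise parameter fusion reduces to blending the global and local classifier rows with one weight per class. The sketch below only shows that blend; how FedDFPA actually chooses the per-class weights is not reproduced here, so alpha is simply given as input.

```python
# Toy class-wise fusion of local and global classifier weight matrices.
import torch

def fuse_classifier(w_local, w_global, alpha):
    """w_*: (num_classes, feat_dim); alpha: (num_classes,) in [0, 1]."""
    a = alpha.unsqueeze(1)                    # broadcast over the feature dim
    return a * w_local + (1 - a) * w_global

w_local = torch.randn(10, 128)
w_global = torch.randn(10, 128)
alpha = torch.rand(10)                        # higher = trust local knowledge more
print(fuse_classifier(w_local, w_global, alpha).shape)   # torch.Size([10, 128])
```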

19 pages, 2870 KiB  
Article
A Spatiotemporal–Semantic Coupling Intelligent Q&A Method for Land Use Approval Based on Knowledge Graphs and Intelligent Agents
by Huimin Liu, Shutong Yin, Xin Hu, Min Deng, Xuexi Yang and Gang Xu
Appl. Sci. 2025, 15(16), 9012; https://doi.org/10.3390/app15169012 - 15 Aug 2025
Abstract
The rapid retrieval and precise acquisition of land use approval information are crucial for enhancing the efficiency and quality of land use approval, as well as for promoting the intelligent transformation of land use approval processes. As an advanced retrieval method, question-answering (Q&A) technology has become a core technical support for addressing current issues such as low approval efficiency and difficulty in obtaining information. However, existing Q&A technologies suffer from significant hallucination problems and give limited consideration to spatiotemporal factors in the land use approval domain. To effectively address these issues, this study proposes a spatiotemporal–semantic coupling intelligent Q&A method for land use approval based on knowledge graphs (KGs) and intelligent agent technology, aiming to enhance the efficiency and quality of land use approval. First, a land use approval knowledge graph (LUAKG) is constructed, systematically integrating domain knowledge such as policy clauses, legal regulations, and approval procedures. Then, by combining large language models (LLMs) and intelligent agent technology, a spatiotemporal–semantic coupling Q&A framework is designed. Using spatiotemporal analysis tools, this framework can jointly consider spatial, temporal, and semantic factors when handling land approval tasks, enabling dynamic decision-making and precise reasoning. The results show that, compared to traditional Q&A based on LLMs and Q&A based on retrieval-augmented generation (RAG), the proposed method improves accuracy by 16% and 9% in general knowledge Q&A tasks. In the project review Q&A task, the F1 score and accuracy increase by 2% and 9%, respectively, compared to RAG-QA. In particular, under the spatiotemporal–semantic multidimensional analysis, the improvements in F1 score and accuracy range from 2% to 6% and from 7% to 10%, respectively.

18 pages, 1417 KiB  
Article
A Fusion-Based Approach with Bayes and DeBERTa for Efficient and Robust Spam Detection
by Ao Zhang, Kelei Li and Haihua Wang
Algorithms 2025, 18(8), 515; https://doi.org/10.3390/a18080515 - 15 Aug 2025
Abstract
Spam emails pose ongoing risks to digital security, including data breaches, privacy violations, and financial losses. Addressing the limitations of traditional detection systems in terms of accuracy, adaptability, and resilience remains a significant challenge. In this paper, we propose a hybrid spam detection framework that integrates a classical multinomial naive Bayes classifier with a pre-trained large language model, DeBERTa. The framework employs a weighted probability fusion strategy to combine the strengths of both models—lexical pattern recognition and deep semantic understanding—into a unified decision process. We evaluate the proposed method on a widely used spam dataset. Experimental results demonstrate that the hybrid model achieves superior accuracy and robustness compared with other classifiers. The findings support the effectiveness of hybrid modeling in advancing spam detection techniques.
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
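Weighted probability fusion in this spirit is a one-liner once both posteriors exist: blend the naive Bayes P(spam) with the transformer's P(spam). Below, the DeBERTa score is stubbed so the snippet runs without downloading a model, and the weight w is an assumption, not the paper's tuned value.

```python
# Minimal weighted-probability fusion: multinomial naive Bayes + stubbed
# transformer posterior on a tiny toy corpus.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["win a free prize now", "meeting agenda attached",
         "free money claim prize", "lunch tomorrow?"]
labels = np.array([1, 0, 1, 0])                   # 1 = spam

vec = CountVectorizer()
nb = MultinomialNB().fit(vec.fit_transform(texts), labels)

def deberta_spam_prob(text: str) -> float:
    """Placeholder for a fine-tuned DeBERTa classifier's P(spam | text)."""
    return 0.9 if "prize" in text else 0.1

def fused_spam_prob(text: str, w: float = 0.4) -> float:
    p_nb = nb.predict_proba(vec.transform([text]))[0, 1]   # P(spam) from Bayes
    return w * p_nb + (1 - w) * deberta_spam_prob(text)

print(fused_spam_prob("claim your free prize"))
```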