Search Results (367)

Search Parameters:
Keywords = semantic information measure

20 pages, 28899 KiB  
Article
MSDP-Net: A Multi-Scale Domain Perception Network for HRRP Target Recognition
by Hongxu Li, Xiaodi Li, Zihan Xu, Xinfei Jin and Fulin Su
Remote Sens. 2025, 17(15), 2601; https://doi.org/10.3390/rs17152601 - 26 Jul 2025
Viewed by 297
Abstract
High-resolution range profile (HRRP) recognition serves as a foundational task in radar automatic target recognition (RATR), enabling robust classification under all-day and all-weather conditions. However, existing approaches often struggle to simultaneously capture the multi-scale spatial dependencies and global spectral relationships inherent in HRRP signals, limiting their effectiveness in complex scenarios. To address these limitations, we propose a novel multi-scale domain perception network tailored for HRRP-based target recognition, called MSDP-Net. MSDP-Net introduces a hybrid spatial–spectral representation learning strategy through a multiple-domain perception HRRP (DP-HRRP) encoder, which integrates multi-head convolutions to extract spatial features across diverse receptive fields, and frequency-aware filtering to enhance critical spectral components. To further enhance feature fusion, we design a hierarchical scale fusion (HSF) branch that employs stacked semantically enhanced scale fusion (SESF) blocks to progressively aggregate information from fine to coarse scales in a bottom-up manner. This architecture enables MSDP-Net to effectively model complex scattering patterns and aspect-dependent variations. Extensive experiments on both simulated and measured datasets demonstrate the superiority of MSDP-Net, achieving 80.75% accuracy on the simulated dataset and 94.42% on the measured dataset, highlighting its robustness and practical applicability. Full article
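
As a rough illustration of the kind of encoder this abstract describes — parallel convolutions with different receptive fields plus frequency-aware filtering of a 1-D HRRP — the following PyTorch sketch combines both views in one block. It is not the authors' MSDP-Net; kernel sizes, channel counts, and the learnable spectral gate are assumptions made for the example.

```python
# Illustrative sketch (not the authors' code): multi-receptive-field 1D convolutions
# plus a learnable frequency-domain gate, one plausible reading of a spatial-spectral
# HRRP encoder. Sizes and layer names are assumptions.
import torch
import torch.nn as nn

class MultiScaleSpectralBlock(nn.Module):
    def __init__(self, channels=32, length=256):
        super().__init__()
        # Parallel "heads" with different kernel sizes approximate diverse receptive fields.
        self.convs = nn.ModuleList(
            [nn.Conv1d(1, channels, k, padding=k // 2) for k in (3, 7, 15)]
        )
        # Learnable per-bin weights act as a simple frequency-aware filter.
        self.freq_gate = nn.Parameter(torch.ones(length // 2 + 1))
        self.proj = nn.Conv1d(len(self.convs) * channels + 1, channels, 1)

    def forward(self, x):                      # x: (batch, 1, length) HRRP amplitudes
        spatial = torch.cat([c(x) for c in self.convs], dim=1)
        spec = torch.fft.rfft(x, dim=-1)       # global spectral view of the profile
        spec = torch.fft.irfft(spec * self.freq_gate, n=x.shape[-1], dim=-1)
        return self.proj(torch.cat([spatial, spec], dim=1))

profile = torch.randn(4, 1, 256)               # toy batch of HRRPs (length must match `length`)
print(MultiScaleSpectralBlock()(profile).shape)  # -> torch.Size([4, 32, 256])
```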

18 pages, 516 KiB  
Article
A Nested Named Entity Recognition Model Robust in Few-Shot Learning Environments Using Label Description Information
by Hyunsun Hwang, Youngjun Jung, Changki Lee and Wooyoung Go
Appl. Sci. 2025, 15(15), 8255; https://doi.org/10.3390/app15158255 - 24 Jul 2025
Viewed by 180
Abstract
Nested named entity recognition (NER) is a task that identifies hierarchically structured entities, where one entity can contain other entities within its span. This study introduces a nested NER model for few-shot learning environments, addressing the difficulty of building extensive datasets for general named entities. We enhance the Biaffine nested NER model by modifying its output layer to incorporate label semantic information through a novel label description embedding (LDE) approach, improving performance with limited training data. Our method replaces the traditional biaffine classifier with a label attention mechanism that leverages comprehensive natural language descriptions of entity types, encoded using BERT to capture rich semantic relationships between labels and input spans. We conducted comprehensive experiments on four benchmark datasets: GENIA (nested NER), ACE 2004 (nested NER), ACE 2005 (nested NER), and CoNLL 2003 English (flat NER). Performance was evaluated across multiple few-shot scenarios (1-shot, 5-shot, 10-shot, and 20-shot) using F1-measure as the primary metric, with five different random seeds to ensure robust evaluation. We compared our approach against strong baselines including BERT-LSTM-CRF with nested tags, the original Biaffine model, and recent few-shot NER methods (FewNER, FIT, LPNER, SpanNER). Results demonstrate significant improvements across all few-shot scenarios. On GENIA, our LDE model achieves 45.07% F1 in five-shot learning compared to 30.74% for the baseline Biaffine model (46.4% relative improvement). On ACE 2005, we obtain 44.24% vs. 32.38% F1 in five-shot scenarios (36.6% relative improvement). The model shows consistent gains in 10-shot (57.19% vs. 49.50% on ACE 2005) and 20-shot settings (64.50% vs. 58.21% on ACE 2005). Ablation studies confirm that semantic information from label descriptions is the key factor enabling robust few-shot performance. Transfer learning experiments demonstrate the model’s ability to leverage knowledge from related domains. Our findings suggest that incorporating label semantic information can substantially enhance NER models in low-resource settings, opening new possibilities for applying NER in specialized domains or languages with limited annotated data. Full article
(This article belongs to the Special Issue Applications of Natural Language Processing to Data Science)
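
A minimal sketch of the label-attention idea the abstract outlines: candidate spans are scored against label description embeddings instead of through a biaffine classifier. In the paper the encodings come from BERT; random tensors stand in here so the example is self-contained, and the scaled dot-product scoring is an assumption chosen for illustration.

```python
# Toy label-attention classifier over spans (stand-in encodings, not the paper's model).
import torch
import torch.nn.functional as F

hidden = 768
num_spans, num_labels = 5, 4                       # toy sizes

span_reprs = torch.randn(num_spans, hidden)        # stand-in for BERT span encodings
label_desc_reprs = torch.randn(num_labels, hidden) # stand-in for encoded label descriptions

# Each span scores every entity type by similarity to that type's description embedding.
scores = span_reprs @ label_desc_reprs.T / hidden ** 0.5
probs = F.softmax(scores, dim=-1)                  # per-span distribution over entity types
print(probs.argmax(dim=-1))                        # predicted label index per candidate span
```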

20 pages, 4388 KiB  
Article
An Optimized Semantic Matching Method and RAG Testing Framework for Regulatory Texts
by Bingjie Li, Haolin Wen, Songyi Wang, Tao Hu, Xin Liang and Xing Luo
Electronics 2025, 14(14), 2856; https://doi.org/10.3390/electronics14142856 - 17 Jul 2025
Viewed by 303
Abstract
To enhance the accuracy and reliability of large language models (LLMs) in regulatory question-answering tasks, this study addresses the complexity and domain-specificity of regulatory texts by designing a retrieval-augmented generation (RAG) testing framework. It proposes a dimensionality reduction-based semantic similarity measurement method and a retrieval optimization approach leveraging information reasoning. By constructing the technical pipeline of an intelligent knowledge management system, the semantic understanding capabilities of multiple mainstream embedding models in matching financial regulation texts are systematically evaluated. The workflow encompasses data processing, knowledge base construction, embedding model selection, vectorization, recall parameter analysis, and retrieval performance benchmarking. Furthermore, the study introduces a multidimensional scaling (MDS)-based semantic similarity measurement method and a question-reasoning processing technique. Compared to traditional cosine similarity (CS) metrics, these methods significantly improved recall accuracy. Experimental results demonstrate that, under the RAG testing framework, the mxbai-embed-large embedding model combined with MDS similarity calculation, Top-k recall, and information reasoning effectively addresses core challenges such as the structuring of regulatory texts and the generalization of domain-specific terminology. This approach provides a reusable technical solution for optimizing semantic matching in vertical-domain RAG systems, particularly in specialized fields such as law and finance. Full article
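
A hedged sketch of how an MDS-based similarity could be contrasted with cosine similarity for recall, using random vectors as stand-ins for mxbai-embed-large embeddings. The dimensionality, the joint embedding of query and passages, and the Euclidean ranking are assumptions, not the paper's exact procedure.

```python
# Compare cosine-similarity recall with ranking in an MDS-reduced space (illustrative only).
import numpy as np
from sklearn.manifold import MDS
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
passages = rng.normal(size=(50, 1024))     # stand-in passage embeddings
query = rng.normal(size=(1, 1024))         # stand-in query embedding

# Baseline: cosine-similarity recall.
cos_rank = np.argsort(-cosine_similarity(query, passages)[0])[:5]

# MDS variant: embed query + passages jointly into a low-dimensional space,
# then rank passages by Euclidean distance to the query in that space.
coords = MDS(n_components=8, random_state=0).fit_transform(np.vstack([query, passages]))
dists = np.linalg.norm(coords[1:] - coords[0], axis=1)
mds_rank = np.argsort(dists)[:5]

print("cosine top-5:", cos_rank)
print("MDS top-5:   ", mds_rank)
```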

24 pages, 3833 KiB  
Article
Impact of Lighting Conditions on Emotional and Neural Responses of International Students in Cultural Exhibition Halls
by Xinyu Zhao, Zhisheng Wang, Tong Zhang, Ting Liu, Hao Yu and Haotian Wang
Buildings 2025, 15(14), 2507; https://doi.org/10.3390/buildings15142507 - 17 Jul 2025
Viewed by 340
Abstract
This study investigates how lighting conditions influence emotional and neural responses in a standardized, simulated museum environment. A multimodal evaluation framework combining subjective and objective measures was used. Thirty-two international students assessed their viewing experiences using 14 semantic differential descriptors, while real-time EEG signals were recorded via the EMOTIV EPOC X device. Spectral energy analyses of the α, β, and θ frequency bands were conducted, and a θ/α energy ratio combined with γ coefficients was used to model attention and comfort levels. The results indicated that high illuminance (300 lx) and high correlated color temperature (4000 K) significantly enhanced both attention and comfort. Art majors showed higher attention levels than engineering majors during short-term viewing. Among four regression models, the backpropagation (BP) neural network achieved the highest predictive accuracy (R² = 88.65%). These findings provide empirical support for designing culturally inclusive museum lighting and offer neuroscience-informed strategies for promoting the global dissemination of traditional Chinese culture, further supported by retrospective interview insights. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
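
For context on the band-energy measure mentioned above, the toy snippet below computes a θ/α power ratio from a synthetic signal with Welch's method. The sampling rate, band edges, and signal are assumptions for demonstration only.

```python
# Toy theta/alpha band-power ratio (an illustrative attention proxy, not the study's pipeline).
import numpy as np
from scipy.signal import welch

fs = 128                                      # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = (np.sin(2 * np.pi * 6 * t)              # theta-band component
       + 0.5 * np.sin(2 * np.pi * 10 * t)     # alpha-band component
       + 0.1 * np.random.randn(t.size))       # noise

freqs, psd = welch(eeg, fs=fs, nperseg=256)
df = freqs[1] - freqs[0]

def band_energy(lo, hi):
    """Approximate band energy by summing the PSD over the band."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * df

theta, alpha = band_energy(4, 8), band_energy(8, 13)
print(f"theta/alpha ratio: {theta / alpha:.2f}")
```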

19 pages, 914 KiB  
Article
Meta-Learning Task Relations for Ensemble-Based Temporal Domain Generalization in Sensor Data Forecasting
by Liang Zhang, Jiayi Liu, Bo Jin and Xiaopeng Wei
Sensors 2025, 25(14), 4434; https://doi.org/10.3390/s25144434 - 16 Jul 2025
Viewed by 213
Abstract
Temporal domain generalization is crucial for the temporal forecasting of sensor data due to the non-stationary and evolving nature of most sensor-generated time series. However, temporal dynamics vary in scale, semantics, and structure, leading to distribution shifts that a single model cannot easily generalize over. Additionally, conflicts between temporal domain-specific patterns and limited model capacity make it difficult to learn shared parameters that work universally. To address this challenge, we propose an ensemble learning framework that leverages multiple domain-specific models to improve temporal domain generalization for sensor data forecasting. We first segment the original sensor time series into distinct temporal tasks to better handle the distribution shifts inherent in sensor measurements. A meta-learning strategy is then applied to extract shared representations across these tasks. Specifically, during meta-training, a recurrent encoder combined with variational inference captures contextual information for each task, which is used to generate task-specific model parameters. Relationships among tasks are modeled via a self-attention mechanism. For each query, the prediction results are adaptively reweighted based on all previously learned models. At inference, predictions are directly generated through the learned ensemble mechanism without additional tuning. Extensive experiments on public sensor datasets demonstrate that our method significantly enhances the generalization performance in forecasting across unseen sensor segments. Full article
(This article belongs to the Section Intelligent Sensors)
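
The reweighting step can be pictured as attention from a query's context over previously learned task contexts; the sketch below shows that combination with toy tensors. The names, sizes, and similarity function are assumptions rather than the paper's specification.

```python
# Attention-weighted ensemble of task-specific forecasts (illustrative stand-in values).
import torch
import torch.nn.functional as F

num_tasks, ctx_dim, horizon = 6, 32, 12
task_contexts = torch.randn(num_tasks, ctx_dim)    # per-task context from a recurrent encoder
task_forecasts = torch.randn(num_tasks, horizon)   # each task-specific model's prediction
query_context = torch.randn(ctx_dim)               # context of the query segment

# Attention weights from query-to-task similarity, then a weighted combination.
weights = F.softmax(task_contexts @ query_context / ctx_dim ** 0.5, dim=0)
ensemble_forecast = (weights.unsqueeze(1) * task_forecasts).sum(dim=0)
print(weights.round(decimals=2), ensemble_forecast.shape)
```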

22 pages, 10490 KiB  
Article
DFPS: An Efficient Downsampling Algorithm Designed for the Global Feature Preservation of Large-Scale Point Cloud Data
by Jiahui Dong, Maoyi Tian, Jiayong Yu, Guoyu Li, Yunfei Wang and Yuxin Su
Sensors 2025, 25(14), 4279; https://doi.org/10.3390/s25144279 - 9 Jul 2025
Viewed by 305
Abstract
This paper introduces an efficient 3D point cloud downsampling algorithm (DFPS) based on adaptive multi-level grid partitioning. By leveraging an adaptive hierarchical grid partitioning mechanism, the algorithm dynamically adjusts computational intensity in accordance with terrain complexity. This approach effectively balances the global feature retention of point cloud data with computational efficiency, making it highly adaptable to the growing trend of large-scale 3D point cloud datasets. DFPS is designed with a multithreaded parallel acceleration architecture, which significantly enhances processing speed. Experimental results demonstrate that, for a point cloud dataset containing millions of points, DFPS reduces processing time from approximately 161,665 s using the original FPS method to approximately 71.64 s at a 12.5% sampling rate, achieving an efficiency improvement of over 2200 times. As the sampling rate decreases, the performance advantage becomes more pronounced: at a 3.125% sampling rate, the efficiency improves by nearly 10,000 times. By employing visual observation and quantitative analysis (with the chamfer distance as the measurement index), it is evident that DFPS can effectively preserve global feature information. Notably, DFPS does not depend on GPU-based heterogeneous computing, enabling seamless deployment in resource-constrained environments such as airborne and mobile devices, which makes DFPS an effective and lightweight tool for providing high-quality input data for subsequent algorithms, including point cloud registration and semantic segmentation. Full article
(This article belongs to the Section Vehicular Sensing)
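
A simple grid-based downsampler in the spirit of this description (one representative point per occupied cell) illustrates why such schemes scale far better than exact farthest point sampling. This is not the authors' DFPS: the single-level grid and nearest-to-centroid rule are simplifying assumptions.

```python
# Grid-based point cloud downsampling sketch (not the DFPS implementation).
import numpy as np

def grid_downsample(points, cell_size):
    """Keep the point nearest the centroid of each occupied grid cell."""
    keys = np.floor(points / cell_size).astype(np.int64)
    kept = {}
    for idx, (key, p) in enumerate(zip(map(tuple, keys), points)):
        centroid = (np.array(key) + 0.5) * cell_size
        d = np.linalg.norm(p - centroid)
        if key not in kept or d < kept[key][0]:
            kept[key] = (d, idx)
    return points[[i for _, i in kept.values()]]

cloud = np.random.rand(100_000, 3) * 100.0          # synthetic cloud in a 100 m cube
sampled = grid_downsample(cloud, cell_size=5.0)
print(cloud.shape, "->", sampled.shape)              # one representative per 5 m cell
```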

19 pages, 528 KiB  
Article
Quantum-Inspired Attention-Based Semantic Dependency Fusion Model for Aspect-Based Sentiment Analysis
by Chenyang Xu, Xihan Wang, Jiacheng Tang, Yihang Wang, Lianhe Shao and Quanli Gao
Axioms 2025, 14(7), 525; https://doi.org/10.3390/axioms14070525 - 9 Jul 2025
Viewed by 293
Abstract
Aspect-Based Sentiment Analysis (ABSA), which emphasizes aspect-level sentiment representation within sentences, has gained significant popularity in recent years. Current methods for ABSA often use pre-trained models and graph convolution to represent word dependencies. However, they struggle with long-range dependency issues in lengthy texts, resulting in averaging and loss of contextual semantic information. In this paper, we explore how richer semantic relationships can be encoded more efficiently. Inspired by quantum theory, we construct superposition states from text sequences and utilize them with quantum measurements to explicitly capture complex semantic relationships within word sequences. Specifically, we propose an attention-based semantic dependency fusion method for ABSA, which employs a quantum embedding module to create a superposition state of real-valued word sequence features in a complex-valued Hilbert space. This approach yields a word sequence density matrix representation that enhances the handling of long-range dependencies. Furthermore, we introduce a quantum cross-attention mechanism to integrate sequence features with dependency relationships between specific word pairs, aiming to capture the associations between particular aspects and comments more comprehensively. Our experiments on the SemEval-2014 and Twitter datasets demonstrate the effectiveness of the quantum-inspired attention-based semantic dependency fusion model for the ABSA task. Full article
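
The density-matrix construction sketched below follows the generic quantum-inspired recipe this abstract refers to: normalized word vectors as pure states, the sequence as their mixture, and a projective "measurement" against an aspect direction. Dimensions, uniform weights, and the real-valued (rather than complex) vectors are simplifications for illustration.

```python
# Toy density-matrix representation of a word sequence (quantum-inspired sketch).
import numpy as np

rng = np.random.default_rng(1)
words = rng.normal(size=(6, 16))                    # toy word-sequence embeddings
states = words / np.linalg.norm(words, axis=1, keepdims=True)
weights = np.full(len(states), 1.0 / len(states))   # uniform mixture weights

# rho = sum_i p_i |w_i><w_i|  (symmetric, trace 1)
rho = sum(p * np.outer(s, s) for p, s in zip(weights, states))
print(np.trace(rho))                                # ~1.0

# A "measurement" against an aspect direction yields a scalar relevance score: Tr(rho |a><a|).
aspect = rng.normal(size=16)
aspect /= np.linalg.norm(aspect)
print(float(aspect @ rho @ aspect))
```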

18 pages, 1537 KiB  
Article
HierLabelNet: A Two-Stage LLMs Framework with Data Augmentation and Label Selection for Geographic Text Classification
by Zugang Chen and Le Zhao
ISPRS Int. J. Geo-Inf. 2025, 14(7), 268; https://doi.org/10.3390/ijgi14070268 - 8 Jul 2025
Viewed by 305
Abstract
Earth observation data serve as a fundamental resource in Earth system science. The rapid advancement of remote sensing and in situ measurement technologies has led to the generation of massive volumes of data, accompanied by a growing body of geographic textual information. Efficient and accurate classification and management of these geographic texts have become a critical challenge in the field. However, the effectiveness of traditional classification approaches is hindered by several issues, including data sparsity, class imbalance, semantic ambiguity, and the prevalence of domain-specific terminology. To address these limitations and enable the intelligent management of geographic information, this study proposes an efficient geographic text classification framework based on large language models (LLMs), tailored to the unique semantic and structural characteristics of geographic data. Specifically, LLM-based data augmentation strategies are employed to mitigate the scarcity of labeled data and class imbalance. A semantic vector database is utilized to filter the label space prior to inference, enhancing the model’s adaptability to diverse geographic terms. Furthermore, few-shot prompt learning guides LLMs in understanding domain-specific language, while an output alignment mechanism improves classification stability for complex descriptions. This approach offers a scalable solution for the automated semantic classification of geographic text, unlocking the potential of ever-expanding geospatial big data and advancing intelligent information processing and knowledge discovery in the geospatial domain. Full article
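
A small sketch of the "filter the label space before inference" step: embed the candidate labels and the input text, keep only the most similar labels, and build the prompt from that shortlist. The embed() stub is a placeholder for a real embedding model, and the labels and prompt wording are invented for illustration.

```python
# Shortlisting candidate labels with vector similarity before prompting an LLM (sketch only).
import numpy as np

LABELS = ["land cover", "hydrology", "atmosphere", "soil", "cryosphere", "ocean"]

def embed(text, dim=64):
    """Deterministic stand-in for a sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def shortlist(text, k=3):
    sims = [float(embed(text) @ embed(lab)) for lab in LABELS]
    return [LABELS[i] for i in np.argsort(sims)[::-1][:k]]

doc = "Monthly gridded precipitation and river discharge observations for the basin."
candidates = shortlist(doc)
prompt = f"Classify the text into one of {candidates}.\nText: {doc}\nLabel:"
print(prompt)   # the reduced label set is what the LLM actually sees
```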

28 pages, 10581 KiB  
Article
A Textual Semantic Analysis Framework Integrating Geographic Metaphors and GIS-Based Spatial Analysis Methods
by Yu Liu, Zhen Ren, Kaifeng Wang, Qin Tian, Xi Kuai and Sheng Li
Symmetry 2025, 17(7), 1064; https://doi.org/10.3390/sym17071064 - 4 Jul 2025
Viewed by 414
Abstract
Geographic information systems (GISs) have shown considerable promise in enhancing textual semantic analysis. Current textual semantic analysis methods face significant limitations in accurately delineating semantic boundaries, identifying semantic clustering patterns, and representing knowledge evolution. To address these issues, this study proposes a framework that innovatively introduces GIS methods into textual semantic analysis and aligns them with the conceptual foundation of geographical metaphor theory. Specifically, word embedding models are employed to endow semantic primitives with comprehensive, high-dimensional semantic representations. GIS methods and geographical metaphors are subsequently utilized to project both semantic primitives and their relationships into a low-dimensional geospatial analog, thereby constructing a semantic space model that facilitates accurate delineation of semantic boundaries. On the basis of this model, spatial correlation measurements are adopted to reveal underlying semantic patterns, while knowledge evolution is represented using ArcGIS 10.7-based visualization techniques. Experiments on social media data validate the effectiveness of the framework in semantic boundary delineation and clustering pattern identification. Moreover, the framework supports dynamic three-dimensional visualization of topic evolution. Importantly, by employing specialized visualization methods, the proposed framework enables the intuitive representation of semantic symmetry and asymmetry within semantic spaces. Full article
(This article belongs to the Special Issue Applications Based on Symmetry/Asymmetry in Data Mining)
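
As a loose analogy to the geographic-metaphor projection described above, the snippet below maps high-dimensional vectors onto a 2-D plane and finds clustered "regions" with a spatial clustering algorithm. PCA and DBSCAN are stand-ins for the paper's specific projection and GIS analyses, and the vectors are synthetic.

```python
# Project word vectors to a 2-D "semantic map" and delineate clustered regions (illustrative).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
# Two loose "topics" plus noise, as toy word embeddings.
topic_a = rng.normal(0.0, 0.3, size=(30, 100))
topic_b = rng.normal(2.0, 0.3, size=(30, 100))
vectors = np.vstack([topic_a, topic_b])

plane = PCA(n_components=2).fit_transform(vectors)     # the low-dimensional "map"
regions = DBSCAN(eps=1.0, min_samples=5).fit_predict(plane)
print(np.unique(regions))                              # cluster ids ~ semantic regions (-1 = noise)
```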

22 pages, 11841 KiB  
Article
LVID-SLAM: A Lightweight Visual-Inertial SLAM for Dynamic Scenes Based on Semantic Information
by Shuwen Wang, Qiming Hu, Xu Zhang, Wei Li, Ying Wang and Enhui Zheng
Sensors 2025, 25(13), 4117; https://doi.org/10.3390/s25134117 - 1 Jul 2025
Viewed by 472
Abstract
Simultaneous Localization and Mapping (SLAM) remains challenging in dynamic environments. Recent approaches combining deep learning with algorithms for dynamic scenes comprise two types: faster, less accurate object detection-based methods and highly accurate, computationally costly instance segmentation-based methods. In addition, maps lacking semantic information hinder robots from understanding their environment and performing complex tasks. This paper presents a lightweight visual-inertial SLAM system. The system is based on the classic ORB-SLAM3 framework, which starts a new thread for object detection and tightly couples the semantic information of object detection with geometric information to remove feature points from dynamic objects. In addition, Inertial Measurement Unit (IMU) data are employed to assist in feature point extraction, thereby compensating for visual pose tracking loss. Finally, a dense octree-based semantic map is constructed by fusing semantic information and visualized using ROS. LVID-SLAM demonstrates excellent pose accuracy and robustness in highly dynamic scenes on the public TUM dataset, with an average ATE reduction of more than 80% compared to ORB-SLAM3. The experimental results demonstrate that LVID-SLAM outperforms other methods in dynamic conditions, offering both real-time capability and robustness. Full article
(This article belongs to the Section Navigation and Positioning)
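
One way to picture the coupling of detection and geometry is the simple filtering rule below: feature points that fall inside bounding boxes of classes treated as dynamic are dropped before tracking. The boxes, classes, and keypoints are toy values, not the system's actual interface.

```python
# Drop feature points inside detected dynamic-object boxes (illustrative rule, not LVID-SLAM code).
import numpy as np

DYNAMIC = {"person", "car"}
detections = [("person", (100, 50, 220, 400)), ("monitor", (300, 80, 420, 200))]  # class, (x1,y1,x2,y2)
keypoints = np.array([[150, 200], [350, 120], [500, 300]])                        # (x, y) pixel coords

def is_static(pt):
    x, y = pt
    for cls, (x1, y1, x2, y2) in detections:
        if cls in DYNAMIC and x1 <= x <= x2 and y1 <= y <= y2:
            return False
    return True

static_points = keypoints[[is_static(p) for p in keypoints]]
print(static_points)        # only points outside dynamic-object boxes are kept for tracking
```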

20 pages, 19840 KiB  
Article
A Comparison of Segmentation Methods for Semantic OctoMap Generation
by Marcin Czajka, Maciej Krupka, Daria Kubacka, Michał Remigiusz Janiszewski and Dominik Belter
Appl. Sci. 2025, 15(13), 7285; https://doi.org/10.3390/app15137285 - 27 Jun 2025
Viewed by 485
Abstract
Semantic mapping plays a critical role in enabling autonomous vehicles to understand and navigate complex environments. Instead of computationally demanding 3D segmentation of point clouds, we propose efficient segmentation on RGB images and projection of the corresponding LiDAR measurements onto the semantic OctoMap. This study presents a comparative evaluation of different semantic segmentation methods and examines the impact of input image resolution on the accuracy of 3D semantic environment reconstruction, inference time, and computational resource usage. The experiments were conducted using an ROS 2-based pipeline that combines RGB images and LiDAR point clouds. Semantic segmentation is performed using ONNX-exported deep neural networks, with class predictions projected onto corresponding 3D LiDAR data using calibrated extrinsic parameters. The resulting semantically annotated point clouds are fused into a probabilistic 3D representation using an OctoMap, where each voxel stores both occupancy and semantic class information. Multiple encoder–decoder architectures with various backbone configurations are evaluated in terms of segmentation quality, latency, memory footprint, and GPU utilization. Furthermore, a comparison between high and low image resolutions is conducted to assess trade-offs between model accuracy and real-time applicability. Full article
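
A compact sketch of the projection step such a pipeline relies on: LiDAR points are projected through intrinsics and extrinsics into the segmented image and take the class of the pixel they land on. Calibration values and the class map are synthetic placeholders.

```python
# Pinhole projection of LiDAR points into a per-pixel class map (toy calibration values).
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])   # toy intrinsics
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])                   # toy extrinsics (camera frame)
seg = np.random.randint(0, 5, size=(480, 640))                # stand-in class map (H, W)

points = np.array([[1.0, 0.5, 8.0], [-2.0, 0.2, 12.0]])       # LiDAR points, camera coordinates

cam = (R @ points.T).T + t
uv = (K @ cam.T).T
uv = (uv[:, :2] / uv[:, 2:3]).astype(int)                     # pixel coordinates (u, v)

labels = []
for (u, v), depth in zip(uv, cam[:, 2]):
    if depth > 0 and 0 <= u < seg.shape[1] and 0 <= v < seg.shape[0]:
        labels.append(int(seg[v, u]))                         # semantic class for this point
    else:
        labels.append(-1)                                     # point falls outside the image
print(list(zip(uv.tolist(), labels)))
```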

18 pages, 3051 KiB  
Article
Segmentation and Fractional Coverage Estimation of Soil, Illuminated Vegetation, and Shaded Vegetation in Corn Canopy Images Using CCSNet and UAV Remote Sensing
by Shanxin Zhang, Jibo Yue, Xiaoyan Wang, Haikuan Feng, Yang Liu and Meiyan Shu
Agriculture 2025, 15(12), 1309; https://doi.org/10.3390/agriculture15121309 - 18 Jun 2025
Viewed by 554
Abstract
The accurate estimation of corn canopy structure and light conditions is essential for effective crop management and informed variety selection. This study introduces CCSNet, a deep learning-based semantic segmentation model specifically developed to extract fractional coverages of soil, illuminated vegetation, and shaded vegetation from high-resolution corn canopy images acquired by UAVs. CCSNet improves segmentation accuracy by employing multi-level feature fusion and pyramid pooling to effectively capture multi-scale contextual information. The model was evaluated using Pixel Accuracy (PA), mean Intersection over Union (mIoU), and Recall, and was benchmarked against U-Net, PSPNet and UNetFormer. On the test set, CCSNet utilizing a ResNet50 backbone achieved the highest accuracy, with an mIoU of 86.42% and a PA of 93.58%. In addition, its estimation of fractional coverage for key canopy components yielded a root mean squared error (RMSE) ranging from 3.16% to 5.02%. Compared to lightweight backbones (e.g., MobileNetV2), CCSNet exhibited superior generalization performance when integrated with deeper backbones. These results highlight CCSNet’s capability to deliver high-precision segmentation and reliable phenotypic measurements. This provides valuable insights for breeders to evaluate light-use efficiency and facilitates intelligent decision-making in precision agriculture. Full article
(This article belongs to the Special Issue Research Advances in Perception for Agricultural Robots)
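
Downstream of segmentation, fractional coverage and its RMSE reduce to simple per-class pixel statistics, as the short sketch below shows with random masks standing in for CCSNet outputs and reference labels.

```python
# Per-class fractional coverage from label masks, plus the RMSE between two coverages (sketch).
import numpy as np

CLASSES = {0: "soil", 1: "illuminated vegetation", 2: "shaded vegetation"}

def coverage(mask):
    return {name: float((mask == cid).mean()) for cid, name in CLASSES.items()}

pred = np.random.randint(0, 3, size=(512, 512))     # stand-in predicted mask
ref = np.random.randint(0, 3, size=(512, 512))      # stand-in reference mask

cov_pred, cov_ref = coverage(pred), coverage(ref)
rmse = np.sqrt(np.mean([(cov_pred[c] - cov_ref[c]) ** 2 for c in cov_pred]))
print(cov_pred, f"RMSE = {rmse:.4f}")
```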

17 pages, 2975 KiB  
Article
A Topology Identification Strategy of Low-Voltage Distribution Grids Based on Feature-Enhanced Graph Attention Network
by Yang Lei, Fan Yang, Yanjun Feng, Wei Hu and Yinzhang Cheng
Energies 2025, 18(11), 2821; https://doi.org/10.3390/en18112821 - 29 May 2025
Viewed by 441
Abstract
Accurate topological connectivity is critical for the safe operation and management of low-voltage distribution grids (LVDGs). However, due to the complexity of the structure and the lack of measurement equipment, obtaining and maintaining these topological connections has become a challenge. This paper proposes a topology identification strategy for LVDGs based on a feature-enhanced graph attention network (F-GAT). First, the topology of the LVDG is represented as a graph structure using measurement data collected from intelligent terminals, with a feature matrix encoding the basic information of each entity. Secondly, the meta-path form of the heterogeneous graph is designed according to the connection characteristics of the LVDG, and the walking sequence is enhanced using a heterogeneous skip-gram model to obtain an embedded representation of the structural characteristics of each node. Then, the F-GAT model is used to learn potential association patterns and structural information in the graph topology, achieving a joint low-dimensional representation of electrical attributes and graph semantics. Finally, case studies on five urban LVDGs in the Wuhan region are conducted to validate the effectiveness and practicality of the proposed F-GAT model. Full article
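
A single-head graph-attention step over a toy low-voltage grid graph illustrates the general mechanism behind a GAT-style layer. It is not the paper's F-GAT: the adjacency, feature sizes, and scoring follow the standard GAT formulation rather than the feature-enhanced, meta-path variant.

```python
# Minimal single-head graph attention over a toy adjacency (generic GAT-style step).
import torch
import torch.nn.functional as F

num_nodes, in_dim, out_dim = 5, 8, 4
x = torch.randn(num_nodes, in_dim)                    # per-node feature matrix
adj = torch.tensor([[1, 1, 0, 0, 0],
                    [1, 1, 1, 0, 0],
                    [0, 1, 1, 1, 0],
                    [0, 0, 1, 1, 1],
                    [0, 0, 0, 1, 1]], dtype=torch.bool)   # toy connectivity with self-loops

W = torch.nn.Linear(in_dim, out_dim, bias=False)
a = torch.nn.Linear(2 * out_dim, 1, bias=False)

h = W(x)
pairs = torch.cat([h.unsqueeze(1).expand(-1, num_nodes, -1),
                   h.unsqueeze(0).expand(num_nodes, -1, -1)], dim=-1)
scores = F.leaky_relu(a(pairs).squeeze(-1))
scores = scores.masked_fill(~adj, float("-inf"))      # attend only along existing edges
alpha = F.softmax(scores, dim=-1)
out = alpha @ h                                       # updated node embeddings
print(out.shape)                                      # torch.Size([5, 4])
```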

28 pages, 3438 KiB  
Article
Optimizing Remote Sensing Image Retrieval Through a Hybrid Methodology
by Sujata Alegavi and Raghvendra Sedamkar
J. Imaging 2025, 11(6), 179; https://doi.org/10.3390/jimaging11060179 - 28 May 2025
Viewed by 561
Abstract
The contemporary challenge in remote sensing lies in the precise retrieval of increasingly abundant and high-resolution remotely sensed images (RS image) stored in expansive data warehouses. The heightened spatial and spectral resolutions, coupled with accelerated image acquisition rates, necessitate advanced tools for effective data management, retrieval, and exploitation. The classification of large-sized images at the pixel level generates substantial data, escalating the workload and search space for similarity measurement. Semantic-based image retrieval remains an open problem due to limitations in current artificial intelligence techniques. Furthermore, on-board storage constraints compel the application of numerous compression algorithms to reduce storage space, intensifying the difficulty of retrieving substantial, sensitive, and target-specific data. This research proposes an innovative hybrid approach to enhance the retrieval of remotely sensed images. The approach leverages multilevel classification and multiscale feature extraction strategies to enhance performance. The retrieval system comprises two primary phases: database building and retrieval. Initially, the proposed Multiscale Multiangle Mean-shift with Breaking Ties (MSMA-MSBT) algorithm selects informative unlabeled samples for hyperspectral and synthetic aperture radar images through an active learning strategy. Addressing the scaling and rotation variations in image capture, a flexible and dynamic algorithm, modified Deep Image Registration using Dynamic Inlier (IRDI), is introduced for image registration. Given the complexity of remote sensing images, feature extraction occurs at two levels. Low-level features are extracted using the modified Multiscale Multiangle Completed Local Binary Pattern (MSMA-CLBP) algorithm to capture local contexture features, while high-level features are obtained through a hybrid CNN structure combining pretrained networks (Alexnet, Caffenet, VGG-S, VGG-M, VGG-F, VGG-VDD-16, VGG-VDD-19) and a fully connected dense network. Fusion of low- and high-level features facilitates final class distinction, with soft thresholding mitigating misclassification issues. A region-based similarity measurement enhances matching percentages. Results, evaluated on high-resolution remote sensing datasets, demonstrate the effectiveness of the proposed method, outperforming traditional algorithms with an average accuracy of 86.66%. The hybrid retrieval system exhibits substantial improvements in classification accuracy, similarity measurement, and computational efficiency compared to state-of-the-art scene classification and retrieval methods. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing: 2nd Edition)
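
Only the late-fusion idea is sketched here: a low-level texture descriptor and a high-level CNN feature vector are normalized, concatenated, and compared by cosine similarity. Both descriptors are random stand-ins, and the dimensions are assumptions.

```python
# Fuse low-level and high-level descriptors and rank a gallery by cosine similarity (sketch).
import numpy as np

def fuse(low_level, high_level):
    v = np.concatenate([low_level / np.linalg.norm(low_level),
                        high_level / np.linalg.norm(high_level)])
    return v / np.linalg.norm(v)

rng = np.random.default_rng(3)
query = fuse(rng.normal(size=256), rng.normal(size=4096))       # texture-like + CNN-like features
gallery = [fuse(rng.normal(size=256), rng.normal(size=4096)) for _ in range(100)]

scores = np.array([query @ g for g in gallery])
print("best match index:", int(scores.argmax()))
```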

19 pages, 2921 KiB  
Article
Interpolated Retrieval of Relevant Material, Not Irrelevant Material, Enhances New Learning of a Video Lecture In-Person and Online
by Zohara Assadipour, Dahwi Ahn and Jason C. K. Chan
Behav. Sci. 2025, 15(5), 668; https://doi.org/10.3390/bs15050668 - 14 May 2025
Viewed by 575
Abstract
Interpolated retrieval enhances the learning of new information—a finding known as the forward testing effect. The context change account suggests that learning benefits are due to a shift in internal context, which can be triggered through the retrieval of either content-relevant or content-irrelevant information. In two experiments, we examined whether interpolated episodic, autobiographical, and semantic retrieval would enhance new learning of a video lecture, compared to interpolated review. Participants watched a STEM topic lecture divided into three ~5 min segments and completed their assigned interpolated activity after the first two segments. Across both a laboratory (Experiment 1, N = 249) and online setting (Experiment 2, N = 246), only episodic retrieval enhanced the learning of new material; autobiographical and semantic retrieval (content-irrelevant) did not improve new learning. Critically, we introduced a measure of context change to determine whether the level of engagement in these interpolated activities predicted recall. Engagement correlated with criterial test performance when controlling for effort (seriousness). Our results support a multi-factor explanation for the forward testing effect, providing evidence for both the context change and strategy change accounts, although we emphasize that support for context change should be interpreted with caution. Full article
(This article belongs to the Special Issue Educational Applications of Cognitive Psychology)
