Search Results (10)

Search Parameters:
Keywords = local neighborhood dynamic fusion

23 pages, 24301 KiB  
Article
Robust Optical and SAR Image Registration Using Weighted Feature Fusion
by Ao Luo, Anxi Yu, Yongsheng Zhang, Wenhao Tong and Huatao Yu
Remote Sens. 2025, 17(15), 2544; https://doi.org/10.3390/rs17152544 - 22 Jul 2025
Abstract
Image registration constitutes the fundamental basis for the joint interpretation of synthetic aperture radar (SAR) and optical images. However, robust image registration remains challenging due to significant regional heterogeneity in remote sensing scenes (e.g., co-existing urban and marine areas within a single image). To overcome this challenge, this article proposes a novel optical–SAR image registration method named Gradient and Standard Deviation Feature Weighted Fusion (GDWF). First, a Block-local standard deviation (Block-LSD) operator is proposed to extract block-based feature points with regional adaptability. Subsequently, a dual-modal feature description is developed, constructing both gradient-based descriptors and local standard deviation (LSD) descriptors for the neighborhoods surrounding the detected feature points. To further enhance matching robustness, a confidence-weighted feature fusion strategy is proposed. By establishing a reliability evaluation model for similarity measurement maps, the contribution weights of gradient features and LSD features are dynamically optimized, ensuring adaptive performance under varying conditions. To verify the effectiveness of the method, different optical and SAR datasets are used to compare it with three state-of-the-art algorithms: MOGF, CFOG, and FED-HOPC. The experimental results demonstrate that the proposed GDWF algorithm achieves the best registration accuracy and robustness among all compared methods, effectively handling optical–SAR image pairs with significant regional heterogeneity.
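As a rough illustration of the ideas above, the sketch below computes a local standard deviation map and fuses two similarity maps with a confidence weight. The window shape, the peak-to-mean confidence measure, and all function names are assumptions for illustration, not the paper's actual reliability evaluation model.

```python
import numpy as np

def local_std(img, radius=1):
    """Local standard deviation over a (2r+1) x (2r+1) window (naive sketch)."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1].std()
    return out

def fuse_similarity(sim_grad, sim_lsd, eps=1e-9):
    """Confidence-weighted fusion of two similarity maps.
    Confidence here = peak sharpness (peak-to-mean ratio) -- an assumed
    proxy, not the paper's reliability model."""
    def confidence(s):
        return s.max() / (s.mean() + eps)
    cg, cl = confidence(sim_grad), confidence(sim_lsd)
    wg = cg / (cg + cl)                      # weight for the gradient map
    return wg * sim_grad + (1.0 - wg) * sim_lsd
```

A sharply peaked similarity map thus dominates a flat, uninformative one, which is the intuition behind dynamically re-weighting the two descriptor channels.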

35 pages, 58241 KiB  
Article
DGMNet: Hyperspectral Unmixing Dual-Branch Network Integrating Adaptive Hop-Aware GCN and Neighborhood Offset Mamba
by Kewen Qu, Huiyang Wang, Mingming Ding, Xiaojuan Luo and Wenxing Bao
Remote Sens. 2025, 17(14), 2517; https://doi.org/10.3390/rs17142517 - 19 Jul 2025
Abstract
Hyperspectral sparse unmixing (SU) networks have recently received considerable attention due to their ability to model hyperspectral images (HSIs) with a priori spectral libraries and to capture nonlinear features through deep networks. This approach effectively avoids errors associated with endmember extraction and enhances unmixing performance via nonlinear modeling. However, two major challenges remain: the use of large spectral libraries with high coherence leads to computational redundancy and performance degradation; moreover, certain feature extraction models, such as the Transformer, while exhibiting strong representational capabilities, suffer from high computational complexity. To address these limitations, this paper proposes a hyperspectral unmixing dual-branch network, termed DGMNet, that integrates an adaptive hop-aware GCN and a neighborhood offset Mamba. Specifically, DGMNet consists of two parallel branches. The first branch employs the adaptive hop-neighborhood-aware GCN (AHNAGC) module to model global spatial features. The second branch utilizes the neighborhood spatial offset Mamba (NSOM) module to capture fine-grained local spatial structures. Subsequently, the designed Mamba-enhanced dual-stream feature fusion (MEDFF) module fuses the global and local spatial features extracted from the two branches and performs spectral feature learning through a spectral attention mechanism. Moreover, DGMNet innovatively incorporates a spectral-library-pruning mechanism into the SU network and designs a new pruning strategy that accounts for the contribution of small-target endmembers, thereby enabling the dynamic selection of valid endmembers and reducing computational redundancy. Finally, an improved ESS-Loss is proposed, which combines an enhanced total variation (ETV) with an l1/2 sparsity constraint to effectively refine the model performance. The experimental results on two synthetic and five real datasets demonstrate the effectiveness and superiority of the proposed method compared with state-of-the-art methods. Notably, experiments on the Shahu dataset from the Gaofen-5 satellite further demonstrate DGMNet's robustness and generalization.
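The ESS-Loss described above combines an enhanced total variation term with an l1/2 sparsity constraint. A minimal numpy sketch of the plain versions of these two terms follows; the "enhanced" TV and the exact loss weighting are not reproduced, and all names and coefficients are illustrative.

```python
import numpy as np

def l_half_sparsity(abund, eps=1e-12):
    """l1/2 sparsity penalty on an abundance matrix (endmembers x pixels)."""
    return np.sqrt(np.abs(abund) + eps).sum()

def total_variation(abund_maps):
    """Anisotropic TV on per-endmember abundance maps, shape (E, H, W).
    A plain TV term; the paper's enhanced ETV is not reproduced here."""
    dh = np.abs(np.diff(abund_maps, axis=1)).sum()
    dw = np.abs(np.diff(abund_maps, axis=2)).sum()
    return dh + dw

def ess_like_loss(recon, target, abund_maps, lam_sp=1e-3, lam_tv=1e-3):
    """Reconstruction MSE + l1/2 sparsity + TV (illustrative weighting)."""
    mse = np.mean((recon - target) ** 2)
    e = abund_maps.shape[0]
    return mse + lam_sp * l_half_sparsity(abund_maps.reshape(e, -1)) \
               + lam_tv * total_variation(abund_maps)
```

The l1/2 penalty grows sub-linearly, so a few large abundances cost less than many small ones, which is why it promotes sparser solutions than a plain l1 term.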
(This article belongs to the Special Issue Artificial Intelligence in Hyperspectral Remote Sensing Data Analysis)

17 pages, 1416 KiB  
Article
A Transformer-Based Pavement Crack Segmentation Model with Local Perception and Auxiliary Convolution Layers
by Yi Zhu, Ting Cao and Yiqing Yang
Electronics 2025, 14(14), 2834; https://doi.org/10.3390/electronics14142834 - 15 Jul 2025
Abstract
Crack detection in complex pavement scenarios remains challenging due to the sparse small-target features and computational inefficiency of existing methods. To address these limitations, this study proposes an enhanced architecture based on Mask2Former. The framework integrates two key innovations. A Local Perception Module (LPM) reconstructs geometric topological relationships through a Sequence-Space Dynamic Transformation Mechanism (DS2M), enhancing neighborhood feature extraction via depthwise separable convolutions. Simultaneously, an Auxiliary Convolutional Layer (ACL) combines lightweight residual convolutions with shallow high-resolution features, preserving critical edge details through channel attention weighting. Experimental evaluations demonstrate the model's superior performance, achieving improvements of 3.2% in mIoU and 2.7% in mAcc compared to baseline methods, while maintaining computational efficiency with only 12.8 GFLOPs. These results validate the effectiveness of geometric relationship modeling and hierarchical feature fusion for pavement crack detection, suggesting practical potential for infrastructure maintenance systems. The proposed approach balances precision and efficiency, offering a viable solution for real-world applications with complex crack patterns and hardware constraints.
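The mIoU and mAcc figures quoted above are standard segmentation metrics. A minimal sketch of how they are computed from a per-pixel confusion matrix (function names are illustrative):

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    """Per-pixel confusion matrix from flattened integer label arrays."""
    idx = gt * num_classes + pred
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes,
                                                                num_classes)

def miou_macc(pred, gt, num_classes):
    """Mean intersection over union and mean class accuracy."""
    cm = confusion_matrix(pred.ravel(), gt.ravel(), num_classes)
    tp = np.diag(cm).astype(float)
    iou = tp / (cm.sum(0) + cm.sum(1) - tp + 1e-12)   # per-class IoU
    acc = tp / (cm.sum(1) + 1e-12)                    # per-class accuracy
    return iou.mean(), acc.mean()
```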

28 pages, 8102 KiB  
Article
Multi-Neighborhood Sparse Feature Selection for Semantic Segmentation of LiDAR Point Clouds
by Rui Zhang, Guanlong Huang, Fengpu Bao and Xin Guo
Remote Sens. 2025, 17(13), 2288; https://doi.org/10.3390/rs17132288 - 3 Jul 2025
Abstract
LiDAR point clouds, as direct carriers of 3D spatial information, comprehensively record the geometric features and spatial topological relationships of object surfaces, providing intelligent systems with rich 3D scene representation capability. However, current point cloud semantic segmentation methods primarily extract features through operations such as convolution and pooling, yet fail to adequately consider the sparse features that significantly influence the final results of point cloud-based scene perception, resulting in insufficient feature representation capability. To address these problems, a sparse feature dynamic graph convolutional neural network, abbreviated as SFDGNet, is constructed in this paper for LiDAR point clouds of complex scenes. In the context of this paper, sparse features refer to feature representations in which only a small number of activation units or channels exhibit significant responses during the forward pass of the model. First, a sparse feature regularization method is used to encourage the network to learn a sparsified feature weight matrix. Next, a split edge convolution module, abbreviated as SEConv, is designed to extract the local features of the point cloud from multiple neighborhoods by dividing the input feature channels, and to effectively learn sparse features while avoiding feature redundancy. Finally, a multi-neighborhood feature fusion strategy is developed that combines an attention mechanism to fuse the local features of different neighborhoods and obtain global features with fine-grained information. Using the S3DIS and ScanNet v2 datasets, we evaluated the feasibility and effectiveness of SFDGNet by comparing it with six typical semantic segmentation models. Compared with the benchmark model DGCNN, SFDGNet improved overall accuracy (OA), mean accuracy (mAcc), mean intersection over union (mIoU), and sparsity by 1.8%, 3.7%, 3.5%, and 85.5% on the S3DIS dataset, respectively. On ScanNet v2, mIoU on the validation set, mIoU on the test set, and sparsity improved by 3.2%, 7.0%, and 54.5%, respectively.
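The abstract reports a "sparsity" improvement without defining the measure here. The sketch below shows one plausible proxy (fraction of near-zero activations) and the L1-style penalty commonly used to encourage sparse feature weights; both are assumptions, not SFDGNet's exact formulation.

```python
import numpy as np

def sparsity_ratio(features, tol=1e-3):
    """Fraction of activation units with negligible response -- an assumed
    proxy for the 'sparsity' figure quoted in the abstract."""
    return float((np.abs(features) < tol).mean())

def sparse_reg_loss(weight_matrix, lam=1e-4):
    """L1 penalty on a feature weight matrix, a standard way to encourage
    the sparsified weights the paper describes."""
    return lam * np.abs(weight_matrix).sum()
```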
(This article belongs to the Special Issue Remote Sensing for 2D/3D Mapping)

18 pages, 8647 KiB  
Article
An Improved DHA Star and ADA-DWA Fusion Algorithm for Robot Path Planning
by Yizhe Jia, Yong Cai, Jun Zhou, Hui Hu, Xuesheng Ouyang, Jinlong Mo and Hao Dai
Robotics 2025, 14(7), 90; https://doi.org/10.3390/robotics14070090 - 29 Jun 2025
Abstract
The advancement of mobile robot technology has made path planning a necessary condition for autonomous navigation, but traditional algorithms suffer from efficiency and reliability issues in dynamic, unstructured environments. This study proposes a Dynamic Hybrid A* (DHA*)–Adaptive Dynamic Window Approach (ADA-DWA) fusion algorithm for efficient and reliable path planning in such environments. The global layer improves the A* algorithm by introducing a dynamic hybrid heuristic function, optimizing the selection of key nodes, and enhancing the neighborhood search strategy, while jointly improving search efficiency and path smoothness through curvature optimization. On this basis, the local planning layer introduces a self-adjusting weight-adaptive system into the DWA framework to dynamically optimize the speed, sampling distribution, and trajectory evaluation metrics, achieving a balance between obstacle avoidance and environmental adaptability. The fusion algorithm's advantages over traditional methods in key operational indicators, including path optimality, computational efficiency, and obstacle avoidance capability, have been verified through numerical simulations and physical platforms. The method resolves the inherent trade-off between efficiency and reliability in complex robot navigation scenarios, providing enhanced operational robustness for practical applications ranging from industrial logistics to field robots.
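For reference, the baseline that DHA* extends is plain grid A*. A minimal sketch with a Manhattan heuristic on a 4-connected occupancy grid; the paper's dynamic hybrid heuristic, key-node selection, and curvature optimization are not reproduced, and all names are illustrative.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """Plain A* on a 4-connected grid (0 = free, 1 = obstacle).
    Returns the path as a list of (row, col), or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    tick = itertools.count()                  # tie-breaker for the heap
    open_heap = [(h(start), next(tick), 0, start, None)]
    came, g_best = {}, {start: 0}
    while open_heap:
        _, _, g, node, parent = heapq.heappop(open_heap)
        if node in came:
            continue                          # already expanded
        came[node] = parent
        if node == goal:                      # reconstruct path
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), next(tick),
                                               ng, (nr, nc), node))
    return None                               # goal unreachable
```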
(This article belongs to the Section Sensors and Control in Robotics)

24 pages, 3113 KiB  
Article
Gradual Geometry-Guided Knowledge Distillation for Source-Data-Free Domain Adaptation
by Yangkuiyi Zhang and Song Tang
Mathematics 2025, 13(9), 1491; https://doi.org/10.3390/math13091491 - 30 Apr 2025
Abstract
Because they require access to the source data during the transfer phase, conventional domain adaptation methods raise safety and privacy concerns. Research attention has thus shifted to a more practical setting known as source-data-free domain adaptation (SFDA). The new challenge is how to obtain reliable semantic supervision in the absence of source-domain training data and target-domain labels. To that end, in this work, we introduce a novel Gradual Geometry-Guided Knowledge Distillation (G2KD) approach for SFDA. Specifically, to address the lack of supervision, we use the local geometry of the data to construct a more credible probability distribution over the potential categories, termed geometry-guided knowledge. Knowledge distillation is then adopted to integrate this extra information to boost the adaptation. More specifically, first, we construct a neighborhood geometry for each target sample using a similarity comparison over the whole target dataset. Second, based on semantic estimates pre-obtained by clustering, we mine soft semantic representations expressing the geometry-guided knowledge via semantic fusion. Third, using the softened labels, we perform knowledge distillation regulated by the new objective. Considering the unsupervised setting of SFDA, in addition to the distillation loss and student loss, we introduce a mixed entropy regulator that minimizes the entropy of individual samples and maximizes the mutual information with augmented data to exploit neighbor relations. Our contribution is that, through local geometry discovery with semantic representation and self-knowledge distillation, the semantic information hidden in local structures is transformed into effective semantic self-supervision. Moreover, our knowledge distillation works gradually, which helps capture the dynamic variations in the local geometry, mitigating guidance degradation and deviation at the same time. Extensive experiments on five challenging benchmarks confirm the state-of-the-art performance of our method.
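The distillation step above can be sketched generically as a temperature-scaled KL divergence between soft labels and the student's predictions. Here `teacher_probs` stands in for the geometry-guided soft labels; the function names and temperature are assumptions, not the paper's exact objective.

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / t
    z = z - z.max(axis=-1, keepdims=True)     # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_probs, t=2.0):
    """KL(teacher || student) at temperature t -- the generic distillation
    objective; geometry-guided soft labels would supply teacher_probs."""
    p = np.asarray(teacher_probs, dtype=float)
    q = softmax(student_logits, t)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean())
```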
(This article belongs to the Special Issue Robust Perception and Control in Prognostic Systems)

20 pages, 3968 KiB  
Article
Research on Multi-Scale Point Cloud Completion Method Based on Local Neighborhood Dynamic Fusion
by Yalun Liu, Jiantao Sun and Ling Zhao
Appl. Sci. 2025, 15(6), 3006; https://doi.org/10.3390/app15063006 - 10 Mar 2025
Abstract
Point cloud completion reconstructs incomplete, sparse inputs into complete 3D shapes. However, in current 3D completion tasks, it is difficult to effectively extract the local details of an incomplete point cloud, resulting in poor restoration of local details and low accuracy of the completed point clouds. To address this problem, this paper proposes a multi-scale point cloud completion method based on local neighborhood dynamic fusion (LNDF: adaptive aggregation of multi-scale local features through dynamic range and weight adjustment). Firstly, the farthest point sampling (FPS) strategy is applied to the original incomplete and defective point clouds for down-sampling, yielding point clouds at three different scales. When extracting features from point clouds of different scales, the local neighborhood aggregation of key points is dynamically adjusted, and a Transformer architecture is integrated to further enhance the correlation of local feature extraction information. Secondly, by generating point clouds layer by layer in a pyramid-like manner, the local details of the point clouds are gradually enriched from coarse to fine to achieve completion. Finally, in the decoder design, inspired by generative adversarial networks (GANs), an attention discriminator built from a feature extraction layer in series with an attention layer is added to further optimize the completion performance of the network. Experimental results show that LNDM-Net reduces the average Chamfer Distance (CD) by 5.78% on PCN and 4.54% on ShapeNet compared to the state of the art. The visualization of completion results demonstrates the superior performance of the method in both completion accuracy and local detail preservation. When handling diverse samples and incomplete point clouds in real-world 3D scenarios from the KITTI dataset, the approach exhibits enhanced generalization capability and completion fidelity.
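The FPS down-sampling step and the Chamfer Distance metric mentioned above are standard components. Minimal numpy sketches follow; starting FPS from index 0 (for determinism, rather than from a random point) is an assumption.

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedy farthest point sampling over an (N, 3) array, starting from
    index 0 for determinism. Returns the indices of the k selected points."""
    chosen = [0]
    d = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        nxt = int(d.argmax())                 # farthest from the chosen set
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between two point sets: mean nearest-
    neighbour squared distance in both directions."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```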
(This article belongs to the Special Issue Advanced Pattern Recognition & Computer Vision)

30 pages, 9485 KiB  
Article
Research on Path Planning Algorithm of Driverless Ferry Vehicles Combining Improved A* and DWA
by Zhaohong Wang and Gang Li
Sensors 2024, 24(13), 4041; https://doi.org/10.3390/s24134041 - 21 Jun 2024
Abstract
Because global planning algorithms cannot avoid unknown dynamic and static obstacles, and local planning algorithms easily fall into local optima in large-scale environments, an improved path planning algorithm based on the integration of A* and DWA is proposed and applied to driverless ferry vehicles. For the traditional A* algorithm, the vector angle cosine is introduced into the heuristic function to sharpen the search direction; the search neighborhood is expanded and optimized to improve search efficiency; and, to reduce the many turning points the A* algorithm produces, a cubic quasi-uniform B-spline curve is used to smooth the path. At the same time, fuzzy control theory is introduced to improve the traditional DWA so that the weight coefficients of the evaluation function can be dynamically adjusted in different environments, effectively avoiding local optimal solutions. Through the fusion of the improved DWA and the improved A* algorithm, the key nodes in global planning are used as sub-goal points to guide the DWA in local planning, ensuring that the ferry vehicle avoids obstacles in real time. Simulation results show that the fusion algorithm can avoid unknown dynamic and static obstacles efficiently and in real time while obtaining the globally optimal path. The effectiveness and adaptability of the fusion algorithm are verified on different environment maps.
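The DWA core that the fuzzy-weight improvement targets can be sketched as sampling (v, w) commands, rolling out short arcs, and scoring them with a weighted evaluation function. Fixed weights stand in here for the paper's fuzzy-adjusted ones; all names and the exact scoring terms are illustrative.

```python
import numpy as np

def simulate_arc(v, w, dt=0.1, steps=10):
    """Roll out a constant (v, w) command from the origin, heading +x."""
    x = y = th = 0.0
    traj = []
    for _ in range(steps):
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        traj.append((x, y))
    return np.array(traj)

def dwa_choose(goal, obstacles, v_samples, w_samples,
               alpha=1.0, beta=1.0, gamma=0.1):
    """Minimal DWA scoring: goal heading, obstacle clearance, and forward
    velocity with fixed weights (the paper adjusts these via fuzzy control)."""
    goal = np.asarray(goal, float)
    obstacles = np.asarray(obstacles, float)
    best, best_score = None, -np.inf
    for v in v_samples:
        for w in w_samples:
            traj = simulate_arc(v, w)
            head = -np.linalg.norm(traj[-1] - goal)        # closer is better
            clear = (np.linalg.norm(traj[:, None, :] - obstacles[None, :, :],
                                    axis=-1).min()
                     if len(obstacles) else 1.0)
            score = alpha * head + beta * clear + gamma * v
            if score > best_score:
                best_score, best = score, (v, w)
    return best
```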
(This article belongs to the Section Vehicular Sensing)

19 pages, 6351 KiB  
Article
Point Cloud Deep Learning Network Based on Local Domain Multi-Level Feature
by Xianquan Han, Xijiang Chen, Hui Deng, Peng Wan and Jianzhou Li
Appl. Sci. 2023, 13(19), 10804; https://doi.org/10.3390/app131910804 - 28 Sep 2023
Abstract
Point cloud deep learning networks have been widely applied in point cloud classification, part segmentation and semantic segmentation. However, current point cloud deep learning networks are insufficient at extracting local features of the point cloud, which affects the accuracy of point cloud classification and segmentation. To address this issue, this paper proposes a local domain multi-level feature fusion point cloud deep learning network. First, a dynamic graph convolutional operation is utilized to obtain the local neighborhood features of the point cloud. Then, relation-shape convolution is used to extract deeper-level edge features of the point cloud, and max pooling is adopted to aggregate the edge features. Finally, point cloud classification and segmentation are realized based on global and local features. We conduct comparison experiments on ModelNet40, a large-scale 3D CAD model dataset, and ShapeNet, a richly annotated, large-scale dataset of 3D shapes. For ModelNet40, the overall accuracy (OA) of the proposed method is similar to that of DGCNN, RS-CNN, PointConv and GAPNet, all exceeding 92%. Compared to PointNet, PointNet++, SO-Net and MSHANet, the OA of the proposed method is improved by 5%, 2%, 3% and 2.6%, respectively. For the ShapeNet dataset, the mean Intersection over Union (mIoU) of the part segmentation achieved by the proposed method is 86.3%, which is 2.9%, 1.4%, 1.7%, 1.7%, 1.2%, 0.1% and 1.0% higher than PointNet, RS-Net, SCN, SPLATNet, DGCNN, RS-CNN and LRC-NET, respectively.
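The dynamic graph convolution step operates on edge features built over each point's k nearest neighbours. A minimal sketch of the standard DGCNN-style edge-feature construction, using brute-force kNN and illustrative names:

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbours of each point (brute force)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)              # exclude the point itself
    return np.argsort(d2, axis=1)[:, :k]

def edge_features(points, k):
    """DGCNN-style edge feature for each (point, neighbour) pair:
    concat(x_i, x_j - x_i), shape (N, k, 2*D) -- the local neighborhood
    feature a dynamic graph convolution operates on."""
    idx = knn_indices(points, k)
    neigh = points[idx]                                   # (N, k, D)
    center = np.repeat(points[:, None, :], k, axis=1)     # (N, k, D)
    return np.concatenate([center, neigh - center], axis=-1)
```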
(This article belongs to the Special Issue Novel Approaches for Remote Sensing Image Processing)

23 pages, 34874 KiB  
Article
Multi-Feature Fusion and Adaptive Kernel Combination for SAR Image Classification
by Xiaoying Wu, Xianbin Wen, Haixia Xu, Liming Yuan and Changlun Guo
Appl. Sci. 2021, 11(4), 1603; https://doi.org/10.3390/app11041603 - 10 Feb 2021
Abstract
Synthetic aperture radar (SAR) image classification is an important task in remote sensing applications. However, it is challenging due to the speckle inherent in SAR imaging, which significantly degrades classification performance. To address this issue, a new SAR image classification framework based on multi-feature fusion and adaptive kernel combination is proposed in this paper. Generalized neighborhoods are newly defined by expressing pixel similarity as a non-negative log-likelihood difference. An adaptive kernel combination is designed over these neighborhoods to dynamically exploit multi-feature information that is robust to speckle noise. Local consistency optimization is then applied to enhance label spatial smoothness during classification. By simultaneously utilizing adaptive kernel combination and local consistency optimization for the first time, the texture feature information, context information within features, generalized spatial information between features, and complementary information among features are fully integrated to ensure accurate and smooth classification. Compared with several state-of-the-art methods on synthetic and real SAR images, the proposed method demonstrates better performance in visual effects and classification quality, as image edges and details are better preserved in the experimental results.
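The basic form of a kernel combination is a convex mix of per-feature kernels. The sketch below uses RBF kernels with fixed mixing weights; the paper adapts the weights over its generalized neighborhoods, which is not reproduced, and all names are illustrative.

```python
import numpy as np

def rbf_kernel(x, y, gamma):
    """RBF (Gaussian) kernel matrix between two (N, D) / (M, D) sets."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(feature_sets_x, feature_sets_y, weights, gammas):
    """Convex combination of per-feature RBF kernels -- the generic form
    of a multi-kernel classifier (weights fixed here, adapted in the paper)."""
    w = np.asarray(weights, float)
    w = w / w.sum()                           # normalize to a convex mix
    return sum(wi * rbf_kernel(fx, fy, g)
               for wi, fx, fy, g in zip(w, feature_sets_x,
                                        feature_sets_y, gammas))
```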
