Search Results (7)

Search Parameters:
Keywords = DAISY descriptor

22 pages, 5446 KB  
Article
Dense 3D Reconstruction Based on Multi-Aspect SAR Using a Novel SAR-DAISY Feature Descriptor
by Shanshan Feng, Fei Teng, Jun Wang and Wen Hong
Remote Sens. 2025, 17(10), 1753; https://doi.org/10.3390/rs17101753 - 17 May 2025
Viewed by 827
Abstract
Dense 3D reconstruction from multi-aspect angle synthetic aperture radar (SAR) imagery has gained considerable attention for urban monitoring applications. However, achieving reliable dense matching between multi-aspect SAR images remains challenging due to three fundamental issues: anisotropic scattering characteristics that cause inconsistent features across different aspect angles, geometric distortions, and speckle noise. To overcome these limitations, we introduce SAR-DAISY, a novel local feature descriptor specifically designed for dense matching in multi-aspect SAR images. The proposed method adapts the DAISY descriptor structure to SAR imagery by incorporating the Gradient by Ratio (GR) operator for robust gradient calculation in speckle-affected imagery and by enforcing multi-aspect consistency constraints during matching. We validated our method on W-band airborne SAR data collected over urban areas using circular flight paths. Experimental results demonstrate that SAR-DAISY generates detailed 3D point clouds with well-preserved structural features and high computational efficiency, and the estimated heights of urban structures align with ground-truth measurements. This approach enables 3D representation of complex urban environments from multi-aspect SAR data without requiring prior knowledge.
(This article belongs to the Special Issue SAR Images Processing and Analysis (2nd Edition))
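
For readers skimming the method: the key SAR-specific step named in the abstract is the Gradient by Ratio (GR) operator, which replaces difference-based gradients so that multiplicative speckle does not dominate. Below is a minimal Python sketch of a ratio-based gradient; the shifted box-filter means, window radius, and epsilon are illustrative assumptions, not the SAR-DAISY parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_by_ratio(amplitude, radius=3, eps=1e-6):
    """Ratio-based gradient for a speckle-affected SAR amplitude image.

    Minimal sketch: local means on opposite sides of each pixel are taken
    from shifted box filters (edge wrap-around ignored for brevity); the
    published GR formulations use exponentially weighted means instead.
    """
    img = amplitude.astype(np.float64) + eps
    mean = uniform_filter(img, size=radius)
    m_right = np.roll(mean, -radius, axis=1)
    m_left = np.roll(mean, radius, axis=1)
    m_down = np.roll(mean, -radius, axis=0)
    m_up = np.roll(mean, radius, axis=0)
    # Log-ratios of opposite-side means act like horizontal/vertical gradients
    # that stay bounded under multiplicative speckle.
    gx = np.log(m_right / m_left)
    gy = np.log(m_down / m_up)
    return np.hypot(gx, gy), np.arctan2(gy, gx)   # magnitude, orientation
```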

25 pages, 9712 KB  
Article
Comparative Analysis of Color Space and Channel, Detector, and Descriptor for Feature-Based Image Registration
by Wenan Yuan, Sai Raghavendra Prasad Poosa and Rutger Francisco Dirks
J. Imaging 2024, 10(5), 105; https://doi.org/10.3390/jimaging10050105 - 28 Apr 2024
Cited by 5 | Viewed by 2650
Abstract
The current study aimed to quantify the value of color spaces and channels as potential replacements for standard grayscale images, as well as the relative performance of open-source detectors and descriptors, for general feature-based image registration purposes, based on a large benchmark dataset. The public dataset UDIS-D, with 1106 diverse image pairs, was selected. In total, 21 color spaces or channels (including RGB, XYZ, Y′CrCb, HLS, and L*a*b* and their corresponding channels, in addition to grayscale), nine feature detectors (AKAZE, BRISK, CSE, FAST, HL, KAZE, ORB, SIFT, and TBMR), and 11 feature descriptors (AKAZE, BB, BRIEF, BRISK, DAISY, FREAK, KAZE, LATCH, ORB, SIFT, and VGG) were evaluated according to reprojection error (RE), root mean square error (RMSE), structural similarity index measure (SSIM), registration failure rate, and feature number, based on 1,950,984 image registrations. No meaningful benefit from any color space or channel was observed, although the XYZ and RGB color spaces and the L* color channel outperformed grayscale by a very small margin. For this dataset, the best-performing color space or channel, detector, and descriptor were XYZ/RGB, SIFT/FAST, and AKAZE. The most robust color space or channel, detector, and descriptor were L*a*b*, TBMR, and VGG. The color channel, detector, and descriptor yielding the most initial detector features and final homography features were Z/L*, FAST, and KAZE. Among the combinations that never failed, XYZ/RGB+SIFT/FAST+VGG/SIFT appeared to provide the highest image registration quality, while Z+FAST+VGG provided the most image features.
(This article belongs to the Special Issue Image Processing and Computer Vision: Algorithms and Applications)
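
As a concrete illustration of the benchmarked pipeline (detect, describe, match, estimate a homography, score with reprojection error), here is a hedged OpenCV sketch using one of the evaluated combinations, FAST + DAISY; the ratio-test and RANSAC thresholds are arbitrary choices rather than the study's settings.

```python
import cv2
import numpy as np

def register_pair(path_a, path_b):
    """One detector/descriptor/homography pipeline of the kind benchmarked above.

    Illustrative only: FAST + DAISY is just one of the 9 x 11 combinations the
    paper evaluates, and the matcher settings here are arbitrary. Requires
    opencv-contrib-python for cv2.xfeatures2d.
    """
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    detector = cv2.FastFeatureDetector_create()
    descriptor = cv2.xfeatures2d.DAISY_create()

    kp_a = detector.detect(img_a)
    kp_b = detector.detect(img_b)
    kp_a, des_a = descriptor.compute(img_a, kp_a)
    kp_b, des_b = descriptor.compute(img_b, kp_b)

    # DAISY is a float descriptor, so match with L2 distance plus a ratio test.
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Reprojection error (RE) of the RANSAC inliers under the homography.
    errs = np.linalg.norm(cv2.perspectiveTransform(src, H) - dst, axis=2).ravel()
    return H, float(errs[mask.ravel() == 1].mean())
```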

17 pages, 3275 KB  
Article
A Dual-Tree–Complex Wavelet Transform-Based Infrared and Visible Image Fusion Technique and Its Application in Tunnel Crack Detection
by Feng Wang and Tielin Chen
Appl. Sci. 2024, 14(1), 114; https://doi.org/10.3390/app14010114 - 22 Dec 2023
Cited by 5 | Viewed by 2045
Abstract
Computer vision methods have been widely used in recent years for the detection of structural cracks. To address the issues of poor image quality and the inadequate performance of semantic segmentation networks under the low-light conditions in tunnels, in this paper, infrared images are used and a preprocessing method based on image fusion is developed. First, the DAISY descriptor and a perspective transform are applied for image alignment. Then, the source image is decomposed into high- and low-frequency components at different scales and directions using DT-CWT, and high- and low-frequency subband fusion rules are designed according to the characteristics of infrared and visible images. Finally, a fused image is reconstructed from the processed coefficients, and the fusion results are evaluated using the improved semantic segmentation network. The results show that preprocessing images with the proposed fusion method yields lower false alarm and missed detection rates than using the source images directly or using a classical fusion algorithm.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
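
To make the fusion step concrete, the sketch below outlines a DT-CWT decomposition and recombination of a pre-aligned visible/infrared pair using the open-source dtcwt package; the averaging and max-magnitude rules are generic placeholders, not the subband rules designed in the paper.

```python
import numpy as np
import dtcwt  # pip install dtcwt

def fuse_dtcwt(visible, infrared, nlevels=4):
    """DT-CWT fusion skeleton (assumes pre-aligned, same-size float images).

    The simple rules below (average the lowpass, keep the larger-magnitude
    highpass coefficient) are placeholders; the paper designs its own
    subband rules tailored to infrared/visible image characteristics.
    """
    transform = dtcwt.Transform2d()
    pv = transform.forward(visible, nlevels=nlevels)
    pi = transform.forward(infrared, nlevels=nlevels)

    # Low-frequency part: average the two approximation subbands.
    lowpass = 0.5 * (pv.lowpass + pi.lowpass)

    # High-frequency part: per coefficient, keep whichever source is stronger.
    highpasses = []
    for hv, hi in zip(pv.highpasses, pi.highpasses):
        highpasses.append(np.where(np.abs(hv) >= np.abs(hi), hv, hi))

    # Reconstruct the fused image from the combined pyramid.
    return transform.inverse(dtcwt.Pyramid(lowpass, tuple(highpasses)))
```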

32 pages, 23484 KB  
Article
An Illumination Insensitive Descriptor Combining the CSLBP Features for Street View Images in Augmented Reality: Experimental Studies
by Zejun Xiang, Ronghua Yang, Chang Deng, Mingxing Teng, Mengkun She and Degui Teng
ISPRS Int. J. Geo-Inf. 2020, 9(6), 362; https://doi.org/10.3390/ijgi9060362 - 1 Jun 2020
Cited by 2 | Viewed by 3068
Abstract
Common feature-matching algorithms for street view images are sensitive to illumination changes in augmented reality (AR), which can lower the matching accuracy between street view images. This paper proposes a novel illumination-insensitive feature descriptor that integrates the center-symmetric local binary pattern (CS-LBP) into a common feature description framework. The proposed descriptor can be used to improve the performance of eight commonly used feature-matching algorithms, namely SIFT, SURF, DAISY, BRISK, ORB, FREAK, KAZE, and AKAZE. We perform experiments on five street view image sequences with different illumination changes. Compared with the eight original algorithms, the improved algorithms increase the matching accuracy of street view images under changing illumination, while the time consumption increases only slightly. The combined descriptors are therefore considerably more robust to lighting changes and better satisfy the high-precision requirements of AR systems.
(This article belongs to the Special Issue GIS Software and Engineering for Big Data)
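
The building block added to each descriptor is the center-symmetric local binary pattern (CS-LBP). A minimal NumPy sketch of the CS-LBP code computation follows; the threshold value and the way the codes are pooled into SIFT/SURF/DAISY description windows are assumptions not taken from the paper.

```python
import numpy as np

def cslbp(image, threshold=0.01):
    """Center-symmetric LBP (CS-LBP) codes, values 0-15, for a grayscale image.

    Minimal sketch: each interior pixel is encoded by comparing the four
    center-symmetric pairs of its 8-neighbourhood.
    """
    img = image.astype(np.float64)
    # Four center-symmetric neighbour pairs around every interior pixel.
    pairs = [
        (img[:-2, :-2], img[2:, 2:]),      # top-left  vs bottom-right
        (img[:-2, 1:-1], img[2:, 1:-1]),   # top       vs bottom
        (img[:-2, 2:], img[2:, :-2]),      # top-right vs bottom-left
        (img[1:-1, 2:], img[1:-1, :-2]),   # right     vs left
    ]
    code = np.zeros((img.shape[0] - 2, img.shape[1] - 2), dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= ((a - b) > threshold).astype(np.uint8) << bit
    return code
```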

26 pages, 1867 KB  
Article
Fast Finger Vein Recognition Based on Sparse Matching Algorithm under a Multicore Platform for Real-Time Individuals Identification
by Ruber Hernández-García, Ricardo J. Barrientos, Cristofher Rojas, Wladimir E. Soto-Silva, Marco Mora, Paulo Gonzalez and Fernando Emmanuel Frati
Symmetry 2019, 11(9), 1167; https://doi.org/10.3390/sym11091167 - 15 Sep 2019
Cited by 3 | Viewed by 4638
Abstract
Individual identification is a problem not only in many private companies but also in governmental and public-order entities. Multiple biometric methods currently exist, each with different advantages. Finger vein recognition is a modern biometric technique with several advantages, especially in terms of security and accuracy. However, image deformations and time efficiency are two major limitations of state-of-the-art contributions. Despite the affine transformations produced during the acquisition process, the geometric structure of finger vein images remains invariant, and this symmetry of finger vein images is exploited in the present work. We combine an image enhancement procedure, the DAISY descriptor, and an optimized Coarse-to-fine PatchMatch (CPM) algorithm on a multicore parallel platform to develop a fast finger vein recognition method for real-time individual identification. Our proposal provides an effective and efficient technique that obtains the displacement between finger vein images and uses it as discriminatory information. Experimental results on two well-known databases, PolyU and SDUMLA, show that our proposed approach achieves results comparable to state-of-the-art deformation-based techniques, with statistically significant differences with respect to non-deformation-based approaches. Moreover, our method greatly outperforms the baseline method in time efficiency.
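
To illustrate where a displacement field comes from, the sketch below computes dense DAISY descriptors (via scikit-image) for two same-size finger vein images and matches them by brute-force nearest neighbour; the paper instead feeds the descriptors to an optimized Coarse-to-fine PatchMatch (CPM) on a multicore platform, so this is only a conceptual stand-in.

```python
import numpy as np
from scipy.spatial import cKDTree
from skimage.feature import daisy

def displacement_field(img_a, img_b, step=8):
    """Dense DAISY descriptors + nearest-neighbour matching between two images.

    Assumes both images have the same shape; returns per-grid-cell
    displacements (dy, dx) in descriptor-grid units.
    """
    da = daisy(img_a, step=step)          # shape (rows, cols, dims)
    db = daisy(img_b, step=step)
    rows, cols, dims = da.shape

    # Brute-force nearest neighbour in descriptor space (CPM would go here).
    tree = cKDTree(db.reshape(-1, dims))
    _, idx = tree.query(da.reshape(-1, dims), k=1)

    # Convert matched grid indices into (dy, dx) displacements.
    gy, gx = np.divmod(idx, cols)
    ref_y, ref_x = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    dy = gy.reshape(rows, cols) - ref_y
    dx = gx.reshape(rows, cols) - ref_x
    return dy, dx
```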

25 pages, 1069 KB  
Article
Individuals Identification Based on Palm Vein Matching under a Parallel Environment
by Ruber Hernández-García, Ricardo J. Barrientos, Cristofher Rojas and Marco Mora
Appl. Sci. 2019, 9(14), 2805; https://doi.org/10.3390/app9142805 - 12 Jul 2019
Cited by 22 | Viewed by 6779
Abstract
Biometric identification and verification are essential mechanisms in modern society. Palm vein recognition is an emerging biometric technique with several advantages, especially in terms of security against forgery. Contactless palm vein systems are more suitable for real-world applications, but two of the major challenges for state-of-the-art contributions are image deformations and time efficiency. In the present work, we propose a new method for palm vein recognition that combines the DAISY descriptor and the Coarse-to-fine PatchMatch (CPM) algorithm in a parallel matching process. Our proposal aims to provide an effective and efficient technique to measure the similarity of palm vein images, using their displacements as discriminatory information. Extensive evaluation on three publicly available databases demonstrates that the discriminability of the proposed approach reaches state-of-the-art results while being considerably superior in time efficiency.
(This article belongs to the Section Computing and Artificial Intelligence)
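
The parallel matching idea can be sketched with a plain multiprocessing pool that scores a probe against every gallery image concurrently; the per-pair score below is a placeholder correlation of dense DAISY descriptors rather than the paper's CPM-based displacement measure.

```python
import numpy as np
from multiprocessing import Pool
from skimage.feature import daisy

def match_score(args):
    """Placeholder similarity between a probe and one gallery image.

    Stand-in for the paper's DAISY + Coarse-to-fine PatchMatch scoring; here
    we simply correlate dense DAISY descriptors of same-size images.
    """
    probe, gallery_img = args
    dp = daisy(probe, step=8).ravel()
    dg = daisy(gallery_img, step=8).ravel()
    return float(np.dot(dp, dg) / (np.linalg.norm(dp) * np.linalg.norm(dg) + 1e-12))

def identify(probe, gallery, workers=4):
    """Score the probe against every gallery image in parallel (multicore)."""
    with Pool(processes=workers) as pool:
        scores = pool.map(match_score, [(probe, g) for g in gallery])
    return int(np.argmax(scores))   # index of the best-matching identity
```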

21 pages, 6133 KB  
Article
A Multi-View Stereo Algorithm Based on Homogeneous Direct Spatial Expansion with Improved Reconstruction Accuracy and Completeness
by Yalan Li and Zhiyang Li
Appl. Sci. 2017, 7(5), 446; https://doi.org/10.3390/app7050446 - 29 Apr 2017
Cited by 6 | Viewed by 5393
Abstract
Reconstruction of 3D structures from multiple 2D images has wide applications in fields such as computer vision and cultural heritage preservation. This paper presents a novel multi-view stereo algorithm based on homogeneous direct spatial expansion (MVS-HDSE) with high reconstruction accuracy and completeness. It adopts several distinctive measures in each step of the reconstruction: initial seed point extraction using the DAISY descriptor to increase the number of initial sparse seed points, homogeneous direct spatial expansion to enhance efficiency, initial value modification via a conditional double-surface-fitting method before optimization and adaptive consistency filtering after optimization to ensure high accuracy, and processing with a multi-level image pyramid to further improve completeness and efficiency. As demonstrated by experiments, owing to the above measures, the proposed algorithm attains much improved reconstruction completeness and accuracy.
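
As a rough illustration of the seed-point stage, the sketch below matches DAISY descriptors between two calibrated views and triangulates the inlier correspondences into a sparse 3D seed cloud; the keypoint detector, the matching threshold, and the omission of the expansion and filtering stages are all simplifications relative to the paper's pipeline.

```python
import cv2
import numpy as np

def seed_points(img1, img2, P1, P2):
    """Sparse 3D seed points from two calibrated grayscale views.

    Assumes 3x4 projection matrices P1, P2. SIFT keypoints are used purely
    for illustration; descriptors are DAISY (needs opencv-contrib-python).
    """
    detector = cv2.SIFT_create()
    daisy = cv2.xfeatures2d.DAISY_create()

    kp1 = detector.detect(img1)
    kp2 = detector.detect(img2)
    kp1, des1 = daisy.compute(img1, kp1)
    kp2, des2 = daisy.compute(img2, kp2)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).T   # 2xN
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).T
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)            # 4xN homogeneous
    return (X_h[:3] / X_h[3]).T                                # Nx3 seed cloud
```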
