Search Results (9)

Search Parameters:
Keywords = completed local binary pattern (CLBP)

28 pages, 3438 KiB  
Article
Optimizing Remote Sensing Image Retrieval Through a Hybrid Methodology
by Sujata Alegavi and Raghvendra Sedamkar
J. Imaging 2025, 11(6), 179; https://doi.org/10.3390/jimaging11060179 - 28 May 2025
Viewed by 567
Abstract
The contemporary challenge in remote sensing lies in the precise retrieval of increasingly abundant and high-resolution remotely sensed (RS) images stored in expansive data warehouses. The heightened spatial and spectral resolutions, coupled with accelerated image acquisition rates, necessitate advanced tools for effective data management, retrieval, and exploitation. The classification of large-sized images at the pixel level generates substantial data, escalating the workload and search space for similarity measurement. Semantic-based image retrieval remains an open problem due to limitations in current artificial intelligence techniques. Furthermore, on-board storage constraints compel the application of numerous compression algorithms to reduce storage space, intensifying the difficulty of retrieving substantial, sensitive, and target-specific data. This research proposes an innovative hybrid approach to enhance the retrieval of remotely sensed images. The approach leverages multilevel classification and multiscale feature extraction strategies to enhance performance. The retrieval system comprises two primary phases: database building and retrieval. Initially, the proposed Multiscale Multiangle Mean-shift with Breaking Ties (MSMA-MSBT) algorithm selects informative unlabeled samples for hyperspectral and synthetic aperture radar images through an active learning strategy. To address scaling and rotation variations in image capture, a flexible and dynamic algorithm, modified Deep Image Registration using Dynamic Inlier (IRDI), is introduced for image registration. Given the complexity of remote sensing images, feature extraction occurs at two levels. Low-level features are extracted using the modified Multiscale Multiangle Completed Local Binary Pattern (MSMA-CLBP) algorithm to capture local texture features, while high-level features are obtained through a hybrid CNN structure combining pretrained networks (AlexNet, CaffeNet, VGG-S, VGG-M, VGG-F, VGG-VDD-16, VGG-VDD-19) and a fully connected dense network. Fusion of low- and high-level features facilitates final class distinction, with soft thresholding mitigating misclassification issues. A region-based similarity measurement enhances matching percentages. Results, evaluated on high-resolution remote sensing datasets, demonstrate the effectiveness of the proposed method, which outperforms traditional algorithms with an average accuracy of 86.66%. The hybrid retrieval system exhibits substantial improvements in classification accuracy, similarity measurement, and computational efficiency compared to state-of-the-art scene classification and retrieval methods.
(This article belongs to the Topic Computational Intelligence in Remote Sensing: 2nd Edition)
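The abstract above describes fusing low-level MSMA-CLBP features with high-level CNN features, with soft thresholding used to mitigate misclassification. As a rough illustration only (the paper's exact fusion rule is not given here; the function names, normalization, and threshold value are hypothetical), a concatenation-plus-soft-thresholding step might look like:

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft thresholding: shrink values toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fuse_features(low, high, t=0.1):
    """Concatenate L2-normalized low- and high-level feature vectors,
    then soft-threshold to suppress weak, noise-prone responses."""
    low = low / (np.linalg.norm(low) + 1e-12)
    high = high / (np.linalg.norm(high) + 1e-12)
    return soft_threshold(np.concatenate([low, high]), t)
```

Normalizing each stream before concatenation keeps one feature family from dominating the fused vector; the threshold t would be tuned on validation data.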

21 pages, 6058 KiB  
Article
A Fine-Tuned Hybrid Stacked CNN to Improve Bengali Handwritten Digit Recognition
by Ruhul Amin, Md. Shamim Reza, Yuichi Okuyama, Yoichi Tomioka and Jungpil Shin
Electronics 2023, 12(15), 3337; https://doi.org/10.3390/electronics12153337 - 4 Aug 2023
Cited by 5 | Viewed by 2280
Abstract
Recognition of Bengali handwritten digits poses several unique challenges, including variation in writing styles, the different shapes and sizes of digits, varying levels of noise, and distortion in the images. Despite significant improvements, there is still room to raise the recognition rate. By building datasets and developing models, researchers can advance the state of the art, which has important implications for various domains. In this paper, we introduce a new dataset of 5440 handwritten Bengali digit images acquired from a Bangladeshi university that is now publicly available. Both conventional machine learning (ML) and CNN models were used to evaluate the task. To begin, we scrutinized the results of the ML models after integrating three image feature descriptors, namely Local Binary Pattern (LBP), Completed Local Binary Pattern (CLBP), and Histogram of Oriented Gradients (HOG), using principal component analysis (PCA), which explained 95% of the variation in these descriptors. Then, via a fine-tuning approach, we designed three customized CNN models and their stack to recognize Bengali handwritten digits. On the handcrafted image features, the XGBoost classifier achieved the best accuracy at 85.29%, an ROC AUC score of 98.67%, and precision, recall, and F1 scores ranging from 85.08% to 85.18%, indicating that there was still room for improvement. On our own data, the proposed customized CNN models and their stack surpassed all other models, reaching a 99.66% training accuracy and a 97.57% testing accuracy. In addition, to test the robustness of our proposed CNN models, we used another dataset of Bengali handwritten digits obtained from the Kaggle repository, on which our stacked CNN model again performed remarkably, obtaining a training accuracy of 99.26% and a testing accuracy of 96.14%. Without rigorous image preprocessing, and with fewer epochs and less computation time, our proposed CNN model performed best and proved the most resilient across all of the datasets.
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, 3rd Edition)
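The abstract above mentions reducing the concatenated LBP/CLBP/HOG descriptors with PCA so that 95% of the variance is retained. A minimal NumPy sketch of that reduction step (the helper name and SVD-based formulation are illustrative, not the authors' code):

```python
import numpy as np

def pca_95(X, var_ratio=0.95):
    """Project feature matrix X (n_samples x n_features) onto the
    smallest number of principal components whose cumulative
    explained-variance ratio reaches var_ratio."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / np.sum(S ** 2)        # per-component variance ratio
    k = int(np.searchsorted(np.cumsum(explained), var_ratio)) + 1
    return Xc @ Vt[:k].T, k                      # projected data, #components kept
```

The same effect is available via scikit-learn's PCA with a fractional n_components; the explicit SVD is shown to keep the sketch dependency-free.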

20 pages, 26007 KiB  
Article
The Influence of CLBP Window Size on Urban Vegetation Type Classification Using High Spatial Resolution Satellite Images
by Zhou Chen, Xianyun Fei, Xiangwei Gao, Xiaoxue Wang, Huimin Zhao, Kapo Wong, Jin Yeu Tsou and Yuanzhi Zhang
Remote Sens. 2020, 12(20), 3393; https://doi.org/10.3390/rs12203393 - 16 Oct 2020
Cited by 6 | Viewed by 2690
Abstract
Urban vegetation can regulate ecological balance, reduce the influence of urban heat islands, and improve people's mental state. Accordingly, classification of urban vegetation types plays a significant role in urban vegetation research. This paper evaluates various window sizes of completed local binary pattern (CLBP) texture features for classifying urban vegetation in high-spatial-resolution WorldView-2 images of Shanghai (China) and Lianyungang (Jiangsu province, China). Two study areas were selected to demonstrate the stability and universality of the different CLBP window sizes. Using spectral information alone and spectral information combined with texture information, the imagery was classified by vegetation type using the random forest (RF) method. The results show that combining spectral information with CLBP window textures achieves 7.28% greater accuracy than using spectral information alone, with greater accuracy for single vegetation types than for mixed ones. The optimal CLBP window sizes for grass, shrub, arbor, shrub-grass, arbor-grass, and arbor-shrub-grass are 3 × 3, 3 × 3, 11 × 11, 9 × 9, 9 × 9, and 7 × 7, respectively. Furthermore, the optimal CLBP window size is determined by the roughness of the vegetation texture.
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas)

16 pages, 3133 KiB  
Article
Multi-Scale Feature Fusion for Coal-Rock Recognition Based on Completed Local Binary Pattern and Convolution Neural Network
by Xiaoyang Liu, Wei Jing, Mingxuan Zhou and Yuxing Li
Entropy 2019, 21(6), 622; https://doi.org/10.3390/e21060622 - 25 Jun 2019
Cited by 32 | Viewed by 4730
Abstract
Automatic coal-rock recognition is one of the critical technologies for intelligent coal mining and processing. Most existing coal-rock recognition methods have defects such as unsatisfactory performance and low robustness. To solve these problems, and taking the distinctive visual features of coal and rock into consideration, the multi-scale feature fusion coal-rock recognition (MFFCRR) model, based on a multi-scale Completed Local Binary Pattern (CLBP) and a Convolutional Neural Network (CNN), is proposed in this paper. Firstly, multi-scale CLBP features are extracted from coal-rock image samples in the Texture Feature Extraction (TFE) sub-model, representing the texture information of the coal-rock image. Secondly, high-level deep features are extracted from coal-rock image samples in the Deep Feature Extraction (DFE) sub-model, representing the macroscopic information of the coal-rock image. The texture and macroscopic information are acquired based on information theory. Thirdly, the multi-scale feature vector is generated by fusing the multi-scale CLBP feature vector and the deep feature vector. Finally, multi-scale feature vectors are input to a nearest neighbor classifier with the chi-square distance to realize coal-rock recognition. Experimental results show that the coal-rock image recognition accuracy of the proposed MFFCRR model reaches 97.9167%, an improvement of 2–3% over state-of-the-art coal-rock recognition methods.
(This article belongs to the Section Signal and Data Analysis)
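The abstract above ends by classifying the fused feature vectors with a nearest neighbor classifier under the chi-square distance. A minimal sketch of that final step (function names are illustrative; the paper's feature vectors would replace the toy histograms):

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two non-negative feature histograms."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def nn_classify(train_feats, train_labels, query):
    """Assign the query the label of its chi-square nearest neighbor."""
    dists = [chi_square_distance(f, query) for f in train_feats]
    return train_labels[int(np.argmin(dists))]
```

The chi-square distance down-weights differences in well-populated histogram bins, which is why it is a common pairing with LBP-family histograms.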

21 pages, 4821 KiB  
Article
Ship Classification Based on Multifeature Ensemble with Convolutional Neural Network
by Qiaoqiao Shi, Wei Li, Ran Tao, Xu Sun and Lianru Gao
Remote Sens. 2019, 11(4), 419; https://doi.org/10.3390/rs11040419 - 18 Feb 2019
Cited by 53 | Viewed by 6139
Abstract
As an important part of maritime traffic, ships play an important role in military and civilian applications. However, ships' appearances are susceptible to factors such as lighting, occlusion, and sea state, making ship classification challenging, so exploring both global and detailed information is of great importance for ship classification in optical remote sensing images. In this paper, a novel method to obtain a discriminative feature representation of a ship image is proposed. The proposed classification framework consists of a multifeature ensemble based on a convolutional neural network (ME-CNN). Specifically, the two-dimensional discrete fractional Fourier transform (2D-DFrFT) is employed to extract multi-order amplitude and phase information, which contains important information such as profiles, edges, and corners; the completed local binary pattern (CLBP) is used to obtain local information about ship images; and a Gabor filter is used to gain global information about ship images. Then, a deep convolutional neural network (CNN), which extracts high-level features automatically and has performed well on object classification tasks, is applied to extract more abstract features based on the above information. After high-level feature learning, decision-level fusion is investigated as the fusion strategy for the final classification result. The average accuracy of the proposed approach is 98.75% on the BCCT200-resize data, 92.50% on the original BCCT200 data, and 87.33% on the challenging VAIS data, which validates the effectiveness of the proposed method compared to existing state-of-the-art algorithms.
(This article belongs to the Special Issue AI-based Remote Sensing Oceanography)
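The abstract above fuses the per-branch outputs at the decision level. A simple majority vote over the branch predictions is one common decision-level fusion rule (the paper's exact rule may differ, e.g. weighted voting over class probabilities):

```python
from collections import Counter

def decision_level_fusion(branch_predictions):
    """Majority vote over the class labels predicted by each
    feature branch (2D-DFrFT, CLBP, Gabor, ...)."""
    votes = Counter(branch_predictions)
    return votes.most_common(1)[0][0]
```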

16 pages, 5077 KiB  
Article
Assessing Texture Features to Classify Coastal Wetland Vegetation from High Spatial Resolution Imagery Using Completed Local Binary Patterns (CLBP)
by Minye Wang, Xianyun Fei, Yuanzhi Zhang, Zhou Chen, Xiaoxue Wang, Jin Yeu Tsou, Dawei Liu and Xia Lu
Remote Sens. 2018, 10(5), 778; https://doi.org/10.3390/rs10050778 - 17 May 2018
Cited by 40 | Viewed by 6460
Abstract
Coastal wetland vegetation is a vital component that plays an important role in environmental protection and the maintenance of ecological balance. As such, efficient classification of coastal wetland vegetation types is key to the preservation of wetlands. With its detailed spatial information, high-spatial-resolution imagery constitutes an important tool for extracting texture features that improve classification accuracy. In this paper, a texture feature, Completed Local Binary Patterns (CLBP), which has proven highly suitable for face recognition, is applied to vegetation classification using high-spatial-resolution Pléiades satellite imagery of the central zone of the Yancheng National Natural Reservation (YNNR) in Jiangsu, China. To demonstrate the potential of CLBP texture features, Grey Level Co-occurrence Matrix (GLCM) texture features were used for comparison. Using spectral data alone and spectral data combined with texture features, the image was classified by vegetation type using a Support Vector Machine (SVM). The results show that CLBP and GLCM texture features yielded an accuracy 6.50% higher than that obtained using only spectral information, and CLBP showed greater improvement in classification accuracy than GLCM for Spartina alterniflora. Furthermore, among the CLBP features, CLBP_magnitude (CLBP_m) was more effective than CLBP_sign (CLBP_s), CLBP_center (CLBP_c), and the combinations CLBP_s/m and CLBP_s/m/c. These findings suggest that the CLBP approach offers potential for vegetation classification in high-spatial-resolution images.
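Several of these results rest on the CLBP decomposition into sign (CLBP_s), magnitude (CLBP_m), and center (CLBP_c) components. A compact NumPy sketch of that decomposition for 8 neighbors at radius 1 (a simplification of Guo et al.'s formulation; thresholding details vary across implementations):

```python
import numpy as np

def clbp_components(img, threshold=None):
    """Compute CLBP_S, CLBP_M and CLBP_C codes for the interior pixels
    of a 2-D grayscale image (8 neighbors, radius 1). The local
    difference d_p = g_p - g_c is split into a sign part and a
    magnitude part |d_p|; CLBP_C thresholds the center pixel against
    the global mean."""
    img = np.asarray(img, dtype=float)
    center = img[1:-1, 1:-1]
    # 8 neighbor offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    diffs = np.stack([img[1 + dy:img.shape[0] - 1 + dy,
                          1 + dx:img.shape[1] - 1 + dx] - center
                      for dy, dx in offsets])
    signs = (diffs >= 0).astype(int)              # sign bits
    mags = np.abs(diffs)
    c = mags.mean() if threshold is None else threshold
    mag_bits = (mags >= c).astype(int)            # magnitude bits
    weights = (2 ** np.arange(8)).reshape(8, 1, 1)
    clbp_s = (signs * weights).sum(axis=0)        # classic LBP code
    clbp_m = (mag_bits * weights).sum(axis=0)
    clbp_c = (center >= img.mean()).astype(int)   # binary center code
    return clbp_s, clbp_m, clbp_c
```

In practice the three codes are histogrammed (jointly or separately) over a window, which is where the window-size comparisons in the paper above come in.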

22 pages, 5754 KiB  
Article
Cross-Domain Ground-Based Cloud Classification Based on Transfer of Local Features and Discriminative Metric Learning
by Zhong Zhang, Donghong Li, Shuang Liu, Baihua Xiao and Xiaozhong Cao
Remote Sens. 2018, 10(1), 8; https://doi.org/10.3390/rs10010008 - 21 Dec 2017
Cited by 29 | Viewed by 3810
Abstract
Cross-domain ground-based cloud classification is a challenging issue, as the appearance of cloud images from different cloud databases varies extremely. Two problems fundamental to cross-domain ground-based cloud classification are feature representation and similarity measurement. In this paper, we propose an effective feature representation called transfer of local features (TLF) and a measurement method called discriminative metric learning (DML). TLF is a generalized representation framework that can integrate various kinds of local features, e.g., local binary patterns (LBP), local ternary patterns (LTP), and completed LBP (CLBP). To handle domain shift, such as variations in illumination, image resolution, capturing location, and occlusion, TLF mines the maximum response within regions to obtain a representation that is stable under domain variations. We also propose to simultaneously learn a discriminative metric, making use of sample pairs and the relationships among cloud classes to learn the distance metric. Furthermore, to improve the practicability of the proposed method, we replace the original cloud images with convolutional activation maps, to which TLF and DML are then applied. The proposed method has been validated on three cloud databases collected in China, provided by the Chinese Academy of Meteorological Sciences (CAMS), the Meteorological Observation Centre (MOC), and the Institute of Atmospheric Physics (IAP). The classification accuracies outperform those of state-of-the-art methods.
(This article belongs to the Section Remote Sensing Image Processing)

17 pages, 4703 KiB  
Article
Remote Sensing Image Scene Classification Using Multi-Scale Completed Local Binary Patterns and Fisher Vectors
by Longhui Huang, Chen Chen, Wei Li and Qian Du
Remote Sens. 2016, 8(6), 483; https://doi.org/10.3390/rs8060483 - 8 Jun 2016
Cited by 154 | Viewed by 9732
Abstract
An effective remote sensing image scene classification approach using patch-based multi-scale completed local binary pattern (MS-CLBP) features and a Fisher vector (FV) is proposed. The approach extracts a set of local patch descriptors by partitioning an image and its multi-scale versions into dense patches and using the CLBP descriptor to characterize local rotation invariant texture information. Then, Fisher vector encoding is used to encode the local patch descriptors (i.e., patch-based CLBP features) into a discriminative representation. To improve the discriminative power of feature representation, multiple sets of parameters are used for CLBP to generate multiple FVs that are concatenated as the final representation for an image. A kernel-based extreme learning machine (KELM) is then employed for classification. The proposed method is extensively evaluated on two public benchmark remote sensing image datasets (i.e., the 21-class land-use dataset and the 19-class satellite scene dataset) and leads to superior classification performance (93.00% for the 21-class dataset with an improvement of approximately 3% when compared with the state-of-the-art MS-CLBP and 94.32% for the 19-class dataset with an improvement of approximately 1%).
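The abstract above starts by partitioning an image and its multi-scale versions into dense patches before CLBP encoding. A minimal sketch of that patch-extraction step (the nearest-neighbor resizing and the patch/stride/scale values are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def dense_patches(img, patch=8, stride=4, scales=(1.0, 0.75, 0.5)):
    """Extract dense square patches from an image and from its
    downscaled versions (nearest-neighbor resampling for simplicity)."""
    patches = []
    for s in scales:
        h, w = int(img.shape[0] * s), int(img.shape[1] * s)
        rows = (np.arange(h) / s).astype(int)     # nearest-neighbor
        cols = (np.arange(w) / s).astype(int)     # source indices
        scaled = img[np.ix_(rows, cols)]
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                patches.append(scaled[y:y + patch, x:x + patch])
    return patches
```

Each patch would then be described by a CLBP histogram, and the resulting descriptor set encoded into a Fisher vector.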

33 pages, 2019 KiB  
Article
Image-Based Coral Reef Classification and Thematic Mapping
by A.S.M. Shihavuddin, Nuno Gracias, Rafael Garcia, Arthur C. R. Gleason and Brooke Gintert
Remote Sens. 2013, 5(4), 1809-1841; https://doi.org/10.3390/rs5041809 - 15 Apr 2013
Cited by 108 | Viewed by 15403
Abstract
This paper presents a novel image classification scheme for benthic coral reef images that can be applied to both single image and composite mosaic datasets. The proposed method can be configured to the characteristics (e.g., the size of the dataset, number of classes, resolution of the samples, color information availability, class types, etc.) of individual datasets. The proposed method uses completed local binary pattern (CLBP), grey level co-occurrence matrix (GLCM), Gabor filter response, and opponent angle and hue channel color histograms as feature descriptors. For classification, either k-nearest neighbor (KNN), neural network (NN), support vector machine (SVM), or probability density weighted mean distance (PDWMD) is used. The combination of features and classifiers that attains the best results is presented together with guidelines for selection. The accuracy and efficiency of our proposed method are compared with other state-of-the-art techniques using three benthic and three texture datasets. The proposed method achieves the highest overall classification accuracy of any of the tested methods and has moderate execution time. Finally, the proposed classification scheme is applied to a large-scale image mosaic of the Red Sea to create a completely classified thematic map of the reef benthos.
