Search Results (6)

Search Parameters:
Keywords = JSEG

18 pages, 14990 KiB  
Article
Automated Delineation of Microstands in Hemiboreal Mixed Forests Using Stereo GeoEye-1 Data
by Linda Gulbe, Juris Zarins and Ints Mednieks
Remote Sens. 2022, 14(6), 1471; https://doi.org/10.3390/rs14061471 - 18 Mar 2022
Cited by 2 | Viewed by 2114
Abstract
A microstand is a small forest area with a homogeneous tree species, height, and density composition. High-spatial-resolution GeoEye-1 multispectral (MS) images and GeoEye-1-based canopy height models (CHMs) allow microstands to be delineated automatically. This paper studied the potential benefits of two microstand segmentation workflows: (1) our modification of JSEG and (2) generic region merging (GRM) of the Orfeo Toolbox, both intended for microstand border refinement and automated stand volume estimation in hemiboreal forests. Our modification of JSEG uses a CHM as the primary data source for segmentation and refines the results using MS data, whereas the GRM workflow fuses the CHM and multispectral data through multiband segmentation. The accuracy was evaluated using several sets of metrics (unsupervised, supervised direct assessment, and system-level assessment). Metrics were also calculated for a regular segment grid to check the benefits compared with simple image patches. The metrics showed very similar results for both workflows. The most successful parameter combinations retrieved over 75% of the boundaries selected by a human interpreter. However, the impact of data fusion and parameter combinations on stand volume estimation accuracy was minimal, causing variations of the RMSE within approximately 7 m³/ha.
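The boundary-retrieval figure quoted above can be checked with a simple recall metric. The sketch below is a minimal illustration, not the authors' code: it measures the fraction of reference boundary pixels (e.g. those drawn by a human interpreter) lying within a small tolerance of the automatically delineated segment boundaries. The label maps and the 2-pixel tolerance are assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_boundaries(labels: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels whose 4-neighbourhood contains another label."""
    b = np.zeros(labels.shape, dtype=bool)
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    b[1:, :]  |= labels[:-1, :] != labels[1:, :]
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    b[:, 1:]  |= labels[:, :-1] != labels[:, 1:]
    return b

def boundary_recall(pred_labels, ref_labels, tol_px=2.0):
    """Fraction of reference boundary pixels within tol_px of a predicted boundary."""
    pred_b = segment_boundaries(pred_labels)
    ref_b = segment_boundaries(ref_labels)
    # Distance from every pixel to the nearest predicted boundary pixel.
    dist_to_pred = ndimage.distance_transform_edt(~pred_b)
    hits = (dist_to_pred[ref_b] <= tol_px).sum()
    return hits / max(ref_b.sum(), 1)

if __name__ == "__main__":
    ref = np.zeros((100, 100), int); ref[:, 50:] = 1    # toy reference microstands
    pred = np.zeros((100, 100), int); pred[:, 52:] = 1  # slightly shifted automatic result
    print(f"boundary recall: {boundary_recall(pred, ref):.2f}")
```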

19 pages, 6357 KiB  
Article
Deep Convolutional Neural Network with KNN Regression for Automatic Image Annotation
by Ramla Bensaci, Belal Khaldi, Oussama Aiadi and Ayoub Benchabana
Appl. Sci. 2021, 11(21), 10176; https://doi.org/10.3390/app112110176 - 29 Oct 2021
Cited by 6 | Viewed by 3121
Abstract
Automatic image annotation is an active field of research in which a set of annotations is automatically assigned to images based on their content. In the literature, some works opted for handcrafted features and manual approaches to linking concepts to images, whereas others involved convolutional neural networks (CNNs) as black boxes to solve the problem without external interference. In this work, we introduce a hybrid approach that combines the advantages of both CNNs and conventional concept-to-image assignment approaches. J-image segmentation (JSEG) is first used to segment the image into a set of homogeneous regions, a CNN is then employed to produce a rich feature descriptor per region, and the vector of locally aggregated descriptors (VLAD) is applied to the extracted features to generate compact and unified descriptors. Thereafter, the not-too-deep (N2D) clustering algorithm is performed to define the local manifolds constituting the feature space, and finally, semantic relatedness is calculated for both image–concept and concept–concept pairs using KNN regression to better grasp the meaning of concepts and how they relate. Through a comprehensive experimental evaluation, our method demonstrated superiority over a wide range of recent related works, yielding F1 scores of 58.89% and 80.24% on the Corel 5k and MSRC v2 datasets, respectively. Additionally, it demonstrated a relatively high capacity for learning more concepts with higher accuracy, resulting in N+ values of 212 and 22 on the Corel 5k and MSRC v2 datasets, respectively.
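As a concrete illustration of the aggregation step in this pipeline, the sketch below encodes a set of per-region descriptors into a single fixed-length VLAD vector using a KMeans codebook. It is not the paper's implementation; the codebook size (k = 8), the descriptor dimensionality, and the random stand-in features are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def vlad_encode(descriptors: np.ndarray, codebook: KMeans) -> np.ndarray:
    """Aggregate region descriptors into one VLAD vector of length k * d."""
    k, d = codebook.n_clusters, descriptors.shape[1]
    assignments = codebook.predict(descriptors)
    vlad = np.zeros((k, d))
    for i, centre in enumerate(codebook.cluster_centers_):
        members = descriptors[assignments == i]
        if len(members):
            vlad[i] = (members - centre).sum(axis=0)   # residuals to the codeword
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))        # power normalisation
    vlad = vlad.ravel()
    return vlad / (np.linalg.norm(vlad) + 1e-12)        # L2 normalisation

rng = np.random.default_rng(0)
train_desc = rng.normal(size=(2000, 64))                # stand-in for CNN region features
codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(train_desc)
image_desc = vlad_encode(rng.normal(size=(40, 64)), codebook)  # one image's regions
print(image_desc.shape)                                 # (512,) = 8 * 64
```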

19 pages, 14069 KiB  
Article
Edge-Based Color Image Segmentation Using Particle Motion in a Vector Image Field Derived from Local Color Distance Images
by Wutthichai Phornphatcharaphong and Nawapak Eua-Anant
J. Imaging 2020, 6(7), 72; https://doi.org/10.3390/jimaging6070072 - 16 Jul 2020
Cited by 10 | Viewed by 4122
Abstract
This paper presents an edge-based color image segmentation approach, derived from the method of particle motion in a vector image field, which could previously be applied only to monochrome images. Rather than using an edge vector field derived from a gradient vector field and a normal compressive vector field derived from a Laplacian-gradient vector field, two novel orthogonal vector fields were directly computed from the color image, one parallel and the other orthogonal to the edges. These were then used in the model to force a particle to move along the object edges. The normal compressive vector field is created from the collection of center-to-centroid vectors of local color distance images. The edge vector field is then derived from the normal compressive vector field so as to obtain a vector field analogous to a Hamiltonian gradient vector field. Using the PASCAL Visual Object Classes Challenge 2012 (VOC2012) and the Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500), the benchmark scores of the proposed method are provided in comparison to those of the traditional particle motion in a vector image field (PMVIF), watershed, simple linear iterative clustering (SLIC), K-means, mean shift, and J-value segmentation (JSEG). The proposed method yields a better Rand index (RI), global consistency error (GCE), normalized variation of information (NVI), boundary displacement error (BDE), and Dice coefficient, along with faster computation and better noise resistance.
(This article belongs to the Special Issue Color Image Segmentation)
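The center-to-centroid construction mentioned in the abstract can be sketched directly. The toy code below is an assumption-laden illustration, not the authors' implementation: for each pixel it builds a local color-distance patch and takes the vector from the window center to the patch centroid as the normal compressive direction (the edge-parallel field would be its 90° rotation). The 5 × 5 window and the synthetic two-color image are assumptions.

```python
import numpy as np

def centre_to_centroid_field(img: np.ndarray, w: int = 5) -> np.ndarray:
    """img: HxWx3 float RGB; returns an HxWx2 field of (dy, dx) vectors."""
    h, wdt, _ = img.shape
    r = w // 2
    field = np.zeros((h, wdt, 2))
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    for y in range(r, h - r):
        for x in range(r, wdt - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            dist = np.linalg.norm(patch - img[y, x], axis=2)  # local colour distance image
            total = dist.sum()
            if total > 0:
                field[y, x, 0] = (ys * dist).sum() / total    # centroid offset in y
                field[y, x, 1] = (xs * dist).sum() / total    # centroid offset in x
    return field

demo = np.zeros((32, 32, 3)); demo[:, 16:] = 1.0              # vertical colour edge
f = centre_to_centroid_field(demo)
print(f[16, 14], f[16, 17])   # both vectors point toward the edge between columns 15 and 16
```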

15 pages, 2858 KiB  
Article
Color Image Complexity versus Over-Segmentation: A Preliminary Study on the Correlation between Complexity Measures and Number of Segments
by Mihai Ivanovici, Radu-Mihai Coliban, Cosmin Hatfaludi and Irina Emilia Nicolae
J. Imaging 2020, 6(4), 16; https://doi.org/10.3390/jimaging6040016 - 30 Mar 2020
Cited by 6 | Viewed by 4658
Abstract
It is said that image segmentation is a very difficult or complex task. First of all, we emphasize the subtle difference between the notions of difficulty and complexity. Then, in this article, we focus on the question of how two widely used color image complexity measures correlate with the number of segments resulting from over-segmentation. We study the evolution of both the image complexity measures and the number of segments as the image complexity is gradually decreased by means of low-pass filtering. In this way, we tackle the possibility of predicting the difficulty of color image segmentation based on image complexity measures. We analyze the complexity of images from the point of view of color entropy and color fractal dimension, and, for color fractal images and the Berkeley data set, we correlate these two metrics with the segmentation results, more specifically the number of quasi-flat zones and the number of JSEG regions in the resulting segmentation map. We report on our experimental results and draw conclusions.
(This article belongs to the Special Issue Color Image Segmentation)
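For a sense of how such a correlation study can be set up, here is a small sketch that is not the study's exact protocol: it estimates color entropy from a quantised RGB histogram and computes a Pearson correlation against segment counts. The 4-bit quantisation, the toy images, and the stand-in segment counts are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

def colour_entropy(img, bits=4):
    """img: HxWx3 uint8; Shannon entropy (in bits) of the quantised colour histogram."""
    q = (img >> (8 - bits)).astype(np.int64)
    codes = (q[..., 0] << (2 * bits)) | (q[..., 1] << bits) | q[..., 2]
    counts = np.bincount(codes.ravel(), minlength=1 << (3 * bits))
    p = counts[counts > 0] / codes.size
    return float(-(p * np.log2(p)).sum())

# Toy images with an increasing number of distinct colours (a crude complexity proxy).
rng = np.random.default_rng(1)
images = [(rng.integers(0, n, size=(64, 64, 3)) * (255 // (n - 1))).astype(np.uint8)
          for n in (2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96)]
entropies = [colour_entropy(im) for im in images]
# Stand-in segment counts (e.g. quasi-flat zones or JSEG regions) that grow with complexity.
segment_counts = [30 * e + rng.normal(0, 5) for e in entropies]
r, p = pearsonr(entropies, segment_counts)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```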

23 pages, 55361 KiB  
Article
Dynamic Post-Earthquake Image Segmentation with an Adaptive Spectral-Spatial Descriptor
by Genyun Sun, Yanling Hao, Xiaolin Chen, Jinchang Ren, Aizhu Zhang, Binghu Huang, Yuanzhi Zhang and Xiuping Jia
Remote Sens. 2017, 9(9), 899; https://doi.org/10.3390/rs9090899 - 30 Aug 2017
Cited by 10 | Viewed by 7530
Abstract
The region merging algorithm is a widely used segmentation technique for very high resolution (VHR) remote sensing images. However, the segmentation of post-earthquake VHR images is more difficult due to the complexity of these images, especially the high intra-class and low inter-class variability among damage objects. Two key issues must be resolved here: the first is to find an appropriate descriptor to measure the similarity of two adjacent regions, since the diverse damage objects, such as landslides, debris flows, and collapsed buildings, exhibit high complexity. The other is how to solve the over-segmentation and under-segmentation problems that are commonly encountered with conventional merging strategies due to their strong dependence on local information. To tackle these two issues, an adaptive dynamic region merging approach (ADRM) is introduced, which combines an adaptive spectral-spatial descriptor and a dynamic merging strategy to adapt to the changes of merging regions and successfully detect objects scattered globally in a post-earthquake image. In the new descriptor, the spectral similarity and spatial similarity of any two adjacent regions are automatically combined to measure their overall similarity. Accordingly, the new descriptor offers adaptive semantic descriptions for geo-objects and is thus capable of characterizing different damage objects. In addition, the adaptive spectral-spatial descriptor is embedded in the defined testing order and combined with graph models to construct a dynamic merging strategy. The new strategy can find the globally optimal merging order and ensures that the most similar regions are merged first. By combining the two strategies, ADRM can identify spatially scattered objects and alleviate over-segmentation and under-segmentation. The performance of ADRM was evaluated by comparison with four state-of-the-art segmentation methods, including the fractal net evolution approach (FNEA, as implemented in the eCognition software, Trimble Inc., Westminster, CO, USA), the J-value segmentation (JSEG) method, the graph-based segmentation (GSEG) method, and the statistical region merging (SRM) approach. The experiments were conducted on six VHR subarea images captured by RGB sensors mounted on aerial platforms, acquired after the 2008 Wenchuan Ms 8.0 earthquake. Quantitative and qualitative assessments demonstrated that the proposed method offers high feasibility and improved accuracy in the segmentation of post-earthquake VHR aerial images.
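To make the merging logic concrete, the sketch below implements a generic dynamic region merging loop on a region adjacency graph: after every merge, dissimilarities are recomputed and the globally most similar adjacent pair is merged next. The specific dissimilarity (a size-weighted mean-spectrum distance), the stopping threshold, and the toy regions are assumptions, not the ADRM descriptor itself.

```python
import numpy as np

def dissimilarity(a, b):
    spectral = np.linalg.norm(a["mean"] - b["mean"])                 # spectral term
    spatial = (a["size"] * b["size"]) / (a["size"] + b["size"])      # size-based spatial term
    return spectral * np.sqrt(spatial)

def dynamic_region_merging(regions, adjacency, threshold):
    """regions: {id: {'mean': np.ndarray, 'size': int}}; adjacency: {id: set of ids}."""
    while True:
        pairs = [(dissimilarity(regions[i], regions[j]), i, j)
                 for i in regions for j in adjacency[i] if i < j]
        if not pairs:
            break
        d, i, j = min(pairs)                                         # globally best merge first
        if d > threshold:
            break
        a, b = regions[i], regions.pop(j)                            # merge j into i
        a["mean"] = (a["mean"] * a["size"] + b["mean"] * b["size"]) / (a["size"] + b["size"])
        a["size"] += b["size"]
        adjacency[i] |= adjacency.pop(j) - {i, j}                    # inherit j's neighbours
        for n in adjacency:                                          # redirect references to j
            if j in adjacency[n]:
                adjacency[n].remove(j)
                if n != i:
                    adjacency[n].add(i)
    return regions

regions = {k: {"mean": np.array(m, float), "size": s}
           for k, (m, s) in enumerate([([0.1, 0.2, 0.1], 40), ([0.12, 0.19, 0.11], 35),
                                        ([0.8, 0.7, 0.6], 50), ([0.82, 0.71, 0.59], 45)])}
adjacency = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}
print(sorted(dynamic_region_merging(regions, adjacency, threshold=2.0)))  # two regions remain
```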

32 pages, 4321 KiB  
Article
Building Roof Segmentation from Aerial Images Using a Line- and Region-Based Watershed Segmentation Technique
by Youssef El Merabet, Cyril Meurie, Yassine Ruichek, Abderrahmane Sbihi and Raja Touahni
Sensors 2015, 15(2), 3172-3203; https://doi.org/10.3390/s150203172 - 2 Feb 2015
Cited by 29 | Viewed by 8598
Abstract
In this paper, we present a novel strategy for roof segmentation from aerial images (orthophotoplans) based on the cooperation of edge- and region-based segmentation methods. The proposed strategy is composed of three major steps. The first, the pre-processing step, consists of simplifying the acquired image with an appropriate invariant/gradient pair, optimized for the application, in order to limit the illumination changes (shadows, brightness, etc.) affecting the images. The second step is composed of two main parallel treatments: on the one hand, the simplified image is segmented by watershed regions. Although this first segmentation generally provides good results, the image is often over-segmented. To alleviate this problem, an efficient region merging strategy adapted to the particularities of the orthophotoplan, based on a 2D modeling of roof ridges, is applied. On the other hand, the simplified image is segmented by watershed lines. The third step consists of integrating both watershed segmentation strategies into a single cooperative segmentation scheme in order to achieve satisfactory segmentation results. Tests were performed on orthophotoplans containing 100 roofs of varying complexity, and the results were evaluated with the VINET criterion using ground-truth image segmentations. A comparison with five popular segmentation techniques from the literature demonstrates the effectiveness and reliability of the proposed approach. Indeed, we obtain a segmentation rate of 96% with the proposed method, compared to 87.5% with statistical region merging (SRM), 84% with mean shift, 82% with the color structure code (CSC), 80% with the efficient graph-based segmentation algorithm (EGBIS), and 71% with JSEG.
(This article belongs to the Section Remote Sensors)
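The over-segment-then-merge idea behind the region branch of such a pipeline can be illustrated with generic tools. The sketch below is a hedged illustration, not the paper's method: it over-segments a synthetic two-plane "roof" image with a watershed on its gradient and then merges adjacent regions whose mean intensities are close. The synthetic image, the grid of seeds, and the 0.1 merge threshold are assumptions.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

# Synthetic grey image: two "roof planes" with different intensities plus mild noise.
img = np.full((80, 80), 0.2)
img[:, 40:] = 0.7
img += np.random.default_rng(0).normal(0, 0.02, img.shape)

# Deliberate over-segmentation: watershed on the gradient, seeded from a regular grid.
markers = np.zeros(img.shape, dtype=int)
markers[5::10, 5::10] = np.arange(1, 65).reshape(8, 8)           # 64 seed points
labels = watershed(sobel(img), markers)

def merge_similar(labels, img, thr=0.1):
    """Union adjacent regions whose mean intensities differ by less than thr."""
    means = {r: img[labels == r].mean() for r in np.unique(labels)}
    parent = {r: r for r in means}
    def find(r):
        while parent[r] != r:
            r = parent[r]
        return r
    # All horizontally and vertically adjacent label pairs.
    pairs = set(map(tuple, np.concatenate([
        np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], 1),
        np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], 1)])))
    for a, b in pairs:
        if a != b and abs(means[a] - means[b]) < thr:
            parent[find(a)] = find(b)
    return np.vectorize(find)(labels)

merged = merge_similar(labels, img)
print(len(np.unique(labels)), "->", len(np.unique(merged)))      # fewer regions after merging
```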