Search Results (9)

Search Parameters:
Keywords = deep embedding clustering (DEC)
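All nine results below build on the deep embedded clustering (DEC) algorithm. As background, here is a minimal NumPy sketch of DEC's core step: the Student's-t soft assignment between embeddings and cluster centres, and the sharpened target distribution that DEC's KL-divergence loss pulls the assignments toward. The toy embeddings and centres are illustrative, not data from any of the listed papers.

```python
import numpy as np

def soft_assign(z, mu, alpha=1.0):
    """Student's t soft assignment q_ij between embeddings z and centres mu."""
    # squared distances, shape (n_points, n_clusters)
    d2 = ((z[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Sharpened target p_ij that the KL(P || Q) loss pulls q toward."""
    w = q ** 2 / q.sum(axis=0)          # square, normalise by cluster frequency
    return w / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
z = rng.normal(size=(10, 2))             # toy embeddings
mu = np.array([[0.0, 0.0], [3.0, 3.0]])  # toy cluster centres
q = soft_assign(z, mu)
p = target_distribution(q)
```

In the full algorithm, `z` comes from an autoencoder's bottleneck and both the encoder weights and `mu` are updated by gradient descent on the KL loss; the sketch shows only the assignment step common to the variants cited below.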

27 pages, 6263 KB  
Article
Revealing the Ecological Security Pattern in China’s Ecological Civilization Demonstration Area
by Xuelong Yang, Haisheng Cai, Xiaomin Zhao and Han Zhang
Land 2025, 14(8), 1560; https://doi.org/10.3390/land14081560 - 29 Jul 2025
Cited by 3 | Viewed by 1707
Abstract
The construction and maintenance of an ecological security pattern (ESP) are important for promoting the regional development of ecological civilizations, realizing sustainable and healthy development, and creating a harmonious and beautiful space for human beings and nature to thrive. Traditional construction methods are limited by a single dimension, a single method, and excessive subjective human intervention in source and corridor identification, and they neglect the multidimensional quality of the sources and the structural connectivity and resilience optimization of the corridors. Therefore, an ecological civilization demonstration area (Jiangxi Province) was used as the study area, a new research method for ESP was proposed, and an empirical study was conducted. Ecosystem service (ES) importance, disturbance, and risk were evaluated and sustainability sources were extracted with the deep embedded clustering–self-organizing map (DEC–SOM) deep unsupervised clustering algorithm, and ecological networks (ENs) were constructed by applying the minimum cumulative resistance (MCR) gravity model and circuit theory. The ENs were then optimized by combining the comparative advantages of the two approaches in terms of structural connectivity and resilience. EN performance was compared among different functional control zones, and the resulting ESP comprises 42 ecological sources, 134 corridors, 210 restoration nodes, and 280 protection nodes, organized as ‘1 nucleus, 3 belts, 6 zones, and multiple corridors’, with the key restoration components and protection functions clarified. This study offers a valuable reference for ecological management, protection, and restoration and provides insights into the promotion of harmonious symbiosis between human beings and nature and sustainable regional development.
(This article belongs to the Special Issue Urban Ecological Indicators: Land Use and Coverage)

20 pages, 6740 KB  
Article
Concept-Based Explanations for Millimeter Wave Radar Target Recognition
by Qijie Shang, Tieran Zheng, Liwen Zhang, Youcheng Zhang and Zhe Ma
Remote Sens. 2024, 16(14), 2640; https://doi.org/10.3390/rs16142640 - 19 Jul 2024
Viewed by 1715
Abstract
This paper presents exploratory work on the use of Testing with Concept Activation Vectors (TCAV) within a concept-based explanation framework to provide explainability for millimeter-wave (MMW) radar target recognition. Given that the radar spectrum is difficult for non-domain experts to understand visually, defining concepts for radar remains a significant challenge. In response, drawing from the visual analytical experience of experts, some basic concepts based on brightness, striping, size, and shape are adopted in this paper. However, the simplicity of basic concept definitions sometimes leads to vague correlations with recognition targets and significant variability among individuals, limiting their adaptability to specific tasks. To address these issues, this study proposes a Basic Concept-Guided Deep Embedding Clustering (BCG-DEC) method that can effectively discover task-specific composite concepts. BCG-DEC methodically analyzes the deep semantic information of radar data through four distinct stages from the perspective of concept discovery, ensuring that the concepts discovered accurately conform to the task-specific property of MMW radar target recognition. The experimental results show that the proposed method not only expands the number of concepts but also effectively solves the problem of difficulty in annotating basic concepts. In the ROD2021 MMW radar explainability experiments, the concepts proved crucial for recognizing specific categories of radar targets.
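As context for the TCAV component above, a heavily simplified sketch of a concept sensitivity score: in full TCAV the concept activation vector (CAV) is the normal of a linear classifier separating concept activations from random activations, and the gradients are class-logit gradients with respect to those activations. Here the CAV is approximated by a difference of activation means, and every array is a synthetic stand-in, not the paper's radar data.

```python
import numpy as np

def tcav_score(grads, cav):
    """Fraction of gradients with a positive component along the CAV."""
    return float((grads @ cav > 0).mean())

rng = np.random.default_rng(0)
concept_acts = rng.normal(1.0, 1.0, (50, 16))     # activations on concept examples
random_acts = rng.normal(0.0, 1.0, (50, 16))      # activations on random examples
cav = concept_acts.mean(0) - random_acts.mean(0)  # difference-of-means stand-in
grads = rng.normal(0.3, 1.0, (40, 16))            # toy class-logit gradients
score = tcav_score(grads, cav)                    # in [0, 1]; 0.5 ~ no sensitivity
```

A score far from 0.5 suggests the class prediction is sensitive to the concept direction; TCAV additionally tests significance against CAVs built from multiple random sets.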

19 pages, 4042 KB  
Article
A Novel Data Mining Framework to Investigate Causes of Boiler Failures in Waste-to-Energy Plants
by Dong Wang, Lili Jiang, Måns Kjellander, Eva Weidemann, Johan Trygg and Mats Tysklind
Processes 2024, 12(7), 1346; https://doi.org/10.3390/pr12071346 - 28 Jun 2024
Cited by 2 | Viewed by 2366
Abstract
Examining boiler failure causes is crucial for thermal power plant safety and profitability. However, traditional approaches are complex and expensive, lacking precise operational insights. Although data-driven approaches hold substantial potential in addressing these challenges, there is a gap in systematic approaches for investigating failure root causes with unlabeled data. Therefore, we propose a novel framework rooted in data mining methodologies to identify the operational variables responsible for boiler failures. The primary objective was to furnish precise guidance for future operations to proactively prevent similar failures. The framework was centered on two data mining approaches, Principal Component Analysis (PCA) + K-means and Deep Embedded Clustering (DEC), with PCA + K-means serving as the baseline against which the performance of DEC was evaluated. To demonstrate the framework’s specifics, a case study was performed using datasets obtained from a waste-to-energy plant in Sweden. The results showed the following: (1) The clustering outcomes of DEC consistently surpass those of PCA + K-means across nearly every dimension. (2) The operational temperature variables T-BSH3rm, T-BSH2l, T-BSH3r, T-BSH1l, T-SbSH3, and T-BSH1r emerged as the most significant contributors to the failures. It is advisable to maintain their operational levels around 527 °C, 432 °C, 482 °C, 338 °C, 313 °C, and 343 °C, respectively, and crucial to prevent them from reaching or exceeding 594 °C, 471 °C, 537 °C, 355 °C, 340 °C, and 359 °C for prolonged durations. The findings offer the opportunity to improve future operational conditions, thereby extending the overall service life of the boiler. Consequently, operators can address faulty tubes during scheduled annual maintenance without encountering failures and disrupting production.
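The PCA + K-means baseline described above can be sketched with scikit-learn. The sensor matrix below is random stand-in data (two simulated operating regimes), and the silhouette score is just one of several cluster-quality measures one might use when comparing such a baseline against DEC.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# stand-in for scaled boiler sensor readings: two distinct operating regimes
X = np.vstack([rng.normal(0, 1, (100, 6)), rng.normal(4, 1, (100, 6))])

Z = PCA(n_components=2).fit_transform(X)   # compress correlated sensor channels
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
score = silhouette_score(Z, labels)        # higher = better-separated clusters
```

In a real study the cluster assignments would then be cross-referenced with failure records to find which operating regime precedes failures.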

21 pages, 1731 KB  
Article
Analyzing Indo-European Language Similarities Using Document Vectors
by Samuel R. Schrader and Eren Gultepe
Informatics 2023, 10(4), 76; https://doi.org/10.3390/informatics10040076 - 26 Sep 2023
Cited by 2 | Viewed by 4165
Abstract
The evaluation of similarities between natural languages often relies on prior knowledge of the languages being studied. We describe three methods for building phylogenetic trees and clustering languages without the use of language-specific information. The input to our methods is a set of document vectors trained on a corpus of parallel translations of the Bible into 22 Indo-European languages, representing 4 language families: Indo-Iranian, Slavic, Germanic, and Romance. This text corpus consists of a set of 532,092 Bible verses, with 24,186 identical verses translated into each language. The methods are (A) hierarchical clustering using distance between language vector centroids, (B) hierarchical clustering using a network-derived distance measure, and (C) Deep Embedded Clustering (DEC) of language vectors. We evaluate our methods using a ground-truth tree and language families derived from said tree. All three achieve clustering F-scores above 0.9 on the Indo-Iranian and Slavic families; most confusion is between the Germanic and Romance families. The mean F-scores across all families are 0.864 (centroid clustering), 0.953 (network partitioning), and 0.763 (DEC). This shows that document vectors can be used to capture and compare linguistic features of multilingual texts, and thus could help extend language similarity and other translation studies research.
(This article belongs to the Special Issue Digital Humanities and Visualization)
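Method (A) above, hierarchical clustering on distances between language vector centroids, can be sketched with SciPy. The centroids below are synthetic vectors rather than trained document embeddings, the language names are placeholders, and average linkage is an assumption, not necessarily the paper's exact linkage choice.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
# hypothetical per-language centroids: mean document vector per language
centroids = {
    "lang_a": rng.normal(0, 0.1, 8), "lang_b": rng.normal(0, 0.1, 8),
    "lang_c": rng.normal(2, 0.1, 8), "lang_d": rng.normal(2, 0.1, 8),
}
names = list(centroids)
D = pdist(np.array([centroids[n] for n in names]))   # pairwise centroid distances
tree = linkage(D, method="average")                  # UPGMA-style tree
families = fcluster(tree, t=2, criterion="maxclust") # cut tree into 2 "families"
```

The `tree` array is exactly what `scipy.cluster.hierarchy.dendrogram` would plot as a phylogeny; `fcluster` flattens it into family labels for F-score evaluation against a ground truth.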

13 pages, 1135 KB  
Article
ClearF++: Improved Supervised Feature Scoring Using Feature Clustering in Class-Wise Embedding and Reconstruction
by Sehee Wang, So Yeon Kim and Kyung-Ah Sohn
Bioengineering 2023, 10(7), 824; https://doi.org/10.3390/bioengineering10070824 - 10 Jul 2023
Cited by 2 | Viewed by 1656
Abstract
Feature selection methods are essential for accurate disease classification and identifying informative biomarkers. While information-theoretic methods have been widely used, they often exhibit limitations such as high computational costs. Our previously proposed method, ClearF, addresses these issues by using reconstruction error from low-dimensional embeddings as a proxy for the entropy term in the mutual information. However, ClearF still has limitations, including a nontransparent bottleneck layer selection process, which can result in unstable feature selection. To address these limitations, we propose ClearF++, which simplifies the bottleneck layer selection and incorporates feature-wise clustering to enhance biomarker detection. We compare its performance with other commonly used methods such as MultiSURF and IFS, as well as ClearF, across multiple benchmark datasets. Our results demonstrate that ClearF++ consistently outperforms these methods in terms of prediction accuracy and stability, even with limited samples. We also observe that employing the Deep Embedded Clustering (DEC) algorithm for feature-wise clustering improves performance, indicating its suitability for handling complex data structures with limited samples. ClearF++ offers an improved biomarker prioritization approach with enhanced prediction performance and faster execution. Its stability and effectiveness with limited samples make it particularly valuable for biomedical data analysis.
(This article belongs to the Special Issue Machine Learning Technology in Biomedical Engineering)
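The reconstruction-error idea behind ClearF and ClearF++ can be illustrated with a rough PCA stand-in; the actual method uses class-wise embeddings and a tuned bottleneck, none of which appears here. The intuition shown: features that a low-dimensional embedding reconstructs well are largely explained by the shared structure the embedding captured.

```python
import numpy as np
from sklearn.decomposition import PCA

def reconstruction_error_scores(X, n_components=2):
    """Per-feature reconstruction error from a low-dimensional embedding.

    A rough stand-in for ClearF-style scoring: project the data to a
    bottleneck, reconstruct, and measure how much each feature loses.
    """
    pca = PCA(n_components=n_components).fit(X)
    X_hat = pca.inverse_transform(pca.transform(X))
    return ((X - X_hat) ** 2).mean(axis=0)   # one error value per feature

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))                    # toy data, 10 features
X[:, 0] = X[:, 1] + 0.01 * rng.normal(size=50)   # feature 0 ~ copy of feature 1
scores = reconstruction_error_scores(X)          # redundant pair scores lowest
```

ClearF++ then clusters features (e.g. with DEC) and scores them group-wise, which is what stabilises selection on small-sample biomedical data.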

19 pages, 5181 KB  
Article
Deep Clustering Efficient Learning Network for Motion Recognition Based on Self-Attention Mechanism
by Tielin Ru and Ziheng Zhu
Appl. Sci. 2023, 13(5), 2996; https://doi.org/10.3390/app13052996 - 26 Feb 2023
Cited by 7 | Viewed by 2516
Abstract
Multi-person behavior event recognition has become an increasingly challenging research field in human–computer interaction. With the rapid development of deep learning and computer vision, it plays an important role in the inference and analysis of real sports events: given video of a sports event, analyzing and judging the behavior trends of athletes is often constrained by large-scale datasets and hardware, takes considerable time, and yields results of limited accuracy. Therefore, we propose a deep clustering learning network for motion recognition under the self-attention mechanism, which efficiently addresses the accuracy and efficiency problems of sports event analysis and judgment. By using a long short-term memory network (LSTM), the method not only avoids the gradient vanishing and explosion problems of recurrent neural networks (RNNs) but also captures the internal correlations among multiple people on the sports field, and it combines the motion coding information in key frames with deep embedded clustering (DEC) to better analyze and judge the complex behavior changes of athletes. In addition, the self-attention mechanism allows the whole sports video to be analyzed macroscopically while focusing on specific attributes of the movement: it extracts the key posture features of the athletes, further enhances these features, effectively reduces the number of parameters in the self-attention computation, lowers the computational complexity, and maintains the ability to capture details, improving the accuracy and efficiency of reasoning and judgment. Through verification on large video datasets of mainstream sports, we achieved high accuracy and improved the efficiency of inference and prediction, demonstrating that the method is effective and feasible for the analysis of sports videos.

21 pages, 4326 KB  
Article
The Performance of Topic Evolution Based on a Feature Maximization Measurement for the Linguistics Domain
by Junchao Feng, Jianjun Miao, Yue Tang, Yuechen Li and Jundong Feng
Axioms 2022, 11(8), 412; https://doi.org/10.3390/axioms11080412 - 18 Aug 2022
Cited by 1 | Viewed by 2018
Abstract
Understanding the performance of the data mining approach and topic evolution in a certain scientific domain is imperative to capturing key domain developments and facilitating knowledge transfer within and across domains. Our research selects linguistics as an exploratory domain and exploits the feature maximization (FM) measurement for feature selection, combined with the contrast ratio, to conduct a diachronic analysis of the linguistics domain’s topics. To accurately mine the linguistics domain’s topics and obtain the optimal clustering model, we exploit an integrated method that associates the deep embedding for clustering (DEC) algorithm with the keywords-based Text Representation Matrix (KTRM) and Lamirel’s EC index, and we test the performance of this method. The results show that the FM measurement is applicable to topic mining in the linguistics domain and that the combined method has the advantage of unbiased clustering-model optimization, applying to the design of non-parametric clustering algorithms from low-dimensional to high-dimensional datasets. The findings suggest that this approach could be suitable for a diachronic analysis of topic evolution and could facilitate topic detection. In addition, these topic-detection findings can give rise to knowledge-fusion insights, with language as an available research object in interdisciplinary research.

28 pages, 8374 KB  
Article
Evaluation and Prediction of Landslide Susceptibility in Yichang Section of Yangtze River Basin Based on Integrated Deep Learning Algorithm
by Lili Chang, Rui Zhang and Chunsheng Wang
Remote Sens. 2022, 14(11), 2717; https://doi.org/10.3390/rs14112717 - 6 Jun 2022
Cited by 20 | Viewed by 3853
Abstract
Landslide susceptibility evaluation (LSE) estimates the probability of landslide occurrence in a region under a specific geological environment and trigger conditions, which is crucial to preventing and controlling landslide risk. The mainstream of the Yangtze River in Yichang City belongs to the largest basin in the Three Gorges Reservoir area and is prone to landslides. Affected by global climate change, seismic activity, and accelerated urbanization, geological disasters such as landslides, collapses, and debris flows in the study area have increased significantly. Therefore, it is urgent to carry out LSE in the Yichang section of the Yangtze River Basin. The main results are as follows: (1) Based on a historical landslide catalog, geological data, geographic data, hydrological data, remote sensing data, and other multi-source spatio-temporal big data, we construct the LSE index system. (2) The unsupervised Deep Embedding Clustering (DEC) algorithm and a deep integration network (SE-CapNet, a Capsule Neural Network based on SENet) are used for the first time for non-landslide sample selection and LSE in the study area; the accuracy of the algorithm is 96.29%. (3) Based on the constructed susceptibility model and rainfall forecast data, the main driving mechanisms of landslides in the Yangtze River Basin were revealed, and the study area’s mid- to long-term LSE prediction and trend analysis were carried out. (4) The complete results show that the method has good performance and high precision, providing a reference for subsequent LSE, landslide susceptibility prediction (LSP), and change-rule research, and a scientific basis for landslide disaster prevention.

22 pages, 10550 KB  
Article
A Defect-Inspection System Constructed by Applying Autoencoder with Clustered Latent Vectors and Multi-Thresholding Classification
by Cheng-Chang Lien and Yu-De Chiu
Appl. Sci. 2022, 12(4), 1883; https://doi.org/10.3390/app12041883 - 11 Feb 2022
Viewed by 2746
Abstract
Defect inspection is an important issue in the field of industrial automation. In general, defect-inspection methods can be categorized into supervised and unsupervised methods. When supervised learning is applied to defect inspection, the large variation of defect patterns can make the data coverage incomplete for model training, which can introduce the problem of low detection accuracy. Therefore, this paper focuses on the construction of a defect-inspection system with an unsupervised learning model. Furthermore, few studies have focused on the relationship between the reconstruction error on normal areas and the repair effect on defective areas in unsupervised defect-inspection systems; this paper addresses this important issue. There are four main contributions. First, we compare the effects of the SSIM (Structural Similarity Index Measure) and MSE (Mean Square Error) functions on the reconstruction error. Second, various kinds of Autoencoders are constructed by referring to the Inception architecture in GoogLeNet and the DEC (Deep Embedded Clustering) module. Third, two-stage model training is proposed to train the Autoencoder models: in the first stage, the Autoencoder models are trained to have basic image-reconstruction capabilities for the normal areas; in the second stage, the DEC algorithm is added to the training of the Autoencoder model to further strengthen feature discrimination and thereby increase the capability to repair defective areas. Fourth, a multi-thresholding image segmentation method is applied to improve the classification accuracy of normal and defect images. In this study, we focus on defect inspection of texture patterns and therefore select the nanofiber image database and the carpet and grid images in the MVTec database for our experiments. The experimental results show that the accuracy of classifying normal and defect patch nanofiber images is about 86%, and the classification accuracy approaches 89% and 98% for the carpet and grid datasets in the MVTec database, respectively. Our proposed defect-inspection and classification system thus outperforms the methods reported for MVTec.
(This article belongs to the Special Issue Advances in Computer Vision, Volume Ⅱ)
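The final multi-thresholding stage described above can be sketched in a few lines of NumPy: given an input patch and the autoencoder's repaired reconstruction, the per-pixel error is binned into normal / suspect / defect classes. The threshold values and the toy 4×4 "image" are placeholders, not values from the paper.

```python
import numpy as np

def defect_mask(image, reconstruction, thresholds=(0.1, 0.3)):
    """Multi-threshold segmentation of the per-pixel reconstruction error.

    An autoencoder trained only on normal texture "repairs" defects, so a
    large |input - reconstruction| flags a defective pixel.
    Labels: 0 = normal, 1 = suspect, 2 = defect.
    """
    err = np.abs(image - reconstruction)
    low, high = thresholds
    return np.select([err >= high, err >= low], [2, 1], default=0)

img = np.zeros((4, 4))
img[1, 1] = 1.0            # one "defect" pixel in otherwise normal texture
recon = np.zeros((4, 4))   # the autoencoder repairs it back to normal
mask = defect_mask(img, recon)
```

A patch would then be classified as defective if its mask contains enough pixels at the higher label, which is how thresholding on the error map turns reconstruction quality into a normal/defect decision.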
