Search Results (14)

Search Parameters:
Keywords = one-side classification

14 pages, 389 KB  
Article
A Similarity Measure for Linking CoinJoin Output Spenders
by Michael Herbert Ziegler, Mariusz Nowostawski and Basel Katt
J. Cybersecur. Priv. 2025, 5(4), 88; https://doi.org/10.3390/jcp5040088 - 18 Oct 2025
Viewed by 201
Abstract
This paper introduces a novel similarity measure to link transactions that spend outputs of CoinJoin transactions, called CoinJoin Spending Transactions (CSTs), by analyzing their on-chain properties, addressing the challenge of preserving user privacy in blockchain systems. Despite the adoption of privacy-enhancing techniques like CoinJoin, users remain vulnerable to transaction linkage through shared output patterns. The proposed method leverages timestamp analysis of mixed outputs and employs a one-sided Chamfer distance to quantify similarities between CSTs, enabling the identification of transactions associated with the same user. The approach is evaluated across three major CoinJoin implementations (Dash, Whirlpool, and Wasabi 2.0), demonstrating its effectiveness in detecting linked CSTs. Additionally, the work improves transaction classification rules for Wasabi 2.0 by introducing criteria for uncommon denomination outputs, reducing false positives. Results show that multiple CSTs spending shared CoinJoin outputs are prevalent, highlighting the practical significance of the similarity measure. The findings underscore the ongoing privacy risks posed by transaction linkage, even within privacy-focused protocols. This work contributes to the understanding of CoinJoin's limitations and offers insights for developing more robust privacy mechanisms in decentralized systems. To the authors' knowledge, this is the first work analyzing the linkage between CSTs.
(This article belongs to the Section Privacy)
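The one-sided Chamfer distance at the heart of the method has a compact definition: the mean distance from each element of one set to its nearest element in the other. A minimal sketch in Python; the timestamp values and the use of raw Unix epochs are illustrative assumptions, not the paper's exact features.

```python
import numpy as np

def one_sided_chamfer(a: np.ndarray, b: np.ndarray) -> float:
    """One-sided Chamfer distance from set `a` to set `b`: the mean
    distance from each point in `a` to its nearest point in `b`."""
    diffs = np.abs(a[:, None] - b[None, :])   # pairwise |a_i - b_j|
    return diffs.min(axis=1).mean()

# Hypothetical spending timestamps (Unix epochs) of two CSTs.
cst_a = np.array([1_700_000_000, 1_700_000_120, 1_700_000_480], dtype=float)
cst_b = np.array([1_700_000_010, 1_700_000_130], dtype=float)
print(one_sided_chamfer(cst_a, cst_b))  # small value suggests related spenders
```

Note the asymmetry: swapping the arguments generally changes the result, which is what makes the measure one-sided.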

22 pages, 3629 KB  
Article
Pulse-Echo Ultrasonic Verification of Silicate Surface Treatments Using an External-Excitation/Single-Receiver Configuration: ROC-Based Differentiation of Concrete Specimens
by Libor Topolář, Lukáš Kalina, David Markusík, Vladislav Cába, Martin Sedlačík, Felix Černý, Szymon Skibicki and Vlastimil Bílek
Materials 2025, 18(16), 3765; https://doi.org/10.3390/ma18163765 - 11 Aug 2025
Viewed by 416
Abstract
This study investigates a non-destructive, compact pulse-echo ultrasonic method that combines an external transmitter with a single receiving sensor to identify different surface treatments applied to cementitious materials. The primary objective was to evaluate whether treatment-induced acoustic changes could be reliably quantified using time-domain signal parameters. Three types of surface conditions were examined: untreated reference specimens (R), specimens treated with a standard lithium silicate solution (A), and those treated with an enriched formulation containing hexylene glycol (B) intended to enhance pore sealing via gelation. A broadband piezoelectric receiver collected the backscattered echoes, from which the maximum amplitude, root mean square (RMS) voltage, signal energy, and effective duration were extracted. Receiver operating characteristic (ROC) analysis was conducted to quantify the discriminative power of each parameter. The results showed excellent classification performance between groups involving the B-treatment (AUC ≥ 0.96), whereas the R vs. A comparison yielded moderate separation (AUC ≈ 0.61). Optimal cut-off values were established using the Youden index, with sensitivity and specificity exceeding 96% in the best-performing scenarios. The results demonstrate that a single-receiver, one-sided pulse-echo arrangement coupled with straightforward amplitude metrics provides a rapid, cost-effective, and field-adaptable tool for the quality control of silicate-surface treatments. By translating laboratory ultrasonics into a practical on-site protocol, this study helps close the gap between the experimental characterisation and real-world implementation of surface-treatment verification.
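ROC analysis with a Youden-index cut-off, as applied above, is straightforward to reproduce. A sketch with scikit-learn on synthetic stand-in data; the Gaussian "RMS voltage" values are invented for illustration, not the study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Stand-in scores: RMS voltage for reference (label 0) vs. treated (label 1).
y_true = np.r_[np.zeros(50), np.ones(50)]
scores = np.r_[rng.normal(0.40, 0.08, 50), rng.normal(0.65, 0.08, 50)]

fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)
youden = tpr - fpr                      # Youden's J at each candidate cut-off
best = thresholds[np.argmax(youden)]    # maximizes sensitivity + specificity - 1
print(f"AUC = {auc:.3f}, optimal cut-off = {best:.3f}")
```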

18 pages, 2737 KB  
Article
Cost-Effective Multitask Active Learning in Wearable Sensor Systems
by Asiful Arefeen and Hassan Ghasemzadeh
Sensors 2025, 25(5), 1522; https://doi.org/10.3390/s25051522 - 28 Feb 2025
Viewed by 1360
Abstract
Multitask learning models provide benefits by reducing model complexity and improving accuracy by concurrently learning multiple tasks with shared representations. Leveraging inductive knowledge transfer, these models mitigate the risk of overfitting on any specific task, leading to enhanced overall performance. However, supervised multitask learning models, like many neural networks, require substantial amounts of labeled data. Given the cost associated with data labeling, there is a need for an efficient label acquisition mechanism, known as multitask active learning (MTAL). In wearable sensor systems, the success of MTAL largely hinges on its query strategies because active learning in such settings involves interaction with end-users (e.g., patients) for annotation. However, these strategies have not been studied in mobile health settings and wearable systems to date. While strategies like one-sided sampling, alternating sampling, and rank-combination-based sampling have been proposed in the past, their applicability in mobile sensor settings, a domain constrained by label deficit, remains largely unexplored. This study investigates MTAL querying approaches and addresses crucial questions related to the choice of sampling methods and the effectiveness of multitask learning in mobile health applications. Utilizing two datasets on activity recognition and emotion classification, our findings reveal that rank-based sampling outperforms other techniques, particularly in tasks with high correlation. However, sole reliance on informativeness for sample selection may introduce biases into models. To address this issue, we also propose a Clustered Stratified Sampling (CSS) method in tandem with the multitask active learning query process. CSS identifies clustered mini-batches of samples, optimizing budget utilization and maximizing performance. When employed alongside rank-based query selection, our proposed CSS algorithm demonstrates up to 9% improvement in accuracy over traditional querying approaches for a 2000-query budget.
(This article belongs to the Special Issue Edge AI for Wearables and IoT)
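Rank-combination querying, the strategy that performed best above, can be prototyped in a few lines: score unlabeled samples per task, convert scores to ranks, and query the samples with the best combined rank. A hedged sketch; entropy as the informativeness score and the class counts below are assumptions, not the paper's setup.

```python
import numpy as np

def rank_combination_query(probs_per_task, budget):
    """Sum each sample's per-task uncertainty ranks (0 = most uncertain)
    and return the indices of the `budget` best-ranked samples."""
    combined = np.zeros(len(probs_per_task[0]))
    for probs in probs_per_task:
        entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)  # informativeness
        combined += np.argsort(np.argsort(-entropy))            # rank within task
    return np.argsort(combined)[:budget]

rng = np.random.default_rng(1)
# Softmax outputs for two hypothetical tasks over 1000 unlabeled samples.
task_a = rng.dirichlet(np.ones(6), size=1000)  # e.g., activity recognition
task_b = rng.dirichlet(np.ones(3), size=1000)  # e.g., emotion classification
print(rank_combination_query([task_a, task_b], budget=10))
```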

23 pages, 8121 KB  
Article
Transmission Line Fault Classification Based on the Combination of Scaled Wavelet Scalograms and CNNs Using a One-Side Sensor for Data Collection
by Ahmed Sabri Altaie, Mohamed Abderrahim and Afaneen Anwer Alkhazraji
Sensors 2024, 24(7), 2124; https://doi.org/10.3390/s24072124 - 26 Mar 2024
Cited by 5 | Viewed by 1843
Abstract
This research focuses on leveraging wavelet transform for fault classification within electrical power transmission networks. This study meticulously examines the influence of various parameters, such as fault resistance, fault inception angle, fault location, and other essential components, on the accuracy of fault classification. We endeavor to explore the interplay between classification accuracy and the input data while assessing the efficacy of combining wavelet analysis with deep learning methodologies. The data, sourced from network recorders, including phase currents and voltages, undergo a scaled continuous wavelet transform (S-CWT) to generate scalogram images. These images are subsequently utilized as inputs for pretrained deep learning models. The experiments encompass various fault scenarios, spanning distinct fault types, locations, times, and resistance values. A remarkable feature of the proposed work is the attainment of 100% classification accuracy, obviating the need for additional algorithmic enhancements. The foundation of this achievement is the deliberate selection of the right input. The decision to employ an identical number of samples as the number of scales for the CWT emerges as a pivotal factor. This approach underpins the high accuracy and renders supplementary algorithms superfluous. Furthermore, this research underscores the versatility of this approach, showcasing its effectiveness across diverse networks and scenarios. Wavelet transform, after rigorous experimentation, emerges as a reliable tool for capturing transient fault characteristics with an optimal balance between time and frequency resolutions.
(This article belongs to the Section Fault Diagnosis & Sensors)
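The scalogram pipeline (a CWT of the recorded signal rendered as an image for a pretrained CNN) can be approximated with PyWavelets. Matching the number of scales to the number of samples follows the abstract; the Morlet wavelet, sampling rate, and synthetic transient below are assumptions.

```python
import numpy as np
import pywt
import matplotlib.pyplot as plt

fs = 10_000                                   # assumed sampling rate (Hz)
t = np.arange(1024) / fs
# Synthetic stand-in for a recorded phase current with a mid-window transient.
signal = np.sin(2 * np.pi * 50 * t)
signal[500:560] += 0.8 * np.sin(2 * np.pi * 1_500 * t[500:560])

# Key choice from the abstract: number of scales equal to number of samples.
scales = np.arange(1, len(signal) + 1)
coef, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

plt.imshow(np.abs(coef), aspect="auto", cmap="jet")
plt.axis("off")                               # bare scalogram image for the CNN
plt.savefig("scalogram.png", bbox_inches="tight", pad_inches=0)
```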

14 pages, 2565 KB  
Article
Optically Guided High-Frequency Ultrasound to Differentiate High-Risk Basal Cell Carcinoma Subtypes: A Single-Centre Prospective Study
by Szabolcs Bozsányi, Mehdi Boostani, Klára Farkas, Phyllida Hamilton-Meikle, Noémi Nóra Varga, Boglárka Szabó, Flóra Vasanits, Enikő Kuroli, Fanni Adél Meznerics, Kende Lőrincz, Péter Holló, András Bánvölgyi, Norbert M. Wikonkál, Gyorgy Paragh and Norbert Kiss
J. Clin. Med. 2023, 12(21), 6910; https://doi.org/10.3390/jcm12216910 - 3 Nov 2023
Cited by 10 | Viewed by 9036
Abstract
Background: Basal cell carcinoma (BCC) is the most common type of skin cancer in the Caucasian population. Currently, invasive biopsy is the only way of establishing the histological subtype (HST) that determines the treatment options. Our study aimed to evaluate whether optically guided high-frequency ultrasound (OG-HFUS) imaging could differentiate aggressive HST BCCs from low-risk tumors. Methods: We conducted prospective clinical and dermoscopic examinations of BCCs, followed by 33 MHz OG-HFUS imaging, surgical excision, and a histological analysis. We enrolled 75 patients with 78 BCCs. In total, 63 BCCs were utilized to establish a novel OG-HFUS risk classification algorithm, while 15 were employed for the validation of this algorithm. The mean age of the patients was 72.9 ± 11.2 years. Histology identified 16 lesions as aggressive HST (infiltrative or micronodular subtypes) and 47 as low-risk HST (superficial or nodular subtypes). To assess the data, we used a one-sided Fisher's exact test for a categorical analysis and a Receiver Operating Characteristic (ROC) curve analysis to evaluate the diagnostic accuracy. Results: OG-HFUS distinguished aggressive BCC HSTs by their irregular shape (p < 0.0001), ill-defined margins (p < 0.0001), and non-homogeneous internal echoes (p = 0.004). We developed a risk-categorizing algorithm that differentiated aggressive HSTs from low-risk HSTs with a higher sensitivity (82.4%) and specificity (91.3%) than a combined macroscopic and dermoscopic evaluation (sensitivity: 40.1% and specificity: 73.1%). The positive and negative predictive values (PPV and NPV, respectively) for dermoscopy were 30.2% and 76.8%, respectively. In comparison, the OG-HFUS-based algorithm demonstrated a PPV of 94.7% and an NPV of 78.6%. We verified the algorithm using an independent image set (n = 15, including 12 low-risk and 3 high-risk lesions) with two blinded evaluators and found a sensitivity of 83.33% and a specificity of 91.66%. Conclusions: Our study shows that OG-HFUS can identify aggressive BCC HSTs based on easily identifiable morphological parameters, supporting early therapeutic decision making.
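The one-sided Fisher's exact test used for the categorical comparisons is a one-liner in SciPy. The 2 × 2 table below is invented for illustration and is not the study's data.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: irregular shape on OG-HFUS vs. histological subtype.
#                 aggressive HST   low-risk HST
# irregular             14               5
# regular                2              42
table = [[14, 5], [2, 42]]
odds_ratio, p_value = fisher_exact(table, alternative="greater")  # one-sided
print(f"OR = {odds_ratio:.1f}, one-sided p = {p_value:.5f}")
```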

14 pages, 628 KB  
Article
Three-Stage Sampling Algorithm for Highly Imbalanced Multi-Classification Time Series Datasets
by Haoming Wang
Symmetry 2023, 15(10), 1849; https://doi.org/10.3390/sym15101849 - 1 Oct 2023
Cited by 2 | Viewed by 2232
Abstract
To alleviate the data imbalance problem caused by subjective and objective factors, scholars have developed different data-preprocessing algorithms, among which undersampling algorithms are widely used because of their fast and efficient performance. However, when the number of samples of some categories in a multi-classification dataset is too small to be processed via sampling or the number of minority class samples is only one or two, the traditional undersampling algorithms will be less effective. In this study, we select nine multi-classification time series datasets with extremely few samples as research objects, fully consider the characteristics of time series data, and use a three-stage algorithm to alleviate the data imbalance problem. In stage one, random oversampling with disturbance items is used to increase the number of sample points; in stage two, building on the result of stage one, SMOTE (synthetic minority oversampling technique) oversampling is employed; in stage three, the dynamic time-warping distance is used to calculate the distance between sample points, identify the sample points of Tomek links at the boundary, and clean up the boundary noise. On this basis, this study proposes a new sampling algorithm. In the nine multi-classification time series datasets with extremely few samples, the new sampling algorithm is compared with four classic undersampling algorithms, namely, ENN (edited nearest neighbours), NCR (neighborhood cleaning rule), OSS (one-sided selection), and RENN (repeated edited nearest neighbors), based on the macro accuracy, recall rate, and F1-score evaluation indicators. The results are as follows: of the nine datasets selected, for the dataset with the most categories and the fewest minority class samples, FiftyWords, the accuracy of the new sampling algorithm was 0.7156, far beyond that of ENN, RENN, OSS, and NCR; its recall rate, 0.7261, was also better than that of the four undersampling algorithms used for comparison; and its F1-score was 200.71%, 188.74%, 155.29%, and 85.61% better than that of ENN, RENN, OSS, and NCR, respectively. For the other eight datasets, this new sampling algorithm also showed good indicator scores. The new algorithm proposed in this study can effectively alleviate the data imbalance problem of multi-classification time series datasets with many categories and few minority class samples and, at the same time, clean up the boundary noise data between classes.
(This article belongs to the Topic Advances in Computational Materials Sciences)
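The three stages map naturally onto a short pipeline: jittered random oversampling so tiny classes gain enough distinct points, SMOTE to balance, then Tomek-link cleaning under dynamic time warping. A compressed sketch assuming 1-D series stored as rows of `X`; the parameter values, the plain quadratic DTW, and dropping both endpoints of a link are simplifications, not the paper's exact procedure.

```python
import numpy as np
from imblearn.over_sampling import SMOTE

def dtw(a, b):
    """Plain dynamic time warping distance between two 1-D series."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def three_stage_resample(X, y, noise=0.01, min_count=6, seed=0):
    rng = np.random.default_rng(seed)
    # Stage 1: random oversampling with a small disturbance so classes with
    # only one or two samples gain enough distinct points for SMOTE.
    parts_X, parts_y = [X], [y]
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)
        if len(idx) < min_count:
            picks = rng.choice(idx, size=min_count - len(idx))
            parts_X.append(X[picks] + rng.normal(0, noise, (len(picks), X.shape[1])))
            parts_y.append(y[picks])
    X1, y1 = np.vstack(parts_X), np.concatenate(parts_y)
    # Stage 2: SMOTE oversampling on the augmented data.
    X2, y2 = SMOTE(k_neighbors=min_count - 1, random_state=seed).fit_resample(X1, y1)
    # Stage 3: remove Tomek links under DTW (mutual nearest neighbors with
    # different labels); both endpoints are dropped here for simplicity.
    n = len(X2)
    d = np.array([[dtw(X2[i], X2[j]) if i != j else np.inf for j in range(n)]
                  for i in range(n)])
    nn = d.argmin(axis=1)
    drop = {i for i in range(n) if nn[nn[i]] == i and y2[i] != y2[nn[i]]}
    keep = [i for i in range(n) if i not in drop]
    return X2[keep], y2[keep]
```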

17 pages, 2378 KB  
Article
Community Governance Based on Sentiment Analysis: Towards Sustainable Management and Development
by Xudong Zhang, Zejun Yan, Qianfeng Wu, Ke Wang, Kelei Miao, Zhangquan Wang and Yourong Chen
Sustainability 2023, 15(3), 2684; https://doi.org/10.3390/su15032684 - 2 Feb 2023
Cited by 3 | Viewed by 3158
Abstract
The promotion of community governance by digital means is an important research topic in developing smart cities. Currently, community governance is mostly based on reactive response, which lacks timely and proactive technical means for emergency monitoring. The easiest way for residents to contact their property managers is to call the property call center, and many such call centers store large volumes of speech data. However, text sentiment classification in community scenarios still faces challenges such as small corpus size, one-sided sentiment feature extraction, and insufficient sentiment classification accuracy. To address such problems, we propose a novel community speech-text sentiment classification algorithm combining two-channel features and attention mechanisms to obtain effective emotional information and provide decision support for the emergency management of public emergencies. Firstly, text vectorization based on word position information is proposed, and a SKEP-based community speech-text enhancement model is constructed to obtain the corresponding corpus. Secondly, a dual-channel emotional text feature extraction method that integrates spatial and temporal sequences is proposed to extract diverse emotional features effectively. Finally, an improved cross-entropy loss function suitable for community speech text is proposed for model training, which can achieve sentiment analysis and obtain all aspects of community conditions. The proposed method is conducive to improving community residents' sense of happiness, satisfaction, and fulfillment, enhancing the effectiveness and resilience of urban community governance.
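A dual-channel extractor of the kind described (a convolutional branch for spatial features and a recurrent branch for temporal features, fused with attention) can be sketched in PyTorch. All layer sizes are illustrative assumptions; the SKEP-based enhancement and the improved loss are not reproduced here.

```python
import torch
import torch.nn as nn

class DualChannelSentiment(nn.Module):
    """Two-channel text classifier: a CNN branch for local (spatial) features,
    a BiLSTM branch for sequential (temporal) features, fused via attention."""
    def __init__(self, vocab_size=20_000, emb_dim=128, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(emb_dim, 64, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(128, 1)
        self.out = nn.Linear(128 + 128, n_classes)

    def forward(self, ids):                               # ids: (batch, seq)
        e = self.emb(ids)                                 # (batch, seq, emb)
        spatial = torch.relu(self.conv(e.transpose(1, 2))).max(dim=2).values
        h, _ = self.lstm(e)                               # (batch, seq, 128)
        w = torch.softmax(self.attn(h), dim=1)            # attention over time
        temporal = (w * h).sum(dim=1)                     # (batch, 128)
        return self.out(torch.cat([spatial, temporal], dim=1))

logits = DualChannelSentiment()(torch.randint(0, 20_000, (4, 50)))
print(logits.shape)   # torch.Size([4, 3])
```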

12 pages, 1784 KB  
Article
The Effect of Dataset Imbalance on the Performance of SCADA Intrusion Detection Systems
by Asaad Balla, Mohamed Hadi Habaebi, Elfatih A. A. Elsheikh, Md. Rafiqul Islam and F. M. Suliman
Sensors 2023, 23(2), 758; https://doi.org/10.3390/s23020758 - 9 Jan 2023
Cited by 45 | Viewed by 4590
Abstract
Integrating IoT devices in SCADA systems has provided efficient and improved data collection and transmission technologies. This enhancement comes with significant security challenges, exposing traditionally isolated systems to the public internet. Effective and highly reliable security devices, such as intrusion detection systems (IDSs) and intrusion prevention systems (IPSs), are critical. Countless studies have used deep learning algorithms to design an efficient IDS; however, the fundamental issue of imbalanced datasets has not been fully addressed. In our research, we examined the impact of data imbalance on developing an effective SCADA-based IDS. To investigate various data balancing techniques, including random sampling, one-sided selection (OSS), near-miss, SMOTE, and ADASYN, we chose two unbalanced datasets: the Morris power dataset and the CICIDS2017 dataset. For binary classification, convolutional neural networks were coupled with long short-term memory (CNN-LSTM). The system's effectiveness was determined by the confusion matrix and the evaluation metrics derived from it, such as accuracy, precision, detection rate, and F1-score. Four experiments on the two datasets demonstrate the impact of the data imbalance. This research aims to help security researchers understand imbalanced datasets and their impact on deep learning-based SCADA IDSs.
(This article belongs to the Section Communications)
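All of the balancing techniques named above ship with imbalanced-learn, so the comparison is easy to stage. A minimal sketch on a synthetic stand-in for the IDS data; the 1% attack ratio and feature count are assumptions.

```python
from collections import Counter
from imblearn.under_sampling import OneSidedSelection, NearMiss
from imblearn.over_sampling import SMOTE, ADASYN
from sklearn.datasets import make_classification

# Synthetic stand-in for an imbalanced IDS dataset (~1% attack traffic).
X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)
print("original:", Counter(y))

for sampler in (OneSidedSelection(random_state=0), NearMiss(),
                SMOTE(random_state=0), ADASYN(random_state=0)):
    X_res, y_res = sampler.fit_resample(X, y)
    print(type(sampler).__name__, Counter(y_res))
```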

28 pages, 807 KB  
Article
An Ensemble and Iterative Recovery Strategy Based kGNN Method to Edit Data with Label Noise
by Baiyun Chen, Longhai Huang, Zizhong Chen and Guoyin Wang
Mathematics 2022, 10(15), 2743; https://doi.org/10.3390/math10152743 - 3 Aug 2022
Cited by 1 | Viewed by 2096
Abstract
Learning with label noise is gaining increasing attention from a variety of disciplines, particularly in supervised machine learning for classification tasks. The k nearest neighbors (kNN) classifier is often used as a natural way to edit the training sets due to its sensitivity to label noise. However, the kNN-based editor may remove too many instances if not designed to take care of the label noise. In addition, the one-sided nearest neighbor (NN) rule is unconvincing, as it just considers the nearest neighbors from the perspective of the query sample. In this paper, we propose an ensemble and iterative recovery strategy-based kGNN method (EIRS-kGNN) to edit data with label noise. EIRS-kGNN first uses the general nearest neighbors (GNN) to expand the one-sided NN rule to a binary-sided NN rule, taking the neighborhood of the queried samples into account. Then, it ensembles the prediction results of a finite set of ks in the kGNN to prudently judge the noise levels for each sample. Finally, two loops, i.e., the inner loop and the outer loop, are leveraged to iteratively detect label noise. A frequency indicator is derived from the iterative processes to guide the mixture approaches, including relabeling and removing, to deal with the detected label noise. The goal of EIRS-kGNN is to recover the distribution of the data set as if it were not corrupted. Experimental results on both synthetic data sets and UCI benchmarks, including binary data sets and multi-class data sets, demonstrate the effectiveness of the proposed EIRS-kGNN method.
(This article belongs to the Special Issue Recent Advances in Artificial Intelligence and Machine Learning)
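The ensembling step (aggregate the verdicts of several k values and treat frequent disagreement as evidence of label noise) is easy to prototype. A simplified sketch with plain kNN standing in for the paper's general-nearest-neighbor rule; the k values, threshold, and toy clusters are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_noise_votes(X, y, ks=(3, 5, 7, 9, 11)):
    """For each k, predict every sample's label from its neighbors
    (excluding itself) and count how often the prediction disagrees with
    the given label; frequent disagreement suggests label noise."""
    votes = np.zeros(len(y), dtype=int)
    for k in ks:
        # Fit with k + 1 neighbors so each sample can be dropped from its vote.
        knn = KNeighborsClassifier(n_neighbors=k + 1).fit(X, y)
        neigh = knn.kneighbors(X, return_distance=False)[:, 1:]  # drop self
        pred = np.array([np.bincount(y[n]).argmax() for n in neigh])
        votes += pred != y
    return votes  # e.g., relabel near-unanimous cases, remove borderline ones

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.r_[np.zeros(100, int), np.ones(100, int)]
y[:5] = 1                                 # inject label noise
print(np.flatnonzero(knn_noise_votes(X, y) >= 4))   # flagged samples
```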

12 pages, 278 KB  
Article
The Effects of Information and Communication Technology (ICT) Use on Human Development—A Macroeconomic Approach
by Nada Karaman Aksentijević, Zoran Ježić and Petra Adelajda Zaninović
Economies 2021, 9(3), 128; https://doi.org/10.3390/economies9030128 - 3 Sep 2021
Cited by 56 | Viewed by 18675
Abstract
Information and communication technology (ICT) is considered a significant factor in economic growth and development. Over the past two decades, scholars have studied the impact of ICT on economic growth, but there has been little research that has addressed the impact of ICT on human development, which is considered one of the fundamental factors of economic development. This could be especially important from the perspective of developing countries, which can develop faster through the implementation of ICT. Thus, the aim of this paper is to investigate the effects of ICT use on human development, distinguishing effects among high, upper-middle, lower-middle and low-income countries following the World Bank classification 2020. Our sample includes 130 countries in the period from 2007 to 2019. The empirical analysis is based on dynamic panel data regression analysis. We use Generalized Method of Moments (GMM) as an estimator, i.e., two-step system GMM. The results primarily support the dynamic behaviour of human development. The results of the analysis also show that ICT has highly significant positive effects on human development in lower-middle-income and low-income countries, while the effects do not appear to be significant in high- and middle-income countries. This research serves as an argument for the need to invest in ICT and its implementation in low-income countries; however, it also suggests that the story is not one-sided and that there are possible negative effects of ICT use on human development. From the perspective of economic policy, the results can be a guideline for the implementation and use of ICT in developing countries, which could lead to economic growth and development and thus better quality of life. On the other hand, policymakers in developed countries cannot rely on ICT alone; they should also consider other technological innovations that could ensure a better quality of life.
40 pages, 35280 KB  
Review
Review on Generative Adversarial Networks: Focusing on Computer Vision and Its Applications
by Sung-Wook Park, Jae-Sub Ko, Jun-Ho Huh and Jong-Chan Kim
Electronics 2021, 10(10), 1216; https://doi.org/10.3390/electronics10101216 - 20 May 2021
Cited by 75 | Viewed by 16014
Abstract
The emergence of the deep learning model GAN (Generative Adversarial Networks) is an important turning point in generative modeling. GANs are more powerful in feature and expression learning compared to machine learning-based generative model algorithms. Nowadays, they are also used to generate non-image data, such as voice and natural language. Typical technologies include BERT (Bidirectional Encoder Representations from Transformers), GPT-3 (Generative Pretrained Transformer-3), and MuseNet. GAN differs from machine learning-based generative models in its objective function and training procedure. Training is conducted by two networks: a generator and a discriminator. The generator converts random noise into a true-to-life image, whereas the discriminator distinguishes whether the input image is real or synthetic. As the training continues, the generator learns more sophisticated synthesis techniques, and the discriminator grows into a more accurate differentiator. GANs have problems, such as mode collapse, training instability, and the lack of an evaluation metric, and many researchers have tried to solve these problems. For example, solutions such as one-sided label smoothing, instance normalization, and minibatch discrimination have been proposed. The field of application has also expanded. This paper provides an overview of GANs and application solutions for computer vision and artificial intelligence healthcare field researchers. The structure and principle of operation of GAN, the core models of GAN proposed to date, and the theory of GAN were analyzed. Application examples of GAN such as image classification and regression, image synthesis and inpainting, image-to-image translation, super-resolution and point registration were then presented. The discussion tackled GAN's problems and solutions, and the future research direction was finally proposed.
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
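One-sided label smoothing, listed among the stabilization tricks, changes only the discriminator's targets for real samples. A minimal PyTorch sketch; the 0.9 target is the conventional choice, not necessarily the value used in the reviewed papers.

```python
import torch
import torch.nn.functional as F

def d_loss_one_sided_smoothing(d_real: torch.Tensor,
                               d_fake: torch.Tensor,
                               smooth: float = 0.9) -> torch.Tensor:
    """Discriminator loss with one-sided label smoothing: real targets are
    softened to `smooth` while fake targets stay exactly 0, discouraging
    the discriminator from becoming overconfident on real data."""
    real_targets = torch.full_like(d_real, smooth)   # smoothed positives only
    fake_targets = torch.zeros_like(d_fake)          # negatives left at 0
    return (F.binary_cross_entropy_with_logits(d_real, real_targets)
            + F.binary_cross_entropy_with_logits(d_fake, fake_targets))

# Toy usage with random logits standing in for discriminator outputs.
print(d_loss_one_sided_smoothing(torch.randn(8, 1), torch.randn(8, 1)))
```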

14 pages, 1020 KB  
Article
Expert Decision Support Technique for Algal Bloom Governance in Urban Lakes Based on Text Analysis
by Yu-Ting Bai, Bai-Hai Zhang, Xiao-Yi Wang, Xue-Bo Jin, Ji-Ping Xu and Zhao-Yang Wang
Water 2017, 9(5), 308; https://doi.org/10.3390/w9050308 - 28 Apr 2017
Cited by 7 | Viewed by 4088
Abstract
As a typical phenomenon of eutrophication pollution, algal bloom threatens public health and water security. The governance of algal bloom is largely affected by administrators' knowledge and experience, which may lead to a subjective and one-sided decision-making result. Meanwhile, experts in the specific field can provide professional support. How to utilize expert resources adequately and automatically has been a problem. This paper proposes an expert decision support technique for algal bloom governance based on text analysis methods. Firstly, the decision support mechanism is introduced to form a general decision-making framework. Secondly, the expert classification method is proposed to help with choosing suitable experts. Thirdly, a multi-criteria group decision-making method is presented based on the automatic analysis of experts' decision opinions. Finally, an experiment is conducted to verify the expert decision support technique. The results show the technique's feasibility and rationality. This paper describes experts' information and opinions with natural language, which can intuitively reflect the natural meaning. The expert decision support technique based on text analysis broadens the management thought of water pollution in urban lakes.
(This article belongs to the Special Issue Urban Water Challenges)

27 pages, 1976 KB  
Article
Time-Frequency Methods for Structural Health Monitoring
by Alexander L. Pyayt, Alexey P. Kozionov, Ilya I. Mokhov, Bernhard Lang, Robert J. Meijer, Valeria V. Krzhizhanovskaya and Peter M. A. Sloot
Sensors 2014, 14(3), 5147-5173; https://doi.org/10.3390/s140305147 - 12 Mar 2014
Cited by 34 | Viewed by 12396
Abstract
Detection of early warning signals for the imminent failure of large and complex engineered structures is a daunting challenge with many open research questions. In this paper we report on novel ways to perform Structural Health Monitoring (SHM) of flood protection systems (levees, earthen dikes and concrete dams) using sensor data. We present a robust data-driven anomaly detection method that combines time-frequency feature extraction, using wavelet analysis and phase shift, with one-sided classification techniques to identify the onset of failure anomalies in real-time sensor measurements. The methodology has been successfully tested at three operational levees. We detected a dam leakage in the retaining dam (Germany) and "strange" behaviour of sensors installed in a Boston levee (UK) and a Rhine levee (Germany).
(This article belongs to the Section Physical Sensors)
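The combination described (wavelet time-frequency features plus a classifier trained only on normal behaviour) can be sketched with PyWavelets and scikit-learn, with a one-class SVM standing in for the paper's one-sided classification step; the window length, wavelet, and synthetic signals are assumptions.

```python
import numpy as np
import pywt
from sklearn.svm import OneClassSVM

def wavelet_features(window: np.ndarray, wavelet: str = "db4", level: int = 4):
    """Energy of each wavelet sub-band as a simple time-frequency feature vector."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

rng = np.random.default_rng(0)
# Normal sensor behaviour: smooth drift plus noise (stand-in for levee sensors).
normal = [np.cumsum(rng.normal(0, 0.1, 256)) for _ in range(200)]
X_train = np.array([wavelet_features(w) for w in normal])

# One-sided classifier trained on normal windows only; -1 flags an anomaly.
clf = OneClassSVM(nu=0.05, gamma="scale").fit(X_train)
anomaly = np.cumsum(rng.normal(0, 0.1, 256)) + np.r_[np.zeros(200),
                                                     np.linspace(0, 5, 56)]
print(clf.predict(wavelet_features(anomaly).reshape(1, -1)))
```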

10 pages, 331 KB  
Article
Green’s Symmetries in Finite Digraphs
by Allen D. Parks
Symmetry 2011, 3(3), 564-573; https://doi.org/10.3390/sym3030564 - 15 Aug 2011
Cited by 2 | Viewed by 5602
Abstract
The semigroup D_V of digraphs on a set V of n labeled vertices is defined. It is shown that D_V is faithfully represented by the semigroup B_n of n × n Boolean matrices and that the Green's L, R, H, and D equivalence classifications of digraphs in D_V follow directly from the Green's classifications already established for B_n. The new results found from this are: (i) L, R, and H equivalent digraphs contain sets of vertices with identical neighborhoods which remain invariant under certain one-sided semigroup multiplications that transform one digraph into another within the same equivalence class, i.e., these digraphs exhibit Green's isoneighborhood symmetries; and (ii) D equivalent digraphs are characterized by isomorphic inclusion lattices that are generated by their out-neighborhoods and which are preserved under certain two-sided semigroup multiplications that transform digraphs within the same D equivalence class, i.e., these digraphs are characterized by Green's isolattice symmetries. As a simple illustrative example, the Green's classification of all digraphs on two vertices is presented and the associated Green's symmetries are identified.
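The closing two-vertex example can be reproduced computationally using the classical characterization of Green's relations on Boolean matrices (L-equivalence as equality of row spaces, R-equivalence as equality of column spaces). The sketch below groups all 16 adjacency matrices of digraphs on two labeled vertices; treat both the characterization and the code as an illustration rather than the paper's construction.

```python
from collections import defaultdict
from itertools import product

def row_space(m):
    """All Boolean (OR) combinations of the rows of m, including the zero row."""
    rows, n = len(m), len(m[0])
    space = set()
    for mask in range(2 ** rows):
        picked = [m[i] for i in range(rows) if mask >> i & 1]
        space.add(tuple(max((r[j] for r in picked), default=0) for j in range(n)))
    return frozenset(space)

def transpose(m):
    return tuple(zip(*m))

# All digraphs on two labeled vertices, as 2x2 Boolean adjacency matrices.
digraphs = [((a, b), (c, d)) for a, b, c, d in product((0, 1), repeat=4)]

L = defaultdict(list)   # L-classes: same row space
R = defaultdict(list)   # R-classes: same column space
for m in digraphs:
    L[row_space(m)].append(m)
    R[row_space(transpose(m))].append(m)

print(f"{len(digraphs)} digraphs, {len(L)} L-classes, {len(R)} R-classes")
```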
