Search Results (113)

Search Parameters:
Keywords = multilabel learning algorithms

25 pages, 1621 KB  
Article
Transfer Learning Approach with Features Block Selection via Genetic Algorithm for High-Imbalance and Multi-Label Classification of HPA Confocal Microscopy Images
by Vincenzo Taormina, Domenico Tegolo and Cesare Valenti
Bioengineering 2025, 12(12), 1379; https://doi.org/10.3390/bioengineering12121379 - 18 Dec 2025
Viewed by 486
Abstract
Advances in deep learning are impressive in various fields and have achieved performance beyond human capabilities in tasks such as image classification, as demonstrated in competitions such as the ImageNet Large Scale Visual Recognition Challenge. Nonetheless, complex applications like medical imaging continue to present significant challenges; a prime example is the Human Protein Atlas (HPA) dataset, which is computationally challenging due to its high class imbalance, the presence of rare patterns, and the need for multi-label classification. It includes 28 distinct patterns and more than 500 unique label combinations, with protein localization that can appear in different cellular regions such as the nucleus, the cytoplasm, and the nuclear membrane. Moreover, the dataset provides four distinct channels for each sample, adding to its complexity: green represents the target protein, red the microtubules, blue the nucleus, and yellow the endoplasmic reticulum. We propose a two-phase transfer learning approach based on feature-block extraction from twelve ImageNet-pretrained CNNs. In the first phase, we address single-label multiclass classification using CNNs as feature extractors combined with SVM classifiers on a subset of the HPA dataset. We demonstrate that the simple concatenation of feature blocks extracted from different CNNs improves performance. Furthermore, we apply a genetic algorithm to select a near-optimal combination of feature blocks. In the second phase, building on the results of the first, we apply two simple multi-label classification strategies and compare their performance with four classifiers. Our method integrates image-level and cell-level analysis. At the image level, we assess the discriminative contribution of individual and combined channels, showing that the green channel is the strongest individually but benefits from combinations with red and yellow. At the cellular level, we extract features from the nucleus and nuclear-membrane ring, an analysis not previously explored in the HPA literature, which proves effective for recognizing rare patterns. Combining these perspectives enhances the detection of rare classes, achieving an F1 score of 0.8 for “Rods & Rings” and outperforming existing approaches. Accurate identification of rare patterns is essential for biological and clinical applications, underscoring the significance of our contribution.
(This article belongs to the Section Biosignal Processing)
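
The genetic-algorithm step above lends itself to a compact sketch. The following illustrates the general technique rather than the authors' implementation: per-CNN feature blocks are assumed to be precomputed NumPy arrays, and a small GA evolves a binary mask over blocks, scoring each mask by cross-validated SVM accuracy. All names and GA settings (blocks, labels, population size, mutation rate) are hypothetical.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(mask, blocks, labels):
    # Concatenate only the selected feature blocks and score an SVM on them.
    if not mask.any():
        return 0.0
    X = np.hstack([b for b, keep in zip(blocks, mask) if keep])
    return cross_val_score(SVC(kernel="rbf"), X, labels, cv=3).mean()

def ga_select(blocks, labels, pop=20, gens=30, seed=0):
    rng = np.random.default_rng(seed)
    n = len(blocks)
    population = rng.integers(0, 2, size=(pop, n)).astype(bool)
    for _ in range(gens):
        scores = np.array([fitness(ind, blocks, labels) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]      # truncation selection
        cuts = rng.integers(1, n, size=pop // 2)                  # one-point crossover
        children = np.array([np.concatenate([parents[i % len(parents)][:c],
                                             parents[(i + 1) % len(parents)][c:]])
                             for i, c in enumerate(cuts)])
        children ^= rng.random(children.shape) < 0.05             # bit-flip mutation
        population = np.vstack([parents, children])
    scores = np.array([fitness(ind, blocks, labels) for ind in population])
    return population[scores.argmax()]                            # best block mask found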

24 pages, 5207 KB  
Article
Graph Neural Networks vs. Traditional QSAR: A Comprehensive Comparison for Multi-Label Molecular Odor Prediction
by Tengteng Wen, Xianfa Cai and Jincheng Li
Molecules 2025, 30(23), 4605; https://doi.org/10.3390/molecules30234605 - 30 Nov 2025
Viewed by 701
Abstract
Molecular odor prediction represents a fundamental challenge in computational chemistry with significant applications in fragrance design, food science, and chemical safety assessment. While traditional Quantitative Structure–Activity Relationship (QSAR) methods rely on hand-crafted molecular descriptors, recent advances in graph neural networks (GNNs) enable direct end-to-end learning from molecular graph structures. However, systematic comparison between these approaches for multi-label odor prediction remains limited. This study presents a comprehensive evaluation of traditional QSAR methods compared with modern GNN approaches for multi-label molecular odor prediction. Using the GoodScent dataset containing 3304 molecules with six high-frequency odor types (fruity, green, sweet, floral, woody, herbal), we systematically evaluate 23 model configurations across traditional machine learning algorithms (Random Forest, SVM, GBDT, MLP, XGBoost, LightGBM) with three feature-processing strategies and three GNN architectures (GCN, GAT, NNConv). The results demonstrate that GNN models achieve significantly superior performance, with GCN achieving the highest macro F1-score of 0.5193 compared to 0.4766 for the best traditional method (MLP with basic preprocessing), representing a 24.1% relative improvement. Critically, we discover that threshold optimization is essential for multi-label chemical classification. These findings establish GNNs as the preferred approach for molecular property prediction tasks and provide crucial insights for handling class imbalance in chemical informatics applications.
(This article belongs to the Special Issue Analysis of Natural Volatile Organic Compounds (NVOCs))
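
The threshold-optimization finding generalizes: with a fixed 0.5 cutoff, rare labels are almost never predicted. A standard remedy, sketched below under the assumption that a trained model exposes per-label probabilities on a validation split (proba_val and y_val are illustrative names), is to tune one decision threshold per label to maximize that label's F1:

import numpy as np
from sklearn.metrics import f1_score

def tune_thresholds(y_val, proba_val, grid=np.linspace(0.05, 0.95, 19)):
    # One threshold per label, each chosen to maximize that label's F1 on validation data.
    n_labels = y_val.shape[1]
    thresholds = np.full(n_labels, 0.5)
    for j in range(n_labels):
        scores = [f1_score(y_val[:, j], proba_val[:, j] >= t, zero_division=0) for t in grid]
        thresholds[j] = grid[int(np.argmax(scores))]
    return thresholds

# Usage: y_pred = (proba_test >= thresholds).astype(int), then compute macro-F1 as usual.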

29 pages, 1051 KB  
Article
Urdu Toxicity Detection: A Multi-Stage and Multi-Label Classification Approach
by Ayesha Rashid, Sajid Mahmood, Usman Inayat and Muhammad Fahad Zia
AI 2025, 6(8), 194; https://doi.org/10.3390/ai6080194 - 21 Aug 2025
Cited by 1 | Viewed by 3067
Abstract
Social media empowers freedom of expression but is often misused for abuse and hate. Detecting such content is crucial, especially in under-resourced languages like Urdu. To address this challenge, this paper first designs a comprehensive multilabel dataset, the Urdu toxicity corpus (UTC). Second, an Urdu toxicity detection model is developed to detect toxic content in Urdu text written in Nastaliq script. The proposed framework first preprocesses the gathered data and then applies feature engineering using term frequency-inverse document frequency, bag-of-words, and N-gram techniques. Subsequently, the synthetic minority over-sampling technique is used to address the data imbalance problem, and manual data annotation is performed to ensure label accuracy. Four machine learning models, namely logistic regression, support vector machine, random forest (RF), and gradient boosting, are applied to the preprocessed data, and deep learning algorithms, including long short-term memory (LSTM), bidirectional LSTM, and gated recurrent units, are also applied to the UTC for classification. RF outperforms all the other models, achieving a precision, recall, F1-score, and accuracy of 0.97, 0.99, 0.98, and 0.99, respectively. The proposed model demonstrates strong potential for detecting rude, offensive, abusive, and hate speech content in Urdu Nastaliq user comments.
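
A rough sketch of such a pipeline, not the authors' code, can be assembled from scikit-learn and imbalanced-learn: TF-IDF features, SMOTE oversampling, and one random forest per toxicity label in a binary-relevance setup. train_texts and train_labels are hypothetical placeholders for the UTC data:

from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

def make_pipeline():
    # One binary pipeline per toxicity label; SMOTE rebalances that label's classes.
    return Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), max_features=20000)),
        ("smote", SMOTE(random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
    ])

classifiers = {label: make_pipeline().fit(train_texts, train_labels[label])
               for label in ["rude", "offensive", "abusive", "hate"]}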

16 pages, 1037 KB  
Article
Generative Learning from Semantically Confused Label Distribution via Auto-Encoding Variational Bayes
by Xinhai Li, Chenxu Meng, Heng Zhou, Yi Guo, Bowen Xue, Tianzuo Yu and Yunan Lu
Electronics 2025, 14(13), 2736; https://doi.org/10.3390/electronics14132736 - 7 Jul 2025
Viewed by 922
Abstract
Label Distribution Learning (LDL) has emerged as a powerful paradigm for addressing label ambiguity, offering a more nuanced quantification of the instance–label relationship than traditional single-label and multi-label learning approaches. This paper focuses on the challenge of noisy label distributions, which are ubiquitous in real-world applications due to annotator subjectivity, algorithmic biases, and experimental errors. Existing LDL algorithms often assume a linear combination of true and random label distributions when modeling noisy label distributions, an oversimplification that fails to capture how noisy label distributions are actually generated. This paper therefore assumes that the noise in label distributions arises primarily from semantic confusion between labels and proposes a novel generative label distribution learning algorithm that models the confusion-based generation process of both the feature data and the noisy label distribution data. The proposed model is inferred using variational methods, and its effectiveness is demonstrated through extensive experiments on various real-world datasets, showcasing its superiority in handling noisy label distributions.
(This article belongs to the Special Issue Neural Networks: From Software to Hardware)
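
The paper's central assumption, that noise arises from semantic confusion between labels, can be stated compactly: a row-stochastic confusion matrix mixes the true label distribution into the observed one. The notation below is mine, not the paper's:

import numpy as np

def confuse(d_true, C):
    # d_true: true label distribution (sums to 1); C[i, j]: probability that the
    # description degree of label i is (mis)attributed to label j.
    d_noisy = d_true @ C
    return d_noisy / d_noisy.sum()  # renormalize to keep a valid distribution

d = np.array([0.6, 0.3, 0.1])
C = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.1, 0.9]])
print(confuse(d, C))  # [0.6, 0.28, 0.12]: mass leaks between semantically close labels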

27 pages, 10447 KB  
Article
Supervised Learning-Based Fault Classification in Industrial Rotating Equipment Using Multi-Sensor Data
by Aziz Kubilay Ovacıklı, Mert Yagcioglu, Sevgi Demircioglu, Tugberk Kocatekin and Sibel Birtane
Appl. Sci. 2025, 15(13), 7580; https://doi.org/10.3390/app15137580 - 6 Jul 2025
Cited by 8 | Viewed by 2641
Abstract
The reliable operation of rotating machinery is critical in industrial production, necessitating advanced fault diagnosis and maintenance strategies to ensure operational availability. This study employs supervised machine learning algorithms for multi-label fault classification in rotating machinery, using a real dataset from multi-sensor systems installed on a suction fan in a typical manufacturing plant. The presented system focuses on multi-modal data analysis, combining vibration analysis, temperature monitoring, and ultrasound for more effective fault diagnosis. The performance of common machine learning algorithms such as kNN, SVM, RF, and several boosting techniques was evaluated, and the Random Forest achieved the best classification accuracy. Feature importance analysis revealed how specific domain characteristics, such as vibration velocity and ultrasound levels, contribute significantly to performance, and it enabled the detection of multiple faults simultaneously. The results demonstrate the machine learning model’s ability to retrieve valuable information from multi-sensor data integration, improving predictive maintenance strategies. The study contributes a practical framework for intelligent fault diagnosis, presenting a real-world implementation while enabling future improvements in industrial condition-based maintenance systems.
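
Because scikit-learn's random forest accepts a multi-label indicator matrix directly, the winning configuration is easy to prototype. A minimal sketch, assuming a feature matrix X of per-window vibration/temperature/ultrasound statistics and a 0/1 fault matrix Y (all names hypothetical):

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# X: one row per measurement window (e.g. vibration velocity, temperature,
# ultrasound level). Y: one 0/1 column per fault type; faults may co-occur.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, Y_train)
print(classification_report(Y_test, rf.predict(X_test), target_names=fault_names))
print(sorted(zip(rf.feature_importances_, feature_names), reverse=True)[:5])  # top features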

23 pages, 578 KB  
Article
Distributed Partial Label Multi-Dimensional Classification via Label Space Decomposition
by Zhen Xu and Sicong Chen
Electronics 2025, 14(13), 2623; https://doi.org/10.3390/electronics14132623 - 28 Jun 2025
Cited by 3 | Viewed by 527
Abstract
Multi-dimensional classification (MDC), in which the training data are concurrently associated with numerous label variables across many dimensions, has garnered significant interest recently. Most current MDC methods are based on the framework of supervised learning, which induces a predictive model from a large amount of precisely labeled data, so they struggle to obtain satisfactory learning results when the training data are annotated not with precise labels but with ambiguous ones. Moreover, current MDC algorithms only consider the centralized learning scenario, where all training data are handled at a single node for classifier induction. In some real applications, however, the training data are not consolidated at a single fusion center but are dispersed among multiple nodes. In this study, we focus on the problem of decentralized classification from partial multi-dimensional data with partially accessible candidate labels, and we develop a distributed method called dPL-MDC for learning with these partial labels. The algorithm performs one-vs.-one decomposition on the originally heterogeneous multi-dimensional output space, transforming the partial MDC problem into one of distributed partial multi-label learning. Then, using several shared anchor data points to characterize the global distribution of label variables, we propose a novel distributed approach to learn the label confidence of the training data. Under the supervision of the recovered credible labels, the classifier is induced by exploiting high-order label dependencies in a common low-dimensional subspace. Experiments on various datasets indicate that the proposed method achieves satisfactory learning performance in distributed partial MDC.
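
The one-vs.-one decomposition step can be made concrete: each unordered pair of classes within each output dimension becomes one binary label, which turns the MDC problem into a multi-label one. The sketch below is an illustrative reading of that transformation, not the dPL-MDC code; samples belonging to neither class of a pair are marked -1 (not involved):

from itertools import combinations
import numpy as np

def ovo_decompose(Y, classes_per_dim):
    # Y: (n_samples, n_dims) matrix of class indices, one column per output dimension.
    # Each pair (a, b) within a dimension yields one binary column:
    # 1 if the sample's class is a, 0 if it is b, -1 if the sample is not involved.
    cols, names = [], []
    for d, classes in enumerate(classes_per_dim):
        for a, b in combinations(classes, 2):
            col = np.where(Y[:, d] == a, 1, np.where(Y[:, d] == b, 0, -1))
            cols.append(col)
            names.append((d, a, b))
    return np.stack(cols, axis=1), names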

46 pages, 2221 KB  
Article
A Novel Metaheuristic-Based Methodology for Attack Detection in Wireless Communication Networks
by Walaa N. Ismail
Mathematics 2025, 13(11), 1736; https://doi.org/10.3390/math13111736 - 24 May 2025
Cited by 6 | Viewed by 1479
Abstract
The landscape of 5G communication introduces heightened risks from malicious attacks, posing significant threats to network security and availability. The unique characteristics of 5G networks, while enabling advanced communication, make it harder to distinguish legitimate from malicious traffic and thus to detect anomalous traffic. Current methodologies for intrusion detection within 5G communication exhibit limitations in accuracy, efficiency, and adaptability to evolving network conditions. In this study, we explore an adaptive, optimized machine learning-based framework to improve intrusion detection system (IDS) performance in wireless network access scenarios. The framework develops a lightweight model based on an 11-layer convolutional neural network, referred to as CSO-2D-CNN, which demonstrates fast learning rates and excellent generalization capabilities. Additionally, an optimized attention-based XGBoost classifier is utilized to improve model performance by combining the benefits of parallel gradient boosting and attention mechanisms. By focusing on the most relevant features, the attention mechanism makes the model suitable for the complex and high-dimensional traffic patterns typical of 5G communication and, as in previous approaches, eliminates the need to manually select features such as entropy, payload size, and opcode sequences. Furthermore, the metaheuristic Cat Swarm Optimization (CSO) algorithm is employed to fine-tune the hyperparameters of both the CSO-2D-CNN and the attention-based XGBoost classifier. Extensive experiments on a recent network traffic dataset demonstrate that the system adapts to both binary and multiclass classification tasks on high-dimensional and imbalanced data. The results show a low false-positive rate and high accuracy, reaching 99.97% for multilabel attack detection and 99.99% for binary classification, validating the effectiveness of the proposed framework in the 5G wireless context.
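
Cat Swarm Optimization alternates a seeking mode (local exploration around each cat) with a tracing mode (velocity toward the best cat). The sketch below is a heavily simplified, generic CSO loop, not the paper's tuned variant; bounds, ratios, and the fitness function are all illustrative stand-ins:

import numpy as np

def cso(fitness, bounds, n_cats=10, iters=30, mixture_ratio=0.3, seed=0):
    # Simplified Cat Swarm Optimization: maximize fitness over box-bounded parameters.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pos = rng.uniform(lo, hi, size=(n_cats, len(lo)))
    vel = np.zeros_like(pos)
    best = max(pos, key=fitness).copy()
    for _ in range(iters):
        for i in range(n_cats):
            if rng.random() < mixture_ratio:          # tracing mode: chase the global best
                vel[i] += rng.random() * 2.0 * (best - pos[i])
                pos[i] = np.clip(pos[i] + vel[i], lo, hi)
            else:                                     # seeking mode: try local copies, keep best
                copies = pos[i] + rng.normal(0.0, 0.1 * (hi - lo), size=(5, len(lo)))
                pos[i] = max(np.clip(copies, lo, hi), key=fitness)
            if fitness(pos[i]) > fitness(best):
                best = pos[i].copy()
    return best

# Hypothetical usage: fitness(p) could train an XGBoost model with
# learning_rate=p[0] and max_depth=int(p[1]) and return validation accuracy.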

50 pages, 1639 KB  
Article
High-Performance Deployment Operational Data Analytics of Pre-Trained Multi-Label Classification Architectures with Differential-Evolution-Based Hyperparameter Optimization (AutoDEHypO)
by Teo Prica and Aleš Zamuda
Mathematics 2025, 13(10), 1681; https://doi.org/10.3390/math13101681 - 20 May 2025
Viewed by 1217
Abstract
This article presents AutoDEHypO, an automated high-performance-computing, differential-evolution-based hyperparameter optimization workflow that is deployed on a petascale supercomputer and utilizes multiple GPUs to execute a specialized fitness function for machine learning (ML). The workflow is designed for operational analytics of energy efficiency. In this differential evolution (DE) optimization use case, we analyze how energy-efficiently the DE algorithm performs with different DE strategies and ML models. The workflow analysis considers key factors such as the DE strategy and automated use-case configurations, such as the ML model architecture and dataset, while monitoring both the achieved accuracy and the utilization of computing resources, such as elapsed time and consumed energy. The efficiency of a chosen DE strategy is assessed based on multi-label supervised ML accuracy, while operational data about the resource consumption of individual completed jobs are obtained from a Slurm database. To demonstrate the impact on energy efficiency, our analysis workflow visualizes the obtained operational data and aggregates them with statistical tests that compare and group the energy efficiency of the DE strategies applied to the ML models.
(This article belongs to the Special Issue Innovations in High-Performance Computing)
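
At small scale, the core comparison, how different DE strategies trade accuracy against resource use, can be mimicked on a single node with SciPy, whose differential_evolution takes the strategy name directly. Elapsed time below is a stand-in for the energy figures the workflow reads from Slurm, and the quadratic fitness is a placeholder for actual model training:

import time
from scipy.optimize import differential_evolution

def fitness(params):
    # Placeholder: in the paper's setting this would train and score a multi-label ML model.
    x, y = params
    return (x - 0.3) ** 2 + (y - 0.7) ** 2

for strategy in ["best1bin", "rand1bin", "currenttobest1bin"]:
    t0 = time.perf_counter()
    res = differential_evolution(fitness, bounds=[(0, 1), (0, 1)],
                                 strategy=strategy, maxiter=50, seed=0)
    print(strategy, res.fun, f"{time.perf_counter() - t0:.3f}s")  # time proxies energy cost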

18 pages, 1459 KB  
Article
Inferring Mechanical Properties of Wire Rods via Transfer Learning Using Pre-Trained Neural Networks
by Adriany A. F. Eduardo, Gustavo A. S. Martinez, Ted W. Grant, Lucas B. S. Da Silva and Wei-Liang Qian
J 2025, 8(2), 15; https://doi.org/10.3390/j8020015 - 30 Apr 2025
Viewed by 3822
Abstract
The primary objective of this study is to explore how machine learning techniques can be incorporated into the analysis of material deformation. Neural network algorithms are applied to the study of the mechanical properties of wire rods subjected to cold plastic deformation. Specifically, this study explores how pre-trained neural networks with appropriate architecture can be exploited to predict apparently distinct but internally related features. Tentative predictions are made by observing only a small cropped fraction of the material’s profile. The neural network models are trained and calibrated using 6400 image fractions with a resolution of 120×90 pixels. Different architectures are developed with a focus on two particular aspects. First, possible architectures are compared, particularly multi-output versus multi-label convolutional neural networks (CNNs); moreover, a hybrid model is employed, essentially a conjunction of a CNN with a multi-layer perceptron (MLP). The network’s input combines numerical and visual data, and its architecture primarily consists of seven dense layers and eight convolutional layers. With proper calibration and fine-tuning, improvements over standard CNN models are reflected in good training and test accuracies for predicting the material’s mechanical properties, with efficiency demonstrated by the loss function’s rapid convergence. Second, the role of the pre-training process is investigated. The obtained CNN-MLP model can inherit the learning of a pre-trained multi-label CNN initially developed for distinct features such as localization and number of passes, and it is demonstrated that pre-training effectively accelerates the learning process for the target feature. We therefore conclude that appropriate architecture design and pre-training are essential for applying machine learning techniques to realistic problems.
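
The hybrid CNN-MLP idea, one branch for the cropped image and one for the numerical data, with their embeddings concatenated before the prediction head, can be outlined in PyTorch. Layer counts and sizes here are deliberately smaller than the paper's seven dense and eight convolutional layers; this is a structural sketch only:

import torch
import torch.nn as nn

class CnnMlp(nn.Module):
    def __init__(self, n_numeric, n_outputs):
        super().__init__()
        self.cnn = nn.Sequential(                       # visual branch: 1x90x120 crop
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten())                               # -> 32 * 22 * 30 features
        self.mlp = nn.Sequential(                       # numerical branch
            nn.Linear(n_numeric, 32), nn.ReLU())
        self.head = nn.Sequential(                      # fused prediction head
            nn.Linear(32 * 22 * 30 + 32, 64), nn.ReLU(),
            nn.Linear(64, n_outputs))                   # e.g. mechanical-property targets

    def forward(self, image, numeric):
        z = torch.cat([self.cnn(image), self.mlp(numeric)], dim=1)
        return self.head(z)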

51 pages, 2432 KB  
Article
A Hubness Information-Based k-Nearest Neighbor Approach for Multi-Label Learning
by Zeyu Teng, Shanshan Tang, Min Huang and Xingwei Wang
Mathematics 2025, 13(7), 1202; https://doi.org/10.3390/math13071202 - 5 Apr 2025
Viewed by 2201
Abstract
Multi-label classification (MLC) plays a crucial role in various real-world scenarios. Prediction with nearest neighbors has achieved competitive performance in MLC. Hubness, a phenomenon in which a few points appear in the k-nearest neighbor (kNN) lists of many points in high-dimensional spaces, may significantly impact machine learning applications and has recently attracted extensive attention. However, it has not been adequately addressed in developing MLC algorithms. To address this issue, we propose a hubness-aware kNN-based MLC algorithm in this paper, named multi-label hubness information-based k-nearest neighbor (MLHiKNN). Specifically, we introduce a fuzzy measure of label relevance and employ a weighted kNN scheme. The hubness information is used to compute each training example’s membership in relevance and irrelevance to each label and calculate weights for the nearest neighbors of a query point. Then, MLHiKNN exploits high-order label correlations by training a logistic regression model for each label using the kNN voting results with respect to all possible labels. Experimental results on 28 benchmark datasets demonstrate that MLHiKNN is competitive among the compared methods, including nine well-established MLC algorithms and three commonly used hubness reduction techniques, in dealing with MLC problems.
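
Hubness itself is straightforward to measure: count how often each point appears in other points' k-NN lists and inspect the skewness of that count distribution. A quick diagnostic sketch on toy high-dimensional data:

import numpy as np
from scipy.stats import skew
from sklearn.neighbors import NearestNeighbors

def k_occurrence(X, k=10):
    # N_k(x): number of points whose k-NN list contains x; strong positive skew signals hubs.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)      # +1 because each point finds itself
    _, idx = nn.kneighbors(X)
    return np.bincount(idx[:, 1:].ravel(), minlength=len(X))

X = np.random.default_rng(0).normal(size=(1000, 200))    # high-dimensional toy data
print("k-occurrence skewness:", skew(k_occurrence(X)))   # clearly > 0 in high dimensions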

17 pages, 16395 KB  
Article
Towards Effective Parkinson’s Monitoring: Movement Disorder Detection and Symptom Identification Using Wearable Inertial Sensors
by Umar Khan, Qaiser Riaz, Mehdi Hussain, Muhammad Zeeshan and Björn Krüger
Algorithms 2025, 18(4), 203; https://doi.org/10.3390/a18040203 - 4 Apr 2025
Viewed by 1689
Abstract
Parkinson’s disease lacks a cure, yet symptomatic relief can be achieved through various treatments. This study addresses the critical task of anomalous event detection in the activities of daily living of patients with Parkinson’s disease and the identification of associated movement disorders, such as tremors, dyskinesia, and bradykinesia. Utilizing inertial data acquired from the most affected upper limb of the patients, this study aims to create an optimal pipeline for Parkinson’s patient monitoring. We propose a two-stage movement disorder detection and classification pipeline for binary classification (normal or anomalous event) and multi-label classification (tremors, dyskinesia, and bradykinesia), respectively. The proposed pipeline employs and evaluates manual feature crafting for classical machine learning algorithms, as well as an RNN-CNN-inspired deep learning model that requires no manual feature crafting. This study also explores three different window sizes for signal segmentation and two different auto-segment labeling approaches for precise and correct labeling of the continuous signal. The performance of the proposed model is validated on a publicly available inertial dataset. Comparisons with existing works reveal the novelty of our approach, which covers multiple anomalies (tremors, dyskinesia, and bradykinesia) and achieves 93.03% recall for movement disorder detection (binary) and 91.54% recall for movement disorder classification (multi-label). We believe the proposed approach will advance the field towards more effective and comprehensive solutions for Parkinson’s detection and symptom classification.
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (3rd Edition))
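
The window-segmentation step that feeds both stages is generic. Below is a small sketch of overlapping-window segmentation for a multi-channel inertial stream; the window length and overlap are illustrative, not the values the study evaluated:

import numpy as np

def segment(signal, fs, win_s=3.0, overlap=0.5):
    # signal: (n_samples, n_channels) inertial stream sampled at fs Hz.
    win = int(win_s * fs)
    step = int(win * (1 - overlap))
    return np.stack([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, step)])

windows = segment(np.random.randn(6000, 6), fs=100)      # 60 s of 6-axis IMU data
print(windows.shape)                                     # (39, 300, 6)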

20 pages, 4435 KB  
Article
OMAL: A Multi-Label Active Learning Approach from Data Streams
by Qiao Fang, Chen Xiang, Jicong Duan, Benallal Soufiyan, Changbin Shao, Xibei Yang, Sen Xu and Hualong Yu
Entropy 2025, 27(4), 363; https://doi.org/10.3390/e27040363 - 29 Mar 2025
Viewed by 1190
Abstract
With the rapid growth of digital computing, communication, and storage devices applied in various real-world scenarios, more and more data have been collected and stored, driving the development of machine learning techniques. The data that emerge in real-world applications also tend to be increasingly complex. In this study, we consider a complex data type, multi-label data, acquired under a time constraint in a dynamic online scenario. Under such conditions, constructing a learning model faces two challenges: it must dynamically adapt to variations in label correlations and imbalanced data distributions, and it requires more labeling effort. To solve these two issues, we propose a novel online multi-label active learning (OMAL) algorithm whose active query strategy simultaneously considers uncertainty (the average entropy of prediction probabilities) and diversity (the average cosine distance between feature vectors). Specifically, to capture label correlations, we use a classifier chain (CC) as the multi-label learning model and design a label co-occurrence ranking strategy to arrange the label sequence in the CC. To adapt to the naturally imbalanced distribution of multi-label data, we select the weighted extreme learning machine (WELM) as the basic binary classifier in the CC. Experimental results on ten benchmark multi-label datasets transformed into streams show that the proposed method is superior to several popular static multi-label active learning algorithms in terms of both Macro-F1 and Micro-F1, indicating its specific adaptation to the dynamic data stream environment.
(This article belongs to the Section Signal and Data Analysis)
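
The OMAL query strategy combines two easily computed quantities: the average binary entropy of the per-label prediction probabilities (uncertainty) and the average cosine distance to the already-labeled pool (diversity). A hedged sketch of such a scoring function; the mixing weight alpha is an illustrative choice, not a value from the paper:

import numpy as np
from scipy.spatial.distance import cdist

def query_score(proba, x, X_labeled, alpha=0.5):
    # proba: per-label relevance probabilities for candidate x (binary entropy per label).
    p = np.clip(proba, 1e-12, 1 - 1e-12)
    uncertainty = -(p * np.log(p) + (1 - p) * np.log(1 - p)).mean()
    diversity = cdist(x[None, :], X_labeled, metric="cosine").mean()
    return alpha * uncertainty + (1 - alpha) * diversity

# The candidate with the highest score is the one sent to the oracle for labeling.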

19 pages, 854 KB  
Article
Correlation and Knowledge-Based Joint Feature Selection for Copper Flotation Backbone Process Design
by Haipei Dong, Fuli Wang, Dakuo He and Yan Liu
Minerals 2025, 15(4), 353; https://doi.org/10.3390/min15040353 - 27 Mar 2025
Viewed by 452
Abstract
The intelligentization of flotation design plays a crucial role in enhancing industrial competitiveness and resource efficiency. Our previous work established a mapping relationship between the flotation backbone process graph and label vectors, enabling an intelligent design of the copper flotation backbone process through multilabel classification. Because historical databases contain an insufficient number of training samples, traditional feature selection methods perform poorly. To address the label-specific feature selection problem for this design, this study proposes correlation and knowledge-based joint feature selection (CK-JFS). In the proposed method, label correlations ensure that features specific to strongly related labels are prioritized, while domain knowledge further refines the selection process by applying specialized knowledge of copper flotation. This integration of data and knowledge significantly reduces the reliance of label-specific feature selection on the number of training samples. The results demonstrate that CK-JFS achieves significantly higher accuracy and computational efficiency than traditional multilabel feature selection algorithms in the context of copper flotation backbone process design.

23 pages, 3638 KB  
Article
Automatic Recognition of Dual-Component Radar Signals Based on Deep Learning
by Zeyu Tang, Hong Shen and Chan-Tong Lam
Sensors 2025, 25(6), 1809; https://doi.org/10.3390/s25061809 - 14 Mar 2025
Cited by 3 | Viewed by 1764
Abstract
The increasing density and complexity of electromagnetic signals have brought new challenges to multi-component radar signal recognition. To address the low recognition accuracy under low signal-to-noise ratios (SNRs) of the common recognition framework that combines time–frequency transformations (TFTs) with convolutional neural networks (CNNs), this paper proposes a new dual-component radar signal recognition framework (TFGM-RMNet) that combines a deep time–frequency generation module with a Transformer-based residual network. First, the received noisy signal is preprocessed. Then, the deep time–frequency generation module learns a complete set of basis functions to obtain various TF features of the time signal and outputs the corresponding time–frequency representation (TFR) under the supervision of high-quality images. Next, a ResNet combined with cascaded multi-head self-attention (MHSA) extracts local and global features from the TFR. Finally, modulation format prediction is achieved through multi-label classification. The proposed framework requires no explicit TFT during testing; the TFT process is built into TFGM to replace the traditional TFT, and both the classification results and an ideal TFR are obtained at test time, realizing an end-to-end deep learning (DL) framework. Simulation results show that when SNR > −8 dB, this method achieves an average recognition accuracy close to 100%, and it achieves 97% accuracy even at an SNR of −10 dB. At the same time, under low SNR, the recognition performance is better than existing algorithms including DCNN-RAMIML, DCNN-MLL, and DCNN-MIML.
(This article belongs to the Section Radar Sensors)

29 pages, 3905 KB  
Article
Federated Deep Learning for Scalable and Privacy-Preserving Distributed Denial-of-Service Attack Detection in Internet of Things Networks
by Abdulrahman A. Alshdadi, Abdulwahab Ali Almazroi, Nasir Ayub, Miltiadis D. Lytras, Eesa Alsolami, Faisal S. Alsubaei and Riad Alharbey
Future Internet 2025, 17(2), 88; https://doi.org/10.3390/fi17020088 - 13 Feb 2025
Cited by 11 | Viewed by 2362
Abstract
Industry-wide IoT networks have transformed operations but also increased vulnerabilities, notably to DDoS attacks. Because IoT systems are decentralised, these attacks can flood networks with malicious traffic, causing interruptions, financial losses, and availability issues. Scalable, privacy-preserving, and resource-efficient IoT intrusion detection algorithms are needed to solve this essential problem. This paper presents a Federated-Learning (FL) framework using ResVGG-SwinNet, a hybrid deep-learning architecture, for multi-label DDoS attack detection. ResNet improves feature extraction, VGGNet optimises feature refining, and the Swin-Transformer captures contextual dependencies, making the model sensitive to complicated attack patterns across varied network circumstances. Within the FL framework, decentralised training protects data privacy while scaling and adapting across diverse IoT contexts. New preprocessing methods, Dynamic Proportional Class Adjustment (DPCA) and a Dual Adaptive Selector (DAS) for feature optimisation, improve system efficiency and accuracy. The model performed well on the CIC-DDoS2019, UNSW-NB15, and IoT23 datasets, with 99.0% accuracy, a 2.5% false-alert rate, and 99.3% AUC. With a 93.0% optimisation efficiency score, the system balances computational needs with robust detection. Combined with advanced deep-learning models, FL provides a scalable, safe, and effective DDoS detection solution that overcomes significant shortcomings in current systems. The framework protects IoT networks from growing cyber threats and provides a complete approach for current IoT-driven ecosystems.
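
The federated side of such a framework rests on the standard federated-averaging loop: each node trains locally and only model weights travel to the aggregator. A generic FedAvg sketch in PyTorch, with the model, client loaders, and training settings as placeholders rather than the paper's ResVGG-SwinNet setup:

import copy
import torch

def fed_avg(global_model, clients, rounds=10, local_epochs=1):
    for _ in range(rounds):
        states = []
        for loader in clients:                       # raw traffic data never leaves the client
            local = copy.deepcopy(global_model)
            opt = torch.optim.SGD(local.parameters(), lr=0.01)
            for _ in range(local_epochs):
                for x, y in loader:                  # y: multi-label 0/1 attack indicators
                    opt.zero_grad()
                    loss = torch.nn.functional.binary_cross_entropy_with_logits(local(x), y)
                    loss.backward()
                    opt.step()
            states.append(local.state_dict())
        avg = {k: torch.stack([s[k].float() for s in states]).mean(0)   # FedAvg: mean weights
               for k in states[0]}
        global_model.load_state_dict(avg)
    return global_model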