Search Results (361)

Search Parameters:
Keywords = malicious activity

25 pages, 2082 KiB  
Article
XTTS-Based Data Augmentation for Profanity Keyword Recognition in Low-Resource Speech Scenarios
by Shin-Chi Lai, Yi-Chang Zhu, Szu-Ting Wang, Yen-Ching Chang, Ying-Hsiu Hung, Jhen-Kai Tang and Wen-Kai Tsai
Appl. Syst. Innov. 2025, 8(4), 108; https://doi.org/10.3390/asi8040108 - 31 Jul 2025
Viewed by 94
Abstract
As voice cloning technology rapidly advances, the risk of personal voices being misused by malicious actors for fraud or other illegal activities has significantly increased, making the collection of speech data increasingly challenging. To address this issue, this study proposes a data augmentation method based on XText-to-Speech (XTTS) synthesis to tackle the challenges of small-sample, multi-class speech recognition, using profanity as a case study to achieve high-accuracy keyword recognition. Two models were therefore evaluated: a CNN model (Proposed-I) and a CNN-Transformer hybrid model (Proposed-II). Proposed-I leverages local feature extraction, improving accuracy on a real human speech (RHS) test set from 55.35% without augmentation to 80.36% with XTTS-enhanced data. Proposed-II integrates CNN’s local feature extraction with Transformer’s long-range dependency modeling, further boosting test set accuracy to 88.90% while reducing the parameter count by approximately 41%, significantly enhancing computational efficiency. Compared to a previously proposed incremental architecture, the Proposed-II model achieves an 8.49% higher accuracy while reducing parameters by about 98.81% and MACs by about 98.97%, demonstrating exceptional resource efficiency. By utilizing XTTS and public corpora to generate a novel keyword speech dataset, this study enhances sample diversity and reduces reliance on large-scale original speech data. Experimental analysis reveals that an optimal synthetic-to-real speech ratio of 1:5 significantly improves the overall system accuracy, effectively addressing data scarcity. Additionally, the Proposed-I and Proposed-II models achieve accuracies of 97.54% and 98.66%, respectively, in distinguishing real from synthetic speech, demonstrating their strong potential for speech security and anti-spoofing applications. Full article
(This article belongs to the Special Issue Advancements in Deep Learning and Its Applications)
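For illustration, below is a minimal PyTorch sketch of a CNN front end feeding a Transformer encoder for keyword classification, in the spirit of the Proposed-II description above; the layer sizes, mel-spectrogram input shape, and hyperparameters are assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch of a CNN front end feeding a Transformer encoder for
# keyword classification; layer sizes and hyperparameters are illustrative,
# not the Proposed-II configuration reported in the paper.
import torch
import torch.nn as nn

class CnnTransformerKeywordNet(nn.Module):
    def __init__(self, n_mels=40, n_classes=10, d_model=64):
        super().__init__()
        # CNN stage: local time-frequency feature extraction
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.proj = nn.Linear(32 * (n_mels // 4), d_model)
        # Transformer stage: long-range dependencies across time frames
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=4,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, mel):                    # mel: (batch, 1, n_mels, frames)
        z = self.cnn(mel)                      # (batch, 32, n_mels/4, frames/4)
        z = z.permute(0, 3, 1, 2).flatten(2)   # (batch, frames/4, 32*n_mels/4)
        z = self.encoder(self.proj(z))         # contextualized frame embeddings
        return self.head(z.mean(dim=1))        # average-pool over time, classify

logits = CnnTransformerKeywordNet()(torch.randn(2, 1, 40, 100))
print(logits.shape)  # torch.Size([2, 10])
```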
24 pages, 845 KiB  
Article
Towards Tamper-Proof Trust Evaluation of Internet of Things Nodes Leveraging IOTA Ledger
by Assiya Akli and Khalid Chougdali 
Sensors 2025, 25(15), 4697; https://doi.org/10.3390/s25154697 - 30 Jul 2025
Viewed by 210
Abstract
Trust evaluation has become a major challenge in the quickly developing Internet of Things (IoT) environment because of the vulnerabilities and security hazards associated with networked devices. To overcome these obstacles, this study offers a novel approach for evaluating trust that uses IOTA Tangle technology. By decentralizing the trust evaluation process, our approach reduces the risks related to centralized solutions, including privacy violations and single points of failure. To offer a thorough and reliable trust evaluation, this study combines direct and indirect trust measures. Moreover, we incorporate IOTA-based trust metrics to evaluate a node’s trust based on its activity in creating and validating IOTA transactions. The proposed framework ensures data integrity and secrecy by implementing immutable, secure storage for trust scores on IOTA. This ensures that no node transmits a wrong trust score for itself. The results show that the proposed scheme is efficient compared to recent literature, achieving up to +3.5% higher malicious node detection accuracy, up to 93% improvement in throughput, 40% reduction in energy consumption, and up to 24% lower end-to-end delay across various network sizes and adversarial conditions. Our contributions improve the scalability, security, and dependability of trust assessment processes in Internet of Things networks, providing a strong solution to the prevailing issues in current centralized trust models. Full article
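As a rough illustration of combining direct, indirect, and ledger-derived trust into a single score, here is a minimal sketch; the weights, the linear combination, and the transaction-validation metric are assumptions for illustration, not the paper's exact trust model.

```python
# Illustrative composite trust score; the weights and the specific ledger
# metric are assumptions, not the formulation used in the paper.
def composite_trust(direct, indirect, validated_tx, issued_tx,
                    w_direct=0.5, w_indirect=0.3, w_ledger=0.2):
    """direct and indirect are normalized to [0, 1]; transaction counts are raw."""
    # Ledger-based trust: share of a node's issued transactions that
    # other nodes have validated on the Tangle.
    ledger = validated_tx / issued_tx if issued_tx else 0.0
    return w_direct * direct + w_indirect * indirect + w_ledger * ledger

score = composite_trust(direct=0.9, indirect=0.7, validated_tx=42, issued_tx=50)
print(round(score, 3))  # 0.828
```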

19 pages, 3365 KiB  
Article
Robust Federated Learning Against Data Poisoning Attacks: Prevention and Detection of Attacked Nodes
by Pretom Roy Ovi and Aryya Gangopadhyay
Electronics 2025, 14(15), 2970; https://doi.org/10.3390/electronics14152970 - 25 Jul 2025
Viewed by 263
Abstract
Federated learning (FL) enables collaborative model building among a large number of participants without sharing sensitive data with the central server. Because of its distributed nature, FL has limited control over local data and the corresponding training process. Therefore, it is susceptible to data poisoning attacks, where malicious workers train the model on malicious data. Attackers on the worker side can easily initiate such attacks by swapping the labels of training instances, adding noise to training instances, or adding out-of-distribution training instances to the local data. Local workers under such attacks carry incorrect information to the server, poison the global model, and cause misclassifications. The prevention and detection of such data poisoning attacks is therefore crucial to building a robust federated training framework. To address this, we propose a prevention strategy in federated learning, namely confident federated learning, to protect workers from such attacks. The proposed prevention strategy first validates the label quality of local training samples by characterizing and identifying label errors in the local training data, and then excludes the detected mislabeled samples from local training. We evaluated this approach on both the image and audio domains, and our experimental results validate the robustness of confident federated learning in preventing data poisoning attacks. The proposed method successfully detects mislabeled training samples with above 85% accuracy and excludes the detected samples from the training set. However, the prevention strategy can only stop an attack locally up to a certain percentage of poisonous samples; beyond that percentage, detection of the attacked workers is needed. In addition to the prevention strategy, we therefore propose a novel detection strategy in the federated learning framework to detect malicious workers under attack. We create a class-wise cluster representation for every participating worker by utilizing the neuron activation maps of local models, and we analyze the resulting clusters to filter out the workers under attack before model aggregation. We experimentally demonstrate the efficacy of the proposed detection strategy in identifying workers affected by data poisoning attacks, along with the attack type, e.g., label-flipping or dirty labeling. Our experimental results also show that the global model could not converge even after a large number of training rounds in the presence of malicious workers, whereas after detecting the malicious workers with the proposed method and discarding them from model aggregation, the global model achieved convergence within very few training rounds. Furthermore, the proposed approach stays robust under different data distributions and model sizes and does not require prior knowledge about the number of attackers in the system. Full article
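A minimal sketch of the kind of pre-training label filtering the abstract describes, assuming out-of-fold predicted probabilities are available on each worker; the per-class threshold rule is illustrative, not the authors' exact criterion.

```python
# Minimal sketch of pre-training label filtering on a worker: flag a sample
# as mislabeled when the model's out-of-fold probability for its given label
# falls below the average confidence of that class. The threshold rule is an
# assumption, not the paper's exact criterion.
import numpy as np

def filter_mislabeled(proba, labels):
    """proba: (n_samples, n_classes) out-of-fold predicted probabilities,
    labels: (n_samples,) given labels. Returns a boolean keep-mask."""
    # Per-class threshold: mean probability assigned to class k over the
    # samples currently labeled k.
    thresholds = np.array([proba[labels == k, k].mean()
                           for k in range(proba.shape[1])])
    self_conf = proba[np.arange(len(labels)), labels]
    return self_conf >= thresholds[labels]

proba = np.array([[0.9, 0.1], [0.2, 0.8], [0.4, 0.6]])
labels = np.array([0, 1, 0])                 # third sample likely label-flipped
print(filter_mislabeled(proba, labels))      # [ True  True False]
```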

27 pages, 8594 KiB  
Article
An Explainable Hybrid CNN–Transformer Architecture for Visual Malware Classification
by Mohammed Alshomrani, Aiiad Albeshri, Abdulaziz A. Alsulami and Badraddin Alturki
Sensors 2025, 25(15), 4581; https://doi.org/10.3390/s25154581 - 24 Jul 2025
Viewed by 668
Abstract
Malware continues to develop, posing significant challenges for traditional signature-based detection systems. Visual malware classification, which transforms malware binaries into grayscale images, has emerged as a promising alternative for recognizing patterns in malicious code. This study presents a hybrid deep learning architecture that combines the local feature extraction capabilities of ConvNeXt-Tiny (a CNN-based model) with the global context modeling of the Swin Transformer. The proposed model is evaluated using three benchmark datasets—Malimg, MaleVis, VirusMNIST—encompassing 61 malware classes. Experimental results show that the hybrid model achieved a validation accuracy of 94.04%, outperforming both the ConvNeXt-Tiny-only model (92.45%) and the Swin Transformer-only model (90.44%). Additionally, we extended our validation dataset to two more datasets—Maldeb and Dumpware-10—to strengthen the empirical foundation of our work. The proposed hybrid model achieved competitive accuracy on both, with 98% on Maldeb and 97% on Dumpware-10. To enhance model interpretability, we employed Gradient-weighted Class Activation Mapping (Grad-CAM), which visualizes the learned representations and reveals the complementary nature of CNN and Transformer modules. The hybrid architecture, combined with explainable AI, offers an effective and interpretable approach for malware classification, facilitating better understanding and trust in automated detection systems. In addition, a real-time deployment scenario is demonstrated to validate the model’s practical applicability in dynamic environments. Full article
(This article belongs to the Special Issue Cyber Security and AI—2nd Edition)
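For context, a short sketch of the binary-to-grayscale-image conversion that visual malware classifiers of this kind rely on; the fixed row width and any resize target are common conventions and not necessarily this paper's preprocessing.

```python
# Illustrative conversion of a malware binary into a grayscale image; the
# 256-pixel row width is a common convention, not necessarily the
# preprocessing used in this paper.
import numpy as np
from PIL import Image

def binary_to_grayscale(path, width=256):
    data = np.fromfile(path, dtype=np.uint8)             # raw bytes, 0..255
    height = len(data) // width
    img = data[: height * width].reshape(height, width)  # one byte = one pixel
    return Image.fromarray(img, mode="L")

# binary_to_grayscale("sample.exe").resize((224, 224)).save("sample.png")
```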

16 pages, 1251 KiB  
Article
Enhanced Detection of Intrusion Detection System in Cloud Networks Using Time-Aware and Deep Learning Techniques
by Nima Terawi, Huthaifa I. Ashqar, Omar Darwish, Anas Alsobeh, Plamen Zahariev and Yahya Tashtoush
Computers 2025, 14(7), 282; https://doi.org/10.3390/computers14070282 - 17 Jul 2025
Viewed by 331
Abstract
This study introduces an enhanced Intrusion Detection System (IDS) framework for Denial-of-Service (DoS) attacks, utilizing network traffic inter-arrival time (IAT) analysis. By examining the timing between packets together with other statistical features, we detect patterns of malicious activity, allowing early and effective DoS threat mitigation. We generate real DoS traffic, including normal, Internet Control Message Protocol (ICMP), Smurf attack, and Transmission Control Protocol (TCP) classes, and develop nine predictive algorithms, combining traditional machine learning and advanced deep learning techniques with optimization methods, including the synthetic minority oversampling technique (SMOTE) and grid search (GS). Our findings reveal that while traditional machine learning achieved moderate accuracy, it struggled with imbalanced datasets. In contrast, Deep Neural Network (DNN) models showed significant improvements with optimization, with DNN combined with GS (DNN-GS) reaching 89% accuracy. The Recurrent Neural Network combined with SMOTE and GS (RNN-SMOTE-GS), however, emerged as the best-performing model, with a precision of 97%, demonstrating the effectiveness of combining SMOTE and GS and highlighting the critical role of advanced optimization techniques in enhancing the detection capabilities of IDS models for the accurate classification of various types of network traffic and attacks. Full article
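A hedged sketch of the kind of pipeline described above: inter-arrival-time statistics per flow, SMOTE rebalancing, and a grid-searched neural classifier, assuming scikit-learn and imbalanced-learn; the feature set and parameter grid are illustrative only.

```python
# Illustrative IDS pipeline: IAT summary statistics per flow, SMOTE class
# rebalancing, and a grid-searched neural classifier. Feature choice and the
# parameter grid are assumptions, not the paper's exact setup.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

def iat_features(timestamps):
    """Summary statistics of packet inter-arrival times for one flow."""
    iat = np.diff(np.sort(timestamps))
    return [iat.mean(), iat.std(), iat.min(), iat.max()]

pipe = Pipeline([("smote", SMOTE(random_state=0)),
                 ("clf", MLPClassifier(max_iter=500))])
grid = GridSearchCV(pipe,
                    {"clf__hidden_layer_sizes": [(64,), (128, 64)],
                     "clf__alpha": [1e-4, 1e-3]},
                    cv=3)
# grid.fit(X_train, y_train)   # X_train rows built with iat_features()
```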

18 pages, 1199 KiB  
Article
Adaptive, Privacy-Enhanced Real-Time Fraud Detection in Banking Networks Through Federated Learning and VAE-QLSTM Fusion
by Hanae Abbassi, Saida El Mendili and Youssef Gahi
Big Data Cogn. Comput. 2025, 9(7), 185; https://doi.org/10.3390/bdcc9070185 - 9 Jul 2025
Viewed by 752
Abstract
Increased digital banking operations have brought about a surge in suspicious activities, necessitating heightened real-time fraud detection systems. Conversely, traditional static approaches encounter challenges in maintaining privacy while adapting to new fraudulent trends. In this paper, we provide a unique approach to tackling those challenges by integrating VAE-QLSTM with Federated Learning (FL) in a semi-decentralized architecture, maintaining privacy alongside adapting to emerging malicious behaviors. The suggested architecture builds on the adeptness of VAE-QLSTM to capture meaningful representations of transactions, serving in abnormality detection. On the other hand, QLSTM combines quantum computational capability with temporal sequence modeling, seeking to give a rapid and scalable method for real-time malignancy detection. The designed approach was set up through TensorFlow Federated on two real-world datasets—notably IEEE-CIS and European cardholders—outperforming current strategies in terms of accuracy and sensitivity, achieving 94.5% and 91.3%, respectively. This proves the potential of merging VAE-QLSTM with FL to address fraud detection difficulties, ensuring privacy and scalability in advanced banking networks. Full article

20 pages, 6286 KiB  
Article
Near-Field Microwave Sensing for Chip-Level Tamper Detection
by Maryam Saadat Safa and Shahin Tajik
Sensors 2025, 25(13), 4188; https://doi.org/10.3390/s25134188 - 5 Jul 2025
Viewed by 376
Abstract
Stealthy chip-level tamper attacks, such as hardware Trojan insertions or security-critical circuit modifications, can threaten modern microelectronic systems’ security. While traditional inspection and side-channel methods offer potential for tamper detection, they may not reliably detect all forms of attacks and often face practical limitations in terms of scalability, accuracy, or applicability. This work introduces a non-invasive, contactless tamper detection method employing a complementary split-ring resonator (CSRR). CSRRs, which are typically deployed for non-destructive material characterization, can be placed on the surface of the chip’s package to detect subtle variations in the impedance of the chip’s power delivery network (PDN) caused by tampering. The changes in the PDN’s impedance profile perturb the local electric near field and consequently affect the sensor’s impedance. These changes manifest as measurable variations in the sensor’s scattering parameters. By monitoring these variations, our approach enables robust and cost-effective physical integrity verification requiring neither physical contact with the chips or printed circuit board (PCB) nor activation of the underlying malicious circuits. To validate our claims, we demonstrate the detection of various chip-level tamper events on an FPGA manufactured with 28 nm technology. Full article
(This article belongs to the Special Issue Sensors in Hardware Security)

18 pages, 3039 KiB  
Article
Security Symmetry in Embedded Systems: Using Microsoft Defender for IoT to Detect Firmware Downgrade Attacks
by Marian Hristov, Maria Nenova and Viktoria Dimitrova
Symmetry 2025, 17(7), 1061; https://doi.org/10.3390/sym17071061 - 4 Jul 2025
Viewed by 361
Abstract
Nowadays, the world witnesses cyber attacks daily, and these threats are becoming exponentially more sophisticated due to advances in Artificial Intelligence (AI). This progress allows adversaries to accelerate malware development and streamline the exploitation process. The motives vary, and so do the consequences. Unlike Information Technology (IT) breaches, compromises of Operational Technology (OT)—such as manufacturing plants, electric grids, or water and wastewater facilities—can have life-threatening or environmentally hazardous consequences. For that reason, this article explores a potential cyber attack against an OT environment—a firmware downgrade—and proposes a solution for detection and response by implementing Microsoft Defender for IoT (D4IoT), one of the leading products on the market for OT monitoring. To detect the malicious firmware downgrade activity, D4IoT was implemented in a pre-commissioning (non-production) environment. The solution passively monitored the network, identified the deviation, and generated alerts for response actions. Testing showed that D4IoT effectively detected the firmware downgrade attempts based on protocol analysis and asset behavior profiling. These findings demonstrate that D4IoT provides valuable detection capabilities against an intentional firmware downgrade designed to exploit known vulnerabilities in the older, less secure version, thereby strengthening the cybersecurity posture of OT environments. The explored attack scenario leverages the symmetry between genuine and malicious firmware flows, where the downgrade mimics the upgrade process, aiming to create challenges in detection. The proposed solution discerns adversarial actions from legitimate firmware changes by breaking this functional symmetry through behavioral profiling. Full article

33 pages, 5362 KiB  
Article
A Method for Trust-Based Collaborative Smart Device Selection and Resource Allocation in the Financial Internet of Things
by Bo Wang, Jiesheng Wang and Mingchu Li
Sensors 2025, 25(13), 4082; https://doi.org/10.3390/s25134082 - 30 Jun 2025
Viewed by 238
Abstract
With the rapid development of the Financial Internet of Things (FIoT), many intelligent devices have been deployed in various business scenarios. Due to the unique characteristics of these devices, they are highly vulnerable to malicious attacks, posing significant threats to the system’s stability and security. Moreover, the limited resources available in the FIoT, combined with the extensive deployment of AI algorithms, can significantly reduce overall system availability. To address the challenge of resisting malicious behaviors and attacks in the FIoT, this paper proposes a trust-based collaborative smart device selection algorithm that integrates both subjective and objective trust mechanisms with dynamic blacklists and whitelists, leveraging domain knowledge and game theory. It is essential to evaluate real-time dynamic trust levels during system execution to accurately assess device trustworthiness. A dynamic blacklist and whitelist transformation mechanism is also proposed to capture the evolving behavior of collaborative service devices and update the lists accordingly. The proposed algorithm enhances the anti-attack capabilities of smart devices in the FIoT by combining adaptive trust evaluation with blacklist and whitelist strategies. It maintains a high task success rate in both single and complex attack scenarios. Furthermore, to address the challenge of resource allocation for trusted smart devices under constrained edge resources, a coalition game-based algorithm is proposed that considers both device activity and trust levels. Experimental results demonstrate that the proposed method significantly improves task success rates and resource allocation performance compared to existing approaches. Full article
(This article belongs to the Special Issue Network Security and IoT Security: 2nd Edition)

37 pages, 10762 KiB  
Article
Evaluating Adversarial Robustness of No-Reference Image and Video Quality Assessment Models with Frequency-Masked Gradient Orthogonalization Adversarial Attack
by Khaled Abud, Sergey Lavrushkin and Dmitry Vatolin
Big Data Cogn. Comput. 2025, 9(7), 166; https://doi.org/10.3390/bdcc9070166 - 25 Jun 2025
Viewed by 782
Abstract
Neural-network-based models have made considerable progress in many computer vision areas over recent years. However, many works have exposed their vulnerability to malicious input data manipulation—that is, to adversarial attacks. Although many recent works have thoroughly examined the adversarial robustness of classifiers, the robustness of Image Quality Assessment (IQA) methods remains understudied. This paper addresses this gap by proposing FM-GOAT (Frequency-Masked Gradient Orthogonalization Attack), a novel white box adversarial method tailored for no-reference IQA models. Using a novel gradient orthogonalization technique, FM-GOAT uniquely optimizes adversarial perturbations against multiple perceptual constraints to minimize visibility, moving beyond traditional lp-norm bounds. We evaluate FM-GOAT on seven state-of-the-art NR-IQA models across three image and video datasets, revealing significant vulnerability to the proposed attack. Furthermore, we examine the applicability of adversarial purification methods to the IQA task, as well as their efficiency in mitigating white box adversarial attacks. By studying the activations from models’ intermediate layers, we explore their behavioral patterns in adversarial scenarios and discover valuable insights that may lead to better adversarial detection. Full article
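To illustrate the gradient-orthogonalization idea in general terms, the sketch below projects an attack gradient onto the complement of constraint gradients so that, to first order, the perturbation step leaves the perceptual constraints unchanged; this is a generic Gram-Schmidt-style projection under assumed inputs, not the FM-GOAT algorithm itself.

```python
# Illustrative gradient-orthogonalization step: remove from the attack
# gradient the components along the gradients of perceptual-constraint terms.
# Generic sketch only; not the FM-GOAT procedure from the paper.
import numpy as np

def orthogonalize(attack_grad, constraint_grads):
    g = attack_grad.astype(float).ravel().copy()
    for c in constraint_grads:                # Gram-Schmidt-style projection
        c = c.astype(float).ravel()
        denom = c @ c
        if denom > 0:
            g -= (g @ c) / denom * c          # drop the component along c
    return g.reshape(attack_grad.shape)

g = np.array([1.0, 1.0])
c = np.array([[0.0, 2.0]])                    # one perceptual constraint
print(orthogonalize(g, c))                    # [1. 0.]
```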

25 pages, 28388 KiB  
Article
Software Trusted Platform Module (SWTPM) Resource Sharing Scheme for Embedded Systems
by Da-Chuan Chen, Guan-Ruei Chen and Yu-Ping Liao
Sensors 2025, 25(12), 3828; https://doi.org/10.3390/s25123828 - 19 Jun 2025
Viewed by 449
Abstract
Embedded system networks are widely deployed across various domains and often perform mission-critical tasks, making it essential for all nodes within the system to be trustworthy. Traditionally, each node is equipped with a discrete Trusted Platform Module (dTPM) to ensure network-wide trustworthiness. However, this study proposes a cost-effective system architecture that deploys software-based TPMs (SWTPMs) on the majority of nodes, while reserving dTPMs for a few central nodes to maintain overall system integrity. The proposed architecture employs IBMACS for system integrity reporting. In addition, a database-based anomaly detection (AD) agent is developed to identify and isolate untrusted nodes. A traffic anomaly detection agent is also introduced to monitor communication between servers and clients, ensuring that traffic patterns remain normal. Finally, a custom measurement kernel is implemented, along with an activation agent, to enforce a measured boot process for custom applications during startup. This architecture is designed to safeguard mission-critical embedded systems from malicious threats while reducing deployment costs. Full article
(This article belongs to the Special Issue Privacy and Security for IoT-Based Smart Homes)

26 pages, 1588 KiB  
Article
GlassBoost: A Lightweight and Explainable Classification Framework for Tabular Datasets
by Ehsan Namjoo, Alison N. O’Connor, Jim Buckley and Conor Ryan
Appl. Sci. 2025, 15(12), 6931; https://doi.org/10.3390/app15126931 - 19 Jun 2025
Viewed by 453
Abstract
Explainable artificial intelligence (XAI) is essential for fostering trust, transparency, and accountability in machine learning systems, particularly when applied in high-stakes domains. This paper introduces a novel XAI system designed for classification tasks on tabular data, which offers a balance between performance and interpretability. The proposed method, GlassBoost, first trains an XGBoost model on a given dataset and then computes gain scores, quantifying the average improvement in the model's loss function contributed by each feature during tree splits. Based on these scores, a subset of significant features is selected. A shallow decision tree is then trained using the top d features with the highest gain scores, where d is significantly smaller than the total number of original features. This model compression yields a transparent, IF–THEN rule-based decision process that remains faithful to the original high-performing model. To evaluate the system, we apply it to an anomaly detection task in the context of intrusion detection systems (IDSs), using a dataset containing traffic features from both malicious and normal activities. Results show that our method achieves high accuracy, precision, and recall while providing a clear and interpretable explanation of its decision-making. We further validate its explainability using SHAP, a well-established approach in the field of XAI. Comparative analysis demonstrates that GlassBoost outperforms SHAP in terms of precision, recall, and accuracy, with more balanced performance across the three metrics. Likewise, our review of the literature indicates that GlassBoost outperforms many other XAI models while retaining computational efficiency. In one configuration, GlassBoost achieved an accuracy of 0.9868, a recall of 0.9792, and a precision of 0.9843 using only eight features within a tree of maximum depth four. Full article
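A minimal sketch of the pipeline as the abstract describes it—rank features by XGBoost gain, keep the top d, then fit a shallow decision tree—using the reported d = 8 and maximum depth of 4; the estimator settings and data handling are otherwise illustrative.

```python
# Sketch of the GlassBoost idea as described in the abstract: rank features by
# XGBoost gain, keep the top d, and fit a shallow decision tree on them.
# d=8 and max_depth=4 follow the reported configuration; the rest is
# illustrative, not the authors' implementation.
import numpy as np
import xgboost as xgb
from sklearn.tree import DecisionTreeClassifier, export_text

def glassboost_fit(X, y, d=8, max_depth=4):
    booster = xgb.XGBClassifier(n_estimators=200).fit(X, y)
    gain = booster.get_booster().get_score(importance_type="gain")
    # get_score keys are "f0", "f1", ...; features never used are absent.
    scores = np.zeros(X.shape[1])
    for name, value in gain.items():
        scores[int(name[1:])] = value
    top = np.argsort(scores)[::-1][:d]               # top-d features by gain
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X[:, top], y)
    return tree, top

# tree, top = glassboost_fit(X_train, y_train)
# print(export_text(tree))   # IF–THEN rules over the selected features
```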

20 pages, 3628 KiB  
Article
Homomorphic Encryption-Based Federated Active Learning on GCNs
by Xiaohu He, Zhihao Song, Dandan Zhang, Hongwei Ju and Qingfang Meng
Symmetry 2025, 17(6), 969; https://doi.org/10.3390/sym17060969 - 18 Jun 2025
Viewed by 356
Abstract
With the dramatic growth in dataset size, active learning has become one of the effective methods for dealing with large-scale unlabeled data. However, most existing active learning methods are inefficient due to poor initial target models and lack the ability to utilize the feature similarity between labeled and unlabeled data. Furthermore, data leakage is a serious threat to data privacy. In this paper, considering the features of the data itself, an augmented graph convolutional network (GCN) is proposed which acts as a sampler for data selection in active learning, avoiding the involvement of the initially poor target model. Then, by applying the proposed GCN as a substitute for the initially poor target model, this paper proposes an active learning model based on augmented GCNs, which is able to select more representative data, enabling the active learning model to achieve better classification performance with limited labeled data. Finally, this paper proposes a homomorphic encryption-based federated active learning model to improve data utilization and enhance the security of private data. Experiments were conducted on three datasets, Cora, CiteSeer and PubMed, achieving accuracy rates of 94.47%, 92.86% and 91.51%, respectively, while providing provable security guarantees. Furthermore, under a model poisoning attack, the highest malicious user detection accuracy was 88.07%, and the global model test accuracy reached 88.42%, 84.22% and 81.46%. Full article
(This article belongs to the Special Issue Applications Based on Symmetry in Applied Cryptography)
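As a rough illustration of additively homomorphic aggregation of client updates, the sketch below uses the python-paillier (phe) library as a stand-in for the paper's encryption scheme; per-weight Paillier encryption is shown only for clarity and is not how the authors' system is implemented.

```python
# Minimal sketch of additively homomorphic aggregation of client model
# updates, using python-paillier ("phe") as a stand-in encryption scheme.
# Per-weight encryption is illustrative and far slower than production use.
from functools import reduce
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

client_updates = [[0.10, -0.20], [0.30, 0.05]]          # two clients' gradients
encrypted = [[pub.encrypt(w) for w in upd] for upd in client_updates]

# The server sums ciphertexts without ever seeing plaintext updates.
summed = [reduce(lambda a, b: a + b, col) for col in zip(*encrypted)]
average = [priv.decrypt(c) / len(client_updates) for c in summed]
print(average)  # approximately [0.2, -0.075]
```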

18 pages, 1005 KiB  
Article
FedEach: Federated Learning with Evaluator-Based Incentive Mechanism for Human Activity Recognition
by Hyun Woo Lim, Sean Yonathan Tanjung, Ignatius Iwan, Bernardo Nugroho Yahya and Seok-Lyong Lee
Sensors 2025, 25(12), 3687; https://doi.org/10.3390/s25123687 - 12 Jun 2025
Viewed by 446
Abstract
Federated learning (FL) is a decentralized approach that aims to establish a global model by aggregating updates from diverse clients without sharing their local data. However, the approach becomes complicated when Byzantine clients—referred to as malicious clients—join with arbitrary manipulations. Classical techniques, such as Federated Averaging (FedAvg), are insufficient to incentivize reliable clients and discourage malicious ones. Other existing Byzantine-robust FL schemes for addressing malicious clients either rely on incentivizing reliable clients or require server-labeled data as a public validation dataset, which increases time complexity. This study introduces a federated learning framework with an evaluator-based incentive mechanism (FedEach) that offers robustness with no dependency on server-labeled data. In this framework, we introduce evaluators and participants. Unlike existing approaches, the server selects the evaluators and participants among the clients using model-based performance evaluation criteria such as test score and reputation. The evaluators then assess whether each participant is reliable or malicious. Subsequently, the server exclusively aggregates models from the identified reliable participants and the evaluators for global model updates. After this aggregation, the server calculates each client's contribution, prioritizing high-quality updates to ensure their fair recognition and penalizing malicious clients based on their contributions. Empirical evidence obtained from performance on human activity recognition (HAR) datasets highlights FedEach's effectiveness, especially in environments with a high presence of malicious clients. In addition, FedEach maintains computational efficiency, making it suitable for efficient FL applications such as sensor-based HAR with wearable devices and mobile sensing. Full article
(This article belongs to the Special Issue Wearable Devices for Physical Activity and Healthcare Monitoring)
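A minimal sketch of evaluator-gated aggregation in the spirit of FedEach: evaluators score each participant's update, and the server averages only the updates judged reliable; the consensus rule and threshold are assumptions, not the paper's exact mechanism.

```python
# Illustrative evaluator-gated aggregation: average only the participant
# updates whose mean evaluator score clears a threshold. The scoring and
# threshold are assumptions, not FedEach's exact criteria.
import numpy as np

def evaluator_gated_average(participant_updates, evaluator_scores, threshold=0.6):
    """participant_updates: list of weight vectors (np.ndarray),
    evaluator_scores: (n_evaluators, n_participants) accuracy-like scores."""
    mean_score = evaluator_scores.mean(axis=0)      # consensus per participant
    reliable = mean_score >= threshold              # gate out suspected malicious
    kept = [u for u, ok in zip(participant_updates, reliable) if ok]
    return np.mean(kept, axis=0), reliable

updates = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([9.0, -9.0])]
scores = np.array([[0.8, 0.7, 0.1],
                   [0.9, 0.8, 0.2]])                # third client looks malicious
agg, mask = evaluator_gated_average(updates, scores)
print(agg, mask)   # [1.05 0.95] [ True  True False]
```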

36 pages, 5316 KiB  
Article
Risk Assessment of Cryptojacking Attacks on Endpoint Systems: Threats to Sustainable Digital Agriculture
by Tetiana Babenko, Kateryna Kolesnikova, Maksym Panchenko, Olga Abramkina, Nikolay Kiktev, Yuliia Meish and Pavel Mazurchuk
Sustainability 2025, 17(12), 5426; https://doi.org/10.3390/su17125426 - 12 Jun 2025
Cited by 1 | Viewed by 978
Abstract
Digital agriculture has rapidly developed in the last decade in many countries where the share of agricultural production is a significant part of the total volume of gross production. Digital agroecosystems are developed using a variety of IT solutions, software and hardware tools, wired and wireless data transmission technologies, open source code, Open API, etc. A special place in agroecosystems is occupied by electronic payment technologies and blockchain technologies, which allow farmers and other agricultural enterprises to conduct commodity and monetary transactions with suppliers, creditors, and buyers of products. Such ecosystems contribute to the sustainable development of agriculture, agricultural engineering, and management of production and financial operations in the agricultural industry and related industries, as well as in other sectors of the economy of a number of countries. The introduction of crypto solutions in the agricultural sector is designed to create integrated platforms aimed at helping farmers manage supply lines or gain access to financial services. At the same time, there are risks of illegal use of computing power for cryptocurrency mining—cryptojacking. This article offers a thorough risk assessment of cryptojacking attacks on endpoint systems, focusing on identifying critical vulnerabilities within IT infrastructures and outlining practical preventive measures. The analysis examines key attack vectors—including compromised websites, infected applications, and supply chain infiltration—and explores how unauthorized cryptocurrency mining degrades system performance and endangers data security. The research methodology combines an evaluation of current cybersecurity trends, a review of specialized literature, and a controlled experiment simulating cryptojacking attacks. The findings highlight the importance of multi-layered protection mechanisms and ongoing system monitoring to detect malicious activities at an early stage. Full article
(This article belongs to the Section Sustainable Agriculture)
