Search Results (8)

Search Parameters:
Keywords = cyber threat hunting

21 pages, 3691 KiB  
Article
A Syntax-Aware Graph Network with Contrastive Learning for Threat Intelligence Triple Extraction
by Zhenxiang He, Ziqi Zhao and Zhihao Liu
Symmetry 2025, 17(7), 1013; https://doi.org/10.3390/sym17071013 - 27 Jun 2025
Abstract
As Advanced Persistent Threats (APTs) continue to evolve, constructing a dynamic cybersecurity knowledge graph requires precise extraction of entity–relationship triples from unstructured threat intelligence. Existing approaches, however, face significant challenges in modeling low-frequency threat associations, extracting multi-relational entities, and resolving overlapping entity scenarios. To overcome these limitations, we propose the Symmetry-Aware Prototype Contrastive Learning (SAPCL) framework for joint entity and relation extraction. By explicitly modeling syntactic symmetry in attack-chain dependency structures and its interaction with asymmetric adversarial semantics, SAPCL integrates dependency relation types with contextual features using a type-enhanced Graph Attention Network. This symmetry–asymmetry fusion facilitates a more effective extraction of multi-relational triples. Furthermore, we introduce a triple prototype contrastive learning mechanism that enhances the robustness of low-frequency relations through hierarchical semantic alignment and adaptive prototype updates. A non-autoregressive decoding architecture is also employed to globally generate multi-relational triples while mitigating semantic ambiguities. SAPCL was evaluated on three publicly available CTI datasets: HACKER, ACTI, and LADDER. It achieved F1-scores of 56.63%, 60.21%, and 53.65%, respectively. Notably, SAPCL demonstrated a substantial improvement of 14.5 percentage points on the HACKER dataset, validating its effectiveness in real-world cyber threat extraction scenarios. By synergizing syntactic–semantic multi-feature fusion with symmetry-driven dynamic representation learning, SAPCL establishes a symmetry–asymmetry adaptive paradigm for cybersecurity knowledge graph construction, thus enhancing APT attack tracing, threat hunting, and proactive cyber defense.
(This article belongs to the Special Issue Symmetry and Asymmetry in Artificial Intelligence for Cybersecurity)
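The prototype contrastive scoring idea in the abstract above can be illustrated with a minimal stdlib sketch (toy 2-D prototypes and illustrative relation names, not the paper's actual implementation): each candidate embedding is scored against per-relation prototype vectors via temperature-scaled cosine similarity, and a softmax turns those similarities into relation probabilities.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def prototype_probs(embedding, prototypes, temperature=0.1):
    """Softmax over temperature-scaled cosine similarities to each
    relation prototype -- the InfoNCE-style score used in prototype
    contrastive learning."""
    sims = {rel: cosine(embedding, p) / temperature
            for rel, p in prototypes.items()}
    m = max(sims.values())  # subtract max for numerical stability
    exps = {rel: math.exp(s - m) for rel, s in sims.items()}
    z = sum(exps.values())
    return {rel: e / z for rel, e in exps.items()}

# Toy prototypes for two hypothetical threat-intelligence relations.
protos = {"uses": [1.0, 0.0], "targets": [0.0, 1.0]}
probs = prototype_probs([0.9, 0.1], protos)
```

Lowering the temperature sharpens the distribution toward the nearest prototype, which is what makes low-frequency relations separable once their prototypes are well placed.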

19 pages, 964 KiB  
Article
SGMNet: A Supervised Seeded Graph-Matching Method for Cyber Threat Hunting
by Chenghong Zhang and Lingyin Su
Symmetry 2025, 17(6), 898; https://doi.org/10.3390/sym17060898 - 6 Jun 2025
Abstract
Proactively hunting known attack behaviors within system logs, termed threat hunting, is gaining traction in cybersecurity. Existing methods typically rely on constructing a query graph representing known attack patterns and identifying it as a subgraph within a system-wide provenance graph. However, the large scale and redundancy of provenance data lead to poor matching efficiency and high false-positive rates. To address these issues, this paper introduces SGMNet, a supervised seeded graph-matching network designed for efficient and accurate threat hunting. By selecting indicators of compromise (IOCs) as initial seed nodes, SGMNet extracts compact subgraphs from large-scale provenance graphs, significantly reducing graph size and complexity. It then learns adaptive node-expansion strategies to capture relevant context while suppressing irrelevant noise. Experiments on four real-world system log datasets demonstrate that SGMNet achieves a runtime reduction of over 60% compared to baseline methods, while reducing false positives by 35.2% on average. These results validate that SGMNet not only improves computational efficiency but also enhances detection precision, making it well suited for real-time threat hunting in large-scale environments.
(This article belongs to the Section Computer)
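The seed-and-expand step described above can be sketched as a bounded breadth-first expansion from IOC seed nodes over a provenance graph (a toy adjacency structure with illustrative process and domain names; SGMNet learns its expansion strategy rather than using a fixed hop limit):

```python
from collections import deque

def seeded_subgraph(adj, seeds, max_hops=2):
    """Collect all nodes within max_hops of any seed (IOC) node,
    returning the node set of the compact induced subgraph."""
    visited = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # stop expanding past the hop budget
        for nbr in adj.get(node, ()):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append((nbr, depth + 1))
    return visited

# Toy provenance edges: process spawns / network contacts.
adj = {
    "malware.exe": ["cmd.exe"],
    "cmd.exe": ["powershell.exe"],
    "powershell.exe": ["evil.example"],
}
sub = seeded_subgraph(adj, ["malware.exe"], max_hops=2)
```

With a hop budget of 2, `evil.example` (three hops out) is excluded, which is the size-reduction effect the abstract attributes to seeding.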

22 pages, 2426 KiB  
Article
A Novel Cloud-Enabled Cyber Threat Hunting Platform for Evaluating the Cyber Risks Associated with Smart Health Ecosystems
by Abdullah Alabdulatif and Navod Neranjan Thilakarathne
Appl. Sci. 2024, 14(20), 9567; https://doi.org/10.3390/app14209567 - 20 Oct 2024
Cited by 3
Abstract
The fast proliferation of Internet of Things (IoT) devices has dramatically altered healthcare, increasing the efficiency and efficacy of smart health ecosystems. However, this expansion has created substantial security risks, as cybercriminals increasingly target IoT devices in order to exploit their weaknesses and relay critical health information. The rising threat landscape poses serious concerns across various domains within healthcare, where the protection of patient information and the integrity of medical devices are paramount. Smart health systems, while offering numerous benefits, are particularly vulnerable to cyber-attacks due to the integration of IoT devices and the vast amounts of data they generate. Healthcare providers, although unable to control the actions of cyber adversaries, can take proactive steps to secure their systems by adopting robust cybersecurity measures, such as strong user authentication, regular system updates, and the implementation of advanced security technologies. This research introduces a groundbreaking approach to addressing the cybersecurity challenges in smart health ecosystems through the deployment of a novel cloud-enabled cyber threat-hunting platform. This platform leverages deception technology, which involves creating decoys, traps, and false information to divert cybercriminals away from legitimate health data and systems. By using this innovative approach, the platform assesses the cyber risks associated with smart health systems, offering actionable recommendations to healthcare stakeholders on how to minimize cyber risks and enhance the security posture of IoT-enabled healthcare solutions. Overall, this pioneering research represents a significant advancement in safeguarding the increasingly interconnected world of smart health ecosystems, providing a promising strategy for defending against the escalating cyber threats faced by the healthcare industry.

24 pages, 5627 KiB  
Article
Proactive Threat Hunting in Critical Infrastructure Protection through Hybrid Machine Learning Algorithm Application
by Ali Shan and Seunghwan Myeong
Sensors 2024, 24(15), 4888; https://doi.org/10.3390/s24154888 - 27 Jul 2024
Cited by 6
Abstract
Cyber-security challenges are growing globally and specifically target critical infrastructure. Conventional countermeasure practices are insufficient for proactive threat hunting. In this study, random forest (RF), support vector machine (SVM), multi-layer perceptron (MLP), AdaBoost, and hybrid models were applied for proactive threat hunting. By automating detection, the hybrid machine learning-based method improves threat hunting and frees up time to concentrate on high-risk warnings. These models are implemented on approach devices, access, and principal servers. The efficacy of several models, including hybrid approaches, is assessed. The findings show that the AdaBoost model provides the highest efficiency, with a 0.98 ROC area and 95.7% accuracy, detecting 146 threats with 29 false positives. Similarly, the random forest model achieved a 0.98 area under the ROC curve and a 95% overall accuracy, accurately identifying 132 threats and reducing false positives to 31. The hybrid model exhibited promise with a 0.89 ROC area and 94.9% accuracy, though it requires further refinement to lower its false positive rate. This research emphasizes the role of machine learning in improving cyber-security, particularly for critical infrastructure. Advanced ML techniques enhance threat detection and response times, and their continuous learning ability ensures adaptability to new threats.
(This article belongs to the Special Issue Artificial Intelligence and Sensors Technology in Smart Cities)
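The ROC areas reported above can be illustrated with the rank-sum (Mann–Whitney U) formulation of AUC; a minimal stdlib sketch, not the authors' evaluation code:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-sum formulation: the
    probability that a randomly chosen positive outscores a randomly
    chosen negative, counting ties as 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separating detector scores 1.0; a detector that assigns every alert the same score scores 0.5, the chance baseline.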

18 pages, 672 KiB  
Article
Threat Hunting System for Protecting Critical Infrastructures Using a Machine Learning Approach
by Mario Aragonés Lozano, Israel Pérez Llopis and Manuel Esteve Domingo
Mathematics 2023, 11(16), 3448; https://doi.org/10.3390/math11163448 - 9 Aug 2023
Cited by 5
Abstract
Cyberattacks are increasing daily in both number and diversity, and this trend is expected to escalate dramatically in the foreseeable future, with critical infrastructure (CI) assets and networks being no exception. Cyberattacks are more complex than before and remain unknown until they spawn, making them very difficult to detect and remediate. To react to such cyberattacks, usually defined as zero-day attacks, cyber-security specialists known as threat hunters must be present in organizations' security departments. Those threat hunters must process all the data generated by the organization's users (most of which are benign, repetitive, and follow predictable patterns) in short periods to detect unusual behaviors. The application of artificial intelligence, specifically machine learning (ML) techniques (for instance NLP, C-RNN-GAN, or GNN), can remarkably impact the real-time analysis of those data and help to discriminate between harmless data and malicious data; however, not every technique is helpful in every circumstance, so those specialists must know which techniques fit best at each specific moment. The main goal of the present work is to design a distributed and scalable system for threat hunting based on ML, with a special focus on critical infrastructure needs and characteristics.
(This article belongs to the Special Issue Neural Networks and Their Applications)
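As a toy illustration of the premise above, that user activity is mostly repetitive and unusual behaviors stand out (not the system's actual ML pipeline), a frequency-based rarity score already captures the intuition:

```python
from collections import Counter

def rarity_scores(events):
    """Score each event type by how rarely it occurs in the stream:
    score = 1 - count/total. Repetitive benign activity scores low;
    rare, unusual behaviors score near 1."""
    counts = Counter(events)
    total = len(events)
    return {e: 1.0 - c / total for e, c in counts.items()}

# Hypothetical user-activity log: mostly routine, one rare action.
log = ["login"] * 50 + ["read"] * 45 + ["disable_av"] * 5
scores = rarity_scores(log)
```

A real deployment would replace this with learned models, but the discrimination target is the same: separate predictable patterns from anomalies.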

16 pages, 4539 KiB  
Article
Proactive Ransomware Detection Using Extremely Fast Decision Tree (EFDT) Algorithm: A Case Study
by Ibrahim Ba’abbad and Omar Batarfi
Computers 2023, 12(6), 121; https://doi.org/10.3390/computers12060121 - 15 Jun 2023
Cited by 8
Abstract
Several malware variants have attacked systems and data over time. Ransomware is among the most harmful malware, since it causes huge losses: it locks the victim's machine or encrypts their personal information in order to extort a ransom. Numerous studies have been conducted to stop and quickly recognize ransomware attacks. For proactive forecasting, artificial intelligence (AI) techniques are used. Traditional machine learning/deep learning (ML/DL) techniques, however, take a lot of time and degrade the accuracy and latency performance of network monitoring. In this study, we utilized the Hoeffding trees classifier, one of the stream data mining classification techniques, to detect and prevent ransomware attacks. Three Hoeffding trees classifier algorithms were selected and applied to the Resilient Information Systems Security (RISS) research group dataset. After configuration, the Massive Online Analysis (MOA) software was used as a testing framework. The results of the Hoeffding tree classifier algorithms were then assessed to choose the model with the highest accuracy and latency performance. In conclusion, the EFDT algorithm achieved the highest classification accuracy, 99.41%, in 66 ms.
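The split criterion behind Hoeffding tree classifiers such as EFDT rests on the Hoeffding bound; a minimal sketch of that bound (a conceptual illustration, not the MOA implementation):

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding bound: epsilon = sqrt(R^2 * ln(1/delta) / (2n)).
    With probability 1 - delta, the true mean of a random variable
    with range R lies within epsilon of its observed mean over n
    samples. A Hoeffding tree splits a node once the observed gain
    gap between the two best attributes exceeds this epsilon."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))
```

Because epsilon shrinks as more stream examples arrive, the tree can commit to splits early with a statistical guarantee, which is what makes these classifiers fast enough for stream mining.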

12 pages, 803 KiB  
Article
A Neuro-Symbolic Classifier with Optimized Satisfiability for Monitoring Security Alerts in Network Traffic
by Darian Onchis, Codruta Istin and Eduard Hogea
Appl. Sci. 2022, 12(22), 11502; https://doi.org/10.3390/app122211502 - 12 Nov 2022
Cited by 6
Abstract
We introduce in this paper a neuro-symbolic predictive model based on Logic Tensor Networks, capable of discriminating, and at the same time explaining, the bad connections (called alerts or attacks) and the normal connections. The proposed classifier combines the ability of deep neural networks to improve through learning from experience with the interpretability of results provided by the symbolic artificial intelligence approach. Compared to other existing solutions, we advance the discovery of potential security breaches from a cognitive perspective. By introducing reasoning into the model, our aim is to further reduce the human staff needed to deal with the cyber-threat hunting problem. To justify the need for shifting towards hybrid systems for this task, the design, implementation, and comparison of the dense neural network and the neuro-symbolic model are presented in detail. While both models demonstrated similar precision in terms of standard accuracy, we further introduced for our model the concept of interactive accuracy as a way of querying the model's results at any time, coupled with deductive reasoning over the data. Applying our model to the CIC-IDS2017 dataset, we reached an accuracy of 0.95, with satisfiability levels around 0.85. Other aspects, such as overfitting mitigation and scalability issues, are also presented.
(This article belongs to the Special Issue Advances in Secure AI: Technology and Applications)
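The satisfiability levels mentioned above can be illustrated with a fuzzy-logic sketch (toy truth degrees; in Logic Tensor Networks those degrees come from learned neural predicates, omitted here): a rule such as attack(x) → alert(x) is scored with a fuzzy implication and averaged over samples.

```python
def implies(a, b):
    """Reichenbach fuzzy implication: 1 - a + a*b, one of the
    implication operators commonly used with Logic Tensor Networks."""
    return 1.0 - a + a * b

def satisfiability(truth_pairs):
    """Mean truth value of the rule attack(x) -> alert(x), where each
    sample is a pair (attack_degree, alert_degree) in [0, 1]."""
    return sum(implies(a, b) for a, b in truth_pairs) / len(truth_pairs)
```

A confident attack that raises no alert drives satisfiability toward 0, so the aggregate level is a direct measure of how well the model's outputs respect the symbolic rule.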

32 pages, 97596 KiB  
Article
Trusted Threat Intelligence Sharing in Practice and Performance Benchmarking through the Hyperledger Fabric Platform
by Hisham Ali, Jawad Ahmad, Zakwan Jaroucheh, Pavlos Papadopoulos, Nikolaos Pitropakis, Owen Lo, Will Abramson and William J. Buchanan
Entropy 2022, 24(10), 1379; https://doi.org/10.3390/e24101379 - 28 Sep 2022
Cited by 13
Abstract
Historically, threat information sharing has relied on manual modelling and centralised network systems, which can be inefficient, insecure, and prone to errors. Alternatively, private blockchains are now widely used to address these issues and improve overall organisational security. An organisation's vulnerabilities to attacks might change over time. It is critically important to find a balance among a current threat, the potential countermeasures, their consequences and costs, and the estimation of the overall risk that this provides to the organisation. For enhancing organisational security and automation, applying threat intelligence technology is critical for detecting, classifying, analysing, and sharing new cyberattack tactics. Trusted partner organisations can then share newly identified threats to improve their defensive capabilities against unknown attacks. On this basis, organisations can help reduce the risk of a cyberattack by providing access to past and current cybersecurity events through blockchain smart contracts and the Interplanetary File System (IPFS). The suggested combination of technologies can make organisational systems more reliable and secure, improving system automation and data quality. This paper outlines a privacy-preserving mechanism for threat information sharing in a trusted way. It proposes a reliable and secure architecture for data automation, quality, and traceability based on the Hyperledger Fabric private-permissioned distributed ledger technology and the MITRE ATT&CK threat intelligence framework. This methodology can also be applied to combat intellectual property theft and industrial espionage.
(This article belongs to the Special Issue Information Security and Privacy: From IoT to IoV)
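The traceability idea above can be sketched with content addressing alone (Python dicts standing in for IPFS and the Fabric ledger; the technique ID and organisation name are illustrative): a report is stored off-chain under the hash of its contents, and only that hash goes on the ledger, making tampering detectable.

```python
import hashlib
import json

def content_address(report):
    """SHA-256 of the canonical JSON encoding -- a stand-in for an
    IPFS content identifier, which is likewise derived from the data."""
    blob = json.dumps(report, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

store = {}   # stand-in for IPFS off-chain storage
ledger = []  # stand-in for on-chain ledger records

def share(report, org):
    """Store the report off-chain and record who shared which CID."""
    cid = content_address(report)
    store[cid] = report
    ledger.append({"org": org, "cid": cid})
    return cid

def verify(cid):
    """Tamper-evidence: recomputing the hash must reproduce the CID."""
    return content_address(store[cid]) == cid

cid = share({"technique": "T1059", "ioc": "evil.example"}, "org-a")
```

Any edit to the stored report changes its recomputed hash, so `verify` fails and the ledger entry exposes the inconsistency.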
