Search Results (40)

Search Parameters:
Keywords = audit logs

22 pages, 6452 KiB  
Article
A Blockchain and IoT-Enabled Framework for Ethical and Secure Coffee Supply Chains
by John Byrd, Kritagya Upadhyay, Samir Poudel, Himanshu Sharma and Yi Gu
Future Internet 2025, 17(8), 334; https://doi.org/10.3390/fi17080334 - 27 Jul 2025
Abstract
The global coffee supply chain is a complex multi-stakeholder ecosystem plagued by fragmented records, unverifiable origin claims, and limited real-time visibility. These limitations pose risks to ethical sourcing, product quality, and consumer trust. To address these issues, this paper proposes a blockchain and IoT-enabled framework for secure and transparent coffee supply chain management. The system integrates simulated IoT sensor data such as Radio-Frequency Identification (RFID) identity tags, Global Positioning System (GPS) logs, weight measurements, environmental readings, and mobile validations with Ethereum smart contracts to establish traceability and automate supply chain logic. A Solidity-based Ethereum smart contract is developed and deployed on the Sepolia testnet to register users, log batches, and handle ownership transfers. The Internet of Things (IoT) data stream is simulated using structured datasets to mimic real-world device behavior, ensuring that the system is tested under realistic conditions. Our performance evaluation on 1000 transactions shows that the model incurs low transaction costs and demonstrates predictable efficiency behavior of the smart contract in decentralized conditions. Over 95% of the 1000 simulated transactions incurred a gas fee of less than ETH 0.001. The proposed architecture is also scalable and modular, providing a foundation for future deployment with live IoT integrations and off-chain data storage. Overall, the results highlight the system’s ability to improve transparency and auditability, automate enforcement, and enhance consumer confidence in the origin and handling of coffee products. Full article
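The contract's core flow (user registration, batch logging, ownership transfer) can be imitated off-chain for intuition. The sketch below is a minimal in-memory Python stand-in for that logic, with a hash-chained event list standing in for the ledger; all names and fields are hypothetical illustrations, not the paper's actual Solidity interface.

```python
import hashlib

class CoffeeBatchLedger:
    """Toy off-chain stand-in for the contract's register/log/transfer flow."""

    def __init__(self):
        self.users, self.batches, self.events = set(), {}, []

    def register_user(self, user_id):
        self.users.add(user_id)

    def log_batch(self, batch_id, owner, origin, weight_kg):
        if owner not in self.users:
            raise ValueError("owner must be registered")
        self.batches[batch_id] = {"owner": owner, "origin": origin,
                                  "weight_kg": weight_kg}
        self._append("LOG", batch_id)

    def transfer(self, batch_id, new_owner):
        if new_owner not in self.users:
            raise ValueError("new owner must be registered")
        self.batches[batch_id]["owner"] = new_owner
        self._append("TRANSFER", batch_id)

    def _append(self, kind, batch_id):
        # chain each event to the previous one, like blocks on a ledger
        prev = self.events[-1]["digest"] if self.events else "0" * 64
        digest = hashlib.sha256(f"{prev}|{kind}|{batch_id}".encode()).hexdigest()
        self.events.append({"kind": kind, "batch": batch_id, "digest": digest})

ledger = CoffeeBatchLedger()
ledger.register_user("farmer_a")
ledger.register_user("roaster_b")
ledger.log_batch("B001", "farmer_a", origin="Sidamo", weight_kg=60)
ledger.transfer("B001", "roaster_b")
```

On-chain, each of these calls would be a Sepolia transaction and the event chain would be the contract's log, which is where the gas-fee measurements above come from.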

35 pages, 3157 KiB  
Article
Federated Unlearning Framework for Digital Twin–Based Aviation Health Monitoring Under Sensor Drift and Data Corruption
by Igor Kabashkin
Electronics 2025, 14(15), 2968; https://doi.org/10.3390/electronics14152968 - 24 Jul 2025
Abstract
Ensuring data integrity and adaptability in aircraft health monitoring (AHM) is vital for safety-critical aviation systems. Traditional digital twin (DT) and federated learning (FL) frameworks, while effective in enabling distributed, privacy-preserving fault detection, lack mechanisms to remove the influence of corrupted or adversarial data once these have been integrated into global models. This paper proposes a novel FL–DT–FU framework that combines digital twin-based subsystem modeling, federated learning for collaborative training, and federated unlearning (FU) to support the post hoc correction of compromised model contributions. The architecture enables real-time monitoring through local DTs, secure model aggregation via FL, and targeted rollback using gradient subtraction, re-aggregation, or constrained retraining. A comprehensive simulation environment is developed to assess the impact of sensor drift, label noise, and adversarial updates across a federated fleet of aircraft. The experimental results demonstrate that FU methods restore up to 95% of model accuracy degraded by data corruption, significantly reducing false negative rates in early fault detection. The proposed system further supports auditability through cryptographic logging, aligning with aviation regulatory standards. This study establishes federated unlearning as a critical enabler for resilient, correctable, and trustworthy AI in next-generation AHM systems. Full article
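Of the rollback mechanisms named above, gradient subtraction is the simplest to illustrate. The toy Python sketch below treats models as flat weight lists and assumes, hypothetically, that the server retains each client's last averaged-in update; the paper's models are of course full weight tensors.

```python
# Minimal sketch of unlearning by gradient subtraction under FedAvg-style
# aggregation. Client names and values are illustrative.
def fed_avg(base, updates):
    """Apply the mean of the clients' updates to the base model."""
    n = len(updates)
    return [w + sum(u[i] for u in updates) / n for i, w in enumerate(base)]

def unlearn(model, bad_update, n_clients):
    """Roll back one client's averaged contribution from the aggregate."""
    return [w - b / n_clients for w, b in zip(model, bad_update)]

base = [0.0, 0.0]
updates = {"aircraft_1": [0.3, -0.1],
           "aircraft_2": [0.1, 0.5],
           "corrupted": [9.0, -9.0]}   # e.g. sensor drift poisoned this update
model = fed_avg(base, list(updates.values()))
model = unlearn(model, updates["corrupted"], n_clients=3)
# the corrupted client's contribution is now removed from the aggregate
```

Re-aggregation and constrained retraining trade more computation for a cleaner result than this one-shot subtraction.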
(This article belongs to the Special Issue Artificial Intelligence-Driven Emerging Applications)

22 pages, 557 KiB  
Article
Using Blockchain Ledgers to Record AI Decisions in IoT
by Vikram Kulothungan
IoT 2025, 6(3), 37; https://doi.org/10.3390/iot6030037 - 3 Jul 2025
Abstract
The rapid integration of AI into IoT systems has outpaced the ability to explain and audit automated decisions, resulting in a serious transparency gap. We address this challenge by proposing a blockchain-based framework to create immutable audit trails of AI-driven IoT decisions. In our approach, each AI inference, comprising key inputs, model ID, and output, is logged to a permissioned blockchain ledger, ensuring that every decision is traceable and auditable. IoT devices and edge gateways submit cryptographically signed decision records via smart contracts, resulting in an immutable, timestamped log that is tamper-resistant. This decentralized approach guarantees non-repudiation and data integrity while balancing transparency with privacy (e.g., hashing personal data on-chain) to meet data protection norms. Our design aligns with emerging regulations, such as the EU AI Act’s logging mandate and GDPR’s transparency requirements. We demonstrate the framework’s applicability in two domains: healthcare IoT (logging diagnostic AI alerts for accountability) and industrial IoT (tracking autonomous control actions), showing its generalizability to high-stakes environments. Our contributions include the following: (1) a novel architecture for AI decision provenance in IoT, (2) a blockchain-based design to securely record AI decision-making processes, and (3) a simulation-informed assessment of the approach’s feasibility based on projected metrics (throughput, latency, and storage). By providing a reliable, immutable audit trail for AI in IoT, our framework enhances transparency and trust in autonomous systems and offers a much-needed mechanism for auditable AI under increasing regulatory scrutiny. Full article
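The record structure described above (key inputs, model ID, output, signed and chained) can be sketched as follows. The Python below uses an HMAC as a stand-in for the device's cryptographic signature and hypothetical field names; in the framework itself, such records would be submitted to the permissioned chain via smart contracts.

```python
import hashlib, hmac, json

DEVICE_KEY = b"demo-signing-key"  # stand-in for a real device signing key

def log_decision(chain, inputs, model_id, output):
    """Append one AI inference as a signed, hash-chained record. Raw inputs
    are digested first, echoing the hash-personal-data-on-chain idea."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {
        "inputs_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "model_id": model_id,
        "output": output,
        "prev": prev,
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    body["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(body)

chain = []
log_decision(chain, {"hr": 142, "spo2": 91}, "sepsis-net-v3", "ALERT")
log_decision(chain, {"hr": 88, "spo2": 98}, "sepsis-net-v3", "OK")
```

Because each record embeds the previous record's hash, altering any earlier inference invalidates every later one.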
(This article belongs to the Special Issue Blockchain-Based Trusted IoT)

22 pages, 2434 KiB  
Article
Sylph: An Unsupervised APT Detection System Based on the Provenance Graph
by Kaida Jiang, Zihan Gao, Siyu Zhang and Futai Zou
Information 2025, 16(7), 566; https://doi.org/10.3390/info16070566 - 2 Jul 2025
Abstract
Traditional detection methods and security defenses are increasingly insufficient against evolving attack techniques and strategies, and suffer from coarse detection granularity and high memory overhead. We therefore propose Sylph, a lightweight unsupervised APT detection method based on a provenance graph, which not only detects APT attacks but also localizes them at a fine event granularity, feeding candidate attacks back to system detectors to reduce their localization burden. Sylph provides a whole-process architecture from provenance graph collection to anomaly detection: starting from the system audit logs, the provenance graph built from them is divided into subgraphs by time slices, which reduces memory occupation and improves detection efficiency. From the resulting subgraph sequence, whole-graph embeddings are computed with Graph2Vec to obtain feature vectors, and unsupervised anomaly detection is performed with an autoencoder, enabling the detection of previously unseen attack types. Experimental evaluation shows that Sylph detects APT attacks with high accuracy. Full article
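The time-slice partition step can be sketched in a few lines of Python; the Graph2Vec embedding and autoencoder stages are omitted, and the event tuples and names below are illustrative.

```python
from collections import defaultdict

def partition_by_time_slice(edges, slice_seconds):
    """Split provenance edges (src, dst, timestamp) into per-slice subgraphs,
    a simplified take on the memory-saving partition step."""
    slices = defaultdict(list)
    for src, dst, t in edges:
        slices[int(t // slice_seconds)].append((src, dst))
    return dict(slices)

# toy audit-log edges: (subject, object, seconds since trace start)
edges = [("bash", "curl", 1.0), ("curl", "/tmp/payload", 2.5),
         ("bash", "nc", 61.0), ("nc", "/etc/passwd", 65.0)]
subgraphs = partition_by_time_slice(edges, slice_seconds=60)
# each subgraph would then be embedded (Graph2Vec) and scored (autoencoder)
```

Working slice-by-slice means only one subgraph needs to be held and embedded at a time, which is where the memory saving comes from.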
(This article belongs to the Special Issue Emerging Research on Neural Networks and Anomaly Detection)

28 pages, 2298 KiB  
Article
Data-Driven Business Process Evaluation in Commercial Banks: Multi-Dimensional Framework with Hybrid Analytical Approaches
by Zaiwen Ni, Binqing Xiao and Yanying Li
Systems 2025, 13(4), 256; https://doi.org/10.3390/systems13040256 - 6 Apr 2025
Abstract
The efficiency and reliability of business processes in commercial banks are critical to financial stability and compliance. However, traditional evaluation methods that rely on retrospective qualitative assessments and static frameworks struggle to address the dynamic complexities inherent in modern banking operations. These approaches lack real-time monitoring, fail to leverage granular event log data, and overlook organizational interdependencies, hindering proactive risk management and optimization. To bridge these gaps, this study proposes a data-driven evaluation framework that integrates three core dimensions: efficiency, quality, and flexibility. We developed a hybrid analytical model by integrating process mining with DEMATEL-AHP to analyze a Chinese bank’s performance guarantee process, comparing pre- and post-centralization workflows. The analysis revealed that post-centralization processes exhibited improved flexibility but reductions in efficiency and quality. Moreover, the social network analysis highlighted structural shifts, including expanded audit participation and reduced departmental cohesion, contributing to inefficiencies. This study advances business process management by demonstrating that a data-driven process evaluation framework offers greater persuasiveness and methodological rigor than traditional qualitative approaches. Full article
(This article belongs to the Special Issue Data-Driven Methods in Business Process Management)

13 pages, 809 KiB  
Article
The Impact of Paratracheal Lymphadenectomy on Survival After Esophagectomy: A Nationwide Propensity Score Matched Analysis
by Eliza R. C. Hagens, B. Feike Kingma, Mark I. van Berge Henegouwen, Alicia S. Borggreve, Jelle P. Ruurda, Richard van Hillegersberg and Suzanne S. Gisbertz
Cancers 2025, 17(5), 888; https://doi.org/10.3390/cancers17050888 - 5 Mar 2025
Abstract
Purpose: To investigate the impact of paratracheal lymphadenectomy on survival in patients undergoing an esophagectomy for cancer. The secondary objective was to assess the effect on short-term outcomes. Methods: Between 2011 and 2017, patients with an esophageal or gastroesophageal junction carcinoma treated with elective transthoracic esophagectomy with two-field lymphadenectomy were included from the Dutch Upper Gastro-intestinal Cancer Audit registry. After 1:1 propensity score matching of patients with and without paratracheal lymphadenectomy within histologic subgroups, short-term outcomes and overall survival were compared between the two groups. Results: A total of 1154 patients with adenocarcinoma and 294 patients with squamous cell carcinoma were matched. Lymph node yield was significantly higher (22 versus 19 nodes, p < 0.001) in patients with paratracheal lymphadenectomy for both tumor types. Paratracheal lymphadenectomy was associated with more recurrent laryngeal nerve injury (10% versus 5%, p = 0.002) and chylothorax in patients with adenocarcinoma (10% versus 5%, p = 0.010), and with more anastomotic leakage in patients with squamous cell carcinoma (42% versus 27%, p = 0.014). For adenocarcinoma, the 3-year survival in patients with and without paratracheal lymphadenectomy was 58% versus 56%, and the 5-year survival was 48% in both groups (log-rank p = 0.578); for squamous cell carcinoma, the 3-year survival was 62% in both groups, and the 5-year survival was 57% versus 54% (log-rank p = 0.668). Conclusions: The addition of paratracheal lymphadenectomy significantly increased lymph node yield in transthoracic esophagectomy but did not improve survival for esophageal cancer patients in the current dataset, while postoperative morbidity increased in patients who underwent a paratracheal lymphadenectomy. Full article
(This article belongs to the Section Cancer Survivorship and Quality of Life)

19 pages, 2746 KiB  
Article
Decentralized and Secure Blockchain Solution for Tamper-Proof Logging Events
by J. D. Morillo Reina and T. J. Mateo Sanguino
Future Internet 2025, 17(3), 108; https://doi.org/10.3390/fi17030108 - 1 Mar 2025
Abstract
Log files are essential assets for IT engineers engaged in the security of server and computer systems. They provide crucial information for identifying malicious events, conducting cybersecurity incident analyses, performing audits and system maintenance, and ensuring compliance with security regulations. Nevertheless, there is still the possibility of deliberate data manipulation by an organization’s own personnel, especially with regard to system access and configuration changes, where error tracking or debugging traces are vital. To address the tampering of log files, this work proposes a solution that ensures data integrity, immutability, and non-repudiation through different blockchain-based public registry systems. This approach offers an additional layer of security through a decentralized, tamper-resistant ledger. To this end, this manuscript aims to provide a solid guideline for creating secure log storage systems. For this purpose, methodologies and experiments using two different blockchains are presented to demonstrate their effectiveness in various contexts, such as transactions with and without metadata. The findings suggest that Solana’s response times make it well suited for environments with moderately critical records requiring certification. In contrast, Cardano shows higher response times, making it suitable for less frequent events with metadata that requires legitimacy. Full article
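The integrity property this rests on can be illustrated without any blockchain: if each log line's digest is chained to its predecessor and the digests are anchored externally at write time, any later edit by an insider is detectable. A stdlib-only Python sketch follows; the actual work certifies records on Solana and Cardano, which this toy does not model.

```python
import hashlib

SEED = "0" * 64

def chain_digests(lines, seed=SEED):
    """Chained digest of each log line; in the paper's setting these (or a
    final anchor) would be published to a public blockchain."""
    digests, prev = [], seed
    for line in lines:
        prev = hashlib.sha256((prev + line).encode()).hexdigest()
        digests.append(prev)
    return digests

def first_tampered(lines, anchored, seed=SEED):
    """Index of the first line whose recomputed digest no longer matches the
    anchored one, or None if the log is intact."""
    prev = seed
    for i, line in enumerate(lines):
        prev = hashlib.sha256((prev + line).encode()).hexdigest()
        if prev != anchored[i]:
            return i
    return None

log = ["user=admin action=login", "user=admin action=config_change"]
anchored = chain_digests(log)        # certified externally at write time
log[1] = "user=admin action=noop"    # later, an insider rewrites the entry
```

Anchoring only the final digest trades per-line auditability for far fewer (and cheaper) on-chain transactions, which is where the Solana/Cardano response-time comparison matters.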
(This article belongs to the Special Issue Future Directions in Blockchain Technologies)

22 pages, 16196 KiB  
Article
A Study on a Scenario-Based Security Incident Prediction System for Cybersecurity
by Yong-Joon Lee
Appl. Sci. 2024, 14(24), 11836; https://doi.org/10.3390/app142411836 - 18 Dec 2024
Abstract
In the 4th industrial era, the proliferation of interconnected smart devices and advancements in AI, particularly big data and machine learning, have integrated various industrial domains into cyberspace. This convergence brings novel security threats, making it essential to prevent known incidents and anticipate potential breaches. This study develops a scenario-based evaluation system to predict and evaluate possible security incidents using the MITRE ATT&CK framework. It analyzes various security incidents, leveraging attack strategies and techniques to create detailed security scenarios and profiling services. Key contributions include integrating security logs, quantifying incident likelihood, and establishing proactive threat management measures. The study also proposes automated security audits and legacy system integration to enhance security posture. Experimental results show the system’s efficacy in detecting and preventing threats, providing actionable insights and a structured approach to threat analysis and response. This research lays the foundation for advanced security prediction systems, ensuring robust defense mechanisms against emerging cyber threats. Full article

32 pages, 5273 KiB  
Article
Forensic Investigation Capabilities of Microsoft Azure: A Comprehensive Analysis and Its Significance in Advancing Cloud Cyber Forensics
by Zlatan Morić, Vedran Dakić, Ana Kapulica and Damir Regvart
Electronics 2024, 13(22), 4546; https://doi.org/10.3390/electronics13224546 - 19 Nov 2024
Abstract
This article delves into Microsoft Azure’s cyber forensic capabilities, focusing on the unique challenges in cloud security incident investigation. Cloud services are growing in popularity, and Azure’s shared responsibility model, multi-tenant nature, and dynamically scalable resources offer unique advantages and complexities for digital forensics. These factors complicate forensic evidence collection, preservation, and analysis. Data collection, logging, and virtual machine analysis are covered, considering physical infrastructure restrictions and cloud data transience. It evaluates Azure-native and third-party forensic tools and recommends methods that ensure effective investigations while adhering to legal and regulatory standards. It also describes how AI and machine learning automate data analysis in forensic investigations, improving speed and accuracy. This integration advances cyber forensic methods and sets new standards for future innovations. Unified Audit Logs (UALs) in Azure are examined, focusing on how Azure Data Explorer and Kusto Query Language (KQL) can effectively parse and query large datasets and unstructured data to detect sophisticated cyber threats. The findings provide a framework for other organizations to improve forensic analysis, advancing cloud cyber forensics while bridging theoretical practices and practical applications, enhancing organizations’ ability to combat increasingly sophisticated cybercrime. Full article
(This article belongs to the Special Issue Artificial Intelligence and Database Security)

28 pages, 1659 KiB  
Article
DPTracer: Integrating Log-Driven Accountability into Data Provision Networks
by JongHyup Lee
Appl. Sci. 2024, 14(18), 8503; https://doi.org/10.3390/app14188503 - 20 Sep 2024
Abstract
Emerging applications such as blockchain, autonomous vehicles, healthcare, federated learning, self-consistent large language models (LLMs), and multi-agent LLMs increasingly rely on the reliable acquisition and provision of data from external sources. Multi-component networks that supply data to these applications are defined as data provision networks (DPNs) and prioritize accuracy and reliability over delivery efficiency. However, the effectiveness of DPN security mechanisms, such as self-correction, is limited without a fine-grained log of node activities. This paper presents DPTracer, a novel logging system designed for DPNs that uses tamper-evident logging to address the challenge of maintaining a reliable log in the untrusted environments typical of DPNs. By integrating logging and validation into the data provisioning process, DPTracer ensures comprehensive logs and continuous auditing. Our system uses a Process Tree data structure to store log records and generate proofs. This structure permits validating node activities and reconstructing historical data provision processes, which are crucial for self-correction and for verifying data sufficiency before results are finalized. We evaluate the overheads introduced by DPTracer in terms of computation, memory, storage, and communication. The results demonstrate that DPTracer incurs reasonable overheads, making it practical for real-world applications. Despite these overheads, DPTracer enhances security by protecting DPNs from post-process and in-process tampering. Full article
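The proof side of tamper-evident logging can be illustrated with a plain binary Merkle tree, used here only as a simplified stand-in for the paper's Process Tree: folding records into one root digest makes any later modification of a record evident. Record strings are hypothetical.

```python
import hashlib

def sha(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(records):
    """Fold log records into a single tamper-evident root digest."""
    level = [sha(r.encode()) for r in records]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node if odd
            level.append(level[-1])
        level = [sha(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()

records = ["node3:fetch sensor_feed", "node1:aggregate", "node2:validate"]
root = merkle_root(records)
```

Auditors who hold only the root can verify membership proofs of logarithmic size, which keeps the communication overhead modest even for long activity logs.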
(This article belongs to the Section Computing and Artificial Intelligence)

18 pages, 1632 KiB  
Article
ConGraph: Advanced Persistent Threat Detection Method Based on Provenance Graph Combined with Process Context in Cyber-Physical System Environment
by Linrui Li and Wen Chen
Electronics 2024, 13(5), 945; https://doi.org/10.3390/electronics13050945 - 29 Feb 2024
Abstract
With the wide use of Cyber-Physical Systems (CPS) in many applications, the targets of advanced persistent threats (APTs) have been extended to the IoT and industrial control systems. Provenance graph analysis based on system audit logs has become a promising way for APT detection and investigation. However, existing provenance-based APT detection systems lack process-context information at system runtime, which seriously limits detection performance. In this paper, we propose ConGraph, an approach for detecting APT attacks using provenance graphs combined with process context. A dedicated module collects file access behavior, network access behavior, and interactive relationship features of processes to enrich the semantic information of the provenance graph; to our knowledge, this is the first time a provenance graph has been combined with multiple kinds of process-context information to improve APT detection performance. ConGraph extracts process activity features from the provenance graphs and submits them to a CNN-BiLSTM model to detect underlying APT activities. Compared with state-of-the-art models, our model raised the average precision rate, recall rate, and F1 score by 13.12%, 25.61%, and 24.28%, respectively. Full article
(This article belongs to the Special Issue Digital Security and Privacy Protection: Trends and Applications)

20 pages, 1618 KiB  
Article
Leveraging Artificial Intelligence and Provenance Blockchain Framework to Mitigate Risks in Cloud Manufacturing in Industry 4.0
by Mifta Ahmed Umer, Elefelious Getachew Belay and Luis Borges Gouveia
Electronics 2024, 13(3), 660; https://doi.org/10.3390/electronics13030660 - 5 Feb 2024
Abstract
Cloud manufacturing is an evolving networked framework that enables multiple manufacturers to collaborate in providing a range of services, including design, development, production, and post-sales support. The framework operates on an integrated platform encompassing a range of Industry 4.0 technologies, such as Industrial Internet of Things (IIoT) devices, cloud computing, Internet communication, big data analytics, artificial intelligence, and blockchains. The connectivity of industrial equipment and robots to the Internet exposes cloud manufacturing to the massive attack risk of cybersecurity and cyber crime threats caused by external and internal attackers. The impacts can be severe because the physical infrastructure of industries is at stake. One potential method to deter such attacks involves utilizing blockchain and artificial intelligence to track the provenance of IIoT devices. This research explores a practical approach to achieve this by gathering provenance data associated with operational constraints defined in smart contracts and identifying deviations from these constraints through predictive auditing using artificial intelligence. A software architecture spanning IIoT communications to machine learning, which compares the latest data with predictive auditing outcomes and logs the appropriate risks, was designed, developed, and tested. The state changes in the ledger of the smart contracts were linked with the risks so that the blockchain peers can detect high deviations and take action in a timely manner. The research defined constraints related to physical boundaries and weightlifting limits allocated to three forklifts and showcased the mechanisms of detecting the risks of breaking these constraints with the help of artificial intelligence. It also demonstrated state change rejections by blockchains at medium and high risk levels. This study used software development in Java 8 with JDK 8, the CORDA blockchain framework, and the Weka package for random forest machine learning. As a result, the model, along with its design and implementation, has the potential to enhance efficiency and productivity, foster greater trust and transparency in the manufacturing process, boost risk management, strengthen cybersecurity, and advance sustainability efforts. Full article
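The risk-grading step can be sketched as comparing an observed reading against the ML forecast and the smart contract's constraint. The thresholds and the forklift weight example below are hypothetical, chosen only to echo the paper's medium/high risk levels; the actual system derives forecasts from a Weka random forest.

```python
def risk_level(observed, predicted, limit):
    """Grade one forklift reading against its forecast and hard constraint.
    Threshold values are illustrative, not the paper's."""
    if observed > limit:
        return "high"                  # hard constraint broken outright
    deviation = abs(observed - predicted) / limit
    if deviation > 0.5:
        return "high"
    if deviation > 0.2:
        return "medium"                # per the paper, medium/high levels
    return "low"                       # trigger ledger state change rejection

# forklift with an (illustrative) 1000 kg weightlifting limit
over_limit = risk_level(1200, 900, limit=1000)   # exceeds the hard limit
drifting = risk_level(650, 900, limit=1000)      # 25% off the forecast
nominal = risk_level(920, 900, limit=1000)       # close to the forecast
```

In the described architecture, blockchain peers would reject state changes whose attached risk comes out medium or high.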
(This article belongs to the Special Issue Advances in IoT Security)

21 pages, 7974 KiB  
Article
ProvGRP: A Context-Aware Provenance Graph Reduction and Partition Approach for Facilitating Attack Investigation
by Jiawei Li, Ru Zhang and Jianyi Liu
Electronics 2024, 13(1), 100; https://doi.org/10.3390/electronics13010100 - 25 Dec 2023
Abstract
Attack investigation is a crucial technique in proactively defending against sophisticated attacks. Its purpose is to identify attack entry points and previously unknown attack traces through comprehensive analysis of audit data. However, a major challenge arises from the vast and redundant nature of audit logs, making attack investigation difficult and prohibitively expensive. To address this challenge, various technologies have been proposed to reduce audit data and facilitate efficient analysis. However, most of these techniques rely on predefined templates without considering the rich context information of events. Moreover, these methods fail to remove false dependencies caused by the coarse-grained nature of logs. To address these limitations, this paper proposes ProvGRP, a context-aware provenance graph reduction and partition approach for facilitating attack investigation. Specifically, three features are proposed to determine, along multiple dimensions, whether system events belong to the same behavior. Based on the insight that information paths belonging to the same high-level behavior share similar information flow patterns, ProvGRP generates information paths containing context, and identifies and merges paths that share similar flow patterns. Experimental results show that ProvGRP efficiently reduces provenance graphs with minimal loss of crucial information, thereby facilitating attack investigation in terms of runtime and results. Full article
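The merging insight can be sketched by signing each information path with its node-type flow pattern and collapsing duplicates; the paper's actual similarity features are considerably richer than this coarse signature, and the paths below are illustrative.

```python
from collections import defaultdict

def merge_similar_paths(paths):
    """Group paths by node-type flow pattern; keep one representative each."""
    groups = defaultdict(list)
    for path in paths:
        signature = tuple(node_type for node_type, _name in path)
        groups[signature].append(path)
    return [candidates[0] for candidates in groups.values()]

# toy information paths as (node_type, name) hops
paths = [
    [("process", "bash"), ("file", "/tmp/a"), ("socket", "10.0.0.5")],
    [("process", "bash"), ("file", "/tmp/b"), ("socket", "10.0.0.9")],
    [("process", "sshd"), ("process", "bash")],
]
reduced = merge_similar_paths(paths)   # two flow patterns survive
```

The first two paths share the process-file-socket pattern and collapse into one, which is the kind of reduction that keeps investigation-relevant structure while shrinking the graph.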
(This article belongs to the Special Issue Data Security and Privacy: Challenges and Techniques)

14 pages, 3126 KiB  
Article
ConLBS: An Attack Investigation Approach Using Contrastive Learning with Behavior Sequence
by Jiawei Li, Ru Zhang and Jianyi Liu
Sensors 2023, 23(24), 9881; https://doi.org/10.3390/s23249881 - 17 Dec 2023
Abstract
Attack investigation is an important research field in forensic analysis. Many existing supervised attack investigation methods rely on well-labeled data for effective training. While unsupervised approaches based on BERT can mitigate this issue, the high degree of similarity between certain real-world attacks and normal behaviors makes it challenging to accurately identify disguised attacks. This paper proposes ConLBS, an attack investigation approach that combines a contrastive learning framework with a multi-layer transformer network to classify behavior sequences. Specifically, ConLBS constructs behavior sequences describing behavior patterns from audit logs, and a novel lemmatization strategy is proposed to map the semantics to the attack pattern layer. Four different augmentation strategies are explored to enhance the differentiation between attack and normal behavior sequences. Moreover, ConLBS can perform unsupervised representation learning on unlabeled sequences and can be trained in either a supervised or unsupervised manner depending on the availability of labeled data. The performance of ConLBS is evaluated on two public datasets. The results show that ConLBS can effectively identify attack behavior sequences with unlabeled or sparsely labeled data to realize attack investigation, achieving superior effectiveness compared to existing methods and models. Full article
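The augmentation step can be sketched as producing two randomized "views" of one behavior sequence, the positive pair used in contrastive training. Masking and cropping below are illustrative stand-ins; the abstract does not spell out which four strategies the paper uses.

```python
import random

def mask_tokens(seq, p, rng):
    """Randomly replace events with a [MASK] placeholder."""
    return [tok if rng.random() > p else "[MASK]" for tok in seq]

def crop(seq, keep, rng):
    """Keep a random contiguous window of the sequence."""
    start = rng.randrange(0, len(seq) - keep + 1)
    return seq[start:start + keep]

def two_views(seq, rng):
    """Two augmented views of one behavior sequence (a positive pair)."""
    return mask_tokens(seq, 0.3, rng), crop(seq, len(seq) - 1, rng)

rng = random.Random(0)
seq = ["open", "read", "connect", "send", "close"]
view_a, view_b = two_views(seq, rng)
```

A contrastive loss would then pull the embeddings of `view_a` and `view_b` together while pushing views of other sequences apart, sharpening the boundary between disguised attacks and normal behavior.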
(This article belongs to the Section Sensor Networks)

10 pages, 1264 KiB  
Article
Association of MRI Volume Parameters in Predicting Patient Outcome at Time of Initial Diagnosis of Glioblastoma
by Kin Sing Lau, Isidoro Ruisi and Michael Back
Brain Sci. 2023, 13(11), 1579; https://doi.org/10.3390/brainsci13111579 - 10 Nov 2023
Viewed by 1584
Abstract
Purpose: Patients with glioblastoma (GBM) may demonstrate varying patterns of infiltration and relapse. Improving the ability to predict these patterns may influence the management strategies at the time of initial diagnosis. This study aims to examine the impact of the ratio (T2/T1) of [...] Read more.
Purpose: Patients with glioblastoma (GBM) may demonstrate varying patterns of infiltration and relapse. Improving the ability to predict these patterns may influence management strategies at the time of initial diagnosis. This study examines the impact on patient outcome of the ratio (T2/T1) of the non-enhancing volume on T2-weighted images (T2) to the enhancing volume on T1-weighted gadolinium-enhanced MRI (T1gad). Methods and Materials: A retrospective audit was performed from established prospective databases of patients managed consecutively with radiation therapy (RT) for GBM between 2016 and 2019. Patient, tumour and treatment-related factors were assessed in relation to outcome. Volumetric data from the initial diagnostic MRI were obtained via manual segmentation of the T1gad and T2 abnormalities, and a T2/T1 ratio was calculated from these volumes. The initial relapse site was assessed on MRI in relation to the site of the original T1gad volume and surgical cavity. The major endpoints were median relapse-free survival (RFS) from the date of diagnosis and site of initial relapse (defined as either local, at the initial surgical site, or distant, more than 20 mm from the initial T1gad abnormality). The analysis tested associations with known prognostic factors as well as the radiological factors using log-rank tests for subgroup comparisons, with correction for multiple comparisons. Results: One hundred and seventy-seven patients with GBM were managed consecutively with RT between 2016 and 2019 and were eligible for the analysis. The median age was 62 years. Seventy-four percent were managed under a 60 Gy (Stupp) protocol, whilst 26% were on a 40 Gy (Elderly) protocol. Major neuroanatomical subsites were lateral temporal (18%), anterior temporal (13%) and medial frontal (10%). Median volumes on T1gad and T2 were 20 cm3 (q1–3: 8–43) and 37 cm3 (q1–3: 17–70), respectively. The median T2/T1 ratio was 2.1.
For the whole cohort, the median OS was 16.0 months (95% CI: 14.1–18.0). One hundred and forty-eight patients relapsed, with a median RFS of 11.4 months (95% CI: 10.4–12.5). A component of distant relapse was evident in 43.9% of relapses, with 23.6% being isolated distant relapse. Better ECOG performance status (p = 0.007), greater extent of resection (p = 0.020), MGMT methylation (p < 0.001) and 60 Gy RT dose (p = 0.050) were associated with improved RFS. Although the continuous variables of initial T1gad volume (p = 0.39) and T2 volume (p = 0.23) were not associated with RFS, the lowest T2/T1 quartile (reflecting a relatively lower T2 volume compared to T1gad volume) was significantly associated with improved RFS (p = 0.016) compared with the highest quartile. The lowest T2/T1 ratio quartile was also associated with a lower risk of distant relapse (p = 0.031). Conclusion: In patients diagnosed with GBM, volumetric parameters of the diagnostic MRI, in particular the ratio of the T2 to T1gad abnormality, may assist in predicting relapse-free survival and patterns of relapse. A further understanding of these relationships has the potential to impact the design of future radiation therapy target volume delineation protocols. Full article
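The study's central radiological variable, the T2/T1 ratio and its quartile grouping, is simple to compute once per-patient volumes exist. A minimal sketch follows; the volumes below are invented placeholders, not study data.

```python
import numpy as np

# Hypothetical per-patient volumes in cm^3 (illustrative placeholders only;
# the study's values come from manual segmentation of the diagnostic MRI).
t1gad = np.array([8.0, 20.0, 43.0, 12.0, 30.0, 25.0, 5.0, 15.0])
t2    = np.array([17.0, 37.0, 70.0, 40.0, 45.0, 60.0, 9.0, 50.0])

ratio = t2 / t1gad  # T2/T1 ratio as defined in the study

# Assign ratio quartiles (1 = lowest, 4 = highest); the paper compares
# relapse-free survival between the lowest and highest quartile groups.
cuts = np.quantile(ratio, [0.25, 0.5, 0.75])
quartile = np.digitize(ratio, cuts) + 1

for r, q in sorted(zip(ratio, quartile)):
    print(f"T2/T1 = {r:5.2f} -> quartile {q}")
```

In the study itself these quartile groups would then feed a survival comparison (log-rank tests with multiple-comparison correction), which requires a survival-analysis library rather than the plain NumPy used here.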
(This article belongs to the Section Neuro-oncology)
