Search Results (975)

Search Parameters:
Keywords = vulnerability classification

16 pages, 255 KB  
Article
Antioxidant Capacity of Colostrum of Mothers with Gestational Diabetes Mellitus—A Cross-Sectional Study
by Paulina Gaweł, Karolina Karcz, Natalia Zaręba-Wdowiak and Barbara Królak-Olejnik
Nutrients 2025, 17(21), 3324; https://doi.org/10.3390/nu17213324 - 22 Oct 2025
Abstract
Background: Women with gestational diabetes mellitus (GDM) are vulnerable to oxidative stress, yet limited data exist on the antioxidant potential of their breast milk. This study aimed to evaluate the antioxidant capacity and basic composition of colostrum in women with GDM compared to healthy controls, focusing on total antioxidant capacity (TAC) and enzymatic antioxidants: superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GPx). Methods: The study included 77 lactating mothers: 56 with gestational diabetes (15 managed with diet/exercise—GDM G1; 41 required insulin—GDM G2) and 21 healthy controls. Colostrum samples were collected on days 3–5 postpartum and analyzed for macronutrients and antioxidant enzymes. To enable comparisons across study groups and to explore associations with maternal characteristics, a range of statistical methods was applied. A taxonomic (classification) analysis was then performed using the predictors that best fit the data: study group membership, maternal hypothyroidism history (from the medical interview), and gestational weight gain. Results: TAC was significantly lower in the GDM G2 group compared to GDM G1 and controls (p = 0.001), with no differences in enzymatic antioxidants. The control group had the highest energy (p = 0.048) and dry matter content (p = 0.015), while protein, fat, and carbohydrate levels did not differ significantly. After dividing the study group into four clusters, based on maternal health factors, including GDM status and thyroid function, TAC levels differed significantly between clusters, with the highest values observed in Cluster 3 (healthy controls without thyroid dysfunction) and the lowest in Cluster 2 (GDM and hypothyroidism). Analysis of colostrum composition revealed significant differences in energy content (p = 0.047) and dry matter concentration (p = 0.011), while no significant differences were found in other macronutrients. Conclusions: Our findings suggest that maternal metabolic and endocrine conditions, such as GDM and thyroid dysfunction, may differentially influence the nutritional and functional properties of colostrum—particularly its antioxidant potential. Full article
(This article belongs to the Special Issue Maternal and Child Nutrition: From Pregnancy to Early Life)
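
The "taxonomic (classification) analysis" above is essentially a cluster analysis on the three reported predictors, followed by a between-cluster comparison of TAC. A minimal sketch of that idea, assuming k-means with four clusters and a non-parametric test; the algorithm choice, variable names, and data are illustrative assumptions, not the authors' procedure.

```python
# Minimal sketch of a cluster ("taxonomic") analysis on the three predictors
# named in the abstract; the exact algorithm and variable names are assumptions.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from scipy.stats import kruskal

# Hypothetical data frame: one row per mother.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group":          rng.choice([0, 1, 2], size=77),   # control / GDM G1 / GDM G2
    "hypothyroidism": rng.choice([0, 1], size=77),
    "weight_gain_kg": rng.normal(12, 4, size=77),
    "tac":            rng.normal(1.0, 0.2, size=77),    # total antioxidant capacity
})

X = StandardScaler().fit_transform(df[["group", "hypothyroidism", "weight_gain_kg"]])
df["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Compare TAC across the four clusters (non-parametric, unequal group sizes).
groups = [g["tac"].values for _, g in df.groupby("cluster")]
print(kruskal(*groups))
```
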
23 pages, 3864 KB  
Article
Lightweight and Accurate Deep Learning for Strawberry Leaf Disease Recognition: An Interpretable Approach
by Raquel Ochoa-Ornelas, Alberto Gudiño-Ochoa, Ansel Y. Rodríguez González, Leonardo Trujillo, Daniel Fajardo-Delgado and Karla Liliana Puga-Nathal
AgriEngineering 2025, 7(10), 355; https://doi.org/10.3390/agriengineering7100355 - 21 Oct 2025
Abstract
Background/Objectives: Strawberry crops are vulnerable to fungal diseases that severely affect yield and quality. Deep learning has shown strong potential for plant disease recognition; however, most architectures rely on tens of millions of parameters, limiting their use in low-resource agricultural settings. This study presents Light-MobileBerryNet, a lightweight and interpretable model designed to achieve accurate strawberry disease classification while remaining computationally efficient for potential use on mobile and edge devices. Methods: The model, inspired by MobileNetV3-Small, integrates inverted residual blocks, depthwise separable convolutions, squeeze-and-excitation modules, and Swish activation to enhance efficiency. A publicly available dataset was processed using CLAHE and data augmentation, and split into training, validation, and test subsets under consistent conditions. Performance was benchmarked against state-of-the-art CNNs. Results: Light-MobileBerryNet achieved 96.6% accuracy, precision, recall, and F1-score, with a Matthews correlation coefficient of 0.96, while requiring fewer than one million parameters (~2 MB). Grad-CAM confirmed that predictions focused on biologically relevant lesion regions. Conclusions: Light-MobileBerryNet approaches state-of-the-art performance with a fraction of the computational cost, providing a practical and interpretable solution for precision agriculture. Full article
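
A hedged sketch of the building blocks this abstract names (inverted residual block, depthwise separable convolution, squeeze-and-excitation, Swish); the actual channel widths and layout of Light-MobileBerryNet are not given here and are assumptions.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1), nn.SiLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))

class InvertedResidual(nn.Module):
    def __init__(self, in_ch, out_ch, expand=4, stride=1):
        super().__init__()
        mid = in_ch * expand
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.SiLU(),   # expand
            nn.Conv2d(mid, mid, 3, stride, 1, groups=mid, bias=False),              # depthwise
            nn.BatchNorm2d(mid), nn.SiLU(),
            SqueezeExcite(mid),                                                     # channel attention
            nn.Conv2d(mid, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),          # project
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_skip else y

block = InvertedResidual(16, 16)
print(block(torch.randn(1, 16, 64, 64)).shape)  # torch.Size([1, 16, 64, 64])
```

nn.SiLU is the Swish activation; stacking a handful of such blocks is what keeps the parameter count below one million.
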

25 pages, 11762 KB  
Article
AI-RiskX: An Explainable Deep Learning Approach for Identifying At-Risk Patients During Pandemics
by Nada Zendaoui, Nardjes Bouchemal, Mohamed Rafik Aymene Berkani, Mounira Bouzahzah, Saad Harous and Naila Bouchemal
Bioengineering 2025, 12(10), 1127; https://doi.org/10.3390/bioengineering12101127 - 21 Oct 2025
Abstract
Pandemics place extraordinary pressure on healthcare systems, particularly in identifying and prioritizing high-risk groups such as the elderly, pregnant women, and individuals with chronic diseases. Existing Artificial Intelligence models often fall short, focusing on single diseases, lacking interpretability, and overlooking patient-specific vulnerabilities. To address these gaps, we propose an Explainable Deep Learning approach for identifying at-risk patients during pandemics (AI-RiskX). AI-RiskX performs risk classification of patients diagnosed with COVID-19 or related infections to support timely intervention and resource allocation. Unlike previous models, AI-RiskX integrates five public datasets (asthma, diabetes, heart, kidney, and thyroid), employs the Synthetic Minority Over-sampling Technique (SMOTE) for class balancing, and uses a hybrid convolutional neural network–long short-term memory model (CNN–LSTM) for robust disease classification. SHAP (SHapley Additive exPlanations) enables both individual and population-level interpretability, while a post-prediction rule-based module stratifies patients by age and pregnancy status. Achieving 98.78% accuracy, AI-RiskX offers a scalable, interpretable solution for equitable classification and decision support in public health emergencies. Full article
(This article belongs to the Section Biosignal Processing)
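
A minimal sketch of the two modelling steps named above: SMOTE class balancing followed by a hybrid CNN-LSTM classifier. Layer sizes, feature layout, and data are illustrative assumptions, not the AI-RiskX configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.random((500, 24)).astype("float32")          # hypothetical tabular health features
y = rng.choice([0, 1], size=500, p=[0.9, 0.1])        # imbalanced risk labels
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

class CNNLSTM(nn.Module):
    def __init__(self, n_features, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, n_features)
        h = self.conv(x.unsqueeze(1))     # (batch, 16, n_features)
        _, (hn, _) = self.lstm(h.transpose(1, 2))
        return self.head(hn[-1])

model = CNNLSTM(n_features=X_bal.shape[1])
logits = model(torch.tensor(X_bal[:8], dtype=torch.float32))
print(logits.shape)                       # torch.Size([8, 2])
```

The paper's rule-based stratification by age and pregnancy status would then be applied after these predictions.
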

14 pages, 389 KB  
Article
A Similarity Measure for Linking CoinJoin Output Spenders
by Michael Herbert Ziegler, Mariusz Nowostawski and Basel Katt
J. Cybersecur. Priv. 2025, 5(4), 88; https://doi.org/10.3390/jcp5040088 - 18 Oct 2025
Abstract
This paper introduces a novel similarity measure to link transactions that spend outputs of CoinJoin transactions, termed CoinJoin Spending Transactions (CSTs), by analyzing their on-chain properties, addressing the challenge of preserving user privacy in blockchain systems. Despite the adoption of privacy-enhancing techniques like CoinJoin, users remain vulnerable to transaction linkage through shared output patterns. The proposed method leverages timestamp analysis of mixed outputs and employs a one-sided Chamfer distance to quantify similarities between CSTs, enabling the identification of transactions associated with the same user. The approach is evaluated across three major CoinJoin implementations (Dash, Whirlpool, and Wasabi 2.0), demonstrating its effectiveness in detecting linked CSTs. Additionally, the work improves transaction classification rules for Wasabi 2.0 by introducing criteria for uncommon denomination outputs, reducing false positives. Results show that multiple CSTs spending shared CoinJoin outputs are prevalent, highlighting the practical significance of the similarity measure. The findings underscore the ongoing privacy risks posed by transaction linkage, even within privacy-focused protocols. This work contributes to the understanding of CoinJoin’s limitations and offers insights for developing more robust privacy mechanisms in decentralized systems. To the authors’ knowledge, this is the first work analyzing the linkage between CSTs. Full article
(This article belongs to the Section Privacy)
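
The one-sided Chamfer distance mentioned above is the mean distance from each element of one set to its nearest neighbour in the other. A minimal sketch over transaction timestamps; the exact normalisation and the example values are assumptions, not the paper's data.

```python
import numpy as np

def one_sided_chamfer(a, b):
    """Mean distance from each timestamp in `a` to its nearest timestamp in `b`."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.mean([np.min(np.abs(b - t)) for t in a])

# Hypothetical example: block heights at which the mixed outputs spent by two
# candidate CSTs were created.
cst_1 = [810_002, 810_015, 810_020]
cst_2 = [810_003, 810_014, 810_022]
print(f"one-sided Chamfer distance: {one_sided_chamfer(cst_1, cst_2):.2f}")  # smaller = more likely linked
```
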

20 pages, 1689 KB  
Article
Prediction of High-Risk Failures in Urban District Heating Pipelines Using KNN-Based Relabeling and AI Models
by Sungyeol Lee, Jaemo Kang, Jinyoung Kim and Myeongsik Kong
Appl. Sci. 2025, 15(20), 11104; https://doi.org/10.3390/app152011104 - 16 Oct 2025
Abstract
This study generated an AI (Artificial Intelligence)-based prediction model for identifying high-risk groups of failures in urban district heating pipelines using pipeline attribute information and historical failure records. A total of 324,495 records from normally operating pipelines and 2293 failure cases were collected. Because the dataset exhibited severe imbalance, a KNN (K Nearest Neighbors)-based similarity selection was applied to reclassify the top 10% of normal data most similar to failure cases as high-risk. Input variables for model development included pipe diameter, purpose, insulation level, year of burial, and burial environment, supplemented with derived variables to enhance predictive capacity. The dataset was trained using XGBoost (eXtreme Gradient Boosting) v3.0.2, LightGBM (Light Gradient-Boosting Machine) v4.5.0, and an ensemble model (XGBoost + LightGBM), and the performance metrics were compared. The XGBoost model (K = 2) achieved the best results, with an F2-score of 0.921 and an AUC of 0.993. Variable importance analysis indicated that year of burial, insulation level, and purpose were the most influential features, highlighting pipeline aging and insulation condition as key determinants of high-risk classification. The proposed approach enables prioritization of failure risk management and identification of vulnerable sections using only attribute data, even in situations where sensor installation and infrared thermography are limited. Future research should consider distance functions suitable for mixed variables, sensitivity to unit length, and SHAP (Shapley Additive exPlanations)-based interpretability analysis to further generalize the model and enhance its field applicability. Full article
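
A sketch of the KNN-based relabeling step described above: each normally operating record is scored by its distance to the K nearest failure records, and the closest 10% are reclassified as high-risk before training XGBoost. Feature encoding and the example data are assumptions; K = 2 follows the abstract's best-performing setting.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X_normal  = rng.normal(size=(5000, 5))   # diameter, purpose, insulation, burial year, environment (encoded)
X_failure = rng.normal(loc=0.5, size=(50, 5))

# Mean distance of each normal record to its K nearest failure cases (K = 2).
nn = NearestNeighbors(n_neighbors=2).fit(X_failure)
dist, _ = nn.kneighbors(X_normal)
risk_score = dist.mean(axis=1)

# Relabel the 10% of normal records most similar to failures as high-risk (1).
threshold = np.quantile(risk_score, 0.10)
y_normal = (risk_score <= threshold).astype(int)

X = np.vstack([X_normal, X_failure])
y = np.concatenate([y_normal, np.ones(len(X_failure), dtype=int)])
model = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X, y)
print(model.predict_proba(X[:3])[:, 1])  # predicted high-risk probability per pipeline segment
```
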

29 pages, 1829 KB  
Review
A Comprehensive Review of Cybersecurity Threats to Wireless Infocommunications in the Quantum-Age Cryptography
by Ivan Laktionov, Grygorii Diachenko, Dmytro Moroz and Iryna Getman
IoT 2025, 6(4), 61; https://doi.org/10.3390/iot6040061 - 16 Oct 2025
Abstract
The dynamic growth in the dependence of numerous industrial sectors, businesses, and critical infrastructure on infocommunication technologies necessitates the enhancement of their resilience to cyberattacks and radio-frequency threats. This article addresses a relevant scientific and applied issue, which is to formulate prospective directions for improving the effectiveness of cybersecurity approaches for infocommunication networks through a comparative analysis and logical synthesis of the state-of-the-art of applied research on cyber threats to the information security of mobile and satellite networks, including those related to the rapid development of quantum computing technologies. The article presents results on the systematisation of cyberattacks at the physical, signalling and cryptographic levels, as well as threats to cryptographic protocols and authentication systems. Particular attention is given to the prospects for implementing post-quantum cryptography, hybrid cryptographic models and the integration of threat detection mechanisms based on machine learning and artificial intelligence algorithms. The article proposes a classification of current threats according to architectural levels, analyses typical protocol vulnerabilities in next-generation mobile networks and satellite communications, and identifies key research gaps in existing cybersecurity approaches. Based on a critical analysis of scientific and applied literature, this article identifies key areas for future research. These include developing lightweight cryptographic algorithms, standardising post-quantum cryptographic models, creating adaptive cybersecurity frameworks and optimising protection mechanisms for resource-constrained devices within information and digital networks. Full article
(This article belongs to the Special Issue Cybersecurity in the Age of the Internet of Things)

14 pages, 967 KB  
Article
Spatial Variability of Rainfall and Vulnerability Assessment of Water Resources Infrastructure for Adaptive Management Implementation in Ceará, Brazil
by Gabriela de Azevedo Reis, Larissa Zaira Rafael Rolim, Ticiana Marinho de Carvalho Studart, Samiria Maria Oliveira da Silva, Francisco de Assis de Souza Filho and Maria Aparecida Melo Rocha
Sustainability 2025, 17(20), 9147; https://doi.org/10.3390/su17209147 - 15 Oct 2025
Abstract
Given that a robust water resource management strategy requires knowledge of natural and climatic factors as well as social and economic factors, we applied a variability and vulnerability assessment as a quantitative tool to characterize water resources in Ceará, Brazil. A methodological approach that identifies and quantifies variability and vulnerability would allow better solutions to management decision problems. This approach functions as an indicator-based framework separating areas with similar water availability and water resources infrastructure, indicating the influence of natural and anthropogenic factors on the area’s water resources. The assessment proceeded with the delimitation of the regions, classifying them according to rainfall amount and spatial variability. The Adaptive Capacity for Water Management Index (ACWM) was then evaluated using georeferenced water infrastructure information based on that classification. Most of the state’s area is subject to low (below-average) rainfall. Nonetheless, of the areas with low rainfall, 48% have high variability, and critical water infrastructure located within those areas supplies water to the state’s main industrial and most populous city. Acknowledging this characteristic can therefore complement current water management. Lastly, the authors provide recommendations based on coupling variability and vulnerability assessments with adaptive management to address improvements in the current water allocation system. Full article
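
A hedged sketch of the classification step: label each region by rainfall amount relative to the state average and by spatial variability (coefficient of variation across its gauges). The thresholds, column names, and data are assumptions, not the authors' definitions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical table: one row per rain gauge, grouped into regions.
gauges = pd.DataFrame({
    "region":  np.repeat(["A", "B", "C"], 20),
    "rain_mm": np.concatenate([rng.normal(m, s, 20)
                               for m, s in [(600, 60), (800, 240), (1100, 90)]]),
})

regions = gauges.groupby("region")["rain_mm"].agg(mean="mean", std="std")
regions["cv"] = regions["std"] / regions["mean"]          # spatial variability proxy
state_mean = gauges["rain_mm"].mean()

regions["rain_class"]  = np.where(regions["mean"] < state_mean, "low rainfall", "high rainfall")
regions["variability"] = np.where(regions["cv"] > 0.25, "high variability", "low variability")
print(regions[["rain_class", "variability"]])
```
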

14 pages, 482 KB  
Article
Diffusion-Based Model for Audio Steganography
by Ji Xi, Zhengwang Xia, Weiqi Zhang, Yue Xie and Li Zhao
Electronics 2025, 14(20), 4019; https://doi.org/10.3390/electronics14204019 - 14 Oct 2025
Abstract
Audio steganography exploits redundancies in the human auditory system to conceal secret information within cover audio, ensuring that the hidden data remains undetectable during normal listening. However, recent research shows that current audio steganography techniques are vulnerable to detection by deep learning-based steganalyzers, which analyze the high-dimensional features of stego audio for classification. While deep learning-based steganography has been extensively studied for image covers, its application to audio remains underexplored, particularly in achieving robust embedding and extraction with minimal perceptual distortion. We propose a diffusion-based audio steganography model comprising two primary modules: (i) a diffusion-based embedding module that autonomously integrates secret messages into cover audio while preserving high perceptual quality and (ii) a corresponding diffusion-based extraction module that accurately recovers the embedded data. The framework supports both pre-existing cover audio and the generation of high-quality steganographic cover audio with superior perceptual quality for message embedding. After training, the model achieves state-of-the-art performance in terms of embedding capacity and resistance to detection by deep learning steganalyzers. The experimental results demonstrate that our diffusion-based approach significantly outperforms existing methods across varying embedding rates, yielding stego audio with superior auditory quality and lower detectability. Full article
(This article belongs to the Section Artificial Intelligence)
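
The abstract does not detail the diffusion formulation; the following is only a minimal sketch of a standard DDPM-style forward process applied to a cover waveform, with the secret message passed as conditioning to a (stub) denoiser. Shapes, the noise schedule, and the conditioning scheme are assumptions.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def forward_noise(x0, t):
    """Closed-form q(x_t | x_0) = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps."""
    eps = torch.randn_like(x0)
    a = alphas_bar[t].sqrt().view(-1, 1)
    s = (1.0 - alphas_bar[t]).sqrt().view(-1, 1)
    return a * x0 + s * eps, eps

cover   = torch.randn(4, 16000)                      # 1 s of audio at 16 kHz (hypothetical)
message = torch.randint(0, 2, (4, 128)).float()      # 128-bit secret per clip
t       = torch.randint(0, T, (4,))
x_t, eps = forward_noise(cover, t)

# A real embedding module would train eps_theta(x_t, t, message) to predict eps
# while steering the reverse trajectory so the message remains recoverable;
# here eps_theta is only a placeholder.
eps_theta = lambda x_t, t, m: torch.zeros_like(x_t)
loss = torch.mean((eps - eps_theta(x_t, t, message)) ** 2)
print(x_t.shape, loss.item())
```
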

16 pages, 1026 KB  
Article
Multi-Criteria Evaluation of Bioavailable Trace Elements in Fine and Coarse Particulate Matter: Implications for Sustainable Air-Quality Management and Health Risk Assessment
by Elwira Zajusz-Zubek and Zygmunt Korban
Sustainability 2025, 17(20), 9045; https://doi.org/10.3390/su17209045 - 13 Oct 2025
Abstract
Bioavailable fractions of particulate-bound trace elements are key determinants of inhalation toxicity, yet air-quality assessments typically rely on total metal concentrations, which may underestimate health risks. This study integrates the exchangeable (F1) and reducible (F2) fractions of trace elements in fine (PM2.5) and coarse (PM10) particulate matter with multi-criteria decision-making (TOPSIS) and similarity-based classification (Czekanowski’s method). Archival weekly-integrated samples from the summer season were collected at eight industrially influenced sites in southern Poland. Sequential extraction (F1–F2) and ICP-MS were applied to quantify concentrations of cadmium, cobalt, chromium, nickel, and lead in PM2.5 and PM10. Aggregated hazard values were then derived with TOPSIS, and site similarity was explored using Czekanowski’s reordered distance matrices. Regulatory targets for cadmium and nickel, and the limit for lead in PM10 were not exceeded, but F1/F2 profiles revealed pronounced site-to-site differences in potential mobility that were not evident from total concentrations. Rankings were consistent across size fractions, with site P1 exhibiting the lowest hazard indices and P8 the highest, while mid-rank sites formed reproducible similarity clusters. The proposed chemical-fractionation and multivariate framework provides a reproducible screening tool for multi-element exposure, complementing compliance checks and supporting prioritisation of sites for targeted investigation and environmental management. In the sustainability context, bioavailability-based indicators strengthen air-quality assessment by linking monitoring data with health-relevant and cost-effective management strategies, supporting efficient resource allocation and reducing exposure in vulnerable populations. Full article
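
A minimal TOPSIS sketch over a site-by-element decision matrix of bioavailable (F1 + F2) concentrations, treating all criteria as cost criteria so that a low closeness coefficient indicates a high aggregated hazard. The weights and example values are illustrative assumptions, not the study's data.

```python
import numpy as np

# Rows: monitoring sites P1..P4; columns: Cd, Co, Cr, Ni, Pb (ng/m^3, hypothetical).
X = np.array([
    [0.2, 0.1, 1.0, 0.8, 3.0],
    [0.5, 0.2, 2.5, 1.5, 6.0],
    [0.3, 0.1, 1.8, 1.1, 4.2],
    [0.9, 0.4, 3.9, 2.4, 9.5],
])
w = np.full(X.shape[1], 1 / X.shape[1])        # equal weights

V = w * X / np.linalg.norm(X, axis=0)          # vector-normalised, weighted matrix
ideal, anti = V.min(axis=0), V.max(axis=0)     # lower concentration = better (cost criteria)
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)            # high closeness = low aggregated hazard

for site, c in zip(["P1", "P2", "P3", "P4"], closeness):
    print(site, round(c, 3))
```

Czekanowski's method would then reorder a site-to-site distance matrix computed from the same profiles to expose similarity clusters.
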

54 pages, 6893 KB  
Article
Automated OSINT Techniques for Digital Asset Discovery and Cyber Risk Assessment
by Tetiana Babenko, Kateryna Kolesnikova, Olga Abramkina and Yelizaveta Vitulyova
Computers 2025, 14(10), 430; https://doi.org/10.3390/computers14100430 - 9 Oct 2025
Abstract
Cyber threats are becoming increasingly sophisticated, especially in distributed infrastructures where systems are deeply interconnected. To address this, we developed a framework that automates how organizations discover their digital assets and assess which ones are the most at risk. The approach integrates diverse public information sources, including WHOIS records, DNS data, and SSL certificates, into a unified analysis pipeline without relying on intrusive probing. For risk scoring we applied Gradient Boosted Decision Trees, which proved more robust with messy real-world data than other models we tested. DBSCAN clustering was used to detect unusual exposure patterns across assets. In validation on organizational data, the framework achieved 93.3% accuracy in detecting known vulnerabilities and an F1-score of 0.92 for asset classification. More importantly, security teams spent about 58% less time on manual triage and false alarm handling. The system also demonstrated reasonable scalability, indicating that automated OSINT analysis can provide a practical and resource-efficient way for organizations to maintain visibility over their attack surface. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
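
A sketch of the two modelling steps the abstract names: a gradient-boosted tree classifier for per-asset risk scoring and DBSCAN for flagging unusual exposure patterns. The feature names are hypothetical stand-ins for attributes derived from WHOIS, DNS, and SSL data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
X = rng.normal(size=(800, 6))        # e.g. cert_age_days, open_ports, dns_records, subdomains, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=800) > 1).astype(int)

risk_model = GradientBoostingClassifier(n_estimators=200).fit(X, y)
risk_scores = risk_model.predict_proba(X)[:, 1]           # vulnerability likelihood per asset

labels = DBSCAN(eps=0.9, min_samples=10).fit_predict(StandardScaler().fit_transform(X))
outliers = np.flatnonzero(labels == -1)                   # assets with unusual exposure patterns
print(f"{len(outliers)} anomalous assets; top risk score {risk_scores.max():.2f}")
```
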

18 pages, 1723 KB  
Article
Sensor Placement for the Classification of Multiple Failure Types in Urban Water Distribution Networks
by Utsav Parajuli, Binod Ale Magar, Amrit Babu Ghimire and Sangmin Shin
Urban Sci. 2025, 9(10), 413; https://doi.org/10.3390/urbansci9100413 - 7 Oct 2025
Abstract
Urban water distribution networks (WDNs) are increasingly vulnerable to diverse disruptions, including pipe leaks/bursts and cyber–physical failures. A critical step in a resilience-based approach against these disruptions is the rapid and reliable identification of failures and their types for the timely implementation of emergency or recovery actions. This study proposes a framework for sensor placement and multiple failure type classification in WDNs. It applies wrapper-based feature selection (recursive feature elimination) with Random Forest (RF–RFE) to find the best sensor locations and employs an Autoencoder–Random Forest (AE–RF) framework for failure type identification. The framework was tested on the C-Town WDN using failure scenarios of pipe leakage, cyberattacks, and physical attacks, generated with the EPANET-CPA and WNTR models. The results showed high performance for single failure events, with accuracies of 0.99 for leakage, 0.98 for cyberattacks, and 0.95 for physical attacks, while performance for multiple failure classification was lower but still acceptable, with an accuracy of 0.90. The reduced performance was attributed to the model’s difficulty in distinguishing failure types when they produced hydraulically similar consequences. The proposed framework, combining sensor placement and multiple failure identification, will contribute to advancing existing data-driven approaches and strengthening urban WDN resilience to conventional and cyber–physical disruptions. Full article
(This article belongs to the Special Issue Urban Water Resources Assessment and Environmental Governance)
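
A sketch of the RF-RFE sensor-selection step: recursive feature elimination with a Random Forest ranks candidate monitoring nodes by how well their hydraulic signals separate failure types. Data shapes and the number of sensors retained are assumptions; the AE-RF classification stage is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
n_scenarios, n_nodes = 600, 40
X = rng.normal(size=(n_scenarios, n_nodes))            # pressure features, one column per candidate node
y = rng.integers(0, 4, size=n_scenarios)               # 0 normal, 1 leak, 2 cyberattack, 3 physical attack

selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=8, step=2).fit(X, y)
sensor_nodes = np.flatnonzero(selector.support_)       # nodes retained as sensor locations
print("selected sensor nodes:", sensor_nodes)
```
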

15 pages, 1015 KB  
Article
Modelling the Presence of Smokers in Households for Future Policy and Advisory Applications
by David Moretón Pavón, Sandra Rodríguez-Sufuentes, Alicia Aguado, Rubèn González-Colom, Alba Gómez-López, Alexandra Kristian, Artur Badyda, Piotr Kepa, Leticia Pérez and Jose Fermoso
Air 2025, 3(4), 27; https://doi.org/10.3390/air3040027 - 7 Oct 2025
Abstract
Identifying tobacco smoke exposure in indoor environments is critical for public health, especially in vulnerable populations. In this study, we developed and validated a machine learning model to detect smoking households based on indoor air quality (IAQ) data collected using low-cost sensors. A dataset of 129 homes in Spain and Austria was analyzed, with variables including PM2.5, PM1, CO2, temperature, humidity, and total VOCs. The final model, based on the XGBoost algorithm, achieved near-perfect household-level classification (100% accuracy in the test set and AUC = 0.96 in external validation). Analysis of PM2.5 temporal profiles in representative households helped interpret model performance and highlighted cases where model predictions revealed inconsistencies in self-reported smoking status. These findings support the use of sensor-based approaches for behavioral inference and exposure assessment in residential settings. The proposed method could be extended to other indoor pollution sources and may contribute to risk communication, health-oriented interventions, and policy development, provided that ethical principles such as transparency and informed consent are upheld. Full article
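
A hedged sketch of the classification setup described above: each home's sensor time series is aggregated into summary features and an XGBoost classifier is trained on the household smoking label. The feature choices, thresholds, and simulated data are assumptions, not the study's pipeline.

```python
import numpy as np
import pandas as pd
from xgboost import XGBClassifier

rng = np.random.default_rng(2)
rows = []
for home in range(129):
    smoker = int(rng.random() < 0.4)
    pm25 = rng.gamma(2.0, 6.0 + 10.0 * smoker, size=1440)   # one day of minute-level PM2.5
    rows.append({
        "home": home, "smoker": smoker,
        "pm25_mean": pm25.mean(),
        "pm25_p95": np.percentile(pm25, 95),
        "pm25_spikes": int((pm25 > 35).sum()),               # minutes above a guideline-style threshold
    })
df = pd.DataFrame(rows)

features = ["pm25_mean", "pm25_p95", "pm25_spikes"]
model = XGBClassifier(n_estimators=300, max_depth=3, eval_metric="logloss")
model.fit(df[features], df["smoker"])
print(model.predict(df[features].head()))                    # predicted household smoking status
```
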

28 pages, 567 KB  
Article
Fine-Tune LLMs for PLC Code Security: An Information-Theoretic Analysis
by Ping Chen, Xiaojing Liu and Yi Wang
Mathematics 2025, 13(19), 3211; https://doi.org/10.3390/math13193211 - 7 Oct 2025
Abstract
Programmable Logic Controllers (PLCs), widely used in industrial automation, are often programmed in IEC 61131-3 Structured Text (ST), which is prone to subtle logic vulnerabilities. Traditional tools like static analysis and fuzzing struggle with the complexity and domain-specific semantics of ST. This work explores Large Language Models (LLMs) for PLC vulnerability detection, supported by both theoretical insights and empirical validation. Theoretically, we prove that control flow features carry the most vulnerability-relevant information, establish a feature informativeness hierarchy, and derive sample complexity bounds. We also propose an optimal synthetic data mixing strategy to improve learning with limited supervision. Empirically, we build a dataset combining real-world and synthetic ST code with five vulnerability types. We fine-tune open-source LLMs (CodeLlama, Qwen2.5-Coder, Starcoder2) using LoRA, demonstrating significant gains in binary and multi-class classification. The results confirm our theoretical predictions and highlight the promise of LLMs for PLC security. Our work provides a principled and practical foundation for LLM-based analysis of cyber-physical systems, emphasizing the role of domain knowledge, efficient adaptation, and formal guarantees. Full article
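
A sketch of the parameter-efficient fine-tuning setup the abstract describes: LoRA adapters on an open code LLM for ST vulnerability classification. The base checkpoint, target modules, ranks, and prompt are illustrative assumptions, not the authors' configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-Coder-1.5B"             # one of the model families named in the abstract
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],      # adapt attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()            # only the LoRA adapters are updated during fine-tuning

prompt = ("Classify the vulnerability in the following IEC 61131-3 ST code:\n"
          "IF Level > MAX_LEVEL THEN Pump := TRUE; END_IF\nAnswer:")
inputs = tokenizer(prompt, return_tensors="pt")
print(model.generate(**inputs, max_new_tokens=8))
```
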

21 pages, 3358 KB  
Article
Wave-Induced Loads and Fatigue Life of Small Vessels Under Complex Sea States
by Pasqualino Corigliano, Claudio Alacqua, Davide Crisafulli and Giulia Palomba
J. Mar. Sci. Eng. 2025, 13(10), 1920; https://doi.org/10.3390/jmse13101920 - 6 Oct 2025
Abstract
The Strait of Messina poses unique challenges for small vessels due to strong currents and complex wave conditions, which critically affect structural integrity and operational safety. This study proposes an integrated methodology that combines seakeeping analysis, a comparison with classification society rules, and fatigue life assessment within a unified and computationally efficient framework. A panel-based approach was used to compute vessel motions and vertical bending moments at different speeds and wave directions. Hydrodynamic loads derived from Response Amplitude Operators (RAOs) were compared with regulatory limits and applied to fatigue analysis. A further innovative aspect is the use of high-resolution bathymetric data from the Strait of Messina, enabling a realistic representation of local currents and sea states and providing a more accurate assessment than studies based on idealized conditions. The results show that forward speed amplifies bending moments, reducing safe wave heights from 2 m at rest to about 0.5 m at 16 knots. Fatigue analysis indicates that aluminum hulls are highly vulnerable to 2–3 m waves, while steel and titanium show no significant damage. The proposed workflow is transferable to other vessel types and supports safer design and operation. The case study of the Strait of Messina, the busiest and most challenging maritime corridor in Italy, confirms the validity and practical importance of the approach. By combining hydrodynamic and structural analyses into a single workflow, this study establishes the foundation for predictive maintenance and real-time structural health monitoring, with significant implications for navigation safety in complex sea environments. Full article
(This article belongs to the Special Issue Advanced Studies in Marine Mechanical and Naval Engineering)
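
A hedged sketch of a spectral fatigue estimate from a stress RAO: the stress response spectrum is |RAO|^2 * S_wave, and a narrow-band (Miner) damage rate follows from its spectral moments and an S-N curve N = K * S^-m. The RAO shape, wave spectrum, and S-N parameters below are illustrative, not the paper's values.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import gamma

omega = np.linspace(0.2, 2.5, 500)                       # wave frequency (rad/s)
Hs, Tp = 1.5, 6.0                                        # significant wave height (m), peak period (s)

# Simple Pierson-Moskowitz-type wave spectrum (illustrative).
wp = 2 * np.pi / Tp
S_wave = (5 / 16) * Hs**2 * wp**4 / omega**5 * np.exp(-1.25 * (wp / omega) ** 4)

# Hypothetical stress RAO (MPa per metre of wave amplitude).
rao = 40.0 / (1 + ((omega - 0.9) / 0.3) ** 2)

S_stress = rao**2 * S_wave                               # stress response spectrum
m0 = trapezoid(S_stress, omega)                          # spectral moments
m2 = trapezoid(omega**2 * S_stress, omega)
f_z = np.sqrt(m2 / m0) / (2 * np.pi)                     # zero up-crossing frequency (Hz)

K, m = 2.0e12, 3.0                                       # S-N curve N = K * S^-m (stress in MPa)
T_exposure = 365 * 24 * 3600                             # one year in this sea state (s)
damage = (T_exposure * f_z / K) * (2 * np.sqrt(2 * m0)) ** m * gamma(1 + m / 2)
print(f"narrow-band annual damage: {damage:.3e}")        # Miner damage >= 1 implies fatigue failure
```
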

20 pages, 620 KB  
Article
Discriminative Regions and Adversarial Sensitivity in CNN-Based Malware Image Classification
by Anish Roy and Fabio Di Troia
Electronics 2025, 14(19), 3937; https://doi.org/10.3390/electronics14193937 - 4 Oct 2025
Abstract
The escalating prevalence of malware poses a significant threat to digital infrastructure, demanding robust yet efficient detection methods. In this study, we evaluate multiple Convolutional Neural Network (CNN) architectures, including a basic CNN, LeNet, AlexNet, GoogLeNet, and DenseNet, on a dataset of 11,000 malware images spanning 452 families. Our experiments demonstrate that CNN models can achieve reliable classification performance across both multiclass and binary tasks. However, we also uncover a critical weakness: even minimal image perturbations, such as modifying fewer than 1% of the total image pixels, drastically degrade accuracy and reveal CNNs’ fragility in adversarial settings. A key contribution of this work is a spatial analysis of malware images, revealing that discriminative features concentrate disproportionately in the bottom-left quadrant. This spatial bias likely reflects semantic structure, as malware payload information often resides near the end of binary files when rasterized. Notably, models trained on this region outperform those trained on other sections, underscoring the importance of spatial awareness in malware classification. Taken together, our results reveal that CNN-based malware classifiers are simultaneously effective and vulnerable: they learn strong representations but remain sensitive to both subtle perturbations and positional bias. These findings highlight the need for future detection systems that integrate robustness to noise with resilience against spatial distortions to ensure reliability in real-world adversarial environments. Full article
(This article belongs to the Special Issue AI and Cybersecurity: Emerging Trends and Key Challenges)
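
A sketch of the two probes the abstract describes: perturbing fewer than 1% of the pixels of a malware image and classifying from the bottom-left quadrant only. The tiny CNN and random data are placeholders, not the paper's models or dataset.

```python
import torch
import torch.nn as nn

def perturb(images, fraction=0.005):
    """Flip a random `fraction` of the pixels in each grayscale image."""
    noisy = images.clone()
    n_pix = images[0].numel()
    k = max(1, int(fraction * n_pix))
    for img in noisy:
        idx = torch.randperm(n_pix)[:k]
        img.view(-1)[idx] = 1.0 - img.view(-1)[idx]
    return noisy

def bottom_left_quadrant(images):
    """Keep the region found to carry the most discriminative information."""
    _, _, h, w = images.shape
    return images[:, :, h // 2 :, : w // 2]

cnn = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))

x = torch.rand(16, 1, 64, 64)                    # hypothetical 64x64 malware images
print(cnn(perturb(x)).shape)                     # logits under <1% pixel perturbation
print(cnn(bottom_left_quadrant(x)).shape)        # logits from the bottom-left quadrant only
```
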