Search Results (753)

Search Parameters:
Keywords = deep learning threats

31 pages, 2495 KiB  
Article
AIM-Net: A Resource-Efficient Self-Supervised Learning Model for Automated Red Spider Mite Severity Classification in Tea Cultivation
by Malathi Kanagarajan, Mohanasundaram Natarajan, Santhosh Rajendran, Parthasarathy Velusamy, Saravana Kumar Ganesan, Manikandan Bose, Ranjithkumar Sakthivel and Baskaran Stephen Inbaraj
AgriEngineering 2025, 7(8), 247; https://doi.org/10.3390/agriengineering7080247 - 1 Aug 2025
Abstract
Tea cultivation faces significant threats from red spider mite (RSM: Oligonychus coffeae) infestations, which reduce yields and economic viability in major tea-producing regions. Current automated detection methods rely on supervised deep learning models requiring extensive labeled data, limiting scalability for smallholder farmers. This article proposes AIM-Net (AI-based Infestation Mapping Network) by evaluating SwAV (Swapping Assignments between Views), a self-supervised learning framework, for classifying RSM infestation severity (Mild, Moderate, Severe) using a geo-referenced, field-acquired dataset of RSM infested tea-leaves, Cam-RSM. The methodology combines SwAV pre-training on unlabeled data with fine-tuning on labeled subsets, employing multi-crop augmentation and online clustering to learn discriminative features without full supervision. Comparative analysis against a fully supervised ResNet-50 baseline utilized 5-fold cross-validation, assessing accuracy, F1-scores, and computational efficiency. Results demonstrate SwAV’s superiority, achieving 98.7% overall accuracy (vs. 92.1% for ResNet-50) and macro-average F1-scores of 98.3% across classes, with a 62% reduction in labeled data requirements. The model showed particular strength in Mild_RSM-class detection (F1-score: 98.5%) and computational efficiency, enabling deployment on edge devices. Statistical validation confirmed significant improvements (p < 0.001) over baseline approaches. These findings establish self-supervised learning as a transformative tool for precision pest management, offering resource-efficient solutions for early infestation detection while maintaining high accuracy. Full article
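The swapped-prediction objective at the heart of SwAV can be illustrated in a few lines. The sketch below is a toy NumPy version: real SwAV computes the codes q with the Sinkhorn-Knopp algorithm over a batch and trains the prototypes end to end, whereas here plain softmax assignments stand in for the codes.

```python
import numpy as np

def softmax(x, t=0.1):
    # temperature-scaled softmax over prototype scores
    e = np.exp(x / t - np.max(x / t))
    return e / e.sum()

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(8, 16))          # K=8 cluster prototypes, d=16
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

# features of two augmented views ("multi-crop") of the same leaf image
z1 = rng.normal(size=16); z1 /= np.linalg.norm(z1)
z2 = rng.normal(size=16); z2 /= np.linalg.norm(z2)

p1, p2 = softmax(prototypes @ z1), softmax(prototypes @ z2)
# codes would come from Sinkhorn-Knopp in SwAV; the softmax assignments
# are a stand-in to keep the sketch short
q1, q2 = p1, p2

# swapped prediction: view 1 predicts view 2's code and vice versa
loss = -(q2 * np.log(p1)).sum() - (q1 * np.log(p2)).sum()
```

Because the loss only asks each view to predict the other view's cluster code, no class labels are needed, which is what lets the method cut labeled-data requirements.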

17 pages, 3062 KiB  
Article
Spatiotemporal Risk-Aware Patrol Planning Using Value-Based Policy Optimization and Sensor-Integrated Graph Navigation in Urban Environments
by Swarnamouli Majumdar, Anjali Awasthi and Lorant Andras Szolga
Appl. Sci. 2025, 15(15), 8565; https://doi.org/10.3390/app15158565 - 1 Aug 2025
Abstract
This study proposes an intelligent patrol planning framework that leverages reinforcement learning, spatiotemporal crime forecasting, and simulated sensor telemetry to optimize autonomous vehicle (AV) navigation in urban environments. Crime incidents from Washington DC (2024–2025) and Seattle (2008–2024) are modeled as a dynamic spatiotemporal graph, capturing the evolving intensity and distribution of criminal activity across neighborhoods and time windows. The agent’s state space incorporates synthetic AV sensor inputs—including fuel level, visual anomaly detection, and threat signals—to reflect real-world operational constraints. We evaluate and compare three learning strategies: Deep Q-Network (DQN), Double Deep Q-Network (DDQN), and Proximal Policy Optimization (PPO). Experimental results show that DDQN outperforms DQN in convergence speed and reward accumulation, while PPO demonstrates greater adaptability in sensor-rich, high-noise conditions. Real-map simulations and hourly risk heatmaps validate the effectiveness of our approach, highlighting its potential to inform scalable, data-driven patrol strategies in next-generation smart cities. Full article
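The DQN-vs-DDQN comparison above turns on how the bootstrap target is built. A toy NumPy sketch with random Q-values (`q_online` and `q_target` are illustrative stand-ins for the two networks' outputs at the next state):

```python
import numpy as np

rng = np.random.default_rng(1)
n_actions, gamma, reward = 4, 0.99, 1.0
q_online = rng.normal(size=n_actions)   # online network Q(s', ·)
q_target = rng.normal(size=n_actions)   # target network Q(s', ·)

# DQN: the target network both selects and evaluates the next action,
# which is known to overestimate action values
dqn_target = reward + gamma * q_target.max()

# DDQN: the online network selects the action, the target network evaluates it
a_star = int(np.argmax(q_online))
ddqn_target = reward + gamma * q_target[a_star]
```

Decoupling selection from evaluation is what gives Double DQN its less biased targets, consistent with the faster convergence reported here.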
(This article belongs to the Special Issue AI-Aided Intelligent Vehicle Positioning in Urban Areas)

17 pages, 1027 KiB  
Article
AI-Driven Security for Blockchain-Based Smart Contracts: A GAN-Assisted Deep Learning Approach to Malware Detection
by Imad Bourian, Lahcen Hassine and Khalid Chougdali
J. Cybersecur. Priv. 2025, 5(3), 53; https://doi.org/10.3390/jcp5030053 - 1 Aug 2025
Abstract
In the modern era, the use of blockchain technology has been growing rapidly, where Ethereum smart contracts play an important role in securing decentralized application systems. However, these smart contracts are also susceptible to a large number of vulnerabilities, which pose significant threats to intelligent systems and IoT applications, leading to data breaches and financial losses. Traditional detection techniques, such as manual analysis and static automated tools, suffer from high false positives and undetected security vulnerabilities. To address these problems, this paper proposes an Artificial Intelligence (AI)-based security framework that integrates Generative Adversarial Network (GAN)-based feature selection and deep learning techniques to classify and detect malware attacks on smart contract execution in the blockchain decentralized network. After an exhaustive pre-processing phase yielding a dataset of 40,000 malware and benign samples, the proposed model is evaluated and compared with related studies on the basis of a number of performance metrics including training accuracy, training loss, and classification metrics (accuracy, precision, recall, and F1-score). Our combined approach achieved a remarkable accuracy of 97.6%, demonstrating its effectiveness in detecting malware and protecting blockchain systems. Full article

30 pages, 4409 KiB  
Article
Accident Impact Prediction Based on a Deep Convolutional and Recurrent Neural Network Model
by Pouyan Sajadi, Mahya Qorbani, Sobhan Moosavi and Erfan Hassannayebi
Urban Sci. 2025, 9(8), 299; https://doi.org/10.3390/urbansci9080299 - 1 Aug 2025
Abstract
Traffic accidents pose a significant threat to public safety, resulting in numerous fatalities, injuries, and a substantial economic burden each year. The development of predictive models capable of the real-time forecasting of post-accident impact using readily available data can play a crucial role in preventing adverse outcomes and enhancing overall safety. However, existing accident predictive models encounter two main challenges: first, a reliance on either costly or non-real-time data, and second, the absence of a comprehensive metric to measure post-accident impact accurately. To address these limitations, this study proposes a deep neural network model known as the cascade model. It leverages readily available real-world data from Los Angeles County to predict post-accident impacts. The model consists of two components: Long Short-Term Memory (LSTM) and a Convolutional Neural Network (CNN). The LSTM model captures temporal patterns, while the CNN extracts patterns from the sparse accident dataset. Furthermore, an external traffic congestion dataset is incorporated to derive a new feature called the “accident impact” factor, which quantifies the influence of an accident on surrounding traffic flow. Extensive experiments were conducted to demonstrate the effectiveness of the proposed hybrid machine learning method in predicting the post-accident impact compared to state-of-the-art baselines. The results reveal a higher precision in predicting minimal impacts (i.e., cases with no reported accidents) and a higher recall in predicting more significant impacts (i.e., cases with reported accidents). Full article

29 pages, 482 KiB  
Review
AI in Maritime Security: Applications, Challenges, Future Directions, and Key Data Sources
by Kashif Talpur, Raza Hasan, Ismet Gocer, Shakeel Ahmad and Zakirul Bhuiyan
Information 2025, 16(8), 658; https://doi.org/10.3390/info16080658 - 31 Jul 2025
Abstract
The growth and sustainability of today’s global economy rely heavily on smooth maritime operations. Marine environments face increasingly complex security challenges, such as smuggling, illegal fishing, human trafficking, and environmental threats, which outstrip the limitations of traditional surveillance methods. Artificial intelligence (AI), particularly deep learning, offers strong capabilities for automating object detection, anomaly identification, and situational awareness in maritime environments. In this paper, we review the state-of-the-art deep learning models proposed in recent literature (2020–2025), including convolutional neural networks, recurrent neural networks, Transformers, and multimodal fusion architectures. We highlight their success in processing diverse data sources such as satellite imagery, AIS, SAR, radar, and sensor inputs from UxVs. Multimodal data fusion techniques further enhance robustness by integrating complementary data, yielding higher detection accuracy. Challenges remain in detecting small or occluded objects, handling cluttered scenes, and interpreting unusual vessel behaviours, especially under adverse sea conditions. Explainability and real-time deployment of AI models in operational settings are also open research areas. Overall, our review of the maritime literature suggests that deep learning is rapidly transforming maritime domain awareness and response, with significant potential to improve global maritime security and operational efficiency. We also provide key datasets for deep learning models in the maritime security domain. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Intelligent Information Systems)

15 pages, 1600 KiB  
Article
XLNet-CRF: Efficient Named Entity Recognition for Cyber Threat Intelligence with Permutation Language Modeling
by Tianhao Wang, Yang Liu, Chao Liang, Bailing Wang and Hongri Liu
Electronics 2025, 14(15), 3034; https://doi.org/10.3390/electronics14153034 - 30 Jul 2025
Abstract
As cyberattacks continue to rise in frequency and sophistication, extracting actionable Cyber Threat Intelligence (CTI) from diverse online sources has become critical for proactive threat detection and defense. However, accurately identifying complex entities from lengthy and heterogeneous threat reports remains challenging due to long-range dependencies and domain-specific terminology. To address this, we propose XLNet-CRF, a hybrid framework that combines permutation-based language modeling with structured prediction using Conditional Random Fields (CRF) to enhance Named Entity Recognition (NER) in cybersecurity contexts. XLNet-CRF directly addresses key challenges in CTI-NER by modeling bidirectional dependencies and capturing non-contiguous semantic patterns more effectively than traditional approaches. Comprehensive evaluations on two benchmark cybersecurity corpora validate the efficacy of our approach. On the CTI-Reports dataset, XLNet-CRF achieves a precision of 97.41% and an F1-score of 97.43%; on MalwareTextDB, it attains a precision of 85.33% and an F1-score of 88.65%—significantly surpassing strong BERT-based baselines in both accuracy and robustness. Full article
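A linear-chain CRF layer like the one XLNet-CRF places on top of the encoder decodes the most likely tag sequence with the Viterbi algorithm. A minimal NumPy sketch, with made-up emission and transition scores for a three-token example (tag names are illustrative, not the paper's tag set):

```python
import numpy as np

def viterbi(emissions, transitions):
    """Most likely tag sequence under a linear-chain CRF (log-space scores)."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # score of ending at tag j via previous tag i
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    tags = [int(score.argmax())]
    for t in range(T - 1, 0, -1):   # follow backpointers
        tags.append(int(back[t][tags[-1]]))
    return tags[::-1]

# toy 3-token sentence, tags: 0=O, 1=B-MALWARE, 2=I-MALWARE
emissions = np.array([[2.0, 0.1, 0.0],
                      [0.1, 2.0, 0.5],
                      [0.0, 0.5, 2.0]])
transitions = np.array([[0.0, 0.0, -4.0],   # O -> I is discouraged
                        [0.0, -1.0, 1.0],
                        [0.0, -1.0, 1.0]])
path = viterbi(emissions, transitions)
```

The transition matrix is what lets the CRF enforce label consistency (e.g., an I- tag cannot follow O), which per-token classifiers cannot guarantee.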

22 pages, 3082 KiB  
Article
A Lightweight Intrusion Detection System with Dynamic Feature Fusion Federated Learning for Vehicular Network Security
by Junjun Li, Yanyan Ma, Jiahui Bai, Congming Chen, Tingting Xu and Chi Ding
Sensors 2025, 25(15), 4622; https://doi.org/10.3390/s25154622 - 25 Jul 2025
Abstract
The rapid integration of complex sensors and electronic control units (ECUs) in autonomous vehicles significantly increases cybersecurity risks in vehicular networks. Although the Controller Area Network (CAN) is efficient, it lacks inherent security mechanisms and is vulnerable to various network attacks. Traditional Intrusion Detection Systems (IDSs) struggle to deal effectively with the dynamics and complexity of emerging threats. To solve these problems, a lightweight vehicular network intrusion detection framework based on Dynamic Feature Fusion Federated Learning (DFF-FL) is proposed. The proposed framework employs a two-stream architecture, including a transformer-augmented autoencoder for abstract feature extraction and a lightweight CNN-LSTM–Attention model for preserving temporal and local patterns. Compared with a traditional federated learning framework, DFF-FL first dynamically fuses the deep feature representation of each node through the transformer attention module to realize fine-grained cross-node feature interaction in a heterogeneous data environment, thereby eliminating the performance degradation caused by differences in feature distribution. Second, based on each node's final autoencoder loss L_AE(X, X̂), an adaptive weight adjustment mechanism lets nodes with excellent performance dominate the global model update, which significantly improves robustness against complex attacks. Experimental evaluation on the CAN-Hacking dataset shows that the proposed intrusion detection system achieves more than a 99% F1 score with only 1.11 MB of memory and 81,863 trainable parameters, while maintaining low computational overhead and ensuring data privacy, making it well suited for edge device deployment. Full article
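Loss-based adaptive aggregation of node updates can be sketched as follows. Inverse-loss weighting is our illustrative choice here, not necessarily the paper's exact formula; the point is that low-loss nodes dominate the global update.

```python
import numpy as np

# per-node validation losses of the autoencoder, L_AE(X, X_hat) (toy values)
node_losses = np.array([0.08, 0.21, 0.05, 0.40])
# each node's proposed parameter update (toy 3-dim vectors)
node_updates = np.stack([np.full(3, v) for v in (1.0, 2.0, 3.0, 4.0)])

# adaptive weights: nodes with lower loss get more say in the global model
w = 1.0 / node_losses
w /= w.sum()                                   # normalize to a convex combination
global_update = (w[:, None] * node_updates).sum(axis=0)
```

Node 2 (loss 0.05) receives the largest weight, so the aggregated update is pulled toward the best-performing node rather than a plain FedAvg mean.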
(This article belongs to the Section Sensor Networks)

23 pages, 3875 KiB  
Article
Soil Water-Soluble Ion Inversion via Hyperspectral Data Reconstruction and Multi-Scale Attention Mechanism: A Remote Sensing Case Study of Farmland Saline–Alkali Lands
by Meichen Liu, Shengwei Zhang, Jing Gao, Bo Wang, Kedi Fang, Lu Liu, Shengwei Lv and Qian Zhang
Agronomy 2025, 15(8), 1779; https://doi.org/10.3390/agronomy15081779 - 24 Jul 2025
Abstract
The salinization of agricultural soils is a serious threat to farming and ecological balance in arid and semi-arid regions. Accurate estimation of soil water-soluble ions (calcium, carbonate, magnesium, and sulfate) is necessary for correct monitoring of soil salinization and sustainable land management. Hyperspectral ground-based data are valuable in soil salinization monitoring, but the acquisition cost is high, and the coverage is small. Therefore, this study proposes a two-stage deep learning framework with multispectral remote-sensing images. First, the wavelet transform is used to enhance the Transformer and extract fine-grained spectral features to reconstruct the ground-based hyperspectral data. A comparison of ground-based hyperspectral data shows that the reconstructed spectra match the measured data in the 450–998 nm range, with R2 up to 0.98 and MSE = 0.31. This high similarity compensates for the low spectral resolution and weak feature expression of multispectral remote-sensing data. Subsequently, this enhanced spectral information was integrated and fed into a novel multiscale self-attentive Transformer model (MSATransformer) to invert four water-soluble ions. Compared with BPANN, MLP, and the standard Transformer model, our model remains robust across different spectra, achieving an R2 of up to 0.95 and reducing the average relative error by more than 30%. Among them, for the strongly responsive ions magnesium and sulfate, R2 reaches 0.92 and 0.95 (with RMSE of 0.13 and 0.29 g/kg, respectively). For the weakly responsive ions calcium and carbonate, R2 stays above 0.80 (RMSE is below 0.40 g/kg). The MSATransformer framework provides a low-cost and high-accuracy solution to monitor soil salinization at large scales and supports precision farmland management. Full article
(This article belongs to the Special Issue Water and Fertilizer Regulation Theory and Technology in Crops)

21 pages, 2789 KiB  
Article
BIM-Based Adversarial Attacks Against Speech Deepfake Detectors
by Wendy Edda Wang, Davide Salvi, Viola Negroni, Daniele Ugo Leonzio, Paolo Bestagini and Stefano Tubaro
Electronics 2025, 14(15), 2967; https://doi.org/10.3390/electronics14152967 - 24 Jul 2025
Abstract
Automatic Speaker Verification (ASV) systems are increasingly employed to secure access to services and facilities. However, recent advances in speech deepfake generation pose serious threats to their reliability. Modern speech synthesis models can convincingly imitate a target speaker’s voice and generate realistic synthetic audio, potentially enabling unauthorized access through ASV systems. To counter these threats, forensic detectors have been developed to distinguish between real and fake speech. Although these models achieve strong performance, their deep learning nature makes them susceptible to adversarial attacks, i.e., carefully crafted, imperceptible perturbations in the audio signal that make the model unable to classify correctly. In this paper, we explore adversarial attacks targeting speech deepfake detectors. Specifically, we analyze the effectiveness of Basic Iterative Method (BIM) attacks applied in both time and frequency domains under white- and black-box conditions. Additionally, we propose an ensemble-based attack strategy designed to simultaneously target multiple detection models. This approach generates adversarial examples with balanced effectiveness across the ensemble, enhancing transferability to unseen models. Our experimental results show that, although crafting universally transferable attacks remains challenging, it is possible to fool state-of-the-art detectors using minimal, imperceptible perturbations, highlighting the need for more robust defenses in speech deepfake detection. Full article
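BIM itself is just an iterated, clipped gradient-sign step. A toy NumPy sketch against a linear scorer stands in for the real attack, which would backpropagate through the detector network:

```python
import numpy as np

def bim_attack(x, w, eps=0.03, alpha=0.005, steps=10):
    """Basic Iterative Method against a toy linear scorer f(x) = w.x.
    Pushes the score down under an L-infinity budget eps."""
    x_adv = x.copy()
    for _ in range(steps):
        grad = w                                  # d(w.x)/dx for a linear model
        x_adv = x_adv - alpha * np.sign(grad)     # small step against the score
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay inside the eps-ball
        x_adv = np.clip(x_adv, -1.0, 1.0)         # keep a valid waveform range
    return x_adv

rng = np.random.default_rng(2)
x = rng.uniform(-0.5, 0.5, size=64)    # toy audio frame
w = rng.normal(size=64)                # toy detector weights
x_adv = bim_attack(x, w)
```

The eps-ball clip after every step is what keeps the perturbation imperceptible; the time- vs frequency-domain variants studied in the paper differ only in where the gradient step is taken.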

15 pages, 2123 KiB  
Article
Multi-Class Visual Cyberbullying Detection Using Deep Neural Networks and the CVID Dataset
by Muhammad Asad Arshed, Zunera Samreen, Arslan Ahmad, Laiba Amjad, Hasnain Muavia, Christine Dewi and Muhammad Kabir
Information 2025, 16(8), 630; https://doi.org/10.3390/info16080630 - 24 Jul 2025
Abstract
In an era where online interactions increasingly shape social dynamics, the pervasive issue of cyberbullying poses a significant threat to the well-being of individuals, particularly among vulnerable groups. Despite extensive research on text-based cyberbullying detection, the rise of visual content on social media platforms necessitates new approaches to address cyberbullying using images. This domain has been largely overlooked. In this paper, we present a novel dataset specifically designed for the detection of visual cyberbullying, encompassing four distinct classes: abuse, curse, discourage, and threat. The initial prepared dataset (cyberbullying visual indicators dataset (CVID)) comprised 664 samples for training and validation, expanded through data augmentation techniques to ensure balanced and accurate results across all classes. We analyzed this dataset using several advanced deep learning models, including VGG16, VGG19, MobileNetV2, and Vision Transformer. The proposed model, based on DenseNet201, achieved the highest test accuracy of 99%, demonstrating its efficacy in identifying the visual cues associated with cyberbullying. To prove the proposed model’s generalizability, the 5-fold stratified K-fold was also considered, and the model achieved an average test accuracy of 99%. This work introduces a dataset and highlights the potential of leveraging deep learning models to address the multifaceted challenges of detecting cyberbullying in visual content. Full article
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)

25 pages, 5142 KiB  
Article
Wheat Powdery Mildew Severity Classification Based on an Improved ResNet34 Model
by Meilin Li, Yufeng Guo, Wei Guo, Hongbo Qiao, Lei Shi, Yang Liu, Guang Zheng, Hui Zhang and Qiang Wang
Agriculture 2025, 15(15), 1580; https://doi.org/10.3390/agriculture15151580 - 23 Jul 2025
Abstract
Crop disease identification is a pivotal research area in smart agriculture, forming the foundation for disease mapping and targeted prevention strategies. Among the most prevalent global wheat diseases, powdery mildew—caused by fungal infection—poses a significant threat to crop yield and quality, making early and accurate detection crucial for effective management. In this study, we present QY-SE-MResNet34, a deep learning-based classification model that builds upon ResNet34 to perform multi-class classification of wheat leaf images and assess powdery mildew severity at the single-leaf level. The proposed methodology begins with dataset construction following the GBT 17980.22-2000 national standard for powdery mildew severity grading, resulting in a curated collection of 4248 wheat leaf images at the grain-filling stage across six severity levels. To enhance model performance, we integrated transfer learning with ResNet34, leveraging pretrained weights to improve feature extraction and accelerate convergence. Further refinements included embedding a Squeeze-and-Excitation (SE) block to strengthen feature representation while maintaining computational efficiency. The model architecture was also optimized by modifying the first convolutional layer (conv1)—replacing the original 7 × 7 kernel with a 3 × 3 kernel, adjusting the stride to 1, and setting padding to 1—to better capture fine-grained leaf textures and edge features. Subsequently, the optimal training strategy was determined through hyperparameter tuning experiments, and GrabCut-based background processing along with data augmentation were introduced to enhance model robustness. In addition, interpretability techniques such as channel masking and Grad-CAM were employed to visualize the model’s decision-making process. 
Experimental validation demonstrated that QY-SE-MResNet34 achieved an 89% classification accuracy, outperforming established models such as ResNet50, VGG16, and MobileNetV2 and surpassing the original ResNet34 by 11%. This study delivers a high-performance solution for single-leaf wheat powdery mildew severity assessment, offering practical value for intelligent disease monitoring and early warning systems in precision agriculture. Full article
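The Squeeze-and-Excitation block embedded in QY-SE-MResNet34 reduces each channel to a scalar, passes the vector through a two-layer bottleneck, and rescales the channels with sigmoid gates. A minimal NumPy sketch with random weights (shapes are illustrative):

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation: global-average squeeze, two-layer excitation,
    then channel-wise recalibration. feature_map has shape (C, H, W)."""
    s = feature_map.mean(axis=(1, 2))          # squeeze: (C,)
    h = np.maximum(w1 @ s, 0.0)                # reduction FC + ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ h)))    # expansion FC + sigmoid, in (0, 1)
    return feature_map * gates[:, None, None]  # reweight each channel

rng = np.random.default_rng(3)
fmap = rng.normal(size=(8, 4, 4))              # C=8 channels
w1 = rng.normal(size=(2, 8))                   # bottleneck: reduction ratio r=4
w2 = rng.normal(size=(8, 2))
out = se_block(fmap, w1, w2)
```

Because the gates lie in (0, 1), the block can only attenuate uninformative channels, adding almost no parameters, which matches the paper's goal of stronger feature representation at low computational cost.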

10 pages, 637 KiB  
Proceeding Paper
Improving Industrial Control System Cybersecurity with Time-Series Prediction Models
by Velizar Varbanov and Tatiana Atanasova
Eng. Proc. 2025, 101(1), 4; https://doi.org/10.3390/engproc2025101004 - 22 Jul 2025
Abstract
Traditional security detection methods struggle to identify zero-day attacks in Industrial Control Systems (ICSs), particularly within critical infrastructures (CIs) integrated with the Industrial Internet of Things (IIoT). These attacks exploit unknown vulnerabilities, leveraging the complexity of physical and digital system interconnections, making them difficult to detect. The integration of legacy ICS networks with modern computing and networking technologies has expanded the attack surface, increasing susceptibility to cyber threats. Anomaly detection systems play a crucial role in safeguarding these infrastructures by identifying deviations from normal operations. This study investigates the effectiveness of deep learning-based anomaly detection models in revealing operational anomalies that could indicate potential cyber-attacks. We implemented and evaluated a hybrid deep learning architecture combining Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks to analyze ICS telemetry data. The CNN-LSTM model excels in identifying time-dependent anomalies and enables near real-time detection of cyber-attacks, significantly improving security monitoring capabilities for IIoT-integrated critical infrastructures. Full article
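A common way to turn such a telemetry-forecasting model into an anomaly detector is to threshold the prediction error against statistics of normal operation. A toy NumPy sketch with synthetic residuals; the 3-sigma rule is our illustrative choice, not necessarily the paper's decision rule:

```python
import numpy as np

rng = np.random.default_rng(4)
# absolute residuals between predicted and observed ICS telemetry (toy data)
normal_err = np.abs(rng.normal(0.0, 0.1, size=500))   # normal operation
attack_err = np.abs(rng.normal(1.0, 0.1, size=20))    # during an attack

# flag anomalies where the error exceeds mean + 3*std of normal operation
threshold = normal_err.mean() + 3 * normal_err.std()
flags = attack_err > threshold
```

Because the CNN-LSTM learns only normal dynamics, deviations show up as large residuals, which is how zero-day attacks can be flagged without attack-labeled training data.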

38 pages, 6851 KiB  
Article
FGFNet: Fourier Gated Feature-Fusion Network with Fractal Dimension Estimation for Robust Palm-Vein Spoof Detection
by Seung Gu Kim, Jung Soo Kim and Kang Ryoung Park
Fractal Fract. 2025, 9(8), 478; https://doi.org/10.3390/fractalfract9080478 - 22 Jul 2025
Abstract
The palm-vein recognition system has garnered attention as a biometric technology due to its resilience to external environmental factors, protection of personal privacy, and low risk of external exposure. However, with recent advancements in deep learning-based generative models for image synthesis, the quality and sophistication of fake images have improved, leading to an increased security threat from counterfeit images. In particular, palm-vein images acquired through near-infrared illumination exhibit low resolution and blurred characteristics, making it even more challenging to detect fake images. Furthermore, spoof detection specifically targeting palm-vein images has not been studied in detail. To address these challenges, this study proposes the Fourier-gated feature-fusion network (FGFNet) as a novel spoof detector for palm-vein recognition systems. The proposed network integrates masked fast Fourier transform, a map-based gated feature fusion block, and a fast Fourier convolution (FFC) attention block with global contrastive loss to effectively detect distortion patterns caused by generative models. These components enable the efficient extraction of critical information required to determine the authenticity of palm-vein images. In addition, fractal dimension estimation (FDE) was employed for two purposes in this study. In the spoof attack procedure, FDE was used to evaluate how closely the generated fake images approximate the structural complexity of real palm-vein images, confirming that the generative model produced highly realistic spoof samples. In the spoof detection procedure, the FDE results further demonstrated that the proposed FGFNet effectively distinguishes between real and fake images, validating its capability to capture subtle structural differences induced by generative manipulation. 
To evaluate the spoof detection performance of FGFNet, experiments were conducted using real palm-vein images from two publicly available palm-vein datasets—VERA Spoofing PalmVein (VERA dataset) and PLUSVein-contactless (PLUS dataset)—as well as fake palm-vein images generated based on these datasets using a cycle-consistent generative adversarial network. The results showed that, based on the average classification error rate, FGFNet achieved 0.3% and 0.3% on the VERA and PLUS datasets, respectively, demonstrating superior performance compared to existing state-of-the-art spoof detection methods. Full article
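Fractal dimension estimation is commonly done by box counting: cover the binary image with boxes of shrinking size s and fit the slope of log N(s) against log(1/s). The abstract does not specify the paper's FDE procedure, so the NumPy sketch below is a generic illustration:

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary image as the slope of
    log N(s) versus log(1/s), where N(s) counts occupied s-by-s boxes."""
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s, img.shape[1] // s
        blocks = img[:h * s, :w * s].reshape(h, s, w, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes at scale s
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# sanity checks: a filled square is ~2-D, a diagonal line is ~1-D
square = np.ones((32, 32), dtype=bool)
line = np.eye(32, dtype=bool)
d_square = box_counting_dimension(square)
d_line = box_counting_dimension(line)
```

Comparing such dimension estimates for real and generated vein images gives a model-free measure of whether the spoofs reproduce the structural complexity of genuine samples.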

27 pages, 21494 KiB  
Article
Deep Learning and Transformer Models for Groundwater Level Prediction in the Marvdasht Plain: Protecting UNESCO Heritage Sites—Persepolis and Naqsh-e Rustam
by Peyman Heidarian, Franz Pablo Antezana Lopez, Yumin Tan, Somayeh Fathtabar Firozjaee, Tahmouras Yousefi, Habib Salehi, Ava Osman Pour, Maria Elena Oscori Marca, Guanhua Zhou, Ali Azhdari and Reza Shahbazi
Remote Sens. 2025, 17(14), 2532; https://doi.org/10.3390/rs17142532 - 21 Jul 2025
Abstract
Groundwater level monitoring is crucial for assessing hydrological responses to climate change and human activities, which pose significant threats to the sustainability of semi-arid aquifers and the cultural heritage they sustain. This study presents an integrated remote sensing and transformer-based deep learning framework that combines diverse geospatial datasets to predict spatiotemporal groundwater variations across the plain near the Persepolis and Naqsh-e Rustam archaeological complexes—UNESCO World Heritage Sites situated at the plain’s edge. We assemble 432 synthetic aperture radar (SAR) scenes (2015–2022) and derive vertical ground motion rates exceeding −180 mm yr⁻¹, which are co-localized with multisource geoinformation, including hydrometeorological indices, biophysical parameters, and terrain attributes, to train transformer models alongside traditional deep learning methods. A sparse probabilistic transformer (ConvTransformer) trained on 95 gridded variables achieves an out-of-sample R² = 0.83 and RMSE = 6.15 m, outperforming bidirectional deep learning models by more than 40%. Scenario analysis indicates that, in the absence of intervention, subsidence may exceed 200 mm per year within a decade, threatening irreplaceable Achaemenid stone reliefs. Our results indicate that attention-based networks, when coupled with geodetic constraints, enable early-warning quantification of groundwater stress over heritage sites and provide a scalable template for sustainable aquifer governance worldwide. Full article
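Sequence models such as the transformers described above are typically trained on fixed-length windows cut from each monitoring series; each window of past values (plus co-located gridded covariates) becomes one training sample. The following minimal sketch shows one common way to frame a 1-D series for supervised prediction; the `lookback` and `horizon` parameters are hypothetical, not values from the study.

```python
import numpy as np

def make_windows(series, lookback, horizon):
    """Frame a 1-D series as (X, y) pairs for sequence models.

    X[i] holds `lookback` consecutive values; y[i] is the value
    `horizon` steps after the end of that window.
    """
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback + horizon - 1])
    return np.array(X), np.array(y)
```

In a multivariate setting like the one described, each window would stack the 95 gridded variables alongside the groundwater levels, but the framing logic stays the same.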

26 pages, 3468 KiB  
Article
A Hybrid CNN–BiLSTM Framework Optimized with Bayesian Search for Robust Android Malware Detection
by Ibrahim Mutambik
Systems 2025, 13(7), 612; https://doi.org/10.3390/systems13070612 - 19 Jul 2025
Abstract
With the rapid proliferation of Android smartphones, mobile malware threats have escalated significantly, underscoring the need for more accurate and adaptive detection solutions. This work proposes a hybrid deep learning model that combines Convolutional Neural Networks (CNNs) with Bidirectional Long Short-Term Memory (BiLSTM) networks to learn both local features and sequential behavior in Android applications. To improve the relevance and clarity of the input data, Mutual Information is applied for feature selection, while Bayesian Optimization is adopted to efficiently tune the model’s hyperparameters. The designed system is tested on standard Android malware datasets and achieves a detection accuracy of 99.3%, clearly outperforming classical approaches such as Support Vector Machines (SVMs), Random Forest, CNN, and Naive Bayes. It also delivers strong results on critical evaluation metrics such as F1-score and ROC-AUC. These findings confirm the framework’s efficiency, adaptability, and practical applicability, making it a compelling solution for Android malware detection in today’s evolving threat landscape. Full article
(This article belongs to the Special Issue Cyber Security Challenges in Complex Systems)
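Mutual-Information feature selection of the kind described above ranks each candidate feature (e.g., a requested permission or API call) by how much information it shares with the malware label, so uninformative features can be dropped before training. The sketch below computes MI for discrete variables from first principles; it is illustrative only, not the article's implementation.

```python
import numpy as np

def mutual_information(x, y):
    """Mutual information (in nats) between two discrete 1-D arrays."""
    n = len(x)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))  # joint probability
            if pxy > 0:
                px = np.mean(x == xv)             # marginals
                py = np.mean(y == yv)
                mi += pxy * np.log(pxy / (px * py))
    return mi
```

Features whose MI with the label is near zero carry essentially no predictive signal on their own and are natural candidates for removal in a pipeline like the one described.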
