Search Results (7,296)

Search Parameters:
Keywords = art evaluation

22 pages, 2988 KiB  
Article
Enhanced Cuckoo Search Optimization with Opposition-Based Learning for the Optimal Placement of Sensor Nodes and Enhanced Network Coverage in Wireless Sensor Networks
by Mandli Rami Reddy, M. L. Ravi Chandra and Ravilla Dilli
Appl. Sci. 2025, 15(15), 8575; https://doi.org/10.3390/app15158575 (registering DOI) - 1 Aug 2025
Abstract
Network connectivity and area coverage are the most important aspects in the applications of wireless sensor networks (WSNs). The resource and energy constraints of sensor nodes, operational conditions, and network size pose challenges to the optimal coverage of targets in the region of interest (ROI). The main idea is to achieve maximum area coverage and connectivity with strategic deployment and the minimal number of sensor nodes. This work addresses the problem of network area coverage in randomly distributed WSNs and provides an efficient deployment strategy using an enhanced version of cuckoo search optimization (ECSO). The “sequential update evaluation” mechanism is used to mitigate the dependency among dimensions and provide highly accurate solutions, particularly during the local search phase. During the preference random walk phase of conventional CSO, particle swarm optimization (PSO) with adaptive inertia weights is defined to accelerate the local search capabilities. The “opposition-based learning (OBL)” strategy is applied to ensure high-quality initial solutions that help to enhance the balance between exploration and exploitation. By considering the opposite of current solutions to expand the search space, we achieve higher convergence speed and population diversity. The performance of ECSO-OBL is evaluated using eight benchmark functions, and the results of three cases are compared with the existing methods. The proposed method enhances network coverage with a non-uniform distribution of sensor nodes and attempts to cover the whole ROI with a minimal number of sensor nodes. In a WSN with a 100 m2 area, we achieved a maximum coverage rate of 98.45% and algorithm convergence in 143 iterations, and the execution time was limited to 2.85 s. The simulation results of various cases prove the higher efficiency of the ECSO-OBL method in terms of network coverage and connectivity in WSNs compared with existing state-of-the-art works. Full article
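
The opposition-based learning step this abstract describes (take the opposite of each candidate, lb + ub - x, and keep the fitter of each pair) can be sketched in a few lines. The snippet below is a minimal illustration under assumed names (obl_initialize, a sphere objective); it is not the authors' ECSO-OBL code, which additionally couples cuckoo search with a PSO-accelerated local walk:

```python
# Minimal opposition-based learning (OBL) initialization sketch.
# Hypothetical helper, not the paper's implementation.
import numpy as np

def obl_initialize(objective, n_pop, dim, lb, ub, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(n_pop, dim))       # random candidate solutions
    opposite = lb + ub - pop                            # opposition-based counterparts
    pool = np.vstack([pop, opposite])                   # pool of 2 * n_pop candidates
    fitness = np.apply_along_axis(objective, 1, pool)
    return pool[np.argsort(fitness)[:n_pop]]            # keep the n_pop fittest to seed the search

# Example with a sphere benchmark (minimization)
seeds = obl_initialize(lambda x: float(np.sum(x ** 2)), n_pop=30, dim=10, lb=-5.0, ub=5.0)
```
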
25 pages, 2859 KiB  
Article
Feature-Based Normality Models for Anomaly Detection
by Hui Yie Teh, Kevin I-Kai Wang and Andreas W. Kempa-Liehr
Sensors 2025, 25(15), 4757; https://doi.org/10.3390/s25154757 (registering DOI) - 1 Aug 2025
Abstract
Detecting previously unseen anomalies in sensor data is a challenging problem for artificial intelligence when sensor-specific and deployment-specific characteristics of the time series need to be learned from a short calibration period. From the application point of view, this challenge becomes increasingly important because many applications are gravitating towards utilising low-cost sensors for Internet of Things deployments. While these sensors offer cost-effectiveness and customisation, their data quality does not match that of their high-end counterparts. To improve sensor data quality while addressing the challenges of anomaly detection in Internet of Things applications, we present an anomaly detection framework that learns a normality model of sensor data. The framework models the typical behaviour of individual sensors, which is crucial for the reliable detection of sensor data anomalies, especially when dealing with sensors observing significantly different signal characteristics. Our framework learns sensor-specific normality models from a small set of anomaly-free training data while employing an unsupervised feature engineering approach to select statistically significant features. The selected features are subsequently used to train a Local Outlier Factor anomaly detection model, which adaptively determines the boundary separating normal data from anomalies. The proposed anomaly detection framework is evaluated on three real-world public environmental monitoring datasets with heterogeneous sensor readings. The sensor-specific normality models are learned from extremely short calibration periods (as short as the first 3 days or 10% of the total recorded data) and outperform four other state-of-the-art anomaly detection approaches with respect to F1-score (between 5.4% and 9.3% better) and Matthews correlation coefficient (between 4.0% and 7.6% better). Full article
(This article belongs to the Special Issue Innovative Approaches to Cybersecurity for IoT and Wireless Networks)
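
As a rough illustration of the normality-model idea (not the authors' pipeline, which selects statistically significant features through unsupervised feature engineering), the sketch below fits scikit-learn's LocalOutlierFactor in novelty mode on simple window statistics from an anomaly-free calibration period and then scores new windows; the data and feature choices are placeholders:

```python
# Hedged sketch: per-window summary features + Local Outlier Factor normality model.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def window_features(signal, width=60):
    windows = np.lib.stride_tricks.sliding_window_view(signal, width)[::width]
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1),
                            windows.min(axis=1), windows.max(axis=1)])

rng = np.random.default_rng(1)
calibration = rng.normal(20.0, 0.5, 6000)                 # short anomaly-free calibration period
live = np.concatenate([rng.normal(20.0, 0.5, 3000),
                       rng.normal(35.0, 3.0, 300)])        # stream ending in a synthetic fault

model = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(window_features(calibration))
labels = model.predict(window_features(live))              # +1 = normal, -1 = anomaly
print("flagged windows:", int((labels == -1).sum()))
```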

22 pages, 24173 KiB  
Article
ScaleViM-PDD: Multi-Scale EfficientViM with Physical Decoupling and Dual-Domain Fusion for Remote Sensing Image Dehazing
by Hao Zhou, Yalun Wang, Wanting Peng, Xin Guan and Tao Tao
Remote Sens. 2025, 17(15), 2664; https://doi.org/10.3390/rs17152664 (registering DOI) - 1 Aug 2025
Abstract
Remote sensing images are often degraded by atmospheric haze, which not only reduces image quality but also complicates information extraction, particularly in high-level visual analysis tasks such as object detection and scene classification. State-space models (SSMs) have recently emerged as a powerful paradigm for vision tasks, showing great promise due to their computational efficiency and robust capacity to model global dependencies. However, most existing learning-based dehazing methods lack physical interpretability, leading to weak generalization. Furthermore, they typically rely on spatial features while neglecting crucial frequency domain information, resulting in incomplete feature representation. To address these challenges, we propose ScaleViM-PDD, a novel network that enhances an SSM backbone with two key innovations: a Multi-scale EfficientViM with Physical Decoupling (ScaleViM-P) module and a Dual-Domain Fusion (DD Fusion) module. The ScaleViM-P module synergistically integrates a Physical Decoupling block within a Multi-scale EfficientViM architecture. This design enables the network to mitigate haze interference in a physically grounded manner at each representational scale while simultaneously capturing global contextual information to adaptively handle complex haze distributions. To further address detail loss, the DD Fusion module replaces conventional skip connections by incorporating a novel Frequency Domain Module (FDM) alongside channel and position attention. This allows for a more effective fusion of spatial and frequency features, significantly improving the recovery of fine-grained details, including color and texture information. Extensive experiments on nine publicly available remote sensing datasets demonstrate that ScaleViM-PDD consistently surpasses state-of-the-art baselines in both qualitative and quantitative evaluations, highlighting its strong generalization ability. Full article
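
The dual-domain part of this design (fusing spatial features with frequency-domain features) can be sketched generically. The toy module below uses an FFT branch and 1x1 convolutions and omits the channel/position attention and the SSM backbone, so it is an assumption-laden illustration rather than ScaleViM-PDD itself:

```python
# Generic spatial + frequency-domain fusion block (illustrative only).
import torch
import torch.nn as nn

class DualDomainFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)
        self.freq = nn.Conv2d(2 * channels, 2 * channels, 1)     # mixes real/imag parts
        self.merge = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        spec = torch.fft.rfft2(x, norm="ortho")                  # to the frequency domain
        f = self.freq(torch.cat([spec.real, spec.imag], dim=1))
        real, imag = torch.chunk(f, 2, dim=1)
        freq_feat = torch.fft.irfft2(torch.complex(real, imag),
                                     s=x.shape[-2:], norm="ortho")
        return self.merge(torch.cat([self.spatial(x), freq_feat], dim=1))

out = DualDomainFusion(16)(torch.randn(1, 16, 64, 64))           # -> (1, 16, 64, 64)
```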

24 pages, 23817 KiB  
Article
Dual-Path Adversarial Denoising Network Based on UNet
by Jinchi Yu, Yu Zhou, Mingchen Sun and Dadong Wang
Sensors 2025, 25(15), 4751; https://doi.org/10.3390/s25154751 (registering DOI) - 1 Aug 2025
Abstract
Digital image quality is crucial for reliable analysis in applications such as medical imaging, satellite remote sensing, and video surveillance. However, traditional denoising methods struggle to balance noise removal with detail preservation and lack adaptability to various types of noise. We propose a novel three-module architecture for image denoising, comprising a generator, a dual-path-UNet-based denoiser, and a discriminator. The generator creates synthetic noise patterns to augment training data, while the dual-path-UNet denoiser uses multiple receptive field modules to preserve fine details and dense feature fusion to maintain global structural integrity. The discriminator provides adversarial feedback to enhance denoising performance. This dual-path adversarial training mechanism addresses the limitations of traditional methods by simultaneously capturing both local details and global structures. Experiments on the SIDD, DND, and PolyU datasets demonstrate superior performance. We compare our architecture with the latest state-of-the-art GAN variants through comprehensive qualitative and quantitative evaluations. These results confirm the effectiveness of noise removal with minimal loss of critical image details. The proposed architecture enhances image denoising capabilities in complex noise scenarios, providing a robust solution for applications that require high image fidelity. By enhancing adaptability to various types of noise while maintaining structural integrity, this method provides a versatile tool for image processing tasks that require preserving detail. Full article
(This article belongs to the Section Sensing and Imaging)
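
A schematic single training step for the generator–denoiser–discriminator loop described above might look like the following; every module here is a stand-in stub (the paper's dual-path UNet, learned noise generator, and multi-receptive-field blocks are not reproduced):

```python
# Stand-in adversarial denoising step: reconstruction loss + adversarial feedback.
import torch
import torch.nn as nn
import torch.nn.functional as F

denoiser = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 3, 3, padding=1))              # stub for the dual-path UNet
discriminator = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
opt_den = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

clean = torch.rand(4, 3, 64, 64)
noisy = clean + 0.1 * torch.randn_like(clean)                          # generator stub: Gaussian noise
denoised = denoiser(noisy)

# Discriminator: distinguish clean images from denoised outputs
loss_disc = F.binary_cross_entropy_with_logits(discriminator(clean), torch.ones(4, 1)) + \
            F.binary_cross_entropy_with_logits(discriminator(denoised.detach()), torch.zeros(4, 1))
opt_disc.zero_grad()
loss_disc.backward()
opt_disc.step()

# Denoiser: reconstruction loss plus adversarial feedback from the discriminator
loss_den = F.l1_loss(denoised, clean) + \
           0.01 * F.binary_cross_entropy_with_logits(discriminator(denoised), torch.ones(4, 1))
opt_den.zero_grad()
loss_den.backward()
opt_den.step()
```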

25 pages, 1206 KiB  
Article
Application of Protein Structure Encodings and Sequence Embeddings for Transporter Substrate Prediction
by Andreas Denger and Volkhard Helms
Molecules 2025, 30(15), 3226; https://doi.org/10.3390/molecules30153226 (registering DOI) - 1 Aug 2025
Abstract
Membrane transporters play a crucial role in any cell. Identifying the substrates they translocate across membranes is important for many fields of research, such as metabolomics, pharmacology, and biotechnology. In this study, we leverage recent advances in deep learning, such as amino acid sequence embeddings with protein language models (pLMs), highly accurate 3D structure predictions with AlphaFold 2, and structure-encoding 3Di sequences from FoldSeek, for predicting substrates of membrane transporters. We test new deep learning features derived from both sequence and structure, and compare them to the previously best-performing protein encodings, which were made up of amino acid k-mer frequencies and evolutionary information from PSSMs. Furthermore, we compare the performance of these features either using a previously developed SVM model, or with a regularized feedforward neural network (FNN). When evaluating these models on sugar and amino acid carriers in A. thaliana, as well as on three types of ion channels in human, we found that both the DL-based features and the FNN model led to a better and more consistent classification performance compared to previous methods. Direct encodings of 3D structures with Foldseek, as well as structural embeddings with ProstT5, matched the performance of state-of-the-art amino acid sequence embeddings calculated with the ProtT5-XL model when used as input for the FNN classifier. Full article
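
The classification stage (embedding vector in, substrate class out) is simple to illustrate. The sketch below assumes per-protein embeddings are already computed (e.g., mean-pooled ProtT5-XL vectors) and replaces them with random placeholders; the regularized feedforward network is a generic scikit-learn MLP, not the authors' architecture:

```python
# Placeholder embeddings -> small L2-regularized feedforward classifier.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))        # stand-in for 1024-d protein language model embeddings
y = rng.integers(0, 2, size=200)        # substrate class labels (e.g., sugar vs. amino acid carrier)

fnn = MLPClassifier(hidden_layer_sizes=(128,), alpha=1e-3,   # alpha = L2 regularization strength
                    max_iter=500, random_state=0)
print(cross_val_score(fnn, X, y, cv=5, scoring="f1").mean())
```
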

26 pages, 8736 KiB  
Article
Uncertainty-Aware Fault Diagnosis of Rotating Compressors Using Dual-Graph Attention Networks
by Seungjoo Lee, YoungSeok Kim, Hyun-Jun Choi and Bongjun Ji
Machines 2025, 13(8), 673; https://doi.org/10.3390/machines13080673 (registering DOI) - 1 Aug 2025
Abstract
Rotating compressors are foundational in various industrial processes, particularly in the oil-and-gas sector, where reliable fault detection is crucial for maintaining operational continuity. While Graph Attention Network (GAT) frameworks are widely available, this study advances the state of the art by introducing a Bayesian GAT method specifically tailored for vibration-based compressor fault diagnosis. The approach integrates domain-specific digital-twin simulations built with Rotordynamic software (1.3.0), and constructs dual adjacency matrices to encode both physically informed and data-driven sensor relationships. Additionally, a hybrid forecasting-and-reconstruction objective enables the model to capture short-term deviations as well as long-term waveform fidelity. Monte Carlo dropout further decomposes prediction uncertainty into aleatoric and epistemic components, providing a more robust and interpretable model. Comparative evaluations against conventional Long Short-Term Memory (LSTM)-based autoencoder and forecasting methods demonstrate that the proposed framework achieves superior fault-detection performance across multiple fault types, including misalignment, bearing failure, and unbalance. Moreover, uncertainty analyses confirm that fault severity correlates with increasing levels of both aleatoric and epistemic uncertainty, reflecting heightened noise and reduced model confidence under more severe conditions. By enhancing GAT fundamentals with a domain-tailored dual-graph strategy, specialized Bayesian inference, and digital-twin data generation, this research delivers a comprehensive and interpretable solution for compressor fault diagnosis, paving the way for more reliable and risk-aware predictive maintenance in complex rotating machinery. Full article
(This article belongs to the Section Machines Testing and Maintenance)
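
The Monte Carlo dropout decomposition mentioned in the abstract is a general recipe that can be shown independently of the GAT: run several stochastic forward passes of a model that predicts a mean and a variance, average the predicted variances (aleatoric), and take the variance of the predicted means (epistemic). The tiny MLP below is only a stand-in for the paper's Bayesian dual-graph network:

```python
# Generic Monte Carlo dropout uncertainty decomposition (illustrative stand-in model).
import torch
import torch.nn as nn

class MeanVarHead(nn.Module):
    def __init__(self, d_in=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Dropout(0.2),
                                 nn.Linear(32, 2))            # outputs [mean, log-variance]

    def forward(self, x):
        mean, log_var = self.net(x).chunk(2, dim=-1)
        return mean, log_var.exp()

def mc_dropout_uncertainty(model, x, passes=50):
    model.train()                                             # keep dropout active at inference time
    with torch.no_grad():
        means, variances = zip(*(model(x) for _ in range(passes)))
    means, variances = torch.stack(means), torch.stack(variances)
    aleatoric = variances.mean(dim=0)                         # noise-driven uncertainty
    epistemic = means.var(dim=0)                              # model-driven uncertainty
    return means.mean(dim=0), aleatoric, epistemic

pred, alea, epis = mc_dropout_uncertainty(MeanVarHead(), torch.randn(16, 8))
```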

40 pages, 1638 KiB  
Review
Cardiac Tissue Bioprinting: Integrating Structure and Functions Through Biomimetic Design, Bioinks, and Stimulation
by Silvia Marino, Reem Alheijailan, Rita Alonaizan, Stefano Gabetti, Diana Massai and Maurizio Pesce
Gels 2025, 11(8), 593; https://doi.org/10.3390/gels11080593 (registering DOI) - 31 Jul 2025
Abstract
Pathologies of the heart (e.g., ischemic disease, valve fibrosis and calcification, progressive myocardial fibrosis, heart failure, and arrhythmogenic disorders) stem from the irreversible deterioration of cardiac tissues, leading to severe clinical consequences. The limited regenerative capacity of the adult myocardium and the architectural complexity of the heart present major challenges for tissue engineering. However, recent advances in biomaterials and biofabrication techniques have opened new avenues for recreating functional cardiac tissues. Particularly relevant in this context is the integration of biomimetic design principles, such as structural anisotropy, mechanical and electrical responsiveness, and tissue-specific composition, into 3D bioprinting platforms. This review aims to provide a comprehensive overview of current approaches in cardiac bioprinting, with a focus on how structural and functional biomimicry can be achieved using advanced hydrogels, bioprinting techniques, and post-fabrication stimulation. By critically evaluating materials, methods, and applications such as patches, vasculature, valves, and chamber models, we define the state of the art and highlight opportunities for developing next-generation bioengineered cardiac constructs. Full article
(This article belongs to the Special Issue Hydrogel for Sustained Delivery of Therapeutic Agents (3rd Edition))

24 pages, 3598 KiB  
Article
State of the Art on Empirical and Numerical Methods for Cave Stability Analysis: Application in Al-Badia Lava Tube, Harrat Al-Shaam, Jordan
by Ronald Herrera, Daniel Garcés, Abdelmadjid Benrabah, Ahmad Al-Malabeh, Rafael Jordá-Bordehore and Luis Jordá-Bordehore
Appl. Mech. 2025, 6(3), 56; https://doi.org/10.3390/applmech6030056 (registering DOI) - 31 Jul 2025
Abstract
Empirical and numerical methodologies for the geomechanical assessment of underground excavations have evolved in recent years to adapt to the geotechnical and structural conditions of natural caves, enabling stability evaluation and ensuring safe conditions for speleological exploration. This study analyzes the evolution of the state of the art of these techniques worldwide, assessing their reliability and application context, and identifying the most suitable methodologies for determining the stability of the Al-Badia lava tube. The research was conducted through bibliographic analysis and rock mass characterization using empirical geomechanical classifications. Subsequently, the numerical boundary element method (BEM) was applied to compare the obtained results and model the stress–strain behavior of the cavity. The results allowed the classification of the Al-Badia lava tube into stable, transition, and unstable zones, using empirical support charts and determining the safety factors of the surrounding rock mass. The case study highlights that the empirical methods are rather conservative, while the numerical results align better with the observed conditions. Full article

13 pages, 1879 KiB  
Article
Dynamic Graph Convolutional Network with Dilated Convolution for Epilepsy Seizure Detection
by Xiaoxiao Zhang, Chenyun Dai and Yao Guo
Bioengineering 2025, 12(8), 832; https://doi.org/10.3390/bioengineering12080832 (registering DOI) - 31 Jul 2025
Abstract
The electroencephalogram (EEG), widely used for measuring the brain’s electrophysiological activity, has been extensively applied in the automatic detection of epileptic seizures. However, several challenges remain unaddressed in prior studies on automated seizure detection: (1) Methods based on CNN and LSTM assume that EEG signals follow a Euclidean structure; (2) Algorithms leveraging graph convolutional networks rely on adjacency matrices constructed with fixed edge weights or predefined connection rules. To address these limitations, we propose a novel algorithm: Dynamic Graph Convolutional Network with Dilated Convolution (DGDCN). By leveraging a spatiotemporal attention mechanism, the proposed model dynamically constructs a task-specific adjacency matrix, which guides the graph convolutional network (GCN) in capturing localized spatial and temporal dependencies among adjacent nodes. Furthermore, a dilated convolutional module is incorporated to expand the receptive field, thereby enabling the model to capture long-range temporal dependencies more effectively. The proposed seizure detection system is evaluated on the TUSZ dataset, achieving AUC values of 88.7% and 90.4% on 12-s and 60-s segments, respectively, demonstrating competitive performance compared to current state-of-the-art methods. Full article
(This article belongs to the Section Biosignal Processing)
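
The two ingredients named in the abstract, a dynamically constructed adjacency matrix and a dilated temporal convolution, can be condensed into one toy block. Channel count, feature size, and the attention form below are assumptions for illustration, not the DGDCN implementation:

```python
# Toy block: attention-derived dynamic adjacency -> graph convolution -> dilated temporal conv.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGraphDilatedBlock(nn.Module):
    def __init__(self, channels=19, d_feat=64, dilation=4):
        super().__init__()
        self.query = nn.Linear(d_feat, d_feat)
        self.key = nn.Linear(d_feat, d_feat)
        self.gcn = nn.Linear(d_feat, d_feat)
        self.temporal = nn.Conv1d(channels, channels, kernel_size=3,
                                  dilation=dilation, padding=dilation)

    def forward(self, x):                         # x: (batch, EEG channels, features)
        scores = self.query(x) @ self.key(x).transpose(1, 2) / x.shape[-1] ** 0.5
        adj = F.softmax(scores, dim=-1)            # task-specific dynamic adjacency matrix
        x = F.relu(self.gcn(adj @ x))              # graph convolution over channels
        return self.temporal(x)                    # dilated conv widens the receptive field

out = DynamicGraphDilatedBlock()(torch.randn(2, 19, 64))   # -> (2, 19, 64)
```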

33 pages, 14330 KiB  
Article
Noisy Ultrasound Kidney Image Classifications Using Deep Learning Ensembles and Grad-CAM Analysis
by Walid Obaid, Abir Hussain, Tamer Rabie and Wathiq Mansoor
AI 2025, 6(8), 172; https://doi.org/10.3390/ai6080172 - 31 Jul 2025
Abstract
Objectives: This study introduces an automated classification system for noisy kidney ultrasound images using an ensemble of deep neural networks (DNNs) with transfer learning. Methods: The method was tested using a dataset with two categories: normal kidney images and kidney images with stones. The dataset contains 1821 normal kidney images and 2592 kidney images with stones. The noisy images involve various types of noise, including salt-and-pepper noise, speckle noise, Poisson noise, and Gaussian noise. The ensemble-based method is benchmarked against state-of-the-art techniques and evaluated on ultrasound images with varying quality and noise levels. Results: Our proposed method demonstrated a maximum classification accuracy of 99.43% on high-quality images (the original dataset images) and 99.21% on the dataset images with added noise. Conclusions: The experimental results confirm that the ensemble of DNNs accurately classifies most images, achieving a high classification performance compared to conventional and individual DNN-based methods. Additionally, our method outperforms the best-performing benchmark method by more than 1% in accuracy. Furthermore, our analysis using Gradient-weighted Class Activation Mapping indicated that our proposed deep learning model is capable of prediction using clinically relevant features. Full article
(This article belongs to the Section Medical & Healthcare AI)
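
For readers who want to reproduce the noise conditions named in the abstract, scikit-image's random_noise covers all four types; the call below is a generic augmentation sketch on a placeholder image, not the study's exact noise parameters:

```python
# Add the four noise types named in the abstract to a placeholder image.
import numpy as np
from skimage.util import random_noise

image = np.random.rand(256, 256)        # placeholder for a grayscale ultrasound image in [0, 1]
noisy_variants = {
    "salt_and_pepper": random_noise(image, mode="s&p", amount=0.05),
    "speckle": random_noise(image, mode="speckle", var=0.01),
    "poisson": random_noise(image, mode="poisson"),
    "gaussian": random_noise(image, mode="gaussian", var=0.01),
}
```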

24 pages, 4039 KiB  
Review
A Mathematical Survey of Image Deep Edge Detection Algorithms: From Convolution to Attention
by Gang Hu
Mathematics 2025, 13(15), 2464; https://doi.org/10.3390/math13152464 - 31 Jul 2025
Abstract
Edge detection, a cornerstone of computer vision, identifies intensity discontinuities in images, enabling applications from object recognition to autonomous navigation. This survey presents a mathematically grounded analysis of edge detection’s evolution, spanning traditional gradient-based methods, convolutional neural networks (CNNs), attention-driven architectures, transformer-backbone models, and generative paradigms. Beginning with Sobel and Canny’s kernel-based approaches, we trace the shift to data-driven CNNs like Holistically Nested Edge Detection (HED) and Bidirectional Cascade Network (BDCN), which leverage multi-scale supervision and achieve ODS (Optimal Dataset Scale) scores of 0.788 and 0.806, respectively. Attention mechanisms, as in EdgeNAT (ODS 0.860) and RankED (ODS 0.824), enhance global context, while generative models like GED (ODS 0.870) achieve state-of-the-art precision via diffusion and GAN frameworks. Evaluated on BSDS500 and NYUDv2, these methods highlight a trajectory toward accuracy and robustness, yet challenges in efficiency, generalization, and multi-modal integration persist. By synthesizing mathematical formulations, performance metrics, and future directions, this survey equips researchers with a comprehensive understanding of edge detection’s past, present, and potential, bridging theoretical insights with practical advancements. Full article
(This article belongs to the Special Issue Artificial Intelligence and Algorithms with Their Applications)
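
As a baseline reference for the kernel-based starting point the survey covers, a Sobel edge map needs only a pair of convolutions and a threshold (Canny then adds smoothing, non-maximum suppression, and hysteresis on top). The snippet is a generic refresher with an arbitrary threshold, not taken from the survey:

```python
# Sobel gradient magnitude with a simple threshold (classical, kernel-based edge detection).
import numpy as np
from scipy.ndimage import convolve

def sobel_edges(gray, thresh=0.25):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = convolve(gray, kx)              # horizontal gradient
    gy = convolve(gray, kx.T)            # vertical gradient
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8) > thresh

edges = sobel_edges(np.random.rand(128, 128))    # boolean edge map
```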

18 pages, 9470 KiB  
Article
DCS-ST for Classification of Breast Cancer Histopathology Images with Limited Annotations
by Suxing Liu and Byungwon Min
Appl. Sci. 2025, 15(15), 8457; https://doi.org/10.3390/app15158457 - 30 Jul 2025
Abstract
Accurate classification of breast cancer histopathology images is critical for early diagnosis and treatment planning. Yet, conventional deep learning models face significant challenges under limited annotation scenarios due to their reliance on large-scale labeled datasets. To address this, we propose Dynamic Cross-Scale Swin Transformer (DCS-ST), a robust and efficient framework tailored for histopathology image classification with scarce annotations. Specifically, DCS-ST integrates a dynamic window predictor and a cross-scale attention module to enhance multi-scale feature representation and interaction while employing a semi-supervised learning strategy based on pseudo-labeling and denoising to exploit unlabeled data effectively. This design enables the model to adaptively attend to diverse tissue structures and pathological patterns while maintaining classification stability. Extensive experiments on three public datasets—BreakHis, Mini-DDSM, and ICIAR2018—demonstrate that DCS-ST consistently outperforms existing state-of-the-art methods across various magnifications and classification tasks, achieving superior quantitative results and reliable visual classification. Furthermore, empirical evaluations validate its strong generalization capability and practical potential for real-world weakly-supervised medical image analysis. Full article
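
The semi-supervised ingredient (pseudo-labeling with a confidence filter acting as denoising) is generic enough to sketch without the transformer; the round below uses a plain logistic regression on placeholder feature vectors with an assumed confidence cutoff, not the DCS-ST model:

```python
# One pseudo-labeling round: train on labels, keep confident predictions, retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_round(X_lab, y_lab, X_unlab, confidence=0.95):
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unlab)
    keep = proba.max(axis=1) >= confidence            # denoising: keep only confident pseudo-labels
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    return LogisticRegression(max_iter=1000).fit(X_aug, y_aug), int(keep.sum())

rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(40, 50)), rng.integers(0, 2, 40)   # scarce annotations
X_unlab = rng.normal(size=(400, 50))                                # unlabeled pool
model, n_pseudo = pseudo_label_round(X_lab, y_lab, X_unlab)
```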

34 pages, 8930 KiB  
Article
Network-Aware Gaussian Mixture Models for Multi-Objective SD-WAN Controller Placement
by Abdulrahman M. Abdulghani, Azizol Abdullah, Amir Rizaan Rahiman, Nor Asilah Wati Abdul Hamid and Bilal Omar Akram
Electronics 2025, 14(15), 3044; https://doi.org/10.3390/electronics14153044 (registering DOI) - 30 Jul 2025
Abstract
Software-Defined Wide Area Networks (SD-WANs) require optimal controller placement to minimize latency, balance loads, and ensure reliability across geographically distributed infrastructures. This paper introduces NA-GMM (Network-Aware Gaussian Mixture Model), a novel multi-objective optimization framework addressing key limitations in current controller placement approaches. Three principal contributions distinguish NA-GMM: (1) a hybrid distance metric that integrates geographic distance, network latency, topological cost, and link reliability through adaptive weighting, effectively capturing multi-dimensional network characteristics; (2) a modified expectation–maximization algorithm incorporating node importance-weighting to optimize controller placements for critical network elements; and (3) a robust clustering mechanism that transitions from probabilistic (soft) assignments to definitive (hard) cluster selections, ensuring optimal placement convergence. Empirical evaluations on real-world topologies demonstrate NA-GMM’s superiority, achieving up to 22.7% lower average control latency compared to benchmark approaches, maintaining near-optimal load distribution with node distribution ratios, and delivering a 12.9% throughput improvement. Furthermore, NA-GMM achieved excellent computational efficiency, executing 68.9% faster and consuming 41.5% less memory than state-of-the-art methods, while achieving exceptional load balancing. These findings confirm NA-GMM’s practical viability for large-scale SD-WAN deployments where real-time multi-objective optimization is essential. Full article
(This article belongs to the Special Issue Feature Papers in Artificial Intelligence)
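
Contribution (1), the hybrid distance, reduces to a weighted blend of four per-link quantities. The sketch below shows the shape of such a metric with made-up weights and normalizers; the adaptive weighting and the GMM clustering that NA-GMM builds on top are not shown:

```python
# Hypothetical hybrid node-to-controller distance: geography + latency + topology + reliability.
import numpy as np

def hybrid_distance(node, site, w=(0.4, 0.3, 0.2, 0.1)):
    geo = np.hypot(*(np.array(node["xy"]) - np.array(site["xy"])))   # planar distance, km
    latency = node["rtt_ms"][site["id"]]
    hops = node["hop_count"][site["id"]]
    unreliability = 1.0 - node["link_avail"][site["id"]]
    terms = np.array([geo / 1000.0, latency / 100.0, hops / 10.0, unreliability])
    return float(np.dot(w, terms))                                    # lower is better

node = {"xy": (120.0, 80.0), "rtt_ms": {"c1": 18.0}, "hop_count": {"c1": 4},
        "link_avail": {"c1": 0.995}}
site = {"id": "c1", "xy": (300.0, 40.0)}
print(hybrid_distance(node, site))
```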

20 pages, 1330 KiB  
Article
A Comprehensive Approach to Rustc Optimization Vulnerability Detection in Industrial Control Systems
by Kaifeng Xie, Jinjing Wan, Lifeng Chen and Yi Wang
Mathematics 2025, 13(15), 2459; https://doi.org/10.3390/math13152459 - 30 Jul 2025
Abstract
Compiler optimization is a critical component for improving program performance. However, the Rustc optimization process may introduce vulnerabilities due to algorithmic flaws or issues arising from component interactions. Existing testing methods face several challenges, including high randomness in test cases, inadequate targeting of vulnerability-prone regions, and low-quality initial fuzzing seeds. This paper proposes a test case generation method based on large language models (LLMs), which utilizes prompt templates and optimization algorithms to generate code relevant to specific optimization passes, especially for real-time control logic and safety-critical modules unique to the industrial control field. A vulnerability screening approach based on static analysis and rule matching is designed to locate potential risk points in the optimization regions of both the MIR and LLVM IR layers, as well as in unsafe code sections. Furthermore, the targeted fuzzing strategy is enhanced by designing seed queues and selection algorithms that consider the correlation between optimization areas. The implemented system, RustOptFuzz, has been evaluated on both custom datasets and real-world programs. Compared with state-of-the-art tools, RustOptFuzz improves vulnerability discovery capabilities by 16%–50% and significantly reduces vulnerability reproduction time, thereby enhancing the overall efficiency of detecting optimization-related vulnerabilities in Rustc and providing key technical support for the reliability of industrial control systems. Full article
(This article belongs to the Special Issue Research and Application of Network and System Security)

30 pages, 7223 KiB  
Article
Smart Wildlife Monitoring: Real-Time Hybrid Tracking Using Kalman Filter and Local Binary Similarity Matching on Edge Network
by Md. Auhidur Rahman, Stefano Giordano and Michele Pagano
Computers 2025, 14(8), 307; https://doi.org/10.3390/computers14080307 - 30 Jul 2025
Abstract
Real-time wildlife monitoring on edge devices poses significant challenges due to limited power, constrained bandwidth, and unreliable connectivity, especially in remote natural habitats. Conventional object detection systems often transmit redundant data of the same animals detected across multiple consecutive frames as part of a single event, resulting in increased power consumption and inefficient bandwidth usage. Furthermore, maintaining consistent animal identities in the wild is difficult due to occlusions, variable lighting, and complex environments. In this study, we propose a lightweight hybrid tracking framework built on the YOLOv8m deep neural network, combining motion-based Kalman filtering with Local Binary Pattern (LBP) similarity for appearance-based re-identification using texture and color features. To handle ambiguous cases, we further incorporate Hue-Saturation-Value (HSV) color space similarity. This approach enhances identity consistency across frames while reducing redundant transmissions. The framework is optimized for real-time deployment on edge platforms such as NVIDIA Jetson Orin Nano and Raspberry Pi 5. We evaluate our method against state-of-the-art trackers using event-based metrics such as MOTA, HOTA, and IDF1, with a focus on occlusion handling, trajectory analysis, and counting of detected animals during both day and night. Our approach significantly enhances tracking robustness, reduces ID switches, and provides more accurate detection and counting compared to existing methods. When transmitting time-series data and detected frames, it achieves up to 99.87% bandwidth savings and 99.67% power reduction, making it highly suitable for edge-based wildlife monitoring in resource-constrained environments. Full article
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
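
One way to picture the appearance-matching half of the tracker is sketched below: compare two detection crops by the intersection of their Local Binary Pattern histograms and fall back to an HSV hue histogram when the texture score is ambiguous. Thresholds, helper names, and the fallback rule are illustrative assumptions, not the paper's tuned pipeline (which also runs a Kalman filter for motion):

```python
# LBP texture similarity with an HSV hue fallback for re-identification (illustrative).
import numpy as np
from skimage.color import rgb2gray, rgb2hsv
from skimage.feature import local_binary_pattern

def lbp_hist(crop_rgb, points=8, radius=1):
    lbp = local_binary_pattern(rgb2gray(crop_rgb), points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2))
    return hist / hist.sum()

def hue_hist(crop_rgb, bins=32):
    hist, _ = np.histogram(rgb2hsv(crop_rgb)[..., 0], bins=bins, range=(0, 1))
    return hist / hist.sum()

def same_identity(crop_a, crop_b, lbp_thresh=0.85, hue_thresh=0.80):
    texture = np.minimum(lbp_hist(crop_a), lbp_hist(crop_b)).sum()   # histogram intersection
    if texture >= lbp_thresh:
        return True
    if texture < 0.6 * lbp_thresh:
        return False
    return np.minimum(hue_hist(crop_a), hue_hist(crop_b)).sum() >= hue_thresh  # HSV fallback

crop_a, crop_b = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)
print(same_identity(crop_a, crop_b))
```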
