Search Results (589)

Search Parameters:
Keywords = convolutional codes

28 pages, 3628 KB  
Article
From Questionnaires to Heatmaps: Visual Classification and Interpretation of Quantitative Response Data Using Convolutional Neural Networks
by Michael Woelk, Modelice Nam, Björn Häckel and Matthias Spörrle
Appl. Sci. 2025, 15(19), 10642; https://doi.org/10.3390/app151910642 - 1 Oct 2025
Viewed by 224
Abstract
Structured quantitative data, such as survey responses in human resource management research, are often analysed using machine learning methods, including logistic regression. Although these methods provide accurate statistical predictions, their results are frequently abstract and difficult for non-specialists to comprehend. This limits their usefulness in practice, particularly in contexts where eXplainable Artificial Intelligence (XAI) is essential. This study proposes a domain-independent approach for the autonomous classification and interpretation of quantitative data using visual processing. This method transforms individual responses based on rating scales into visual representations, which are subsequently processed by Convolutional Neural Networks (CNNs). In combination with Class Activation Maps (CAMs), image-based CNN models enable not only accurate and reproducible classification but also visual interpretability of the underlying decision-making process. Our evaluation found that CNN models with bar chart coding achieved an accuracy of between 93.05% and 93.16%, comparable to the 93.19% achieved by logistic regression. Compared with conventional numerical approaches, exemplified by logistic regression in this study, the approach achieves comparable classification accuracy while providing additional comprehensibility and transparency through graphical representations. Robustness is demonstrated by consistent results across different visualisations generated from the same underlying data. By converting abstract numerical information into visual explanations, this approach addresses a core challenge: bridging the gap between model performance and human understanding. Its transparency, domain-agnostic design, and straightforward interpretability make it particularly suitable for XAI-driven applications across diverse disciplines that use quantitative response data. Full article
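The central encoding step — rasterizing one respondent's rating-scale answers into a bar-chart image that a CNN can classify — can be sketched as follows. This is a minimal illustration under our own assumptions (function name, 5-point scale, binary rasterization), not the authors' code:

```python
import numpy as np

def responses_to_barchart(ratings, max_rating=5):
    # Hypothetical sketch: one column per questionnaire item, one row per
    # rating level; each bar is filled from the bottom up to its rating.
    ratings = np.asarray(ratings, dtype=int)
    img = np.zeros((max_rating, len(ratings)), dtype=np.float32)
    for col, r in enumerate(ratings):
        img[max_rating - r:, col] = 1.0
    return img

# A respondent answering [5, 1, 3, 4] on a 5-point scale becomes a 5x4 image.
img = responses_to_barchart([5, 1, 3, 4])
```

Stacked images of this kind can then be fed to a standard image classifier, and a CAM over the same pixels points back to the questionnaire items that drove the decision.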

10 pages, 532 KB  
Article
3D Non-Uniform Fast Fourier Transform Program Optimization
by Kai Nie, Haoran Li, Lin Han, Yapeng Li and Jinlong Xu
Appl. Sci. 2025, 15(19), 10563; https://doi.org/10.3390/app151910563 - 30 Sep 2025
Viewed by 204
Abstract
MRI (magnetic resonance imaging) maps the internal structure of organisms and is an important application scenario for the Non-Uniform Fast Fourier Transform (NUFFT), which can help doctors quickly locate a patient's lesions. In practice, however, NUFFT suffers from a large computational load and is difficult to parallelize. On a multi-core shared-memory architecture, we parallelize the NUFFT convolution interpolation using block preprocessing and coloured block scheduling, and then use a static linked list to reduce the large memory requirement that the parallel version incurs. Manual vectorization using short-vector instructions further accelerates the computation. Through this series of optimizations, speedups of 273.8×, 291.8× and 251.7× were obtained on the Random, Radial, and Spiral datasets, respectively. Full article
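The hot loop being parallelized — convolution interpolation, i.e., spreading each non-uniform sample onto a neighbourhood of uniform grid points — can be sketched serially as follows. This is a simplified 1D Gaussian-kernel version under our own assumptions, not the paper's code; the block preprocessing and coloured scheduling it describes target exactly this loop:

```python
import numpy as np

def grid_samples(x, c, n_grid, half_width=3):
    # Spread non-uniform samples c at positions x in [0, 1) onto a periodic
    # uniform grid with a truncated Gaussian kernel (the NUFFT "gridding" step).
    grid = np.zeros(n_grid, dtype=complex)
    two_tau = 2.0 * (half_width / 3.0) ** 2
    for xi, ci in zip(x, c):
        center = int(round(xi * n_grid))
        for k in range(center - half_width, center + half_width + 1):
            w = np.exp(-((xi * n_grid - k) ** 2) / two_tau)
            grid[k % n_grid] += w * ci   # wrap-around keeps the grid periodic
    return grid
```

Naively parallelizing over samples races on `grid[...]`, since nearby samples update overlapping grid points; coloured block scheduling exists precisely to hand threads disjoint grid regions.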

52 pages, 3501 KB  
Review
The Role of Artificial Intelligence and Machine Learning in Advancing Civil Engineering: A Comprehensive Review
by Ali Bahadori-Jahromi, Shah Room, Chia Paknahad, Marwah Altekreeti, Zeeshan Tariq and Hooman Tahayori
Appl. Sci. 2025, 15(19), 10499; https://doi.org/10.3390/app151910499 - 28 Sep 2025
Viewed by 738
Abstract
The integration of artificial intelligence (AI) and machine learning (ML) has revolutionised civil engineering, enhancing predictive accuracy, decision-making, and sustainability across domains such as structural health monitoring, geotechnical analysis, transportation systems, water management, and sustainable construction. This paper presents a detailed review of peer-reviewed publications from the past decade, employing bibliometric mapping and critical evaluation to analyse methodological advances, practical applications, and limitations. A novel taxonomy is introduced, classifying AI/ML approaches by civil engineering domain, learning paradigm, and adoption maturity to guide future development. Key applications include pavement condition assessment, slope stability prediction, traffic flow forecasting, smart water management, and flood forecasting, leveraging techniques such as Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), Support Vector Machines (SVMs), and hybrid physics-informed neural networks (PINNs). The review highlights challenges, including limited high-quality datasets, absence of AI provisions in design codes, integration barriers with IoT-based infrastructure, and computational complexity. While explainable AI tools like SHAP and LIME improve interpretability, their practical feasibility in safety-critical contexts remains constrained. Ethical considerations, including bias in training datasets and regulatory compliance, are also addressed. Promising directions include federated learning for data privacy, transfer learning for data-scarce regions, digital twins, and adherence to FAIR data principles. This study underscores AI as a complementary tool, not a replacement, for traditional methods, fostering a data-driven, resilient, and sustainable built environment through interdisciplinary collaboration and transparent, explainable systems. Full article
(This article belongs to the Section Civil Engineering)

76 pages, 904 KB  
Review
Theoretical Bases of Methods of Counteraction to Modern Forms of Information Warfare
by Akhat Bakirov and Ibragim Suleimenov
Computers 2025, 14(10), 410; https://doi.org/10.3390/computers14100410 - 26 Sep 2025
Viewed by 1460
Abstract
This review is devoted to a comprehensive analysis of modern forms of information warfare in the context of digitalization and global interconnectedness. The work considers fundamental theoretical foundations—cognitive distortions, mass communication models, network theories and concepts of cultural code. The key tools of information influence are described in detail, including disinformation, the use of botnets, deepfakes, memetic strategies and manipulations in the media space. Particular attention is paid to methods of identifying and neutralizing information threats using artificial intelligence and digital signal processing, including partial digital convolutions, Fourier–Galois transforms, residue number systems and calculations in finite algebraic structures. The ethical and legal aspects of countering information attacks are analyzed, and geopolitical examples are given, demonstrating the peculiarities of applying various strategies. The review is based on a systematic analysis of 592 publications selected from the international databases Scopus, Web of Science and Google Scholar, covering research from fundamental works to modern publications of recent years (2015–2025). It is also based on regulatory legal acts, which ensures a high degree of relevance and representativeness. The results of the review can be used in the development of technologies for monitoring, detecting and filtering information attacks, as well as in the formation of national cybersecurity strategies. Full article
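One of the signal-processing primitives mentioned, the Fourier–Galois transform, replaces complex exponentials with roots of unity in a finite field, enabling exact (round-off-free) fast convolution. A minimal sketch over Z_17 — the modulus, length, and root are our own illustrative choices:

```python
def ntt(a, root, mod):
    # Fourier-Galois transform: a DFT over the integers modulo a prime,
    # with a primitive n-th root of unity in place of exp(-2*pi*i/n).
    n = len(a)
    return [sum(a[j] * pow(root, i * j, mod) for j in range(n)) % mod
            for i in range(n)]

MOD, N = 17, 4
ROOT = 4                          # 4 has multiplicative order 4 modulo 17
INV_ROOT = pow(ROOT, MOD - 2, MOD)
INV_N = pow(N, MOD - 2, MOD)

def intt(A):
    return [(x * INV_N) % MOD for x in ntt(A, INV_ROOT, MOD)]

def circular_convolve(a, b):
    # Exact length-4 circular convolution: multiply pointwise in the
    # transform domain, then invert -- no floating-point error at all.
    A, B = ntt(a, ROOT, MOD), ntt(b, ROOT, MOD)
    return intt([(x * y) % MOD for x, y in zip(A, B)])
```

Convolving with a unit impulse returns the input unchanged, which is a quick sanity check that the transform pair is consistent.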

20 pages, 4394 KB  
Article
Optimization of Multilayer Metal Bellow Hydroforming Process with Response Surface Method and Genetic Algorithm
by Jing Liu, Liang Li, Jian Liu and Lanyun Li
Metals 2025, 15(9), 1046; https://doi.org/10.3390/met15091046 - 19 Sep 2025
Viewed by 363
Abstract
In this paper, an optimization strategy for the hydroforming process of bellows is proposed, based on finite element analysis, design of experiments, response surface methodology, and genetic algorithms. A numerical model of the bellows hydroforming process is developed using the finite element simulation code ABAQUS and validated experimentally. A combination of experimental design, numerical simulations, and regression analysis is employed to establish the mathematical models relating the objectives to the design variables. An analysis of variance (ANOVA) is conducted to evaluate the significance of each individual factor on the response variable. The main and interaction effects of the process parameters on the outer diameter and convolution pitch are illustrated and discussed. Furthermore, the response surface methodology and a Pareto-based multi-objective genetic algorithm (MOGA) are applied to determine optimal solutions within the given optimization criteria. The optimized results show good agreement with the experimental data, demonstrating that the optimization methodology is reliable. Full article
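The pipeline — fit a response surface to simulation results, then search it with a genetic algorithm — can be illustrated on a toy single-objective problem. The quadratic "simulation", population sizes, and operators below are our assumptions; the paper couples ABAQUS runs to a Pareto-based MOGA:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(p):
    # Stand-in for one expensive finite element run (hypothetical objective).
    x, y = p
    return (x - 0.3) ** 2 + 2 * (y - 0.7) ** 2 + 0.1 * x * y

# 1. Design of experiments: sample the "simulation" at 30 design points.
X = rng.uniform(0.0, 1.0, size=(30, 2))
z = np.array([simulate(p) for p in X])

# 2. Response surface: full quadratic model fitted by least squares.
def features(P):
    x, y = P[:, 0], P[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

beta, *_ = np.linalg.lstsq(features(X), z, rcond=None)

def surrogate(P):
    return features(P) @ beta

# 3. Genetic algorithm searching the cheap surrogate, not the simulator.
pop = rng.uniform(0.0, 1.0, size=(40, 2))
for _ in range(60):
    elite = pop[np.argsort(surrogate(pop))[:20]]           # truncation selection
    pop = elite[rng.integers(0, 20, size=40)]              # clone parents
    pop = np.clip(pop + rng.normal(0.0, 0.05, pop.shape), 0.0, 1.0)  # mutate
best = pop[np.argmin(surrogate(pop))]
```

The point of the surrogate is that the GA can evaluate thousands of candidate parameter sets without re-running the simulator.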

25 pages, 1851 KB  
Article
Predicting Gene Expression Responses to Cold in Arabidopsis thaliana Using Natural Variation in DNA Sequence
by Margarita Takou, Emily S. Bellis and Jesse R. Lasky
Genes 2025, 16(9), 1108; https://doi.org/10.3390/genes16091108 - 19 Sep 2025
Viewed by 488
Abstract
Background/Objectives: The evolution of gene expression responses is a critical component of population adaptation to variable environments. Predicting how DNA sequence influences expression is challenging because the genotype-to-phenotype map is not well resolved for cis-regulatory elements, transcription factor binding, regulatory interactions, and epigenetic features, not to mention how these factors respond to the environment. Methods: We tested if flexible machine learning models could learn some of the underlying cis-regulatory genotype-to-phenotype map to predict expression response to a specific environment. We tested this approach using cold-responsive transcriptome profiles in five Arabidopsis thaliana natural accessions. Results: We first tested for evidence that cis regulation plays a role in environmental response, finding 14 and 15 motifs that were significantly enriched within the up- and downstream regions of cold-responsive differentially regulated genes (DEGs). We next applied convolutional neural networks (CNNs), which learn de novo cis-regulatory motifs in DNA sequences to predict expression response to cold. We found that CNNs predicted differential expression with moderate accuracy, with evidence that predictions were hindered by the biological complexity of regulation and the large potential regulatory code. Conclusions: Overall, approaches for predicting DEGs between specific environments based only on proximate DNA sequences require further development. It may be necessary to incorporate additional biological information into models to generate accurate predictions that will be useful to population biologists. Full article
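The standard input encoding for such sequence CNNs — one-hot DNA, so that first-layer convolutional filters act as learned motif detectors — looks like this (a generic sketch, not the authors' pipeline):

```python
import numpy as np

def one_hot_dna(seq):
    # Encode a regulatory sequence as a (length x 4) matrix over A,C,G,T;
    # ambiguous bases such as N stay all-zero.
    alphabet = "ACGT"
    arr = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in alphabet:
            arr[i, alphabet.index(base)] = 1.0
    return arr
```

A convolution of width k over this matrix scores every length-k window against a learned position weight matrix, which is how CNNs discover cis-regulatory motifs de novo.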
(This article belongs to the Section Population and Evolutionary Genetics and Genomics)

14 pages, 870 KB  
Article
VoteSim: Voting-Based Binary Code Similarity Detection for Vulnerability Identification in IoT Firmware
by Keda Sun, Shize Zhou, Yuwei Meng, Wei Ruan and Liang Chen
Appl. Sci. 2025, 15(18), 10093; https://doi.org/10.3390/app151810093 - 16 Sep 2025
Viewed by 367
Abstract
The widespread integration of third-party components (TPCs) in Internet of Things (IoT) firmware significantly increases the risk of software vulnerabilities, especially in resource-constrained devices deployed in sensitive environments. Binary Code Similarity Detection (BCSD) techniques, particularly those based on deep neural networks, have emerged as powerful tools for identifying vulnerable functions without access to source code. However, individual models, such as Graph Neural Networks (GNNs), Convolutional Neural Networks (CNNs), and Transformer-based methods, often exhibit limitations due to their differing focus on structural, spatial, or semantic features. To address this, we propose VoteSim, a novel ensemble framework that integrates multiple BCSD models using an inverse average rank voting mechanism. VoteSim combines the strengths of individual models while reducing the impact of model-specific false positives, leading to more stable and accurate vulnerability detection. We evaluate VoteSim on a large-scale real-world IoT firmware dataset comprising over 800,000 binary functions and 10 high-risk CVEs. Experimental results show that VoteSim consistently outperforms state-of-the-art BCSD models in both Recall@10 and Mean Reciprocal Rank (MRR), achieving improvements of up to 14.7% in recall. Our findings highlight the importance of model diversity and rank-aware aggregation for robust binary-level vulnerability detection in heterogeneous IoT firmware. Full article
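The inverse-rank aggregation idea can be sketched as follows. We interpret it as reciprocal-rank fusion; VoteSim's exact weighting may differ, and the function names below are illustrative:

```python
def vote_rankings(rankings):
    # Fuse candidate lists from several BCSD models: each model contributes
    # 1/rank for every candidate function, and candidates are re-ordered by
    # their summed score, damping any single model's false positives.
    scores = {}
    for ranked in rankings:                     # one ranked list per model
        for pos, func in enumerate(ranked, start=1):
            scores[func] = scores.get(func, 0.0) + 1.0 / pos
    return sorted(scores, key=scores.get, reverse=True)

fused = vote_rankings([
    ["memcpy", "ssl_read", "png_load"],         # e.g. GNN model
    ["ssl_read", "memcpy", "gz_open"],          # e.g. CNN model
    ["ssl_read", "png_load", "memcpy"],         # e.g. Transformer model
])
```

Here `ssl_read` is ranked first by two of three models and second by the third, so it tops the fused list even though one model preferred `memcpy`.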

19 pages, 2675 KB  
Article
Fast Intra-Coding Unit Partitioning for 3D-HEVC Depth Maps via Hierarchical Feature Fusion
by Fangmei Liu, He Zhang and Qiuwen Zhang
Electronics 2025, 14(18), 3646; https://doi.org/10.3390/electronics14183646 - 15 Sep 2025
Viewed by 398
Abstract
As a new generation 3D video coding standard, 3D-HEVC offers highly efficient compression. However, its recursive quadtree partitioning mechanism and frequent rate-distortion optimization (RDO) computations lead to a significant increase in coding complexity. Particularly, intra-frame coding in depth maps, which incorporates tools like depth modeling modes (DMMs), substantially prolongs the decision-making process for coding unit (CU) partitioning, becoming a critical bottleneck in compression encoding time. To address this issue, this paper proposes a fast CU partitioning framework based on hierarchical feature fusion convolutional neural networks (HFF-CNNs). It aims to significantly accelerate the overall encoding process while ensuring excellent encoding quality by optimizing depth map CU partitioning decisions. This framework synergistically captures CU’s global structure and local details through multi-scale feature extraction and channel attention mechanisms (SE module). It introduces the wavelet energy ratio designed for quantifying the texture complexity of depth map CU and the quantization parameter (QP) that reflects the encoding quality as external features, enhancing the dynamic perception ability of the model from different dimensions. Ultimately, it outputs depth-corresponding partitioning predictions through three fully connected layers, strictly adhering to HEVC’s quad-tree recursive segmentation mechanism. Experimental results demonstrate that, across eight standard test sequences, the proposed method achieves an average encoding time reduction of 48.43%, significantly lowering intra-frame encoding complexity with a BDBR increment of only 0.35%. The model exhibits outstanding lightweight characteristics with minimal inference time overhead. Compared with representative existing methods, it achieves a better balance between cross-resolution adaptability and computational efficiency, providing a feasible optimization path for real-time 3D-HEVC applications. Full article
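The wavelet energy ratio feature — the share of signal energy in the detail (high-frequency) band, used to quantify a depth-map CU's texture complexity — might be computed along these lines. This is our one-level horizontal Haar sketch; the paper's exact definition may differ:

```python
import numpy as np

def wavelet_energy_ratio(block):
    # One-level horizontal Haar split of a CU: average (low-pass) and
    # difference (high-pass) of adjacent column pairs. A flat depth block
    # puts all energy in the low band, giving a ratio of 0.
    b = block.astype(np.float64)
    lo = (b[:, ::2] + b[:, 1::2]) / 2.0
    hi = (b[:, ::2] - b[:, 1::2]) / 2.0
    total = (lo ** 2).sum() + (hi ** 2).sum()
    return (hi ** 2).sum() / total if total else 0.0
```

A CNN fed this scalar alongside the QP can learn, for example, that flat low-QP blocks rarely need further quadtree splitting.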

15 pages, 339 KB  
Article
Hybrid MambaVision and Transformer-Based Architecture for 3D Lane Detection
by Raul-Mihai Cap and Călin-Adrian Popa
Sensors 2025, 25(18), 5729; https://doi.org/10.3390/s25185729 - 14 Sep 2025
Viewed by 785
Abstract
Lane detection is an essential task in the field of computer vision and autonomous driving. This involves identifying and locating road markings on the road surface. This capability not only helps drivers keep the vehicle in the correct lane, but also provides critical data for advanced driver assistance systems and autonomous vehicles. Traditional lane detection models work mainly on the 2D image plane and achieve remarkable results. However, these models often assume a flat-world scenario, which does not correspond to real-world conditions, where roads have elevation variations and road markings may be curved. Our approach solves this challenge by focusing on 3D lane detection without relying on the inverse perspective mapping technique. Instead, we introduce a new framework using the MambaVision-S-1K backbone, which combines Mamba-based processing with Transformer capabilities to capture both local detail and global contexts from monocular images. This hybrid approach allows accurate modeling of lane geometry in three dimensions, even in the presence of elevation variations. By replacing the traditional convolutional neural network backbone with MambaVision, our proposed model significantly improves the capability of 3D lane detection systems. Our method achieved state-of-the-art performance on the ONCE-3DLanes dataset, thus demonstrating its superiority in accurately capturing lane curvature and elevation variations. These results highlight the potential of integrating advanced backbones based on Vision Transformers in the field of autonomous driving for more robust and reliable lane detection. The code will be available online. Full article
(This article belongs to the Special Issue Intelligent Sensors for Smart and Autonomous Vehicles)

31 pages, 3576 KB  
Article
UltraScanNet: A Mamba-Inspired Hybrid Backbone for Breast Ultrasound Classification
by Alexandra-Gabriela Laicu-Hausberger and Călin-Adrian Popa
Electronics 2025, 14(18), 3633; https://doi.org/10.3390/electronics14183633 - 13 Sep 2025
Viewed by 417
Abstract
Breast ultrasound imaging functions as a vital radiation-free detection tool for breast cancer, yet its low contrast, speckle noise, and interclass variability make automated interpretation difficult. In this paper, we introduce UltraScanNet as a specific deep learning backbone that addresses breast ultrasound classification needs. The proposed architecture combines a convolutional stem with learnable 2D positional embeddings, followed by a hybrid stage that unites MobileViT blocks with spatial gating and convolutional residuals and two progressively global stages that use a depth-aware composition of three components: (1) UltraScanUnit (a state-space module with selective scan gated convolutional residuals and low-rank projections), (2) ConvAttnMixers for spatial channel mixing, and (3) multi-head self-attention blocks for global reasoning. This research includes a detailed ablation study to evaluate the individual impact of each architectural component. The results demonstrate that UltraScanNet reaches 91.67% top-1 accuracy, a precision score of 0.9072, a recall score of 0.9174, and an F1-score of 0.9096 on the BUSI dataset, which make it a very competitive option among multiple state-of-the-art models, including ViT-Small (91.67%), MaxViT-Tiny (91.67%), MambaVision (91.02%), Swin-Tiny (90.38%), ConvNeXt-Tiny (89.74%), and ResNet-50 (85.90%). On top of this, the paper provides an extensive global and per-class analysis of the performance of these models, offering a comprehensive benchmark for future work. The code will be publicly available. Full article
(This article belongs to the Special Issue Artificial Intelligence and Big Data Processing in Healthcare)

44 pages, 7582 KB  
Article
Continuous Authentication in Resource-Constrained Devices via Biometric and Environmental Fusion
by Nida Zeeshan, Makhabbat Bakyt, Naghmeh Moradpoor and Luigi La Spada
Sensors 2025, 25(18), 5711; https://doi.org/10.3390/s25185711 - 12 Sep 2025
Viewed by 674
Abstract
Continuous authentication allows devices to keep checking that the active user is still the rightful owner instead of relying on a single login. However, current methods can be tricked by forging faces, revealing personal data, or draining the battery. Additionally, the environment in which the user operates plays a vital role in determining the user’s online security: through attacks such as impersonation and replay, the user or the device can easily be compromised. We present a lightweight system that pairs face recognition with environmental sensing, i.e., the phone revalidates the user when the surrounding light or noise changes. A convolutional network turns each captured face into a 128-bit code, which is combined with a random “nonce” and protected by hashing. A camera–microphone module monitors light and sound to decide when to sample again, reducing unnecessary checks. We verified the protocol with formal security tools (Scyther v1.1.3) and confirmed resistance to replay, interception, deepfake, and impersonation attacks. Across 2700 authentication cycles on a Snapdragon 778G testbed, the median decision time decreased from 61.2 ± 3.4 ms to 42.3 ± 2.1 ms (p < 0.01, paired t-test). Data usage per authentication cycle fell by an average of 24.7% ± 1.8%, and mean energy consumption per cycle decreased from 21.3 mJ to 19.8 mJ (∆ = 6.6 mJ, 95% CI: 5.9–7.2). These differences were consistent across varying lighting (≤50, 50–300, >300 lux) and noise conditions (30–55 dB SPL). These results show that smart-sensor-triggered face recognition can offer secure and energy-efficient continuous verification, supporting smart imaging and deep-learning-based face recognition. Full article
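The template-protection step — binding the 128-bit face code to a fresh nonce and storing only a hash — can be sketched as follows. This is our simplified version using SHA-256; a real deployment also needs the quantized code to be bit-stable across captures (e.g., via a fuzzy extractor), which hashing alone does not provide:

```python
import hashlib
import secrets

def protect_template(code128: bytes):
    # Hypothetical sketch: salt the 128-bit biometric code with a random
    # nonce so the stored digest cannot be replayed or linked across devices.
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + code128).digest()
    return nonce, digest

def verify(code128: bytes, nonce: bytes, digest: bytes) -> bool:
    # Recompute the salted hash; only the exact same code verifies.
    return hashlib.sha256(nonce + code128).digest() == digest
```

Because a fresh nonce is drawn per enrolment, an attacker who captures one digest cannot replay it against a re-enrolled template.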
(This article belongs to the Section Environmental Sensing)

18 pages, 2934 KB  
Article
A Method for Synthesizing Self-Checking Discrete Systems with Calculations Testing Based on Parity and Self-Duality of Calculated Functions
by Dmitry V. Efanov, Tatiana S. Pogodina, Nazirjan M. Aripov, Sunnatillo T. Boltayev, Asadulla R. Azizov, Elnara K. Ametova and Zohid B. Toshboyev
Computation 2025, 13(9), 220; https://doi.org/10.3390/computation13090220 - 11 Sep 2025
Viewed by 359
Abstract
Calculations testing, based on the parity and self-duality of the calculated functions, can be used effectively in the construction of self-checking discrete devices, such as modern blocks and nodes of control systems for critical technological processes. Its use, however, has a number of features that must be considered when building concurrent error-detection (CED) circuits. Using methods of discrete mathematics, Boolean algebra, and the technical diagnostics of discrete systems, the authors investigate the problem of ensuring the testability of the parity encoder. Theorems on the testability of modulo-2 convolution functions are proved, and on their basis a method for synthesizing CED circuits is proposed that increases the testability of the parity encoder by using two diagnostic signs at once: membership of the code words in the parity code, and self-duality of the control function in the concurrent error-detection circuit. This method is guaranteed to increase the testability of the parity encoder compared to using only one of the diagnostic signs. Experiments with test discrete devices confirm the effectiveness of the proposed CED circuit structure, and the proved theorems form the basis for analogous results on the use of other linear codes in CED synthesis. The proposed solutions, with calculations testing based on two diagnostic signs, should be used in the synthesis of self-checking discrete systems with improved testability indicators. Full article
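The two diagnostic signs can be made concrete: a parity check is a modulo-2 convolution (XOR) of the inputs, and a Boolean function f is self-dual when f of the complemented inputs equals the complement of f for every input vector. A brute-force sketch for small fan-in (our illustration, not the authors' synthesis procedure):

```python
from itertools import product

def parity(bits):
    # Modulo-2 convolution: XOR of all inputs.
    acc = 0
    for b in bits:
        acc ^= b
    return acc

def is_self_dual(f, n):
    # f is self-dual iff f(x) == NOT f(complement of x) for every input.
    return all(f(x) == 1 - f(tuple(1 - b for b in x))
               for x in product((0, 1), repeat=n))

# Parity with an odd number of inputs is self-dual; with an even number it
# is not -- so the fan-in matters when combining the two diagnostic signs.
odd_ok = is_self_dual(parity, 3)
even_ok = is_self_dual(parity, 4)
```

This is because complementing all n inputs flips an XOR's output exactly when n is odd.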
(This article belongs to the Section Computational Engineering)

27 pages, 16753 KB  
Article
A 1°-Resolution Global Ionospheric TEC Modeling Method Based on a Dual-Branch Input Convolutional Neural Network
by Nian Liu, Yibin Yao and Liang Zhang
Remote Sens. 2025, 17(17), 3095; https://doi.org/10.3390/rs17173095 - 5 Sep 2025
Viewed by 1018
Abstract
Total Electron Content (TEC) is a fundamental parameter characterizing the electron density distribution in the ionosphere. Traditional global TEC modeling approaches predominantly rely on mathematical methods (such as spherical harmonic function fitting), often resulting in models suffering from excessive smoothing and low accuracy. While the 1° high-resolution global TEC model released by MIT offers improved temporal-spatial resolution, it exhibits regions of data gaps. Existing ionospheric image completion methods frequently employ Generative Adversarial Networks (GANs), which suffer from drawbacks such as complex model structures and lengthy training times. We propose a novel high-resolution global ionospheric TEC modeling method based on a Dual-Branch Convolutional Neural Network (DB-CNN) designed for the completion and restoration of incomplete 1°-resolution ionospheric TEC images. The novel model utilizes a dual-branch input structure: the background field, generated using the International Reference Ionosphere (IRI) model TEC maps, and the observation field, consisting of global incomplete TEC maps coupled with their corresponding mask maps. An asymmetric dual-branch parallel encoder, feature fusion, and residual decoder framework enables precise reconstruction of missing regions, ultimately generating a complete global ionospheric TEC map. Experimental results demonstrate that the model achieves Root Mean Square Errors (RMSE) of 0.30 TECU and 1.65 TECU in the observed and unobserved regions, respectively, in simulated data experiments. For measured experiments, the RMSE values are 1.39 TECU and 1.93 TECU in the observed and unobserved regions. Validation results utilizing Jason-3 altimeter-measured VTEC demonstrate that the model achieves stable reconstruction performance across all four seasons and various time periods. In key-day comparisons, its STD and RMSE consistently outperform those of the CODE global ionospheric model (GIM). Furthermore, a long-term evaluation from 2021 to 2024 reveals that, compared to the CODE model, the DB-CNN achieves average reductions of 38.2% in STD and 23.5% in RMSE. This study provides a novel dual-branch input convolutional neural network-based method for constructing 1°-resolution global ionospheric products, offering significant application value for enhancing GNSS positioning accuracy and space weather monitoring capabilities. Full article
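The two reported error figures — RMSE over observed versus unobserved (data-gap) pixels — follow directly from the mask that marks which TEC cells had data. A generic evaluation sketch under our own naming:

```python
import numpy as np

def masked_rmse(pred, truth, mask):
    # mask == 1 where TEC was observed, 0 in the data-gap regions;
    # the reconstruction is scored separately on each region.
    err = (pred - truth) ** 2
    observed = np.sqrt(err[mask == 1].mean())
    gap = np.sqrt(err[mask == 0].mean())
    return observed, gap
```

Reporting the two numbers separately matters because a model can fit observed cells well while hallucinating in the gaps it was asked to fill.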

26 pages, 3073 KB  
Article
From Detection to Decision: Transforming Cybersecurity with Deep Learning and Visual Analytics
by Saurabh Chavan and George Pappas
AI 2025, 6(9), 214; https://doi.org/10.3390/ai6090214 - 4 Sep 2025
Viewed by 653
Abstract
Objectives: The persistent evolution of software vulnerabilities—spanning novel zero-day exploits to logic-level flaws—continues to challenge conventional cybersecurity mechanisms. Static rule-based scanners and opaque deep learning models often lack the precision and contextual understanding required for accurate detection and analyst interpretability. This paper presents a hybrid framework for real-time vulnerability detection that improves both robustness and explainability. Methods: The framework integrates semantic encoding via Bidirectional Encoder Representations from Transformers (BERT), structural analysis using Deep Graph Convolutional Neural Networks (DGCNNs), and lightweight prioritization through Kernel Extreme Learning Machines (KELMs). The architecture incorporates Minimum Intermediate Representation (MIR) learning to reduce false positives and fuses multi-modal data (source code, execution traces, textual metadata) for robust, scalable performance. Explainable Artificial Intelligence (XAI) visualizations—combining SHAP-based attributions and CVSS-aligned pair plots—serve as an analyst-facing interpretability layer. The framework is evaluated on benchmark datasets, including VulnDetect and the NIST Software Reference Library (NSRL, version 2024.12.1, used strictly as a benign baseline for false positive estimation). Results: Evaluation on precision, recall, AUPRC, MCC, and calibration (ECE/Brier score) shows improved robustness and fewer false positives than the baselines. An internal interpretability validation was conducted to align SHAP/GNNExplainer outputs with known vulnerability features; formal usability testing with practitioners is left as future work. Conclusions: Designed with DevSecOps integration in mind, the framework is packaged in containerized modules (Docker/Kubernetes) and outputs SIEM-compatible alerts, enabling potential compatibility with Splunk, GitLab CI/CD, and similar tools. 
While full enterprise deployment was not performed, these deployment-oriented design choices support scalability and practical adoption. Full article
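The KELM stage used for lightweight prioritization admits a compact closed-form formulation: a ridge solution in kernel space, with no iterative training. The sketch below is a generic kernel ELM, not the paper's implementation; the RBF kernel, the `C` and `gamma` values, and the toy clusters (standing in for benign vs. vulnerable feature vectors) are all illustrative assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernel ELM: closed-form regularized least squares in kernel space."""
    def __init__(self, C=10.0, gamma=0.5):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        targets = np.eye(int(y.max()) + 1)[y]   # one-hot class targets
        K = rbf_kernel(X, X, self.gamma)
        # Output weights: (K + I/C)^(-1) T  -- the standard KELM solution
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, targets)
        return self

    def predict(self, X_new):
        return rbf_kernel(X_new, self.X, self.gamma) @ self.beta

# Toy stand-in for "benign" vs "vulnerable" feature vectors: two separated clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 4)), rng.normal(2.0, 0.3, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
pred = KELM().fit(X, y).predict(X).argmax(axis=1)
```

The closed-form solve is what makes this stage cheap relative to the BERT and DGCNN components: training cost is one kernel matrix and one linear system.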

22 pages, 1017 KB  
Article
Optimized Generalized LDPC Convolutional Codes
by Li Deng, Kai Tao, Zhiping Shi, You Zhang, Yinlong Shi, Jian Wang, Tian Liu and Yongben Wang
Entropy 2025, 27(9), 930; https://doi.org/10.3390/e27090930 - 4 Sep 2025
Viewed by 681
Abstract
In this paper, optimized encoding and decoding schemes are proposed for generalized LDPC convolutional codes (GLDPC–CCs). On the encoding side, a flexible doping method is proposed that replaces multiple single parity check (SPC) nodes with one generalized check (GC) node. Different types of BCH codes can be selected as the GC node by adjusting the number of SPC nodes to be replaced. Moreover, the performance of GLDPC–CCs can be further improved by fine-tuning the truncated bits and the extended parity check bits, or by reasonably adjusting the GC node distribution. On the decoding side, a hybrid layered normalized min-sum (HLNMS) decoding algorithm is proposed, in which layered normalized min-sum (LNMS) decoding is used for SPC nodes and Chase–Pyndiah decoding is adopted for GC nodes. Based on an analysis of the decoding convergence of the GC and SPC nodes, an adaptive weight factor that changes with the decoding iterations is designed for GC nodes to further improve decoding performance. In addition, an early-stop decoding strategy based on a minimum amplitude threshold of mutual information is proposed to reduce decoding complexity. Simulation results verify that the proposed scheme outperforms prior GLDPC–CC designs, showing strong application potential in optical communication systems. Full article
(This article belongs to the Special Issue LDPC Codes for Communication Systems)
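The LNMS component applied to SPC nodes relies on the standard normalized min-sum check-node rule: each edge receives the product of the other edges' signs times the minimum of the other edges' magnitudes, scaled by a normalization factor. A minimal sketch follows; alpha = 0.75 is an illustrative choice, not a value taken from the paper:

```python
import numpy as np

def nms_check_update(llr_in, alpha=0.75):
    """Normalized min-sum update at one SPC check node.

    llr_in: incoming LLRs from the variable nodes connected to this check.
    Returns the extrinsic LLR sent back along each edge.
    """
    llr_in = np.asarray(llr_in, dtype=float)
    sign = np.where(llr_in < 0, -1.0, 1.0)
    mag = np.abs(llr_in)
    total_sign = np.prod(sign)
    order = np.argsort(mag)                 # order[0] is the smallest magnitude
    min1, min2 = mag[order[0]], mag[order[1]]
    out = np.empty_like(mag)
    for i in range(len(mag)):
        # Min over the *other* magnitudes: second-smallest for the smallest edge.
        other_min = min2 if i == order[0] else min1
        # total_sign * sign[i] is the product of the other edges' signs.
        out[i] = alpha * (total_sign * sign[i]) * other_min
    return out

# Example: three incoming LLRs; the negative input flips the others' signs.
ext = nms_check_update([2.0, -1.0, 3.0])   # -> [-0.75, 1.5, -0.75]
```

In the hybrid HLNMS scheme this update serves only the SPC nodes; the GC (BCH) nodes are handled by Chase–Pyndiah decoding with the paper's adaptive, iteration-dependent weight factor.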
