Search Results (60)

Search Parameters:
Keywords = fixed-length code

17 pages, 480 KB  
Review
MicroRNAs in Cardiovascular Diseases and Forensic Applications: A Systematic Review of Diagnostic and Post-Mortem Implications
by Matteo Antonio Sacco, Saverio Gualtieri, Maria Cristina Verrina, Fabrizio Cordasco, Maria Daniela Monterossi, Gioele Grimaldi, Helenia Mastrangelo, Giuseppe Mazza and Isabella Aquila
Int. J. Mol. Sci. 2026, 27(2), 825; https://doi.org/10.3390/ijms27020825 - 14 Jan 2026
Viewed by 269
Abstract
MicroRNAs (miRNAs) are small non-coding RNA molecules approximately 20–22 nucleotides in length that regulate gene expression at the post-transcriptional level. By binding to target messenger RNAs (mRNAs), miRNAs inhibit translation or induce degradation, thus influencing a wide array of biological processes including development, inflammation, apoptosis, and tissue remodeling. Owing to their remarkable stability and tissue specificity, miRNAs have emerged as promising biomarkers in both clinical and forensic settings. In recent years, increasing evidence has demonstrated their utility in cardiovascular diseases, where they may serve as diagnostic, prognostic, and therapeutic tools. This systematic review aims to comprehensively summarize the role of miRNAs in cardiovascular pathology, focusing on their diagnostic potential in myocardial infarction, sudden cardiac death (SCD), and cardiomyopathies, and their applicability in post-mortem investigations. Following PRISMA guidelines, we screened PubMed, Scopus, and Web of Science databases for studies up to December 2024. The results highlight several miRNAs—including miR-1, miR-133a, miR-208b, miR-499a, and miR-486-5p—as robust markers for ischemic injury and sudden death, even in degraded or formalin-fixed autopsy samples. The high stability of miRNAs under extreme post-mortem conditions reinforces their potential as molecular tools in forensic pathology. Nevertheless, methodological heterogeneity and limited standardization currently hinder their routine application. Future studies should aim to harmonize analytical protocols and validate diagnostic thresholds across larger, well-characterized cohorts to fully exploit miRNAs as reliable molecular biomarkers in both clinical cardiology and forensic medicine.
(This article belongs to the Section Molecular Genetics and Genomics)

17 pages, 439 KB  
Article
Concatenated Constrained Coding: A New Approach to Efficient Constant-Weight Codes
by Kees Schouhamer Immink, Jos H. Weber, Tuan Thanh Nguyen and Kui Cai
Entropy 2026, 28(1), 78; https://doi.org/10.3390/e28010078 - 9 Jan 2026
Viewed by 309
Abstract
The design of low-complexity and efficient constrained codes has been a major research topic for many years. This paper reports on a versatile method named concatenated constrained codes for designing efficient fixed-length constrained codes with small complexity. A concatenated constrained code comprises two (or more) cooperating constrained codes of low complexity, enabling long constrained codes that are not practically feasible with prior-art methods. We apply the concatenated coding approach to two case studies, namely the design of constant-weight and low-weight codes. In a binary constant-weight code, each codeword has the same number, w, of 1’s, where w is called the weight of a codeword. We specifically focus on the trade-off between coder complexity and redundancy.
(This article belongs to the Special Issue Coding and Signal Processing for Data Storage Systems)
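
A helpful reference point for the constant-weight case study: fixed-length constant-weight codes are classically realized by enumerative (ranking/unranking) coding. The sketch below is that textbook scheme, not the paper's concatenated construction, and is included only to make the encoding problem concrete.

```python
from math import comb

def index_to_codeword(i, n, w):
    """Map an integer i in [0, C(n, w)) to the i-th length-n binary
    word of weight w in lexicographic order (enumerative coding)."""
    bits = []
    for remaining in range(n, 0, -1):
        zeros_first = comb(remaining - 1, w)  # words starting with a 0
        if i < zeros_first:
            bits.append(0)
        else:
            bits.append(1)
            i -= zeros_first
            w -= 1
    return bits

def codeword_to_index(bits):
    """Inverse map: the lexicographic rank of a constant-weight word."""
    i, w = 0, sum(bits)
    for remaining, b in zip(range(len(bits), 0, -1), bits):
        if b:
            i += comb(remaining - 1, w)
            w -= 1
    return i

# Round trip over all C(5, 2) = 10 words of length 5 and weight 2
for i in range(comb(5, 2)):
    assert codeword_to_index(index_to_codeword(i, 5, 2)) == i
```

Enumerative coding is optimal in redundancy but requires big-integer arithmetic over the whole word, which is precisely the complexity burden that concatenating short constrained codes aims to avoid.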

27 pages, 11326 KB  
Article
Numerical Study on Lost Circulation Mechanism in Complex Fracture Network Coupled Wellbore and Its Application in Lost-Circulation Zone Diagnosis
by Zhichao Xie, Yili Kang, Chengyuan Xu, Lijun You, Chong Lin and Feifei Zhang
Processes 2026, 14(1), 143; https://doi.org/10.3390/pr14010143 - 31 Dec 2025
Viewed by 330
Abstract
Deep and ultra-deep drilling operations commonly encounter fractured and fracture-vuggy formations, where weak wellbore strength and well-developed fracture networks lead to frequent lost circulation, presenting a key challenge to safe and efficient drilling. Existing diagnostic practices mostly rely on drilling fluid loss dynamic models of single fractures or simplified discrete fractures to invert fracture geometry, which cannot capture the spatiotemporal evolution of loss in complex fracture networks, resulting in limited inversion accuracy and a lack of quantitative, fracture-network-based loss-dynamics support for bridge-plugging design. In this study, a geologically realistic wellbore–fracture-network coupled loss dynamic model is constructed to overcome the limitations of single- or simplified-fracture descriptions. Within a unified computational fluid dynamics (CFD) framework, solid–liquid two-phase flow and Herschel–Bulkley rheology are incorporated to quantitatively characterise fracture connectivity. This approach reveals how instantaneous and steady losses are controlled by key geometrical factors, thereby providing a computable physical basis for loss-zone inversion and bridge-plugging design. Validation against experiments shows a maximum relative error of 7.26% in pressure and loss rate, indicating that the model can reasonably reproduce actual loss behaviour. Different encounter positions and node types lead to systematic variations in loss intensity and flow partitioning. Compared with a single fracture, a fracture network significantly amplifies loss intensity through branch-induced capacity enhancement, superposition of shortest paths, and shortening of loss paths. In a typical network, the shortest path accounts for only about 20% of the total length, but contributes 40–55% of the total loss, while extending branch length from 300 mm to 1500 mm reduces the steady loss rate by 40–60%. Correlation analysis shows that the instantaneous loss rate is mainly controlled by the maximum width and height of fractures connected to the wellbore, whereas the steady loss rate has a correlation coefficient of about 0.7 with minimum width and effective path length, and decreases monotonically with the number of connected fractures under a fixed total width, indicating that the shortest path and bottleneck width are the key geometrical factors governing long-term loss in complex fracture networks. This work refines the understanding of fractured-loss dynamics and proposes the concept of coupling hydraulic deviation codes with deep learning to build a mapping model from mud-logging curves to fracture geometrical parameters, thereby providing support for lost-circulation diagnosis and bridge-plugging optimisation in complex fractured formations.
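
As background for the rheology term mentioned above: the Herschel–Bulkley model combines a yield stress with a power-law shear term, τ = τ_y + K·γ̇^n. A minimal evaluation with hypothetical drilling-mud parameters (the values are illustrative, not taken from the paper):

```python
def herschel_bulkley_stress(gamma_dot, tau_y, K, n):
    """Shear stress of a Herschel-Bulkley fluid for shear rate gamma_dot > 0:
    tau = tau_y + K * gamma_dot ** n."""
    return tau_y + K * gamma_dot ** n

tau_y, K, n = 5.0, 0.3, 0.7      # Pa, Pa*s^n, dimensionless (hypothetical)
for rate in (1.0, 10.0, 100.0):  # shear rates in 1/s
    print(f"gamma_dot = {rate:6.1f} 1/s -> tau = "
          f"{herschel_bulkley_stress(rate, tau_y, K, n):5.2f} Pa")
```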

22 pages, 603 KB  
Article
Generation of Natural-Language Explanations for Static-Analysis Warnings Using Single- and Multi-Objective Optimization
by Ivan Malashin
Computers 2025, 14(12), 534; https://doi.org/10.3390/computers14120534 - 5 Dec 2025
Viewed by 830
Abstract
Explanations for static-analysis warnings assist developers in understanding potential code issues. An end-to-end pipeline was implemented to generate natural-language explanations, evaluated on 5183 warning–explanation pairs from Java repositories, including a manually validated gold subset of 1176 examples for faithfulness assessment. Explanations were produced by a transformer-based encoder–decoder model (CodeT5) conditioned on warning types, contextual code snippets, and static-analysis evidence. Initial experiments employed single-objective optimization for hyperparameters (using a genetic algorithm with dynamic search-space correction, which adaptively adjusted search bounds based on the evolving distribution of candidate solutions, clustering promising regions, and pruning unproductive ones), but this approach enforced a fixed faithfulness–fluency trade-off; therefore, a multi-objective evolutionary algorithm (NSGA-II) was adopted to jointly optimize both criteria. Pareto-optimal configurations improved normalized faithfulness by up to 12% and textual quality by 5–8% compared to baseline CodeT5 settings, with batch sizes of 10–21, learning rates of 2.3×10^-5 to 5×10^-4, maximum token lengths of 36–65, beam width 5, length penalty 1.15, and nucleus sampling p = 0.88. Candidate explanations were reranked using a composite score of likelihood, faithfulness, and code-usefulness, producing final outputs in under 0.001 s per example. The results indicate that structured conditioning, evolutionary hyperparameter search, and reranking yield explanations that are both aligned with static-analysis evidence and linguistically coherent.
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
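
The switch from single-objective search to NSGA-II turns on Pareto dominance: a hyperparameter configuration survives only if no other configuration is at least as good on both objectives and strictly better on one. A minimal dominance filter over (faithfulness, fluency) pairs, with made-up scores:

```python
def pareto_front(points):
    """Non-dominated subset of objective tuples, all objectives maximized."""
    def dominated(p):
        return any(q != p and all(qi >= pi for qi, pi in zip(q, p))
                   for q in points)
    return [p for p in points if not dominated(p)]

# Hypothetical (faithfulness, fluency) scores for four configurations
candidates = [(0.70, 0.81), (0.75, 0.78), (0.68, 0.90), (0.72, 0.74)]
print(pareto_front(candidates))
# -> [(0.70, 0.81), (0.75, 0.78), (0.68, 0.90)]; the last point is dominated
```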

30 pages, 388 KB  
Article
On Double Cyclic Codes over Finite Chain Rings for DNA Computing
by Shakir Ali, Amal S. Alali, Mohd Azeem, Atif Ahmad Khan and Kok Bin Wong
Entropy 2025, 27(12), 1187; https://doi.org/10.3390/e27121187 - 22 Nov 2025
Cited by 1 | Viewed by 466
Abstract
Let e be a fixed positive integer and n_1, n_2 be odd positive integers. The main objective of this article is to investigate the algebraic structure of double cyclic codes of length (n_1, n_2) over the finite chain ring R_e = F_{4^e} + vF_{4^e}, where v^2 = 0. Building upon this structural framework, we further demonstrate the construction of DNA codes derived from these double cyclic codes over R_e. In addition, we provide the necessary and sufficient criteria showing that these codes possess reversibility and reverse-complement properties over R_e. Furthermore, we introduce a generalized Gray map that extends the classical Gray map from the ring F_2 + vF_2 with v^2 = 0 to the ring R_e, showing a direct correspondence between elements of R_e and DNA sequences over S = {A, T, G, C} utilizing double cyclic codes. To illustrate the applicability of our results, we present some examples demonstrating the effectiveness of the mapping in generating reversible and reverse-complement DNA codes from algebraic structures over the ring R_e.
(This article belongs to the Section Information Theory, Probability and Statistics)
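
To make the Gray-map correspondence concrete at the smallest scale, the four elements a + vb of F_2 + vF_2 can be identified with the four DNA bases, after which reversal and Watson–Crick complementation become string operations. The particular base assignment below is a hypothetical illustration, not the paper's generalized map over R_e.

```python
# Elements a + v*b of F_2 + v*F_2 (v^2 = 0) encoded as pairs (a, b).
# This base assignment is illustrative only.
ELEMENT_TO_BASE = {(0, 0): "A", (0, 1): "T", (1, 0): "G", (1, 1): "C"}

WC = {"A": "T", "T": "A", "G": "C", "C": "G"}  # Watson-Crick complement

def to_dna(codeword):
    return "".join(ELEMENT_TO_BASE[x] for x in codeword)

def reverse_complement(dna):
    return "".join(WC[b] for b in reversed(dna))

cw = [(0, 0), (1, 1), (0, 1)]
s = to_dna(cw)                   # "ACT"
print(s, reverse_complement(s))  # ACT AGT
```
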
28 pages, 415 KB  
Article
A Scalable Symmetric Cryptographic Scheme Based on Latin Square, Permutations, and Reed-Muller Codes for Resilient Encryption
by Hussain Ahmad and Carolin Hannusch
Cryptography 2025, 9(4), 70; https://doi.org/10.3390/cryptography9040070 - 31 Oct 2025
Viewed by 905
Abstract
Symmetric cryptography is essential for secure communication as it ensures confidentiality by using shared secret keys. This paper proposes a novel substitution-permutation network (SPN) that integrates Latin squares, permutations, and Reed-Muller (RM) codes to achieve robust security and resilience. As an adaptive design using binary representation with base-n Latin square mappings for non-linear substitutions, it supports any n (codeword length and Latin square order), k (RM code dimension), and d (RM code minimum distance) parameters aligned with the Latin square and RM(n, k, d) codes. The scheme employs 2·log_2(n)-round transformations using log_2(n) permutations ρ_z, where in the additional log_2(n) rounds, row and column pairs are swapped for each pair of rounds, with key-dependent π_z permutations for round outputs and fixed ρ_z permutations for codeword shuffling, ensuring strong diffusion. The scheme leverages dynamic Latin square substitutions for confusion and a vast key space, with permutations ensuring strong diffusion and RM(n, k, d) codes correcting transmission errors and enhancing robustness against fault-based attacks. Precomputed components optimize deployment efficiency. The paper presents mathematical foundations, security primitives, and experimental results, including avalanche effect analysis, demonstrating flexibility and balancing enhanced security with computational and storage overhead.
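
As background for the substitution layer: any Latin square of order n yields n key-selectable permutations of the symbol set, because each row contains every symbol exactly once. A toy version using the addition table of Z_n (the scheme's actual squares, round keys, and ρ_z/π_z permutation layers are more involved):

```python
n = 8  # Latin square order (illustrative)

# The addition table of Z_n is the simplest Latin square: each symbol
# occurs exactly once in every row and every column.
L = [[(r + c) % n for c in range(n)] for r in range(n)]

def substitute(x, key_row):
    return L[key_row][x]

def invert(y, key_row):
    return L[key_row].index(y)  # each row is a permutation, so invertible

x = 5
y = substitute(x, key_row=3)    # (3 + 5) % 8 = 0
assert invert(y, key_row=3) == x
```
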
15 pages, 479 KB  
Article
Security of Quantum Key Distribution with One-Time-Pad-Protected Error Correction and Its Performance Benefits
by Roman Novak
Entropy 2025, 27(10), 1032; https://doi.org/10.3390/e27101032 - 1 Oct 2025
Viewed by 865
Abstract
In quantum key distribution (QKD), public discussion over the authenticated classical channel inevitably leaks information about the raw key to a potential adversary, which must later be mitigated by privacy amplification. To limit this leakage, a one-time pad (OTP) has been proposed to protect message exchanges in various settings. Building on the security proof of Tomamichel and Leverrier, which is based on a non-asymptotic framework and considers the effects of finite resources, we extend the analysis to the OTP-protected scheme. We show that when the OTP key is drawn from the entropy pool of the same QKD session, the achievable quantum key rate is identical to that of the reference protocol with unprotected error-correction exchange. This equivalence holds for a fixed security level, defined via the diamond distance between the real and ideal protocols modeled as completely positive trace-preserving maps. At the same time, the proposed approach reduces the computational requirements: for non-interactive low-density parity-check codes, the encoding problem size is reduced by the square of the syndrome length, while privacy amplification requires less compression. The technique preserves security, avoids the use of QKD keys between sessions, and has the potential to improve performance.
(This article belongs to the Section Quantum Information)
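
The OTP protection itself is simple to picture: the error-correction syndrome is XOR-masked with key material drawn from the same session's entropy pool, so the authenticated classical channel leaks nothing about it. A schematic sketch (the 48-byte length and random placeholder data are illustrative, not protocol parameters):

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

syndrome = secrets.token_bytes(48)  # stand-in for an LDPC syndrome
otp_key = secrets.token_bytes(48)   # drawn from the session's entropy pool

masked = xor_bytes(syndrome, otp_key)          # what actually crosses the channel
assert xor_bytes(masked, otp_key) == syndrome  # the receiver recovers it exactly
```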

22 pages, 290 KB  
Article
Optimal Linear Codes and Their Hulls
by Stefka Bouyuklieva and Mariya Dzhumalieva-Stoeva
Mathematics 2025, 13(15), 2491; https://doi.org/10.3390/math13152491 - 2 Aug 2025
Viewed by 1136
Abstract
The hull of a linear code C is the intersection of C with its dual code. The goal is to study the dimensions of the hulls of optimal binary and ternary linear codes for a given length and dimension. The focus is on the lengths at which self-orthogonal (respectively, LCD) optimal codes exist at fixed dimension.
(This article belongs to the Special Issue Mathematics for Algebraic Coding Theory and Cryptography)
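
A convenient fact when experimenting with hulls: for a binary [n, k] code with generator matrix G, the hull dimension equals k − rank(G·Gᵀ) over GF(2). A small sketch using the [7, 4] Hamming code, which contains its dual, so its hull is the entire [7, 3] dual code:

```python
import numpy as np

def gf2_rank(M):
    """Rank of an integer matrix over GF(2), by Gaussian elimination."""
    M = np.asarray(M, dtype=np.int64) % 2
    rows, cols = M.shape
    rank = 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]  # move pivot row up
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]              # eliminate the column
        rank += 1
    return rank

def hull_dimension(G):
    """dim Hull(C) = k - rank(G @ G^T) over GF(2)."""
    G = np.asarray(G, dtype=np.int64) % 2
    return G.shape[0] - gf2_rank(G @ G.T)

G = [[1, 0, 0, 0, 0, 1, 1],  # a generator matrix of the [7,4] Hamming code
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
print(hull_dimension(G))     # -> 3
```
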
25 pages, 8472 KB  
Article
Harnessing the Power of Pre-Trained Models for Efficient Semantic Communication of Text and Images
by Emrecan Kutay and Aylin Yener
Entropy 2025, 27(8), 813; https://doi.org/10.3390/e27080813 - 29 Jul 2025
Cited by 1 | Viewed by 1685
Abstract
This paper investigates point-to-point multimodal digital semantic communications in a task-oriented setup, where messages are classified at the receiver. We employ a pre-trained transformer model to extract semantic information and propose three methods for generating semantic codewords. First, we propose semantic quantization that uses quantized embeddings of source realizations as a codebook. We investigate fixed-length coding, considering the source semantic structure and end-to-end semantic distortion. We propose a neural network-based codeword assignment mechanism incorporating codeword transition probabilities to minimize the expected semantic distortion. Second, we present semantic compression that clusters embeddings, exploiting the inherent semantic redundancies to reduce the codebook size, i.e., further compression. Third, we introduce a semantic vector-quantized autoencoder (VQ-AE) that learns a codebook through training. In all cases, we follow this semantic source code with a standard channel code to transmit over the wireless channel. In addition to classification accuracy, we assess pre-communication overhead via a novel metric we term system time efficiency. Extensive experiments demonstrate that our proposed semantic source-coding approaches provide comparable accuracy and better system time efficiency compared to their learning-based counterparts.
(This article belongs to the Special Issue Semantic Information Theory)
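
The first method, semantic quantization, is at heart vector quantization of encoder embeddings: each embedding is replaced by the index of its nearest codeword, and the indices form fixed-length semantic codewords. A minimal nearest-codeword assignment (dimensions, codebook size, and random data are placeholders):

```python
import numpy as np

def quantize(embeddings, codebook):
    """Index of the nearest codeword (Euclidean) for each embedding row."""
    dists = np.linalg.norm(embeddings[:, None, :] - codebook[None, :, :],
                           axis=-1)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))  # 16 codewords -> 4-bit fixed-length codes
emb = rng.normal(size=(5, 8))        # stand-in for pre-trained embeddings
print(quantize(emb, codebook))       # five indices in [0, 16)
```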

16 pages, 396 KB  
Article
Investigating Reproducibility Challenges in LLM Bugfixing on the HumanEvalFix Benchmark
by Balázs Szalontai, Balázs Márton, Balázs Pintér and Tibor Gregorics
Software 2025, 4(3), 17; https://doi.org/10.3390/software4030017 - 14 Jul 2025
Cited by 1 | Viewed by 4461
Abstract
Benchmark results for large language models often show inconsistencies across different studies. This paper investigates the challenges of reproducing these results in automatic bugfixing using LLMs on the HumanEvalFix benchmark. To determine the cause of the differing results in the literature, we attempted to reproduce a subset of them by evaluating 12 models in the DeepSeekCoder, CodeGemma, CodeLlama, and WizardCoder model families, in different sizes and tunings. A total of 35 unique results were reported for these models across studies, of which we successfully reproduced 12. We identified several relevant factors that influenced the results. The base models can be confused with their instruction-tuned variants, making their results better than expected. Incorrect prompt templates or generation lengths can decrease benchmark performance, as can 4-bit quantization. Using sampling instead of greedy decoding can increase the variance, especially with higher temperature values. We found that precision and 8-bit quantization have less influence on benchmark results.
(This article belongs to the Topic Applications of NLP, AI, and ML in Software Engineering)
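
One of the factors identified above, sampling versus greedy decoding, comes down to a single flag in the Hugging Face transformers API. A sketch of both settings (the model name and prompt are placeholders; this is not the study's evaluation harness):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "deepseek-ai/deepseek-coder-1.3b-instruct"  # illustrative choice
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("Fix the bug in this function: ...", return_tensors="pt")

# Greedy decoding is deterministic, so repeated runs agree
greedy = model.generate(**inputs, do_sample=False, max_new_tokens=256)

# Sampling introduces run-to-run variance that grows with temperature,
# one source of the reproducibility gaps discussed in the paper
sampled = model.generate(**inputs, do_sample=True, temperature=0.8,
                         max_new_tokens=256)
print(tok.decode(sampled[0], skip_special_tokens=True))
```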

8 pages, 502 KB  
Proceeding Paper
Adaptive Frequency and Assignment Algorithm for Context-Based Arithmetic Compression Codes for H.264 Video Intraframe Encoding
by Huang-Chun Hsu and Jian-Jiun Ding
Eng. Proc. 2025, 98(1), 4; https://doi.org/10.3390/engproc2025098004 - 4 Jun 2025
Viewed by 633
Abstract
In modern communication technology, short videos are increasingly used on social media platforms. The advancement of video codecs is pivotal in communication. In this study, we developed a new scheme to encode the residue of intraframes. For the H.264 baseline profile, we used context-adaptive variable-length coding (CAVLC) to encode the residue of integer transforms in a block-wise manner. In the developed method, the DC and AC coefficients are separated. In addition, context assignment, adaptive scanning, range increment, and mutual learning are adopted in a mixture of fixed-length and variable-length schemes, and block-wise compression of the frequency table is applied to obtain improved compression rates. Compressing the frequency table prevents CAVLC from being hindered by horizontally/vertically dominated blocks. The developed method outperforms CAVLC, with average reductions of 7.81%, 8.58%, and 7.88% for quarter common intermediate format (QCIF), common intermediate format (CIF), and full high-definition (FHD) inputs.
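
The "adaptive frequency" component refers to the symbol-statistics model that an arithmetic coder consults and updates as symbols are coded. A bare-bones adaptive frequency table (the paper's context assignment, range increment, and mutual learning are not modeled here):

```python
class AdaptiveFrequencyTable:
    """Minimal adaptive symbol model for an arithmetic coder (illustrative)."""

    def __init__(self, num_symbols):
        self.counts = [1] * num_symbols  # start uniform; avoids zero counts

    def probability(self, s):
        return self.counts[s] / sum(self.counts)

    def update(self, s, increment=1):
        self.counts[s] += increment      # recently seen symbols gain mass

model = AdaptiveFrequencyTable(num_symbols=4)
for s in [0, 0, 2, 0, 1]:
    print(s, round(model.probability(s), 3))  # probability used to code s
    model.update(s)
```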

7 pages, 1414 KB  
Proceeding Paper
Improved Low Complexity Predictor for Block-Based Lossless Image Compression
by Huang-Chun Hsu, Jian-Jiun Ding and De-Yan Lu
Eng. Proc. 2025, 92(1), 38; https://doi.org/10.3390/engproc2025092038 - 30 Apr 2025
Viewed by 730
Abstract
Lossless image compression has been studied and widely applied, particularly in medicine, space exploration, aerial photography, and satellite communication. In this study, we proposed a low-complexity lossless compression for images (LOCO-I) predictor based on the Joint Photographic Experts Group lossless standard (JPEG-LS). We analyzed the nature of the LOCO-I predictor and offered possible solutions. The improved LOCO-I outperformed LOCO-I with a reduction of 2.26% in entropy for the full image size and reductions of 2.70%, 2.81%, and 2.89% for 32 × 32, 16 × 16, and 8 × 8 block-based compression, respectively. In addition, we suggested a vertical/horizontal flip for block-based compression, which requires extra bits to record but decreases the entropy. Compared with other state-of-the-art (SOTA) lossless image compression predictors, the proposed method has low computational complexity, as it is multiplication- and division-free. The model is also better suited for hardware implementation. As the predictor exploits no inter-block relations, it enables parallel processing and random access if encoded by fixed-length coding (FLC).
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)
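
For context, the LOCO-I/JPEG-LS predictor the paper improves upon is the median edge detector, which switches between edge and planar prediction using three causal neighbors; a direct transcription of that rule:

```python
def loco_i_predict(a, b, c):
    """JPEG-LS / LOCO-I median edge detector.
    a = left, b = above, c = upper-left neighbor of the current pixel."""
    if c >= max(a, b):
        return min(a, b)  # edge suspected: pick the smaller neighbor
    if c <= min(a, b):
        return max(a, b)  # edge suspected: pick the larger neighbor
    return a + b - c      # smooth region: planar prediction

print(loco_i_predict(a=100, b=120, c=130))  # -> 100
```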

20 pages, 342 KB  
Article
Generalized Orthogonal de Bruijn and Kautz Sequences
by Yuan-Pon Chen, Jin Sima and Olgica Milenkovic
Entropy 2025, 27(4), 366; https://doi.org/10.3390/e27040366 - 30 Mar 2025
Cited by 1 | Viewed by 868
Abstract
A de Bruijn sequence of order k over a finite alphabet is a cyclic sequence with the property that it contains every possible k-sequence as a substring exactly once. Orthogonal de Bruijn sequences are collections of de Bruijn sequences of the same order, k, that satisfy the joint constraint that every (k+1)-sequence appears as a substring in, at most, one of the sequences in the collection. Both de Bruijn and orthogonal de Bruijn sequences have found numerous applications in synthetic biology, although the latter remain largely unexplored in the coding theory literature. Here, we study three relevant practical generalizations of orthogonal de Bruijn sequences, where we relax either the constraint that every (k+1)-sequence appears exactly once or the requirement that the sequences themselves be de Bruijn rather than balanced de Bruijn sequences. We also provide lower and upper bounds on the number of fixed-weight orthogonal de Bruijn sequences. The paper concludes with parallel results for orthogonal nonbinary Kautz sequences, which satisfy similar constraints as de Bruijn sequences, except for being only required to cover all subsequences of length k whose maximum run length equals one.
(This article belongs to the Special Issue Coding and Algorithms for DNA-Based Data Storage Systems)
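
The defining property is easy to test directly: a cyclic sequence over alphabet A is de Bruijn of order k exactly when its |A|^k cyclic windows of length k are all distinct, hence cover every k-string once. A small checker:

```python
def is_de_bruijn(seq, k, alphabet):
    """True iff cyclic `seq` contains every length-k string over
    `alphabet` as a substring exactly once."""
    if len(seq) != len(alphabet) ** k:
        return False
    wrapped = seq + seq[:k - 1]  # unroll the cyclic wrap-around
    windows = {wrapped[i:i + k] for i in range(len(seq))}
    return len(windows) == len(alphabet) ** k

print(is_de_bruijn("00011101", 3, "01"))  # -> True: a binary order-3 example
```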

7 pages, 407 KB  
Data Descriptor
Draft Genome Sequence Data of the Ensifer sp. P24N7, a Symbiotic Bacteria Isolated from Nodules of Phaseolus vulgaris Grown in Mining Tailings from Huautla, Morelos, Mexico
by José Augusto Ramírez-Trujillo, Maria Guadalupe Castillo-Texta, Mario Ramírez-Yáñez and Ramón Suárez-Rodríguez
Data 2025, 10(3), 34; https://doi.org/10.3390/data10030034 - 27 Feb 2025
Viewed by 1426
Abstract
In this work, we report the draft genome sequence of Ensifer sp. P24N7, a symbiotic nitrogen-fixing bacterium isolated from nodules of Phaseolus vulgaris var. Negro Jamapa planted in pots that contained mining tailings from Huautla, Morelos, México. The genomic DNA was sequenced on an Illumina NovaSeq 6000 using the 250 bp paired-end protocol, obtaining 1,188,899 reads. An assembly generated with SPAdes v. 3.15.4 resulted in a genome length of 7,165,722 bp composed of 181 contigs with an N50 of 323,467 bp, a coverage of 76X, and a GC content of 61.96%. The genome was annotated with the NCBI Prokaryotic Genome Annotation Pipeline and contains 6631 protein-coding sequences, 3 complete rRNAs, 52 tRNAs, and 4 non-coding RNAs. The Ensifer sp. P24N7 genome has 59 genes related to heavy metal tolerance predicted by the RAST server. These data may be useful to the scientific community because they can be used as a reference for other works related to heavy metals, including works in Huautla, Morelos.
(This article belongs to the Special Issue Benchmarking Datasets in Bioinformatics, 2nd Edition)

20 pages, 3142 KB  
Article
RTMS: A Smart Contract Vulnerability Detection Method Based on Feature Fusion and Vulnerability Correlations
by Gaimei Gao, Zilu Li, Lizhong Jin, Chunxia Liu, Junji Li and Xiangqi Meng
Electronics 2025, 14(4), 768; https://doi.org/10.3390/electronics14040768 - 16 Feb 2025
Cited by 2 | Viewed by 1533
Abstract
Smart contracts are at the core of blockchain technology, but the cost of fixing their security vulnerabilities is high, making pre-deployment vulnerability detection crucial. Existing methods rely on fixed rules, which have limitations in accuracy and scalability, and their efficiency decreases with the complexity of the rules. Neural-network-based methods can identify some vulnerabilities but are inefficient in multi-vulnerability scenarios and depend on source code. To address these issues, we propose a multi-vulnerability-based smart contract detection method called RTMS. RTMS takes bytecode as input, disassembles it into opcodes, uses the gas consumed by the contract for data slicing, and extends the length of input opcodes through a layered structure. It employs a weighted binary cross-entropy (BCE) function to handle data imbalance and combines channel-sequence attention mechanisms to extract vulnerability correlation features. By using transfer learning, it reduces training parameters and computational costs. Our RTMS model can detect multiple vulnerabilities simultaneously, enhancing detection accuracy and efficiency. In experiments with 100,000 real contract samples, the model achieved a Jaccard coefficient of 0.9312, a Hamming loss of 0.0211, and an F1 score that improved by about 11 percentage points compared to existing models, demonstrating its superiority and stability.
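
The weighted binary cross-entropy that RTMS uses against label imbalance maps directly onto PyTorch's pos_weight mechanism for multi-label classification. A sketch with made-up shapes and weights (not the paper's values):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 5)                    # 4 contracts, 5 vulnerability classes
labels = torch.randint(0, 2, (4, 5)).float()  # multi-label ground truth

# Up-weight positives of rarer vulnerability classes (illustrative weights)
pos_weight = torch.tensor([1.0, 3.0, 5.0, 2.0, 8.0])

loss = F.binary_cross_entropy_with_logits(logits, labels,
                                          pos_weight=pos_weight)
print(loss.item())
```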
