Article

LightCross: A Lightweight Smart Contract Vulnerability Detection Tool

by Ioannis Sfyrakis *, Paolo Modesti, Lewis Golightly * and Minaro Ikegima
Department of Computing and Games, School of Computing, Engineering & Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK
* Authors to whom correspondence should be addressed.
Computers 2025, 14(9), 369; https://doi.org/10.3390/computers14090369
Submission received: 31 July 2025 / Revised: 23 August 2025 / Accepted: 25 August 2025 / Published: 3 September 2025

Abstract

Blockchain and smart contracts have transformed industries by automating complex processes and transactions. However, this innovation has introduced significant security concerns, potentially leading to loss of financial assets and data integrity. The focus of this research is to address these challenges by developing a tool that can enable developers and testers to detect vulnerabilities in smart contracts in an efficient and reliable way. The research contributions include an analysis of existing literature on smart contract security, along with the design and implementation of a lightweight vulnerability detection tool called LightCross. This tool runs two well-known detectors, Slither and Mythril, to analyse smart contracts. Experimental analysis was conducted using the SmartBugs curated dataset, which contains 143 vulnerable smart contracts with a total of 206 vulnerabilities. The results showed that LightCross achieves the same detection rate as SmartBugs when using the same backend detectors (Slither and Mythril) while eliminating SmartBugs’ need for a separate Docker container for each detector. Mythril detects 53% and Slither 48% of the vulnerabilities in the SmartBugs curated dataset. Furthermore, an assessment of the execution time across various vulnerability categories revealed that LightCross performs comparably to SmartBugs when using the Mythril detector, while LightCross is significantly faster when using the Slither detector. Finally, to enhance user-friendliness and relevance, LightCross presents the verification results based on OpenSCV, a state-of-the-art academic classification of smart contract vulnerabilities, aligned with the industry-standard CWE and offering improvements over the unmaintained SWC taxonomy.

1. Introduction

Blockchain technology has enabled digital transformation by fostering trust through transparency in transactions and supporting the rise of decentralised applications. Smart contracts [1], deployed on blockchain networks, are self-executing programmable agreements designed to automate processes by fulfilling predefined conditions. Widely applicable across various industry sectors, from finance to supply chain management, smart contracts have become an essential component of decentralised systems.
However, their growing adoption introduces new attack vectors due to inherent design vulnerabilities. Like other established blockchain applications, including Bitcoin (e.g., [2,3,4]), smart contracts are not immune to security flaws. Exploiting these vulnerabilities can enable various types of cyber attacks. For example, Denial of Service (DoS) attacks can occur through loop code manipulation [5], while timestamp dependence vulnerabilities allow malicious miners to manipulate block timestamps and interfere with critical operations [6].
The Reentrancy vulnerability led to the Decentralized Autonomous Organization (DAO) attack [7] in 2016, where $60 million worth of Ether was stolen by recursively withdrawing funds before the contract’s balance could be updated. Access control vulnerabilities were exposed in the Parity multisig wallet attack [8] of 2017, resulting in a loss of more than $150 million due to improper initialisation of wallet ownership. Similarly, integer overflow/underflow issues allowed the BatchOverflow exploit [9] in 2018, which was used to manipulate token balances, threatening the integrity of several ERC-20 Ethereum tokens. This demonstrates the severe financial and operational risks associated with flawed smart contract implementations.
Reentrancy vulnerabilities occur when a contract makes an external call to another contract before updating its own state, allowing an attacker to repeatedly call the function and drain funds. Access control vulnerabilities arise when the permissions governing who can execute critical functions are improperly defined or misconfigured, enabling unauthorised access. Integer overflow/underflow happens when arithmetic operations exceed or drop below the range of representable numbers, leading to incorrect contract behaviour or manipulation of balances.
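The reentrancy pattern just described can be illustrated with a small executable model. The following Python sketch (the `VulnerableBank` and `Attacker` classes are illustrative stand-ins, not the contracts involved in the DAO attack) shows how performing the external call before the state update lets a caller withdraw the same balance repeatedly:

```python
class VulnerableBank:
    """Models a contract that sends funds BEFORE updating its own state."""

    def __init__(self, balances):
        self.balances = dict(balances)

    def withdraw(self, caller):
        amount = self.balances.get(caller.address, 0)
        if amount > 0:
            caller.receive(self, amount)       # external call first...
            self.balances[caller.address] = 0  # ...state update last: too late


class Attacker:
    """Re-enters withdraw() from its receive hook, draining extra funds."""

    address = "0xattacker"

    def __init__(self, max_reentries):
        self.max_reentries = max_reentries
        self.stolen = 0

    def receive(self, bank, amount):
        self.stolen += amount
        if self.max_reentries > 0:
            self.max_reentries -= 1
            bank.withdraw(self)  # re-enter before the balance is zeroed


bank = VulnerableBank({"0xattacker": 100})
attacker = Attacker(max_reentries=2)
bank.withdraw(attacker)
print(attacker.stolen)  # → 300: the 100-unit balance is withdrawn three times
```

Updating the balance before the external call (the checks-effects-interactions pattern) removes the window that the recursive call exploits.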
As traditional code audits and manual inspections are often inadequate for identifying vulnerabilities in complex smart contracts [10], there is a need for lightweight and automated solutions to effectively detect and mitigate vulnerabilities in smart contracts [11].

1.1. Research Contributions

In this paper, we present LightCross, a lightweight vulnerability detection tool that integrates Mythril and Slither tools. To achieve vulnerability detection while maintaining practical performance, LightCross integrates two complementary analysis tools: Slither for fast static analysis and broad vulnerability coverage, and Mythril for symbolic execution to catch complex logic flaws. This combination addresses the trade-off between analysis speed and detection accuracy in smart contract security tools.
LightCross does not require a container infrastructure such as Docker (version 4.44.3); its only major requirement is Python 3. We perform an experimental and comparative analysis of LightCross in relation to SmartBugs [12]: LightCross detects vulnerabilities at a rate similar to SmartBugs while requiring fewer dependencies and no Docker or other container configuration and orchestration. The main contributions of our research are:
  • The design and development of LightCross, a lightweight vulnerability detection tool that helps developers and testers improve the security of smart contracts (Section 3).
  • Composition of Mythril and Slither to provide a unified vulnerability detection tool for smart contracts without depending on a Docker container infrastructure (Section 3).
  • Presentation of verification results according to OpenSCV [13], a state-of-the-art academic classification of smart contract vulnerabilities aligned with the industry-standard CWE [14], which improves upon the established but unmaintained SWC [15] classification.
  • An experimental evaluation of LightCross in relation to SmartBugs tool version 2.0.14 (Section 4) demonstrating the effectiveness of a lightweight approach.

1.2. Research Methodology

The four-step research methodology for the study consists of:
  • Literature Review: In Section 2, we review related work on smart contract vulnerability analysis and classification. We discuss well-known vulnerabilities in smart contracts, their classification, and detection tools.
  • System Design: In Section 3, we present the LightCross design as a modular system that imports smart contracts, processes them in parallel with the back-end tools, and aggregates the results into a CSV file.
  • Implementation: Also described in Section 3, we discuss the Python implementation of the tool, the functionalities of each program and the integration of the backend tools.
  • Evaluation: We provide a performance evaluation of the proposed LightCross tool in Section 4, where we use the SmartBugs curated dataset [16] to assess its vulnerability detection rate. We measure True Positives (TP), False Positives (FP), False Negatives (FN), Recall, Precision, and F1 score, and then perform a comparative analysis against the independent execution of Slither, Mythril, and SmartBugs.

2. Background

Blockchain technology, originally created as a distributed ledger for cryptocurrencies, has evolved to encompass a wide range of applications beyond digital currency transactions [17]. The technology’s key features, immutability, decentralisation, and transparency, have made it an attractive choice for various sectors seeking to enhance trust and security in digital interactions.
The concept of smart contracts gained prominence with the launch of Ethereum in 2015, which fundamentally changed the landscape of digital agreements [18]. Ethereum’s smart contract platform enables the execution of self-enforcing agreements in a decentralised manner. As a result, smart contracts have become a fundamental component of blockchain ecosystems, enabling a wide range of applications, including decentralised finance (DeFi), supply chain management, digital identity verification, and gaming platforms.

2.1. Security Vulnerabilities in Smart Contracts

While smart contracts may exhibit vulnerabilities similar to those found in traditional software, such as integer overflows, they also introduce blockchain-specific risks [19]. The security of a smart contract is significantly influenced by the robustness of its programming language [20,21], and as emphasised by Sirer [22], smart contracts should be developed with the same rigour as code for critical systems.
We discuss vulnerability classification in detail in Section 2.2, but it should be noted that various definitions exist for certain blockchain-specific vulnerabilities. For instance, what Atzei et al. [7] refer to as unpredictable state is known as Transaction Order Dependency (TOD) to other authors [23]. Similarly, the vulnerability commonly termed reentrancy in some publications (e.g., [24]) is also described in terms of Effectively Callback-Free objects.
Atzei et al. [7] propose a taxonomy of smart contract vulnerabilities, providing a comprehensive framework for vulnerability analysis.
Well-known vulnerabilities specific to smart contracts are described below:
  • Reentrancy: The reentrancy vulnerability occurs when a contract allows external calls to untrusted contracts before updating its internal state. This allows an attacker to repeatedly call a vulnerable function and drain funds by taking advantage of the contract’s unprotected state. For example, a malicious user could exploit contract (A) to recursively call the withdraw function of contract (B) before the contract’s state is updated. This can potentially lead to unexpected behaviour and exploitation of the smart contract’s logic [25,26].
  • Access Control: Inadequate access control mechanisms can allow unauthorised users or contracts to access sensitive smart contract’s functions or data [27]. Proper management of permissions is crucial for preventing unauthorised access. For instance, the smart contract should always verify the user’s permissions before executing the withdrawal function. In the context of Ethereum, the Solidity programming language offers access modifiers. Additionally, developers can add require statements embedded inside functions, where the transaction is rolled back if the condition of the require statement is not fulfilled.
  • Integer Overflow/Underflow: Integer overflow/underflow vulnerabilities occur when a computed value exceeds the maximum representable size or falls below the minimum representable size for a particular data type. In Solidity, an unsigned integer is defined as uint256 [28], which is limited to 256 bits in size; this translates to integers between 0 and 2^256 − 1. These vulnerabilities can lead to unintended consequences, including loss of funds, unauthorised access, or DoS attacks [29]. However, since Solidity 0.8.0, these operations are automatically checked, and overflows/underflows cause the transaction to revert.
  • Timestamp Dependency: This vulnerability affects smart contracts, particularly on the Ethereum blockchain. It is closely related to the “Unpredictable State” category and pertains to the dependence of a contract’s behaviour on the current block’s timestamp. The block timestamp is a dynamic value that can be manipulated, potentially leading to security issues [30]. Smart contracts may rely on the block timestamp for various purposes, such as time-based triggers or access control. However, malicious miners in proof-of-work or validators in proof-of-stake systems could manipulate the timestamp to gain an advantage or exploit certain time-dependent functions, leading to unpredictable behaviour or vulnerabilities [8].
  • Authorisation through tx.origin: In Solidity, there exists a global variable known as tx.origin, which provides the address of the sender of a transaction. Utilising this variable for authorisation purposes can potentially introduce vulnerabilities to a contract [29]. This vulnerability arises when an authorised account interacts with a malicious contract, potentially passing the authorisation check because tx.origin reveals the original sender of the transaction [31]. To mitigate this risk, it is recommended to use “msg.sender” for authorisation purposes.
  • Frozen Ether/Token: The Frozen Ether/Token vulnerability occurs when funds become inaccessible or “frozen” due to a flaw in the contract’s code. This can happen if certain conditions are not properly handled, allowing an attacker to lock up funds indefinitely. This vulnerability is frequently caused by dependency on external contract libraries. Core functionalities may be compromised if an external library is updated or terminated. This flaw resulted in the aforementioned Parity multi-signature wallet bug [19].
  • Transaction Ordering Dependency (TOD): This vulnerability is a critical security concern in smart contracts, highlighting the potential risks associated with decision-making based on evolving information. In a TOD vulnerability scenario, the outcome of a smart contract function or transaction is influenced by external factors that change between the contract’s initiation and execution phases [23]. This time gap creates a window of opportunity for malicious actors to exploit discrepancies, leading to unexpected and potentially harmful results.
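To make the arithmetic item in the list above concrete, the following Python sketch models Solidity's uint256 semantics: unchecked (pre-0.8.0) addition wraps around silently, while checked (0.8.0+) addition reverts. The function names are illustrative, not Solidity built-ins:

```python
UINT256_MAX = 2**256 - 1

def unchecked_add(a, b):
    """Pre-0.8.0 Solidity semantics: addition wraps modulo 2**256."""
    return (a + b) % 2**256

def checked_add(a, b):
    """Solidity 0.8.0+ semantics: overflow reverts the transaction."""
    result = a + b
    if result > UINT256_MAX:
        raise OverflowError("transaction reverted: arithmetic overflow")
    return result

print(unchecked_add(UINT256_MAX, 1))  # → 0 (silent wraparound)
try:
    checked_add(UINT256_MAX, 1)
except OverflowError as exc:
    print(exc)  # → transaction reverted: arithmetic overflow
```

The silent wraparound in the first case is exactly what BatchOverflow-style exploits rely on: a product or sum wraps to a small value, so a balance check passes while each recipient is still credited the full amount.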

2.2. Smart Contract Vulnerabilities Classification

Smart contract vulnerability taxonomies are standardised classifications for identifying and categorising security flaws, enabling consistent communication among developers, auditors, and researchers. Furthermore, they facilitate the evaluation of the capabilities of detection tools and support the development of guidelines for secure development practices and programming language design [23,32].
Vidal et al. [13] present a comprehensive review of both community-driven and academic taxonomies. Ruggiero et al. [33] propose a unified data model that formalises key entities and relationships in smart contract vulnerability taxonomies. In this work, we focus on the most relevant classifications, both in general and in relation to our research contribution.
Early efforts to classify vulnerabilities in smart contracts emerged from industry and community initiatives. The Smart Contract Weakness Classification (SWC) [15] provides a structured taxonomy that categorises smart contract vulnerabilities along with descriptions and test cases. This framework has been extensively integrated into security tools (Mythril and Slither, among others) and auditing processes, ensuring consistent vulnerability identification. A notable feature is its mapping of the 37 SWC vulnerabilities to the Common Weakness Enumeration (CWE) [14], an established software security classification system.
The Decentralised Application Security Project (DASP) [34] focuses on the “Top 10” smart contract vulnerabilities, including real-world attack examples and impact assessments. However, it covers only nine concrete vulnerability classes, with the tenth category reserved for “unknown unknowns”. Both DASP and SWC have remained outdated and unmaintained since 2018.
Academic research (e.g., [7,29]) has since advanced the field through fine-grained and more up-to-date classifications.
Rameder et al. [23] present a taxonomy derived from a Systematic Literature Review (SLR), offering comprehensive vulnerability categorisation that synthesises findings from academic papers and technical reports. Their framework groups 54 vulnerabilities into 10 categories, incorporating vulnerabilities identified by various detection tools.
OpenSCV [13], the open taxonomy for Smart Contract Vulnerabilities, adopts a hierarchical structure organised across three levels:
  • High-level categories (e.g., Unsafe External Calls)
  • Vulnerability groups (e.g., Reentrancy)
  • Individual vulnerabilities (e.g., Unsafe Credit Transfer)
With coverage of 94 vulnerabilities, it represents one of the most comprehensive and current classifications. OpenSCV establishes mappings to the DASP, SWC, Rameder et al. taxonomy, CWE, and IBM’s Orthogonal Defect Classification (ODC) for Software Design and Code [35]. The classification has been validated through expert questionnaires and is open to community contributions.
However, as noted in [23], mapping vulnerabilities across different classifications presents challenges. For example, DASP appears too coarse with only 10 categories, while standards such as SWC lack coverage of the categories present in newer classifications (e.g., [13,23]). Specifically, Rameder et al. [23] successfully map only 22 SWC categories within their framework. Table 1 presents a comparison of the main smart contract vulnerability taxonomies, while Table 2 lists their repositories.
Table 3 presents examples illustrating how OpenSCV achieves greater granularity and precision than SWC. For example, SWC-101 aggregates a variety of arithmetic faults under a single identifier, whereas OpenSCV decomposes these into a structured hierarchy of six distinct vulnerabilities across overflow, division, and conversion categories. In the Incorrect Control Flow category (OpenSCV category 6), OpenSCV distinguishes multiple vulnerabilities that SWC either groups together (e.g., four different transaction-order related defects all mapped to SWC-114) or omits entirely (e.g., 6.1.7 Exposed State Variables and 6.1.8 Wrong Transaction Definition have no SWC equivalent). This finer granularity reduces ambiguity, supports more precise remediation guidance, and aligns more clearly with CWE, while preserving backward compatibility via SWC mappings.
In summary, OpenSCV currently stands as the most current and fine-grained classification scheme, offering hierarchical organisation while maintaining cross-domain compatibility with CWE. However, SWC retains relevance as many tools continue to classify findings according to this established framework.
For this reason, for our LightCross tool, we adopted the OpenSCV classification and its associated vulnerability mappings while retaining SWC as a reference, since back-end tools such as Mythril can provide outputs tagged with SWC codes. For Slither, we map the detectors to SWC codes.
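A minimal sketch of such a detector-to-SWC lookup is shown below in Python. The detector names are genuine Slither detector identifiers, and the pairing of `arbitrary-send-eth` with SWC-105 follows the example given later in Section 3.3, but the table as a whole is illustrative rather than LightCross's full mapping:

```python
# Illustrative mapping from Slither detector names to SWC identifiers.
SLITHER_TO_SWC = {
    "reentrancy-eth": "SWC-107",      # Reentrancy
    "arbitrary-send-eth": "SWC-105",  # Unprotected Ether Withdrawal
    "tx-origin": "SWC-115",           # Authorization through tx.origin
    "timestamp": "SWC-116",           # Block values as a proxy for time
    "suicidal": "SWC-106",            # Unprotected SELFDESTRUCT
}

def to_swc(detector_name):
    """Return the SWC ID for a Slither detector, or None if unmapped."""
    return SLITHER_TO_SWC.get(detector_name)

print(to_swc("arbitrary-send-eth"))  # → SWC-105
```

Once a finding carries an SWC ID, whether native (Mythril) or mapped (Slither), the same SWC-to-OpenSCV translation can be applied uniformly downstream.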
Finally, it is worth mentioning that the OWASP Smart Contract Security Verification Standard (SCSVS) [36] is currently under development. As of the time of writing, version 0.0.1 is the only one publicly available. This OWASP initiative outlines security requirements for smart contracts, particularly those written in Solidity for EVM-based blockchains, and aims to guide the verification of secure smart contracts and decentralised applications. Complementing SCSVS, the OWASP Smart Contract Weakness Enumeration (SCWE) [37] catalogues 84 common security and privacy weaknesses specific to smart contracts, and their corresponding CWEs.

2.3. Vulnerability Detection Tools & Related Works

Several studies have evaluated and compared existing tools for smart contract vulnerability detection, considering factors such as the types of vulnerability detected, the accuracy of detection, and the efficiency of the tools. For example, studies by Fontein [38] and Gupta et al. [39] compared several widely used tools using a benchmark dataset of vulnerable smart contracts.
Table 4 illustrates the ability of the tools listed below to detect the vulnerabilities considered in Section 2.1. Such capabilities vary depending on tool versions, configurations, and contract complexity, and in general, reflect theoretical capacities under idealised conditions (e.g., isolated contract analysis).
  • Manticore uses symbolic execution techniques. Mossberg et al. [40] described Manticore’s capabilities in analysing smart contracts through symbolic execution. This tool enables the identification of vulnerabilities that may manifest under specific input conditions. While it can be used alongside static analysis tools, Manticore’s symbolic execution approach provides a distinct method of analysis that can uncover complex, state-dependent vulnerabilities.
  • Mythril uses symbolic execution to examine smart contracts. Bonomi et al. [41] assessed Mythril’s vulnerability detection capabilities by applying it to actual smart contracts sourced from Code4arena competitions and comparing the results with official security audits conducted during those events. Their study identifies potential inefficiencies in the tool, suggesting directions for the development of more scalable and effective methods for testing smart contracts.
  • MythX is notable for its integration with popular smart contract development environments, providing real-time feedback to developers [42]. Seamless integration improves the overall development experience by identifying vulnerabilities in the initial stages [28].
  • Oyente uses a static analysis approach for anomaly detection. Gupta et al. [39] highlighted the importance of Oyente in identifying vulnerabilities by analysing the bytecode of smart contracts, in particular in terms of the detection of potential security flaws during the contract development phase.
  • Securify takes a formal verification approach to smart contract security. Tsankov et al. [43] explored the application of Securify to ensure security assurance through mathematical proofs. This tool adds an extra layer of assurance by formally verifying that smart contracts adhere to predefined security specifications [31].
  • Slither is a free open source tool to inspect the Solidity code and identify potential problems. Released in 2018, it is developed in Python, using a unique intermediate representation known as SlithIR. Slither finds vulnerabilities using a combination of data flow analysis and tracking approaches. It detects a wide range of vulnerabilities, including reentrancy, frozen ether, and integer overflows, with high accuracy [44]. Its comprehensive coverage makes it one of the most robust tools for smart contract security.
A notable framework is SmartBugs [12], which supports 20 tools for analysing Solidity source code, deployment bytecode, and runtime code. SmartBugs optimises resource usage through parallel and randomised task execution, and it maintains a standardised output format across tools, with scripts for parsing and normalising results. However, its reliance on container infrastructure poses challenges when analysing smart contracts in environments without container support.
Ressi et al. [45] conducted a qualitative analysis of machine learning-based vulnerability detection in Ethereum smart contracts. The study categorises existing tools, highlights limitations like restricted vulnerability coverage and dataset flaws, and introduces new metrics to compare solutions.

3. System Design and Implementation

This section introduces the LightCross tool. Figure 1 provides an overview of the five main modules and the workflow of the tool. The workflow begins with smart contracts under test being processed through an import procedure, which feeds them into a subprocess orchestrator. This orchestrator distributes the smart contract code to multiple back-end detector tools, specifically Mythril and Slither, which perform a parallel vulnerability scan using different detection techniques. The detection results are then sent to the results aggregator that consolidates the findings. The consolidated results are subjected to vulnerability classification, which leverages standardised vulnerability mappings (including SWC, OpenSCV, CWE, DASP, and Rameder taxonomies) to categorise detected issues according to established security frameworks. Finally, a reporter component transforms these classified vulnerabilities into a structured CSV report, providing an overview of security issues identified in smart contracts. This architecture enables efficient multi-tool analysis while standardising vulnerability reporting across different detection engines.

3.1. Importer

In this module, we analyse the inputs from the command line to check if they are either folders that include smart contracts or a single smart contract file. LightCross supports running both Mythril and Slither or one of them. After creating a list of smart contract folders for evaluation, the importer module creates a list of files and passes the information to the next module.
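The importer's behaviour can be sketched as follows; `collect_contracts` is a hypothetical name used for illustration, not the actual LightCross function:

```python
from pathlib import Path

def collect_contracts(inputs, extensions=(".sol",)):
    """Expand a mix of folders and single files into a flat list of
    smart contract paths, mirroring the importer module's behaviour."""
    files = []
    for item in inputs:
        path = Path(item)
        if path.is_dir():
            # Recursively gather contract files from the folder.
            files.extend(sorted(p for p in path.rglob("*")
                                if p.suffix in extensions))
        elif path.suffix in extensions:
            files.append(path)
    return files
```

The resulting file list is what the importer hands to the subprocess orchestrator described next.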

3.2. Subprocess Orchestrator

This module takes as input the list of files and the selected tools. For each smart contract file, LightCross creates a subprocess for each detector specified as parameters. For instance, if both detectors are selected, then Slither will first evaluate the smart contract, and then Mythril will analyse the file as well. We chose to always start with Slither as it takes less time to finish than Mythril. Running Slither usually takes one or two seconds. We iterate over all the files and create subprocesses for the detectors. Each of the created subprocesses receives as input the path of the smart contract and configuration specific to each detector. The output of each subprocess is a report which is then fed to the analysis tools module.
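A simplified sketch of this orchestration is shown below, assuming standard command-line invocations of the two detectors; the exact commands, options, and the `run_detectors` name are illustrative, and the real invocations depend on installed tool versions and LightCross's configuration:

```python
import subprocess

def run_detectors(contract_path, detectors=("slither", "mythril"),
                  commands=None):
    """Run each selected detector as a subprocess on one contract and
    collect its textual report. Slither comes first by default, as it
    typically finishes in a second or two."""
    if commands is None:
        # Illustrative command lines for the two back-end detectors.
        commands = {
            "slither": ["slither", str(contract_path)],
            "mythril": ["myth", "analyze", str(contract_path)],
        }
    reports = {}
    for name in detectors:
        proc = subprocess.run(commands[name], capture_output=True, text=True)
        reports[name] = proc.stdout + proc.stderr
    return reports

# Demo with a stand-in command (the real detectors need not be installed):
reports = run_detectors("Token.sol", detectors=("echo",),
                        commands={"echo": ["echo", "no issues found"]})
print(reports["echo"].strip())  # → no issues found
```

Each collected report is then handed to the analysis module for parsing.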

3.3. Analysis Tools and Aggregator

We parse each report to extract the data for the vulnerability analysis. For Slither, we employ a regular expression to capture the summary and the reported issues; for each issue, we extract its name, impact, and confidence. We also map the issue names emitted by Slither to corresponding SWC IDs: for example, when Slither reports arbitrary-send-eth, we map it to SWC-105 (Unprotected Ether Withdrawal). Mythril directly includes the SWC ID in its report, such as Unprotected_Ether_Withdrawal_SWC_105, which we parse to extract the SWC ID. From the Mythril report we extract the following data: title, details, SWC ID, severity, contract, function name, pc address, gas usage, file path, and description. In addition, the analysis time for the smart contract is recorded. All extracted data from Slither and Mythril are sent to the Results Aggregator.
The Results Aggregator module collates all the data for all smart contracts for both Slither and Mythril. We integrate all results into one dataset with a standardised format, which is passed to the next module for classification.
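The per-issue extraction can be sketched as below. The report line format used here is a deliberately simplified stand-in for Slither's actual output, and `parse_report` is an illustrative name:

```python
import re

# Simplified stand-in for a detector report line; the real Slither report
# format differs, and LightCross's actual regular expressions are more
# involved.
ISSUE_RE = re.compile(
    r"^(?P<name>[\w-]+)\s+impact=(?P<impact>\w+)\s+confidence=(?P<confidence>\w+)$"
)

def parse_report(text):
    """Extract (name, impact, confidence) records from a detector report."""
    issues = []
    for line in text.splitlines():
        m = ISSUE_RE.match(line.strip())
        if m:
            issues.append(m.groupdict())
    return issues

sample = """\
arbitrary-send-eth impact=High confidence=Medium
tx-origin impact=Medium confidence=Medium
"""
issues = parse_report(sample)
print([i["name"] for i in issues])  # → ['arbitrary-send-eth', 'tx-origin']
```

The aggregator then merges such per-tool records into a single dataset with a standardised schema.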

3.4. Vulnerability Classification and Reporting

Classification starts from the SWC IDs reported by Mythril and from a heuristic assignment of Slither's detector outputs to SWC IDs. Initially, we import the baseline data for the SmartBugs curated dataset [16] to assess whether the detectors have accurately identified the vulnerabilities. The next step is to map the SWC ID to the DASP categories; although the DASP classification is coarse, vulnerabilities in the baseline data are specified using DASP. We use the OpenScvVulnerabilityMapper component to map each vulnerability to the OpenSCV classification. We initialise the OpenSCV dataset, which includes a mapping of OpenSCV classification entries to SWC IDs; an SWC ID can be assigned to one or more OpenSCV entries. For example, SWC-101 corresponds to multiple vulnerabilities, such as integer underflow/overflow, divide-by-zero, integer division truncation, and signedness bugs. To determine the correct OpenSCV entry for a vulnerability, we use keyword-based matching over the vulnerability descriptions and synonyms included in the OpenSCV dataset. In the following, we outline the procedure for mapping the vulnerabilities (Figure 2):
  • The OpenScvVulnerabilityMapper component initialises by loading the vulnerability CSV files and the OpenSCV database file, validating their existence and accessibility.
  • It processes these inputs by merging the vulnerability files and preparing the OpenSCV data for efficient lookup.
  • It establishes mapping rules based on SWC IDs and relevant keywords extracted from vulnerability descriptions and synonyms.
  • For each vulnerability entry, it applies a two-tier analysis: first, it attempts direct matching via SWC-ID fields, if present; if this is unsuccessful, or if more than one candidate remains, it employs keyword matching with a scoring system to select the corresponding OpenSCV classification entry.
  • Once a vulnerability is mapped, it retrieves the corresponding detailed entries from the OpenSCV classification.
  • Finally, it exports the results to CSV and JSON files and generates visualisation plots for analysis, creating a complete audit trail.
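The two-tier matching in the procedure above can be sketched as follows; the OpenSCV entries, identifiers, and keyword sets shown are small illustrative samples, not the real dataset, and `map_to_openscv` is a hypothetical name:

```python
# Illustrative sketch of the two-tier SWC -> OpenSCV mapping: direct SWC-ID
# lookup first, then keyword scoring against descriptions/synonyms.
OPENSCV = [
    {"id": "2.1.1", "name": "Integer Overflow and Underflow",
     "swc": "SWC-101", "keywords": {"overflow", "underflow", "wrap"}},
    {"id": "2.2.1", "name": "Division by Zero",
     "swc": "SWC-101", "keywords": {"division", "zero", "divide"}},
]

def map_to_openscv(swc_id, description):
    """Return the best-matching OpenSCV entry for a finding."""
    candidates = [e for e in OPENSCV if e["swc"] == swc_id]
    if len(candidates) == 1:
        return candidates[0]  # tier 1: unambiguous SWC match
    # tier 2: keyword scoring among the remaining candidates
    words = set(description.lower().split())
    return max(candidates,
               key=lambda e: len(e["keywords"] & words),
               default=None)

entry = map_to_openscv("SWC-101", "possible integer overflow in addition")
print(entry["id"])  # → 2.1.1
```

Because SWC-101 maps to several OpenSCV entries, only the keyword score disambiguates between the overflow and division candidates here.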
Table 5 provides a representative example of the result of the mapping from SWC to OpenSCV. After classifying the vulnerabilities, the reporter module computes the metrics to assess if the detectors have accurately reported the vulnerabilities according to the baseline data. In addition, this module plots the result dataset according to the DASP category and computed metrics such as recall, precision, and F1 score.

4. Evaluation

In this section, we first discuss the baseline data set that we used to evaluate the effectiveness of LightCross. We compare the effectiveness of Slither, Mythril, LightCross and SmartBugs. In addition, the execution time for each category of smart contracts is discussed in Section 4.4.

4.1. Metrics for Vulnerability Detection

To assess the performance of vulnerability detection tools, we evaluated their effectiveness using precision, recall, and F1 score metrics. Precision measures the proportion of correctly identified vulnerabilities among all reported by the tool, while recall assesses the proportion of actual vulnerabilities that the tool successfully identifies. These metrics are widely used to assess the accuracy of tools in detecting vulnerabilities (e.g., [46]).
In this context, we define a positive outcome as the presence of a vulnerability and a negative outcome as its absence.
  • True Positive (TP): the tool correctly identifies a vulnerability present in the smart contract, matching the vulnerability specified in the baseline dataset.
  • False Positive (FP): the tool reports a vulnerability in a smart contract that does not match the vulnerability specified in the baseline dataset.
  • False Negative (FN): the tool does not identify a vulnerability in the smart contract while there is one specified in the baseline dataset.
  • True Negative (TN): the tool correctly identifies that a smart contract is not vulnerable. Note that the baseline dataset does not include any smart contract that is free of vulnerabilities.
Since true negatives are often not easily measurable in vulnerability datasets, we focus on precision, recall, and the F1 score. We define precision in the context of vulnerability detection as the proportion of positive vulnerability identifications that were actually correct. For example, high precision indicates there are minimal false positives, meaning that when the tool identifies a contract as vulnerable, it is generally accurate. We compute the precision using Equation (1):
Precision = TP / (TP + FP)    (1)
Recall is the proportion of actual positives that are correctly identified. A high recall indicates a low false-negative rate, meaning the tool identifies most actual vulnerabilities. Expressed as a percentage, it denotes the share of all actual vulnerabilities that the tool successfully found. We compute the recall using Equation (2):
Recall = TP / (TP + FN)    (2)
The F1 score is the harmonic mean of precision and recall. A high F1 score indicates that the vulnerability detection tool is effective in finding vulnerabilities when they exist and in avoiding false alarms in secure smart contracts. We compute the F1 score using Equation (3):
F1 = 2TP / (2TP + FP + FN)    (3)
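Equations (1)–(3) can be computed directly from the confusion-matrix counts. The following minimal Python helper is illustrative only (it is not part of LightCross) and uses made-up example counts:

```python
def detection_metrics(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall and F1 score from true/false positives and false negatives."""
    precision = tp / (tp + fp)          # Equation (1)
    recall = tp / (tp + fn)             # Equation (2)
    f1 = 2 * tp / (2 * tp + fp + fn)    # Equation (3): harmonic mean of the two
    return precision, recall, f1


# Illustrative counts (not taken from the paper's tables):
precision, recall, f1 = detection_metrics(tp=30, fp=10, fn=20)
# precision = 0.75, recall = 0.60, f1 = 0.666...
```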

4.2. SmartBugs Curated Dataset

We use the SmartBugs curated dataset to assess the effectiveness of LightCross in detecting vulnerabilities. The first version of the dataset included 69 vulnerable contracts [47]. Each smart contract contains a number of vulnerabilities that are labelled in comments, including the category and location of the vulnerability. The dataset is divided into nine categories according to the DASP taxonomy [34]. The updated version of the curated dataset contains 143 vulnerable smart contracts [16], and this is the version that we use to evaluate LightCross. Each smart contract file includes one or more vulnerabilities, each annotated with a comment indicating the vulnerability category. The total number of vulnerabilities is N = 206, and the breakdown of the vulnerabilities by category is depicted in Table 6.
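Such comment annotations can be counted mechanically. The sketch below assumes a hypothetical `// <yes> <report> CATEGORY` comment style for illustration, which may differ from the dataset's exact markup:

```python
import re
from collections import Counter

# Hypothetical annotation style: a comment such as
# "// <yes> <report> REENTRANCY" marks the vulnerable statement below it.
ANNOTATION = re.compile(r"//\s*<yes>\s*<report>\s*(\w+)")


def count_vulnerabilities(source: str) -> Counter:
    """Count labelled vulnerabilities per category in one contract file."""
    return Counter(m.group(1) for m in ANNOTATION.finditer(source))


sample = """
contract Wallet {
    function withdraw(uint amount) public {
        // <yes> <report> REENTRANCY
        msg.sender.call.value(amount)();
    }
}
"""
# count_vulnerabilities(sample) -> Counter({'REENTRANCY': 1})
```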

4.3. Experimental Results

In this subsection, the results achieved by LightCross for the different types of vulnerabilities are compared against Slither, Mythril, and SmartBugs, each run independently. We also execute SmartBugs with all the available tools and determine the best combination of two detectors in terms of vulnerability detection and execution time.
Table 7 shows that, across all vulnerability categories, Mythril executed within LightCross achieves 31% precision, indicating that only 31% of the vulnerabilities reported by Mythril are actual vulnerabilities. Its 84% recall shows that Mythril finds 84% of the actual vulnerabilities. This suggests that Mythril is tuned to flag many potential issues, catching most real vulnerabilities (high recall) at the cost of many false alarms (low precision). In smart contract security, achieving high recall is often deemed more important than precision, because overlooking even a single vulnerability carries significant financial risk, potentially resulting in exploits and financial losses. However, an abundance of false positives can unnecessarily consume developer effort by prompting investigations into issues that do not exist.
In production environments, particularly CI/CD pipelines, excessive false positives can lead to alert fatigue, where developers can begin to ignore or automatically dismiss warnings, potentially overlooking genuine security vulnerabilities. Similarly, in formal security audit contexts, high false positive rates increase manual review overhead and may undermine auditor confidence in automated smart contract analysis results. In addition, the extended time to perform the security audit creates a higher cost for the audited company.
Mythril’s F1 score of 45% reflects the balance between precision and recall. Since this value is closer to the precision than to the recall, it shows the significant impact of low precision (high false-positive rate) on Mythril’s overall performance. Slither demonstrates better precision (73%) than Mythril (31%). However, Slither exhibits a recall of 45%, which is lower than Mythril’s. Slither’s F1 score (58%) is better than Mythril’s (45%). These results indicate that Slither produces fewer false positives while still maintaining reasonable detection capability, resulting in a better overall balance between precision and recall.
Detection coverage. Table 8 presents the detection coverage results, which reveal shortcomings in both vulnerability detectors used in LightCross. Overall, Mythril demonstrates moderately better detection capability with 53% coverage compared to Slither’s 48%, but both tools miss nearly half of all vulnerabilities in the SmartBugs curated dataset. Looking at specific categories, each tool has notable strengths and weaknesses. Mythril excels at identifying reentrancy vulnerabilities, with a high coverage rate of 94%. It shows moderate effectiveness in detecting access control issues and unchecked return values, achieving coverage levels of 62% and 61%, respectively. However, its performance is weak in other categories, with only 10% coverage for bad randomness and 14% each for denial of service and front-running vulnerabilities. Slither exhibits complementary strengths, achieving complete detection of unknown unknowns with a success rate of 100% and demonstrating robust effectiveness in identifying reentrancy vulnerabilities (90%) and unchecked return values (66%). However, it fails to identify vulnerabilities in three categories: arithmetic issues, front-running, and short addresses. These results highlight the importance of combining multiple detection tools, as neither tool alone provides comprehensive coverage across all types of vulnerabilities. The subpar performance of both tools in detecting bad randomness vulnerabilities (10% each), despite their prevalence in the dataset (30 vulnerabilities), indicates a critical area for improvement in smart contract security tooling.
The large number of execution paths overwhelms Mythril’s constraint solver, which can cause timeouts before bad randomness vulnerabilities are detected (e.g., in smart_billions.sol). In addition, both Slither and Mythril lack an understanding of the broader context in which random values are used within the smart contract. One way to mitigate this is to create domain-specific detection modules for Mythril and Slither that capture application-specific randomness requirements. Another is to integrate a taint analysis component to assess bad randomness vulnerabilities in smart contracts, as proposed in [48].
Table 9 depicts the vulnerability detection results for all detectors available in SmartBugs. Analysis of these results across the SmartBugs framework reveals patterns in detector capabilities and category specialisation. Slither demonstrates the highest overall detection rate with 100 true positives and the highest recall (0.48), but suffers from precision issues (0.20) due to 392 false positives. Confuzzius achieves the best balance with the highest F1 score (0.39), maintaining strong recall (0.45) while offering substantially better precision (0.34). Category specialisation is evident throughout the results: Confuzzius is notable in detecting access control vulnerabilities, with 9 true positives. For arithmetic issues, Osiris finds 13 vulnerabilities, Oyente 14, and Mythril 11. In terms of reentrancy detection, Conkas, Confuzzius, Slither, and Smartcheck all achieve high scores of 30, 29, 29, and 29, respectively. For unchecked return values (also known as unchecked low-level calls), Smartcheck and Solhint lead with scores of 51, closely followed by Securify and Slither with 50. Subpar performers include Manticore, with zero true positives due to exceptions during the execution of the detector, and Semgrep, with only a single detection. The results highlight the complementary nature of these tools, suggesting that combining detectors with different strengths, particularly pairing Slither’s high recall with Mythril’s ability to detect vulnerabilities from most categories, produces a more thorough vulnerability report. While Mythril in LightCross identifies 109 vulnerabilities, in SmartBugs it identifies only 81. This is likely due to the timeouts (DOCKER_TIMEOUT) and segmentation faults (DOCKER_SEGV) reported.
Comparison between LightCross and SmartBugs detectors. Our comparative analysis, shown in Table 10, reveals that using only Mythril and Slither in LightCross instead of all detectors in SmartBugs results in minimal vulnerability detection losses in most categories. In the table, the Min column gives the conservative estimate (the maximum of the two tools’ detections, assuming full overlap), and the Max column gives the optimistic estimate (the sum of both tools’ detections, capped at the total number of vulnerabilities in the category). In the following, we focus on the conservative estimate. Compared to running all detectors in the SmartBugs framework, a user switching to LightCross would primarily experience detection losses in five vulnerability types: access control (losing at least 3 vulnerabilities, with Solhint detecting 16 versus a LightCross minimum of 13), arithmetic issues (losing 3, with Oyente detecting 14 versus LightCross detecting 11), front-running (losing 1, with Confuzzius/Securify detecting 2 versus LightCross detecting 1), time manipulation (losing 2, with Conkas, Slither, and Solhint detecting 5 versus LightCross detecting 3), and unchecked return values (losing 1, with Smartcheck and Solhint detecting 51 versus LightCross detecting 50). In particular, LightCross maintains optimal detection capability in the bad randomness, denial of service, reentrancy, and unknown unknowns categories, where Mythril or Slither is already the best-performing tool in SmartBugs. For reentrancy and unchecked return values, two critical vulnerability categories, LightCross demonstrates detection capability equivalent to the best-performing individual SmartBugs tools, with potential for complete coverage depending on tool detection overlap.
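The Min/Max estimates reduce to a simple calculation per category. The helper below is an illustrative sketch (not part of LightCross), shown with made-up detection counts:

```python
def combined_estimates(tool_a: int, tool_b: int, total: int) -> tuple[int, int]:
    """Conservative and optimistic detection estimates for a pair of detectors.

    Conservative (Min): assumes full overlap, so the pair finds at least
    what the better of the two tools finds on its own.
    Optimistic (Max): assumes no overlap, so detections add up, capped at
    the total number of vulnerabilities in the category.
    """
    conservative = max(tool_a, tool_b)
    optimistic = min(tool_a + tool_b, total)
    return conservative, optimistic


# Illustrative counts for one category (not taken from Table 10):
# combined_estimates(11, 0, 22) -> (11, 11)
# combined_estimates(30, 29, 31) -> (30, 31)  # sum 59 is capped at 31
```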
These findings suggest that adopting LightCross offers an efficient vulnerability detection approach with minimal compromise (a potential loss of 10 vulnerabilities across all categories) while significantly reducing the operational complexity associated with maintaining and integrating 14 separate analysis tools deployed as different Docker containers. This efficiency gain is particularly valuable in resource-constrained development environments or continuous integration pipelines where execution speed and reliability are paramount.
Comparable vulnerability detection results have also been reported in [47], which likewise concluded that Slither and Mythril provide the best combination in terms of vulnerability detection (Table 7). Taking into account both the vulnerability detection results obtained with the LightCross and SmartBugs tools and their execution times, we determine that the best combination of two detectors is Slither and Mythril, and these are the detectors used in the LightCross tool.

4.4. Performance Evaluation

For the performance evaluation of LightCross, we used Python 3.12.4, Mythril version 0.24.7, Slither version 0.10.1, and SmartBugs 2.0.10. The experiments were carried out on an Intel(R) Core(TM) i5-6260U CPU with 16 GB of RAM, running the Ubuntu 22.04.5 operating system. LightCross can be executed from the command line or invoked from other Python applications. The tool accepts two main arguments: a single smart contract file to analyse, or the path of a folder, in which case all smart contracts residing in that folder are analysed. We developed a bash script (exec-lightcross.sh) to automate running LightCross over multiple folders containing smart contracts.
LightCross leverages the subprocess Application Programming Interface (API) and consists of two files: run_smartvulscap.py and smartvulscan.py. The first file is the entry point of the tool; it checks whether the input is a smart contract file or a folder and otherwise outputs an error message to the user. The second file is the main component of the tool: for both Mythril and Slither, it creates a subprocess whose input is either a specific Solidity smart contract or a folder containing Solidity smart contracts. Each tool persists its output to a file, which is then parsed to construct a CSV file with the output of each tool in a structured way. Slither is configured with the following options: (a) --exclude-optimization, (b) --exclude-informational, (c) --exclude-low. The first option excludes detectors with optimisation findings, the second excludes detectors with informational findings, and the last removes detectors designed to identify low-risk vulnerabilities. We believe that this configuration offers the best approach for optimising the tool’s output quality by reducing noise and unnecessary information. When running Mythril within LightCross, the only configuration option we use is the execution timeout, which is set to 900 s.
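This orchestration can be sketched with Python's subprocess module. The snippet below is a simplified illustration of the described setup (the function names are ours, not LightCross's actual code), using the Slither options and the Mythril timeout listed above:

```python
import subprocess
from pathlib import Path

# Slither detector filters used by LightCross, as described in the paper.
SLITHER_OPTS = ["--exclude-optimization", "--exclude-informational", "--exclude-low"]
MYTHRIL_TIMEOUT = "900"  # execution timeout in seconds


def build_commands(contract: Path) -> tuple[list[str], list[str]]:
    """Command lines that subprocess-based calls to the two detectors would use."""
    slither_cmd = ["slither", str(contract), *SLITHER_OPTS]
    mythril_cmd = ["myth", "analyze", str(contract),
                   "--execution-timeout", MYTHRIL_TIMEOUT]
    return slither_cmd, mythril_cmd


def run_detector(cmd: list[str]) -> subprocess.CompletedProcess:
    """Run one detector, capturing its output for later parsing into CSV."""
    return subprocess.run(cmd, capture_output=True, text=True)
```

In practice each detector's captured output would be written to a file and parsed into the structured CSV report, as the paper describes.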
Figure 3 illustrates the execution time for analysing smart contracts in the SB-curated dataset using all the tools supported by SmartBugs. Solhint completes its analyses in an average of 18.9 s, Slither in 21.9 s, and Oyente in 25.6 s, positioning them as the fastest tools, generally finishing in less than 30 s. Moderately fast detectors, such as SmartCheck at 33 s, Semgrep at 36.8 s, and Conkas at 99.5 s, balance quick execution with accuracy in identifying issues. In contrast, the slowest detectors, including Manticore with an average of 1186 s, SFuzz with 1262 s, and Confuzzius with 1775 s per analysis, exhibit significantly longer durations, often spanning 20 to 30 min. Confuzzius takes around 100 times longer than the fastest detectors. This difference emphasises the influence of vulnerability complexity on tool efficiency, as demonstrated by categories such as reentrancy, averaging 487.1 s, and unchecked low-level calls, averaging 473.4 s, which require higher processing times due to their complex nature.
Comparing the execution times of SmartBugs and LightCross reveals significant performance differences for the Mythril and Slither detectors. Using SmartBugs, Mythril requires considerable computation time, with execution durations ranging from 190.896 s for front-running vulnerabilities to 1174 s for reentrancy vulnerabilities. In contrast, Slither achieves more favourable performance, ranging from 10.48 s for other vulnerabilities to 47.97 s for short-address vulnerabilities. Upon transitioning to LightCross, both detectors achieve significant efficiency improvements: Mythril’s execution times are reduced to a range from 12.26 s for short addresses to 600.8 s for reentrancy, reflecting up to a 15-fold speed-up for particular vulnerability categories. Slither’s improvement is even more noticeable, with execution times from 0.64 s for access control to 1.13 s for unchecked return values, considerably faster than when using SmartBugs. These observations highlight the important influence of the execution framework on detector efficiency, with LightCross providing performance improvements that could greatly improve the practical usability and effectiveness of these detectors in large-scale security evaluations of smart contracts, especially in a CI/CD environment.
Figure 4 shows the time it takes for LightCross and SmartBugs to analyse the smart contracts in each category of the SB-curated dataset. LightCross outperforms SmartBugs in execution time across most vulnerability categories, particularly when using the Slither detector. The most significant difference is observed with Slither, where LightCross achieves average execution times below one second in most vulnerability categories, varying from 0.6 to 0.9 s. In contrast, SmartBugs requires substantially longer execution durations, ranging from 8.4 to 26.3 s. With the Mythril detector, LightCross again exhibits better performance, evident in its faster execution times in several categories. For example, in the access control category the execution time is 47.2 s compared to 211.3 s, while in the arithmetic category it is 35.6 s as opposed to 270.5 s. However, both tools exhibit similar execution times with Mythril for vulnerability categories such as reentrancy and unchecked low-level calls. This outcome is attributed to the fact that Mythril takes more time to analyse the smart contracts in these categories, since it uses Satisfiability Modulo Theories (SMT) solvers such as Z3 to check whether a feasible execution path exists in the contract.
CPU and memory. To evaluate CPU and memory usage for LightCross, we selected from each category the smart contract with the most lines of code. We developed a process profiler using psutil version 7.0 [49] to measure CPU usage and memory consumption when running Slither and Mythril in LightCross. We evaluated the same contracts using SmartBugs and captured the output of the docker stats [50] command to retrieve each container’s CPU utilisation and memory consumption for Slither and Mythril. In both cases, we recorded the peak CPU utilisation and memory usage. Figure 5 depicts the peak CPU usage for the selected smart contracts. The comparative assessment between LightCross and SmartBugs demonstrates significant differences in CPU utilisation patterns, with LightCross consistently showing higher CPU consumption for both the Slither and Mythril analysis engines. Specifically, LightCross Slither runs averaged 95.52% CPU usage (ranging from 79% to 99.80%), while SmartBugs Slither maintained a more moderate 74.04% average (52.58% to 82.24%). The difference becomes more evident with Mythril, where LightCross averaged 107.50% CPU usage with peaks reaching 120.20%, compared to SmartBugs’s 100.45% average. Notably, the LightCross Mythril analysis exceeded 100% CPU usage for all contracts tested, indicating multi-core resource consumption. The most CPU-intensive analysis occurred with the smart_billions.sol contract under the LightCross Mythril configuration, which consumed 120.20% CPU. Overall, LightCross exhibits higher CPU utilisation than SmartBugs for most of the smart contracts evaluated.
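A peak-usage profiler of the kind described can be sketched with psutil by polling the child process. This is an illustrative reconstruction under our own assumptions, not the profiler used in the paper:

```python
import subprocess
import sys
import time

import psutil  # third-party library; version 7.0 was used in the paper


def profile(cmd: list[str], interval: float = 0.1) -> tuple[float, float]:
    """Run cmd and record the child's peak CPU percentage and peak RSS in MB."""
    proc = subprocess.Popen(cmd)
    ps = psutil.Process(proc.pid)
    ps.cpu_percent(None)  # prime the counter; the first reading is meaningless
    peak_cpu, peak_mem = 0.0, 0.0
    while proc.poll() is None:
        try:
            peak_cpu = max(peak_cpu, ps.cpu_percent(None))
            peak_mem = max(peak_mem, ps.memory_info().rss / 1024**2)
        except psutil.NoSuchProcess:  # child exited between poll() and sampling
            break
        time.sleep(interval)
    proc.wait()
    return peak_cpu, peak_mem


# e.g., profile a short-lived child process instead of a detector run:
# peak_cpu, peak_mem = profile([sys.executable, "-c", "x = [0] * 10**6"])
```

Polling at a fixed interval can miss short spikes between samples, which is an inherent limitation of this sampling approach.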
Figure 6 illustrates the memory consumption measured when running LightCross and SmartBugs. The two platforms exhibit distinct performance characteristics when Slither and Mythril are executed on smart contracts. For Slither, LightCross demonstrates improved memory efficiency with an average consumption of 42.68 MB (ranging from 36.89 to 63.61 MB), outperforming SmartBugs, which averaged 92.10 MB (73.39 to 166.40 MB). This represents a roughly 2× memory advantage for LightCross, indicating that it manages memory more efficiently. In contrast, the Mythril-based symbolic execution analysis presents a different dynamic: LightCross averaged 1010.29 MB (235.19 to 2781.03 MB) compared to SmartBugs’ higher average of 1360.49 MB (388.70 to 4294.00 MB). SmartBugs consumed 34.7% more memory on average during Mythril execution and exhibited greater variability, with peak consumption reaching 4294 MB for the open_address_lottery.sol smart contract, compared to LightCross’ maximum of 2781.03 MB. The memory efficiency ratio of 0.74 (LightCross/SmartBugs) for the Mythril analysis indicates that LightCross maintains more consistent memory usage patterns, although both platforms require substantial memory on average. Notably, Mythril in SmartBugs consumes on average approximately 15× more memory than Slither in SmartBugs; similarly, Mythril in LightCross consumes on average approximately 24× more memory than Slither in LightCross. In conclusion, the results show that LightCross achieves higher CPU utilisation and better memory management than SmartBugs, because LightCross accesses these resources directly, without the overhead exhibited by Docker containers [51].

4.5. Limitations

Despite its contributions, LightCross is affected by some limitations, mostly inherited from the backend tool detection and performance limits:
  • Limited Detection Coverage: The tool relies solely on Mythril and Slither, which detect only 53% and 48% of the vulnerabilities in the SmartBugs dataset, respectively. Critical gaps exist for categories such as bad randomness (10% coverage) and front-running (14%).
  • High False Positives/Negatives: Mythril’s low precision (31%) introduces noise, while Slither misses 52% of vulnerabilities, risking undetected exploits.
  • Performance Bottlenecks: Symbolic execution (e.g., Mythril’s 600 s analysis for reentrancy) limits scalability. Subprocess orchestration lacks advanced parallelism compared to containerised approaches like SmartBugs.
  • Isolated Contract Analysis: LightCross scans contracts in isolation, ignoring vulnerabilities arising from inter-contract interactions (e.g., cross-contract reentrancy).

5. Conclusions and Future Work

In this paper, we presented a lightweight vulnerability detection tool for smart contracts called LightCross. The tool works by creating subprocesses for the Slither and Mythril detectors to analyse smart contracts. We conducted an experimental analysis measuring the vulnerability detection rate on the SB-curated dataset, which contains 143 vulnerable smart contracts with 206 total vulnerabilities. We found that LightCross exhibits the same detection rate as SmartBugs when using the same backend detectors (Slither, Mythril), while not depending on Docker containers. We also measured the time it takes to analyse the smart contracts in each category. The results showed that when using the Mythril detector, LightCross achieves execution times comparable to those of SmartBugs. With Slither in LightCross, analyses complete within 0.64–5.0 s across categories, while SmartBugs extends these times to a range of 5.66–26.35 s. In general, LightCross enables developers to efficiently examine their smart contracts for vulnerabilities using a lightweight tool that does not depend on software containers.
Additionally, to improve user-friendliness and currency, LightCross presents the verification results based on OpenSCV, a state-of-the-art academic classification of smart contract vulnerabilities, aligned with the industry-standard CWE and offering improvements over the unmaintained SWC taxonomy. More generally, we advise tool developers to adopt maintained and up-to-date classifications such as OpenSCV and OWASP SCWE.
Future work could expand on LightCross in several directions. First, integrating additional vulnerability detection tools beyond Slither and Mythril could enhance the tool’s coverage and detection. Second, developing an improved orchestration mechanism for the subprocesses could potentially improve performance, especially for complex contracts or large-scale analyses. For example, we could use the PyCOMPSs framework [52] to improve the parallel processing of a large number of smart contracts. Third, implementing a user-friendly interface, possibly as a web application or IDE plugin, could make LightCross more accessible to a wider range of smart contract developers and security engineers. Additionally, exploring ways to reduce the execution time for the reentrancy category, perhaps through optimisation of the Mythril subprocess, would be valuable. Finally, conducting a more extensive benchmarking against a wider range of smart contract vulnerability detection tools and on larger, more diverse datasets such as the SmartBugs wild dataset [53] would provide a more thorough evaluation of the capabilities and limitations of LightCross. Alternatively, an extensive live dataset can be created using Etherscan to download the smart contracts’ source code from Ethereum Mainnet and testnet as demonstrated in [54].
Furthermore, we could also bridge this research to professional practice by implementing a GUI or IDE extension for LightCross, making the tool more accessible to developers by providing a familiar interface and removing some of the more technical barriers to industrial adoption, as studied by Davis et al. [55].

Author Contributions

Conceptualisation, I.S., P.M., L.G. and M.I.; methodology, I.S., P.M., L.G. and M.I.; software, I.S. and M.I.; validation, I.S., P.M., L.G. and M.I.; formal analysis, I.S. and M.I.; investigation, I.S., P.M., L.G. and M.I.; resources, I.S. and M.I.; data curation, I.S. and M.I.; writing—original draft preparation, I.S., P.M., L.G. and M.I.; writing—review and editing, I.S., P.M., L.G. and M.I.; visualisation, I.S., P.M., L.G. and M.I.; supervision, L.G. and P.M.; project administration, L.G.; funding acquisition, L.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The software code for LightCross can be accessed at https://github.com/gsfyrakis/lightcross (accessed on 18 July 2025), and the SmartBugs curated dataset is available at https://github.com/SmartBugs/SmartBugs-curated (accessed on 18 July 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
API: Application Programming Interface
CSV: Comma Separated Values
CWE: Common Weakness Enumeration
DAO: Decentralized Autonomous Organisation
DASP: Decentralized Application Security Project
DeFi: Decentralised Finance
DoS: Denial of Service
ETH: Ether (Ethereum’s cryptocurrency)
EVM: Ethereum Virtual Machine
FN: False Negative
FNR: False Negative Rate
FP: False Positive
FPR: False Positive Rate
GUI: Graphical User Interface
IDE: Integrated Development Environment
LoC: Lines of Code
ML: Machine Learning
ODC: Orthogonal Defect Classification
OpenSCV: Open taxonomy for Smart Contract Vulnerabilities
PyCOMPSs: Python Programming COMPSs Framework
SB: SmartBugs
SC: Smart Contract
SCSVS: OWASP Smart Contract Security Verification Standard
SCWE: OWASP Smart Contract Weakness Enumeration
SLR: Systematic Literature Review
SMT: Satisfiability Modulo Theories
SWC: Smart Contract Weakness Classification
TN: True Negative
TOD: Transaction Ordering Dependency
TP: True Positive

References

  1. Szabo, N. Formalizing and Securing Relationships on Public Networks. First Monday 1997, 2.
  2. Kowalski, T.; Chowdhury, M.M.; Latif, S.; Kambhampaty, K. Bitcoin: Cryptographic Algorithms, Security Vulnerabilities and Mitigations. In Proceedings of the 2022 IEEE International Conference on Electro Information Technology, EIT 2022, Mankato, MN, USA, 19–21 May 2022; pp. 544–549.
  3. Farokhnia, S.; Goharshady, A.K. Options and Futures Imperil Bitcoin’s Security. In Proceedings of the IEEE International Conference on Blockchain, Blockchain 2024, Copenhagen, Denmark, 19–22 August 2024; pp. 157–164.
  4. Modesti, P.; Shahandashti, S.F.; McCorry, P.; Hao, F. Formal modelling and security analysis of Bitcoin’s payment protocol. Comput. Secur. 2021, 107, 102279.
  5. Samreen, N.F.; Alalfi, M.H. SmartScan: An approach to detect Denial of Service Vulnerability in Ethereum Smart Contracts. In Proceedings of the 2021 IEEE/ACM 4th International Workshop on Emerging Trends in Software Engineering for Blockchain (WETSEB), Madrid, Spain, 31 May 2021; pp. 17–26.
  6. Yu, X.; Zhao, H.; Hou, B.; Ying, Z.; Wu, B. Deescvhunter: A deep learning-based framework for smart contract vulnerability detection. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021; pp. 1–8.
  7. Atzei, N.; Bartoletti, M.; Cimoli, T. A Survey of Attacks on Ethereum Smart Contracts (SoK). In Lecture Notes in Computer Science, Principles of Security and Trust, POST 2017; Maffei, M., Ryan, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2017; Volume 10204, pp. 164–186.
  8. Mueller, B. Smashing Ethereum smart contracts for fun and real profit. HITB SECCONF Amst. 2018, 9, 4–17.
  9. Khan, Z.A.; Namin, A.S. A Survey of Vulnerability Detection Techniques by Smart Contract Tools. IEEE Access 2024, 12, 70870–70910.
  10. Groce, A.; Feist, J.; Grieco, G.; Colburn, M. What are the Actual Flaws in Important Smart Contracts (And How Can We Find Them)? In Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2020; pp. 634–653.
  11. Ji, S.; Kim, D.; Im, H. Evaluating Countermeasures for Verifying the Integrity of Ethereum Smart Contract Applications. IEEE Access 2021, 9, 90029–90042.
  12. Angelo, M.D.; Durieux, T.; Ferreira, J.F.; Salzer, G. SmartBugs 2.0: An Execution Framework for Weakness Detection in Ethereum Smart Contracts. In Proceedings of the 38th IEEE/ACM International Conference on Automated Software Engineering, ASE 2023, Luxembourg, 11–15 September 2023; pp. 2102–2105.
  13. Vidal, F.R.; Ivaki, N.; Laranjeiro, N. OpenSCV: An open hierarchical taxonomy for smart contract vulnerabilities. Empir. Softw. Eng. 2024, 29, 101.
  14. MITRE Corporation. Common Weakness Enumeration (CWE). 2025. Available online: https://cwe.mitre.org/ (accessed on 9 April 2025).
  15. SmartContractSecurity. Smart Contract Weakness Classification (SWC) and Test Cases. 2020. Available online: https://swcregistry.io/ (accessed on 21 August 2025).
  16. Salzer, G.; Ferreira, J.F.; Jin, M. SB Curated: A Curated Dataset of Vulnerable Solidity Smart Contracts. 2023. Available online: https://github.com/smartbugs/smartbugs-curated (accessed on 23 April 2025).
  17. Sklaroff, J.M. Smart contracts and the cost of inflexibility. Univ. Pa. Law Rev. 2017, 166, 263.
  18. Akca, S.; Rajan, A.; Peng, C. SolAnalyser: A framework for analysing and testing smart contracts. In Proceedings of the 2019 26th Asia-Pacific Software Engineering Conference (APSEC), Putrajaya, Malaysia, 2–5 December 2019; pp. 482–489.
  19. Xu, J.; Dang, F.; Ding, X.; Zhou, M. A Survey on Vulnerability Detection Tools of Smart Contract Bytecode. In Proceedings of the 2020 IEEE 3rd International Conference on Information Systems and Computer Aided Education (ICISCAE), Dalian, China, 27–29 September 2020.
  20. Chen, H.; Pendleton, M.; Njilla, L.; Xu, S. A Survey on Ethereum Systems Security: Vulnerabilities, Attacks, and Defenses. ACM Comput. Surv. 2021, 67.
  21. Bartoletti, M.; Benetollo, L.; Bugliesi, M.; Crafa, S.; Sasso, G.D.; Pettinau, R.; Pinna, A.; Piras, M.; Rossi, S.; Salis, S.; et al. Smart contract languages: A comparative analysis. Future Gener. Comput. Syst. 2025, 164, 107563.
  22. Cryptopedia. What Was The DAO? 2016. Available online: https://www.gemini.com/cryptopedia/the-dao-hack-makerdao (accessed on 23 August 2025).
  23. Rameder, H.; Di Angelo, M.; Salzer, G. Review of automated vulnerability analysis of smart contracts on Ethereum. Front. Blockchain 2022, 5, 814977.
  24. Grossman, S.; Abraham, I.; Golan-Gueta, G.; Michalevsky, Y.; Rinetzky, N.; Sagiv, M.; Zohar, Y. Online detection of effectively callback free objects with applications to smart contracts. Proc. ACM Program. Lang. 2017, 2, 1–28.
  25. Delmolino, K.; Arnett, M.; Kosba, A.; Miller, A.; Shi, E. Step by Step Towards Creating a Safe Smart Contract: Lessons and Insights from a Cryptocurrency Lab. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2016; pp. 79–94.
  26. Dika, A.; Nowostawski, M. Security Vulnerabilities in Ethereum Smart Contracts. In Proceedings of the 2018 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), Halifax, NS, Canada, 30 July–3 August 2018.
  27. Ghaleb, A.; Rubin, J.; Pattabiraman, K. AChecker: Statically Detecting Smart Contract Access Control Vulnerabilities. In Proceedings of the 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), Melbourne, Australia, 14–20 May 2023; pp. 945–956.
  28. Zhu, H.; Yang, L.; Wang, L.; Sheng, V.S. A Survey on Security Analysis Methods of Smart Contracts. IEEE Trans. Serv. Comput. 2024, 17, 4522–4539.
  29. Tikhomirov, S.; Voskresenskaya, E.; Ivanitskiy, I.; Takhaviev, R.; Marchenko, E.; Alexandrov, Y. SmartCheck: Static analysis of Ethereum smart contracts. In Proceedings of the 1st International Workshop on Emerging Trends in Software Engineering for Blockchain, Gothenburg, Sweden, 27 May 2018.
  30. Khan, Z.A.; Siami Namin, A. Ethereum Smart Contracts: Vulnerabilities and their Classifications. In Proceedings of the 2020 IEEE International Conference on Big Data (Big Data), Atlanta, GA, USA, 10–13 December 2020.
  31. Staderini, M.; Palli, C.; Bondavalli, A. Classification of Ethereum Vulnerabilities and their Propagations. In Proceedings of the 2020 Second International Conference on Blockchain Computing and Applications (BCCA), Antalya, Turkey, 2–5 November 2020.
  32. Dingman, W.; Cohen, A.; Ferrara, N.; Lynch, A.; Jasinski, P.; Black, P.E.; Deng, L. Defects and Vulnerabilities in Smart Contracts, a Classification using the NIST Bugs Framework. Int. J. Networked Distrib. Comput. 2019, 7, 121–132. [Google Scholar] [CrossRef]
  33. Ruggiero, C.; Mazzini, P.; Coppa, E.; Lenti, S.; Bonomi, S. SoK: A Unified Data Model for Smart Contract Vulnerability Taxonomies. In Proceedings of the 19th International Conference on Availability, Reliability and Security, Vienna, Austria, 30 July–2 August 2024; pp. 1–13. [Google Scholar] [CrossRef]
  34. NCC Group. Decentralized Application Security Project (DASP). 2018. Available online: https://dasp.co/ (accessed on 10 September 2024).
  35. IBM Corporation. Orthogonal Defect Classification v5.2 for Software Design and Code, Version 5.2; IBM: Armonk, NY, USA, 2013.
  36. OWASP Smart Contract Security Project. Smart Contract Security Verification Standard (SCSVS). 2025. Available online: https://scs.owasp.org/SCSVS/ (accessed on 21 August 2025).
  37. OWASP Smart Contract Security Project. Smart Contract Security Weakness Enumeration (SCWE). 2025. Available online: https://scs.owasp.org/SCWE/ (accessed on 21 August 2025).
  38. Fontein, R. Comparison of static analysis tooling for smart contracts on the EVM. In Proceedings of the 28th Twente Student Conference on IT, Enschede, The Netherlands, 2 February 2018. [Google Scholar]
  39. Gupta, B.C.; Kumar, N.; Handa, A.; Shukla, S.K. An Insecurity Study of Ethereum Smart Contracts. In Lecture Notes in Computer Science, Proceedings of the Security, Privacy, and Applied Cryptography Engineering—10th International Conference, SPACE 2020, Kolkata, India, 17–21 December 2020; Batina, L., Picek, S., Mondal, M., Eds.; Springer: Cham, Switzerland, 2020; Volume 12586, pp. 188–207. [Google Scholar] [CrossRef]
  40. Mossberg, M.; Manzano, F.; Hennenfent, E.; Groce, A.; Grieco, G.; Feist, J.; Brunson, T.; Dinaburg, A. Manticore: A User-Friendly Symbolic Execution Framework for Binaries and Smart Contracts. In Proceedings of the 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), San Diego, CA, USA, 11–15 November 2019. [Google Scholar] [CrossRef]
  41. Bonomi, S.; Cappai, S.; Coppa, E. On the Efficacy of Smart Contract Analysis Tools. In Proceedings of the 2023 IEEE 34th International Symposium on Software Reliability Engineering Workshops (ISSREW), Florence, Italy, 9–12 October 2023; pp. 37–38. [Google Scholar] [CrossRef]
  42. Sayeed, S.; Marco-Gisbert, H.; Caira, T. Smart Contract: Attacks and Protections. IEEE Access 2020, 8, 24416–24427. [Google Scholar] [CrossRef]
  43. Tsankov, P.; Dan, A.; Drachsler-Cohen, D.; Gervais, A.; Bünzli, F.; Vechev, M. Securify: Practical Security Analysis of Smart Contracts. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, CCS ’18, Toronto, ON, Canada, 15–19 October 2018. [Google Scholar] [CrossRef]
  44. Banisadr, E. How $800k Evaporated from the PoWH Coin Ponzi Scheme Overnight. 2018. Available online: https://medium.com/@ebanisadr/how-800k-evaporated-from-the-powh-coin-ponzi-scheme-overnight-1b025c33b530 (accessed on 10 January 2025).
  45. Ressi, D.; Spanò, A.; Benetollo, L.; Piazza, C.; Bugliesi, M.; Rossi, S. Vulnerability Detection in Ethereum Smart Contracts via Machine Learning: A Qualitative Analysis. arXiv 2024, arXiv:2407.18639. [Google Scholar] [CrossRef]
  46. Antunes, N.; Vieira, M. Assessing and Comparing Vulnerability Detection Tools for Web Services: Benchmarking Approach and Examples. IEEE Trans. Serv. Comput. 2015, 8, 269–283. [Google Scholar] [CrossRef]
  47. Durieux, T.; Ferreira, J.F.; Abreu, R.; Cruz, P. Empirical Review of Automated Analysis Tools on 47,587 Ethereum Smart Contracts. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering, Seoul, Republic of Korea, 27 June–19 July 2020; pp. 530–541. [Google Scholar] [CrossRef]
  48. Qian, P.; He, J.; Lu, L.; Wu, S.; Lu, Z.; Wu, L.; Zhou, Y.; He, Q. Demystifying Random Number in Ethereum Smart Contract: Taxonomy, Vulnerability Identification, and Attack Detection. IEEE Trans. Softw. Eng. 2023, 49, 3793–3810. [Google Scholar] [CrossRef]
  49. Rodola, G. psutil: Cross-Platform Lib for Process and System Monitoring in Python. 2009. Available online: https://github.com/giampaolo/psutil (accessed on 21 August 2025).
  50. Docker Inc. Docker Container Stats. 2025. Available online: https://docs.docker.com/reference/cli/docker/container/stats/ (accessed on 21 August 2025).
  51. Baresi, L.; Quattrocchi, G.; Rasi, N. A Qualitative and Quantitative Analysis of Container Engines. J. Syst. Softw. 2024, 210, 111965. [Google Scholar] [CrossRef]
  52. Tejedor, E.; Becerra, Y.; Alomar, G.; Queralt, A.; Badia, R.M.; Torres, J.; Cortes, T.; Labarta, J. PyCOMPSs: Parallel Computational Workflows in Python. Int. J. High Perform. Comput. Appl. 2017, 31, 66–82. [Google Scholar] [CrossRef]
  53. Ferreira, J.F.; Durieux, T.; Maranhao, R. SmartBugs Wild Dataset. 2020. Available online: https://github.com/smartbugs/smartbugs-wild (accessed on 23 April 2025).
  54. Sendner, C.; Petzi, L.; Stang, J.; Dmitrienko, A. Large-Scale Study of Vulnerability Scanners for Ethereum Smart Contracts. In Proceedings of the 2024 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 19–23 May 2024; pp. 2273–2290. [Google Scholar] [CrossRef]
  55. Davis, J.A.; Clark, M.A.; Cofer, D.D.; Fifarek, A.; Hinchman, J.; Hoffman, J.A.; Hulbert, B.W.; Miller, S.P.; Wagner, L.G. Study on the Barriers to the Industrial Adoption of Formal Methods. In Lecture Notes in Computer Science, Proceedings of the Formal Methods for Industrial Critical Systems—18th International Workshop, FMICS 2013, Madrid, Spain, 23–24 September 2013; Pecheur, C., Dierkes, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8187, pp. 63–77. [Google Scholar] [CrossRef]
Figure 1. LightCross system workflow.
Figure 2. Vulnerability classification workflow from SWC IDs to OpenSCV mappings.
Figure 3. Execution time for all SmartBugs detectors.
Figure 4. Execution time for all categories in the dataset for the LightCross and SmartBugs tools when using Slither and Mythril vulnerability detection tools.
Figure 5. Peak CPU usage for Slither and Mythril when they are executed with LightCross and SmartBugs.
Figure 6. Peak memory consumption for Slither and Mythril when they are executed with LightCross and SmartBugs.
Table 1. Comparison of Main Smart Contract Vulnerability Taxonomies (L1 = Level 1 categories).
| Taxonomy | Vuln | L1 Cats | Levels | Updated | Notes |
|---|---|---|---|---|---|
| SWC [15] | 37 | | | 2018 | Industry, CWE-aligned |
| DASP [34] | | 9 | | 2018 | Industry |
| Rameder et al. [23] | 54 | 10 | 2 | 2022 | Academic |
| OpenSCV [13] | 94 | 8 | 3 | 2024 | Academic, CWE-aligned |
Table 2. Smart Contract Vulnerability Taxonomies Repositories.
| Taxonomy | URL (Accessed on 21 August 2025) |
|---|---|
| SWC [15] | https://swcregistry.io/ |
| DASP [34] | https://dasp.co/ |
| Rameder et al. [23] | https://www.frontiersin.org/articles/10.3389/fbloc.2022.814977/full#supplementary-material |
| OpenSCV [13] | https://openscv.dei.uc.pt/ |
Table 3. Examples of Mapping from SWC to OpenSCV (L2/L3 = Level 2/Level 3 categories).
| SWC ID | SWC Description | OpenSCV Category (L2) | OpenSCV Category (L3) |
|---|---|---|---|
| SWC-101 | Integer Overflow and Underflow | 7.1 Overflow and Underflow; 7.2 Division Bugs; 7.3 Conversion Bugs | 7.1.1 Integer Underflow; 7.1.2 Integer Overflow; 7.2.1 Divide by Zero; 7.2.2 Integer Division; 7.3.1 Truncation Bugs; 7.3.2 Signedness Bugs |
| SWC-114 | Transaction Order Dependence | 6.1 Incorrect Sequencing of Behaviour | 6.1.2 Incorrect Function Call Order; 6.1.4 Transfer Pre-Condition Dependent on Transaction Order; 6.1.5 Transfer Amount Depending on Transaction Order; 6.1.6 Transfer Recipient Depending on Transaction Order |
| SWC-116 | Block values as a proxy for time | 6.1 Incorrect Sequencing of Behaviour | 6.1.1 Incorrect Use of Event Blockchain Variables for Time |
| SWC-132 | Unexpected Ether balance | 6.1 Incorrect Sequencing of Behaviour | 6.1.3 Improper Locking; 6.1.7 Exposed State Variables; 6.1.8 Wrong Transaction Definition |
Table 4. Smart contract vulnerability tool detection capabilities.
Vulnerabilities ToolsReentrancyAccess ControlTimestamp dep.tx.originFrozen EtherOver/UnderflowTODs
Manticore×××✓✓×
Mythril✓✓××✓✓
MythX✓✓✓✓×✓✓
Oyente×××
Securify✓✓✓✓×✓✓
Slither✓✓✓✓✓✓✓✓✓✓
✓✓ = Detection, ✓ = Partial detection, × = No detection.
Table 5. Smart Contract Vulnerability Mapping from SWC to OpenSCV: Representative Vulnerabilities.
| Tool | Contract | Vulnerability | Severity | SWC-ID | OpenSCV Index | Defect Type |
|---|---|---|---|---|---|---|
| Mythril | SmartBillions | Dependence on predictable environment variable | High | SWC-120 | 5.1 Bad Randomness | Algorithm/Method |
| Slither | LuckyDoubler | weak-prng | High | SWC-120 | 5.1 Bad Randomness | Algorithm/Method |
| Mythril | PredictTheBlockHashChallenge | Unprotected Ether Withdrawal | High | SWC-105 | 4.2 Unprotected Transfer Value | Checking |
| Slither | RandomNumberGenerator | weak-prng | High | SWC-120 | 5.1 Bad Randomness | Algorithm/Method |
| Slither | GuessTheRandomNumberChallenge | arbitrary-send-eth | High | SWC-105 | 4.2 Unprotected Transfer Value | Checking |
| Slither | BlackJack | controlled-array-length | High | SWC-128 | 2.1.2 Improper Exception Handling in a Loop | Algorithm/Method |
Table 6. Number of vulnerabilities, number of contracts, and lines of code (LoC) per category from the SmartBugs curated dataset.
| Category | Vulnerabilities | Contracts | LoC |
|---|---|---|---|
| Access Control | 21 | 18 | 933 |
| Arithmetic | 23 | 15 | 304 |
| Bad Randomness | 30 | 8 | 1079 |
| Denial of Service | 7 | 6 | 177 |
| Front Running | 7 | 4 | 137 |
| Unknown Unknowns | 3 | 3 | 116 |
| Reentrancy | 32 | 31 | 2164 |
| Short Addresses | 1 | 1 | 18 |
| Time Manipulation | 7 | 5 | 100 |
| Unchecked Return Values | 75 | 52 | 4036 |
| Total | 206 | 143 | 9064 |
Table 7. Performance Metrics by Vulnerability Category when using LightCross. Category abbreviations: AC = Access Control, AI = Arithmetic Issues, BR = Bad Randomness, DoS = Denial of Service, FR = Front-Running, RE = Reentrancy, SA = Short Addresses, TM = Time Manipulation, URV = Unchecked Return Values, UU = Unknown Unknowns.
| Category | TP (M/S) | FP (M/S) | FN (M/S) | Prec. (M/S) | Rec. (M/S) | F1 (M/S) | Samples (M/S) |
|---|---|---|---|---|---|---|---|
| AC | 13/7 | 18/36 | 4/14 | 0.42/0.16 | 0.33/0.21 | 0.54/0.22 | 35/57 |
| AI | 11/0 | 7/58 | 3/23 | 0.61/0.00 | 0.79/0.00 | 0.69/0.00 | 21/81 |
| BR | 3/3 | 6/10 | 3/27 | 0.33/0.23 | 0.50/0.10 | 0.40/0.14 | 12/40 |
| DoS | 1/3 | 5/47 | 3/4 | 0.17/0.06 | 0.25/0.43 | 0.20/0.10 | 9/54 |
| FR | 1/0 | 4/0 | 2/7 | 0.20/0.00 | 0.33/0.00 | 0.25/0.00 | 7/7 |
| RE | 30/29 | 77/28 | 0/3 | 0.28/0.51 | 1.00/0.91 | 0.44/0.65 | 107/60 |
| SA | 0/0 | 0/0 | 0/1 | 0.00/0.00 | 0.00/0.00 | 0.00/0.00 | 0/1 |
| TM | 3/5 | 7/18 | 1/2 | 0.30/0.22 | 0.75/0.72 | 0.43/0.33 | 11/25 |
| URV | 46/50 | 113/36 | 4/25 | 0.29/0.58 | 0.92/0.67 | 0.44/0.62 | 163/111 |
| UU | 1/3 | 6/17 | 1/0 | 0.14/0.15 | 0.50/1.00 | 0.22/0.26 | 8/20 |
| Overall | 109/100 | 243/250 | 21/106 | 0.31/0.73 | 0.84/0.45 | 0.45/0.58 | 373/456 |
M = Mythril, S = Slither, Prec. = Precision, Rec. = Recall, F1 = F1 Score.
Table 8. Vulnerability Detection Coverage by Category when using LightCross. Detection Coverage represents the percentage of real vulnerabilities in the SmartBugs dataset that are detected by each tool. Category abbreviations: AC = Access Control, AI = Arithmetic Issues, BR = Bad Randomness, DoS = Denial of Service, FR = Front-Running, RE = Reentrancy, SA = Short Addresses, TM = Time Manipulation, URV = Unchecked Return Values, UU = Unknown Unknowns.
| Category | Total Vulnerabilities | Detected (M) | Detected (S) | Missed (M) | Missed (S) | Coverage (M) | Coverage (S) |
|---|---|---|---|---|---|---|---|
| AC | 21 | 13 | 7 | 8 | 14 | 62% | 33% |
| AI | 23 | 11 | 0 | 12 | 23 | 48% | 0% |
| BR | 30 | 3 | 3 | 27 | 27 | 10% | 10% |
| DoS | 7 | 1 | 3 | 6 | 4 | 14% | 43% |
| FR | 7 | 1 | 0 | 6 | 7 | 14% | 0% |
| UU | 3 | 1 | 3 | 2 | 0 | 33% | 100% |
| RE | 32 | 30 | 29 | 2 | 3 | 94% | 90% |
| SA | 1 | 0 | 0 | 1 | 1 | 0% | 0% |
| TM | 7 | 3 | 5 | 4 | 2 | 43% | 71% |
| URV | 75 | 46 | 50 | 29 | 25 | 61% | 66% |
| Overall | 206 | 109 | 100 | 97 | 106 | 53% | 48% |
M = Mythril, S = Slither, Detected = TP, Missed = Total Vulnerabilities − TP, Detection Coverage = TP/Total Vulnerabilities.
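The coverage figures above are straightforward to reproduce from the raw counts. A minimal illustrative sketch (the function name is ours, not part of LightCross; the counts are the Overall row of Table 8):

```python
def detection_coverage(tp: int, total: int) -> float:
    """Fraction of the dataset's known vulnerabilities that a tool reports (TP / Total)."""
    return tp / total

# Overall row of Table 8: 206 vulnerabilities in the SmartBugs curated dataset
mythril = detection_coverage(109, 206)
slither = detection_coverage(100, 206)
print(f"Mythril: {mythril:.1%}, Slither: {slither:.1%}")  # Mythril: 52.9%, Slither: 48.5%
```

Rounding to whole percentages yields the 53% (Mythril) and 48% (Slither) reported in the table.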
Table 9. Performance Metrics by Vulnerability Category for SmartBugs detectors. Category abbreviations: AC = Access Control, AI = Arithmetic Issues, BR = Bad Randomness, DoS = Denial of Service, FR = Front-Running, RE = Reentrancy, SA = Short Addresses, TM = Time Manipulation, URV = Unchecked Return Values, UU = Unknown Unknowns.
| Tool | AC | AI | BR | DoS | FR | RE | SA | TM | URV | UU | TP | FP | FN | Prec. | Rec. | F1 | Sa. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Confuzzius | 9 | 11 | 0 | 0 | 2 | 29 | 0 | 3 | 38 | 0 | 92 | 176 | 114 | 0.34 | 0.45 | 0.39 | 382 |
| Conkas | 0 | 9 | 0 | 0 | 1 | 30 | 0 | 5 | 49 | 0 | 94 | 234 | 112 | 0.29 | 0.46 | 0.35 | 440 |
| Honeybadger | 0 | 0 | 0 | 0 | 0 | 18 | 0 | 0 | 0 | 0 | 18 | 142 | 188 | 0.11 | 0.09 | 0.10 | 348 |
| Maian | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 | 143 | 199 | 0.05 | 0.03 | 0.04 | 349 |
| Manticore | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 142 | 206 | 0.00 | 0.00 | 0.00 | 348 |
| Osiris | 0 | 13 | 0 | 0 | 0 | 27 | 0 | 2 | 13 | 0 | 55 | 195 | 151 | 0.22 | 0.27 | 0.24 | 401 |
| Oyente | 0 | 14 | 0 | 0 | 0 | 27 | 0 | 0 | 38 | 0 | 79 | 181 | 127 | 0.30 | 0.38 | 0.34 | 387 |
| Securify | 1 | 0 | 0 | 0 | 2 | 28 | 0 | 0 | 50 | 0 | 81 | 179 | 125 | 0.31 | 0.39 | 0.35 | 385 |
| Semgrep | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 145 | 205 | 0.01 | 0.00 | 0.01 | 351 |
| Sfuzz | 0 | 8 | 0 | 0 | 0 | 21 | 0 | 1 | 32 | 0 | 62 | 169 | 144 | 0.27 | 0.30 | 0.28 | 375 |
| Smartcheck | 2 | 1 | 0 | 1 | 0 | 29 | 0 | 1 | 51 | 0 | 85 | 206 | 121 | 0.29 | 0.41 | 0.34 | 412 |
| Solhint | 16 | 0 | 0 | 1 | 0 | 0 | 0 | 5 | 51 | 1 | 74 | 353 | 132 | 0.17 | 0.36 | 0.23 | 559 |
| Slither | 7 | 0 | 3 | 3 | 0 | 29 | 0 | 5 | 50 | 3 | 100 | 392 | 106 | 0.20 | 0.48 | 0.28 | 598 |
| Mythril | 8 | 11 | 3 | 1 | 1 | 18 | 0 | 2 | 36 | 1 | 81 | 204 | 124 | 0.28 | 0.39 | 0.33 | 409 |
TP = True Positives, FP = False Positives, FN = False Negatives, Prec. = Precision, Rec. = Recall, F1 = F1 Score, Sa. = No. Samples.
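The aggregate metrics follow the standard definitions Prec = TP/(TP+FP), Rec = TP/(TP+FN) and F1 = 2·Prec·Rec/(Prec+Rec). A hedged sketch (the helper is ours, not part of either tool; the counts are the Confuzzius row of Table 9):

```python
def prf1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall and F1 score from raw detection counts."""
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# Confuzzius row of Table 9: TP = 92, FP = 176, FN = 114
p, r, f = prf1(92, 176, 114)
print(f"Prec={p:.2f} Rec={r:.2f} F1={f:.2f}")  # Prec=0.34 Rec=0.45 F1=0.39
```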
Table 10. Comparison of Vulnerability Detection: LightCross vs. SmartBugs Detectors. Category abbreviations: AC = Access Control, AI = Arithmetic Issues, BR = Bad Randomness, DoS = Denial of Service, FR = Front-Running, RE = Reentrancy, SA = Short Addresses, TM = Time Manipulation, URV = Unchecked Return Values, UU = Unknown Unknowns.
| Category | Total Vulns. | LightCross Mythril | LightCross Slither | Combined Min | Combined Max | Best SmartBugs Tool (Detected) | Diff. Min | Diff. Max |
|---|---|---|---|---|---|---|---|---|
| AC | 21 | 13 | 7 | 13 | 20 | Solhint (16) | −3 | +4 |
| AI | 23 | 11 | 0 | 11 | 11 | Oyente (14) | −3 | −3 |
| BR | 30 | 3 | 3 | 3 | 6 | Mythril/Slither (3) | 0 | +3 |
| DoS | 7 | 1 | 3 | 3 | 4 | Slither (3) | 0 | +1 |
| FR | 7 | 1 | 0 | 1 | 1 | Confuzzius/Securify (2) | −1 | −1 |
| RE | 32 | 30 | 29 | 30 | 32 | Conkas (30) | 0 | +2 |
| SA | 1 | 0 | 0 | 0 | 0 | None (0) | 0 | 0 |
| TM | 7 | 2 | 3 | 3 | 5 | Conkas/Slither/Solhint (5) | −2 | 0 |
| URV | 75 | 46 | 50 | 50 | 75 | Smartcheck/Solhint (51) | −1 | +24 |
| UU | 3 | 1 | 3 | 3 | 3 | Slither (3) | 0 | 0 |
| Total | 206 | 109 | 100 | 117 | 154 | (127) | −10 | +30 |
Notes: The LightCross Combined Min column represents the conservative estimate (the maximum of either tool), assuming complete overlap between the tools' findings. The LightCross Combined Max column represents the optimistic estimate (the sum of both tools, capped at the total number of vulnerabilities), assuming minimal overlap. The Difference columns show the potential gain/loss from using LightCross instead of the best SmartBugs tool, with negative values indicating vulnerabilities missed by LightCross. The Min Difference column uses the conservative estimate (Combined Min), while the Max Difference column uses the optimistic estimate (Combined Max).
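The combined-detection bounds described in the notes above amount to simple interval arithmetic. A sketch, assuming the per-tool counts of the Access Control row (the helper function is illustrative, not part of LightCross):

```python
def combined_bounds(mythril: int, slither: int, total: int) -> tuple[int, int]:
    """Bounds for running both detectors together.

    Min assumes complete overlap between the tools' findings (max of either);
    Max assumes no overlap (sum of both, capped at the number of real
    vulnerabilities in the category).
    """
    return max(mythril, slither), min(mythril + slither, total)

# Access Control row of Table 10: Mythril = 13, Slither = 7, Total = 21
lo, hi = combined_bounds(13, 7, 21)
best = 16  # best single SmartBugs tool for this category (Solhint)
print(lo - best, hi - best)  # -3 4
```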
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Sfyrakis, I.; Modesti, P.; Golightly, L.; Ikegima, M. LightCross: A Lightweight Smart Contract Vulnerability Detection Tool. Computers 2025, 14, 369. https://doi.org/10.3390/computers14090369

