Article

Dependency Reduction Techniques for Performance Improvement of Hyperledger Fabric Blockchain

Department of Electronic Engineering, Sogang University, Seoul 04107, Republic of Korea
*
Author to whom correspondence should be addressed.
Current address: LG Uplus, Seoul 07795, Republic of Korea.
Current address: Com2us Holdings, Seoul 08056, Republic of Korea.
Big Data Cogn. Comput. 2025, 9(2), 32; https://doi.org/10.3390/bdcc9020032
Submission received: 12 December 2024 / Revised: 31 January 2025 / Accepted: 5 February 2025 / Published: 7 February 2025

Abstract

We propose dependency reduction techniques for the performance enhancement of the Hyperledger Fabric blockchain. A dependency hazard may result from the parallelism in Hyperledger Fabric, which executes multiple transactions simultaneously in a single block. Since multiple transactions in a block are executed in parallel for throughput enhancement, dependency problems may arise among transactions involving the same key (if Z = A + D is executed in parallel with A = B + C, a read-after-write hazard for A occurs). To address these issues, we propose a transaction dependency checking system that integrates a dependency-tree-based management approach to dynamically prioritize transactions based on factors such as the tree level, arrival time, and starvation possibility. Our scheme constructs a dependency tree for the transactions in a block to be executed in parallel over multiple execution units. We rearrange the transactions into blocks in such a way that the dependency among the transactions is removed as far as possible. This allows parallel execution of transactions to proceed without collision, enhancing the throughput compared with the conventional implementation of Hyperledger Fabric. Our illustrative implementation of the proposed scheme in a testbed for trading renewable energy shows a performance improvement as large as 27%, depending on the input mixture of transactions. A key innovation is the introduction of the Starve-Avoid method, which mitigates data starvation by dynamically adjusting the transaction priorities to balance throughput and fairness, ensuring that no transaction experiences indefinite delays. Unlike existing approaches that require structural modifications to the conventional Hyperledger Fabric, the proposed scheme optimizes the performance as an independent module, maintaining compatibility with the conventional Hyperledger Fabric architecture.

1. Introduction

Decentralization has become a significant objective for various platforms utilizing blockchain technology. Through blockchain, participants can engage in high-speed transactions [1] with guaranteed data integrity, without the need for third-party intervention [2]. Hyperledger Fabric [3] stands out as a private blockchain framework tailored for corporate applications, differing from public blockchains like Bitcoin [4] and Ethereum [5]. Hyperledger Fabric operates as a permissioned blockchain, where participants are authenticated and access is restricted to authorized entities. Hyperledger Fabric uses a channel-based framework [6], allowing multiple private ledgers within the same network for enhanced data privacy. In contrast, Ethereum is a permissionless [7], public blockchain, where anyone can join the network and participate in consensus. It uses a globally shared state, where all the nodes maintain the same ledger, ensuring transparency but limiting privacy and scalability. These differences highlight Hyperledger Fabric’s suitability for controlled, enterprise-grade networks and Ethereum’s focus on open, decentralized ecosystems.
Unlike public blockchains, Hyperledger Fabric employs a parallel processing consensus algorithm, which enhances the transaction speed [8]. A transaction is a record of data changes, such as asset transfers or chaincode executions, that is proposed, validated, and added to the blockchain. In addition, the modular architecture of Hyperledger Fabric [9] allows customization of the consensus algorithms [10], which ensure that the nodes of a blockchain network agree on the validity and order of transactions and thus maintain a consistent state across the distributed ledger, as well as of the transaction guarantee policies of the network, enabling scalability and efficient resource utilization [11]. It is widely adopted in fields such as healthcare [12], P2P energy trading [13], and supply chain management [14].
However, due to this parallel processing, Hyperledger Fabric encounters a risk of transaction collisions [15]. Transaction collisions arise when multiple transactions within a single block use the same world-state key [16], which serves as a unique identifier in the blockchain network. This risk of collision increases when a single user repeatedly sends requests involving the same key in a short period of time, thus reducing the transaction success rate and hindering real-time performance due to multi-version concurrency control (MVCC) read conflicts [17].
Transaction collisions and dependency-related hazards pose critical challenges to the scalability and efficiency of Hyperledger Fabric. These issues are particularly pronounced in high-dependency scenarios, such as supply chain networks and financial systems, where frequent interactions with shared keys are inevitable. If unresolved, these challenges can lead to inefficient resource utilization, increased latency, and overall degradation of system reliability.
Recent studies have proposed innovative approaches to address these challenges. One approach involves client-side transaction execution validation, as exemplified by the Fabric/CA system [18]. This method reduces transaction collisions by introducing a randomized exponential backoff (REB) mechanism, which allows clients to control the timing of transaction submissions, thereby avoiding conflicts even before simulation begins. By grouping transactions accessing the same keys and sequentially submitting them, Fabric/CA minimizes both intra-client and inter-client conflicts. Experimental results demonstrate that this approach reduces MVCC conflicts by 98% and improves Fabric’s goodput (successful transaction throughput) by over 10 times under high-contention workloads. Another approach [19] emphasizes dependency management and parallel transaction processing. This method analyzes the transaction dependencies to optimize the order of execution, effectively reducing contention and enhancing throughput. Experimental results show that these strategies significantly improve the overall performance and scalability of Hyperledger Fabric, achieving higher success rates and transaction speeds compared to conventional methods.
However, these approaches have limitations that restrict their broader applicability. Client-side transaction execution validation relies heavily on the client’s ability to accurately detect and filter conflicting transactions, which may impose additional computational overheads and complexity, particularly in large-scale systems with diverse transaction patterns. Similarly, dependency management and parallel transaction processing often require detailed information about transaction dependencies and execution environments, which may not always be readily available or feasible in highly dynamic networks. Furthermore, many existing solutions require architectural modifications to the Hyperledger Fabric framework, potentially limiting their compatibility with existing deployments. These limitations highlight the need for a more robust and adaptable solution that can address transaction collisions and dependency-related hazards without imposing significant overheads or requiring fundamental changes to the underlying architecture.
In this paper, motivated by these challenges, we propose a transaction dependency checking system to solve the transaction collision problem that may occur in the Hyperledger Fabric network, where transaction dependency is defined as an association based on the key. The proposed method is an API that enhances performance while retaining the structure of the conventional Hyperledger Fabric blockchain. Transaction collision is prevented by selectively processing the requested transactions in the Hyperledger Fabric network. The API reads the key and occurrence time of each transaction to create a priority-based dependency graph, a tree structure of transactions organized according to the priority factors. It receives the block information, size, and time from the network and selectively executes transactions according to the assigned priority, leading to an overall enhancement of performance.
A significant innovation presented in this paper is the Starve-Avoid method, which effectively addresses transaction starvation in Hyperledger Fabric. Unlike the conventional method that prioritizes transactions solely based on the dependency tree height or arrival time, the Starve-Avoid method dynamically adjusts the transaction priorities based on a predefined starvation limit. Transactions delayed beyond this threshold are given immediate priority, ensuring timely processing and avoiding prolonged delays. Experimental results highlight the method’s ability to balance throughput and fairness. Specifically, the Starve-Avoid method reduces the worst-case latency of delayed transactions to a single block, compared to significantly higher delays in the Deep-First method. While the Deep-First method achieves the highest throughput, it sacrifices fairness by causing severe starvation for certain transactions. In contrast, the Starve-Avoid method demonstrates a more balanced trade-off, achieving a throughput improvement compared to the conventional Hyperledger Fabric method, while maintaining significantly lower latency for high-priority transactions. This innovative approach makes the Starve-Avoid method particularly well suited for high-dependency networks, where fairness and performance must coexist to ensure system reliability and efficiency.

2. Related Work

2.1. Hyperledger Fabric Distributed Ledger

The distributed ledger of Hyperledger Fabric consists of two components: the world state, which holds the current values of a set of ledger states, and the blockchain, a transaction log that records all the changes that have resulted in the current world state [20]. The world state reflects the result of all the previous transactions recorded in the blocks and exposes the keys of the network participants. The blockchain itself is simply a store of transaction records from the time of creation to the present. The world state is organized as a database because data are frequently recorded, modified, and read through the chaincode, whereas the blockchain is organized as a file system because its data are rarely queried and are written in an append-only manner.
In the execution phase, the first step of Hyperledger Fabric’s consensus algorithm, the endorsing peers that receive a transaction compare it against the corresponding value in their world state. Each peer simulates the transaction to produce a result value and issues an endorsement. The transaction content is applied when the version value recorded by the transaction matches the version value of the world state; in the world state, the version is incremented every time the world state is updated through a transaction. If the version value differs from that of the world state, the update for the transaction fails, resulting in an MVCC read conflict [17].
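To make the version check concrete, the following minimal Python sketch (not Hyperledger Fabric source code; the dictionary layout and names are illustrative) mimics the validation step: a transaction records the version of each key it read during simulation, and the transaction is invalidated if the committed world state has moved on.

```python
# Minimal sketch (not Fabric source) of the MVCC version check described above.
# At endorsement time a transaction records the version of each key it read;
# at validation time that version is compared with the committed world state.

world_state = {"A": {"value": 100, "version": 3}}   # committed ledger state (illustrative)

def validate(tx_read_set):
    """Return True if every key read by the transaction is still at the
    version observed during simulation; False signals an MVCC read conflict."""
    for key, read_version in tx_read_set.items():
        if world_state[key]["version"] != read_version:
            return False
    return True

# Tx1 was simulated against version 3, but another transaction committed first
# and bumped key 'A' to version 4, so Tx1 is invalidated and must be resubmitted.
world_state["A"]["version"] = 4
print(validate({"A": 3}))   # False
```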

2.2. P2P Energy Trading System Using Hyperledger Fabric [13]

Hyperledger Fabric is a private blockchain suitable for guaranteeing data integrity between participants since it can identify the users participating in the network. In addition, its parallel processing consensus algorithm improves the speed compared with an Ethereum-based P2P Energy Trading System. However, problems arise when transaction conflicts occur: when prosumers sequentially generate transactions with the same key faster than the network can process them, transaction conflicts may occur, producing incorrect results for the requested transactions.

2.3. Hyperledger Fabric High-Throughput

The high-throughput [21] technique is proposed to solve the above-mentioned MVCC read conflict. It first turns each transaction request into a composite key and then gathers and synthesizes these composite keys into one. This synthesized key carries the information to be updated, and since it becomes the only key with no duplicates, collisions between transactions are avoided.
However, there are still drawbacks in the case of the query function. To perform a proper query, the user needs to separately reinterpret the synthetic key back into its composite keys; otherwise, only one transaction out of many will be read, since they are all synthesized into one. We therefore consider that high-throughput can only partially solve the MVCC read conflict, even though it attempts to exploit the separation of Hyperledger Fabric’s parallel processing and distributed ledger into the world state and the blockchain.

2.4. ParBlockchain

ParBlockchain [22] is a blockchain system that improves the performance of Hyperledger Fabric by identifying hazards in parallelism. However, it requires a complete redesign of Hyperledger Fabric’s core to achieve the desired enhancement. Essentially, Hyperledger Fabric follows an execute-then-order architecture, while ParBlockchain follows an order-then-execute architecture. Our proposed method differs from ParBlockchain in that we do not require any change to the Hyperledger Fabric blockchain; we keep the current Hyperledger Fabric architecture as it is.

2.5. HTFabric

HTFabric [23] is an innovative blockchain system designed to overcome the performance limitations of the execute–order–validate (EOV) model. The system tackles two critical bottlenecks in existing blockchain systems: the high computational cost of transaction re-ordering and the inefficient re-execution of invalid transactions. To address these challenges, HTFabric implements two key innovations: the write-ascending and read-descending (WARD) ordering algorithm, which provides faster transaction re-ordering with a lower computational overhead, and the transaction streaming partitioning (TSP) system for efficient parallel re-execution. The implementation also features an early gossip mechanism to reduce the version gaps between peers, enhancing the overall system efficiency. HTFabric’s technical advantages include significantly lower time complexity compared to existing re-ordering methods, more efficient parallel execution through intelligent partitioning, and better handling of transaction conflicts. HTFabric’s innovative approach not only improves the transaction throughput but also provides a more scalable and efficient solution for high-throughput blockchain applications, making it particularly valuable for enterprise-level blockchain implementations.
However, despite these advancements, HTFabric does not completely eliminate MVCC-related issues. While it improves throughput by reducing invalid transactions and enabling parallel re-execution, the fundamental challenge of concurrent transaction conflicts inherent in the EOV model remains. MVCC conflicts can still arise due to out-of-date key versions in high-contention environments, which means MVCC failures cannot be entirely eliminated.

3. Data Dependency in Hyperledger Fabric Blockchain: A Bottleneck for Speed-Up

While the transactions in an Ethereum block are executed sequentially, the transactions in a block of Hyperledger Fabric are executed in parallel. This may cause a data dependency problem when the duplicate keys are used for multiple transactions in a block of Hyperledger Fabric.
Consider the two transactions (Tx0, Tx1) at the top of Figure 1, where the same key (‘A’) is used. We expect ‘B’ to be 150 after Block 0 of two transactions is processed. If the transactions are executed in a block of Ethereum, the result of ‘B’ would be 150 as expected (Tx0 and Tx1 are sequentially executed). Tx0 reads ‘A’ as 0 and updates ‘A’ to be 50. Tx1 reads ‘A’ as 50 and writes ‘B’ to be 150. However, if Tx0 and Tx1 are executed in Block 0 of Hyperledger Fabric, as shown in the middle of Figure 1, the unexpected result of ‘B’ would be 100. In Hyperledger Fabric, a key (‘A’) is only updated after Block 0 is written to the ledger. In this case, Tx0 reads ‘A’ as 0 and updates ‘A’ to be 50 (only after Block 0 is written). Tx1 reads ‘A’ as 0 (since ‘A’ is not updated yet) and writes ‘B’ to be 100.
This dependency problem can be prevented by separating Tx0 and Tx1 into two blocks, as shown at the bottom of Figure 1. Key ‘A’ is updated to be 50 by Tx0 in Block 0. Then, in Block 1, Tx1 reads ‘A’ as 50 (since ‘A’ has been updated by Tx0 in Block 0) and writes ‘B’ to be 150, which is the expected result. However, this solution uses two blocks instead of one block, doubling the processing time (assuming each block takes the same processing time). The example shows us how data dependency is a significant bottleneck to speed-up in Hyperledger Fabric.
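The stale-snapshot behavior described above can be reproduced with a small Python sketch (illustrative only; the function and variable names are ours, not part of Hyperledger Fabric): sequential execution lets Tx1 observe Tx0’s write, whereas block-parallel execution lets every transaction in the block read the same pre-block snapshot.

```python
# Minimal sketch of the Figure 1 example: Tx0 writes A = 50, Tx1 writes B = A + 100.
# Sequential execution (Ethereum-style) sees the updated A; block-parallel
# execution against a pre-block snapshot (Fabric-style) reads the stale A.

def tx0(state):                 # Tx0: set A to 50
    return {"A": 50}

def tx1(state):                 # Tx1: set B to A + 100
    return {"B": state["A"] + 100}

def run_sequential(state, txs):
    for tx in txs:
        state.update(tx(state))           # each tx sees the previous tx's writes
    return state

def run_block_parallel(state, txs):
    snapshot = dict(state)                # every tx in the block reads the same snapshot
    writes = {}
    for tx in txs:
        writes.update(tx(snapshot))
    state.update(writes)                  # writes applied only after the block commits
    return state

print(run_sequential({"A": 0, "B": 0}, [tx0, tx1]))      # {'A': 50, 'B': 150}  expected
print(run_block_parallel({"A": 0, "B": 0}, [tx0, tx1]))  # {'A': 50, 'B': 100}  hazard
```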
In this paper, we propose a method to rearrange transactions into blocks in such a way as to avoid data dependency in blocks, enhancing the throughput and latency of Hyperledger Fabric.

4. Proposed Method

The proposed method is an API located between the user and the Hyperledger Fabric blockchain. It improves the success rate and speed of the blockchain network by avoiding data dependency. Unlike the methods introduced in the related work, our method does not alter the structure of Hyperledger Fabric and works at the front end of the overall structure, whereas high-throughput [21] and ParBlockchain [22] modify the original structure and work at the back end of the overall structure.
We consider an example list of 10 transactions on the left of Figure 2 to illustrate our approach. To compare the performance of our methods against the conventional Hyperledger Fabric, we estimate the block time needed (by the latter) to process the 10 transactions (with the block size of 2) as a baseline.
In the baseline processing, we identify dependency problems in blocks #1, #3, #4, #5, and #6. We assume that the initial values of ‘A’, ‘B’, ‘C’, and ‘D’ are all 0. In block #1, Tx2 and Tx3 are executed in parallel. The execution of Tx3 fails since it reads the execution result of Tx1 instead of the intended Tx2; therefore, ‘B’ is 20 instead of 30, resulting in an error (an MVCC read conflict [17]). Tx3 is moved to the next block, #2, where Tx3 and Tx4 are processed without error. The remaining five transactions, Tx5 to Tx9, collide sequentially because they all involve key ‘D’. Therefore, as in block #1, Tx5 to Tx9 are processed one by one from block #3 to block #7.
A total of 8 blocks were consumed to process the illustrative sequence of transactions, which is significantly different from the ideal case of 5 blocks without collisions (since each block can contain two transactions). This transaction sequence will be used throughout this section to understand the details of our method.
Figure 3 shows the overall structure of the dependency tree generation system, which is mainly used in our proposed method. A dependency tree is created with two simple rules. First, transactions are sorted chronologically. Second, if the transaction to be added has the same key, the tree level is increased by 1. A dependency tree is created when the rules are applied to all incoming transactions. Our method will then be applied to the dependency tree to avoid transaction conflicts. It will determine how many transactions could fit in one block without causing failure. The priority between transactions will be determined through the dependency priority factors explained in Section 4.2 and will be submitted according to the assigned priority.

4.1. Transaction Processing with Dependency Tree

This section discusses the processing procedure for transactions. The illustrative transaction sequence from Figure 2 will be used again to understand the proposed methods. Then, a dependency tree will be generated through this sequence, where a detailed explanation will follow for how to apply our method to the dependency tree.
Figure 4 shows the dependency tree generated with the previous example sequence from Figure 2. Consider the example in Figure 4, where transactions Tx0 to Tx9 will be added chronologically. Tx0 and Tx1 will be added to tree level 0. Since Tx2 has the same key ‘B’ as Tx1, a tree height of 1 will be added, and Tx2 will be added above Tx1 with tree level 1. An arrow will be drawn between Tx1 and Tx2 to show that data dependency is in place. Tx3 with key ‘B’ will be above Tx2. Tx4 has key ‘C’, which does not overlap with the existing ones; therefore, it will be added to tree level 0. Tx5, Tx6 and Tx7, all with key ‘D’, will be added on different tree levels for the same reason as Tx1, Tx2 and Tx3. Tx8 has keys ‘B’ and ‘D’ and will be added above Tx3 and Tx7, with two arrows representing the data dependency. Tx9 will be added above Tx8 to complete the dependency tree.
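The two tree-building rules can be summarized in the following Python sketch (a simplified illustration, not the actual implementation; the key ‘A’ for Tx0 and ‘D’ for Tx9 are assumptions made for the example): transactions are visited in arrival order, and a transaction that reuses a key becomes a child of the latest transaction on that key, raising its tree level by one.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    keys: set
    level: int = 0
    parents: list = field(default_factory=list)
    children: list = field(default_factory=list)

def build_tree(txs):
    latest = {}                          # key -> deepest node currently using that key
    nodes = []
    for name, keys in txs:               # rule 1: visit transactions in arrival order
        node = Node(name, set(keys))
        for k in node.keys:
            if k in latest:              # rule 2: shared key -> become a child, level + 1
                parent = latest[k]
                node.parents.append(parent)
                parent.children.append(node)
                node.level = max(node.level, parent.level + 1)
        for k in node.keys:
            latest[k] = node
        nodes.append(node)
    return nodes

# Key assignment mirroring Figure 4 (Tx0 -> 'A' and Tx9 -> 'D' are assumed).
txs = [("Tx0", "A"), ("Tx1", "B"), ("Tx2", "B"), ("Tx3", "B"), ("Tx4", "C"),
       ("Tx5", "D"), ("Tx6", "D"), ("Tx7", "D"), ("Tx8", "BD"), ("Tx9", "D")]
for n in build_tree(txs):
    print(n.name, "level", n.level)      # Tx8 ends up on level 3 and Tx9 on level 4
```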
Figure 5 shows how the conventional Hyperledger Fabric method processes the example transaction set. For comparison with our proposed methods, if a transaction fails due to key overlap, the failed transaction is immediately detected and resubmitted in the next block. Transactions are processed in chronological order, and the dependency between transactions is not considered, as in the conventional Hyperledger Fabric transaction processing method. Initially, Tx0 and Tx1 are processed in the first block. Afterward, Tx2 and Tx3 are processed in the same block, but only Tx2 is reflected in the network due to the key collision; Tx3 is not reflected and must be processed again in the next block. As a result, Tx3 and Tx4 are processed in the third block. While Tx5 and Tx6 are processed in the fourth block, Tx6 is not reflected due to key overlap with Tx5. The remaining transactions are reflected one by one due to key overlap. As a result, 8 blocks are used to process the ten transactions.

4.1.1. Deep-First Method

Figure 6 shows the transaction processing procedure with the Deep-First method. This method processes the transactions with the tallest height first, since they have the highest priority. In the first block, Tx1 and Tx5 have the highest height value of 4, since both have four transaction descendants connected through the tree structure; therefore, they are processed first. Tx2 and Tx6 are processed in the second block due to the now largest height value of 3, and Tx3 and Tx7 are processed similarly. After Tx3 and Tx7 are processed, Tx8 has the highest height value of 1; therefore, it is processed first among the three transactions at the root level of the dependency tree. An additional transaction can be processed since the block size is 2 in the example set. Two transactions, Tx0 and Tx4, have the same height value of 0 at the root level; however, Tx0 has an earlier request time stamp and therefore a slightly higher priority, so Tx0 is processed together with Tx8. Finally, the remaining transactions, Tx4 and Tx9, are processed in one block because there is no key conflict. As a result, five blocks are used to process the ten transactions. This is an optimal result regarding the number of blocks; however, data starvation occurs for some transactions due to the modified processing order.

4.1.2. Delay-Hazard Method

Figure 7 shows the transaction processing procedure with the Delay-Hazard method. Similar to the conventional Hyperledger Fabric method, transactions are processed in chronological order. However, transaction collision is prevented due to the processing being based on the dependency tree. As a result, Tx0 and Tx1 are processed in the first block like the conventional Hyperledger because there is no key overlap. In the second block, where Tx2 and Tx3 were supposed to be processed in the conventional Hyperledger, Tx2 and Tx4 are processed primarily due to the main priority factor of this method: lowest tree level. Continuing, Tx3 and Tx5 are processed due to the main priority factor. The rest of the transactions are then processed sequentially in separate blocks due to the different tree levels of the dependency tree. As a result, seven blocks are used to process ten transactions without any transaction collision.

4.1.3. Starve-Avoid Method

The number of blocks needed for processing transactions can be optimized through the Deep-First method. However, the latency is significantly increased for some transactions owing to data starvation. For example, Tx0 is processed in the fourth block in Figure 6 due to its lower priority compared with the transactions processed first. Figure 8 shows the transaction processing procedure with the Starve-Avoid method, where the highest priority is given to transactions facing possible data starvation. Suppose the starvation limit in this example is 1. Transactions are compared with the conventional Hyperledger Fabric to determine their original processing order, and due to the starvation limit of 1, transactions delayed at least once are given the highest priority. In the first block, Tx1 and Tx5 are processed as in the Deep-First method. At this point, since Tx0 was supposed to be processed in the conventional Hyperledger, the starvation count of Tx0 is increased by 1 and immediately reaches the starvation limit of 1. As a result, Tx0 is processed with Tx2 in the second block, whereas the Deep-First method would process Tx2 and Tx6 there. Next, considering the height priority, Tx3 and Tx6 are processed in the third block. At this point, since Tx4 was supposed to be processed in the conventional Hyperledger, the starvation count of Tx4 is increased by 1, immediately reaches the starvation limit of 1, and Tx4 is processed in the next block with Tx7. Tx8 and Tx9 are processed in separate blocks to prevent key collision. As a result, six blocks are used to process the ten transactions. This uses one block more than Deep-First but dramatically reduces the worst-case latency to 1 block, which satisfies the starvation limit. When data starvation is a concern, this method offers high TPS while minimizing the individual latency of transactions.

4.1.4. Method Comparison

Figure 9 shows the comparison of all the transaction processing methods. The transaction in each block indicates that it is submitted to the Hyperledger Fabric network. The time delay takes place in Hyperledger Fabric for resubmitting failed transactions to the next block. At the same time, the delay also takes place in our proposed method for updating the dependency tree right after every transaction submission process. All the methods showed faster processing speed than the conventional Hyperledger Fabric method by using fewer blocks to process the example transaction set. Through the comparison, Deep-First showed optimal performance by using five blocks to process all the transactions. However, this method showed the lowest performance in terms of data starvation occurrence. From this example, we can infer that the speed and data starvation are correlated with each other.

4.2. Dependency Priority Factors

4.2.1. Tree Level

The tree level refers to the level of a transaction in the tree structure. In order to avoid dependency problems due to duplicate transaction key submissions, subsequent transactions with the same key must be processed in different blocks. If multiple transactions with the same key are expected to be placed in one block, the tree level of the subsequent transaction is increased, adding height to the tree structure. For this process, we adopt the concepts of the parent node and child node of a tree structure: when a transaction with the same key is requested, the first transaction becomes the parent node and the subsequent transaction becomes the child node. The proposed method submits transactions with a low tree level, i.e., parent-node transactions, first in order to process the tree structure of transactions efficiently.

4.2.2. Time

Transactions in the Hyperledger Fabric blockchain are submitted in order of arrival. The proposed method also submits transactions in chronological order after all the priority factors are considered. This is necessary to minimize data starvation while maximizing overall performance.

4.2.3. Height

In the tree structure, transactions with the same key form parent/child node pairs, and the tree level of a transaction increases with the number of parent nodes connected above it. The height value of a transaction is given by the chain of descendant transactions connected to it, i.e., the highest tree level reached by its descendants. This priority factor is considered for efficiently submitting the transactions of the tree structure, decreasing the overall tree height while preventing transaction collisions.

4.2.4. Starvation Limit

Transactions in the tree structure can be processed late because of the other priority factors, in which case the latency for a single transaction increases. In the worst case, a transaction’s priority may remain lower than that of all the incoming transactions, which may endlessly increase the latency of the low-priority transaction and cause massive data starvation. If a transaction is processed later than the block in which it should have been processed, the starvation count of the transaction increases. The transaction is processed with the highest priority when the starvation count reaches a specific value. This specific value is called the starvation limit, and this priority factor reduces data starvation.
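The rule can be expressed as a brief Python sketch (the field and function names are illustrative assumptions): every transaction skipped past the block it would have occupied under plain chronological ordering has its starvation count incremented, and a count at or above the limit forces the transaction to the front of the next block.

```python
# Minimal sketch of the starvation-limit rule (assumed field names).

STARVATION_LIMIT = 1

class PendingTx:
    def __init__(self, name):
        self.name = name
        self.starve_count = 0

def update_starvation(deferred_txs):
    """Increment counters for transactions skipped in the block just formed and
    return those that must be processed first in the next block."""
    promoted = []
    for tx in deferred_txs:
        tx.starve_count += 1
        if tx.starve_count >= STARVATION_LIMIT:
            promoted.append(tx)          # highest priority in the next block
    return promoted

tx0 = PendingTx("Tx0")
print([t.name for t in update_starvation([tx0])])   # ['Tx0'] -> forced into next block
```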

4.3. Dependency Check Methods

4.3.1. Three Types of Dependency Check Methods

Focusing on the above factors, we propose three types of dependency check methods with different priority criteria.
Table 1 shows the priority of each dependency check method for submitting transactions. Transactions are primarily submitted through the highest priority factor and go on to the next priority factor for determination. The priority orders for each method are explained below.
1. Deep-First: Compared with the conventional Hyperledger Fabric method, the top priority goes to the lowest tree level of the dependency tree. If transactions have the same tree level, the highest height is the next deciding factor, followed by the fastest arrival time among transactions of the same height.
2. Delay-Hazard: Compared with the conventional Hyperledger Fabric method, the top priority goes to the lowest tree level of the dependency tree. Among transactions with the same tree level, the fastest arrival time is the next deciding factor.
3. Starve-Avoid: The priority factors are similar to Deep-First, except that top priority is given to any transaction that experiences data starvation during processing. The rest of the priority factors are identical to Deep-First.
Table 1. Dependency priority factors in each dependency check method.

| Priority | Conventional Hyperledger Fabric | Deep-First | Delay-Hazard | Starve-Avoid |
|----------|--------------------------------|------------|--------------|--------------|
| 1 | Time | Lowest Tree Level | Lowest Tree Level | Transaction beyond Starvation Limit |
| 2 | - | Highest Height | Fastest Arrival Time | Lowest Tree Level |
| 3 | - | Fastest Arrival Time | - | Highest Height |
| 4 | - | - | - | Fastest Arrival Time |
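The priority orders of Table 1 can be read as sort keys. The following Python sketch (attribute names such as level, height, arrival, and starve_count are illustrative assumptions) shows one way to express the three proposed methods; the example reproduces the choice made in the Deep-First walkthrough of Section 4.1.1, where Tx8 (height 1) is selected ahead of Tx0 and Tx4.

```python
from collections import namedtuple

STARVATION_LIMIT = 1    # assumed configuration value

def deep_first_key(tx):
    # lowest tree level first, then highest height, then earliest arrival
    return (tx.level, -tx.height, tx.arrival)

def delay_hazard_key(tx):
    # lowest tree level first, then earliest arrival
    return (tx.level, tx.arrival)

def starve_avoid_key(tx):
    # transactions beyond the starvation limit first, then the Deep-First order
    starving = 0 if tx.starve_count >= STARVATION_LIMIT else 1
    return (starving, tx.level, -tx.height, tx.arrival)

Tx = namedtuple("Tx", "name level height arrival starve_count")
roots = [Tx("Tx0", 0, 0, 0, 0), Tx("Tx4", 0, 0, 4, 0), Tx("Tx8", 0, 1, 8, 0)]
print([t.name for t in sorted(roots, key=deep_first_key)])   # ['Tx8', 'Tx0', 'Tx4']
```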

4.3.2. Details of Dependency Check Methods

This section presents the details of the dependency check methods explained above. All the proposed methods in this paper consist of four interactive algorithms. The four algorithms shown in this section refer to the Delay-Hazard method; the other methods follow the same structure except for several parts. An in-depth explanation of the differences between the three methods follows.
The pseudocode for the Grafting algorithm is shown in Figure 10, where Tlist stands for the transaction list that represents the memory pool of unprocessed transactions and Rlist represents the root list of transactions at the root level of the dependency tree. Qcount represents a query count, a variable needed to count transactions that perform read-only functions, since these transactions are not included in the block processing. D represents the descendant list, W the write set, R the read set, C the child set, and P the parent set. All the transactions in the memory pool Tlist are sorted in ascending order of time so that the earliest requested transactions are processed first. The transactions in Tlist are then compared one by one to organize them into a dependency tree. For a specific transaction, all the descendant transactions are brought up for the comparison process. If a transaction’s write set matches the read and write set of a transaction being compared, the former transaction becomes the parent node and the latter becomes the child node, forming the tree structure.
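Since Figure 10 is not reproduced here, the following Python sketch gives a hedged reconstruction of the Grafting step from its description; the dictionary fields and the overlap test are our assumptions, not the exact pseudocode.

```python
# Transactions are plain dicts; 'read'/'write' are key sets, 'parents'/'children'
# hold transaction ids. The overlap test below is our interpretation of the
# parent/child condition described in the text.

def new_tx(tid, t, read, write):
    return {"id": tid, "time": t, "read": set(read), "write": set(write),
            "parents": set(), "children": set()}

def grafting(tlist):
    """Sort the memory pool by request time, link dependent transactions, and
    return the root list (transactions with no parent)."""
    tlist.sort(key=lambda tx: tx["time"])            # earliest requested first
    for i, earlier in enumerate(tlist):
        for later in tlist[i + 1:]:
            if earlier["write"] & (later["read"] | later["write"]):
                earlier["children"].add(later["id"])
                later["parents"].add(earlier["id"])
    return [tx for tx in tlist if not tx["parents"]]

pool = [new_tx("Tx1", 1, {"B"}, {"B"}), new_tx("Tx2", 2, {"B"}, {"B"})]
print([tx["id"] for tx in grafting(pool)])           # ['Tx1'] -> Tx2 hangs below Tx1
```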
The pseudocode for the Descendants algorithm is shown in Figure 11, where Dlist represents the descendant list used to fetch all the descendant nodes of a specific transaction and DTlist represents the temporary descendant list used in the descendant-node recall process. This algorithm fetches all the descendant nodes of the input transaction X exactly once, without overlap.
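A hedged sketch of this step, assuming the same transaction dictionaries as in the Grafting sketch above, is a breadth-first collection of child links that visits each descendant exactly once.

```python
from collections import deque

def descendants(tx, index):
    """Collect every descendant of tx exactly once.
    tx: a transaction dict as in the Grafting sketch; index: id -> transaction."""
    seen, queue = set(), deque(tx["children"])
    while queue:
        tid = queue.popleft()
        if tid not in seen:
            seen.add(tid)                            # each node is visited only once
            queue.extend(index[tid]["children"])
    return seen
```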
The pseudocode for the Pruning algorithm is shown in Figure 12, where B represents the block size of the Hyperledger Fabric network and Slist represents the send list of transactions to be processed in the next block of the network. This algorithm removes the transactions about to be processed in the network from the Rlist and returns Slist. By recalling all the child nodes of the transactions scheduled to be processed, the parent/child relations are removed so that a new dependency tree can be created without the transactions that are about to be processed. An additional data starvation counting process is added for the Deep-First method.
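A simplified sketch of the Pruning step under the same assumptions is shown below: up to B root-level transactions are moved to the send list, their child links are cut, and children with no remaining parents become roots for the next round.

```python
def pruning(rlist, index, block_size):
    """Move up to block_size root transactions to the send list, detach them
    from their children, and promote dependency-free children to the root list."""
    slist = rlist[:block_size]                       # transactions for the next block
    for tx in slist:
        rlist.remove(tx)
        for cid in tx["children"]:
            child = index[cid]
            child["parents"].discard(tx["id"])       # cut the parent/child relation
            if not child["parents"]:                 # no remaining dependency
                rlist.append(child)                  # root candidate for the next block
    return slist
```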
The pseudocode for the Submit Transaction algorithm is shown in Figure 13, where T represents a newly requested transaction in real time and BT represents the block batch time of the Hyperledger Fabric network. This algorithm is a superset of the other interactive algorithms: it adds newly requested transactions to Tlist and sends priority transactions to the Hyperledger Fabric network. Dependency trees are generated every time a transaction is requested into the memory pool, and transactions are submitted to the Hyperledger Fabric network when the Rlist exceeds the current block size or the standby time exceeds the block batch time of the network. For the Deep-First method, the Rlist is sorted by height before the Pruning algorithm is applied. For the Starve-Avoid method, an additional priority transaction list is added so that any transactions experiencing data starvation are processed with top priority.
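Putting the pieces together, the following sketch outlines the submission loop under the same assumptions; the grafting and pruning helpers refer to the sketches above and are passed in as parameters, and timing and error handling are simplified.

```python
import time

def submit_loop(incoming, block_size, batch_time, grafting, pruning, send_block):
    """incoming: iterable of transaction dicts; send_block: callback that hands a
    list of transactions to the Hyperledger Fabric network."""
    tlist, index = [], {}
    last_flush = time.time()
    for tx in incoming:                              # newly requested transactions
        tlist.append(tx)
        index[tx["id"]] = tx
        for t in tlist:                              # regraft from a clean slate
            t["parents"].clear()
            t["children"].clear()
        rlist = grafting(tlist)
        # Deep-First would sort rlist by height here; Starve-Avoid would move
        # starving transactions to the front (see Sections 4.1.1 and 4.1.3).
        if len(rlist) >= block_size or time.time() - last_flush >= batch_time:
            slist = pruning(rlist, index, block_size)
            send_block(slist)                        # hand the block to the network
            for sent in slist:
                tlist.remove(sent)
            last_flush = time.time()
```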

5. Experiments and Results

As mentioned earlier, in the conventional Hyperledger Fabric, an error occurs when multiple transactions with the same key overlap. In addition, submitting a query transaction before the write transaction is reflected in the network will cause transaction failure. The conventional Hyperledger Fabric method is defined here as submitting incoming transactions regardless of error occurrence, recalling and retransmitting failed transactions, and executing them until all the transactions succeed without errors. The proposed method prevents transaction errors by identifying the dependency of randomly requested transactions in advance. We compare our method with the conventional Hyperledger Fabric method through simulation experiments. The purpose of the experiments is to analyze each method’s performance when all the transactions are submitted without collision.
All the methods were applied to an identical transaction set for precise comparison, forming a complicated dependency tree under the P2P Energy Trading System scenario. The network consists of three organizations, six peers, and one solo orderer. The virtual machine is configured with 16 GB of RAM and 1 vCPU and runs on an AMD (Advanced Micro Devices, Santa Clara, CA, USA) Ryzen 7 3700X 8-core processor at 3.5 GHz.

5.1. Experimental Setup

5.1.1. API-Integrated Experimental Setup for Dependency Management in Hyperledger Fabric

The experimental setup for evaluating the proposed transaction dependency checking system was implemented within a Hyperledger Fabric blockchain network. The network consists of three organizations: Org1 (Prosumers), Org2 (Consumers), and Org3 (DSO), with each organization containing two peer nodes. All the peer nodes are connected to a shared channel, enabling communication and transaction exchange. The network utilizes a solo ordering service for transaction sequencing and block creation. Transactions are generated by a Docker-based client, which communicates with the API for dependency resolution before interacting with the Hyperledger Fabric network. The experimental environment is depicted in Figure 14a.
The API was developed as a middleware layer to optimize transaction processing within the Hyperledger Fabric infrastructure. It receives transaction metadata, including the key dependencies and timestamps, from the client. Using these data, the API constructs a dependency graph and dynamically prioritizes transactions based on four key factors: tree level, height, time, and starvation count. By addressing dependency conflicts at the API level, this design ensures smoother transaction processing and significantly reduces the overhead on the Hyperledger Fabric network. The API then submits the prioritized transactions back to the peer nodes for endorsement and eventual inclusion in the blockchain. This process ensures that transaction collisions and delays caused by dependency-related hazards are minimized.
The entire Hyperledger Fabric network was deployed using Docker containers to ensure consistency and reproducibility during the experiments. Each component of the network, including the peer nodes, orderer, and client, operates within its own container, as shown in Figure 14b. The peer nodes are responsible for endorsing transactions and maintaining the blockchain ledger, while the orderer sequences transactions and batches them into blocks. The Docker-based client simulates transaction generation and interacts with the API for dependency management. This containerized setup allows for precise control of variables such as the block size, transaction volume, and dependency levels, ensuring that the results are reproducible and reflect real-world scenarios.
In this setup, the API seamlessly integrates with the Hyperledger Fabric network without requiring any modifications to its core architecture. The API plays a critical role in the experimental setup by managing transaction processing and optimization. First, it dynamically analyzes the transaction metadata received from the client, such as the key dependencies and timestamps, to adjust the transaction priorities based on predefined factors. Once the transactions are prioritized, the API submits them back to the peer nodes for endorsement and ensures their inclusion in the ordered blocks. Additionally, the API monitors the transaction flow to detect and resolve potential collisions or delays, effectively mitigating the dependency-related hazards before final submission to the blockchain network. This integration directly influences the experimental results by improving transaction throughput, reducing latency, and optimizing resource utilization.
Since the error occurrence varies according to the range of the keys, the results are organized based on the percentage of data dependency. For instance, making the key range as wide as possible sets the data dependency near 0%, while using only one key sets the data dependency at 100%. In addition, only verified transactions enter the network so that proposal errors do not occur at the endorsing peers; this is necessary because only successful transaction processing at the block level matters for a precise comparison of results. A random transaction has one of two functions: an InjectToken function that puts tokens into its own key, referencing one key, and a Transfer function that transfers tokens to another key, referencing two keys. For example, a transaction in which ‘A’ adds 1000 tokens to itself is written as {InjectToken, A, 1000}, and a transaction in which ‘A’ sends 1000 tokens to ‘B’ is written as {Transfer, A, B, 1000}.
For a precise comparison between the conventional Hyperledger Fabric method and our proposed method, all the measurements were performed in situations where transactions are stacked up at once, forming a large dependency tree. Three critical factors determine the results: the total number of transactions, the total number of keys, and the block size of the Hyperledger Fabric network. For each variable setup, the results differ between the original processing and the proposed method. Since the results are compared in terms of the collision rate percentage, the number of keys is derived from the block size.
The results of our proposed methods are compared with the conventional Hyperledger Fabric method; therefore, the graphs are presented relative to the conventional Hyperledger Fabric method. In the conventional Hyperledger Fabric method, if a transaction fails to be submitted into a block due to a key collision, the failure is automatically detected and the transaction is resubmitted immediately in the next block. All the failed transactions are recorded in the Fabric log, and resubmission takes place until all the transactions are successfully submitted to the Hyperledger Fabric network. Since the conventional Hyperledger Fabric does not by itself resubmit failed transactions until they succeed, the submission procedure is slightly altered for a proper and meaningful comparison of results.
As the total number of transactions increases, there is a higher chance of transactions sharing the same key in the randomly generated memory pool, resulting in a more complex dependency tree. As the block size increases, there is a higher chance that transactions will collide inside the same block, eventually costing more time for all the transactions to be submitted. Therefore, our proposed method shows a more pronounced difference when the total number of transactions is high and the block size is large.
Three of our proposed methods are compared with the conventional Hyperledger Fabric method. The result of each comparison is measured based on the number of blocks created for the whole transaction submission process. For the average latency, if the conventional Hyperledger Fabric method took five blocks to submit all the transactions and the proposed method took four, the average latency for the proposed method would be −1, since one block was saved during the whole process. The minimum value for the average latency is zero, which occurs when the number of processed blocks is the same as in the conventional Hyperledger Fabric method, typically when the data dependency is 0% or 100%. For the worst-case latency (highest data starvation), the transactions processed by our proposed method are compared with the conventional Hyperledger Fabric method. For example, if transaction ‘A’ was supposed to be submitted in block number 1 under the conventional Hyperledger Fabric method, but our proposed method handled it in block number 4 because of the dependency tree, and this was the maximum delay during the entire process, the worst-case latency would be three, since the transaction was delayed by three blocks. This result is a helpful indicator of the trade-offs that can occur when transactions are reordered for faster performance. The minimum possible value for the worst-case latency is zero, when no transaction is delayed between the compared methods. For TPS, if the conventional Hyperledger Fabric method took ten blocks for the submission process and the proposed method took five, the TPS ratio would be two, since the proposed method handled transactions twice as fast as the conventional Hyperledger Fabric method. The minimum value for the TPS ratio is one, when there is no difference between the compared methods.
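The three comparison metrics can be summarized in a short Python sketch; the block indices below are illustrative, not measured data.

```python
# Average latency, worst-case latency, and TPS ratio, computed from the block
# index of each transaction under the conventional schedule and under a
# proposed method (0-based block indices, illustrative values).

def metrics(conv_block, prop_block):
    """conv_block / prop_block: dict mapping tx id -> block index."""
    conv_blocks = max(conv_block.values()) + 1
    prop_blocks = max(prop_block.values()) + 1
    avg_latency = prop_blocks - conv_blocks                  # negative = blocks saved
    worst_case = max(prop_block[t] - conv_block[t] for t in conv_block)
    tps_ratio = conv_blocks / prop_blocks                    # speed-up over conventional
    return avg_latency, max(worst_case, 0), tps_ratio

conv = {"Tx0": 0, "Tx1": 0, "Tx2": 1, "Tx3": 2, "Tx4": 2}
prop = {"Tx0": 1, "Tx1": 0, "Tx2": 0, "Tx3": 1, "Tx4": 1}
print(metrics(conv, prop))   # (-1, 1, 1.5): one block saved, worst delay of one block
```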
A random set of transactions is generated in advance through the simulation for precise and meaningful comparison results. The randomly generated set of transactions is processed by the conventional Hyperledger Fabric method and by our three proposed methods. To make the results representative of the general case, each performance factor is measured more than 1000 times over randomly generated transaction sets. Whenever a randomly generated set of transactions shows a relative difference (even the slightest), the performance evaluation is added to the result and averaged over the accumulated counts.

5.1.2. P2P Energy Trading System Using Hyperledger Fabric

Figure 15 shows the throughput comparison of Hyperledger Fabric [13], the private Ethereum blockchain, and our dependency reduction method. For a fair comparison, the conventional Hyperledger Fabric is modified to detect and retry failed transactions. We built a miniature P2P Energy Trading System, as shown in Figure 16, for performance comparison in a realistic environment. Hyperledger Fabric (both conventional and our proposed method) shows similarly superior performance due to the parallelism at a low data dependency (little or no dependency hazard).
However, the performance of the conventional Hyperledger Fabric rapidly degrades as the dependency increases. Figure 15 compares the performance of the P2P Energy Trading System on the conventional Hyperledger Fabric [8], the Ethereum blockchain, and our method as the dependency varies from 40% to 100%. The performance of the private Ethereum blockchain is shown as a solid line for reference in Figure 15 and stays constant over the varying dependency since its transactions are executed sequentially. The performance of the conventional Hyperledger Fabric for the same P2P Energy Trading System is shown as a dashed black line, while the performance of our dependency reduction method is shown as a blue dash-dot line. Our method exceeds the conventional Hyperledger Fabric by as much as 27% (Figure 15).
Table 2 summarizes the performance comparison of the P2P Energy Trading System on Ethereum, the conventional Hyperledger Fabric [8], and our method. Hyperledger Fabric (both conventional and our proposed method) shows similarly superior performance due to the parallelism at a low data dependency (0% data dependency hazard). Assume the total number of transactions to be $\tilde{n}_{T_h}(d)$, where the total number is approximated through the data dependency $d$, and the inter-block batch generation time is given as $t_{b_h}$. The throughput of Hyperledger Fabric can be derived as follows:

$$\mathrm{Throughput\ (TPS)\ for\ Hyperledger} = \frac{\tilde{n}_{T_h}(d)}{t_{b_h}}$$
Assume the average total number of transactions to be $\bar{n}_{T_e}$, the intercept to be $\beta_0$, and the regression coefficient to be $\beta_1$ [24]. The throughput of the private Ethereum can be derived as follows:

$$\mathrm{Throughput\ (TPS)\ for\ a\ private\ Ethereum} = \frac{\bar{n}_{T_e}}{\beta_0 + \beta_1 \bar{n}_{T_e}}$$
This derivation shows that the throughput of Hyperledger decreases significantly at a high data dependency. Assume that the total number of transactions is 100 and the maximum block size is 10. For 0% data dependency, the ideal case requires ten blocks. At 100% data dependency, no two transactions can be included in the same block because they all collide with each other; therefore, they have to be processed in 100 blocks. Requiring ten times as many blocks increases the total block batch time (BT) by ten times, resulting in a 10× reduction in throughput.
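The arithmetic of this example can be checked with a few lines of Python; the block batch time is a placeholder value, and only the ten-fold ratio matters.

```python
t_bh = 2.0                                   # assumed block batch time in seconds
total_tx, block_size = 100, 10

blocks_0pct = total_tx // block_size         # 10 blocks when no keys collide
blocks_100pct = total_tx                     # 100 blocks when every tx shares one key

tps_0pct = total_tx / (blocks_0pct * t_bh)       # 5.0 TPS
tps_100pct = total_tx / (blocks_100pct * t_bh)   # 0.5 TPS
print(tps_0pct / tps_100pct)                 # 10.0 -> ten-fold throughput reduction
```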

5.2. Results

5.2.1. Throughput Comparison According to Data Dependency

Figure 17 shows the throughput comparison according to data dependency with the confidence intervals. The following equation is applied to calculate the confidence interval for our comparison.
$$\bar{X} \pm z_{\alpha/2}\frac{\sigma}{\sqrt{n}}$$
The confidence interval is the interval into which the actual average (over all the possible input transaction sets in this example) falls with a certain probability (confidence) [25]. $\bar{X}$, $z_{\alpha/2}$, $\sigma$, and $n$ represent the average throughput over the (randomly generated) sample transaction sets, the standard normal percentile, the standard deviation, and the number of sample sets, respectively. The probability that the actual average throughput belongs to the interval in (1) is $1-\alpha$ [25]. Figure 17 compares the throughput results over 1000 randomly generated 50-transaction sets of our design with those of Hyperledger Fabric. The 95% confidence intervals are compared in Figure 17. All the designs in this paper follow the simulation method based on this confidence interval [26]. The delay limit of the Starve-Avoid method was set to one for a meaningful interpretation of the results of all the experiments.
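For reference, the interval can be computed as in the following Python sketch; the throughput samples are placeholders, not measured values.

```python
import math

def confidence_interval(samples, z=1.96):    # z_{alpha/2} for 95% confidence
    n = len(samples)
    mean = sum(samples) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in samples) / n)
    half = z * sigma / math.sqrt(n)
    return mean - half, mean + half

print(confidence_interval([118.0, 121.5, 119.2, 120.8, 122.1]))   # placeholder samples
```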

5.2.2. Average Throughput and Average Worst-Case Latency When #Tx = 30

Since our method is designed to show an advantage in adequately controlling transactions via the dependency tree, a speed-up of the transaction sending process is a significant achievement. The graph in Figure 18 shows the average speed-up for all our proposed methods compared with the conventional Hyperledger. The graph shows that the difference between the three methods is most distinguishable when the transaction collision rate is about 40%. Under all circumstances, Deep-First showed the best performance for processing all 30 transactions, Delay-Hazard showed the lowest, and the Starve-Avoid method fell between the two. The comparison results are consistent: Deep-First processes the set of transactions the fastest, Starve-Avoid second, and Delay-Hazard the slowest. Even though there are speed-up differences between the three proposed methods, a significant performance improvement is achieved, except in the extreme cases where the transactions form meaningless dependency trees for processing (data dependency of 0% or 100%). All three methods have a speed-up advantage over the conventional Hyperledger when the transaction collision rate is 40%, where Deep-First shows a maximum performance increase of 23.8%.
Figure 19 shows the average worst-case latency for all our proposed methods compared with the conventional Hyperledger. The graph shows that Deep-First has the maximum worst-case latency, while the other methods tend to show lower values. Figure 19 clearly shows the trade-off of Deep-First: significant speed improvement is achieved by maximizing the height control of the dependency tree; however, data starvation inevitably occurs. The difference between the three methods is most distinguishable when the transaction collision rate is below 50%. This is because, at low collision rates, the overall height of the dependency tree created by the random set of transactions tends to be low, and transactions processed based on height priority are moved further ahead of their original positions, altering the original sequence. The Delay-Hazard results show superior performance in preventing data starvation, with none occurring at any collision probability. This is because, instead of pulling transactions forward according to their height values, transactions are only added to the remaining spaces of the block relative to the conventional Hyperledger. Another significant result is the worst-case latency of Starve-Avoid. Although at low collision rates it tends to show a higher worst-case latency than Delay-Hazard, it consistently limits the worst-case latency to one block, which would have been higher with Deep-First. The Starve-Avoid method thus handles the trade-off between the other two methods and would be preferred as the most well-balanced method for submitting delayed sets of transactions.

5.2.3. Average Throughput and Average Worst-Case Latency When #Tx = 40

The graph in Figure 20 shows the average speed-up for all our proposed methods compared with the conventional Hyperledger Fabric method. A noticeable change from the previous graph is the gap between the methods at the same percentage. Deep-First and Starve-Avoid show a minimal difference from the previous result in Figure 18; however, Delay-Hazard tends to show an increase in speed-up performance, since the block capacity is used more efficiently than in the previous result. The optimal performance of a 24.5% increase is again achieved by applying Deep-First at a data dependency of 40%.
The graph in Figure 21 shows the average worst-case latency for all our proposed methods compared with the conventional Hyperledger Fabric method. Compared with the worst-case latency result for 30 transactions in Figure 19, the current graph shows a broader range of differences between the proposed methods. This is because, as the total number of transactions increases, there is a higher chance of a more complex dependency tree being generated, leading to a more significant difference in results between the proposed methods. The results in the current graph are similar to those for 30 transactions, except that the maximum average worst-case latency increases by about 1.5 blocks.

5.2.4. Average Throughput and Average Worst-Case Latency When #Tx = 50

Increasing the total number of transactions by another 10 for the entire submission process reveals the overall tendency of the three proposed methods compared with the conventional Hyperledger Fabric method.
The graph in Figure 22 shows the average speed-up for all our proposed methods compared with the conventional Hyperledger Fabric method. A noticeable change from the previous graphs is the slight increase in performance at a data dependency of 40%. The conclusion that may be drawn from the previous and current graphs is that the performance is optimized when the data dependency is near 40%. After several experiments varying the data dependency between 30% and 50%, we found that the performance was maximized at a data dependency of 37.5%. Therefore, we identify the tendency through additional experiments at a fixed data dependency of 37.5% as the total number of transactions increases.
The graph in Figure 23 shows the average worst-case latency for all our proposed methods compared with the conventional Hyperledger Fabric method. There are two significant results in this graph. First, the worst-case latency of the transaction set processed by Deep-First increases roughly linearly, by about 1.5 blocks for each increment of 10 transactions. Second, and most importantly, the Starve-Avoid method consistently solves the starvation problem occurring in Deep-First. This is a major advantage of the Starve-Avoid method, since it offers a reasonable compromise between the other two methods. The gap in starvation handling grows drastically as the total number of transactions increases. Starve-Avoid would therefore be preferred in large blockchain networks where long delays would otherwise occur, since it is faster than Delay-Hazard and causes far less starvation than Deep-First.

5.2.5. Average Throughput at Fixed Data Dependency of 37.5%

Figure 24 shows the trend of the average speed-up at a fixed data dependency of 37.5% as the total number of transactions increases from 30 to 100. Delay-Hazard improves until it plateaus at about 20% once the total number of transactions exceeds 60, whereas the other two methods remain relatively stable. As the number of transactions increases, the fluctuation decreases, with the speed-up saturating at 27% for Deep-First and 26% for Starve-Avoid. Deep-First and Starve-Avoid therefore outperform Delay-Hazard; although Deep-First attains the highest speed-up, Starve-Avoid achieves the best overall balance by also minimizing data starvation.
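For concreteness, the reported speed-up percentages appear consistent with a simple throughput ratio against the conventional implementation; using the 40% data-dependency row of Table 2 (our reading of how the figures are derived, not an equation stated explicitly in the paper):

\[
\text{speed-up} \;=\; \frac{\mathrm{TPS}_{\text{proposed}}}{\mathrm{TPS}_{\text{conventional}}} - 1,
\qquad
\frac{171}{135} - 1 \;\approx\; 0.267 \;\approx\; 27\%.
\]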

5.2.6. Latency Distribution with 50 Transactions

Figure 25 shows the ogive (cumulative frequency) graphs of latency for the three methods when the total number of transactions is 50. Each graph plots the CDF and the frequency at each latency value, where latency is measured in blocks relative to the conventional Hyperledger Fabric ordering. A latency below zero means a transaction was committed earlier than its original position, while a latency above zero means it was committed later, i.e., data starvation occurred. As in the previous results, Delay-Hazard shows no data starvation; its small advances of −1 or −2 blocks come from filling the remaining spaces in earlier blocks. Deep-First, in contrast, has the widest CDF: it achieves better overall performance by shifting the latency distribution to the left, but at the cost of more frequent data starvation. Starve-Avoid manages starvation adequately by moving transactions that would otherwise be delayed by two or more blocks down to a delay of one block. Taken together, the three graphs confirm that Starve-Avoid offers the best-balanced performance.
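To make the latency definition concrete, the sketch below computes per-transaction block latency and its empirical CDF, assuming latency is defined as (block index under a proposed method) minus (block index under the conventional ordering); the function names, the block assignments in main, and the data layout are illustrative, not taken from the paper.

```go
package main

import (
	"fmt"
	"sort"
)

// blockLatency returns, for each transaction ID, the difference between the
// block index assigned by a proposed method and the block index it would
// occupy under the conventional ordering. Negative values mean earlier
// commitment; positive values indicate delay (potential starvation).
func blockLatency(conventional, proposed map[int]int) map[int]int {
	lat := make(map[int]int, len(conventional))
	for id, convBlock := range conventional {
		lat[id] = proposed[id] - convBlock
	}
	return lat
}

// empiricalCDF returns the sorted latency values and the cumulative fraction
// of transactions at or below each value, i.e., the ogive of Figure 25.
func empiricalCDF(lat map[int]int) ([]int, []float64) {
	values := make([]int, 0, len(lat))
	for _, v := range lat {
		values = append(values, v)
	}
	sort.Ints(values)
	cdf := make([]float64, len(values))
	for i := range values {
		cdf[i] = float64(i+1) / float64(len(values))
	}
	return values, cdf
}

func main() {
	// Hypothetical block assignments for five transactions (IDs 0..4).
	conventional := map[int]int{0: 0, 1: 0, 2: 1, 3: 1, 4: 2}
	proposed := map[int]int{0: 0, 1: 1, 2: 0, 3: 1, 4: 1}

	values, cdf := empiricalCDF(blockLatency(conventional, proposed))
	for i, v := range values {
		fmt.Printf("latency %+d blocks -> CDF %.2f\n", v, cdf[i])
	}
}
```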

6. Conclusions

This paper proposes a method to solve the dependency problem between transactions in a Hyperledger Fabric blockchain network. Transaction key conflicts arise when multiple transactions attempt to modify the same world-state key simultaneously, resulting in MVCC read conflicts. These conflicts disrupt real-time performance by introducing delays and proposal errors during transaction validation. Consequently, users are often required to resubmit failed transactions manually, which increases the processing time and reduces the overall network efficiency. Moreover, failed transactions, although never applied to the world state, are still recorded on the blockchain ledger, consuming valuable storage space and creating resource inefficiencies. Over time, this accumulation of failed transactions leads to performance degradation and scalability challenges in the network.
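For readers unfamiliar with how such conflicts surface at the chaincode level, the following minimal Go contract (our illustration built on the public fabric-contract-api-go package, not the authors' energy-trading chaincode; the EnergyContract and AddEnergy names are hypothetical) performs the read-modify-write pattern on a single world-state key that yields an MVCC read conflict when two transactions in the same block update the same key.

```go
package main

import (
	"fmt"
	"strconv"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// EnergyContract is a minimal illustration of a read-modify-write on one
// world-state key. If two transactions in the same block both invoke
// AddEnergy for the same key, the later one is invalidated at validation
// time with an MVCC read conflict.
type EnergyContract struct {
	contractapi.Contract
}

// AddEnergy reads the current value of key, adds amount, and writes it back.
func (c *EnergyContract) AddEnergy(ctx contractapi.TransactionContextInterface, key string, amount int) error {
	raw, err := ctx.GetStub().GetState(key) // read: records (key, version) in the read set
	if err != nil {
		return fmt.Errorf("failed to read %s: %v", key, err)
	}
	current := 0
	if raw != nil {
		current, err = strconv.Atoi(string(raw))
		if err != nil {
			return fmt.Errorf("corrupt value for %s: %v", key, err)
		}
	}
	// write: any concurrent transaction that read the same version is invalidated
	return ctx.GetStub().PutState(key, []byte(strconv.Itoa(current+amount)))
}

func main() {
	cc, err := contractapi.NewChaincode(&EnergyContract{})
	if err != nil {
		panic(err)
	}
	if err := cc.Start(); err != nil {
		panic(err)
	}
}
```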
The system incorporates three innovative methods—Deep-First, Delay-Hazard, and Starve-Avoid—to optimize transaction processing and improve performance. These methods address critical challenges in managing transaction dependencies by dynamically prioritizing transactions and mitigating conflicts. The Deep-First method achieves the highest throughput, with a maximum improvement of 27%, by prioritizing transactions with the largest dependency trees. However, this approach often results in frequent data starvation, making it less suitable for scenarios requiring fairness. The Delay-Hazard method, on the other hand, eliminates data starvation entirely by allocating delayed transactions to available block spaces, ensuring consistent processing. While it provides lower throughput compared to the other methods, it excels in fairness. The Starve-Avoid method balances throughput and fairness by dynamically adjusting the transaction priorities based on predefined limits, achieving a 26% throughput improvement while minimizing the worst-case latency to one block. This balance makes it the most well-rounded solution for large-scale blockchain networks.
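As a hedged sketch of how such a priority rule might be expressed (our reading of the described behaviour, not the pseudocode of Figure 8), the comparator below promotes a transaction once its accumulated delay reaches an assumed starvation limit, otherwise prefers taller dependency trees, and breaks ties by arrival order; the field names and the limit value are illustrative.

```go
package main

import (
	"fmt"
	"sort"
)

// txMeta captures the attributes the proposed scheme reportedly weighs when
// ordering transactions: dependency-tree height, arrival order, and how many
// blocks the transaction has already been delayed. Field names are ours.
type txMeta struct {
	ID           int
	TreeHeight   int // height of the dependency tree rooted at this transaction
	ArrivalOrder int // position in the original submission sequence
	DelayedFor   int // number of blocks the transaction has waited so far
}

const starvationLimit = 1 // assumed configurable limit; value is illustrative

// higherPriority returns true if a should be scheduled before b under a
// Starve-Avoid-style rule: starving transactions jump the queue, otherwise
// deeper dependency trees go first, with arrival order as the tie-breaker.
func higherPriority(a, b txMeta) bool {
	aStarving := a.DelayedFor >= starvationLimit
	bStarving := b.DelayedFor >= starvationLimit
	if aStarving != bStarving {
		return aStarving
	}
	if a.TreeHeight != b.TreeHeight {
		return a.TreeHeight > b.TreeHeight
	}
	return a.ArrivalOrder < b.ArrivalOrder
}

func main() {
	pending := []txMeta{
		{ID: 0, TreeHeight: 1, ArrivalOrder: 0, DelayedFor: 2}, // starving
		{ID: 1, TreeHeight: 4, ArrivalOrder: 1, DelayedFor: 0},
		{ID: 2, TreeHeight: 2, ArrivalOrder: 2, DelayedFor: 0},
	}
	sort.Slice(pending, func(i, j int) bool { return higherPriority(pending[i], pending[j]) })
	for _, tx := range pending {
		fmt.Printf("schedule tx %d (height=%d, delayed=%d)\n", tx.ID, tx.TreeHeight, tx.DelayedFor)
	}
}
```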
Experimental results demonstrate that the performance optimization is most significant when the data dependency is between 37.5% and 40%. In this range, all three methods outperform the conventional Hyperledger Fabric method, with the Starve-Avoid method offering the best trade-off between throughput and fairness. Latency distribution analysis highlights that while the Deep-First method achieves faster overall transaction processing, it frequently causes delays for lower-priority transactions. In contrast, the Starve-Avoid method effectively mitigates such delays, ensuring competitive performance and minimizing data starvation. These findings demonstrate that the proposed system is particularly effective in environments with moderate to high data dependency and large transaction volumes.
Despite these advancements, the proposed system has limitations. The Deep-First method sacrifices fairness for throughput, while the Starve-Avoid method relies on predefined parameters, such as starvation limits, which may require tuning for different network scenarios. Additionally, as transaction volumes and dependency tree complexities increase, scalability challenges may arise in dynamic networks with frequent changes in the transaction patterns or participant nodes. Future research should focus on developing adaptive algorithms that dynamically adjust priority factors in response to real-time network conditions.
In conclusion, the transaction dependency checking system demonstrates significant improvements in throughput, latency, and fairness compared to the conventional Hyperledger Fabric method. Among the proposed methods, the Starve-Avoid method emerges as the most balanced solution, offering substantial contributions to the performance optimization and scalability of enterprise-grade blockchain networks.

Author Contributions

Conceptualization, J.-W.K., J.-G.S. and I.-H.P.; methodology, J.-W.K., J.-G.S. and I.-H.P.; software, J.-W.K. and J.-G.S.; validation, J.-W.K., J.-G.S., I.-H.P., D.-H.J., Y.-J.K. and J.-W.J.; formal analysis, J.-W.K. and I.-H.P.; investigation, J.-W.K. and Y.-J.K.; resources, J.-W.K., J.-G.S. and D.-H.J.; data curation, J.-W.K.; writing—original draft preparation, J.-W.K.; writing—review and editing, J.-W.K., J.-G.S., D.-H.J., Y.-J.K. and J.-W.J.; visualization, J.-W.K. and J.-G.S.; supervision, J.-W.J.; project administration, J.-W.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the BK21 FOUR (Fostering Outstanding Universities for Research) program, funded by the Ministry of Education (MOE, Republic of Korea) and the National Research Foundation of Korea (NRF) in the Department of Electronic Engineering, Sogang University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Acknowledgments

We thank all the editors for their comments on this manuscript.

Conflicts of Interest

Authors Ju-Won Kim and In-Hwan Park were employed by the company LG Uplus. Author Jae-Geun Song was employed by the company Com2us Holdings. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest. The authors declare that this research was supported by the BK21 FOUR (Fostering Outstanding Universities for Research) program funded by the Ministry of Education (MOE, Republic of Korea) and the National Research Foundation of Korea (NRF) in the Department of Electronic Engineering, Sogang University. The funder was not involved in the study design, data collection, analysis, interpretation, writing of the article, or decision to submit it for publication.

References

  1. Tapscott, D.; Tapscott, A. Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business, and the World; 2016; pp. 100–150. [Google Scholar]
  2. Zikratov, I.; Kuzmin, A.; Akimenko, V.; Niculichev, V.; Yalansky, L. Ensuring data integrity using blockchain technology. In Proceedings of the 2017 20th Conference of Open Innovations Association (FRUCT), St. Petersburg, Russia, 3–7 April 2017; pp. 534–539. [Google Scholar]
  3. Introduction to Hyperledger Fabric. Available online: https://hyperledger-fabric.readthedocs.io/en/release-2.5/whatis.html (accessed on 12 December 2024).
  4. Nakamoto, S. Bitcoin: A Peer-to-Peer Electronic Cash System. 2008. Available online: https://bitcoin.org/bitcoin.pdf (accessed on 3 February 2025).
  5. Ethereum. Available online: https://www.ethereum.org (accessed on 12 December 2024).
  6. Surjandari, I.; Yusuf, H.; Laoh, E.; Maulida, R. Designing a Permissioned Blockchain Network for the Halal Industry using Hyperledger Fabric with multiple channels and the raft consensus mechanism. J. Big Data 2021, 8, 10. [Google Scholar] [CrossRef]
  7. Zhang, W.; Anand, T. Ethereum architecture and overview. In Blockchain and Ethereum Smart Contract Solution Development: Dapp Programming with Solidity; Apress: Berkeley, CA, USA, 2022; pp. 209–244. [Google Scholar]
  8. Ke, Z.; Park, N. Performance modeling and analysis of Hyperledger Fabric. Clust. Comput. 2023, 26, 2681–2699. [Google Scholar] [CrossRef]
  9. Wen, Y.F.; Hsu, C.M. A performance evaluation of modular functions and state databases for Hyperledger Fabric blockchain systems. J. Supercomput. 2023, 79, 2654–2690. [Google Scholar] [CrossRef]
  10. Yang, G.; Lee, K.; Lee, K.; Yoo, Y.; Lee, H.; Yoo, C. Resource analysis of blockchain consensus algorithms in hyperledger fabric. IEEE Access 2022, 10, 74902–74920. [Google Scholar] [CrossRef]
  11. Baliga, A.; Solanki, N.; Verekar, S.; Pednekar, A.; Kamat, P.; Chatterjee, S. Performance characterization of hyperledger fabric. In Proceedings of the 2018 Crypto Valley Conference on Blockchain Technology (CVCBT), Zug, Switzerland, 20–22 June 2018; pp. 65–74. [Google Scholar]
  12. Khatri, S.; al-Sulbi, K.; Attaallah, A.; Ansari, M.T.J.; Agrawal, A.; Kumar, R. Enhancing Healthcare Management during COVID-19: A Patient-Centric Architectural Framework Enabled by Hyperledger Fabric Blockchain. Information 2023, 14, 425. [Google Scholar] [CrossRef]
  13. Park, I.H.; Moon, S.J.; Lee, B.S.; Jang, J.W. A p2p surplus energy trade among neighbors based on hyperledger fabric blockchain. In Information Science and Applications: ICISA 2019; Springer: Singapore, 2020; pp. 65–72. [Google Scholar]
  14. Rehan, M.; Javed, A.R.; Kryvinska, N.; Gadekallu, T.R.; Srivastava, G.; Jalil, Z. Supply chain management using an industrial internet of things hyperledger fabric network. Hum. Centric Comput. Inf. Sci. 2023, 13, 4. [Google Scholar]
  15. Chacko, J.A.; Mayer, R.; Jacobsen, H.A. Why do my blockchain transactions fail? A study of hyperledger fabric. In Proceedings of the 2021 International Conference on Management of Data, Xi’an, China, 20–25 June 2021; pp. 221–234. [Google Scholar]
  16. World State. Available online: https://hyperledger-fabric.readthedocs.io/en/release-2.5/ledger/ledger.html#world-state (accessed on 12 December 2024).
  17. Read and Write Operations. Available online: https://hyperledger-fabric.readthedocs.io/en/release-2.5/readwrite.html (accessed on 12 December 2024).
  18. Ji, B.J.; Kuo, T.W. Better Clients, Less Conflicts: Hyperledger Fabric Conflict Avoidance. In Proceedings of the 2024 IEEE International Conference on Blockchain and Cryptocurrency (ICBC), Dublin, Ireland, 27–31 May 2024; pp. 368–376. [Google Scholar]
  19. Bappy, F.H.; Zaman, T.S.; Sajid, M.S.I.; Pritom, M.M.A.; Islam, T. Maximizing Blockchain Performance: Mitigating Conflicting Transactions through Parallelism and Dependency Management. In Proceedings of the 2024 IEEE International Conference on Blockchain (Blockchain), Copenhagen, Denmark, 19–22 August 2024; pp. 140–147. [Google Scholar]
  20. What Is a Ledger? Available online: https://hyperledger-fabric.readthedocs.io/en/release-2.5/ledger/ledger.html#what-is-a-ledger (accessed on 12 December 2024).
  21. High-Throughput. Available online: https://github.com/hyperledger/fabric-samples/tree/main/high-throughput (accessed on 12 December 2024).
  22. Amiri, M.J.; Agrawal, D.; El Abbadi, A. Parblockchain: Leveraging transaction parallelism in permissioned blockchain systems. In Proceedings of the 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), Dallas, TX, USA, 7–9 July 2019; pp. 1337–1347. [Google Scholar]
  23. Serra, E.; Spezzano, F. HTFabric: A Fast Re-ordering and Parallel Re-execution Method for a High-Throughput Blockchain. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Boise, ID, USA, 21–25 October 2024; pp. 2118–2127. [Google Scholar]
  24. Leal, F.; Chis, A.E.; González–Vélez, H. Performance evaluation of private ethereum networks. SN Comput. Sci. 2020, 1, 285. [Google Scholar] [CrossRef]
  25. Hogg, R.V.; Tanis, E.A. Probability and Statistical Inference; Prentice Hall: Upper Saddle River, NJ, USA, 2001. [Google Scholar]
  26. Jang, J.W.; Choi, S.B.; Prasanna, V.K. Energy-and time-efficient matrix multiplication on FPGAs. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2005, 13, 1305–1319. [Google Scholar] [CrossRef]
Figure 1. An illustration to show the data dependency in Hyperledger Fabric.
Figure 2. An illustrative sequence of transactions.
Figure 3. Overall structure of the dependency tree generation system.
Figure 4. An illustrative sequence of transactions and the associated dependency tree (arrows represent the data dependency).
Figure 5. An illustrative scenario of the conventional Hyperledger Fabric method.
Figure 6. (a) An illustrative scenario of the Deep-First method; and (b) pseudocode for the Deep-First method.
Figure 7. (a) An illustrative scenario of the Delay-Hazard method; and (b) pseudocode for the Delay-Hazard method.
Figure 8. (a) An illustrative scenario of the Starve-Avoid method; and (b) pseudocode for the Starve-Avoid method.
Figure 9. Comparison of all the transaction processing methods.
Figure 10. Pseudocode for the Grafting algorithm.
Figure 11. Pseudocode for the Descendants algorithm.
Figure 12. Pseudocode for the Pruning algorithm.
Figure 13. Pseudocode for the Submit Transaction algorithm.
Figure 14. (a) An experimental setup; and (b) the Docker setup for experimentation.
Figure 15. P2P Energy Trading System throughput comparison between Hyperledger Fabric and the private Ethereum blockchain.
Figure 16. A miniature trading system for renewable energy (also used in [13]).
Figure 17. Throughput comparison according to the data dependency with confidence intervals: (a) 40% data dependency; (b) 20% data dependency; and (c) 70% data dependency.
Figure 18. Average throughput depending on the data dependency ratio (the number of transactions is 30).
Figure 19. Average worst-case latency depending on the data dependency ratio (the number of transactions is 30).
Figure 20. Average throughput depending on the data dependency ratio (the number of transactions is 40).
Figure 21. Average worst-case latency depending on the data dependency ratio (the number of transactions is 40).
Figure 22. Average throughput depending on the data dependency ratio (the number of transactions is 50).
Figure 23. Average worst-case latency depending on the data dependency ratio (the number of transactions is 50).
Figure 24. Throughput of the three methods at a fixed data dependency of 37.5%.
Figure 25. Latency distribution of the three methods with 50 transactions: (a) Deep-First; (b) Starve-Avoid; and (c) Delay-Hazard.
Table 2. Performance analysis of the P2P Energy Trading System.

|  | Ethereum | Conventional Hyperledger Fabric | Our Method | Speed-Up (Our Method) |
| --- | --- | --- | --- | --- |
| Blockchain type | Private | Private | Private | - |
| Throughput (TPS) at 0% data dependency | 21 | 253 (12×) | 253 (12×) | 0% |
| Throughput (TPS) at 40% data dependency | 21 | 135 (6.4×) | 171 (8.14×) | 27% |
| Throughput (TPS) at 50% data dependency | 21 | 108 (5.1×) | 132 (6.2×) | 22% |
| Throughput (TPS) at 100% data dependency | 21 | 8 (0.4×) | 8 (0.4×) | 0% |

A private Ethereum blockchain with 21 TPS is constructed for comparison.
