Article

TFR-LRC: Rack-Optimized Locally Repairable Codes: Balancing Fault Tolerance, Repair Degree, and Topology Awareness in Distributed Storage Systems

School of Information and Software Engineering, East China Jiaotong University, Nanchang 330000, China
* Author to whom correspondence should be addressed.
Information 2025, 16(9), 803; https://doi.org/10.3390/info16090803
Submission received: 12 August 2025 / Revised: 11 September 2025 / Accepted: 12 September 2025 / Published: 15 September 2025

Abstract

Locally Repairable Codes (LRCs) have become the dominant design in wide-stripe erasure coding storage systems due to their excellent locality and low repair bandwidth. In such systems, the repair degree—defined as the number of helper nodes contacted during data recovery—is a key performance metric. However, as stripe width increases, the probability of multiple simultaneous node failures grows, which significantly raises the repair degree in traditional LRCs. Addressing this challenge, we propose a new family of codes called TFR-LRCs (Locally Repairable Codes for balancing fault tolerance and repair efficiency). TFR-LRCs introduce flexible design choices that allow trade-offs between fault tolerance and repair degree: they can reduce the repair degree by slightly increasing storage overhead, or enhance fault tolerance by tolerating a slightly higher repair degree. We design a matrix-based construction to generate TFR-LRCs and evaluate their performance through extensive simulations. The results show that, under multiple failure scenarios, TFR-LRC reduces the repair degree by up to 35% compared with conventional LRCs, while preserving the original LRC structure. Moreover, under identical code parameters, TFR-LRC achieves improved fault tolerance, tolerating up to g + 2 failures versus g + 1 in conventional LRCs, with minimal additional cost. Notably, in maintenance mode, where entire racks may become temporarily unavailable, TFR-LRC demonstrates substantially better recovery efficiency compared to existing LRC schemes, making it a practical choice for real-world deployments.

1. Introduction

In recent years, with the rapid development of the information technology era, the volume of data has been growing explosively [1]. Distributed storage systems ensure high availability and reliability of data through the use of erasure coding [2,3]. For example, Azure [4] adopts LRC(12, 2, 2), while Hadoop Distributed File System [5], Ceph [6], and Google File System [7] employ Reed-Solomon (RS) codes. Compared to replication techniques, erasure coding significantly reduces storage overhead while maintaining the same level of fault tolerance [8].
To further improve storage efficiency, researchers have explored the use of wide codes [9,10,11], which refer to erasure codes with larger stripe widths. For instance, in Reed–Solomon (RS) codes [12], both RS(12, 8) and RS(154, 150) [13] can tolerate up to four simultaneous node failures. However, RS(154, 150) incurs a storage overhead of only about 1.03×, which is approximately 68.7% of the overhead of RS(12, 8). This demonstrates that wide codes can significantly reduce storage cost without compromising reliability. In practical deployments, most wide codes are implemented as Locally Repairable Codes (LRCs) to minimize I/O overhead during data repair [14].
However, deploying wide codes also introduces several inevitable challenges. In both centralized and decentralized storage clusters, systems are susceptible to node failures. As stripe width increases, the number of healthy nodes required for data repair also grows. Several empirical studies have shown that wide stripes are particularly vulnerable to multiple correlated or simultaneous failures [10,15,16]. For example, a 24 h monitoring study of two LRC clusters by Google [10] revealed that wide stripes tend to experience compound failures more frequently than narrow stripes, which amplifies repair complexity. This indicates that the failure probability in wide stripes does not simply scale linearly, but increases disproportionately with stripe width. In such cases, the system must contact a larger number of available nodes during the repair process, which significantly increases data transmission overhead across the cluster network.
In distributed storage systems, the repair degree is typically defined as the number of helper nodes that must be contacted during the data repair process. Different types of erasure coding exhibit varying impacts on the repair degree. For example, in Reed–Solomon (RS) codes, an RS(n, k) code requires accessing any k out of the remaining available nodes to reconstruct a lost block, resulting in a repair degree of k. In contrast, Locally Repairable Codes partition data nodes into local groups, each augmented with a local parity block. This structure enables the repair of a single failed data block using only the other blocks within the same local group. Consequently, the repair degree of an LRC is generally equal to the number of healthy nodes in the local repair group to which the failed block belongs.
However, when multiple nodes fail simultaneously in an LRC, particularly when the failed nodes belong to the same local repair group, the repair degree increases rapidly. In such cases, the repair process must access not only the global parity blocks but also all available data blocks in the stripe. As a result, the repair cost becomes significantly higher and may be unacceptable for wide LRCs.
In addition, traditional LRCs are generally only guaranteed to tolerate any g + 1 failures, although some g + 2 failure patterns can also be tolerated, where g is the number of global parity blocks. For example, the LRC(12, 2, 2) used by Azure [14] tolerates any 3 failures and only 86% of 4-failure patterns, even though it has 2 local parity blocks and 2 global parity blocks. Thus, for wide stripes, fault tolerance remains to be further explored.
This paper presents TFR-LRC (Locally Repairable Codes for balancing fault tolerance and repair efficiency), a new family of locally repairable codes that greatly reduces the repair degree when multiple blocks fail without changing the structure of the traditional LRC, and improves fault tolerance at the cost of a slight increase in repair degree. TFR-LRC allows the repair subset with the lowest repair degree to be selected for repairing the failed data. Experiments show that TFR-LRC achieves higher fault tolerance than traditional LRCs under the same parameter settings. The main contributions of this paper are as follows:
  • We construct a new family of LRCs, TFR-LRC, which exhibits excellent repair performance in the case of multiple failures. TFR-LRC establishes relationships among data blocks, local parity blocks, and global parity blocks by grouping all data blocks and global parity blocks and requiring each data block and global parity block to participate in the checks of two local parity blocks. With this construction, TFR-LRC can tolerate any g + 2 failures within the coding structure;
  • We design an algorithm for generating the TFR-LRC coding matrix. The TFR-LRC structure constructed by this algorithm achieves excellent repair performance when multiple blocks fail without changing the structure of the traditional LRC, and obtains higher fault tolerance at the cost of a slight increase in repair degree;
  • Unlike existing schemes that demonstrate coding fault tolerance only experimentally, this paper rigorously proves that TFR-LRC has a fault tolerance of g + 2, i.e., TFR-LRC can tolerate any g + 2 failures within the structure. Existing LRC schemes can only tolerate any g + 1 node failures, whereas TFR-LRC improves fault tolerance by sacrificing a small amount of repair degree.
The remainder of this paper is organized as follows. Section 2 introduces the related work. Section 3 illustrates the motivation of this paper through a simple example. Section 4 defines the terminology used in this paper and describes the core idea and design details of TFR-LRC. Section 5 proves the fault tolerance of TFR-LRC from a theoretical point of view. Section 6 compares TFR-LRC with several other mainstream LRC schemes through simulation experiments and evaluates its feasibility. Section 7 analyzes the placement of TFR-LRC under maintenance-zone deployment. Section 8 concludes this paper and outlines future work.

2. Related Work

Erasure coding has been widely used in large-scale distributed storage systems as a coding mechanism with high fault tolerance and storage efficiency. In the 1960s, MIT Lincoln Laboratory developed the RS(n, k) code, which slices a data file into k blocks and encodes them to obtain a total of n blocks. RS codes can repair lost data through decoding operations, providing high data reliability and fault tolerance. Compared with replication techniques, erasure coding reduces storage overhead while maintaining equivalent fault tolerance.
In recent years, researchers have increasingly leveraged the local repair property of LRC to further improve system performance. By organizing nodes into local repair groups, LRC enables single-node failures to be repaired within a group, reducing both I/O and network overhead.

2.1. Data Repair

Erasure-coded data repair refers to reconstructing, on a newly allocated node, the data lost when a node in the system fails or otherwise loses its stored data. In RS codes, which satisfy the MDS property [17], when a node fails, any k blocks must be accessed from the remaining n − 1 available blocks, and the lost data is repaired through encoding and decoding operations; the repair degree is therefore k. For example, in the RS(14, 10) code deployed by Facebook f4 [18], the repair cost for one failure is k = 10. As the stripe width increases, the repair degree grows linearly, leading to higher network and I/O costs, especially in wide-stripe scenarios.
Compared with RS code, LRC is more suitable for large-scale storage systems through a more localized repair method, which reduces the I/O and repair degree during the data repair process. However, wider stripes increase the likelihood of multiple simultaneous node failures, which can force the system to fall back to global decoding, significantly increasing repair cost.

2.2. Locality in Erasure Coding

In wide stripes, LRC has been studied and deployed due to its practical improvement over MDS codes. Among deployed systems, Azure-LRC [14] and Azure-LRC+1 [19] are two widely adopted structures. The key difference lies in global parity protection: Azure-LRC does not protect global parities within local groups, so a global parity failure results in high repair costs. In Azure-LRC+1, global parity blocks are grouped with local parity blocks to reduce repair cost. Nevertheless, this approach introduces extra storage overhead, highlighting a tradeoff between repair efficiency and storage cost.
Building on these designs, Google’s Optimal and Uniform Cauchy LRCs [10] target wide stripes. Optimal Cauchy LRC evenly distributes data blocks across local groups and connects global parities through XOR checks. Uniform Cauchy LRC includes both data and global parity blocks within local groups. Although Uniform Cauchy LRC reduces repair bandwidth for global parity failures, the repair degree can still reach O(k) in certain scenarios, limiting practical benefits. Furthermore, these schemes generally tolerate only g + 1 failures, where g is the number of global parities, leaving fault tolerance improvement for wide stripes unexplored.
Motivated by these limitations, TFR-LRC is proposed. TFR-LRC maintains the traditional LRC structure while allowing each block to participate in two local parities. As a result, it can tolerate up to g + 2 failures with a small increase in repair degree, improving both fault tolerance and repair efficiency in wide-stripe and multi-failure scenarios.
In addition to LRC variants, Piggybacking Codes [20] have been proposed as another class of modern erasure codes to reduce repair bandwidth without increasing storage overhead. The key idea is to overlay carefully designed auxiliary functions on top of RS-coded symbols, thereby achieving significant bandwidth savings for single-block repairs. However, Piggybacking Codes are primarily optimized for single failure scenarios, and their advantages diminish when multiple blocks fail simultaneously, as repair often still requires contacting a large number of nodes. In contrast, TFR-LRC is explicitly designed for wide-stripe settings with multiple failures, where each block participates in two local groups. This structure enables TFR-LRC to tolerate up to g + 2 failures while maintaining low repair degrees, providing a more balanced tradeoff between fault tolerance and repair efficiency in large-scale distributed systems.
In existing work, there are also studies focusing on the generation of wide stripes. For example, Staged Stripe Merging [11] proposes a scheme that gradually merges narrow stripes into wide ones, effectively reducing cross-cluster merging overhead while maintaining repair efficiency. Such studies are complementary to our exploration of repair optimization in wide-stripe scenarios.

3. Motivation

For wide stripes, the value of k is generally large, and both RS codes and LRC schemes seek a trade-off between storage cost, repair cost and fault tolerance. This motivates us to explore the essence of this relationship and to examine whether some existing ideas can be combined to obtain a better trade-off, making the codes more suitable for wide stripes.
Next, we explain the construction idea and highlights of TFR-LRC through a simple example. As shown in Figure 1, TFR-LRC(30, 24, 4, 2) contains 30 blocks (see Definition 1 in Section 4.1 for the meaning of each parameter): 24 data blocks, 4 local parity blocks and 2 global parity blocks. For the purpose of illustration, we assume a uniform failure distribution; detailed analysis under general failure patterns and formal proofs of fault tolerance and repair degree optimization are provided in Section 5.
In Figure 1, each data block and global parity block is assigned to two local repair groups, where D1–D24 denote data blocks, L1–L4 denote local parity blocks and G1–G2 denote global parity blocks. The upper part of Figure 1 is consistent with the construction of Uniform Cauchy LRC, i.e., all data blocks and global parity blocks are divided together into local repair groups. Our main change is the addition of two local parity blocks, L3 and L4, according to Algorithm 1 (see Section 4.2 for details), where L3 is a linear encoding of {D2, D3, D4, D6, D7, D9, D10, D14, D19, D20, D22, D24, G1} and L4 is a linear encoding of {D1, D5, D8, D11, D12, D13, D15, D16, D17, D18, D21, D23, G2}. This design ensures that each data block and global parity block has two local repair groups that can help repair it. When multiple blocks fail simultaneously, the failures can be handled at a repair cost far below the MDS level, greatly reducing the repair degree and yielding higher fault tolerance.
Algorithm 1 Generating a TFR-LRC matrix
Input: Number of data blocks k, Number of global parities g, Number of local parities l, Optional random seed seed.
Output: A generation matrix satisfying specific conditions.
1: Begin
2: Set Lhalf = l/2
3: Set Q = floor(2 × (k + g) / l)
4: Set R = 2 × (k + g) mod l
5: Set M = Lhalf − R
6: Initialize random number generator with seed
7: for i = 1 to k do
8:  Append chr(ord('a') + i − 1) to the data block list
9:  if k is greater than the number of lowercase letters then
10:    Use additional characters (other than uppercase letters and digits) to represent the remaining data blocks.
11:  end if
12: end for
13: duplicate data block list once
14: for i = 1 to g do
15:  Append str(i) to the global parity block list
16: end for
17: duplicate global block list once
18: Loop
19: if R = 0 then
20:  Initialize a matrix of l rows and Q columns, all elements set to 0
21:  Fill the first Lhalf rows of the matrix with these ( k + g ) elements in order, with Q elements in each row.
22:  Fill randomly another set of ( k + g ) elements into the last Lhalf rows of the matrix, with Q elements in each row.
23: else
24:  Initialize a matrix of l rows and ( Q + 1 ) columns, all elements set to 0.
25:  Fill these ( k + g ) elements into the first Lhalf rows of the matrix in order, where the first M rows are filled with Q elements and the remaining rows are filled with ( Q + 1 ) elements.
26:  Fill randomly another set of ( k + g ) elements into the last Lhalf rows of the matrix.
27: end if
28: if the matrix satisfies the condition then
29:  Add a capital letter as a row number at the beginning of each row.
30:  Return the matrix
31: end if
32: End Loop
33: End
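For readers who wish to experiment with the construction, a minimal Python sketch of Algorithm 1 is given below. It is not the authors' implementation: the Q/R/M bookkeeping is interpreted as spreading the k + g blocks of each half evenly over l/2 rows, the acceptance condition on line 28 (which the pseudocode leaves abstract) is replaced by a simple placeholder check, and all function and block names are illustrative.

```python
import random

def generate_tfr_lrc_matrix(k, g, l, seed=None, max_tries=1000):
    """Sketch of Algorithm 1: build an l-row layout in which every data block
    (d1..dk) and every global parity block (g1..gg) appears in exactly two rows,
    each row corresponding to one local parity block / local repair group."""
    assert l % 2 == 0 and l >= 2, "the construction splits the l rows into two halves"
    rng = random.Random(seed)
    l_half = l // 2
    # One consistent reading of the Q/R/M bookkeeping: spread the k + g blocks of
    # each half over l_half rows, so the first (l_half - r) rows get q blocks and
    # the remaining r rows get q + 1 blocks.
    q, r = divmod(k + g, l_half)
    row_sizes = [q] * (l_half - r) + [q + 1] * r

    blocks = [f"d{i}" for i in range(1, k + 1)] + [f"g{j}" for j in range(1, g + 1)]

    def split_into_rows(seq):
        rows, start = [], 0
        for size in row_sizes:
            rows.append(list(seq[start:start + size]))
            start += size
        return rows

    for _ in range(max_tries):
        first_half = split_into_rows(blocks)      # filled in order (lines 21/25)
        shuffled = blocks[:]
        rng.shuffle(shuffled)
        second_half = split_into_rows(shuffled)   # filled randomly (lines 22/26)
        matrix = first_half + second_half

        # Placeholder for the acceptance condition on line 28 of Algorithm 1
        # (the paper's full condition is not reproduced here): every block must
        # occur exactly twice, in two distinct rows.
        where = {}
        for row_idx, row in enumerate(matrix):
            for b in row:
                where.setdefault(b, []).append(row_idx)
        if all(len(set(rows)) == 2 for rows in where.values()):
            # Prefix each row with its local parity label, as in the final matrix.
            return [[f"L{idx + 1}"] + row for idx, row in enumerate(matrix)]
    raise RuntimeError("no acceptable matrix found within max_tries")


if __name__ == "__main__":
    for row in generate_tfr_lrc_matrix(k=12, g=2, l=4, seed=7):
        print(" ".join(row))
```

Running the sketch with (k, g, l) = (12, 2, 4) yields a 4-row layout of the same shape as the TFR-LRC(18, 12, 4, 2) example in Section 4.2, although the randomized second half will generally differ from the one shown there.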
Table 1 compares the average repair degree of TFR-LRC(30, 24, 4, 2) and Uniform Cauchy LRC(28, 24, 2, 2) when one block fails and when two blocks fail. TFR-LRC does not change the structure of Uniform Cauchy LRC and, by adding just two local parity blocks, obtains a lower repair degree when two blocks fail, thus striking a new trade-off between storage cost and repair degree.
Furthermore, Table 2 compares the fault tolerance of TFR-LRC and Uniform Cauchy LRC under the same parameters (n = 30, k = 24, l = 4, g = 2). Here, the structure of TFR-LRC remains as shown in Figure 1, while the structure of Uniform Cauchy LRC changes. TFR-LRC achieves higher fault tolerance at the cost of a slight increase in repair cost when multiple blocks fail, i.e., it can tolerate any g + 2 failures within the coding structure.

4. TFR-LRC

4.1. Definitions

Definition 1 (TFR-LRC(n, k, l, g)).
A TFR-LRC contains four parameters, which are the number of total blocks n, the number of data blocks k, the number of local parity blocks l, and the number of global parity blocks g, where  n = k + l + g , and k, l, g > 0.
Definition 2 (Local repair group).
Given a TFR-LRC(n, k, l, g), each data block and global parity block is assigned to two local repair groups; the blocks assigned to a group are linearly encoded to generate a local parity block. The set of all blocks in such a group is referred to as a local repair group. According to the local repair property, any block in a local repair group can be reconstructed from all the other blocks in that group.
Definition 3 (Locality).
The maximum value of the number of blocks to be read when repairing a single block.
Definition 4 (The number of failures, f).
The number of blocks in the coding structure that fail simultaneously.
Definition 5 (Average repair degree, ARD).
ARD is defined as the average repair degree when repairing the failed blocks, and the repair degree for f failed blocks is defined as ARD(f) with the following expression: ARD(f) = Σ_{i=1, j≠i}^{n} cost(b_i, j) / C(n, f), where C(n, f) denotes the number of possible f-failure combinations.

4.2. The Core Idea of TFR-LRC

In Azure-LRC, each data block participates in the encoding of one local parity block, while the global parity block is not protected by any local repair group. Azure-LRC+1 treats all global parity blocks as a local repair group and generates a local parity block. In addition, in Uniform Cauchy LRC, although the global parity blocks are also divided into a local repair group, each block still participates in the check of only one local repair group in the structure.
As can be seen from Figure 1, the biggest difference between the TFR-LRC structure and the above LRC schemes is that TFR-LRC requires each data block and global parity block to participate in the check of two local parity blocks. In addition, TFR-LRC can be regarded as an improved scheme of Uniform Cauchy LRC after adding a little bit of storage cost. This is because TFR-LRC, based on Uniform Cauchy LRC, achieves higher fault tolerance by sacrificing a little bit of repair degree when multiple blocks fail.
Since some of the helper nodes may be temporarily unavailable during the repair process, we would like to have multiple different repair sets for each node. Similar work is mentioned in the reference [21], but they do not discuss the repair of multiple failures.
Therefore, the most important step in the construction process of TFR-LRC is the design of the coding structure, as it is directly related to the storage cost, repair degree, and fault tolerance performance.
In the design of TFR-LRC, we add some local parity blocks by sacrificing a little bit of storage cost and try to establish connections between each local repair group, and between data blocks, local parity blocks, and global parity blocks, thus obtaining higher fault tolerance by sacrificing a little bit of repair degree in the case of multiple failures.
In order to construct a TFR-LRC(n, k, l, g) containing k data blocks, l local parity blocks, and g global parity blocks, we designed Algorithm 1 to generate the TFR-LRC matrix under the specified parameters. Once the generation matrix is obtained, the construction of TFR-LRC is determined.
The coding matrix generation algorithm for TFR-LRC is given in Algorithm 1. First, let the number of data blocks be k, the number of local parity blocks be l, and the number of global parity blocks be g. Compute Lhalf = l/2 and divide 2 × (k + g) by l: the quotient is denoted Q and the remainder R (for TFR-LRC(18, 12, 4, 2), for example, Lhalf = 2, Q = 7 and R = 0). Lhalf, Q and R constrain the number of elements to be filled into each row of the matrix. Next, lines 7–17 of the algorithm ensure that the k data blocks and g global parity blocks each appear twice in the matrix. Lines 18–32 present the matrix generation process, where lines 19–27 handle the filling of data blocks and global parity blocks.
During matrix generation, the algorithm first determines whether all elements can be divided evenly among the rows and then generates an initial matrix for the corresponding case according to lines 19–20 and 23–24 of the algorithm. Finally, if the generated matrix satisfies the required condition, it is output as the final matrix; otherwise, the procedure falls back to line 18 and regenerates the matrix.
To ensure reproducibility, Algorithm 1 allows the user to specify a random seed. The seed initializes the random number generator used in the random placement of elements in the matrix. Using the same seed produces identical TFR-LRC matrices, ensuring consistent repair performance across experiments. When different seeds are used, the matrices vary slightly, introducing minor performance variability that reflects the diversity of repair paths, without affecting the overall fault tolerance or repair efficiency.
Taking TFR-LRC(18, 12, 4, 2) as an example, the initial generation matrix produced by the algorithm is shown below. It has 4 rows and 8 columns, representing 4 local repair groups, each of length 8 (the row label, which becomes the local parity block, plus 7 checked blocks). The lowercase letters a–l denote the 12 data blocks, the uppercase letters A–D denote the 4 local parity blocks, and the numbers 1–2 denote the 2 global parity blocks.
A  a  b  c  d  e  f  g
B  h  i  j  k  l  1  2
C  a  e  f  h  i  j  1
D  b  c  d  g  k  l  2
Then perform some replacements to obtain the final generation matrix for TFR-LRC(18, 12, 4, 2), as shown below. In the generation matrix, the number of rows represents the number of local parity blocks and the number of columns represents the number of blocks in a local repair group.
L1  D1  D2  D3  D4  D5  D6  D7
L2  D8  D9  D10 D11 D12 G1  G2
L3  D1  D5  D6  D8  D9  D10 G1
L4  D2  D3  D4  D7  D11 D12 G2
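As a quick sanity check on this generation matrix, the short script below (an illustrative sketch, not the authors' evaluation code) reads off the four local repair groups and computes the single-failure average repair degree of Definition 5, assuming a failed block is repaired from whichever of its groups is cheaper, i.e., by reading the group's other members plus that group's local parity.

```python
# Local repair groups read off the final TFR-LRC(18, 12, 4, 2) generation matrix above.
groups = {
    "L1": ["D1", "D2", "D3", "D4", "D5", "D6", "D7"],
    "L2": ["D8", "D9", "D10", "D11", "D12", "G1", "G2"],
    "L3": ["D1", "D5", "D6", "D8", "D9", "D10", "G1"],
    "L4": ["D2", "D3", "D4", "D7", "D11", "D12", "G2"],
}


def single_failure_degree(block):
    """Blocks read to repair one failed block: a local parity is rebuilt from its whole
    group; any other block uses its cheaper group (the other members plus that parity)."""
    if block in groups:                                    # a local parity block failed
        return len(groups[block])
    return min(len(members) for members in groups.values() if block in members)


all_blocks = list(groups) + sorted({b for members in groups.values() for b in members})
ard_1 = sum(single_failure_degree(b) for b in all_blocks) / len(all_blocks)
print(f"ARD(1) = {ard_1:.2f}")   # every repair in this toy code reads 7 blocks
```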

4.3. Matrix Generation Complexity Analysis

For TFR-LRC(n, k, l, g), the main operations include the initialization of data and global block lists (O(k + g)), matrix filling and randomization (O(l × (k + g))), and loop iterations until the matrix meets the constraints. Assuming the average number of iterations is c, the overall complexity is as follows:
O(c × l × (k + g))
Thus, in large-scale deployments, the matrix generation complexity scales linearly with the total number of blocks and local parity blocks, enabling efficient construction of TFR-LRC for wide-stripe systems. In practice, the time complexity of Algorithm 1 increases linearly with the number of data and global parity blocks. Even for large-scale stripes with hundreds of blocks, matrix generation typically requires only tens to hundreds of milliseconds, which prevents it from becoming a system bottleneck. The algorithm also employs random selection when filling certain matrix elements, which may introduce minor variability in the generated matrices. To ensure reproducibility, a fixed random seed can be specified, guaranteeing identical TFR-LRC matrices across different runs. Importantly, this randomness does not affect local repair performance or overall fault tolerance of the code.
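As a rough illustration of this linear scaling, the snippet below times the generate_tfr_lrc_matrix sketch shown after Algorithm 1 (an assumed helper introduced above, not the authors' implementation) for increasing k; the parameter choices loosely follow the wide-stripe settings discussed in Section 6.

```python
import time

# Assumes the generate_tfr_lrc_matrix sketch from Section 4.2 is defined in the same file.
for k in (48, 96, 192, 384):
    start = time.perf_counter()
    generate_tfr_lrc_matrix(k=k, g=5, l=8, seed=42)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"k = {k:4d}: generated in {elapsed_ms:.3f} ms")
```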

5. Fault Tolerance Analysis of TFR-LRC

In distributed storage systems, fault tolerance is usually measured in terms of the code distance d, which implies that the coding structure can tolerate any d − 1 simultaneous failures. For example, an MDS (n, k) code has a code distance of n − k + 1, which means that it can tolerate at most n − k blocks failing simultaneously. As for the LRC schemes mentioned in this paper, Azure-LRC, Azure-LRC+1 and Uniform Cauchy LRC have a code distance of g + 2, which means they can tolerate any g + 1 blocks failing simultaneously. In addition, Uniform Cauchy LRC can tolerate 90% of g + 2 failure patterns.
Since most of the computations in the encoding and decoding of erasure codes are performed at the byte level, the finite field size is generally set to GF(2^8). In addition, since TFR-LRC assigns each data block and global parity block to two local repair groups, it achieves higher fault tolerance than traditional LRCs. Next, we theoretically prove that TFR-LRC can tolerate any g + 2 block failures.
We note that the proof is based on the common assumption that parity-check equations generated by global and local parities are linearly independent in GF(2^8), which is consistent with constructions of RS codes and existing LRC variants. This assumption has been validated in prior work and holds in our TFR-LRC construction.
Theorem 1.
The code distance of TFR-LRC(n, k, l, g) is  g + 3 , i.e., it can tolerate any  g + 2  failures.
Proof of Theorem 1.
Given a TFR-LRC(n, k, l, g), suppose that f blocks fail simultaneously, where x is the number of failed data blocks, y the number of failed local parity blocks, and z the number of failed global parity blocks. Although f blocks have failed, in essence there are only x unknowns, because the parity blocks are generated by linear encoding of the data blocks over a finite field. We therefore need to find no fewer than x independent equations to solve for these x unknowns. The following relations hold among x, y, z and f:
f = x + y + z, 0 ≤ x ≤ f, 0 ≤ y ≤ f, 0 ≤ z ≤ g.
Firstly, if the number of failed data blocks does not exceed the number of remaining available global parity blocks, then all failed data blocks can be repaired using the healthy global parity blocks. From the perspective of linear equations, data blocks are unknowns, while local and global parity blocks can be viewed as equations in these unknowns. The x failed data blocks are x unknowns, and z failed global parity blocks leave g − z available global parity blocks, i.e., g − z available equations. Therefore, when x ≤ g − z, the x failed data blocks can be repaired from the healthy global parity blocks alone. Next, we use this idea to prove the fault tolerance of TFR-LRC for f = g + 2 and f = g + 3.
1. f = g + 2;
Since f = x + y + z = g + 2 and the failed blocks can be repaired directly by the surviving global parity blocks when x ≤ g − z, any failure pattern with y ≥ 2 can be repaired immediately. Therefore, we only need to consider the cases y = 0 and y = 1.
In the worst case, the remaining x + z failed blocks all lie in the same two local repair groups, and the failed local parity blocks are exactly the parities of these two groups. Since z global parity blocks have failed, g − z global parity equations remain available. When y = 0, the two local parity equations of the affected groups are also available, giving a total of g − z + 2 = x equations; when y = 1, only one local parity equation remains, giving g − z + 1 = x equations. In both cases the number of available equations equals the number of unknowns, so the x unknowns can be solved. For instance, with g = 2, z = 0 and y = 1, a failure of x = 3 data blocks leaves two global and one local parity equation, exactly the three equations required. Hence, TFR-LRC can successfully repair any f = g + 2 failures.
2. f = g + 3.
Since f = x + y + z = g + 3 and the failed blocks can be repaired directly by the surviving global parity blocks when x ≤ g − z, any failure pattern with y ≥ 3 can be repaired immediately. Therefore, we only need to consider the cases y = 0, y = 1 and y = 2.
Similarly, in the worst case the remaining x + z failed blocks all lie in the same two local repair groups, and the failed local parity blocks are exactly the parities of these two groups. Since z global parity blocks have failed, g − z global parity equations remain available. When y = 0, two local parity equations are also available, for a total of g − z + 2 = x − 1 equations; when y = 1, only one local parity equation remains, for a total of g − z + 1 = x − 1 equations; and when y = 2, no local parity equation of the affected groups remains, leaving g − z = x − 1 equations. In all three cases only x − 1 equations are available for x unknowns, so the unknowns cannot be solved, and TFR-LRC cannot tolerate arbitrary f = g + 3 failures.
In summary, the code distance of TFR-LRC(n, k, l, g) is g + 3 , which means the upper bound of its fault tolerance is g + 2 .
Furthermore, boundary cases such as g = 1 or l = 2 have been examined, and the results are consistent with the conclusion that TFR-LRC achieves a code distance of g + 3 . Although a full counterexample analysis is not included, the worst-case scenarios analyzed above already capture the maximum achievable tolerance. □

6. Experiments and Analysis

In Figure 2, we show the coding structures of Azure-LRC, Azure-LRC+1, and Uniform Cauchy LRC under the parameters (n = 28, k = 24, l = 2, g = 2), allowing a more intuitive comparison between TFR-LRC and these three structures.
Among these three structures, Azure-LRC evenly divides the data blocks among the local repair groups, while global parity blocks are not protected by any local repair group. Azure-LRC+1 treats all global parity blocks as an additional local repair group and generates a local parity block for it, so that global parity blocks can also be repaired locally. Uniform Cauchy LRC carries this idea further and evenly divides the data blocks and global parity blocks together among the local repair groups, so that the locality of each group is as consistent as possible.
In this paper, we compare TFR-LRC with these three LRC schemes in four respects: storage cost, fault tolerance, locality, and average repair degree (ARD) when any one block fails and when two blocks fail. Table 3 reports the comparison results and shows the trade-off between storage cost and repair degree made by TFR-LRC. In addition, Figure 3 shows the recovery ratio of the four LRC structures for g + 2 to g + 4 failures, and Figure 4 demonstrates the trade-off between fault tolerance and repair degree for TFR-LRC and other codes with the same parameters (n = 55, k = 48, l = 4, g = 3).

6.1. Storage Cost

Compared with the other three schemes, TFR-LRC achieves a lower repair cost by sacrificing a small amount of storage cost, which we regard as a trade-off between storage cost and repair degree. Therefore, among the four parameter settings in Table 3, TFR-LRC has a relatively higher storage cost. However, as the stripe becomes wider, this extra cost becomes smaller and smaller.
As storage systems scale to thousands or tens of thousands of nodes, the overhead introduced by TFR-LRC’s additional local parity blocks may seem more significant. However, our analysis shows that the relative storage overhead decreases as stripe width increases. For example, in wide stripes with hundreds of blocks, the storage cost increase remains below 4%, while still providing improved fault tolerance and lower repair degree. As shown in Table 3, when k = 96 , the storage cost of Azure-LRC(105, 96, 4, 5), Azure-LRC+1(105, 96, 4, 5), and Uniform Cauchy LRC(105, 96, 4, 5) is 1.094×, whereas TFR-LRC(109, 96, 8, 5) incurs only 1.135×, reflecting an increase of less than 4%. This indicates that TFR-LRC scales well in large-scale deployments, balancing storage efficiency with reliability.

6.2. Fault Tolerance

We compare the fault tolerance of TFR-LRC with the three LRC structures shown in Figure 2, using Monte Carlo experiments to measure the average repair degree under different numbers of failures: the higher the number of successful repairs, the more reliable the coding structure. We use Algorithm 1 to generate the TFR-LRC coding matrix, then randomly remove f distinct elements from the matrix and attempt to repair them (the f elements are treated as randomly failed blocks and may be any combination of the elements in the matrix). Combined with the fault tolerance analysis in Section 5, it is clear that TFR-LRC has the best tolerance to random failures. In addition, a comparison of the fault tolerance (recovery ratio) of the four LRC structures is shown in Figure 3, which also demonstrates that the TFR-LRC structure has the best reliability.
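To make this procedure concrete, the following is a minimal sketch of such a Monte Carlo durability test; it is not the code used to produce Figure 3. Generic real-valued coefficients stand in for GF(2^8) arithmetic, and a failure pattern is counted as recoverable when the surviving blocks, viewed as linear functions of the k data symbols, still have full rank k. The groups in the usage example are those of the TFR-LRC(18, 12, 4, 2) matrix from Section 4.2.

```python
import random

import numpy as np


def recovery_ratio(groups, k, g, f, trials=20000, seed=0):
    """Estimate the fraction of random f-failure patterns that are decodable.

    groups[i] lists the data/global blocks checked by local parity L(i+1);
    data blocks are named 'd1'..'dk' and global parities 'g1'..'gg'.
    """
    rng = random.Random(seed)
    npr = np.random.default_rng(seed)

    # Express every stored block as a linear function of the k data symbols,
    # using random real coefficients as a stand-in for a large finite field.
    coeff = {f"d{i}": np.eye(k)[i - 1] for i in range(1, k + 1)}
    for j in range(1, g + 1):                                  # global parity blocks
        coeff[f"g{j}"] = npr.standard_normal(k)
    for idx, grp in enumerate(groups, start=1):                # local parity blocks
        coeff[f"L{idx}"] = sum(npr.standard_normal() * coeff[b] for b in grp)

    blocks = list(coeff)
    ok = 0
    for _ in range(trials):
        failed = set(rng.sample(blocks, f))
        surviving = np.array([coeff[b] for b in blocks if b not in failed])
        # Decodable iff the surviving blocks still span all k data dimensions.
        if np.linalg.matrix_rank(surviving) == k:
            ok += 1
    return ok / trials


if __name__ == "__main__":
    # Local repair groups of the TFR-LRC(18, 12, 4, 2) matrix in Section 4.2.
    groups = [
        ["d1", "d2", "d3", "d4", "d5", "d6", "d7"],
        ["d8", "d9", "d10", "d11", "d12", "g1", "g2"],
        ["d1", "d5", "d6", "d8", "d9", "d10", "g1"],
        ["d2", "d3", "d4", "d7", "d11", "d12", "g2"],
    ]
    for failures in (4, 5):   # f = g + 2 and f = g + 3
        ratio = recovery_ratio(groups, k=12, g=2, f=failures)
        print(f"f = {failures}: estimated recovery ratio = {ratio:.3f}")
```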

6.3. Locality

In Definition 3, locality is defined as the maximum number of blocks that need to be read to repair a single block. As can be seen from Table 3, Azure-LRC has the worst locality, because when a global parity block in its structure fails, all data blocks must be read to repair it. Azure-LRC+1 treats all global parity blocks as a local repair group as well, so its locality is improved. In both the Uniform Cauchy LRC and TFR-LRC structures, the global parity blocks are divided into local repair groups along with the data blocks, so their locality is optimal.

6.4. ARD

In Definition 5, ARD is defined as the average number of helper nodes that must be contacted to repair a failed block in the system, i.e., the average repair degree. Table 3 lists the ARD when any one block fails and when two blocks fail for each of the four structures under four parameter settings. It shows that TFR-LRC significantly reduces the repair degree in the case of multiple failures without sacrificing the repair degree for a single failure. This is because, in the TFR-LRC construction, each data block and global parity block belongs to two local repair groups, i.e., the set of candidate repair subsets is enlarged, which removes the need for near-MDS-level repair in the case of multiple failures; TFR-LRC is therefore better suited to large-scale storage systems.
Furthermore, Figure 3 compares the recovery ratio of the four LRC structures for g + 2 to g + 4 failures (the parameter settings are consistent with those in Table 3). According to the fault tolerance proof in Section 5, TFR-LRC can repair any g + 2 failures when f = g + 2. TFR-LRC also shows excellent repair performance for g + 3 and g + 4 failures, with only a very small proportion of unrepairable cases at each failure count; it therefore has the best reliability and robustness among the four structures.
In addition, Figure 4 demonstrates the trade-off between fault tolerance and repair degree for TFR-LRC and other codes with the same parameters. Although the RS code is optimal in terms of fault tolerance, it has a very high repair cost for a single failure. Conversely, although LRCs greatly reduce this repair cost, they sacrifice considerable data reliability. TFR-LRC balances these two code types and finds a new trade-off between repair degree and fault tolerance: compared with the RS code, TFR-LRC reduces the repair cost significantly, while compared with LRCs, TFR-LRC improves the fault tolerance.
We note that all the evaluations in this paper are based on Monte Carlo simulations rather than real-system implementations. This simulation-based methodology is consistent with prior work on LRCs (e.g., Azure-LRC, Uniform Cauchy LRC) and allows us to systematically explore different parameter settings and multi-failure cases that are difficult to reproduce in practice. Nevertheless, we acknowledge that real-system traces and variance reporting would further strengthen the generalizability of our conclusions. As future work, we plan to integrate TFR-LRC into a prototype storage system (e.g., HDFS or Ceph) and validate its repair performance with real-world workloads.

7. Maintenance-Robust Deployment

7.1. Maintenance Area Deployment Problem Analysis

In large-scale distributed storage systems deploying erasure coding, it is common to place blocks from the same stripe on different nodes across multiple racks. This strategy ensures data reliability while minimizing repair overhead during rack maintenance, as such overhead typically arises from remote reads involving cross-rack or even cross-cluster traffic. Since cross-rack repair bandwidth is often more constrained than intra-rack bandwidth, this section aims to explore solutions to reduce such repair overhead through an analysis of deployment strategies in maintenance zones.
In erasure-coded systems, a maintenance zone is typically defined as the smallest unit that can be independently updated. In other words, during a maintenance operation on a specific zone, all blocks within that zone become temporarily unavailable. If a user attempts to access data during this time, the system must reconstruct the unavailable data using blocks from other healthy zones. The size of the maintenance zone directly impacts the reliability of the system: smaller zones contain fewer blocks, increasing the likelihood of successful recovery from outside the zone during maintenance. However, too many maintenance zones can complicate the overall maintenance process. In practical deployments, the number of maintenance zones within a single cluster is usually limited to fewer than 20.
In this section, we assume a homogeneous cluster topology with uniform cross-rack bandwidth, consistent with prior studies on Azure-LRC and Uniform Cauchy LRC. This assumption allows us to isolate the impact of code constructions on repair cost without additional interference from topology heterogeneity.
To evaluate the performance of TFR-LRC compared to existing schemes under maintenance zone deployments, we adopt the Average Maintenance Cost (AMC) metric as defined in reference [10]. AMC is computed by summing the repair costs for each block in all maintenance zones and dividing this sum by the total number of blocks. In a well-designed placement strategy, blocks within a maintenance zone should ideally come from distinct local repair groups, so that during maintenance, local repairs can be performed as much as possible.
First, using the previously introduced TFR-LRC matrix generation algorithm, we construct a random TFR-LRC(59, 48, 8, 3) matrix. The first four rows match the structure of Uniform Cauchy LRC(55, 48, 4, 3), and the remaining four rows are appended using a Vandermonde-based encoding. Figure 5 shows the deployment of this TFR-LRC across 14 and 15 maintenance zones, respectively. The main difference lies in whether the local parity blocks are placed in the same maintenance zone.
c1  d1  d2  d3  d4  d5  d6  d7  d8  d9  d10 d11 d12 0
c2  d13 d14 d15 d16 d17 d18 d19 d20 d21 d22 d23 d24 d25
c3  d26 d27 d28 d29 d30 d31 d32 d33 d34 d35 d36 d37 d38
c4  d39 d40 d41 d42 d43 d44 d45 d46 d47 d48 rs1 rs2 rs3
c5  d3  d7  d10 d15 d23 d24 d28 d29 d35 d45 rs1 rs3 0
c6  d2  d6  d12 d13 d17 d21 d34 d36 d39 d40 d41 d46 d48
c7  d8  d14 d19 d20 d22 d25 d26 d27 d31 d33 d37 d42 d47
c8  d1  d4  d5  d9  d11 d16 d18 d30 d32 d38 d43 d44 rs2
In both deployment scenarios illustrated in Figure 5, all blocks in each maintenance zone come from different local repair groups. Since TFR-LRC preserves the structure of Uniform Cauchy LRC, all maintenance events are recoverable through local repairs. Moreover, by adding new local parity blocks randomly, TFR-LRC introduces more diverse repair paths with potentially lower repair degrees. As a result, it achieves lower AMC compared to Uniform Cauchy LRC, especially in cases where multiple blocks are recoverable from newly added local groups.
Taking Figure 5a as an example, except for one zone that requires the repair of three blocks, every other maintenance zone needs to repair four blocks. The repair process for a maintenance zone includes the following steps:
(1) Observe block distribution: Analyze the layout of blocks in the maintenance zone. d1 appears in c1 and c8, d13 appears in c2 and c6, d26 appears in c3 and c7, and d39 appears in c4 and c6.
(2) Select optimal repair strategies: For blocks covered by surviving local parities, apply local repair first; for blocks whose local groups do not conflict with other repairs, choose the corresponding local parity for decoding. Since both d13 and d39 are checked by the local parity block c6, c2 is used to repair d13 first, and then c6 is used to repair d39. Since the local parities protecting d1 (c1, c8) and d26 (c3, c7) do not overlap with those repairs, d1 can be repaired through c1 or c8, and d26 through c7.
(3) Estimate repair cost: Assume direct block transfers without intra-rack coding, i.e., each helper block is transmitted directly to the destination rack. Based on the local repair strategies, repairing some blocks requires transmitting 13 blocks, while others require 12; the local repair scheme for each block in the zone is shown in Figure 6. When using c8 to repair d1, 13 blocks need to be transmitted; similarly, when using c7 to repair d26, 13 blocks need to be transmitted; and when using c2 and c6 to repair d13 and d39, 23 blocks need to be transmitted.
(4) Remove duplicate transfers: Blocks such as d14, d16, d18, d19, d20, d22, and d25 would otherwise be transmitted more than once. Since repeated reads are unnecessary, the actual number of transmissions is reduced to 42, so AMC(Z1) = 42/4 = 10.5 (the short sketch after this list reproduces this accounting).
(5) Compute total AMC: Aggregate the repair costs across all maintenance zones to obtain the overall AMC for this deployment scheme. Following steps (3) and (4) for every zone, the final result is AMC(Z = 15) = 10.36.
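To make the de-duplication in steps (3) and (4) concrete, the short sketch below reproduces the accounting for this maintenance zone. The helper sets are read off the generation matrix above together with the repair choices of step (3); it is an illustrative calculation, not the authors' evaluation code.

```python
# Helper blocks read to repair the four unavailable blocks of zone Z1 in Figure 5a,
# following the repair choices of step (3); d13 is rebuilt locally before d39.
reads = {
    "d1 via c8":  {"c8", "d4", "d5", "d9", "d11", "d16", "d18",
                   "d30", "d32", "d38", "d43", "d44", "rs2"},
    "d26 via c7": {"c7", "d8", "d14", "d19", "d20", "d22", "d25",
                   "d27", "d31", "d33", "d37", "d42", "d47"},
    "d13 via c2": {"c2"} | {f"d{i}" for i in range(14, 26)},
    "d39 via c6": {"c6", "d2", "d6", "d12", "d17", "d21",
                   "d34", "d36", "d40", "d41", "d46", "d48"},
}

unique_transfers = set().union(*reads.values())           # duplicated helpers counted once
print(len(unique_transfers), len(unique_transfers) / 4)   # 42 blocks, AMC(Z1) = 10.5
```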
Comparing Figure 5a and Figure 5b, the distinction lies in the placement of the local parity blocks. Grouping local parities in the same maintenance zone increases the likelihood of global repairs, thereby raising the AMC.
In addition, provided that all failures are repairable, the number of maintenance zones affects the repair cost. For Azure-LRC, Azure-LRC+1 and Uniform Cauchy LRC under the same coding parameters, the fewer the maintenance zones, the more blocks each zone contains; when such a zone is taken down for maintenance, global repair using the global parity blocks becomes more likely and the repair cost rises. Conversely, the more maintenance zones there are, the fewer blocks each zone contains, repairs are more likely to be completed locally, and the repair cost decreases accordingly.
TFR-LRC, however, is less constrained. As the number of maintenance zones decreases, it can no longer be guaranteed that all blocks in each zone come from different local repair groups, so a careful layout is needed to reduce the maintenance repair overhead without sacrificing fault tolerance. When planning the deployment, to ensure that local repair rather than global repair is used as much as possible, the layout of the original local parity blocks should be kept unchanged (or only the newly added local parity blocks relocated), and the number of blocks per maintenance zone should be kept as balanced as possible. Because TFR-LRC adds extra local parity blocks, it can still recover the blocks of a maintenance zone through local repair even when there are few zones.
Figure 7 shows the deployment scheme for Z = 8 (the corresponding TFR-LRC generation matrix is the same as in Figure 5). As shown in Figure 7, when maintenance zone Z3 becomes unavailable, consider the distribution of its seven blocks d6, d18, d31, d44, d14, d15, and d17. One repair scheme is as follows: first use c1 to repair d6, c3 to repair d31, and c4 to repair d44, and then use c8, c7, c5 and c6 to repair d18, d14, d15, and d17, respectively. In total, 49 blocks need to be transmitted to complete the repair, from which AMC(Z = 8) = 6.92.
To make the comparison more intuitive, we measure the average maintenance cost of TFR-LRC, Azure-LRC, Azure-LRC+1, and Uniform Cauchy LRC for Z = 8 to 17. As shown in Figure 8, TFR-LRC has the lowest average maintenance cost over this range. We therefore analyze the ranges Z = 14–17 and Z = 8–13 separately.
For Z = 14–17, all four LRC structures can ensure that the blocks in each maintenance zone come from different local repair groups. Under this condition, any zone can be repaired entirely through local repair, so the AMC remains essentially unchanged. Based on the previous analysis, because TFR-LRC adds extra local parity blocks, better repair schemes exist and a lower average maintenance cost is obtained. For Z = 8–13, as the number of maintenance zones decreases, Azure-LRC, Azure-LRC+1 and Uniform Cauchy LRC increasingly need to resort to global repair, so their maintenance cost gradually rises. Since TFR-LRC requires each data block and global parity block to participate in the checks of two local parity blocks, all blocks in a maintenance zone can still be restored through local repair even with fewer zones. Moreover, when repairing a maintenance zone, the repair schemes of different blocks may read some of the same available blocks; these blocks do not need to be transmitted multiple times, which further reduces the average maintenance cost of each zone.
It is important to note that real-world deployments often exhibit heterogeneous rack connectivity and bandwidth constraints across racks. In such environments, the advantages of TFR-LRC can be further amplified because its additional local parities provide more repair diversity, enabling repairs with fewer cross-rack transmissions. While our current evaluation assumes uniform bandwidth, extending the analysis to heterogeneous topologies represents a promising direction for future work.

7.2. Analysis and Evaluation of Degraded Read Experiments

In distributed or cloud-based storage systems, erasure coding usually involves two major operations: degraded reads of temporarily unavailable data and full recovery of permanently lost nodes. Although erasure codes tolerate multiple failures, single-node failure repair remains a critical research problem. With the growing adoption of wide-stripe erasure codes, such failures are becoming increasingly common [18].
As data scale increases, temporary unavailability and frequent access requests degrade read performance. A degraded read refers to accessing a block that is temporarily unavailable due to issues like power failure, network interruptions, or maintenance. The Degraded Read Time (DRT) is defined as the time from initiating a degraded read request to successfully reconstructing the data. DRT is usually slower than reading a healthy block. Studies show that over 90% of temporary failures last less than 15 min [22], and Google accordingly delays repair initiation by 15 min [23].
In our experiments, TFR-LRC(30, 24, 4, 2) and Uniform Cauchy LRC(28, 24, 2, 2) were deployed on a real system. A Hadoop 3.3.4-based HDFS cluster with 33 virtual machines (VMware Workstation, Palo Alto, USA) and rack topology was configured. One node issued degraded read requests. The block size was 64 MiB, the packet size was 1 MiB, and the cross-rack bandwidth was set to 1 Gbps [3]. The CPU was an Intel Core i7-8700K (Intel, Santa Clara, USA) at 3.70 GHz. The degraded read procedure involved: (1) randomly deleting a block; (2) issuing a degraded read request; (3) reading from helper nodes and decoding. The evaluation assumes a symmetric rack topology where each rack is interconnected with uniform 1 Gbps cross-rack bandwidth, following the standard setup in [3].
As shown in Figure 9, two scenarios were tested: standard single-block failure and rack-wide maintenance failure. In the first case, only the failed block was unavailable; in the second, all blocks within the failed rack were unavailable. The Average Degraded Read Time (ADRT) was measured across all data blocks.
Figure 9 presents the Average Degraded Read Time of TFR-LRC and Uniform Cauchy LRC. The results demonstrate that TFR-LRC achieves lower degraded read latency under both typical single-block failure and rack maintenance scenarios. Specifically, compared to Uniform Cauchy LRC, TFR-LRC reduces degraded read latency by approximately 28.8% in the single-block failure case, and by approximately 20.8% under the rack maintenance mode.
We acknowledge that our current experiments are limited to homogeneous settings. In more realistic heterogeneous networks with non-uniform cross-rack bandwidth, we anticipate that TFR-LRC may preserve, or potentially enhance, its relative advantages owing to its reduced reliance on global repairs. A comprehensive evaluation under such heterogeneous topologies will be an important aspect of our future work.

8. Conclusions

This paper proposes TFR-LRC, a new family of locally repairable codes for large-scale distributed storage systems in wide-stripe scenarios, which seeks a trade-off between storage cost, repair degree and fault tolerance and allows the parameters to be adjusted flexibly according to actual demand. If a lower repair cost is pursued, TFR-LRC strikes a trade-off between storage cost and repair degree while improving fault tolerance; if higher coding reliability is pursued, TFR-LRC sacrifices a small amount of repair cost under multiple failures for higher fault tolerance. TFR-LRC is characterized by adding a repair subset to each data block and global parity block, which effectively alleviates the repair bottleneck of the system and greatly improves repair performance and fault tolerance. In this paper, we present the construction details of TFR-LRC, prove theoretically that it can tolerate any g + 2 failures, and verify its feasibility through simulation experiments. We further demonstrate that TFR-LRC achieves higher repair efficiency in rack maintenance scenarios. In future work, we will further explore the application of LRCs in wide-stripe scenarios and pursue higher fault tolerance, providing new theoretical and technical support for distributed storage.

Author Contributions

Conceptualization, Y.W.; methodology, Y.W.; software, Y.C.; validation, Y.C.; formal analysis, J.S.; writing—original draft, Y.C.; writing—review & editing, J.S.; supervision, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Reinsel, D.; Gantz, J.; Rydning, J. Data Age 2025: The Digitization of the World from Edge to Core. IDC White Pap. 2018, 1, 1–29. [Google Scholar]
  2. Balaji, S.B.; Krishnan, M.N.; Vajha, M.; Ramkumar, V.; Sasidharan, B.; Kumar, P.V. Erasure coding for distributed storage: An overview. Sci. China Inf. Sci. 2018, 61, 100301. [Google Scholar] [CrossRef]
  3. Sathiamoorthy, M.; Asteris, M.; Papailiopoulos, D.; Dimakis, A.G.; Vadali, R.; Chen, S.; Borthakur, D. Xoring elephants: Novel erasure codes for big data. arXiv 2013, arXiv:1301.3791. [Google Scholar] [CrossRef]
  4. Calder, B.; Wang, J.; Ogus, A.; Nilakantan, N.; Skjolsvold, A.; McKelvie, S.; Xu, Y.; Srivastav, S.; Wu, J.; Simitci, H.; et al. Windows azure storage: A highly available cloud storage service with strong consistency. In Proceedings of the 23rd ACM Symposium on Operating Systems Principles (SOSP), Cascais, Portugal, 23–26 October 2011; pp. 143–157. [Google Scholar]
  5. Shvachko, K.; Kuang, H.; Radia, S.; Chansler, R. The hadoop distributed file system. In Proceedings of the IEEE 26th Symposium on Mass Storage Systems and Technologies (MSST), Incline Village, NV, USA, 3–7 May 2010; pp. 1–10. [Google Scholar]
  6. Weil, S.; Brandt, S.A.; Miller, E.L.; Long, D.D.; Maltzahn, C. Ceph: A scalable, high-performance distributed file system. In Proceedings of the 7th Conference on Operating Systems Design and Implementation (OSDI’06), Seattle, WA, USA, 6–8 November 2006; pp. 307–320. [Google Scholar]
  7. Ghemawat, S.; Gobioff, H.; Leung, S.T. The Google file system. In Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP), Bolton Landing, NY, USA, 19–22 October 2003; pp. 29–43. [Google Scholar]
  8. Weatherspoon, H.; Kubiatowicz, J.D. Erasure coding vs. replication: A quantitative comparison. In Peer-to-Peer Systems; International Workshop on Peer-to-Peer Systems; Springer: Berlin/Heidelberg, Germany, 2002; pp. 328–337. [Google Scholar]
  9. Hu, Y.; Cheng, L.; Yao, Q.; Lee, P.P.; Wang, W.; Chen, W. Exploiting combined locality for Wide-Stripe erasure coding in distributed storage. In Proceedings of the 19th USENIX Conference on File and Storage Technologies (FAST 21), Olivia, MN, USA, 14 December 2020; pp. 233–248. [Google Scholar]
  10. Kadekodi, S.; Silas, S.; Clausen, D.; Merchant, A. Practical design considerations for wide locally recoverable codes (LRCs). ACM Trans. Storage 2023, 19, 1–26. [Google Scholar] [CrossRef]
  11. Wu, S.; Lin, G.; Lee, P.P.; Li, C.; Xu, Y. Optimal Wide Stripe Generation in Locally Repairable Codes via Staged Stripe Merging. In Proceedings of the 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS), Jersey City, NJ, USA, 23–26 July 2024; IEEE: New York, NY, USA, 2024; pp. 450–460. [Google Scholar]
  12. Reed, I.S.; Solomon, G. Polynomial codes over certain finite fields. J. Soc. Ind. Appl. Math. 1960, 8, 300–304. [Google Scholar] [CrossRef]
  13. VastData. Available online: https://vastdata.com/providing-resilience-efficiently-part-ii/ (accessed on 15 January 2021).
  14. Huang, C.; Simitci, H.; Xu, Y.; Ogus, A.; Calder, B.; Gopalan, P.; Li, J.; Yekhanin, S. Erasure coding in windows azure storage. In Proceedings of the 2012 USENIX Annual Technical Conference (USENIX ATC 12), Boston, MA, USA, 13–15 June 2012; pp. 15–26. [Google Scholar]
  15. Cheng, K.; Wu, S.; Li, X.; Lee, P.P. Harmonizing Repair and Maintenance in LRC-Coded Storage. In Proceedings of the 2024 43rd International Symposium on Reliable Distributed Systems (SRDS), Charlotte, NC, USA, 30 September–3 October 2024; IEEE: New York, NY, USA, 2024; pp. 1–11. [Google Scholar]
  16. Shen, Z.; Cai, Y.; Cheng, K.; Lee, P.P.; Li, X.; Hu, Y.; Shu, J. A survey of the past, present, and future of erasure coding for storage systems. ACM Trans. Storage 2025, 21, 1–39. [Google Scholar] [CrossRef]
  17. Wang, Y.; Xu, F.; Pei, X. Research on erasure code-based fault-tolerant technology for distributed storage. Chin. J. Comput. 2017, 40, 236–255. [Google Scholar]
  18. Muralidhar, S.; Lloyd, W.; Roy, S.; Hill, C.; Lin, E.; Liu, W.; Pan, S.; Shankar, S.; Sivakumar, V.; Tang, L.; et al. f4: Facebook’s warm BLOB storage system. In Proceedings of the 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), Broomfield, CO, USA, 6–8 October 2014; pp. 383–398. [Google Scholar]
  19. Kolosov, O.; Yadgar, G.; Liram, M.; Tamo, I.; Barg, A. On fault tolerance, locality, and optimality in locally repairable codes. ACM Trans. Storage (TOS) 2020, 16, 1–32. [Google Scholar] [CrossRef]
  20. Rashmi, K.V.; Shah, N.B.; Ramchandran, K. A piggybacking design framework for read-and download-efficient distributed storage codes. IEEE Trans. Inf. Theory 2017, 63, 5802–5820. [Google Scholar] [CrossRef]
  21. Pamies-Juarez, L.; Hollmann, H.D.L.; Oggier, F. Locally repairable codes with multiple repair alternatives. In Proceedings of the 2013 IEEE International Symposium on Information Theory, Istanbul, Turkey, 7–12 July 2013; IEEE: New York, NY, USA, 2013; pp. 892–896. [Google Scholar]
  22. Khan, O.; Burns, R.C.; Plank, J.S.; Plank, J.; Pierce, W.; Huang, C. Rethinking erasure codes for cloud file systems: Minimizing I/O for recovery and degraded reads. In Proceedings of the FAST’12: 10th USENIX Conference on File and Storage Technologies, San Jose, CA, USA, 14–17 February 2012. [Google Scholar]
  23. Ford, D.; Labelle, F.; Popovici, F.I.; Stokely, M.; Truong, V.A.; Barroso, L.; Grimes, C.; Quinlan, S. Availability in globally distributed storage systems. In Proceedings of the 9th USENIX Symposium on Operating Systems Design and Implementation (OSDI 10), Vancouver, BC, Canada, 4–6 October 2010. [Google Scholar]
Figure 1. The coding structure of TFR-LRC(30, 24, 4, 2).
Figure 2. Three different LRC structures used in our experiment.
Figure 3. The durability comparison of the four different LRC structures.
Figure 4. The trade-off between fault tolerance and repair degree for TFR-LRC, RS code and LRCs with the same parameters (n = 55, k = 48, l = 4, g = 3).
Figure 5. Deployment of TFR-LRC(59, 48, 8, 3) with Z = 15 and Z = 14 maintenance zones.
Figure 6. The specific repair process for each block when repairing maintenance zone Z1 (Z = 15).
Figure 7. TFR-LRC(59, 48, 8, 3) deployment with Z = 8 zones.
Figure 8. Comparison of AMC between TFR-LRC and three other LRC constructions.
Figure 9. Comparison of ADRT between TFR-LRC and Uniform Cauchy LRC.
Table 1. Comparison of average repair degree of TFR-LRC and Uniform Cauchy LRC in cases of one failure and two failures.

Code | 1 Failure | 2 Failures
TFR-LRC(30, 24, 4, 2) | 13 | 18.75
Uniform Cauchy LRC(28, 24, 2, 2) | 13 | 27.92
Table 2. Comparison of average repair degree of TFR-LRC and Uniform Cauchy LRC with the same parameters.

Code | 2 Failures | 3 Failures | 4 Failures
TFR-LRC(30, 24, 4, 2) | 18.75 | 24 | 24
Uniform Cauchy LRC(30, 24, 4, 2) | 16.86 | 22.15 | /
Table 3. Comparison results of four different LRC structures under four parameter settings.

Code | Storage Cost | Fault Tolerance | Locality | ARD (f = 1) | ARD (f = 2)
Azure-LRC(28, 24, 2, 2) | 1.167× | g + 1 = 3 | 24 | 12.85 | 30.66
Azure-LRC+1(28, 24, 2, 2) | 1.167× | g + 1 = 3 | 24 | 21.64 | 43.46
Uniform Cauchy LRC(28, 24, 2, 2) | 1.167× | g + 1 = 3 | 13 | 13 | 27.92
TFR-LRC(30, 24, 4, 2) | 1.25× | g + 2 = 4 | 13 | 13 | 18.75
Azure-LRC(55, 48, 4, 3) | 1.146× | g + 1 = 4 | 48 | 13.96 | 35.49
Azure-LRC+1(55, 48, 4, 3) | 1.146× | g + 1 = 4 | 16 | 15.05 | 39.22
Uniform Cauchy LRC(55, 48, 4, 3) | 1.146× | g + 1 = 4 | 13 | 12.76 | 33.85
TFR-LRC(59, 48, 8, 3) | 1.229× | g + 2 = 5 | 13 | 12.10 | 21.75
Azure-LRC(80, 72, 4, 4) | 1.111× | g + 1 = 5 | 72 | 20.7 | 52.8
Azure-LRC+1(80, 72, 4, 4) | 1.111× | g + 1 = 5 | 24 | 22.75 | 59.38
Uniform Cauchy LRC(80, 72, 4, 4) | 1.111× | g + 1 = 5 | 19 | 19 | 49.22
TFR-LRC(84, 72, 8, 4) | 1.167× | g + 2 = 6 | 19 | 19 | 32.25
Azure-LRC(105, 96, 4, 5) | 1.094× | g + 1 = 6 | 96 | 27.42 | 70.68
Azure-LRC+1(105, 96, 4, 5) | 1.094× | g + 1 = 6 | 32 | 30.45 | 79.73
Uniform Cauchy LRC(105, 96, 4, 5) | 1.094× | g + 1 = 6 | 26 | 25.26 | 67.69
TFR-LRC(109, 96, 8, 5) | 1.135× | g + 2 = 7 | 26 | 25.26 | 42.75
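
As a quick sanity check on Table 3, the storage-cost column is simply n/k, and the repair-degree improvement of TFR-LRC over Uniform Cauchy LRC can be recomputed directly from the ARD values listed above. The short sketch below is illustrative arithmetic only; the rows just transcribe a subset of Table 3.

```python
# Recompute storage cost (n / k) and the relative two-failure ARD reduction of
# each TFR-LRC over the Uniform Cauchy LRC it is compared against in Table 3.
# Each row: (TFR-LRC name, n, k, TFR-LRC ARD(f=2), Uniform Cauchy ARD(f=2)).
rows = [
    ("TFR-LRC(30, 24, 4, 2)",   30, 24, 18.75, 27.92),
    ("TFR-LRC(59, 48, 8, 3)",   59, 48, 21.75, 33.85),
    ("TFR-LRC(84, 72, 8, 4)",   84, 72, 32.25, 49.22),
    ("TFR-LRC(109, 96, 8, 5)", 109, 96, 42.75, 67.69),
]

for name, n, k, tfr_ard2, uc_ard2 in rows:
    storage_cost = n / k                          # e.g. 30 / 24 = 1.25x
    reduction = (uc_ard2 - tfr_ard2) / uc_ard2    # relative ARD(f=2) reduction
    print(f"{name}: storage cost {storage_cost:.3f}x, "
          f"ARD(f=2) reduction vs. Uniform Cauchy LRC {reduction:.1%}")
```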
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
