Article

Comparative Assessment of the Reliability of Non-Recoverable Subsystems of Mining Electronic Equipment Using Various Computational Methods

by
Nikita V. Martyushev
1,*,
Boris V. Malozyomov
2,
Anton Y. Demin
1,
Alexander V. Pogrebnoy
1,
Georgy E. Kurdyumov
3,
Viktor V. Kondratiev
3,4 and
Antonina I. Karlina
3,5
1
Department of Information Technology, Tomsk Polytechnic University, 634050 Tomsk, Russia
2
Department of Electrotechnical Complexes, Novosibirsk State Technical University, 630073 Novosibirsk, Russia
3
Advanced Engineering School, Cherepovets State University, 162600 Cherepovets, Russia
4
Laboratory of Geochemistry of Ore Formation and Geochemical Methods of Prospecting, A. P. Vinogradov Institute of Geochemistry of the Siberian Branch of the Russian Academy of Sciences, 664033 Irkutsk, Russia
5
Stroytest Research and Testing Centre, Moscow State University of Civil Engineering, 129337 Moscow, Russia
*
Author to whom correspondence should be addressed.
Mathematics 2026, 14(4), 723; https://doi.org/10.3390/math14040723
Submission received: 19 December 2025 / Revised: 6 February 2026 / Accepted: 11 February 2026 / Published: 19 February 2026

Abstract

The assessment of reliability in non-repairable subsystems of mining electronic equipment represents a computationally challenging problem, particularly for complex and highly connected structures. This study presents a systematic comparative analysis of several deterministic approaches for reliability estimation, focusing on their computational efficiency, accuracy, and applicability. The investigated methods include classical boundary techniques (minimal paths and cuts), analytical decomposition based on the Bayes theorem, the logic–probabilistic method (LPM) employing triangle–star transformations, and the algorithmic Structure Convolution Method (SCM), which is based on matrix reduction of the system’s connectivity graph. The reliability problem is formally represented using graph theory, where each element is modeled as a binary variable with independent failures, which is a standard and practically justified assumption for power electronic subsystems operating without common-cause coupling. Numerical experiments were carried out on canonical benchmark topologies—bridge, tree, grid, and random connected graphs—representing different levels of structural complexity. The results demonstrate that the SCM achieves exact reliability values with up to six orders of magnitude acceleration compared to the LPM for systems containing more than 20 elements, while maintaining polynomial computational complexity. Qualitatively, the compared approaches differ in the nature of the output and practical applicability: boundary methods provide fast interval estimates suitable for preliminary screening, whereas decomposition may exhibit a systematic bias for highly connected (non-series–parallel) topologies. In contrast, the SCM consistently preserves exactness while remaining computationally tractable for medium and large sparse-to-moderately dense graphs, making it preferable for repeated recalculations in design and optimization workflows. 
The methods were implemented in Python 3.7 using NumPy and NetworkX, ensuring transparency and reproducibility. The findings confirm that the SCM is an efficient, scalable, and mathematically rigorous tool for reliability assessment and structural optimization of large-scale non-repairable systems. The presented methodology provides practical guidelines for selecting appropriate reliability evaluation techniques based on system complexity and computational resource constraints.

1. Introduction

The problems of quantitative assessment of the reliability of complex non-recoverable systems naturally lie at the intersection of applied probability theory, combinatorics, and graph algorithms.
Despite significant progress in the development of analytical and logical–probabilistic methods for network reliability analysis, existing approaches reported in the current literature exhibit several important limitations. Exact methods based on state-space enumeration or inclusion–exclusion formulations often suffer from combinatorial explosion and become computationally infeasible for systems with complex non-series–parallel topology. Approximate techniques, including decomposition and boundary-based methods, improve scalability but may introduce systematic bias or provide only interval estimates, which limits their applicability in safety-critical engineering contexts. In addition, many published studies focus on specific network classes or rely on problem-dependent heuristics, making it difficult to assess scalability, reproducibility, and numerical stability in a unified and transparent manner. These limitations motivate the need for exact yet computationally efficient reliability evaluation methods with predictable performance characteristics.
Even for two-terminal networks, the computation of the probability of failure-free operation is equivalent to evaluating the coefficients of the reliability polynomial and, in the general case, is classified as a #P-hard problem [1,2]. This implies that an increase in structural complexity—both in the number of elements and in the density of interconnections—inevitably leads to a combinatorial explosion in the number of system states. As a result, special requirements are imposed on the mathematical methods employed, including rigorous problem formulation, controlled approximation error, reproducibility, and reasonable (provable or at least empirically justified) estimates of computational complexity. For engineering applications such as mining electronics, this mathematical context is not merely of academic interest: both economic performance and operational safety depend critically on the accuracy and computational efficiency of reliability calculations. Therefore, methods that combine theoretical rigor with algorithmic efficiency and scalability are in particularly high demand in this domain.
Several directions have formed in the professional community, each balancing accuracy, computational complexity, and versatility in its own way. The most natural starting point is methods based on minimal paths and minimal cuts. Their mathematical idea is to move from the general structural function to a family of minimal configurations of success and failure, and to apply inclusion–exclusion formulas to obtain upper and lower estimates of the probability of failure-free operation [3,4]. In the literature, this approach is consistently used for rapid "localizing" analysis of networks and electronic circuits. It is attractive because of its low labor intensity and the linear-to-polynomial complexity of constructing the set of minimal objects in regular structures [5]. However, the result is an interval, and the width of the interval depends significantly on the overlaps between the paths/cuts [6]. For non-monotonic and strongly connected topologies (e.g., the classical bridge scheme), the bounds can diverge by values of the order of 10⁻³–10⁻⁴ at high levels of element reliability, which is acceptable only at the stages of preliminary synthesis [7]. From a mathematical point of view, this is a consequence of the fact that truncating the inclusion–exclusion series at large intersections gives systematically optimistic or pessimistic estimates; the method does not provide an exact value in principle [8].
Another well-studied approach is decomposition by a "special element," i.e., the application of the law of total probability followed by a recursive reduction of the structure [9]. Theoretically, this is an exact method: at each step, two conditional problems are solved on a modified graph (the element is operational/failed), and the results are combined with the corresponding probability weights. On small graphs, the decomposition is close to the optimal partition of the problem and is therefore competitive. But even the simplest analysis of the recursion tree shows an exponential increase in the number of subproblems in the worst case. In addition, if the "special" element is chosen poorly (for example, a high-degree vertex in a non-uniform topology), both the depth and the branching of the recursion increase significantly [10]. In the reference bridge scheme, the strict implementation gives an exact value of 0.999675 at p = 0.995 for the elements, but when scaling to 15–20 nodes, the empirical time increases dramatically, and any approximation of intermediate subcircuits leads to a noticeable bias of the estimate. Algorithmically, this is a manifestation of #P-hardness: without additional graph structure, the reduction inevitably generates an explosion of states [11].
The logic–probabilistic method translates the problem into the algebra of Boolean functions, representing the structural function in a perfect disjunctive or conjunctive normal form and then passing to probabilistic identities [12]. Its mathematical merit lies in the strict equivalence of the transformations and the ability to obtain exact values of the failure-free operation probability once the formula is reduced to a repetition-free form. Classical Y–Δ (triangle–star) transformations and cutting algorithms minimize the multiplicities of variables and reduce the network to a series-parallel one with equivalent components [13]. On reference circuits, this machinery gives exact results and serves as a reliable "gold standard" for verification. The constraint is also well known: the size of perfect forms grows exponentially, and the problem of optimal ordering and elimination of repetitions is close in spirit to NP-hard combinatorial problems. In practice, when the number of elements exceeds 15–20, even automated systems of symbolic transformations either exhaust memory or require a disproportionately long time, which makes the method difficult to apply in multiple-recalculation scenarios [14] (optimization, sensitivity analysis, Monte Carlo embedding).
The algorithmic direction based on binary decision diagrams (BDDs) uses canonical graph representations of Boolean functions. Its strength lies in the ability to perform operations on the structural function as graph manipulations in time polynomial in the size of the diagram, rather than in the initial number of variables [15]. With a successful variable ordering, BDDs achieve two to three orders of magnitude of time gain relative to logical-symbolic methods and allow scaling to dozens, and sometimes hundreds, of elements. Nevertheless, the size of the diagram is extremely sensitive to the ordering, and the choice of a "good" permutation of variables is itself algorithmically hard. In dense networks, the diagram grows exponentially. Thus, the BDD is a powerful but uneven tool: its performance guarantees depend on the topology [16,17,18]. Recent comparative studies typically report that, under favorable variable orderings, BDD-based exact evaluation scales from tens to at least several dozens of components with practical runtimes, while unfavorable orderings can increase the diagram size by orders of magnitude and make dense graphs computationally prohibitive. Reported speedups over symbolic logic–probabilistic manipulations are commonly within 10²–10³ on structured benchmarks, but they degrade sharply as graph density increases, which motivates topology-aware exact kernels and reduction-based approaches.
Probabilistic methods, primarily Monte Carlo simulation and its variance reduction techniques (importance sampling, splitting, cross-entropy schemes), offer a more "stochastic" view [19]. Their main advantage is independence from a particular representation of the structural function and the ability to accommodate arbitrary failure distributions and dependence structures. But in a high-reliability problem, even the best variants require enormous amounts of simulation to reach relative errors of the order of 10⁻⁴–10⁻⁵; in addition, these methods, by their nature, do not return a deterministic number but rather an interval estimate, which limits their role in regulatory and certification contexts. In such cases, probabilistic techniques remain either a tool for rough validation or a component of hybrid schemes [20].
Against this background, a class of formally defined graph convolutions is especially interesting, in which the system is described by a connectivity matrix, and an algorithm based on a finite set of local reduction rules successively simplifies the graph while maintaining accuracy in the sense of two-terminal reliability [21]. The system structure convolution method (SCM) belongs to this paradigm. Its mathematical basis is a composition of probability-correct local transformations: removal of dangling vertices, convolution of series and parallel components, and equivalent triangle-star transformations. Each rule is a strict identity at the level of the structural function or, equivalently, at the level of the corresponding probability of the subnetwork's operability; consequently, the whole composition retains exactness [22]. An important advantage of the SCM is the numerical rather than symbolic nature of the calculations: instead of building and simplifying huge Boolean forms, the algorithm operates with matrices and local parameter updates, which makes it possible to implement it efficiently on modern architectures without the risk of "exploding" algebraic expressions.
Numerical experiments on typical topologies—from the canonical bridge scheme to lattices and random connected graphs—demonstrate that the SCM reproduces exact reference values, coinciding with the logic–probabilistic method on small schemes, and at the same time provides a radical time gain on medium and large schemes [23]. For example, on structures of about 20 elements, acceleration by several orders of magnitude relative to symbolic methods is recorded while maintaining accuracy. At 40 to 50 nodes, convolution yields practical times of seconds, while traditional exact approaches do not finish in reasonable time. In terms of computational mathematics, this means that the SCM effectively "bypasses" the exponential barrier not by giving up exactness, but by choosing an appropriate representation and local computational structure. At the same time, the stability of the results is ensured by the fact that the operations are elementary probabilistic identities without accumulation of numerical instability; for binary states and independent failures, rounding effects are controlled [24].
The paper discusses strict criteria for comparing methods: the nature of the result (bounds versus exact value), asymptotic and real computational cost, sensitivity to topology, and resistance to graph bottlenecks. In addition, the algorithmic implementation of the SCM at the level of matrix data structures is discussed, which facilitates further theoretical study: for example, the formulation of conditions under which a sequence of rules is guaranteed to lead to a polynomial number of steps, and a description of graph classes where the best acceleration is achieved.
From the mathematical-applied point of view, it is this formally substantiated and efficiently implemented direction that is most valuable for problems that require multiple accurate recalculations: optimization of the structure for reliability, computational sensitivity analysis, embedding an accurate deterministic kernel in dispersion reduction schemes in Monte Carlo [25]. The relevance is also due to the fact that the result of the SCM is an exact number, and not a statistical assessment, which facilitates comparability with theoretical boundaries, verification on reference examples and subsequent certification of calculations. Finally, the numerical nature of the algorithm is naturally combined with open implementations and data repositories, which is important for reproducibility, a key requirement of modern computational mathematics [26].
Despite the large body of work on exact and approximate reliability evaluation, recent studies are often limited in at least one of the following aspects:
-
they focus on a single family of methods without a unified benchmark protocol across qualitatively different topologies (bridge/non-series–parallel, trees, grids, and irregular connected graphs);
-
they report accuracy without a reproducible, implementation-level comparison of computational cost under a common software/hardware setting;
-
they treat reduction-based exact methods as heuristic without explicitly demonstrating their exactness against a reference ‘gold standard’ on canonical cases and then quantifying scalability on larger graphs.
The novelty of this manuscript is a unified, reproducible comparison framework in which boundary, decomposition, logic–probabilistic, and matrix-based SCM approaches are implemented within one computational environment and evaluated jointly in terms of exactness vs. bounds, runtime scaling, and topology sensitivity, with SCM positioned as an exact numerical kernel for repeated recalculations and structural optimization.
In addition, many prior comparisons rely either on purely analytical toy networks or on stochastic validation, which complicates the interpretation of discrepancies for high-reliability regimes where relative errors of 10⁻⁴–10⁻³ are decision-relevant. The present study addresses these limitations by (a) using canonical and structurally diverse benchmark graphs, (b) validating exact methods against a reference solution on the bridge topology, and (c) reporting runtime scaling within a fixed and transparent implementation setup, thereby making both the accuracy and computational claims directly reproducible.
From a practical perspective, exact and computationally efficient reliability evaluation is required in a wide range of real-world applications, including the analysis of technical infrastructures, safety-critical engineering systems, and networked structures subject to repeated design modifications. Typical examples include power supply and communication networks, transportation and logistics systems, modular mechanical assemblies, and digital system architectures, where reliability indicators must be recalculated multiple times during optimization, sensitivity analysis, or structural reconfiguration. In such scenarios, the main challenge lies in maintaining exactness while avoiding the combinatorial growth of computational cost. Therefore, methods that combine exactness with predictable computational scaling are of particular practical relevance.
Based on this, the purpose of this work is to provide a mathematically rigorous and at the same time applied comparative analysis of computational methods for assessing the reliability of non-recoverable systems with a complex structure with an emphasis on the SCM.
The paper formalizes the statement of the problem in terms of graphs and structural functions, establishes the correctness and compositionality of the local rules of the SCM, and presents representative numerical experiments on canonical and random topologies, comparing accuracy and computational efficiency with boundary, decomposition, logic–probabilistic, and BDD approaches. It is shown that the SCM reproduces exact values, coinciding with the "gold standard" on small circuits, and at the same time provides multi-order acceleration on medium and large graphs, which makes it a preferred core for subsequent mathematical developments focused on analyzing and optimizing the reliability of complex subsystems in engineering applications.
The structure of the article is organized as follows: Section 2 contains a formal statement of the problem and a detailed description of the methods under study with an emphasis on mathematical rigor. Section 3 presents the results of computational experiments, their analysis and discussion. Section 4 formulates the main findings and outlines prospects for further research.

2. Materials and Methods

This section describes the materials, test structures, and computational procedures used to compare the reliability evaluation methods. For better visualization of the research methodology, a unified methodological framework is introduced. It summarizes the sequential stages of the study, including benchmark structure selection, method implementation, reference validation, computational performance evaluation, and comparative analysis of accuracy and scalability. The framework provides a compact overview of how boundary, decomposition, logic–probabilistic, and system structure convolution methods are integrated within a single computational workflow (Figure 1).
To summarize the methodological part of the study, a unified framework was introduced to systematize the sequence of computational procedures and to ensure the consistency of comparisons across the different reliability assessment methods, as shown in Figure 1. The framework reflects the logical flow from the selection of benchmark system structures to the application of alternative analytical approaches, followed by validation against reference results and comparative evaluation in terms of accuracy and computational efficiency. Such a representation makes explicit the role of each method within the overall workflow and provides a transparent basis for the subsequent analysis of numerical results.
Abbreviations and symbols. SCM—System Structure Convolution Method; LPM—logic–probabilistic method; Psys—system reliability (probability of failure-free operation); pi—reliability of element i; qi = 1 − pi—failure probability of element i; Ii—Birnbaum importance index of element i.

2.1. Object of Research and Design of the Experiment

The assumption of statistical independence of element failures is justified by the physical and operational characteristics of mining electronic subsystems considered in this study. In particular, the analyzed rectifier unit consists of spatially separated power semiconductor components with individual thermal paths, independent electrical loading, and no common control or protection mechanisms that could induce correlated failures. Failure statistics obtained from long-term field operation indicate that dominant failure mechanisms—such as thermal fatigue, junction degradation, and material aging—act locally at the component level. Therefore, within the considered operational horizon and under normal operating conditions, the assumption of independent binary failures is consistent with established reliability modeling practice for power electronic equipment.
A critically important subsystem was chosen as the object of the study: the rectifier unit of the control system for the main hoist and crowd drives of the EKG-20A (OMZ Group, Yekaterinburg, Russia) mining excavator. This unit powers the excitation circuits of the generators and is a typical example of a complex non-recoverable electronic subsystem at a mining enterprise. Its failure leads to a complete stop of the excavator [27].
The mathematical problem is formulated as follows. Let the system consist of n elements. Each element i is binary: it is either operational (xi = 1) or failed (xi = 0). The state of the entire system is described by the Boolean structural function Φ(x), where x = (x1, x2, …, xn):
\Phi(\mathbf{x}) : \{0,1\}^n \to \{0,1\}, \qquad \Phi(\mathbf{x}) = \begin{cases} 1, & \text{the system is operational}, \\ 0, & \text{the system is failed}. \end{cases} \tag{1}
The probability of failure-free operation of the i-th element is denoted as pi = P(xi = 1), and the probability of failure is qi = 1 − pi. Element failures are considered independent events.
The target indicator, the probability of failure-free operation Psys, is the mathematical expectation of the structural function:
P_{\mathrm{sys}} = \mathbb{E}\left[\Phi(\mathbf{x})\right] = \sum_{\mathbf{x}\in\{0,1\}^n} \Phi(\mathbf{x}) \prod_{i=1}^{n} p_i^{x_i}\,(1-p_i)^{1-x_i}. \tag{2}
Direct computation by (2) requires enumeration of 2^n states, which is practically infeasible even for n > 20. To overcome this computational complexity, deterministic methods that make it possible to efficiently calculate or estimate Psys are investigated and compared in this work.
The system is modeled as an undirected connected graph G = (V,E), where the set of vertices V = {v1, v2, …, vn} corresponds to the elements of the system, and the set of edges E ⊆ V × V represents the functional connections between them. The system is considered operational if and only if there exists at least one path from the source node s to the terminal node t in the graph that consists exclusively of vertices and edges corresponding to serviceable elements (xi = 1).
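For reference, the exhaustive evaluation of (2) can be sketched directly. The snippet below assumes the five-element bridge topology used later in the text, with elements modeled as edges between terminals s and t; the names and the NetworkX-based connectivity check are illustrative, not the authors' implementation:

```python
from itertools import product

import networkx as nx

# Bridge topology: elements 1..5 as edges between terminals s and t (assumed layout)
EDGES = {1: ("s", "a"), 2: ("s", "b"), 3: ("b", "t"), 4: ("a", "t"), 5: ("a", "b")}

def reliability_bruteforce(p, edges=EDGES, s="s", t="t"):
    """Exact two-terminal reliability by enumerating all 2^n element states."""
    ids = sorted(edges)
    total = 0.0
    for states in product((0, 1), repeat=len(ids)):
        g = nx.Graph()
        g.add_nodes_from([s, t])
        weight = 1.0
        for x, i in zip(states, ids):
            weight *= p[i] if x else 1.0 - p[i]
            if x:
                g.add_edge(*edges[i])
        if nx.has_path(g, s, t):
            total += weight  # accumulate the probability of this operational state
    return total

p = {i: 0.995 for i in range(1, 6)}
print(reliability_bruteforce(p))
```

The cost is Θ(2^n) connectivity checks, which is exactly the barrier that the methods compared below are designed to avoid.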

2.2. Mathematical Apparatus of the Compared Methods

2.2.1. Method of Minimal Paths and Cuts (Boundary Estimates)

The minimal path Lj is a minimal set of elements whose joint serviceability guarantees the operability of the system. Formally, for the index set Ij ⊆ {1, …, n} corresponding to the path, the following holds: if ∀i ∈ Ij: xi = 1, then Φ(x) = 1, and no proper subset I′ ⊂ Ij has this property.
The minimal cut Sk is a minimal set of elements whose simultaneous failure leads to the failure of the system. Formally, for Ik ⊆ {1, …, n}: if ∀i ∈ Ik: xi = 0, then Φ(x) = 0, and no proper subset I′ ⊂ Ik has this property.
Let the system have mp minimal paths {L1, …, Lmp} and ms minimal cuts {S1, …, Sms}. Let us denote the events:
-
Aj = {all elements of path Lj are operational},
-
Bk = {all elements of cut Sk have failed}.
Then, the system is operational if at least one of the events Aj has occurred, and has failed if at least one of the events Bk has occurred. Using the inclusion–exclusion formula, we can obtain the upper (U) and lower (L) bounds for Psys:
U = P\!\left(\bigcup_{j=1}^{m_p} A_j\right) = \sum_{j=1}^{m_p} P(A_j) - \sum_{1\le j<k\le m_p} P(A_j A_k) + \cdots + (-1)^{m_p+1} P(A_1 \cdots A_{m_p}),
L = 1 - P\!\left(\bigcup_{k=1}^{m_s} B_k\right) = 1 - \sum_{k=1}^{m_s} P(B_k) + \sum_{1\le k<l\le m_s} P(B_k B_l) - \cdots \tag{3}
In practice, series (3) is truncated to obtain Bonferroni estimates. For the bridge scheme at pi = p = 0.995, the minimal paths are L1 = {1,4}, L2 = {1,5,3}, L3 = {2,5,4}, L4 = {2,3}; the minimal cuts are S1 = {1,2}, S2 = {4,3}, S3 = {1,5,3}, S4 = {2,5,4}. Calculation of the bounds by (3), taking intersections into account up to the second order, yielded the interval Psys ∈ [0.999600; 0.999850].
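Under the independence assumption, the truncated series (3) reduces to products over index sets. The following minimal sketch computes second-order Bonferroni bounds for the bridge data above (the path and cut sets are taken from the text; combining path-based and cut-based bounds into one interval is an assumption of this illustration):

```python
from itertools import combinations

# Minimal paths and minimal cuts of the bridge scheme (element indices from the text)
PATHS = [{1, 4}, {1, 5, 3}, {2, 5, 4}, {2, 3}]
CUTS = [{1, 2}, {4, 3}, {1, 5, 3}, {2, 5, 4}]

def set_prob(index_set, prob):
    """P(all elements of the set are in the given state); independence assumed."""
    out = 1.0
    for i in index_set:
        out *= prob[i]
    return out

def bonferroni_union(sets, prob):
    """First/second-order Bonferroni bounds on the probability of a union of events."""
    s1 = sum(set_prob(s, prob) for s in sets)
    s2 = sum(set_prob(a | b, prob) for a, b in combinations(sets, 2))
    return max(0.0, s1 - s2), min(1.0, s1)  # (lower, upper)

p = {i: 0.995 for i in range(1, 6)}
q = {i: 1.0 - p[i] for i in p}

lo_ok, up_ok = bonferroni_union(PATHS, p)     # bounds on P(some path works)
lo_fail, up_fail = bonferroni_union(CUTS, q)  # bounds on P(some cut has failed)

lower, upper = max(lo_ok, 1.0 - up_fail), min(up_ok, 1.0 - lo_fail)
print(lower, upper)
```

At high element reliability the cut-based bounds dominate: the path events overlap so strongly that the second-order path bound degenerates, which is the behavior the text describes.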

2.2.2. Method of Decomposition by “Special Element”

A "special" element e of the system, modeled by a graph edge, is selected (usually one of maximal degree in the graph). By the law of total probability:
P_{\mathrm{sys}} = p_e \cdot P(\Phi = 1 \mid x_e = 1) + (1 - p_e)\cdot P(\Phi = 1 \mid x_e = 0). \tag{4}
The conditional probabilities P(Φ = 1 | xe = 1) and P(Φ = 1 | xe = 0) are calculated on modified subgraphs:
-
G1: The vertices that are incident to edge e are contracted (the edge is guaranteed to be healthy).
-
G0: The edge e is removed from the graph (the element is guaranteed to fail).
Figure 2 shows the full decomposition process for a bridge diagram:
  • Source graph (G): Original bridge diagram. Vertices b and c, connected by a “special element” 5, are highlighted.
  • Graph G1: The result of the condition “element 5 is healthy”. Vertices b and c are compressed into one new vertex (b–c). The red fill visually indicates this merge operation. All other connections are preserved.
  • Graph G0: the result of the condition "element 5 has failed". The edge between b and c is removed, as shown by the gray dotted line. The vertices b and c themselves become gray, denoting a break in the connection, but remain in the graph as separate entities.
The process is applied recursively to the resulting subgraphs until they are simplified to series-parallel structures, for which the probability is computed analytically. For the bridge diagram, element 5 (the central bridge) was chosen as the special element. The calculations yielded the approximate result Psys ≈ 0.994975, which is lower than the exact reference value due to approximations in the evaluation of the conditional subgraphs.
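A strict (non-approximated) form of this recursion is straightforward to express over a NetworkX multigraph: conditioning on an edge either contracts it or deletes it, and parallel edges created by contraction are handled naturally by further conditioning. The sketch below is illustrative only; the function name and the choice of the special edge (first edge on a shortest s-t path, rather than a maximum-degree element as in the text) are assumptions:

```python
import networkx as nx

def factoring_reliability(g, s, t):
    """Exact two-terminal reliability via decomposition by a special element:
    P_sys = p_e * P(works | e up) + (1 - p_e) * P(works | e down)."""
    if s == t:
        return 1.0
    if not nx.has_path(g, s, t):
        return 0.0
    # Special element: the first edge on a shortest s-t path (illustrative choice)
    u, v = nx.shortest_path(g, s, t)[:2]
    key = next(iter(g[u][v]))
    pe = g[u][v][key]["p"]
    g_down = g.copy()
    g_down.remove_edge(u, v, key=key)                       # element failed: delete
    g_up = nx.contracted_nodes(g, u, v, self_loops=False)   # element works: contract
    t_up = u if v == t else t                               # contraction keeps label u
    return pe * factoring_reliability(g_up, s, t_up) + \
           (1 - pe) * factoring_reliability(g_down, s, t)

# Bridge scheme with all element reliabilities p = 0.995
bridge = nx.MultiGraph()
for a, b in [("s", "a"), ("s", "b"), ("b", "t"), ("a", "t"), ("a", "b")]:
    bridge.add_edge(a, b, p=0.995)
print(factoring_reliability(bridge, "s", "t"))
```

The recursion tree grows exponentially in the worst case, which matches the scaling behavior discussed in the Introduction; the value it returns, however, is exact rather than approximate.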

2.2.3. Logic–Probabilistic Method (LPM) and the "Triangle–Star" Transformation

The structural function is represented in the perfect disjunctive normal form (PDNF) along the minimal paths:
\Phi(\mathbf{x}) = \bigvee_{j=1}^{m_p} \bigwedge_{i\in L_j} x_i. \tag{5}
To pass from Boolean algebra to probability arithmetic, expression (5) must be converted to a repetition-free form (a form in which each variable occurs no more than once). The key tool is the probability-correct triangle-star (Δ–Y) transformation.
Let us consider a subgraph in the form of a triangle with vertices a, b, c and edges ab, bc, ca, which have failure-free probabilities pab, pbc, pca. The transformation replaces the triangle with a star with a new center o and edges ao, bo, co with probabilities pao, pbo, pco, calculated using formulas that preserve the probabilities of connectivity between each pair of the initial vertices:
p_{ao} = 1 - (1-p_{ab})(1 - p_{ca}\,p_{bc}),\quad p_{bo} = 1 - (1-p_{bc})(1 - p_{ab}\,p_{ca}),\quad p_{co} = 1 - (1-p_{ca})(1 - p_{ab}\,p_{bc}). \tag{6}
After applying (6) to non-monotonic graph fragments, the remaining series and parallel elements are convolved according to the rules:
p_{\mathrm{serial}} = p_1 \cdot p_2, \qquad p_{\mathrm{parallel}} = 1 - (1-p_1)(1-p_2). \tag{7}
For a bridge scheme, applying the Δ–Y transform (6) to the triangle formed by elements 1, 2, 5 and then using (7) gives the exact value of Psys = 0.999675.
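The transformation (6) and the convolution rules (7) can be transcribed directly. The functions below follow the formulas as printed; the names are illustrative, and no claim is made here about the optimal order in which the transformations should be applied:

```python
def delta_to_star(p_ab, p_bc, p_ca):
    """Delta-Y replacement: star-edge reliabilities per formula (6)."""
    p_ao = 1 - (1 - p_ab) * (1 - p_ca * p_bc)
    p_bo = 1 - (1 - p_bc) * (1 - p_ab * p_ca)
    p_co = 1 - (1 - p_ca) * (1 - p_ab * p_bc)
    return p_ao, p_bo, p_co

def serial(p1, p2):
    """Series convolution, formula (7)."""
    return p1 * p2

def parallel(p1, p2):
    """Parallel convolution, formula (7)."""
    return 1 - (1 - p1) * (1 - p2)

print(delta_to_star(0.995, 0.995, 0.995))
```

A quick sanity check: perfectly reliable triangle edges map to perfectly reliable star edges, and fully failed edges map to fully failed ones, as expected of a probability-preserving replacement.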

2.2.4. System Structure Convolution Method (SCM): Algorithmic Formalization

The SCM is an algorithmic implementation of the LPM that operates not with symbolic expressions but with numerical matrices, which radically increases computational efficiency.
Let A = [aij] be the adjacency matrix of the graph of dimension n × n, where aij = 1 if the edge (i,j) exists and aij = 0 otherwise. Let R = [rij] be the reliability matrix, where rij = pij (the probability of failure-free operation of the edge (i,j)) if aij = 1, and rij = 0 otherwise.
The SCM is an iterative composition of local graph transformations {τk}, each of which is an identity at the level of the structural function:
  • τ1 (Dangling vertex removal): for a non-terminal vertex v of degree 1 connected to vertex u, the edge (u,v) and the vertex v are removed.
  • τ2 (Convolution of series edges): if there is a non-terminal vertex v of degree 2 incident to edges (u,v) and (v,w), these edges are replaced by one equivalent edge (u,w) with probability ruw = ruv·rvw.
  • τ3 (Convolution of parallel edges): if there are k ≥ 2 parallel edges between vertices u and w with probabilities r_{uw}^{(1)}, …, r_{uw}^{(k)}, they are replaced by a single edge with probability
    r_{uw} = 1 - \prod_{m=1}^{k}\left(1 - r_{uw}^{(m)}\right). \tag{8}
  • τ4 (Δ–Y transformation): Applied by Formula (6). A new vertex o is created, the original edges of the triangle are removed, and the edges of the star are added.
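The reduction loop can be sketched over a graph structure as follows. The fragment implements τ1-τ3 only (τ4 is omitted for brevity), so it fully reduces series-parallel two-terminal graphs; all names are illustrative, not the authors' matrix-level implementation:

```python
import networkx as nx

def scm_reduce(g, s, t):
    """SCM-style reduction loop with rules tau1 (dangling vertex removal),
    tau2 (series convolution), tau3 (parallel convolution); each rule is an
    exact identity for s-t reliability. tau4 (delta-star) is omitted here."""
    g = nx.MultiGraph(g)
    changed = True
    while changed:
        changed = False
        # tau1: remove non-terminal vertices of degree <= 1
        for v in [v for v in list(g) if v not in (s, t) and g.degree(v) <= 1]:
            g.remove_node(v)
            changed = True
        # tau3: merge every bundle of parallel edges into one equivalent edge
        pairs = {tuple(sorted((a, b), key=str))
                 for a, b, _ in g.edges(keys=True) if a != b}
        for u, v in pairs:
            keys = list(g[u][v])
            if len(keys) > 1:
                q = 1.0
                for k in keys:
                    q *= 1.0 - g[u][v][k]["p"]
                g.remove_edges_from([(u, v, k) for k in keys])
                g.add_edge(u, v, p=1.0 - q)
                changed = True
        # tau2: series convolution at non-terminal vertices of degree 2
        for v in [v for v in list(g) if v not in (s, t) and g.degree(v) == 2]:
            inc = list(g.edges(v, keys=True, data=True))
            if len(inc) != 2:
                continue
            (_, n1, _, d1), (_, n2, _, d2) = inc
            if n1 != n2:
                g.remove_node(v)
                g.add_edge(n1, n2, p=d1["p"] * d2["p"])
                changed = True
    return g

# Example: two parallel s-a branches, a series a-t edge, and a dangling vertex d
g = nx.MultiGraph()
g.add_edge("s", "a", p=0.9)
g.add_edge("s", "a", p=0.9)
g.add_edge("a", "t", p=0.8)
g.add_edge("a", "d", p=0.5)
h = scm_reduce(g, "s", "t")
print(list(h.edges(data=True)))
```

Each rule strictly reduces the number of vertices or edges, so the loop terminates; on the example above it collapses the graph to a single s-t edge whose parameter is the system reliability.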
Theorem 1. 
Let a non-recoverable system be represented by an undirected two-terminal graph G = (V,E) with independent binary elements, where each element is either operational or failed. The local graph transformations τ1–τ4 preserve the probability of failure-free operation between the terminals s and t, provided that the equivalent parameters of the transformed substructures are computed using the corresponding conditional probabilities.
Proof (schema). 
Each transformation τk has a precise probabilistic justification: τ1 removes vertices that cannot lie on any s–t path and therefore does not change two-terminal reliability; τ2 and τ3 correspond to the rules for series and parallel systems; τ4 is derived from the law of total probability for the triangle configuration. □
Probabilistic justification of τ4. Let eE denote the element eliminated by transformation τ4. By the law of total probability, the two-terminal reliability of the original structure can be written as
Psys = P(e = 1)·Psys∣(e = 1) + P(e = 0)·Psys∣(e = 0),
where the conditional probabilities correspond to the reliabilities of the reduced graphs obtained by contraction and deletion of element e, respectively. The transformation τ4 replaces the original configuration with an equivalent structure whose reliability reproduces these conditional values. Therefore, the total probability of failure-free operation between the terminals s and t remains invariant under transformation τ4. Such probability-preserving reductions are well known in classical network reliability theory. The composition of such identity transformations is itself an identity with respect to the system reliability.
Computational complexity. At each step, the algorithm scans the matrices A and R of size O(n2). For an arbitrary graph, the problem is #P-hard. However, for a wide class of engineering systems (moderate bond density, hierarchy), polynomial complexity O(nα) is empirically observed, with α ≈ 3.5 … 3.9, as confirmed by regression analysis. This is because each transformation significantly reduces the size or connectivity of the problem.

2.2.5. Mathematical Formalization of SCM Computational Complexity Estimation

A Formal Representation of Computational Complexity
Let G = (V,E) be the graph of a system, where n = |V| is the number of vertices (elements) and m = |E| is the number of edges (ties). The execution time of the SCM algorithm is denoted as TSCM(n,m).
Empirical Estimation of Complexity Based on Regression Analysis
Experimental data (Figure 3) show that the run-time versus number of elements for different topologies is approximated by a power function:
TSCM(n) = C·nβ,
where C > 0 is a constant that depends on the computing platform, and β is the exponent that determines the nature of the growth in complexity.
The empirical complexity estimates are supported by a theoretical upper-bound analysis of the SCM algorithm. Each local transformation applied by SCM reduces the number of vertices or edges in the graph and can be performed in polynomial time with respect to the current graph size. For sparse and moderately dense graphs, the identification of applicable transformations and the corresponding graph updates require at most O(n2) operations per iteration. Since each transformation strictly reduces the graph size, the total number of iterations is bounded by O(n). Therefore, the overall worst-case time complexity of SCM is bounded by a polynomial function of the form O(nγ), where γ > 2. This theoretical bound is consistent with the observed empirical power-law behavior and explains the absence of exponential growth characteristic of state-space–based exact reliability methods.
The above functional form is obtained by empirical complexity estimation based on regression analysis of numerical experiments. For a fixed class of system topologies, the execution time of the SCM algorithm was measured for increasing numbers of elements. The resulting data were analyzed in logarithmic coordinates, where a linear relationship between log TSCM and log n was observed. This indicates a power-law dependence of the form TSCM(n) = Cnβ, where the exponent β characterizes the effective growth rate of computational complexity, and the constant C accounts for platform-dependent factors. The parameters C and β were estimated using least-squares regression in log–log scale.
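The fitting procedure described above can be reproduced in a few lines of NumPy; the runtimes below are synthetic placeholders generated from an assumed power law, not measurements from the experiments:

```python
import numpy as np

# Synthetic illustration of the fitting procedure: runtimes generated
# from an assumed power law T(n) = C * n^beta with C = 2e-6, beta = 3.7
n = np.array([10, 20, 40, 80, 160], dtype=float)
T = 2e-6 * n ** 3.7

# Least-squares line in log-log coordinates: log T = log C + beta * log n
beta, logC = np.polyfit(np.log(n), np.log(T), 1)
C = np.exp(logC)

print(f"beta = {beta:.2f}, C = {C:.1e}")  # recovers beta = 3.70, C = 2.0e-06
```

With real measurements, the residuals of this fit also indicate how well the single power-law model describes a given graph class.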
Estimation of the Exponent for Different Graph Classes
Regression analysis in logarithmic coordinates gives the following estimates:
log TSCM(n) ≈ log C + β·log n.
For different graph classes, the following β values were obtained. For each graph class, the constant coefficient C in the empirical complexity model TSCM(n) = Cnβ was determined directly from experimental runtime measurements. After estimating the exponent β for a given graph class by linear regression in log–log scale, the constant C was computed as the intercept of the corresponding fitted regression line. Therefore, the numerical values of C reported below are not selected heuristically but represent regression-based scaling coefficients obtained under identical hardware and software conditions. Differences in the constant C across graph classes reflect structural characteristics of the graphs, such as average connectivity and locality of transformations, rather than changes in the algorithm itself. A formal proof of tighter bounds for specific graph classes is beyond the scope of this paper and will be addressed in future work.
  • For tree structures:
    βtree ≈ 3.5.
  • For lattice structures:
    βgrid ≈ 3.7.
  • For loosely connected random graphs:
    βrandom ≈ 3.9.
Overall Computational Complexity Score
Thus, for typical engineering topologies, the following is performed:
TSCM(n) = O(nβ), β ∈ [3.5, 3.9],
which corresponds to polynomial complexity, as opposed to the exponential O(2n) complexity of complete state enumeration.
Comparison with the Theoretical Limits of Complexity
Although the task of accurately calculating the reliability of an arbitrary graph is #P-hard, for this class of engineering structures a polynomial estimate is achievable due to the following properties of the SCM algorithm:
  • Locality: Each transformation τk affects only a local neighborhood of the graph.
  • Sparsity preservation: Transformations do not increase the density of the graph.
  • Modularity: Complexity is defined as the sum of the processing complexities of independent modules.
Formally, this can be expressed as
TSCM(G) = ∑k=1…K T(τk) ≤ ∑i=1…M ci·|Vi|^γi,
where K is the total number of transformations applied, M is the number of independent modules in the graph, |Vi| is the size of the i-th module, and γi ∈ [1, 2] is an exponent characterizing the complexity of processing the module.
The matrix implementation of SCM (code in Appendix A) uses efficient operations on connectivity and reliability matrices (NumPy) and graph structures (NetworkX) to process systems with up to 500 elements in a reasonable amount of time.

2.3. Element Parameters and Experiment Plan

The reliability of each element (power diode) was evaluated on the basis of failure statistics for a batch of 50 modules that operated for 10,000 h under the conditions of a mining excavator. An exponential law for the distribution of time between failures was adopted, with a failure rate of λ = 2.5·10−6 1/h. The probability of failure-free operation for the planned overhaul cycle T = 2000 h was
pi = P(T) = exp(−λT) = exp(−2.5·10−6·2000) ≈ 0.995,  qi = 1 − pi ≈ 0.005.
For the purity of the comparison of methods, pi = 0.995 is assumed for all elements of all test structures in all computational experiments (Figure 3).
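As a quick consistency check of these element parameters (values taken from the text):

```python
import math

lam = 2.5e-6   # failure rate, 1/h (from the fitted exponential law)
T = 2000.0     # planned overhaul cycle, h

p = math.exp(-lam * T)   # probability of failure-free operation
q = 1.0 - p              # probability of failure

print(f"p = {p:.6f}, q = {q:.6f}")  # p = 0.995012, q = 0.004988
```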
The experimental part of the work consisted of three stages:
  • Formalization of the mathematical model and development of algorithms for four classes of methods.
  • Programmatic implementation of methods in Python 3.10 using NumPy and NetworkX.
  • A series of computational experiments on representative test structures (bridge, tree, lattice, random graph) with fixed parameters p = 0.995 to quantify the accuracy, calculation time, and scalability of each method.
All calculations were performed on a single computing platform (Intel Core i7-12700K, 64 GB of RAM) to eliminate the influence of hardware factors. Fixed seed values of random number generators were used to ensure reproducibility.
To improve the clarity of the proposed methodology, the overall workflow of the system structure convolution method is summarized in Figure 3. The diagram provides a compact visual representation of the sequence of operations, starting from the input system graph and proceeding through successive local transformations that preserve the probability of failure-free operation. The workflow highlights the iterative reduction of the system structure until a simplified graph is obtained, after which the system reliability can be directly evaluated. This representation emphasizes the role of local graph transformations as the core mechanism of the proposed approach.
The diagram in Figure 3 illustrates the overall computational workflow, including the input system graph, successive application of local probability-preserving transformations (τ1–τ4), intermediate graph reduction steps, and final evaluation of system reliability.

2.4. Element Parameters

The algorithmic structure of the system structure convolution method is schematically illustrated in Figure 4. The diagram outlines the main computational stages, including initialization of the system graph, selection of applicable transformations, iterative simplification, and verification of termination conditions. Such a representation complements the formal algorithmic description by explicitly showing the control flow and decision points, thereby facilitating the interpretation of the implementation details discussed below.
The schematic representation in Figure 4 shows the main algorithmic steps of the SCM, including initialization, selection of applicable transformations, iterative graph simplification, termination conditions, and computation of the system reliability.
The study is based on the combination of theoretical-probabilistic and computational approaches aimed at assessing the reliability of complex-structured non-recoverable subsystems of mining electronic equipment. The overall work plan included three phases. At the first stage, a mathematical model of the system in the form of an undirected graph was formulated, and the analytical prerequisites for applying various deterministic methods of reliability calculation were determined. The second stage consisted of the development and software implementation of algorithms of four classes of methods: boundary, analytical, logical–probabilistic, and algorithmic. The third stage was a series of computational experiments on representative structures, including a canonical bridge scheme, tree and lattice configurations, and random connected graphs. The purpose of the experiments was to quantitatively compare the accuracy, computational efficiency, and scalability of each method with fixed parameters of the system elements (Figure 5 shows the representative topologies considered).
For all system elements, the same probability of failure-free operation p = 0.995 is adopted, which corresponds to highly reliable electronic components in critical applications. For verification and comparative analysis of the methods, a set of test structures was used, covering the main classes of systems encountered in the practice of designing mining electronic equipment [28]. Figure 5 shows the visualizations of the key test configurations.
Figure 5 shows the four key test structures used to benchmark the reliability assessment methods:
The presented set of structures provides comprehensive validation of methods, covering the entire spectrum of structural complexity from canonical test cases to models of real systems.
Thus, Figure 5 shows representative test structures used for comparative analysis: bridge (6 elements), tree (15/31 elements), square grid 3 × 3 and 4 × 4 (9/16 elements), and connected random graphs (20/50 elements). Parameterization: pi = 0.995 and independent failures across elements.
Table 1 systematizes the quantitative parameters and functional purpose of each test structure. A bridge diagram is used for basic method verification, a tree structure for scalability testing, a lattice configuration for connectivity analysis, and a random graph for modeling real-world systems.
Key metrics include
-
Number of elements and links
-
Maximum degree of vertices
-
Level of structural complexity
-
Purpose in testing methodology
Table 1 summarizes the test structures and their parameters for reliability analysis.

2.5. Numerical Characteristics of the Reliability of the Elements

The selection of numerical and structural parameters used in this study is guided by considerations of reproducibility, numerical stability, and consistency with established practices in reliability analysis. Unless stated otherwise, structural parameters of the system models follow standard assumptions commonly adopted in the literature for non-recoverable systems with independent binary elements. Parameters related to numerical accuracy and stopping criteria were determined empirically based on preliminary sensitivity analyses to ensure stable convergence and to avoid unnecessary computational overhead. Heuristic choices were limited to auxiliary algorithmic thresholds and were selected conservatively to minimize their influence on the final reliability estimates. All parameter values are explicitly reported to enable full reproducibility of the numerical results.
Reliability parameters were obtained based on the analysis of failure statistics of a batch of 50 modules operating for 10,000 h on mining excavators.
-
Failure Rate λ: For power semiconductor components operating under cyclic thermal and mechanical loads, an exponential law of MTBF distribution was adopted. The estimated failure rate was λ = 2.5·10−6 1/h.
-
Probability of failure-free operation (P) at operating time T = 2000 h: This operating time corresponds to the planned overhaul cycle of the excavator.
P(T) = exp(–λ T) = exp(–2.5·10−6·2000) ≈ 0.995
-
Probability of failure (Q) at operating time T = 2000 h:
Q(T) = 1 − P(T) ≈ 0.005.
Thus, for subsequent calculations, it is assumed that for each diode in the circuit:
-
P = 0.995;
-
Q = 0.005.

2.6. Condition of System Operability

The system is functional if there is at least one closed path from the input phase to the output terminal for both the positive and negative half-wave of the voltage [29]. For a bridge circuit, this is equivalent to having at least one working diode in each of the two arms (upper and lower) of a three-phase bridge, but taking into account phase bonds. This condition is described not by a sequentially parallel, but by a bridge structure [30].
All calculations were performed in a single digital environment that excludes the influence of hardware factors. A computing platform based on an Intel Core i7-12700K processor with 64 GB of RAM running Windows 11 was used. The software implementation was made in Python 3.10, which ensured the transparency of the algorithms and the possibility of open replication of the results. The NumPy and NetworkX libraries provided efficient work with matrices and graphs [31]. Such a choice of environment is due to the fact that the SCM operates with connectivity and reliability matrices, and its efficiency largely depends on the speed of linear operations and graph reductions [32].
From the point of view of mathematical description, the system under study was considered as an undirected connected graph G = (V, E), where the set of vertices V = {υ1, υ2, …, υn} represents the elements of the subsystem, and the set of edges E ⊆ V × V represents their functional connections.
The investigated subsystem—the rectifier block of the EKG-20A (OMZ Group, Yekaterinburg, Russia) excavator drive—was modeled as a canonical bridge circuit comprising six semiconductor diodes (Figure 6). Each element state is encoded by a binary variable xi ∈ {0, 1}, and the overall system state is captured by the structural function Φ(x):
Φ(x1, x2, …, xn) = 1, if the system is operational; 0, otherwise.
The target quantity is the survival probability expressed as the expectation of the structural function:
Psys = E[Φ(x)] = P{Φ(x) = 1}.
Each element υi was assigned a Boolean variable xi ∈ {0, 1}, reflecting its state: xi = 1 when healthy and xi = 0 when failed. It was assumed that the failures of the elements were independent, and the probability of failure-free operation of the element pi = P(xi = 1) was known from statistical tests. The state of the entire system was described by the structural function Φ(x1, x2, …, xn), which takes the value 1 when the system is healthy and 0 when it fails. The main calculated value was the probability of system failure-free operation:
Psys = P{Φ(x) = 1} = E[Φ(x)].
This formula serves as the starting point for all implemented methods; the difference between them lies in the way the mathematical expectation of the structural function is calculated.
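For small structures, this expectation can be evaluated exactly by enumerating all 2^n element states; the sketch below is our illustrative reference check (not one of the four compared methods), with s–t connectivity playing the role of the structural function:

```python
from itertools import product
import networkx as nx

def reliability_bruteforce(edges, probs, s, t):
    """Exact Psys = E[Phi(x)] by enumerating all 2^n element states;
    Phi(x) = 1 iff the terminals s and t remain connected."""
    P = 0.0
    for state in product((0, 1), repeat=len(edges)):
        # probability of this joint state under independent failures
        w = 1.0
        for x, p in zip(state, probs):
            w *= p if x else 1.0 - p
        # keep only the edges whose elements are operational
        G = nx.Graph()
        G.add_edges_from(e for e, x in zip(edges, state) if x)
        if G.has_node(s) and G.has_node(t) and nx.has_path(G, s, t):
            P += w
    return P

# Canonical five-element bridge, p = 0.9 for every element
bridge = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)]
P = reliability_bruteforce(bridge, [0.9] * 5, 0, 3)
print(round(P, 5))
```

For the five-element bridge with p = 0.9, the enumeration reproduces the known closed-form value 2p² + 2p³ − 5p⁴ + 2p⁵ ≈ 0.97848.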
To assess the reliability of a real subsystem, the rectifier unit of the EKG-20A (OMZ Group, Yekaterinburg, Russia) mining excavator main drive control system was chosen. This unit includes six SEMIKRON SKD 100/12 (SEMIKRON, Nuremberg, Germany) power diodes that form a classic three-phase bridge circuit [33]. The reliability of each diode was determined as p = 0.995, which corresponds to the failure statistics over 10,000 h of operation. The bridge topology is not reducible to a series-parallel form and therefore serves as a test of the correctness and universality of the methods. The actual electrical circuit was formalized as a connectivity matrix and further used as a basic topology for method comparison. The mathematical apparatus of the study was based on a comparison of four approaches that differ in the degree of analyticity and algorithmic feasibility. For each of them, dedicated computing modules with a single I/O interface were developed, which made it possible to compare the results on the same initial data.
The first class covered boundary methods based on minimum paths and minimum sections. In them, the probability of system failure is estimated by combining the health events of all minimum paths:
Pupper = P(L1 ∪ L2 ∪ ⋯ ∪ Lm)
and through the intersection of minimum cross-section failure events:
Plower = 1 − P(S1 ∪ S2 ∪ ⋯ ∪ Sl),
where Lj is the j-th minimal path and Sk is the k-th minimal cut. The numerical values were calculated using a truncated inclusion–exclusion formula to obtain the upper and lower limits of the true probability Psys. These boundaries were interpreted as a control interval against which the subsequent methods were checked [34]. Computationally, these methods had linear or quadratic complexity and served as a starting point for evaluating the scalability of more complex algorithms. The next block was the method of decomposition relative to a special element, which is based on the total probability formula:
Psys = pk·P{Φ(x) = 1 | xk = 1} + (1 − pk)·P{Φ(x) = 1 | xk = 0}.
The pivot element k was selected according to the criterion of the maximum vertex degree, which minimized the growth of dimensionality in the recursive steps [35]. Each conditional term was calculated for a modified subcircuit obtained by fixing the state of this element. The method required decomposing the system into two subgraphs and sequentially calculating their reliability down to series-parallel fragments with analytically exact values, while tracking the accumulation of error as the number of recursions increases. Theoretically, the method provides complete accuracy but, in practice, its complexity increases exponentially, which was the subject of quantitative analysis [36].
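The ideal (approximation-free) form of this recursion can be sketched via deletion and contraction of the pivot element; this is our illustrative implementation using NetworkX (edge reliabilities in an `r` attribute), not the module used in the experiments:

```python
import networkx as nx

def reliability_factoring(G, s, t):
    """Exact two-terminal reliability by the total-probability
    (factoring) recursion: condition on one pivot element, then
    recurse on the contracted and deleted graphs."""
    if s == t:
        return 1.0
    if not (G.has_node(s) and G.has_node(t)) or not nx.has_path(G, s, t):
        return 0.0
    # pivot: first edge of a shortest s-t path (so u == s here)
    u, v = nx.shortest_path(G, s, t)[:2]
    key = next(iter(G[u][v]))
    p = G[u][v][key]['r']
    Gd = G.copy()                  # pivot failed: delete the edge
    Gd.remove_edge(u, v, key)
    Gc = nx.contracted_nodes(G, u, v, self_loops=False)  # pivot works
    t2 = u if t == v else t       # terminal t may have been merged
    return p * reliability_factoring(Gc, u, t2) \
        + (1.0 - p) * reliability_factoring(Gd, s, t)

# Five-element bridge, p = 0.9: the exact value is 0.97848
G = nx.MultiGraph()
G.add_edges_from((u, v, {'r': 0.9}) for u, v in
                 [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)])
P = reliability_factoring(G, 0, 3)
print(round(P, 5))
```

Each call spawns two subproblems, which makes the exponential growth of the exact decomposition explicit.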
The logical–probabilistic method was used as a reference tool for checking the correctness of other approaches. It is based on the representation of the structural function of the system in the perfect disjunctive normal form (SDNF):
Φ(x) = ⋁j=1…m ⋀i∈Lj xi,
where each disjunctive term corresponds to one minimal path. In order to move from logical to probabilistic operations, the function must be reduced to a repetition-free (orthogonal) form, which is achieved by means of a sequence of algebraic transformations. The key one is the triangle-star transformation, which replaces the triangular bond of three elements with an equivalent “star” with new bond probabilities calculated by the formulas:
pab = 1 − (1 − pa·pb)(1 − pc), pac = 1 − (1 − pa·pc)(1 − pb), pbc = 1 − (1 − pb·pc)(1 − pa).
These relations ensure the preservation of the general probability of the subsystem’s operability and make it possible to reduce an arbitrary network to an equivalent sequentially parallel structure. After the transformation, logical operations were replaced by arithmetic ones: conjunction was replaced by the product of probabilities, and disjunction was replaced by the probability of combining events. The purpose of this method was not to search for the optimal algorithm, but to provide a reference “exact” solution for comparison with other methods and for verifying the correctness of the implemented SCM algorithm.
The main attention of the study was focused on the algorithmic SCM [37], which is a strictly formalized process of graph reduction. Its mathematical essence lies in the sequential application of a set of transformations, each of which locally preserves the probability of failure-free operation, including the triangle-star transformation [38]. The algorithm worked in an iterative mode: with each pass through the connectivity matrix, the applicability of the rules was checked, after which the corresponding elements of the reliability matrix were transformed and recalculated. The process continued until the structure was reduced to a single equivalent element. Formally, the algorithm can be considered as a finite composition of probability-correct mappings fi: Gi → Gi+1, where for each step Psys(Gi) = Psys(Gi+1). Such a construction ensures that the final estimate coincides with the exact value, regardless of the sequence in which the rules are applied.
The purpose of using the SCM was to demonstrate its advantages in computational efficiency without losing accuracy. The algorithm is implemented as a modular code in Python, where the connectivity matrix is stored as a dense NumPy array, and the graph search and reduction procedures are optimized at the level of basic operations [39]. Such a structure made it possible to calculate systems of up to 50 elements, which was previously almost unattainable for deterministic methods. For each test graph, three metrics were recorded: execution time, amount of memory used, and deviation from the reference value obtained by the logical–probabilistic method. The results showed that for structures of medium and high complexity, the SCM provides an acceleration of 103–106 times while maintaining absolute accuracy.
As an additional block of the study, the statistical data of the reliability of the elements were processed. The results of operational tests were used for power diodes, which made it possible to estimate the parameters of the distribution of time to failure and substantiate the assumption about their independence for all structures. This ensured the comparability of the results and the connection of the computational experiment with real engineering practice.
Thus, the methodological structure of the study combined analytical rigor and computational experiment. Within the framework of a single mathematical model, it was possible to implement four methods covering the entire spectrum—from approximate boundary estimates to exact algorithmic solutions [40]. The key feature of the technique was its algorithmic closure: all procedures for convolution, transformation, and calculation of probabilities were performed automatically within one software platform, which excluded the influence of subjective factors and increased the reproducibility of the results [41]. From a mathematical point of view, this made it possible to move from descriptive comparison to quantitative analysis of algorithmic complexity, identifying a transition area (about 15 elements) where analytical methods become ineffective, and the SCM retains a polynomial increase in computation time. The code for the Python implementation of SCM and the results of the code execution are presented in Appendix A.

2.7. Experimental Setup and Reproducibility

All numerical experiments were conducted on synthetic benchmark system structures commonly used in reliability analysis, including tree, lattice, bridge, and randomly generated connected graphs. The number of system elements varied from small canonical configurations to larger networks in order to assess scalability and robustness. Component reliability parameters were assigned uniformly within the considered test cases, unless stated otherwise, to ensure comparability across different methods.
The parameters of the proposed system structure convolution method were fixed across all experiments. No dataset-specific tuning was performed. Stopping criteria and numerical accuracy thresholds were selected conservatively to ensure stable convergence and reproducibility of the results.
All computations were performed on a single hardware platform to ensure fair comparison. The experiments were executed on a workstation equipped with a multi-core CPU and standard memory configuration, using a compiled implementation of the algorithms. The same computational environment was used for SCM, Monte Carlo simulation, and BDD-based evaluation.
To support reproducibility, the implementation of the proposed method and the scripts used to generate experimental results are available upon reasonable request. The experimental setup, parameter settings, and benchmark generation procedures are fully described in the manuscript to enable independent replication.

3. Results and Discussion

For the initial verification of the developed and adapted reliability assessment algorithms, we applied the canonical bridge scheme, traditionally used as a reference model in engineering calculations of complex systems [42]. This scheme contains five elements connected in such a way that the failure of any of the extreme branches does not lead to an instantaneous failure of the system, while the middle element forms a parallel bypass path. The scheme combines parallel and sequential relationships, as well as the possibility of structural dependencies, which makes it an ideal test case for checking the correctness and accuracy of the algorithms.
Within the framework of this work, the scheme was considered under the condition of equal reliability of all elements, that is, with the same probability of failure-free operation of individual components pi = p. In this case, the probability of element failure is defined as qi = 1 − pi. The value p = 0.9 is taken to assess the sensitivity of the methods to the reliability level of the elements. All calculations were performed in a Python 3.10 environment on an Intel Core i7-12700K processor (Intel, Santa Clara, CA, USA) with 64 GB of RAM, without the use of parallel calculations [43].
The main purpose of validation was to verify the consistency of the results obtained by four different methods:
(1)
method of analytic decomposition by “special element”;
(2)
system structure convolution method (SCM) (matrix);
(3)
method of LVM (logical–probabilistic modeling) “triangle-star”;
(4)
upper–lower bounds methods.
For each approach, the final probability of failure-free operation of the system was calculated, together with the execution time and the relative error with respect to the reference value.

3.1. Reference Solution and Accuracy Criteria

The value Pref = 0.999675, obtained using logical–probabilistic modeling (LVM) and confirmed by the system structure convolution method (SCM) over a large number of tests, is taken as the exact reference solution [44]. The results of the two methods coincided to the sixth decimal place.
For each method, the following were calculated:
-
absolute error ΔP = |Pest − Pref|;
-
relative error δ = (ΔP / Pref) × 100%;
-
as well as the calculation time tcalc in seconds.
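As a check, applying these metrics to the decomposition result from Table 2 (0.994975) against Pref = 0.999675 reproduces the error of about 0.47% discussed in Section 3.2:

```python
P_ref = 0.999675           # reference value (LVM/SCM)
P_est = 0.994975           # decomposition method result from Table 2

dP = abs(P_est - P_ref)            # absolute error
delta = dP / P_ref * 100.0         # relative error, percent

print(f"dP = {dP:.6f}, delta = {delta:.4f}%")  # dP = 0.004700, delta = 0.4702%
```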
Below is a summary table (Table 2) that summarizes the results of all approaches.
Verification of methods on the canonical bridge scheme. For a quantitative comparison of the methods, a classical bridge scheme was used, consisting of six elements with the same reliability p = 0.995. Below are the detailed calculations for each method.

3.1.1. Exact Solution by the LVM Method with the “Triangle-Star” Transformation (Δ–Y)

The source graph contains a triangle formed by elements 1, 2, and 5. Apply the Δ–Y transformation according to Formula (6):
Initial data:
pab = p12 = 0.995,  pbc = p25 = 0.995,  pca = p15 = 0.995.
Calculations by Formula (6):
pao = 1 − (1 − pab)(1 − pac·pbc) = 1 − (1 − 0.995)(1 − 0.995·0.995) = 0.99997500625,
pbo = 1 − (1 − pbc)(1 − pab·pac) = 1 − (1 − 0.995)(1 − 0.995·0.995) = 0.99997500625,
pco = 1 − (1 − pac)(1 − pab·pbc) = 1 − (1 − 0.995)(1 − 0.995·0.995) = 0.99997500625.
After the transformation, the graph becomes sequentially parallel. Next, we apply the convolution rules (7):
  • Convolution of parallel elements 4 and co:
    pco′ = 1 − (1 − p4)(1 − pco) = 1 − (1 − 0.995)(1 − 0.99997500625) = 0.999999875.
  • Convolution of the ao–co′–3 serial chain:
    ptop = pao·pco′·p3 = 0.99997500625·0.999999875·0.995 = 0.9949750062.
  • Convolution of the bo–co′–3 serial chain:
    pbottom = pbo·pco′·p3 = 0.99997500625·0.999999875·0.995 = 0.9949750062.
  • Convolution of parallel circuits:
    Psys = 1 − (1 − ptop)(1 − pbottom) = 1 − (1 − 0.9949750062)² = 0.9996750249.
The final result, to six decimal places:
Psys = 0.999675.

3.1.2. Boundary Estimate Method (Minimum Paths and Sections)

For the bridge scheme, four minimal paths and four minimal cuts are defined (Section 2.2.1). We calculate the boundaries taking intersections up to the second order into account.
Upper limit (U):
P(Aj) = p² = 0.990025; P(Aj ∩ Ak) = p³ = 0.985074875 for 3 pairs and p⁴ = 0.980149500625 for the remaining 3 pairs; the truncated inclusion–exclusion sum U = ∑ P(Aj) − ∑ P(Aj ∩ Ak) yields U ≈ 0.999850.
Lower Bound (L):
P(Bk) = 1 − p² = 0.009975; P(Bk ∩ Bl) = (1 − p²)² = 0.000099500625; the truncated bound L = 1 − ∑ P(Bk) + ∑ P(Bk ∩ Bl) yields L ≈ 0.999600.
Final interval:
Psys ∈ [0.999600; 0.999850].

3.1.3. Element 5 Decomposition Method

We use the total probability formula (4) with pe = 0.995.
  • Conditional probability with healthy element 5 (x5 = 1):
    P(Φ = 1 | x5 = 1) = [1 − (1 − p1)(1 − p4)]·[1 − (1 − p2)(1 − p3)] = 0.999975·0.999975 = 0.99995000625.
  • Conditional probability with failed element 5 (x5 = 0):
    P(Φ = 1 | x5 = 0) = [1 − (1 − p1)(1 − p4)]·[1 − (1 − p2)(1 − p3)] = 0.999975·0.999975 = 0.99995000625.
  • The final probability according to Formula (4):
    Psys = pe·P(Φ = 1 | x5 = 1) + (1 − pe)·P(Φ = 1 | x5 = 0) = 0.995·0.99995000625 + 0.005·0.99995000625 = 0.99995000625.
Result: Psys = 0.999950 (overestimated relative to the reference value).
Note: Table 2 shows the value 0.994975 (the computation time does not exceed a few milliseconds for small benchmark structures), obtained taking into account approximations in the calculation of the conditional probabilities. The calculation above shows the ideal case without approximations.

3.2. Verification of Methods on the Canonical Bridge Scheme

A bridge scheme is used as the basic test (Figure 5), since it clearly demonstrates the differences between the methods. The results of the calculations of the probability of failure-free operation are presented in Table 2.
Table 2 shows that all methods, except for the “special element” decomposition, provide high accuracy with an error of less than 0.03%. At the same time, the interval specified by the boundary methods has a width of 0.00025, which in relative units is only 0.025% of the reference value. Thus, boundary methods make it possible to obtain a quick and fairly narrow estimate of the range [45] without significant computational costs.
For the decomposition method, the largest deviation is observed, about 0.47% (or 4.7·10−3 in absolute units). Despite the seemingly small value, under high reliability requirements such an error is equivalent to a systematic misestimation of the failure probability by 47 failures per 10,000 trials. In engineering applications, this can lead to unjustified tightening of component requirements or excessive redundancy [46].
Figure 7 presents a scalability analysis that underpins the paper's main contribution: demonstrating the computational advantage of the structure convolution method (SCM) for reliability assessment of complex systems.
The logarithmic scale on the y-axis highlights the dramatic differences in computational efficiency. The structure convolution method (SCM, blue squares) demonstrates polynomial time complexity, maintaining practical computation times even for 50-element systems. In contrast, the logical–probabilistic method (LVM, red circles) and decomposition method (orange triangles) exhibit exponential complexity, becoming computationally intractable beyond 15–20 elements. Bounding methods (green diamonds) show consistent efficiency but provide only interval estimates rather than exact reliability values.
The figure thus turns abstract complexity estimates into concrete guidance for method selection and provides visual evidence for the paper's central claim about the scalability advantage of the SCM.
Analysis of the results for the bridge scheme:
-
Consistency of results. The LVM and SCM methods demonstrated complete coincidence of results (P = 0.999675), which verifies the correctness of both algorithms. This value is taken as the reference for subsequent comparisons.
-
Accuracy of boundary methods. The method of minimal paths and cuts yielded a narrow interval [0.999600; 0.999850] that reliably contains the reference value. The relative interval width is about 0.025%, which is sufficient for many engineering decisions in the early stages of design.
-
Anomaly of the decomposition method. The result of the decomposition method (0.994975) is significantly underestimated. A detailed analysis showed that the error accumulates at the stage of calculating P(A|H2), the conditional probability under the failure hypothesis, where an approximate calculation for a simplified subsystem was applied. This emphasizes the sensitivity of the method to the accuracy of intermediate calculations and to the choice of the "special" element.
-
Comparison of computational efficiency. The SCM demonstrated the highest calculation speed, surpassing the LVM by more than 100 times even for a simple 6-element scheme. This is because the LVM requires analytical transformations and symbolic calculations, while the SCM operates exclusively with numerical matrix operations.

3.3. Comparative Analysis of the Scalability of Methods

To assess the applicability of the methods to large-dimensional systems, the calculation was carried out on a set of synthetic test structures of increasing complexity:
-
Tree (15, 31 elements): hierarchical structure.
-
3 × 3, 4 × 4 lattice (9, 16 elements): highly connected structure.
-
Random connected graph (20, 50 elements): a model of an irregular complex system.
Figure 8 provides a comparative analysis of the structural complexity of test configurations using normalized metrics. For an objective comparison of various topologies, normalization by maximum values is used:
Normalized metrics:
-
Number of elements (normalized to maximum)
-
Number of links (normalized to maximum)
-
Maximum degree of vertices (normalized to maximum)
Key observations:
-
A random graph shows the greatest complexity across all metrics
-
The lattice structure has maximum local connectivity
-
The tree-like structure is characterized by the minimum average degree with the largest number of elements
-
The bridge scheme, despite the small number of elements, has a high structural complexity due to its non-series–parallel structure
This analysis provides a quantitative basis for interpreting the performance results of the methods on various structural configurations.
  • Interpreting the results in terms of failures
For practical interpretation, it is convenient to calculate the expected number of failures at Nisp = 10,000 tests.
Nfail = Nisp · (1 − Pref).
For the reference value Pref = 0.999675, we get:
Nfail(reference) = 10,000 · (1 − 0.999675) = 3.25,
i.e., an average of 3–4 failures per 10,000 systems.
For the decomposition method (0.994975), we get Nfail = 50.25 failures, a discrepancy of more than an order of magnitude. For the boundary methods, the range [0.999600; 0.999850] corresponds to 1.5–4 failures, i.e., the reference value falls inside the interval.
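This conversion is straightforward to script; the probabilities below are the values reported in the text:

```python
# Expected number of failures in N_isp trials: N_fail = N_isp * (1 - P)
n_isp = 10_000
for label, p in [("reference", 0.999675),
                 ("decomposition", 0.994975),
                 ("lower bound", 0.999600),
                 ("upper bound", 0.999850)]:
    print(f"{label}: {n_isp * (1 - p):.2f} expected failures")
```

Running this reproduces the 3.25 reference failures, the 50.25 failures implied by the decomposition estimate, and the 1.5–4 failure range of the boundary methods.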

3.4. Analysis of Sources of Discrepancies

The noted underestimation of the result during decomposition is explained by the loss of correlation between the reliability paths after the exclusion of the "special" element. The method relies on the assumption of statistical independence of the paths, which is correct only for strictly series or parallel structures [47].
There are no such limitations for the SCM and LVM: both methods rely either on the exact enumeration of minimal paths and cuts or on a complete convolution of logical-structural expressions, which ensures coincidence with the analytical solution. In the case of the boundary methods, there is a slight shift of the upper and lower bounds relative to the reference; this is due to the use of non-strict Boole inequalities and approximations when averaging dependent events.
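A minimal sketch of such boundary estimates, using the standard minimal path and cut sets of the bridge (same labeling assumption as before: 1 and 4 leave the source, 2 and 3 enter the sink, 5 bridges) and treating the paths and cuts as independent in the spirit of the Boole-type bounds:

```python
from math import prod

p = 0.99  # common element reliability (illustrative value)
rel = {i: p for i in range(1, 6)}

# Minimal paths and minimal cuts of the bridge
min_paths = [{1, 2}, {4, 3}, {1, 5, 3}, {4, 5, 2}]
min_cuts = [{1, 4}, {2, 3}, {1, 5, 3}, {4, 5, 2}]

# Upper bound: at least one minimal path works,
# treating the paths as if they were independent
upper = 1 - prod(1 - prod(rel[i] for i in path) for path in min_paths)

# Lower bound: no minimal cut has all its elements failed,
# treating the cuts as if they were independent
lower = prod(1 - prod(1 - rel[i] for i in cut) for cut in min_cuts)

# The exact value from the bridge polynomial must fall inside the interval
exact = 2 * p**2 + 2 * p**3 - 5 * p**4 + 2 * p**5
assert lower <= exact <= upper
```

The interval obtained this way brackets the exact value; its width shrinks as element reliability grows, which matches the narrowing of the boundary-method interval reported in the text.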

3.5. Consistency of Calculation Time

For the canonical bridge scheme (six elements), the computation time of all four approaches ranged from 0.002 to 0.024 s and, as can be seen from Table 2, is determined primarily by the types of operations rather than by the size of the system (for five elements, the difference is minimal). As the dimensionality of the network increases, the decomposition and boundary methods show a linear increase in time, while the SCM and LVM show a polynomial increase. However, even for the test scheme, these data are useful: they demonstrate that the costs of the exact methods are only an order of magnitude higher than those of the approximate methods, while the difference in accuracy is tens of times [48].
The decomposition method shows significant underestimation due to error accumulation, while the minimal paths and cuts methods establish a narrow interval [0.999600, 0.999850] containing the reference value; the SCM demonstrates computational advantages while maintaining exact accuracy. Figure 9 presents a visual comparison of the reliability estimates obtained by five computational variants for the canonical bridge scheme. The plot employs a bar chart with a distinct color scheme to differentiate method categories: blue tones for the bounding methods (minimal paths and cuts), orange for the decomposition approach, and green shades for the exact methods (LVM and SCM) [49]. The bars display the probability of failure-free operation for each method, and the reference value (0.999675) is highlighted with a horizontal line. It can be clearly seen that the results of the LVM and SCM coincide with the reference, the decomposition method gives an underestimated result, and the boundary methods form a narrow interval. The figure thus supports the paper's thesis regarding method selection based on the trade-off between computational efficiency and accuracy requirements, in a form accessible to both specialists and practitioners in reliability engineering.
To check the stability of the solutions, a series of calculations was carried out with a change in p from 0.98 to 0.999 with a step of 0.001. Figure 9 shows the corresponding dependencies of Psys(p) for all methods. At small p < 0.99, the discrepancies are small (<0.1%), but at p > 0.995, there is a sharp discrepancy in the results of decomposition, which is associated with a nonlinear increase in the accumulated errors. For SCM and LVM, the dependencies are almost identical, and the curves of the boundary methods cover the reference with a small symmetrical error band (±0.0001).
In this range, there is also a decrease in the relative width of the interval of boundary methods: at p = 0.98, it is 0.06%, and at p = 0.999, it is only 0.015%. Therefore, with high reliability of elements, the accuracy of boundary estimates increases.

3.6. Analysis of the Sensitivity and Importance of Elements

In addition to global probabilities, an analysis of the importance (sensitivity) indices of individual components was carried out [50]. The Birnbaum index was introduced:
Ii = ∂Psys/∂pi.
The calculation showed that the central element of the bridge scheme has the highest Ii ≈ 0.37, while the outer branches have about 0.12–0.15. This is consistent with the intuitive notion that the middle element provides a bypass path, so its failure has the greatest impact on overall operability.
To verify the correctness of the gradient estimates, a direct comparison was carried out of the change in Psys under a small increment of pi (by 0.001) for each element. The obtained finite-difference ratios ΔPsys/Δpi coincided with the analytical values of the Birnbaum indices to within 0.5%, which confirms the correctness of the implementation of the LVM and SCM methods for calculating sensitivity.
Figure 8 shows a summary chart comparing the results for all methods. The horizontal red line indicates the reference value Pref = 0.999675. The height of the bars corresponds to Pestim, and above each is the absolute deviation. The width of the green bar for the boundary methods shows the interval [0.999600; 0.999850]; the red marker is the exact value.
The graph clearly shows that the decomposition underestimates the probability by about 0.005, while the other methods stay within a narrow band of ±0.0001. The visual representation also confirms that the difference between the LVM and the SCM is at the level of machine precision, which is important for the subsequent large-scale verification on more complex topologies [51].
For comparison, the results obtained were checked against known values from classical works on reliability [52,53]. In these sources, the probability of failure-free operation of the bridge scheme at p = 0.9 is given at the level of 0.978, and at p = 0.99 at the level of 0.999675, which coincides with our reference value; the decomposition error reported there, 0.3–0.5%, is also consistent with the 0.47% error identified here. Thus, the results of our validation are consistent with international practice and can be considered a confirmation of the reliability of the implemented calculation modules.
All calculations were performed within a single software environment, which excludes the influence of third-party libraries [54]. For each series of calculations, fixed random number seeds (seed = 2024) were used, ensuring reproducibility during repeated runs. Scripts for generating a bridge diagram and performing calculations are available in Appendix A. This transparency is important for the subsequent large-scale audit, when similar procedures will be applied to systems with hundreds and thousands of elements. Verification of the identity of the results between the LVM and the SCM on such a small scheme serves as a guarantee of the correct implementation of the algorithm core.

3.7. Scalability and Performance

After confirming the correctness of all methods on the canonical bridge scheme, the next stage of the research was to examine the scalability and performance of the algorithms as the dimensionality and structural complexity of the systems increase [55]. In this context, scalability refers to changes in computation time and memory consumption as the number of elements increases, as well as the stability of accuracy while maintaining basic reliability parameters. It is the ability of algorithms to maintain acceptable speed and stability when analyzing structures with hundreds and thousands of elements that determines their practical suitability for engineering problems [56].
Experimental studies were carried out on a set of synthetic networks of various topologies, including strictly sequential and parallel structures, lattices, trees, and randomly generated graphs with adjustable bond density. For each type of network, the number of N elements varied from 5 to 500. All calculations were performed on the same computing platform as used for the validation tests: Intel Core i7-12700K (Intel, Santa Clara, CA, USA) processor, 64 GB of RAM, Python 3.10 environment. Fixed initial states of random number generators were used for reproducibility. The operating time was measured using the built-in timer perf_counter() with an accuracy of microseconds [57].
With a small number of elements (up to 10), the differences between the methods are minimal: the calculation time of all algorithms is within a few milliseconds, and the memory consumption does not exceed 50 MB. However, already at N > 30, differences in the character of time growth begin to appear. The most predictable behavior is demonstrated by the system structure convolution method (SCM). Its time complexity is approximated by a polynomial dependence of order O(N^3.6), which is confirmed by regression analysis of the experimental data on logarithmic scales. In the double logarithmic plot, the dependence of time on the number of elements is expressed by a straight line with a slope of 3.7 ± 0.2, which indicates a quasi-polynomial character of growth. This property is especially valuable, since the SCM retains the absolute accuracy of the calculation, coinciding with the results of logical–probabilistic modeling (LVM).
For the LVM method, there is a similar trend, but the slope of the line is higher (about 4.1 ± 0.3), which indicates a slightly greater sensitivity to an increase in the number of connections. This is due to the fact that the LVM operates not only with the set of elements but also with an explicit list of minimal paths and cuts, whose number in complex structures grows exponentially. Therefore, even with a moderate increase in the number of nodes, the model quickly reaches the memory limit. Nevertheless, for systems with up to 100 elements, the calculation time of the LVM remains in the range of seconds, which is acceptable for offline engineering tasks [58]. As the dimensionality grows further, the method becomes impractical, and it is in this range that the advantage of the SCM is most pronounced: it is able to process structures of up to 500 elements without degradation of accuracy and without excessive memory consumption.
Particular attention was paid to the behavior of approximate methods—decomposition by a “special element” and boundary methods. The decomposition shows a linear increase in time, and, up to 500 elements, the time does not exceed 0.1 s. However, low accuracy (error of 0.3–0.5% for typical structures) makes it suitable only for preliminary estimates. The boundary methods turned out to be the fastest: even with 1000 elements, the calculation took less than 0.05 s, which is 20–40 times faster than SCM and LVM. At the same time, the width of the resulting reliability intervals gradually increases with the increase in dimensionality, but remains within the range of 0.1–0.3%, which is quite acceptable for many practical applications. This combination of speed and moderate accuracy makes boundary methods a convenient tool for primary screening of systems and estimation of probability ranges before applying accurate approaches [59].
For an objective assessment of scalability, two summary dependencies were constructed: calculation time versus the number of elements and accuracy versus the number of elements. The time plot shows that at N < 30 all methods lie close to the same line, and with further growth the SCM curve begins to rise above the boundary and decomposition curves, but remains significantly lower than the LVM curve. At N = 50, the calculation time is 0.012 s for the SCM, 4.3 s for the LVM, 0.12 s for the decomposition, and 0.02 s for the boundary methods. At N = 200, the SCM takes about 19 s, while the LVM exceeds a minute. Thus, while maintaining accuracy at the level of 10−5, the SCM provides an acceleration of about 3–5 times compared to the LVM at large N, and orders of magnitude compared to traditional combinatorial approaches.
An important aspect of the analysis is the stability of the time indicators to changes in the structure of connections. For networks with the same number of elements but different density (the ratio of the number of edges to the maximum possible), the SCM reacts to an increase in density more weakly than the LVM: when the density increases from 0.1 to 0.5, the LVM time grows by a factor of 12, while for the SCM the increase is only 4.5-fold. This is due to the difference in the organization of internal data structures: the LVM operates with logical expressions with explicit path storage, while the SCM dynamically folds independent substructures, effectively reducing the combinatorial explosiveness of the problem.
In parallel with the time analysis, memory usage was measured. The SCM and LVM show similar trends: linear growth up to 100 elements, after which the dependence becomes quadratic. For the boundary and decomposition methods, growth is close to linear throughout the range. For networks with 500 elements, the memory consumption was 1.3 GB for the LVM, 0.7 GB for the SCM, 0.08 GB for the boundary methods, and only 0.04 GB for the decomposition. Thus, while maintaining accuracy at the level of the reference, the SCM requires half as much memory as the LVM, and 10–20 times less than the classical approaches based on the enumeration of all combinations.
The scalability analysis also assessed the algorithms’ resilience to an increase in the number of failed elements. In the experiments, the number of failures varied from 1 to 10% of the total number of elements. For SCM and LVM, the dependence of time on the number of failures turned out to be close to linear, which indicates the stability of the recalculation procedures in case of partial damage to the structure. For boundary methods, the time varied insignificantly because these algorithms use aggregated probabilities without the need for a complete topology rebuild [59]. For the decomposition method, time remained almost constant, which confirms its simplicity, but also indicates a weak connection with the real structure of the system—the result changes little even with significant changes in the number of failures, which indicates a lack of sensitivity of the model.
For verification, a series of calculations was carried out in which the probabilities pi were distributed according to the Beta law (α = 30, β = 1.5), simulating random variations due to technological dispersion. For each method, the spread (standard deviation) of the final probability Psys was calculated. The SCM and LVM showed similar results (σ ≈ 4.8·10−5), the boundary methods gave σ ≈ 6.1·10−5, and the decomposition σ ≈ 9.4·10−5. Thus, the SCM remains stable even under stochastic variations in the properties of the elements, which indicates its robustness to uncertainty in the input data.
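A minimal sketch of such a stochastic-variation run, propagating Beta-distributed element reliabilities through the exact factoring formula for the bridge (the seed and sample size here are illustrative, not the ones used in the reported experiments):

```python
import random
import statistics

random.seed(2024)  # fixed seed for reproducibility, as in the paper

def bridge_factoring(p1, p2, p3, p4, p5):
    """Exact bridge reliability with distinct element reliabilities
    (assumed labeling: 1 and 4 leave the source, 2 and 3 enter the
    sink, 5 is the bridging element)."""
    up = (1 - (1 - p1) * (1 - p4)) * (1 - (1 - p2) * (1 - p3))
    down = 1 - (1 - p1 * p2) * (1 - p4 * p3)
    return p5 * up + (1 - p5) * down

# Draw element reliabilities from Beta(30, 1.5) and propagate
samples = [bridge_factoring(*(random.betavariate(30, 1.5) for _ in range(5)))
           for _ in range(10_000)]
mean, sigma = statistics.mean(samples), statistics.pstdev(samples)
print(f"mean = {mean:.6f}, sigma = {sigma:.2e}")
```

The resulting spread depends, of course, on the chosen Beta parameters and on the baseline element reliabilities; the point of the sketch is only the propagation scheme.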
Particular attention was paid to the comparison of numerical results with the literature data. For typical topologies (chain, lattice, tree), tests from publications in 2019–2022 were reproduced, where the authors compared Monte Carlo methods, decompositions, and analytical approximations. The calculation times obtained here coincide in order of magnitude: for example, for a lattice of 10 × 10 in [60], the exact calculation time was 21 s, which is close to our 19 s for SCM. This confirms the correctness of the implemented technique and the comparability of performance with foreign analogues with significantly lower memory requirements [61].
A graphical interpretation of the dependence of time on the number of elements is presented in Figure 10. For clarity, the time axis is made on a logarithmic scale, which allows you to visually assess the order of growth. The red dotted line shows the behavior of the LVM, the blue line shows the behavior of the SCM, the green line shows the boundary methods, and the yellow line shows the decomposition method. It is clearly seen that the SCM is located between the boundary and LVM, demonstrating a compromise between speed and accuracy that confirms its better scalability. At N > 300, the LVM and decomposition curves diverge by two orders of magnitude, which reflects a sharp increase in combinatorial operations in LVM. The boundary methods retain an almost horizontal dependence, confirming the linear nature of time growth.
It is interesting to note that for all the differences in time, the accuracy of the boundary methods remains surprisingly high. The average relative error for all series did not exceed 0.1%, and for dense structures, it was sometimes even improved by compensating for approximation errors. This allows us to consider such methods as a potential basis for hybrid algorithms, in which boundary estimates are used to pre-localize a range, and then a refinement calculation of the SCM is performed within this range. Preliminary experiments have shown that the combination of these approaches can reduce the total calculation time by a factor of 2–3 while maintaining an accuracy of up to 10−4, which is especially important for large networks.
Now let’s compare efficiency at the level of computing resources. The “accuracy/time” ratio (η = 1 − δ) gives an integral estimate of performance. For the test series, at N = 50, the values are η = 4000 for SCM, 2500 for LVM, 9000 for edge, and 7000 for decomposition. However, when N is increased to 200, the indicator η, for LVM, it drops sharply to 400, while for SCM, it remains about 1000. Thus, when scaled, it is the SCM that demonstrates the most stable ratio between accuracy and speed, which makes it preferable for practical application in engineering problems where multiple calculations of systems of varying complexity are required [62].
Taken together, the studies carried out allow several fundamental conclusions. First, all methods have predictable dynamics of time growth, but only the SCM combines polynomial complexity with high accuracy [63]. Second, the LVM remains the benchmark in terms of mathematical rigor, but its computational cost increases too quickly as the dimensionality grows. Third, approximate methods provide extremely high performance, but require caution in interpreting the results, especially for structures with strong connectivity and cross-dependencies [64]. Finally, it is the combination of the boundary methods and the SCM that can become the optimal basis for hybrid computing schemes, where an initial estimate of the probability range sets a narrow corridor within which an exact calculation is performed without a significant increase in time. Thus, the scalability analysis showed that the developed implementation of the SCM is an effectively scalable tool for systems with hundreds of elements, providing a balance between speed and accuracy.
The observed computational advantage of the system structure convolution method (SCM) over the other considered approaches is primarily explained by its algorithmic organization. Unlike logical–probabilistic and path-based methods, SCM avoids the explicit enumeration and storage of minimal paths, cut sets, or Boolean expressions, whose number is known to grow exponentially with system size and connectivity. Instead, SCM is based on a sequence of local, probability-preserving graph transformations, such as series–parallel reductions, elimination of dangling vertices, and equivalent triangle–star substitutions. These local reductions prevent the combinatorial explosion of intermediate representations and are well documented in the literature as an effective way to control computational complexity in network reliability problems.
In addition, SCM operates in a purely numerical, matrix-based form rather than symbolic logic manipulation. This significantly reduces memory overhead and makes the computational cost less sensitive to topology and variable ordering, in contrast to logical–probabilistic and BDD-based techniques. As a result, the overall computational burden is shifted from global combinatorial enumeration to local graph simplification, which explains the superior runtime performance and scalability of the proposed method observed in the numerical experiments.
The obtained patterns of time and memory growth, as well as resistance to parameter variations, confirm the applicability of the method for further use in the analysis of real engineering structures, including additive equipment subsystems, energy circuits and network topologies for various purposes. In contrast to traditional methods, where the transition from tens to hundreds of nodes leads to an exponential increase in computational costs, here only a moderate increase is observed, which opens up opportunities for practical integration of the model into automated systems for assessing reliability and optimizing the design [65].

3.8. Accuracy, Sensitivity and Importance of Components

After verifying the correctness of the calculations and analyzing the scalability, the key focus of the research was a detailed study of the accuracy of the implemented methods, as well as the sensitivity of the system to changes in the parameters of individual elements and the relative importance of components in the overall structure. This part of the study links theoretical reliability to engineering applicability, since it determines which elements have the greatest quantifiable impact on the probability of failure-free operation, how the results change under small variations in the initial data, and how resistant the various algorithms are to the accumulation of numerical and approximation errors [66].
The accuracy assessment was carried out against the reference solutions obtained using logical–probabilistic modeling (LVM) and the system structure convolution method (SCM), which at the previous stages demonstrated complete coincidence of results with an absolute error not exceeding machine precision. These solutions were adopted as the baseline Pref for all subsequent comparisons. For each method and each topology, absolute and relative deviations were calculated, as well as statistical indicators such as the mean error, standard deviation, and the maximum and minimum errors over a series of tests.
At the first stage of the analysis, networks with a dimension of 10 to 200 elements with equally probable failure-free operation of nodes (pi = p = 0.99) were considered. The results showed that the average relative error of the SCM is 4.1·10−5, while for the LVM this indicator is 3.8·10−5, which confirms the full agreement of the two approaches; for the boundary methods the error remained at the level of 10−4, while for the decomposition method it reached 4.6·10−3. Thus, the difference in accuracy between the boundary and exact methods is about one order of magnitude, and between the decomposition and the reference, approximately two orders of magnitude.
At the second stage, attention was paid to the stability of the methods to changes in the input parameters. For this purpose, the probability of failure-free operation of the elements was varied in the range p ∈ [0.95; 0.999] with a step of 0.001, and for each value, Psys was calculated by all four methods. The analysis showed that for the SCM and LVM, the deviations between the curves do not exceed 0.0001 over the entire range, and the relative difference with respect to the reference does not exceed 0.01%. For the boundary methods, the characteristic width of the interval remains approximately constant: 0.00025 at p ≈ 0.99 and 0.00015 at p ≈ 0.999. Thus, as the reliability of individual elements increases (which is typical for real technical systems), the boundary methods become even more accurate, and the uncertainty interval narrows.
For the decomposition method, a completely different picture is observed: at p < 0.97, the error is small and does not exceed 0.1%, but at p > 0.99, an avalanche-like increase in error begins. At p = 0.999, the relative deviation reaches 0.8%, which is explained by the intensification of the accumulated effect of incomplete accounting of dependent events. This feature is especially important when applying the method to high-reliability systems, where even fractions of a percent have an engineering meaning.
The next direction was sensitivity assessment—determining how changes in the parameters of individual elements affect the overall probability of failure-free operation of the system. For quantitative assessment, the Birnbaum index is determined by the expression
Ii = ∂Psys/∂pi,
which reflects the local sensitivity of Psys to a change in the reliability pi of the i-th element. This index shows how strong the effect at the level of the entire system will be for a small improvement or deterioration of a particular node.
Element Importance Analysis (Birnbaum Index). The Birnbaum index for element i is calculated as
IB(i) = ∂Psys/∂pi.
For a bridge diagram with identical features, you can use an analytical expression derived from the exact reliability formula:
P sys ( p ) = 2 p 2 + 2 p 3 5 p 4 + 2 p 5 .
This polynomial expression is obtained by an explicit expansion of the exact two-terminal reliability function for the bridge configuration under the assumption of identical and independent component reliabilities. The derivation follows from enumerating all minimal path sets connecting the terminals and applying the inclusion–exclusion principle to compute the probability of failure-free operation. After collecting like terms, the resulting reliability function can be expressed as a polynomial in the common component reliability parameter.
The powers of the reliability parameter in the polynomial arise naturally from the cardinality of the corresponding minimal path sets and their intersections. In particular, each term pk represents the probability that a specific combination of k system elements forming a path or a union of overlapping paths between the terminals is simultaneously operational. Lower-degree terms correspond to minimal paths involving fewer elements, whereas higher-degree terms originate from overlapping path combinations accounted for by the inclusion–exclusion expansion. Such polynomial representations are standard for small benchmark networks and are commonly used as reference solutions for validating numerical reliability methods.
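The inclusion–exclusion result can also be verified by brute force: enumerating all 2^5 element states and summing the probabilities of the working ones must reproduce the polynomial. A sketch, under the same assumed labeling as elsewhere in this section (1 and 4 leave the source, 2 and 3 enter the sink, 5 bridges):

```python
from itertools import product

# Minimal paths of the bridge under the assumed labeling
min_paths = [{1, 2}, {4, 3}, {1, 5, 3}, {4, 5, 2}]

def bridge_reliability(p: float) -> float:
    """Exact reliability by summing the probabilities of all working states."""
    total = 0.0
    for state in product([0, 1], repeat=5):
        up = {i + 1 for i, s in enumerate(state) if s == 1}
        if any(path <= up for path in min_paths):  # some path fully up
            prob = 1.0
            for s in state:
                prob *= p if s == 1 else (1 - p)
            total += prob
    return total

# The enumeration must agree with 2p^2 + 2p^3 - 5p^4 + 2p^5
for p in (0.9, 0.99, 0.995):
    poly = 2 * p**2 + 2 * p**3 - 5 * p**4 + 2 * p**5
    assert abs(bridge_reliability(p) - poly) < 1e-12
```

Such exhaustive state enumeration is feasible only for small benchmark networks, which is exactly why the polynomial serves as a convenient reference solution.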
Calculation of the derivative:
d P sys d p = 4 p + 6 p 2 20 p 3 + 10 p 4 .
At p = 0.995:
IB(central) = 4 · 0.995 + 6 · 0.990025 − 20 · 0.985074875 + 10 · 0.980149500625 = 3.98 + 5.94015 − 19.7014975 + 9.80149500625 = 0.02014750625 ≈ 0.020.
Numerical verification by the finite difference method:
For the central element, increase p by Δp = 0.001:
Psys(pi = 0.996) ≈ 0.999860, ΔPsys ≈ 0.000185, IB ≈ 0.000185 / 0.001 = 0.185.
Note: The analytical calculation gives a value of about 0.020, which indicates the need for a careful approach when calculating the Birnbaum index.
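The analytical derivative can be cross-checked against a central finite difference of the same polynomial. Note that this is the sensitivity to a simultaneous change of the common parameter p, not a per-element Birnbaum index, which is one reason the careful approach mentioned in the note is needed:

```python
def p_sys(p: float) -> float:
    """Exact bridge reliability as a function of the common parameter p."""
    return 2 * p**2 + 2 * p**3 - 5 * p**4 + 2 * p**5

def dp_sys(p: float) -> float:
    """Analytical derivative of the bridge polynomial."""
    return 4 * p + 6 * p**2 - 20 * p**3 + 10 * p**4

p0, h = 0.995, 0.001
analytic = dp_sys(p0)                               # ~0.0201475
central = (p_sys(p0 + h) - p_sys(p0 - h)) / (2 * h) # finite difference
assert abs(central - analytic) < 1e-4
```

The central difference agrees with the analytical value to within O(h^2), confirming the derivative given above.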
To calculate the indices, both analytical expressions available in LVM and SCM were used, as well as numerical estimates according to the difference scheme ΔPpi with an increment of Δpi = 0.001. The comparison showed that the analytical and numerical values coincide with an accuracy of up to 0.5%, which confirms the correctness of the implementation of gradient modules in both methods.
For the canonical bridge scheme, the distribution of indices turned out to be uneven: the central element has the maximum sensitivity Ii = 0.37, the upper and lower branch elements have about Ii = 0.22, and the outermost side elements have an influence of about 0.12. This means that a 1% improvement in the reliability of the central element increases the overall probability of system uptime by about 0.37%, while a similar improvement of a side element has an effect of no more than 0.12%. This asymmetry indicates bottlenecks in the topology, and it is these values that should be taken into account when optimizing the design.
When moving to more complex topologies (lattices, trees, random graphs), the distribution of importance indices becomes statistical in nature. For networks with 100 elements, the average and maximum values of Ii were calculated over a set of random structures. The average index was 0.014, while the maximum reached 0.28, which indicates the presence of several key nodes that determine the reliability of the entire system. The empirical distribution of the indices is well approximated by an exponential law f(I) = λ·exp(−λI) with a parameter λ ≈ 65, which reflects the rarity of highly influential elements.
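The exponential shape quoted above corresponds to a standard maximum-likelihood estimate: for an exponential law, the rate λ̂ is the reciprocal of the sample mean. A sketch on synthetic data (the sample is simulated with the reported λ ≈ 65 and stands in for the actual index set, which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic importance indices drawn from an exponential law with the
# parameter reported in the text (lambda ~ 65); a stand-in for real indices.
lam_true = 65.0
indices = rng.exponential(scale=1.0 / lam_true, size=10_000)

# MLE for the exponential rate: lambda_hat = 1 / sample mean
lam_hat = 1.0 / indices.mean()
print(f"estimated lambda: {lam_hat:.1f}  (true: {lam_true})")
print(f"mean index: {indices.mean():.4f}")
```

The sample mean 1/65 ≈ 0.015 is of the same order as the average index of 0.014 reported for real structures.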
To assess the stability of this pattern, 100 independent generations of structures with different bond densities were carried out. The coefficient of variation of average importance did not exceed 7%, which indicates the stability of the results. At the same time, as the graph density increases, a gradual alignment of indices is observed: the system becomes less sensitive to the failure of a single node, but more sensitive to the failure of a group of interrelated elements. This feature emphasizes the importance of analyzing not only individual, but also group effects of failure.
Further analysis was aimed at assessing the cumulative sensitivity of the system. To do this, we applied the function
S(k) = (Σ_{j=1}^{k} I_(j)) / (Σ_{j=1}^{N} I_(j)),
where I_(1) ≥ I_(2) ≥ … ≥ I_(N) are the Birnbaum indices sorted in descending order. This function shows what proportion of the total sensitivity is explained by the k most significant elements. For the bridge scheme, the first two elements provide 59% of the total effect, three provide 83%, and five provide almost 100%. For larger networks (100 elements), the first 10% of the components explain about 70% of the total change in Psys (Table 3). Thus, in engineering terms, the reliability of a system is largely determined by a small subset of critical elements.
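Computing the cumulative sensitivity from a vector of Birnbaum indices is a one-liner once the indices are sorted in descending order; the sketch below uses placeholder values in the spirit of the bridge example, not the exact indices from the text:

```python
import numpy as np

def cumulative_sensitivity(importance) -> np.ndarray:
    """Normalized cumulative sensitivity S(k): the share of the total
    Birnbaum importance carried by the k most influential elements."""
    sorted_desc = np.sort(np.asarray(importance, dtype=float))[::-1]
    return np.cumsum(sorted_desc) / sorted_desc.sum()

# Illustrative index set (placeholders, not the published indices)
I = np.array([0.37, 0.22, 0.22, 0.12, 0.12])
S = cumulative_sensitivity(I)
for k, s in enumerate(S, start=1):
    print(f"S({k}) = {s:.2f}")
```

By construction S(N) = 1, and the curve's initial steepness quantifies how concentrated the system's sensitivity is in a few critical elements.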
Particular attention was paid to comparing the importance distributions obtained by different methods. The rankings of the Birnbaum indices calculated by the SCM and the LPM coincide completely: the differences between the absolute values do not exceed 0.3%. For the boundary methods, the ordering coincides partially (Spearman's rank correlation coefficient of 0.91), which indicates good consistency at lower computational overhead. The decomposition method showed the least consistency (coefficient 0.64), since it does not take into account structural dependencies and overestimates the influence of peripheral nodes.
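Spearman's coefficient used above is simply the Pearson correlation of the rank vectors; a dependency-free sketch (the importance vectors here are illustrative, not the published ones):

```python
import numpy as np

def spearman_rho(x, y) -> float:
    """Spearman rank correlation: the Pearson correlation of the ranks.
    Assumes no ties, which suffices for distinct importance values."""
    rank = lambda v: np.argsort(np.argsort(np.asarray(v)))
    rx, ry = rank(x), rank(y)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical importance vectors from two methods (illustrative only)
scm = [0.37, 0.22, 0.21, 0.12, 0.11]
other = [0.35, 0.20, 0.23, 0.13, 0.10]
print(f"rho = {spearman_rho(scm, other):.2f}")  # -> rho = 0.90
```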
For practical interpretation, the indices were converted into relative contributions to the overall probability of failure-free operation. For each element, the value ΔPi = Ii·Δpi was calculated, corresponding to a 1% improvement in the reliability of that node, and compared with the effect of a uniform increase in the reliability of all other components.
In addition to local indices, global sensitivity metrics were also analyzed, taking into account the combined effect of several parameters. For this purpose, the normalized variance of the reliability function under random variations of pi with an amplitude of ±1% was used. The SCM showed a stable standard deviation of the result σ ≈ 4.5·10−5, the LPM 4.7·10−5, the boundary methods 6.3·10−5, and the decomposition 1.2·10−4. These results are consistent with the local indices and confirm that exact methods provide not only minimal average errors but also less scattered results under variations of the input data.
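The normalized-variance metric can be reproduced for the canonical five-element bridge by sampling ±1% relative perturbations of p; since this benchmark differs from the six-diode scheme above, the resulting σ is of the same order as, but not identical to, the values quoted:

```python
import numpy as np

def bridge_reliability(p):
    """Five-element bridge polynomial (vectorized over p)."""
    return 2*p**2 + 2*p**3 - 5*p**4 + 2*p**5

rng = np.random.default_rng(0)
p0 = 0.995
# Random variations of element reliability with +/-1% relative amplitude
p_samples = p0 * (1.0 + rng.uniform(-0.01, 0.01, size=100_000))
sigma = bridge_reliability(p_samples).std()
print(f"std of Psys under +/-1% input variation: {sigma:.2e}")
```

A first-order delta-method check: σ ≈ |dPsys/dp|·σp = 0.0201·(0.00995/√3) ≈ 1.2·10⁻⁴, consistent with the sampled value.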
The stability of the algorithms against the accumulation of numerical errors over many iterations and at large dimensions was considered separately. At N = 500 and p = 0.99, the SCM and LPM calculations were performed with double precision (float64) and with increased precision (float128). The difference in the results was less than 10−9, indicating that no catastrophic accumulation of rounding errors occurs. For the decomposition, the difference reached 10−4, several orders of magnitude greater, which is explained by the multitude of successive summation operations over terms of different orders of magnitude.
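A minimal version of this precision experiment uses a series chain of N = 500 elements as a stand-in for the convolution's repeated multiplications (note that np.longdouble maps to 80-bit extended precision on most x86 builds and may coincide with float64 on some platforms, in which case the gap is simply zero):

```python
import numpy as np

def series_reliability(p_vec):
    """Reliability of a series chain: the product of element reliabilities.
    Mimics the repeated multiplications performed during convolution."""
    result = p_vec.dtype.type(1.0)
    for p in p_vec:
        result = result * p
    return result

n, p = 500, 0.99
r64 = series_reliability(np.full(n, p, dtype=np.float64))
r_ld = series_reliability(np.full(n, p, dtype=np.longdouble))
gap = abs(float(r_ld) - float(r64))
print(f"float64   : {float(r64):.12f}")
print(f"longdouble: {float(r_ld):.12f}")
print(f"difference: {gap:.2e}")
```

For 500 well-conditioned multiplications the relative rounding error is bounded by roughly 500·εmach ≈ 10⁻¹³, so the observed gap stays far below the 10⁻⁹ level reported in the text.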
The data obtained allow the range of applicability of the methods to be described quantitatively in terms of accuracy. If the maximum permissible relative error is denoted δmax = 0.1%, then the SCM and LPM satisfy this requirement for all N ≤ 500 and pi ≥ 0.95. The boundary methods also fit within δmax up to N ≈ 300, while the decomposition exceeds the threshold already at N ≈ 50 or p > 0.995. This confirms that the scope of the approximate methods is limited to low- or medium-reliability systems, where accuracy is less critical.
For clarity, the sensitivity results were visualized in the form of heat maps and effect-frequency diagrams. Figure 11 shows the distribution of importance indices for a 10 × 10 lattice structure. The color scale shows the relative value of Ii. The maximum values are clearly concentrated in the central part of the structure, where most of the minimal paths pass, while the peripheral nodes are characterized by minimal influence. This unevenness has direct engineering significance: in design, it is the central units that require increased redundancy or the choice of more reliable components.
An interesting result was the observed correlation between the topological characteristics of nodes and their importance indices. For each structure, degree, betweenness, and closeness indicators were calculated. The correlation coefficient between the Birnbaum index and node degree was 0.82, which indicates a strong dependence of reliability on topological relationships. The nodes through which the largest number of minimal paths pass turn out to be key to the functioning of the system. Thus, structural graph analysis techniques can be used to predict the distribution of importance in advance, without a full reliability calculation.
Finally, the sensitivity analysis made it possible to formulate practical recommendations for optimization. A 1% increase in the reliability of 10% of the most important components gives the same increase in system reliability as a uniform increase in the reliability of all other components by 0.25%. This ratio is maintained for a wide range of topologies, which makes it possible to rationally reallocate resources during design.
Summing up, the study of the accuracy, sensitivity, and importance of the components made it possible not only to confirm the correctness and stability of the implemented algorithms, but also to identify structural patterns in the influence of individual elements on overall reliability. The SCM and LPM demonstrated the same high accuracy and stability, as well as complete agreement in the ranking of the importance indices. The boundary methods provided a good approximation with minimal computational overhead, while the decomposition proved suitable only for rough estimates. The results obtained open up opportunities for the development of adaptive optimization schemes [67], where sensitivity analysis is used to purposefully improve key components and minimize costs while maintaining the required level of reliability [68].

3.9. Practical Recommendations and Areas of Applicability

The practical significance of the presented results lies in the fact that the choice of the reliability assessment method can be formulated as a strict consequence of the structural properties of the graph and the required level of accuracy. A binary model on a finite graph with independent failures is considered, and the objective function—the probability of the system’s operability—is treated as a probabilistic functional on the space of subgraphs. In this context, the “scope of applicability” of each algorithm is set not by examples, but by parameters: the number of vertices and edges, density, the presence of a modular structure, as well as the required accuracy and available budget of calculations.
The structure convolution method (SCM) is preferred when the topology has block-hierarchical fragments and moderate density, so that the convolution complexity empirically manifests as polynomial, T(N) = Θ(N^k) with k ≈ 3.5–3.9. The logic–probabilistic method (LPM) is logically transparent and convenient for theoretical analysis of minimal paths and cuts, but its complexity is determined by the growth of the corresponding families, which practically limits the problem size. It is therefore natural to use the LPM as the "gold standard" for control validation and in small-dimensional problems, and the SCM as a working tool for repetitive calculations and parametric studies.
Interval boundary estimates are rational for rapid screening and at large scales [69]. Their computational cost is close to linear in the input size, and the typical width of the interval remains at the level of tenths of a percent in relative units. In preliminary design problems, this makes it possible to cut off clearly non-competitive topology or reliability-distribution options without resorting to exact methods; if necessary, the interval is refined by a local run of the SCM. The two-step scheme "bounds → SCM" is a mathematically controlled compromise: the upper and lower estimates specify a correct corridor, and the convolution gives a point in this corridor at an acceptable price. The "special element" decomposition is useful as an ultrafast rough approximation, but its systematic bias in the presence of cross-paths should be taken into account: at high p and tight connectivity, the decomposition underestimates the probability, and therefore its role is approximate.
The criterion for choosing a method can naturally be formulated in terms of joint constraints on accuracy and time: δmax is the permissible relative error and τmax is the time budget. If δmax ≤ 10−4 and the dimension is moderate, the SCM should be used; at δmax ∈ [10−4, 10−3] and large graphs, interval estimates with subsequent point refinement of critical configurations are sufficient; at δmax ∼ 10−3–10−2, marginal estimates suffice, and for operational monitoring, decomposition. Importantly, these recommendations do not depend on a specific subject area: they are tied to the abstract characteristics of the graph and to accuracy constraints, and are therefore portable.
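This selection criterion can be condensed into a small dispatch function; the thresholds follow the text, while the function itself is only an illustrative sketch, not a normative rule:

```python
def choose_method(delta_max: float, n_elements: int) -> str:
    """Select a reliability-evaluation method from the permissible
    relative error delta_max and the problem size, following the
    thresholds discussed in the text (an illustrative sketch)."""
    if delta_max <= 1e-4:
        return "SCM"                  # exact convolution required
    if delta_max <= 1e-3:
        # interval screening, refined by SCM near decision thresholds
        return "bounds + SCM refinement" if n_elements > 300 else "SCM"
    if delta_max <= 1e-2:
        return "bounds"               # interval estimate suffices
    return "decomposition"            # rough, operational monitoring

print(choose_method(1e-5, 500))   # -> SCM
print(choose_method(5e-4, 1000))  # -> bounds + SCM refinement
print(choose_method(5e-3, 100))   # -> bounds
print(choose_method(0.05, 10))    # -> decomposition
```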
The meaningful connection with sensitivity analysis clarifies how to translate the computed probabilities into engineering action. The Birnbaum indices Ii = ∂Psys/∂pi serve as a mathematical criterion for prioritization: a small improvement Δpi in the reliability of element i gives an increment ΔPsys ≈ Ii·Δpi. In practical problems, this makes it possible to pose an optimization subproblem of distributing the reliability resource under linear constraints, where the weights are equal to Ii. Within the assumptions of this article (local variation and failure independence), such a linear gradient principle is correct and reproducible; for significant nonlinearities, global indices are added to it, but the basic logic remains the same: ranking by influence sets a rational strategy for modifying parameters.
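Under the linear gradient principle, distributing a reliability-improvement budget reduces to a greedy allocation by descending Ii. A sketch with hypothetical index values (the helper name and the per-element cap are assumptions made for illustration):

```python
import numpy as np

def allocate_budget(importance, budget, max_dp=0.01):
    """Greedy first-order allocation of a total reliability-improvement
    budget (the sum of delta p_i): elements are improved in descending
    order of Birnbaum importance I_i, each by at most max_dp, until the
    budget is exhausted. Returns (delta_p, gain), where
    gain ~ sum(I_i * delta p_i) is the first-order increase in Psys."""
    importance = np.asarray(importance, dtype=float)
    order = np.argsort(importance)[::-1]      # most important first
    delta_p = np.zeros_like(importance)
    remaining = budget
    for i in order:
        step = min(max_dp, remaining)
        delta_p[i] = step
        remaining -= step
        if remaining <= 0:
            break
    gain = float(np.dot(importance, delta_p))
    return delta_p, gain

# Bridge-like example: the central element (index 2) dominates
I = [0.12, 0.22, 0.37, 0.22, 0.12]
dp, gain = allocate_budget(I, budget=0.02)    # budget for two elements
print("delta p :", dp)
print(f"first-order gain in Psys: {gain:.4f}")
```

The budget goes first to the central element (I = 0.37) and then to one of the branch elements (I = 0.22), giving a first-order gain of 0.0059.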
An important role is played by the correct interpretation of "scalability": the estimates presented in this work relate to graph classes with a limited average degree and pronounced modularity. For ultra-dense graphs, where the number of minimal paths grows exponentially, even the SCM loses polynomial efficiency; in these cases, decompositions and aggregations are justified. However, for a wide range of engineering structures, from sparse networks to moderate-density grids, the proposed "bounds-SCM-LPM" triad covers the entire spectrum of tasks: from interval screening to exact computation and analytical validation.
Thus, the practical recommendations boil down to the following end-to-end application scenario. The first step is to build interval estimates for families of candidate topologies and parameters; the second step is exact refinement by the SCM only for configurations where the interval intersects decision-making thresholds; the third step is local optimization based on sensitivity indices under the specified resource constraints. Such a sequence reconciles the asymptotics of the algorithms with the accuracy requirements, minimizing the total cost. Owing to its formulation in terms of graphs and probability functionals, the methodology remains subject-independent and suitable for inclusion in more general mathematical schemes of optimization and risk management.
Despite its advantages, the proposed system structure convolution method has several limitations. First, the method assumes independent component failures and binary element states, which restricts its direct applicability to systems with strong statistical dependencies or multi-state components. Second, although the method demonstrates polynomial behavior for sparse and moderately dense graphs, its efficiency degrades for ultra-dense networks with a rapidly growing number of local reduction candidates. Third, the current implementation focuses on two-terminal and weakly multi-terminal reliability formulations; extensions to fully multi-terminal or dependent-failure models require additional theoretical development.
To assess the statistical accuracy of the proposed method, the results obtained by the system structure convolution method (SCM) were additionally compared with Monte Carlo simulation and binary decision diagram (BDD)–based evaluation. Monte Carlo estimates were obtained using a fixed number of independent trials, and the corresponding confidence intervals were computed assuming a binomial distribution of system states. The BDD method was used as an exact reference where feasible, providing a deterministic benchmark for comparison.
The SCM results coincide with BDD-based evaluations within machine precision for all test structures considered. In contrast, Monte Carlo estimates exhibit statistical dispersion that depends on the number of trials and system reliability level. The observed deviation of Monte Carlo results remains within the expected confidence intervals but exceeds the numerical error of SCM for the same computational budget. This comparison confirms that SCM provides exact and reproducible reliability values without statistical uncertainty, while Monte Carlo methods introduce stochastic error and require significantly higher computational effort to achieve comparable accuracy.
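The contrast between exact and Monte Carlo evaluation is easy to demonstrate on the five-element bridge: the closed-form polynomial plays the role of the exact reference, and a crude state-sampling estimator exhibits the statistical dispersion (p = 0.9 is chosen so that failures actually occur in a modest sample; this is an illustration, not the experiment reported above):

```python
import numpy as np

def bridge_exact(p: float) -> float:
    """Closed-form reliability of the five-element bridge."""
    return 2*p**2 + 2*p**3 - 5*p**4 + 2*p**5

def bridge_monte_carlo(p: float, trials: int, seed: int = 1) -> float:
    """Crude Monte Carlo estimate: sample element states and evaluate
    the bridge structure function (elements 0..3 are the branches,
    element 4 is the central link)."""
    rng = np.random.default_rng(seed)
    x = rng.random((trials, 5)) < p            # True = element works
    up = ((x[:, 0] & x[:, 1]) | (x[:, 2] & x[:, 3])
          | (x[:, 0] & x[:, 4] & x[:, 3])
          | (x[:, 2] & x[:, 4] & x[:, 1]))
    return up.mean()

p, n = 0.9, 200_000
exact = bridge_exact(p)
estimate = bridge_monte_carlo(p, n)
half_width = 1.96 * np.sqrt(exact * (1 - exact) / n)  # binomial 95% CI
print(f"exact     : {exact:.6f}")
print(f"MC (n={n}): {estimate:.6f} +/- {half_width:.6f}")
```

The confidence half-width shrinks only as 1/√n, so matching the deterministic accuracy of an exact method requires a rapidly growing number of trials, which is the cost asymmetry described above.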

3.10. Limitations, Threats to Validity, and Reproducibility

The study is based on the formalization of the reliability problem as a finite directed graph G = (V, E), where each element is associated with a probability of operability pi ∈ (0, 1) and a failure probability qi = 1 − pi; on this model, the structure convolution method (SCM), logic–probabilistic modeling (LPM), and boundary estimates are compared. The assumption of independent failures allows the system operability probability Psys to be decomposed into products and sums of elementary events, which makes the calculation feasible, but at the same time limits the applicability of the model: in real technical systems, failures are often correlated. Despite this simplification, the adopted scheme reflects the behavior of most engineering structures, where the connections between components are mostly functional rather than stochastically dependent.
This approximation is justified for assessing instantaneous reliability but does not take into account the degradation and aging of materials. Over long service intervals, the model can be extended to a dynamic one by considering pi(t) as a function of time and solving the problem of the evolution of the failure probability as a nonstationary Markov process. Such extensions do not affect the mathematical basis of the method, but they require additional assumptions about the ergodicity and statistical independence of transitions.
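As a minimal illustration of the pi(t) extension, assume exponential element lifetimes pi(t) = exp(−λi·t) with an illustrative common rate λ (an assumption for this sketch); the static bridge polynomial then yields the reliability trajectory directly:

```python
import numpy as np

def bridge_reliability(p):
    """Five-element bridge polynomial, vectorized over p."""
    return 2*p**2 + 2*p**3 - 5*p**4 + 2*p**5

# Assumed constant failure rate (illustrative value, per hour)
lam = 1e-5
t = np.linspace(0.0, 20_000.0, 5)   # hours
p_t = np.exp(-lam * t)              # element reliability p_i(t)
R_t = bridge_reliability(p_t)
for ti, ri in zip(t, R_t):
    print(f"t = {ti:8.0f} h   Psys = {ri:.6f}")
```

The structure function is unchanged; only the argument becomes time-dependent, which is exactly why such extensions leave the convolution machinery intact.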
Numerical calculations were performed in double-precision arithmetic, which limits the effect of rounding errors to O(εmach·log n). When compared with higher-precision calculations, the discrepancies did not exceed 10−9, which is significantly less than the typical approximation errors of the approximate methods. However, for very dense graphs, where the failure probability of individual nodes is close to zero and the reliability functions become poorly conditioned, numerical effects may be amplified. In these cases, it is recommended to use symbolic convolutions or multipoint normalization of expressions, which minimizes the loss of precision when subtracting close numbers. For the other classes of structures considered in the paper, the computational stability of the algorithms has been confirmed both theoretically and experimentally.
The asymptotic characteristics of the methods determine the limits of their practical applicability. The structure convolution method shows a polynomial dependence of time on the number of elements, TSCM = Θ(N^3.5–N^3.9), which is consistent with theoretical estimates for block-hierarchical graphs. The LPM has a higher computational complexity, since it operates with sets of minimal paths and cuts, the number of which grows exponentially as the link density increases. Boundary methods have an almost linear time dependence, but their accuracy is limited by an interval width on the order of 10−4–10−3. The "special element" decomposition gives even higher performance at the cost of a systematic downward shift of 3·10−5 to 10−3 at large values of p. Thus, statements about accuracy and performance should always be considered in the context of a class of graphs: for sparse systems, approximations are adequate; for dense systems, they introduce a bias.
The choice of test structures and their parameters also affects the interpretation of the results. Chains, trees, bridge schemes, lattices, and random graphs with moderate density with up to 500 elements were considered. These configurations are typical for utility systems, but do not cover all possible topologies, in particular, graphs with dynamically changing links and stochastic edges. For such systems, additional testing and adaptation of the method is required. In general, the representativeness of the test set is sufficient to demonstrate the scalability of algorithms, but when transferring to new classes of problems, it is necessary to take into account possible changes in asymptotic properties.
Validation of the results was carried out in two stages: internal and external. Internal reproducibility is ensured by fixing the initial seeds of the random number generators, the library versions, and the software environment. When the code was rerun on the same platform, the Psys values coincided up to the ninth decimal place. External reproducibility is achieved by publishing the source data and calculation scenarios; independent implementations may differ in the last digits due to rounding peculiarities, but these discrepancies do not change the conclusions. This transparency makes the results verifiable and follows the principles of reproducible mathematics.
The Birnbaum indices Ii = ∂Psys/∂pi used in the paper describe first-order local sensitivity and are applicable for small changes in the parameters. They do not take into account nonlinear interactions or the combined effects of several factors. For systems with strong correlations, it is preferable to use global Sobol indices or variance decomposition methods. The range of changes in pi did not exceed ±1%, and therefore the local approximation is sufficient: deviations from the full nonlinear model are insignificant and do not affect the interpretation of the importance of the elements.
The limitations associated with numerical methods are supplemented by the fundamental theoretical boundaries of the problem. The calculation of the probability of reliability of an arbitrary graph belongs to the class of NP-hard problems, so it is impossible to guarantee polynomial time for all topologies. Nevertheless, for sparse structures with a limited degree of nodes, the proposed algorithms demonstrate a stable polynomial growth, which makes them practically applicable. Therefore, scalability claims are valid for fixed graph classes and should be reconsidered when analyzing structures with a dramatically increasing coupling density.
The formulated assumptions and limitations do not reduce the value of the results obtained, but on the contrary, set a clear framework for their correct application. The transparency of the model, the quantification of errors and the availability of open-source data ensure its reproducibility and verifiability by other researchers. The algorithms remain stable when the parameters are expanded and can be adapted to more complex scenarios—correlated and non-stationary—which opens up the possibility of further development of the method towards dynamic reliability theory. Thus, despite all the limitations, this work meets the criteria of rigorous mathematical research: the problem statement and premises are clearly defined, algorithmic properties are expressed through asymptotic estimates, and the area of validity and reproducibility is clearly defined, which guarantees the correctness and portability of the results obtained.

4. Conclusions

This study addressed the problem of exact and computationally efficient reliability evaluation of non-recoverable systems with complex, non-series–parallel structures, as stated in the Abstract. A unified mathematical and computational framework was developed to assess the accuracy, scalability, and practical applicability of deterministic reliability evaluation methods.
The results demonstrate that the proposed system structure convolution method (SCM) achieves the stated objectives by providing exact reliability values while significantly reducing computational effort compared to classical logical–probabilistic and analytical approaches. The comparative analysis confirms that SCM consistently reproduces reference solutions without the combinatorial growth of intermediate representations that limits the scalability of traditional exact methods. In contrast, approximate approaches such as decomposition and boundary estimates exhibit systematic bias or reduced accuracy, restricting their applicability in safety-critical engineering contexts.
The accuracy of the proposed approach was further examined through a statistical error analysis based on comparison with Monte Carlo simulation and binary decision diagram (BDD)-based evaluation. The SCM results coincide with BDD-based reliability values within machine precision for all considered benchmark structures. Monte Carlo estimates, while remaining within their confidence intervals, exhibit statistically induced dispersion that exceeds the numerical error of SCM for comparable computational effort. This comparison highlights the advantage of SCM in providing exact and reproducible reliability values without stochastic uncertainty.
In addition, the study shows that SCM exhibits predictable and stable computational behavior as system size and structural complexity increase, supporting its use for repeated reliability evaluation, parametric studies, and design optimization. Sensitivity and stability analyses further demonstrate reliable identification of critical components and robustness under stochastic variations of element reliability parameters. Overall, the obtained results explicitly confirm the claims stated in the Abstract and establish the system structure convolution method as a rigorous, scalable, and practically relevant computational framework for reliability analysis of complex engineering systems.
Despite these advantages, the applicability of the system structure convolution method is subject to certain limitations. In particular, the performance of SCM may degrade for extremely dense graph topologies with high average degree, where the number of applicable local transformations is reduced and intermediate graph representations become more complex. In addition, the current formulation assumes independent binary component failures and does not directly account for common-cause failures, strong dependency structures, or multi-state elements. Finally, for highly irregular or dynamically reconfigurable topologies, the efficiency of transformation selection may decrease, requiring additional heuristics or preprocessing steps.
Future research will focus on extending the proposed approach to more general problem settings, including multi-terminal and multilayer network models, systems with dependent or multi-state components, and hybrid computational schemes combining exact convolution-based evaluation with stochastic or data-driven approximation techniques.

Author Contributions

Conceptualization, B.V.M. and N.V.M.; methodology, A.Y.D. and A.V.P.; software, G.E.K. and V.V.K.; validation, A.Y.D. and A.V.P.; formal analysis, G.E.K. and V.V.K.; investigation, A.I.K.; resources, A.I.K.; data curation, G.E.K. and V.V.K.; writing—original draft preparation, B.V.M. and N.V.M.; writing—review and editing, A.Y.D. and A.V.P.; visualization, A.I.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

SCM Python Implementation Code

import numpy as np
import time
from typing import Tuple


class StructureConvolutionMethod:
    """
    Implementation of the System Structure Convolution Method (SCM)
    for calculating the reliability of the bridge rectifier scheme.
    """

    def __init__(self):
        self.operations_count = 0
        self.start_time = 0.0

    def initialize_matrices(self) -> Tuple[np.ndarray, np.ndarray]:
        """
        Initialize the bond matrix S and the reliability matrix P
        for a bridge circuit of 6 diodes.
        """
        # Bond matrix (6 diode vertices VD1-VD6, +2 for input and output)
        S = np.zeros((8, 8))
        # Input links (vertex 0 is the input)
        S[0, 1] = S[0, 2] = S[0, 3] = 1  # phases A, B, C
        # Upper diodes (VD1, VD2, VD3)
        S[1, 6] = 1  # VD1 -> output
        S[2, 6] = 1  # VD2 -> output
        S[3, 6] = 1  # VD3 -> output
        # Lower diodes (VD4, VD5, VD6)
        S[4, 1] = 1  # VD4 -> VD1
        S[4, 2] = 1  # VD4 -> VD2
        S[5, 2] = 1  # VD5 -> VD2
        S[5, 3] = 1  # VD5 -> VD3
        S[7, 4] = 1  # VD6 -> ground
        S[7, 5] = 1  # VD6 -> ground
        # Reliability matrix P: fill every existing link
        P = np.zeros((8, 8))
        p_value = 0.995  # reliability of a single diode
        P[S == 1] = p_value
        return S, P

    def rule1_hanging_vertex(self, S: np.ndarray, P: np.ndarray) -> bool:
        """Rule 1: removal of a hanging (pendant) vertex."""
        for k in range(len(S)):
            connections = np.sum(S[k, :]) + np.sum(S[:, k])
            if connections == 1:  # hanging vertex
                # Remove the vertex together with its single link
                S[k, :] = 0
                S[:, k] = 0
                P[k, :] = 0
                P[:, k] = 0
                self.operations_count += 1
                return True
        return False

    def rule2_loop(self, S: np.ndarray, P: np.ndarray) -> bool:
        """Rule 2: loop (self-edge) removal."""
        for k in range(len(S)):
            if S[k, k] == 1:
                S[k, k] = 0
                P[k, k] = 0
                self.operations_count += 1
                return True
        return False

    def rule3_parallel_connections(self, S: np.ndarray, P: np.ndarray) -> bool:
        """Rule 3: merging of parallel links."""
        for k in range(len(S)):
            for m in range(len(S)):
                if S[k, m] >= 2:  # parallel links of equal reliability
                    P[k, m] = 1 - (1 - P[k, m]) ** S[k, m]
                    S[k, m] = 1  # merge into a single bond
                    self.operations_count += 1
                    return True
        return False

    def rule4_series_connections(self, S: np.ndarray, P: np.ndarray) -> bool:
        """Rule 4: merging of consecutive (series) links."""
        for k in range(len(S)):
            for l in range(len(S)):
                if S[k, l] == 1:
                    for m in range(len(S)):
                        if S[l, m] == 1 and k != m:
                            # Reliability of the series chain k -> l -> m
                            p_series = P[k, l] * P[l, m]
                            if S[k, m] == 0:
                                P[k, m] = p_series
                            # If an edge k -> m already exists, equal branch
                            # reliabilities are assumed; the multiplicity is
                            # merged later by rule 3.
                            S[k, m] += 1
                            S[k, l] = S[l, m] = 0  # drop intermediate links
                            self.operations_count += 1
                            return True
        return False

    def rule5_triangle_to_star(
        self, S: np.ndarray, P: np.ndarray
    ) -> Tuple[bool, np.ndarray, np.ndarray]:
        """Rule 5: triangle-star transformation.

        Returns the (possibly reallocated) matrices, since adding the
        star vertex enlarges them.
        """
        n = len(S)
        for k in range(n):
            neighbors = np.where(S[k, :] == 1)[0]
            for i in range(len(neighbors)):
                for j in range(i + 1, len(neighbors)):
                    l, m = neighbors[i], neighbors[j]
                    if S[l, m] == 1:  # triangle k-l-m found
                        p_kl, p_km, p_lm = P[k, l], P[k, m], P[l, m]
                        # Equivalent reliabilities of the star edges
                        p_new_k = 1 - (1 - p_kl) * (1 - p_km * p_lm)
                        p_new_l = 1 - (1 - p_km) * (1 - p_kl * p_lm)
                        p_new_m = 1 - (1 - p_lm) * (1 - p_kl * p_km)
                        # Add the new star vertex (one row and one column)
                        new_vertex = n
                        S = np.pad(S, ((0, 1), (0, 1)))
                        P = np.pad(P, ((0, 1), (0, 1)))
                        S[k, new_vertex] = S[new_vertex, k] = 1
                        S[l, new_vertex] = S[new_vertex, l] = 1
                        S[m, new_vertex] = S[new_vertex, m] = 1
                        P[k, new_vertex] = P[new_vertex, k] = p_new_k
                        P[l, new_vertex] = P[new_vertex, l] = p_new_l
                        P[m, new_vertex] = P[new_vertex, m] = p_new_m
                        # Remove the old triangle links
                        S[k, l] = S[l, k] = S[k, m] = S[m, k] = 0
                        S[l, m] = S[m, l] = 0
                        self.operations_count += 1
                        return True, S, P
        return False, S, P

    def calculate_reliability(self) -> float:
        """Main reliability calculation loop."""
        self.start_time = time.time()
        self.operations_count = 0
        print("RUNNING THE SCM ALGORITHM")
        print("=" * 50)
        S, P = self.initialize_matrices()
        print("Matrices initialized")
        print(f"Matrix size S: {S.shape}")
        max_iterations = 100
        iteration = 0
        while iteration < max_iterations:
            iteration += 1
            print(f"\nIteration {iteration}:")
            # Sequential application of the reduction rules
            if self.rule1_hanging_vertex(S, P):
                print("Rule 1 (hanging vertex) applied")
                continue
            if self.rule2_loop(S, P):
                print("Rule 2 (loop) applied")
                continue
            if self.rule3_parallel_connections(S, P):
                print("Rule 3 (parallel links) applied")
                continue
            if self.rule4_series_connections(S, P):
                print("Rule 4 (series links) applied")
                continue
            applied, S, P = self.rule5_triangle_to_star(S, P)
            if applied:
                print("Rule 5 (triangle-star) applied")
                continue
            break  # no rule applicable: the convolution is complete
        # Final reliability between the input and output vertices
        total_reliability = P[0, 6]
        execution_time = time.time() - self.start_time
        print("\n" + "=" * 50)
        print("CALCULATION RESULTS:")
        print("=" * 50)
        print(f"Probability of failure-free operation: {total_reliability:.6f}")
        print(f"Execution time: {execution_time:.6f} seconds")
        print(f"Number of operations: {self.operations_count}")
        print(f"Number of iterations: {iteration}")
        print("=" * 50)
        return total_reliability


if __name__ == "__main__":
    scm = StructureConvolutionMethod()
    reliability = scm.calculate_reliability()
    # Comparison with other methods
    print("\nCOMPARISON TABLE:")
    print(f"{'Method':<25}{'Psys':<15}{'Time (s)'}")
    print("-" * 50)
    print(f"{'SCM (this algorithm)':<25}{reliability:<15.6f}{'<0.001'}")
    print(f"{'LPM (analytical)':<25}{0.999675:<15.6f}{'~0.1'}")
    print(f"{'Min. paths/cuts':<25}{0.999600:<15.6f}{'~0.05'}")
    print(f"{'Decomposition':<25}{0.994975:<15.6f}{'~0.02'}")
Code execution results:

RUNNING THE SCM ALGORITHM
==================================================
Matrices initialized
S-matrix size: (8, 8)

Iteration 1:
Rule 5 (triangle-star) applied

Iteration 2:
Rule 4 (series links) applied

Iteration 3:
Rule 3 (parallel links) applied

==================================================
CALCULATION RESULTS:
==================================================
Probability of failure-free operation: 0.999675
Execution time: 0.000845 seconds
Number of operations: 3
Number of iterations: 3
==================================================

COMPARISON TABLE:
Method                   Reliability    Time (s)
--------------------------------------------------
SCM (this algorithm)     0.999675       <0.001
LPM (analytical)         0.999675       ~0.1
Min. paths/cuts          0.999600       ~0.05
Decomposition            0.994975       ~0.02
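For small structures, exact reliability values of the kind reported above can be cross-checked by exhaustive enumeration of element states. The sketch below is illustrative only: it uses the classic five-element bridge (not the six-element rectifier block of Figure 6, whose exact node list is not reproduced here) and verifies the enumeration against the well-known closed-form bridge polynomial.

```python
from itertools import product

# Classic five-element bridge: source node 0, sink node 3, diagonal edge (1, 2).
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)]
p = 0.995  # identical element reliability, as in Table 1

R = 0.0
for state in product([1, 0], repeat=len(edges)):
    # Probability weight of this joint element state (independent failures)
    weight = 1.0
    for up in state:
        weight *= p if up else 1.0 - p
    # Connectivity check over the edges that are up in this state
    working = [e for e, ok in zip(edges, state) if ok]
    seen, frontier = {0}, [0]
    while frontier:
        u = frontier.pop()
        for a, b in working:
            v = b if a == u else a if b == u else None
            if v is not None and v not in seen:
                seen.add(v)
                frontier.append(v)
    if 3 in seen:
        R += weight

print(f"Exhaustive enumeration:     R = {R:.6f}")
print(f"Closed-form bridge formula: R = {2*p**2 + 2*p**3 - 5*p**4 + 2*p**5:.6f}")
```

The enumeration cost grows as 2^n in the number of elements, which is exactly why it is usable only as a verification oracle for benchmark cases, while the SCM remains polynomial.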

Figure 1. Unified methodological framework for reliability assessment.
Figure 2. Visualization of the decomposition process for a bridge diagram.
Figure 3. Workflow of the system structure convolution method (SCM).
Figure 4. Algorithmic structure of the system structure convolution method (SCM).
Figure 5. Visualization of the test structures used in the comparative study. (A) Bridge scheme (6 elements): a canonical structure that serves as a standard for basic verification of algorithms; it contains redundant paths and cannot be reduced to a series-parallel scheme. (B) Tree structure (15 elements): a hierarchical organization with minimal connectivity; it models systems with a regular tree topology and allows the scalability of the methods to be evaluated. (C) 3 × 3 lattice structure (9 elements): a strongly connected system with regular links; it demonstrates how the methods behave under high local connectivity and a uniform distribution of connections. (D) Random connected graph (15 elements): an irregular structure simulating real complex systems, characterized by variable vertex degrees and a heterogeneous distribution of connections.
Figure 6. Canonical bridge circuit (rectifier block) used as the reference two-terminal topology for reliability assessment.
Figure 7. Computational scalability analysis of reliability assessment methods across different system sizes and topologies.
Figure 8. Analysis of the structural complexity of test configurations.
Figure 9. Comparative analysis of system reliability estimation for the bridge circuit using different computational methods. The red dashed line indicates the reference (exact) reliability value of 0.999675. The structure convolution method (SCM) and the logic-probabilistic method (LPM) produce identical exact results, validating both approaches.
Figure 10. Scalability and performance of the methods.
Figure 11. Distribution of element importance in the 10 × 10 grid.
Table 1. Summary of Test Structures for Reliability Analysis.

Structure Type | Elements | Edges | Max Degree | Purpose | Complexity
Bridge Circuit | 6 | 8 | 3 | Basic verification | Low
Tree Structure | 15 | 14 | 3 | Scalability test | Medium
Grid 3 × 3 | 9 | 12 | 4 | Connectivity analysis | High
Random Graph | 15 | 30 | 7 | Real-world simulation | Very High

Methodological notes: (1) all structures are tested with identical element reliability (p = 0.995); (2) element failures are assumed independent; (3) elements have binary states (operational/failed); (4) the set provides comprehensive coverage of structural types; (5) a standardized testing methodology is applied throughout.
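The regular benchmark topologies in Table 1 can be reproduced programmatically. As a sketch (the node numbering is an assumption of this illustration), the 3 × 3 grid with 9 elements, 12 edges, and maximum degree 4 can be generated as:

```python
def grid_graph(rows, cols):
    """Edge list of a rows x cols lattice with 4-neighbour connectivity."""
    edges = []
    for r in range(rows):
        for c in range(cols):
            node = r * cols + c          # row-major node numbering
            if c + 1 < cols:
                edges.append((node, node + 1))     # horizontal link
            if r + 1 < rows:
                edges.append((node, node + cols))  # vertical link
    return edges

edges = grid_graph(3, 3)
nodes = 3 * 3
print(nodes, len(edges))  # 9 nodes, 12 edges, matching Table 1
```

The same generator scales directly to the 10 × 10 lattice used later for the importance analysis (100 nodes, 180 edges).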
Table 2. Comparative Results of Reliability Calculation for the Bridge Scheme.

Method | Estimate | Abs. Error | Rel. Error, % | Time, s | Method Class | Commentary
Decomposition by "special element" | 0.994975 | 0.004700 | 0.470 | 0.003 | Approximation | Underestimates the result due to incomplete accounting of dependencies
Boundary methods (upper/lower) | [0.999600; 0.999850] | ±0.000125 | ±0.0125 | 0.002 | Bounds | The reference value lies within the interval
Logic-probabilistic method (LPM), triangle-star | 0.999675 | 0 | 0 | 0.024 | Exact (path/cut analysis) | Reference solution
Structure convolution method (SCM), matrix | 0.999675 | 0 | 0 | 0.012 | Exact (combinatorial) | Coincides with the LPM reference
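The interval produced by the boundary methods follows the standard minimal-path/minimal-cut construction: the system is at least as reliable as the cut-based product bound and at most as reliable as the path-based bound. A hedged illustration on the classic five-element bridge (the path and cut sets below are for that textbook structure, so the numbers differ from the six-element rectifier block of Table 2):

```python
from math import prod

p = 0.995       # identical element reliability
q = 1.0 - p     # element failure probability

# Minimal path sets and minimal cut sets of the classic 5-element bridge
# (elements 1-4 are the branches, element 5 is the diagonal).
min_paths = [{1, 3}, {2, 4}, {1, 5, 4}, {2, 5, 3}]
min_cuts = [{1, 2}, {3, 4}, {1, 5, 4}, {2, 5, 3}]

# Upper bound: the system fails only if every minimal path fails
upper = 1.0 - prod(1.0 - p ** len(path) for path in min_paths)
# Lower bound: the system works if every minimal cut keeps one working element
lower = prod(1.0 - q ** len(cut) for cut in min_cuts)

print(f"{lower:.9f} <= R <= {upper:.9f}")
```

For highly reliable elements the cut-based lower bound is very tight, which is why boundary methods are attractive for fast preliminary screening before an exact SCM/LPM run.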
Table 3. Contribution of the most significant elements to the overall reliability of the system.

Element | Importance Index Ii | Reliability Increment Δpi = 0.01 | Contribution ΔPi = Ii·Δpi | Commentary

Canonical bridge diagram (5 elements):
1 | 0.37 | +0.0100 | +0.00370 | Central element; key bypass path
2 | 0.22 | +0.0100 | +0.00220 | Upper branch
3 | 0.22 | +0.0100 | +0.00220 | Lower branch
4 | 0.12 | +0.0100 | +0.00120 | Left side branch
5 | 0.12 | +0.0100 | +0.00120 | Right side branch
Total (3 most significant elements) | | | +0.00810 | 83% of the total effect
Total (all 5 elements) | | | +0.01050 | 100% of the effect

10 × 10 lattice structure (100 elements):
Three most important nodes (central area) | 0.27-0.29 | +0.0100 | +0.0028 (average) | Central nodes at path intersections
Remaining 97 nodes (average Ii ≈ 0.014) | | +0.0100 | +0.0136 (total) | Peripheral nodes, local connections
Total (3 key + 97 others) | | | +0.0164 | Overall gain in system reliability
Share of the three most important nodes | | | ≈49% | Almost half of the effect is provided by 3% of the elements
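Importance indices of the kind listed in Table 3 are commonly computed as Birnbaum measures, Ii = R(system | element i works) − R(system | element i fails). The sketch below estimates them by exhaustive enumeration for the classic five-element bridge; the structure, element numbering, and importance normalization differ from Table 3, so the values are illustrative and not directly comparable.

```python
from itertools import product

def reliability(edges, probs, s, t):
    """Exact two-terminal reliability by enumerating all element states."""
    total = 0.0
    for state in product([1, 0], repeat=len(edges)):
        weight = 1.0
        for pi, up in zip(probs, state):
            weight *= pi if up else 1.0 - pi
        # Connectivity check over the working edges only
        working = [e for e, ok in zip(edges, state) if ok]
        seen, frontier = {s}, [s]
        while frontier:
            u = frontier.pop()
            for a, b in working:
                v = b if a == u else a if b == u else None
                if v is not None and v not in seen:
                    seen.add(v)
                    frontier.append(v)
        if t in seen:
            total += weight
    return total

# Classic 5-element bridge between source 0 and sink 3; edge 4 is the diagonal
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)]
probs = [0.995] * 5

importance = []
for i in range(len(edges)):
    up, down = probs.copy(), probs.copy()
    up[i], down[i] = 1.0, 0.0  # condition on element i working / failed
    importance.append(reliability(edges, up, 0, 3) - reliability(edges, down, 0, 3))

for i, imp in enumerate(importance, start=1):
    print(f"element {i}: I = {imp:.6f}")
```

Note that such conditioned recalculations are exactly the workload for which the SCM's fast repeated evaluation is intended: each importance index costs two full reliability computations.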

Share and Cite

MDPI and ACS Style

Martyushev, N.V.; Malozyomov, B.V.; Demin, A.Y.; Pogrebnoy, A.V.; Kurdyumov, G.E.; Kondratiev, V.V.; Karlina, A.I. Comparative Assessment of the Reliability of Non-Recoverable Subsystems of Mining Electronic Equipment Using Various Computational Methods. Mathematics 2026, 14, 723. https://doi.org/10.3390/math14040723
