
Performance Guarantees of Recurrent Neural Networks for the Subset Sum Problem

1 College of Artificial Intelligence, Jiaxing University, Jiaxing 314001, China
2 School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou 510006, China
* Authors to whom correspondence should be addressed.
Biomimetics 2025, 10(4), 231; https://doi.org/10.3390/biomimetics10040231
Submission received: 8 March 2025 / Revised: 1 April 2025 / Accepted: 2 April 2025 / Published: 8 April 2025

Abstract
The subset sum problem is a classical NP-hard problem. Various methods have been developed to address it, including backtracking techniques, dynamic programming approaches, branch-and-bound strategies, and Monte Carlo methods. In recent years, researchers have proposed several neural network-based methods for solving combinatorial optimization problems, which have shown commendable performance. However, there has been limited research on the performance guarantees of recurrent neural networks (RNNs) when applied to the subset sum problem. In this paper, we investigate the performance guarantees of RNNs for solving the subset sum problem for the first time. A construction method for RNNs is developed to compute both exact and approximate solutions of subset sum problems, and the mathematical model of each hidden layer of the RNNs is rigorously defined. Furthermore, the correctness of the proposed RNNs is strictly proven through mathematical reasoning, and their performance is thoroughly analyzed. In particular, we prove mathematically that $w_{NN} \geq w_{OPT}(1-\varepsilon)$, i.e., the errors between the approximate solutions obtained by the proposed ASS-NN model and the actual optimal solutions are relatively small and highly consistent with theoretical expectations. Finally, the validity of the RNNs is verified through a series of examples, where the actual error value of the approximate solution aligns closely with the theoretical error value. Additionally, our research reveals that recurrence relations in dynamic programming can effectively simulate the process of constructing solutions.

1. Introduction

The subset sum problem (SSP) is a classical combinatorial optimization problem (COP) that has found extensive applications in various engineering domains, including capital budgeting, workload allocation, and job scheduling. While verifying a candidate solution to this problem is relatively straightforward, identifying a subset of numbers that sums to a specified target remains a challenging endeavor.

1.1. Heuristic Algorithm for SSP

Over the past few decades, numerous exact and heuristic algorithms have been developed to address the SSP [1,2]. M. F. M. K. Madugula et al. applied the arithmetic optimization algorithm, a meta-heuristic, to solve the SSP [3], while Wan et al. proposed an efficient GPU-based parallel double-list algorithm for the SSP, achieving higher performance gains on GPUs than on CPUs by improving the design of the generation, pruning, and search phases and by optimizing GPU memory management and task allocation [4,5]. P. Dutta et al. presented search algorithms for variants of the SSP [6], and Ye et al. studied priority algorithm approximation ratios for the SSP with a focus on the power of revocable decisions, where accepted data items can later be rejected to maintain solution feasibility [7]. V. Parque developed a new scheme to sample solutions for the SSP based on swarm-based optimization algorithms, considering distinct forms of selection pressure, the balance of exploration and exploitation, multi-modality, and a search space defined by numbers associated with subsets of fixed size [8]. L. Li et al. devised a DNA procedure for solving the SSP in the Adleman–Lipton model, which operates in $O(n)$ steps for an undirected graph with $n$ vertices, strategically designing vertex strand lengths to simplify computation and efficiently identify valid subsets [9]. J. R. M. Kolpakov et al. presented a readily implementable recursive parallelization strategy for solving the SSP using the branch-and-bound method [10], and R. Kolpakov et al. studied the parallelization of a variant of the branch-and-bound method for solving the SSP [11]. Thada et al. proposed a genetic algorithm approach with infeasible offspring rejection and a penalty function to find approximate solutions for the SSP, demonstrating improved efficiency by discarding unfit subsets during the optimization process [12]. Wang et al. proposed an enhanced genetic algorithm for the SSP that replaces probabilistic operations with conditional crossover and mutation, demonstrating improved capability to find optimal solutions [13]. Bhasin et al. proposed a genetic algorithm-based approach to solve the SSP and explored its potential as a generic methodology for tackling NP-complete problems, demonstrating promising results through implementation and analysis [14]. Saketh et al. evaluated the genetic algorithm for solving the SSP and compared it with dynamic programming, concluding that the genetic algorithm is less favorable due to its longer execution time despite its adaptability [15]. Genetic algorithms exhibit strong adaptability when addressing large-scale problems. However, their performance is sensitive to parameter selection, and they are susceptible to becoming trapped in local optima.

1.2. Dynamic Programming for SSP

Dynamic programming is an effective method for solving COPs [16]. The basic idea of dynamic programming is to decompose the problem into several sub-problems with similar structures and then derive the solution of the original problem from the solutions of these sub-problems. Yang et al. proposed a neural network-based dynamic programming method for NP-hard problems; since this method requires training on each testing instance of the problem, its time complexity remains high for practical tasks [17]. Allcock et al. put forward a novel dynamic programming data structure with applications to subset-sum and its variants, including equal-sums, two-subset-sum, and shifted-sums [18], while H. Fujiwara et al. formalized the recurrence relation of the dynamic programming for the SSP [19]. Xu et al. developed a general framework called neural network approximated dynamic programming, which replaces policy or value function calculation processes with neural networks [20]. Both Yang et al.'s and Xu et al.'s studies utilized neural networks to expedite dynamic programming algorithms for COPs, but not to search the solution space. Such methods can be generalized to other COPs that can be solved by dynamic programming. However, different problems have different dynamic programming equations, and different COPs have different approximate solution methods; therefore, correctly establishing the corresponding neural networks for specified COPs is crucial in this approach. The dynamic programming method constructs solutions based on recursive relationships, enabling the effective acquisition of optimal solutions. Nevertheless, even when problems are broken down into sub-problems to reduce the search space, the complexity of dynamic programming algorithms can still be high for NP-hard problems [20]: their time and space complexity increase significantly with problem size, making them more suitable for small-scale problems.

1.3. Others

Biesner et al. tackled the SSP as a quadratic unconstrained binary optimization problem and showed how gradient descent on Hopfield networks reliably determines solutions for both artificial and real data [21]. Chenyang Xu and Guochuan Zhang introduced an enhanced learning algorithm to address the online SSP; this approach is designed for rapid solution generation but does not guarantee the accuracy of the obtained solutions [22]. Coron et al. described a provable polynomial-time algorithm for solving the hidden SSP based on statistical learning [23]. M. Costandin provided a geometric interpretation of a specific class of SSPs [24]. Zheng et al. proposed a method for solving the SSP using a quantum algorithm; however, this approach is only effective for collections containing a limited number of elements [25]. In [26], they translated the SSP into the quantum Ising model and solved it with a variational quantum optimization method based on conditional value at risk, which provides a new perspective for solving the SSP. Moon explored the potential of quantum computing, particularly Grover's algorithm, to solve the NP-complete SSP more efficiently than classical methods such as dynamic programming, while also discussing the broader implications for NP-complete problems [27]. Bernstein et al. introduced a new algorithm that combines the Howgrave-Graham–Joux subset sum algorithm with a new streamlined data structure for quantum walks on Johnson graphs [28]. Quantum algorithms leverage superposition states to simultaneously explore multiple candidate solutions, allowing for efficient searches of optimal solutions within the solution space. Nevertheless, this approach is prone to noise interference, has high implementation complexity, and requires specialized quantum equipment.
With the advancement of deep learning and neural networks, significant progress has been made in applying deep learning to fields such as image recognition, natural language processing, and autonomous driving. As a result, there is growing interest in exploring the potential of deep learning in other domains. Research indicates that deep learning and neural networks show promise in addressing COPs [29]. Notable efforts to apply deep learning methodologies to COPs are summarized in [30]. Recurrent neural networks (RNNs) are powerful sequence models. Hopfield et al. were among the first to utilize RNNs to solve COPs [31]. M. S. Tarkov proposed an algorithm for solving the traveling salesman problem based on the Hopfield recurrent neural network, the "winner takes all" method for cycle formation, and the two-opt optimization method [32]. Gu et al. devised a novel algorithm for the L smallest k-subsets sum problem by integrating the L shortest paths algorithm with a finite-time convergent recurrent neural network [33]. Gu and Hao applied recurrent neural networks driven by pure data to solve 0–1 knapsack problems [34]. Zhao et al. proposed a two-phase neural combinatorial optimization method with reinforcement learning for the agile Earth-observing satellite scheduling problem [35]. M. T. Kechadi et al. developed an efficient neural network approach to minimize the cycle time for job shop scheduling problems [36]. C. Hertrich et al. studied the expressive power of neural networks through knapsack problems: they iteratively applied a class of RNNs to each item in a knapsack instance and computed optimal or provably good solution values [37].
Neural networks demonstrate significant advantages in solving COPs, such as robust pattern-learning capabilities and the ability to uncover hidden rules. By employing end-to-end mapping, neural networks bypass the explicit traversal of all possible combination spaces, thereby avoiding combinatorial explosion. These characteristics are absent in traditional heuristic algorithms and dynamic programming methods. Nonetheless, neural networks also have certain limitations, including a high dependence on data, substantial costs associated with generating high-quality training samples, and the excessive computational resources required for exact solutions in some cases. Additionally, the accuracy of approximate solutions remains an area requiring improvement.

1.4. Our Contributions

In this paper, we present a rigorous mathematical study on the construction of RNNs to solve two types of SSPs. The main contributions of this paper are as follows:
  • We introduce a recurrent neural network, denoted as SS-NN, to solve the classical SSP. By defining a new activation function, we develop a dynamic programming equation for the classical SSP. This activation function maps all negative numbers to −1 and ensures that the number of inputs to the neural network is fixed. SS-NN is utilized to mimic the dynamic programming approach for solving the SSP. We define the mathematical model for each hidden layer of SS-NN and prove its correctness.
  • We propose an approximate solution method for a type of SSP, where the value of the subset-sum is closest to a given value but not exceeding it. The dynamic programming equations for this type of SSP are defined using rounding granularity and the ReLU activation function. We construct a recurrent neural network, denoted as ASS-NN, to mimic the presented dynamic programming approach in determining an approximate solution. We rigorously prove that our proposed ASS-NN can correctly solve the SSP and analyze both its time complexity and error in approximation.
  • We verify the correctness of our proposed method through examples and demonstrate that actual error values in approximate solutions align with theoretical error values through a series of illustrative examples.
The remainder of this paper is structured as follows. Section 2 describes the construction of a recurrent neural network to solve the classical SSP. Section 3 presents an approximate solution method for another type of SSP by constructing a novel recurrent neural network. Section 4 provides the experimental results and performance error analysis. Section 5 discusses the work of this paper in depth. Finally, we conclude this paper and discuss future work in Section 6.

2. RNNs for an Exact Solution to the SSP

In this section, we aim to determine whether a given set of positive numbers $S$ contains a subset whose sum equals a specified positive number $W$. This problem can be formulated as a YES/NO decision problem, denoted as the Y/N SSP in the following discussion. It is worth noting that this problem falls within the class of NP-complete problems [38]. Let $S = \{w_1, \ldots, w_n\}$ be a subset of $\mathbb{N}^+$ and let $W \in \mathbb{N}^+$ be a fixed number. Let $S^* = \{w_{j_1}, \ldots, w_{j_m}\}$ be a subset of $S$, and let $V^* = \sum_{k=1}^{m} w_{j_k}$. The mathematical model of the Y/N SSP is described by Equation (1):
$$y = \begin{cases} 1 & \exists\, S^* \subseteq S \text{ such that } V^* = W, \\ 0 & \text{otherwise}. \end{cases} \tag{1}$$
We propose a dynamic programming formulation for the Y/N SSP. Let $s(w, i) \in \{0, 1\}$. If some subset of the first $i$ elements of $S$ sums to $w$, then $s(w, i) = 1$; otherwise, $s(w, i) = 0$. Notably, $s(w, i) = 1$ when $w = 0$. The values of $s(w, i)$ can be computed using Equation (2):
$$s(w, i) = \begin{cases} s(w, i-1) & w < w_i, \\ s(w - w_i,\, i-1)\ \text{or}\ s(w, i-1) & w \geq w_i, \end{cases} \tag{2}$$
where $i \in [n]$ and $[n] = \{1, 2, \ldots, n\}$. If $s(w, i) = 1$, the corresponding subset can be obtained by backtracking.
Let $s(w - w_i, i-1) = 0$ for $w - w_i < 0$. It is evident that Equation (2) can then be replaced by Equation (3):
$$s(w, i) = s(w - w_i,\, i-1)\ \text{or}\ s(w, i-1). \tag{3}$$
Notably, $s(w, i) = 1$ for $w = 0$.
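For concreteness, the recurrence in Equations (2) and (3) can be tabulated directly in a few lines. The sketch below is our illustrative implementation (the name `subset_sum_dp` is ours, not from the paper); it computes $s(w, i)$ and backtracks one witness subset.

```python
def subset_sum_dp(S, W):
    """Tabulate s(w, i) from Eq. (2)/(3): s(w, i) = 1 iff some subset of the
    first i elements of S sums to w; s(0, i) = 1 for every i."""
    n = len(S)
    s = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        s[i][0] = 1                      # the empty subset sums to 0
    for i in range(1, n + 1):
        wi = S[i - 1]
        for w in range(1, W + 1):
            if w < wi:                   # element i cannot be used
                s[i][w] = s[i - 1][w]
            else:                        # skip element i or use it (Eq. (3))
                s[i][w] = s[i - 1][w] or s[i - 1][w - wi]
    # backtrack a witness subset if s(W, n) = 1
    subset, w = [], W
    if s[n][W]:
        for i in range(n, 0, -1):
            if s[i - 1][w]:              # w already reachable without element i
                continue
            subset.append(S[i - 1])      # otherwise element i must be used
            w -= S[i - 1]
    return s[n][W], subset
```

For the instance of Section 4.1, `subset_sum_dp([8, 34, 4, 12, 5, 3], 7)` returns `(1, [3, 4])`, i.e., the answer YES with the witness subset $\{3, 4\}$.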
Inspired by the work of [37], who employed a recurrent neural network to address the knapsack problem, we present our own recurrent neural network, referred to as SS-NN, to solve the Y/N SSP. It is evident that if $w - w_i < 0$, then $s(w - w_i, i-1) = 0$. However, it is impractical to input all possible values of $w - w_i$ into the neural network. Therefore, we represent all negative numbers by $-1$ and define a novel activation function $\delta(x)$, as shown in Equation (4):
$$\delta(x) = \begin{cases} x & x \geq 0, \\ -1 & x < 0. \end{cases} \tag{4}$$
According to the activation function $\delta(x)$, if $w - w_i \geq 0$, then $\delta(w - w_i) = w - w_i$; conversely, if $w - w_i < 0$, then $\delta(w - w_i) = -1$. The network structure for $\delta(x)$ is shown in Figure 1.
Therefore, we can utilize Equation (5), stated with the activation function $\delta(x)$, in place of Equation (3):
$$s(w, i) = s(\delta(w - w_i),\, i-1)\ \text{or}\ s(w, i-1). \tag{5}$$
Notably, $s(-1, 0) = 0$, $s(0, 0) = 1$, and $s(w^*, 0) = 0$ for $w^* > 0$.
In order to implement the logical disjunction operation required by the Y/N SSP, we have developed a sub-network specifically designed for this purpose. The network structure for $x_1\ \text{or}\ x_2$ is illustrated in Figure 2, with the activation function $h(x)$ being a step function. The precise definition of $h(x)$ is given in Equation (6):
$$h(x) = \begin{cases} 1 & x > 0, \\ 0 & x \leq 0. \end{cases} \tag{6}$$
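The disjunction of two binary inputs can be realized by a single neuron with unit weights and the step activation $h$, which is our reading of the gadget depicted in Figure 2. A minimal numerical sketch (function names are ours) follows.

```python
def delta(x):
    """Custom activation of Eq. (4): identity for x >= 0, -1 otherwise."""
    return x if x >= 0 else -1

def h(x):
    """Step activation of Eq. (6)."""
    return 1 if x > 0 else 0

def logical_or(x1, x2):
    """One neuron with weights (1, 1), bias 0, and activation h computes
    x1 or x2 for binary inputs (our interpretation of Figure 2)."""
    return h(1 * x1 + 1 * x2)

assert [logical_or(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 1]
```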
The recurrent structure based on SS-NN is shown in Figure 3. An overview of the entire structure of SS-NN can be seen in Figure 4.
To clearly demonstrate the recurrent structure of SS-NN, we omit the index $i$ in the following description of the recurrent neural network. Equation (5) is replaced by Equation (7):
$$s_{out}(w) = s_{in}(\delta(w - w_{in}))\ \text{or}\ s_{in}(w). \tag{7}$$
The inputs of SS-NN consist of $s_{in}(-1), s_{in}(0), s_{in}(1), \ldots, s_{in}(W)$, and $w_{in}$. Therefore, the input layer has $W + 3$ neurons, where the values $s_{in}(-1) = 0$ and $s_{in}(0) = 1$ are constants.

2.1. The First Hidden Layer of SS-NN

The first hidden layer is utilized for computing $\delta(w - w_{in})$ and consists of $W + 2$ neurons, denoted as $F_1(w, w_{in})$ for $w \in \{-1, 0\} \cup [W]$, where $w_{in} \in \mathbb{N}^+$, $[W] = \{1, 2, \ldots, W\}$, and $\mathbb{N}^+$ represents the set of positive integers. The neurons are defined as follows:
$$F_1(w, w_{in}) = \delta(w - w_{in}), \quad w \in \{-1, 0\} \cup [W]. \tag{8}$$
Theorem 1.
If $w \in \{-1, 0\} \cup [W]$ and $w_{in} \in \mathbb{N}^+$, then $F_1(w, w_{in})$ belongs to the set $\{-1, 0, 1, \ldots, w-1\}$.
Proof. 
We observe that $F_1(w, w_{in}) = \delta(w - w_{in})$. Thus, if $w - w_{in} < 0$, we have $F_1(w, w_{in}) = -1$ according to the definition of the $\delta$ function. If $w - w_{in} \geq 0$, then $F_1(w, w_{in}) = w - w_{in}$. Since $w_{in} \in \mathbb{N}^+$, the maximum value of $w - w_{in}$ is $w - 1$. Therefore, we conclude that the value of $F_1(w, w_{in})$ must belong to $\{-1, 0, 1, \ldots, w-1\}$. □

2.2. The Second Hidden Layer of SS-NN

The second hidden layer contains $2(W+2)^2$ neurons, denoted as $F_2^+(w, w')$ and $F_2^-(w, w')$, where $w$ and $w'$ belong to the set $\{-1, 0\} \cup [W]$:
$$F_2^+(w, w') = \delta\big(2(F_1(w, w_{in}) - w')\big), \tag{9}$$
$$F_2^-(w, w') = \delta\big(2(w' - F_1(w, w_{in}))\big), \tag{10}$$
where $\delta(x)$ here denotes the ReLU activation function. The objective of the second hidden layer is to ascertain whether the value $w'$ is equal to $F_1(w, w_{in})$ or not.
Theorem 2.
$F_2^+(w, w') + F_2^-(w, w') = 0$ if and only if $w' = F_1(w, w_{in})$. Otherwise, we have $F_2^+(w, w') + F_2^-(w, w') \geq 2$.
Proof. 
Clearly, if $w' = F_1(w, w_{in})$, then $F_1(w, w_{in}) - w' = 0$ and $w' - F_1(w, w_{in}) = 0$. Consequently, $F_2^+(w, w') = 0$ and $F_2^-(w, w') = 0$. Therefore, it follows that $F_2^+(w, w') + F_2^-(w, w') = 0$.
If $F_2^+(w, w') + F_2^-(w, w') = 0$, then, given that $F_2^+(w, w') \geq 0$ and $F_2^-(w, w') \geq 0$, we conclude that $F_2^+(w, w') = 0$ and $F_2^-(w, w') = 0$. Therefore, $w' = F_1(w, w_{in})$.
If $w' \neq F_1(w, w_{in})$, then either $w' > F_1(w, w_{in})$ or $w' < F_1(w, w_{in})$. In the case $w' > F_1(w, w_{in})$, it follows that $w' - F_1(w, w_{in}) \geq 1$, since $w' \in \{-1, 0\} \cup [W]$ and $F_1(w, w_{in}) \in \{-1, 0, 1, \ldots, w-1\}$ are integers. Consequently, $F_2^+(w, w') = 0$ and $F_2^-(w, w') \geq 2$, so $F_2^+(w, w') + F_2^-(w, w') \geq 2$. Conversely, if $w' < F_1(w, w_{in})$, we likewise conclude that $F_2^+(w, w') + F_2^-(w, w') \geq 2$.
Now let $F_2^+(w, w') + F_2^-(w, w') \geq 2$ and assume that $w' = F_1(w, w_{in})$. Then $F_2^+(w, w') + F_2^-(w, w') = 0$, which contradicts the inequality $F_2^+(w, w') + F_2^-(w, w') \geq 2$. Thus, the hypothesis is invalid and we conclude that $w' \neq F_1(w, w_{in})$. □

2.3. The Third Hidden Layer of SS-NN

The third hidden layer contains $(W+2)^2$ neurons, denoted as $F_3(w, k)$ for $w, k \in \{-1, 0\} \cup [W]$. These neurons are defined as follows:
$$F_3(w, k) = \delta\big(s_{in}(k) - F_2^+(w, k) - F_2^-(w, k)\big). \tag{11}$$
Theorem 3.
For $w, k \in \{-1, 0\} \cup [W]$ and $w_{in} \in \mathbb{N}^+$, we have $F_3(w, k) = s_{in}(k)$ if and only if $k = F_1(w, w_{in})$. Otherwise, it follows that $F_3(w, k) = 0$.
Proof. 
If $k = F_1(w, w_{in})$, then by Theorem 2 we have $F_2^+(w, k) + F_2^-(w, k) = 0$. This leads to the conclusion that $s_{in}(k) - F_2^+(w, k) - F_2^-(w, k) = s_{in}(k)$. Therefore, if $k = F_1(w, w_{in})$, it follows that $F_3(w, k) = s_{in}(k)$, since $s_{in}(k) \in \{0, 1\}$.
Conversely, if $k \neq F_1(w, w_{in})$, we obtain from Theorem 2 that $F_2^+(w, k) + F_2^-(w, k) \geq 2$. Since $s_{in}(k) \in \{0, 1\}$, this implies that $s_{in}(k) - F_2^+(w, k) - F_2^-(w, k) < 0$. Thus, we conclude that $F_3(w, k) = 0$. □

2.4. The Fourth Hidden Layer of SS-NN

The fourth hidden layer contains $W + 2$ neurons, denoted as $F_4(w)$ for $w \in \{-1, 0\} \cup [W]$:
$$F_4(w) = \sigma\Big(\sum_{k=-1}^{W} F_3(w, k)\Big). \tag{12}$$
Theorem 4.
For each $w \in \{-1, 0\} \cup [W]$ and $w_{in} \in \mathbb{N}^+$, we have $s_{in}(\delta(w - w_{in})) = F_4(w) = \sigma\big(\sum_{k=-1}^{W} F_3(w, k)\big)$.
Proof. 
According to the definition of $\delta(x)$, we have
$$s_{in}(\delta(w - w_{in})) = \begin{cases} 0 & w - w_{in} < 0, \\ s_{in}(w - w_{in}) & w - w_{in} > 0, \\ 1 & w - w_{in} = 0. \end{cases} \tag{13}$$
On the other hand, if $k > F_1(w, w_{in})$ or $k < F_1(w, w_{in})$, we have $F_2^+(w, k) + F_2^-(w, k) \geq 2$. Since $s_{in}(k) \in \{0, 1\}$, this implies that $F_3(w, k) = 0$. Consequently, we determine that
$$\sum_{k=-1,\, k \neq F_1(w, w_{in})}^{W} F_3(w, k) = 0.$$
Thus, the value of $\sum_{k=-1}^{W} F_3(w, k)$ is determined by the single term with $k = F_1(w, w_{in})$. We can draw the following conclusions.
1. If $k = F_1(w, w_{in}) = 0$, then it follows that $w - w_{in} = 0$ and $F_3(w, k) = \delta(s_{in}(k)) = \delta(s_{in}(0)) = 1$. Hence, we obtain $F_4(w) = \sigma\big(\sum_{k=-1}^{W} F_3(w, k)\big) = \sigma(1) = 1$.
2. If $k = F_1(w, w_{in}) = -1$, then we conclude that $w - w_{in} < 0$ and $F_3(w, k) = \delta(s_{in}(k)) = \delta(s_{in}(-1)) = 0$. Therefore, we have $F_4(w) = \sigma\big(\sum_{k=-1}^{W} F_3(w, k)\big) = \sigma(0) = 0$.
3. If $k = F_1(w, w_{in}) > 0$, this implies that $k = w - w_{in} > 0$ and we have $F_3(w, k) = \delta(s_{in}(k)) = \delta(s_{in}(w - w_{in})) = s_{in}(w - w_{in})$. Thus, we conclude that $F_4(w) = \sigma\big(\sum_{k=-1}^{W} F_3(w, k)\big) = \sigma(s_{in}(w - w_{in})) = s_{in}(w - w_{in})$.
In summary, it holds in all cases that $F_4(w) = s_{in}(\delta(w - w_{in}))$. This indicates that the fourth hidden layer accurately computes the value of $s_{in}(\delta(w - w_{in}))$. □

2.5. The Fifth Hidden Layer of SS-NN

The fifth hidden layer is used to compute $s_{in}(\delta(w - w_{in}))\ \text{or}\ s_{in}(w)$, and it contains $W + 2$ neurons, denoted as $F_5(w)$ for $w \in \{-1, 0\} \cup [W]$. The definitions are as follows:
$$F_5(w) = s_{in}(\delta(w - w_{in}))\ \text{or}\ s_{in}(w), \tag{14}$$
$$s_{out}(w) = F_5(w). \tag{15}$$
Theorem 5.
For each $w \in \{-1, 0\} \cup [W]$ and $w_{in} \in \mathbb{N}^+$, if $w < w_{in}$, then we have $F_5(w) = s_{in}(w)$. Conversely, if $w \geq w_{in}$, it follows that $F_5(w) = s_{in}(w - w_{in})\ \text{or}\ s_{in}(w)$.
Proof. 
If $w < w_{in}$, i.e., $w - w_{in} < 0$, then we have $\delta(w - w_{in}) = -1$. Thus, we obtain $s_{in}(\delta(w - w_{in})) = s_{in}(-1) = 0$. Consequently, it follows that $F_5(w) = s_{in}(\delta(w - w_{in}))\ \text{or}\ s_{in}(w) = 0\ \text{or}\ s_{in}(w) = s_{in}(w)$.
Conversely, if $w \geq w_{in}$, i.e., $w - w_{in} \geq 0$, then we determine that $\delta(w - w_{in}) = w - w_{in}$, leading to $s_{in}(\delta(w - w_{in})) = s_{in}(w - w_{in})$. Therefore, we conclude that $F_5(w) = s_{in}(w - w_{in})\ \text{or}\ s_{in}(w)$. □
Theorem 6.
Let $S = \{w_1, \ldots, w_n\}$, where $w_i \in \mathbb{N}^+$ ($1 \leq i \leq n$), and let $W$ be a positive integer. We can utilize the SS-NN to iteratively compute the value of $s(w, i)$ for $w \in \{-1, 0\} \cup [W]$, $i \in [n]$.
Proof. 
We observe that $s(-1, 0) = 0$, $s(0, 0) = 1$, and $s(w, 0) = 0$ for $w > 0$. In the first iteration, $w_{in} = w_1$. Since $0 < w_{in}$, it follows that $s(-1, 1) = s(-1, 0) = 0$ and $s(0, 1) = s(0, 0) = 1$ according to Theorem 4. For $w \in [W]$, if $w < w_{in}$, we have $s(w, 1) = s(w, 0)$; otherwise, we obtain $s(w, 1) = s(w - w_{in}, 0)\ \text{or}\ s(w, 0)$ by Theorems 4 and 5. In the second iteration, since $w_{in} = w_2$, we have $s(-1, 2) = s(-1, 1) = 0$ and $s(0, 2) = s(0, 1) = 1$. For any $w \in [W]$, if $w < w_{in}$, it follows that $s(w, 2) = s(w, 1)$; otherwise, we conclude that $s(w, 2) = s(w - w_{in}, 1)\ \text{or}\ s(w, 1)$, and so on. In the final iteration, where $w_{in} = w_n$, we have $s(-1, n) = s(-1, n-1) = 0$ and $s(0, n) = s(0, n-1) = 1$. For any $w \in [W]$, if $w < w_{in}$, we have $s(w, n) = s(w, n-1)$; otherwise, we have $s(w, n) = s(w - w_{in}, n-1)\ \text{or}\ s(w, n-1)$. □
The depth of a single SS-NN block is seven, comprising five hidden layers, one input layer, and one output layer. Consequently, the overall depth of the SS-NN designed for solving SSPs is $7n$, where $n$ represents the number of elements in the set; thus, the order of depth can be expressed as $O(7n)$. The width of the SS-NN is determined by the value of $W$. Given that the hidden layers with maximum width are the second and third layers, the width order for a single SS-NN block is $O(2(W+2)^2)$. Therefore, when unfolding the SS-NN and treating it as a single neural network executing the entire dynamic program, we arrive at a width complexity of $O(2n(W+2)^2)$.
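To make the layer-by-layer construction concrete, the sketch below simulates one SS-NN block, Equations (8)–(15), numerically. It is our illustrative reading of the construction (not the authors' code), with ReLU standing in for the $\delta$ activations of the intermediate layers where the text indicates it, and it reproduces the iterations of the example in Section 4.1.

```python
def relu(x):
    return max(x, 0)

def delta(x):
    # custom activation of Eq. (4)
    return x if x >= 0 else -1

def step(x):
    # step activation h of Eq. (6)
    return 1 if x > 0 else 0

def ss_nn_block(s_in, w_in, W):
    """One SS-NN block: maps s_in(w), w in {-1, 0} ∪ [W], to s_out(w).
    s_in is a dict keyed by w; illustrative simulation of Eqs. (8)-(15)."""
    ws = [-1, 0] + list(range(1, W + 1))
    # first hidden layer, Eq. (8)
    F1 = {w: delta(w - w_in) for w in ws}
    # second hidden layer, Eqs. (9)-(10)
    F2p = {(w, k): relu(2 * (F1[w] - k)) for w in ws for k in ws}
    F2m = {(w, k): relu(2 * (k - F1[w])) for w in ws for k in ws}
    # third hidden layer, Eq. (11): only k = F1(w, w_in) can survive
    F3 = {(w, k): relu(s_in[k] - F2p[w, k] - F2m[w, k]) for w in ws for k in ws}
    # fourth hidden layer, Eq. (12): recovers s_in(delta(w - w_in))
    F4 = {w: relu(sum(F3[w, k] for k in ws)) for w in ws}
    # fifth hidden layer, Eqs. (14)-(15): logical OR via the step gadget
    s_out = {w: step(F4[w] + s_in[w]) for w in ws}
    s_out[-1], s_out[0] = 0, 1          # constants of the construction
    return s_out

# Example from Section 4.1: S = {8, 34, 4, 12, 5, 3}, W = 7
W = 7
s = {w: 0 for w in range(-1, W + 1)}
s[0] = 1
for w_i in [8, 34, 4, 12, 5, 3]:
    s = ss_nn_block(s, w_i, W)
print([s[w] for w in range(1, W + 1)])   # 1 at w = 3, 4, 5, 7, as in Table 1
```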

3. RNNs for the Approximate Solution to the SSP

It is not hard to see that computing the exact solution of an SSP requires a network whose width grows with the target value $W$. In order to overcome this drawback, we propose a neural network called ASS-NN that determines an approximate solution for a variant of the SSP. Similar to [37], ASS-NN uses fewer neurons but loses optimality. Let $S = \{w_1, \ldots, w_n\}$ be a subset of $\mathbb{N}^+$, $P \in \mathbb{N}^+$ be a fixed number, $S^* = \{w_{j_1}, \ldots, w_{j_m}\}$ be a subset of $S$, and $V^* = \sum_{k=1}^{m} w_{j_k}$. In this section, our task is to determine a subset $S^*$ such that $V^*$ is closest to $P$ but not more than $P$. This kind of SSP is more common in practical applications. Let $w_i^* = \sum_{k=1}^{i} w_k$ be the total value of the first $i$ elements. Once $w_i^*$ exceeds $P$, we do not store a required value for every possible sum but instead round the sums. The rounding granularity of $w_i^*$ is denoted by $r_i = \operatorname{roundup}(\max\{1, w_i^*/M\})$, where $\operatorname{roundup}(x)$ is the upward rounding function and $M \in \mathbb{N}^+$. Let $p(w, i)$ be the total value of a subset of the first $i$ elements of $S$ that is closest to $w r_i$ but not more than $w r_i$, where $w \in [M]$. The values of $p(w, i)$ can be computed recursively by Equation (16):
$$p(w, i) = \begin{cases} \max\{p^{(1)},\, p^{(2)}\} & \exists\, w^{(2)} \in [M] \text{ such that } p^{(2)} \leq w r_i, \\ p^{(1)} & \text{otherwise}, \end{cases} \tag{16}$$
where $p^{(1)} = p(w^{(1)}, i-1)$ and $p^{(2)} = p(w^{(2)}, i-1) + w_i$. Here, $w^{(1)} \in [M]$ is the largest possible integer satisfying the condition $p(w^{(1)}, i-1) \leq w r_i$, and $w^{(2)} \in [M]$ is the largest possible integer satisfying the condition $p(w^{(2)}, i-1) + w_i \leq w r_i$. In particular, if $i = 0$, then $p(w, i) = 0$. In order to use a recurrent neural network to determine the solution of Equation (16), we introduce the ReLU activation function to reconstruct the dynamic programming for the SSP. The solution can be computed recursively by Equation (17):
$$p(w, i) = \max\big\{p(w^{(1)}, i-1),\ \delta\big(p(w^{(2)}, i-1) + w_i\big)\big\}, \tag{17}$$
where $p(w, i) = -\infty$ for $w \leq 0$ and $p(w, 0) = 0$; $w^{(1)}$ is the largest possible integer satisfying the condition $p(w^{(1)}, i-1) \leq w r_i$, and $w^{(2)}$ is the largest possible integer satisfying the condition $p(w^{(2)}, i-1) + w_i \leq w r_i$.
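A compact way to see how the rounding works is to simulate the recurrence of Equation (17) directly. The sketch below is our illustrative implementation (function and variable names are ours), with `math.ceil` playing the role of the roundup function, ReLU in place of $\delta$, and $-\infty$ encoding the unreachable state $w \leq 0$; it reproduces the $p_{out}$ table of the worked example in Section 4.2.

```python
import math

NEG_INF = float("-inf")

def approx_subset_sum_dp(S, M):
    """Rounded DP of Eq. (17): after processing the first i elements, p[w]
    approximates the best subset sum not exceeding w * r_i, for w = 1..M."""
    relu = lambda x: max(x, 0)
    p = {w: (NEG_INF if w <= 0 else 0) for w in range(0, M + 1)}   # i = 0
    total = 0
    for w_i in S:
        total += w_i
        r_new = math.ceil(max(1, total / M))        # rounding granularity r_i
        p_new = {0: NEG_INF}
        for w in range(1, M + 1):
            cap = w * r_new
            # largest w1 with p[w1] <= cap (option: skip element i) ...
            w1 = max(k for k in range(0, M + 1) if p[k] <= cap)
            # ... and largest w2 with p[w2] + w_i <= cap (option: use element i)
            cand2 = [k for k in range(0, M + 1) if p[k] + w_i <= cap]
            w2 = max(cand2) if cand2 else 0
            p_new[w] = max(p[w1], relu(p[w2] + w_i))
        p = p_new
    return p

# Example from Section 4.2: S = {7, 34, 4, 12, 5, 3}, M = 6
print(approx_subset_sum_dp([7, 34, 4, 12, 5, 3], 6))
# last row of Table 2: {0: -inf, 1: 10, 2: 22, 3: 31, 4: 43, 5: 55, 6: 65}
```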
Theorem 7.
For $w \in [M]$ and $i \geq 1$, the value $p(w, i)$ in Equation (17) is equal to the value $p(w, i)$ in Equation (16).
Proof. 
Obviously, if $w \in [M]$, then $w r_i > 0$. Since $w^{(1)}$ is the largest possible integer with $p(w^{(1)}, i-1) \leq w r_i$, it is easy to see that $p(w, i) \geq 0$ for $w \in [M]$ and $i \geq 1$ using Equation (17). Furthermore, we observe that $r_{i-1} \leq r_i$. Hence, there exists a value $w^{(1)} \in [M]$ satisfying the condition $p(0, i-1) < p(w^{(1)}, i-1) \leq w^{(1)} r_{i-1} \leq w r_i$. On the other hand, $w^{(1)}$ is the largest possible integer satisfying the condition $p(w^{(1)}, i-1) \leq w r_i$ according to the definition of Equation (16). Thus, we conclude that the value of $p(w^{(1)}, i-1)$ in Equation (17) is equal to the value of $p(w^{(1)}, i-1)$ in Equation (16).
For Equation (17), if no $w^{(2)} \in [M]$ satisfies the condition $p(w^{(2)}, i-1) + w_i \leq w r_i$, then we have $w^{(2)} = 0$. This leads to $p(0, i-1) + w_i \leq w r_i$. Since $p(0, i-1) = -\infty$, it follows that $\delta(p(w^{(2)}, i-1) + w_i) = 0$. Consequently, we obtain $p(w, i) = \max\{p(w^{(1)}, i-1), \delta(p(w^{(2)}, i-1) + w_i)\} = p(w^{(1)}, i-1)$. For Equation (16), if no $w^{(2)} \in [M]$ satisfies the condition $p(w^{(2)}, i-1) + w_i \leq w r_i$, we determine that $p(w, i) = p(w^{(1)}, i-1)$. Thus, the value of $p(w, i)$ in Equation (17) is equal to the value of $p(w, i)$ in Equation (16).
For Equation (17), let $w^{(2)} \in [M]$ be the largest possible integer satisfying the condition $p(w^{(2)}, i-1) + w_i \leq w r_i$. In this case, we have $\delta(p(w^{(2)}, i-1) + w_i) = p(w^{(2)}, i-1) + w_i$ when $p(w^{(2)}, i-1) + w_i \geq 0$. Consequently, we obtain $p(w, i) = \max\{p(w^{(1)}, i-1), \delta(p(w^{(2)}, i-1) + w_i)\} = \max\{p(w^{(1)}, i-1), p(w^{(2)}, i-1) + w_i\}$. For Equation (16), if $w^{(2)} \in [M]$ is the largest possible integer satisfying the condition $p(w^{(2)}, i-1) + w_i \leq w r_i$, it follows that $p(w, i) = \max\{p(w^{(1)}, i-1), p(w^{(2)}, i-1) + w_i\}$. Thus, we also have that the value of $p(w, i)$ in Equation (17) is equal to the value of $p(w, i)$ in Equation (16). □
Furthermore, in order to clearly show the recurrent structure of ASS-NN, we do not use the index $i$ in the following:
$$p_{out}(w) = \max\big\{p_{in}(w^{(1)}),\ \delta\big(p_{in}(w^{(2)}) + w_{in}\big)\big\}, \quad w \in [M], \tag{18}$$
where $w_{in}$ represents the value of element $i$, $p_{in}(w^{(1)})$ denotes the option of not using element $i$, and $\delta(p_{in}(w^{(2)}) + w_{in})$ represents the option of using element $i$. To compute $w^{(1)}$ and $w^{(2)}$, both $w_{in}^*$ and $w_{in}$ are utilized to determine the rounding granularities in ASS-NN.
In this section, we propose a recurrent neural network, referred to as ASS-NN, designed to approximate the solution of Equation (18). The recurrent structure of the proposed neural network is shown in Figure 5. Figure 6 presents the comprehensive structure of ASS-NN, where the first hidden layer is responsible for calculating the previous and current rounding granularities, denoted as $r_{old}$ and $r_{new}$. The second hidden layer selects the integers $w^{(1)}$ and $w^{(2)}$. The third hidden layer computes the values $p_{in}(w^{(1)})$ and $p_{in}(w^{(2)})$, while the fourth hidden layer determines $\max\{p_{in}(w^{(1)}), \delta(p_{in}(w^{(2)}) + w_{in})\}$.
The ASS-NN thus comprises four hidden layers, which are detailed in the remainder of this section, demonstrating its capability to approximate solutions for the SSP.

3.1. The First Layer of ASS-NN

The first layer is responsible for calculating the rounding granularities $r_{old}$ and $r_{new}$, which are defined as follows:
$$F_1^1 = \delta\Big(\frac{w_{in}^*}{M} - 1\Big), \quad F_1^2 = \delta\Big(\frac{w_{in}^* + w_{in}}{M} - 1\Big), \quad r_{old} = F_1^1 + 1, \quad r_{new} = F_1^2 + 1. \tag{19}$$
The correctness of the computing method was proven in [37].

3.2. The Second Layer of ASS-NN

The second layer consists of $2M^2 + M$ neurons, denoted as $F_2^{1+}(w, k)$ and $F_2^{1-}(w, k)$ for $w \in [M]$, $k \in \{0, 1, \ldots, M\}$ with the condition $w \leq k$, and $F_2^{2+}(w, k)$ and $F_2^{2-}(w, k)$ for $w \in [M]$, $k \in \{0, 1, \ldots, M\}$ under the constraint $w \geq k$:
$$F_2^{1+}(w, k) = \delta\big(M r_{max}\,(p_{in}(k) - w r_{new})\big), \tag{20}$$
$$F_2^{1-}(w, k) = \delta\big(M r_{max}\,(w r_{new} - p_{in}(k+1)) + r_{max}\big), \tag{21}$$
$$F_2^{2+}(w, k) = \delta\big(M r_{max}\,(p_{in}(k) + w_{in} - w r_{new})\big), \tag{22}$$
$$F_2^{2-}(w, k) = \delta\big(M r_{max}\,(w r_{new} - p_{in}(k+1) - w_{in}) + r_{max}\big). \tag{23}$$
For a fixed $w \in [M]$, let $w^{(1)}$ and $w^{(2)}$ denote the largest possible integers such that $p_{in}(w^{(1)}) \leq w r_{new}$ and $p_{in}(w^{(2)}) + w_{in} \leq w r_{new}$, respectively. The purpose of the second layer is to identify the integers $w^{(1)}$ and $w^{(2)}$. The validity of this hidden layer is guaranteed by the following two theorems.
Theorem 8.
For every $w, k \in [M]$ with $w \leq k$, we have $F_2^{1+}(w, k) + F_2^{1-}(w, k) = 0$ if and only if $k = w^{(1)}$. Otherwise, it follows that $F_2^{1+}(w, k) + F_2^{1-}(w, k) \geq r_{max}$.
Proof. 
Clearly, $F_2^{1+}(w, k) = 0$ if and only if $p_{in}(k) \leq w r_{new}$. Simultaneously, since $p_{in}(k+1)$ and $r_{new}$ are integer multiples of $\frac{1}{M}$, we have $F_2^{1-}(w, k) = 0 \Leftrightarrow w r_{new} + \frac{1}{M} \leq p_{in}(k+1) \Leftrightarrow w r_{new} < p_{in}(k+1)$, and no integer $k^* > k$ satisfies $p_{in}(k^*+1) \leq w r_{new}$.
If $F_2^{1+}(w, k) + F_2^{1-}(w, k) \neq 0$, then either $F_2^{1+}(w, k) \neq 0$ or $F_2^{1-}(w, k) \neq 0$. Assuming $F_2^{1+}(w, k) \neq 0$, we derive the inequality $p_{in}(k) - w r_{new} > 0$. Given that both $p_{in}(k)$ and $r_{new}$ are integer multiples of $\frac{1}{M}$, it can be concluded that $p_{in}(k) - w r_{new} \geq \frac{1}{M}$. Consequently, this leads to the result $M r_{max}\,(p_{in}(k) - w r_{new}) \geq r_{max}$. Thus, we obtain $F_2^{1+}(w, k) + F_2^{1-}(w, k) \geq r_{max}$. If $F_2^{1-}(w, k) \neq 0$, we have $w r_{new} \geq p_{in}(k+1)$. This implies that $M r_{max}\,(w r_{new} - p_{in}(k+1)) \geq 0$. Hence, we conclude that $\delta\big(M r_{max}\,(w r_{new} - p_{in}(k+1)) + r_{max}\big) \geq r_{max}$. Therefore, the claim is proven. □
Theorem 9.
For each $w, k \in [M]$ with $w \geq k$, we have $F_2^{2+}(w, k) + F_2^{2-}(w, k) = 0$ if and only if $k = w^{(2)}$. Otherwise, it follows that $F_2^{2+}(w, k) + F_2^{2-}(w, k) \geq r_{max}$.
Proof. 
It is clear that $F_2^{2+}(w, k) = 0$ if and only if $p_{in}(k) + w_{in} \leq w r_{new}$. Since $p_{in}(k+1)$ and $r_{new}$ are integer multiples of $\frac{1}{M}$, it follows that $F_2^{2-}(w, k) = 0 \Leftrightarrow w r_{new} + \frac{1}{M} \leq p_{in}(k+1) + w_{in} \Leftrightarrow w r_{new} < p_{in}(k+1) + w_{in}$, and no integer $k^* > k$ satisfies $p_{in}(k^*+1) + w_{in} \leq w r_{new}$.
If $F_2^{2+}(w, k) + F_2^{2-}(w, k) \neq 0$, then either $F_2^{2+}(w, k) \neq 0$ or $F_2^{2-}(w, k) \neq 0$. Assuming $F_2^{2+}(w, k) \neq 0$, we have the inequality $w r_{new} < p_{in}(k) + w_{in}$. Since $r_{old}$, $r_{new}$, and $w_{in}$ are integer multiples of $\frac{1}{M}$, we have $p_{in}(k) + w_{in} - w r_{new} \geq \frac{1}{M}$. Thus, we obtain $M r_{max}\,(p_{in}(k) + w_{in} - w r_{new}) \geq r_{max}$ and $F_2^{2+}(w, k) + F_2^{2-}(w, k) \geq r_{max}$. If $F_2^{2-}(w, k) \neq 0$, we have $w r_{new} \geq p_{in}(k+1) + w_{in}$. This implies that $M r_{max}\,(w r_{new} - p_{in}(k+1) - w_{in}) + r_{max} \geq r_{max}$. Hence, we conclude that $F_2^{2+}(w, k) + F_2^{2-}(w, k) \geq r_{max}$. □

3.3. The Third Hidden Layer of ASS-NN

In the third hidden layer, there are $M^2 + M$ hidden neurons, denoted as $F_3^1(w, k)$ for $w, k \in [M]$ with $w \leq k$, and $F_3^2(w, k)$ for $w, k \in [M]$ with $w \geq k$. These neurons are defined as follows:
$$F_3^1(w, k) = \delta\big(M r_{max} - p_{in}(k) - M(F_2^{1+}(w, k) + F_2^{1-}(w, k))\big), \qquad h^1(w) = M r_{max} - \sum_{k=w}^{M} F_3^1(w, k), \quad w \in [M], \tag{24}$$
$$F_3^2(w, k) = \delta\big(k - M(F_2^{2+}(w, k) + F_2^{2-}(w, k))\big), \qquad h^2(w) = p_{in}\Big(\sum_{k=1}^{w} F_3^2(w, k)\Big), \quad w \in [M]. \tag{25}$$
The function of the third hidden layer is to calculate the values $p_{in}(w^{(1)})$ and $p_{in}(w^{(2)})$. The accuracy of this layer is guaranteed by the following two theorems.
Theorem 10.
For each $w \in [M]$, if $w \leq w^{(1)} \leq M$, we have $h^1(w) = p_{in}(w^{(1)})$. If $w^{(1)} > M$, it follows that $h^1(w) = M r_{max}$.
Proof. 
If $w \leq w^{(1)} \leq M$, then by Theorem 8, we have $F_3^1(w, w^{(1)}) = M r_{max} - p_{in}(w^{(1)})$ and $F_3^1(w, k) = 0$ for each $k \neq w^{(1)}$. Consequently, it follows that $h^1(w) = p_{in}(w^{(1)})$. In the case where $w^{(1)} > M$, it holds that $F_2^{1+}(w, k) + F_2^{1-}(w, k) \geq r_{max}$ for each $k$. This implies that $F_3^1(w, k) = 0$. Thus, we conclude that $h^1(w) = M r_{max}$. □
Theorem 11.
For each $w \in [M]$, if $w^{(2)} \geq 1$, then $\sum_{k=1}^{w} F_3^2(w, k) = w^{(2)}$. If $w^{(2)} \leq 0$, then $\sum_{k=1}^{w} F_3^2(w, k) = 0$.
Proof. 
If $w^{(2)} \geq 1$, then $F_3^2(w, w^{(2)}) = w^{(2)}$ and $F_3^2(w, k) = 0$ for each $k \neq w^{(2)}$. Thus, we have $\sum_{k=1}^{w} F_3^2(w, k) = w^{(2)}$. If $w^{(2)} \leq 0$, then by Theorem 9, it follows that $F_2^{2+}(w, k) + F_2^{2-}(w, k) \geq r_{max}$ for each $k$. This implies $F_3^2(w, k) = 0$. Thus, we obtain $\sum_{k=1}^{w} F_3^2(w, k) = 0$. □

3.4. The Final Hidden Layer of ASS-NN

Obviously, the value $p_{in}(w^{(2)})$ is equal to $h^2(w)$ according to Theorem 11. The function of the final hidden layer is to calculate the value of $\max\{h^1(w), \delta(w_{in} + h^2(w))\}$, and it comprises $M$ neurons, denoted as $F_4(w)$ for $w \in [M]$. Finally, we output the value of $p_{out}(w)$ for $w \in [M]$:
$$F_4(w) = \delta\big(h^1(w) - \delta(w_{in} + h^2(w))\big), \quad w \in [M], \qquad p_{out}(w) = F_4(w) + \delta\big(w_{in} + h^2(w)\big), \quad w \in [M]. \tag{26}$$
Theorem 12.
For each $w \in [M]$, the value $p_{out}(w)$ is equal to the value $\max\{h^1(w), \delta(w_{in} + h^2(w))\}$.
Proof. 
If $h^1(w) \geq \delta(w_{in} + h^2(w))$, then $F_4(w) = h^1(w) - \delta(w_{in} + h^2(w))$. Consequently, we have $p_{out}(w) = F_4(w) + \delta(w_{in} + h^2(w)) = h^1(w)$. Conversely, if $h^1(w) < \delta(w_{in} + h^2(w))$, then $F_4(w) = 0$. This leads to the conclusion that $p_{out}(w) = F_4(w) + \delta(w_{in} + h^2(w)) = \delta(w_{in} + h^2(w))$. Thus, the value of $p_{out}(w)$ is equal to the value $\max\{h^1(w), \delta(w_{in} + h^2(w))\}$. □
Let $S = \{w_1, \ldots, w_n\}$ and let $M$ be a fixed positive integer. Let $w_{OPT}$ represent the actual optimal solution to the SSP that does not exceed $w r_n$, and let the corresponding subset be denoted as $W_{OPT}$. Furthermore, define $W_i^{OPT} = W_{OPT} \cap [i]$ and $w_i^{OPT} = \sum_{j \in W_i^{OPT}} w_j$.
Theorem 13.
For $i \in [n]$ and $w \geq w_i^{OPT}/r_{i-1}$, we have $p(w, i) \geq w_i^{OPT}$.
Proof. 
If $w \geq w_i^{OPT}/r_{i-1}$, we have
$$w r_i \geq \Big(\frac{w_i^{OPT}}{r_{i-1}}\Big) r_i = w_i^{OPT}\,\frac{r_i}{r_{i-1}} \geq w_i^{OPT}. \tag{27}$$
Given that $p(w, i)$ is the largest attainable subset sum of the first $i$ elements that does not exceed $w r_i$, and $w_i^{OPT} \leq w r_i$ is attainable, it follows that
$$p(w, i) \geq w_i^{OPT}. \tag{28}$$
□
Let $w_{NN}$ be the approximate solution obtained by ASS-NN and $w_{OPT}$ be the actual optimal solution. We have the following theorem.
Theorem 14.
The values of $w_{NN}$ and $w_{OPT}$ satisfy the following inequality: $w_{NN} \geq w_{OPT}(1 - \varepsilon)$.
Proof. 
According to Theorem 13, it follows that $p(w_{OPT}/r_{n-1},\, n) \geq w_n^{OPT}$, and we have the following inequalities:
$$w_{NN} \geq w_{OPT} - r_n (n-1) - r_n \geq w_{OPT} - (n+1) r_n. \tag{29}$$
Given that $w_{OPT} \leq w_n^*$ and $r_n = w_n^*/M$, we have
$$\frac{w_{NN}}{w_{OPT}} \geq 1 - \frac{n(n+1)}{M} \geq 1 - \varepsilon. \tag{30}$$
Consequently, we have $w_{NN} \geq w_{OPT}(1 - \varepsilon)$. □
The depth of a single ASS-NN block is six, comprising four hidden layers, one input layer, and one output layer. Consequently, the depth of the ASS-NN designed for solving SSPs is $6n$, where $n$ denotes the number of set elements; thus, the order of depth can be expressed as $O(6n)$. The width of the ASS-NN is determined by the value of $M$. Given that the hidden layers with maximum width are the second and third layers, we conclude that the width order for a single ASS-NN block is $O(2(M+2)^2)$. Therefore, when unfolding the ASS-NN and treating it as a single neural network executing the entire dynamic program, we derive a width complexity of $O(2n(M+2)^2)$.

4. Verification and Analysis with Examples

4.1. Example for SS-NN

To demonstrate that the SS-NN can effectively solve the SSP, we consider an illustrative example. Let $S = \{w_1 = 8, w_2 = 34, w_3 = 4, w_4 = 12, w_5 = 5, w_6 = 3\}$ and $W = 7$. The solution process of SS-NN is described as follows.
Iteration 1: The inputs of SS-NN are $s_{in}(-1, 0), s_{in}(0, 0), s_{in}(1, 0), \ldots, s_{in}(6, 0), s_{in}(7, 0)$, and $w_1$, with corresponding values $0, 1, 0, \ldots, 0$ and 8, respectively. By utilizing the first hidden layer of the SS-NN model, we determine that $F_1(-1, 8) = F_1(0, 8) = \cdots = F_1(6, 8) = F_1(7, 8) = -1$. Furthermore, it follows that $F_4(-1) = F_4(0) = F_4(1) = F_4(2) = \cdots = F_4(6) = F_4(7) = 0$. Consequently, the outputs of SS-NN are $s_{out}(-1, 1), s_{out}(0, 1), s_{out}(1, 1), \ldots, s_{out}(6, 1)$, and $s_{out}(7, 1)$, with corresponding values $0, 1, 0, \ldots, 0$ and 0, respectively.
Iteration 2: The inputs of SS-NN are $0, 1, 0, \ldots, 0$ and 34. According to the SS-NN model, the outputs are $s_{out}(-1, 2), s_{out}(0, 2), s_{out}(1, 2), \ldots, s_{out}(6, 2)$, and $s_{out}(7, 2)$, with corresponding values $0, 1, 0, \ldots, 0$ and 0, respectively.
Iteration 3: The inputs of SS-NN are $0, 1, 0, \ldots, 0$ and 4. According to the SS-NN framework, the corresponding outputs are $0, 1, 0, 0, 0, 1, 0, 0, 0$.
Iteration 4: The inputs for the SS-NN model are $0, 1, 0, 0, 0, 1, 0, 0, 0$ and 12. Based on the SS-NN framework, the corresponding outputs are $0, 1, 0, 0, 0, 1, 0, 0, 0$.
Iteration 5: The inputs for the SS-NN model are $0, 1, 0, 0, 0, 1, 0, 0, 0$ and 5. According to the SS-NN model, the outputs are $0, 1, 0, 0, 0, 1, 1, 0, 0$.
Iteration 6: The inputs of SS-NN are $0, 1, 0, 0, 0, 1, 1, 0, 0$ and 3. Based on the SS-NN model, the outputs are $0, 1, 0, 0, 1, 1, 1, 0, 1$.
The values $s_{out}(w, i)$ computed by the SS-NN are shown in Table 1. It is evident that the results obtained by the SS-NN align with the actual outcomes. Notably, there exist subsets whose elements sum to 3, 4, 5, and 7, respectively.

4.2. Example for ASS-NN

To demonstrate the capability of the ASS-NN in effectively solving the SSP, we consider an illustrative example. Let $S = \{w_1 = 7, w_2 = 34, w_3 = 4, w_4 = 12, w_5 = 5, w_6 = 3\}$ and $M = 6$. The solution process of the ASS-NN is described as follows:
Iteration 1: The inputs of ASS-NN are $p_{in}(0), p_{in}(1), \ldots, p_{in}(6)$, and $w_{in}$, with corresponding values $-\infty, 0, 0, \ldots, 0$ and 7, respectively. We have $r_{old} = 0$ and $r_{new} = 2$, as determined by the first hidden layer. Thus, we determine that $p_{out}(0) = -\infty$, $p_{out}(1) = p_{out}(2) = p_{out}(3) = 0$, and $p_{out}(4) = p_{out}(5) = p_{out}(6) = 7$. To illustrate this process further, let us compute the value of $p_{out}(4)$. By analyzing the second hidden layer of ASS-NN, we obtain $w^{(1)} = 6$ and $w^{(2)} = 6$. Additionally, from the third hidden layer, we have $p_{in}(w^{(1)}) = p_{in}(w^{(2)}) = 0$. It follows that $p_{in}(w^{(1)}) < p_{in}(w^{(2)}) + w_{in}$, and we conclude that the value of $p_{out}(4)$ is equal to 7.
Iteration 2: The inputs to the ASS-NN are $-\infty, 0, 0, 0, 7, 7, 7$ and 34, respectively. We obtain $r_{old} = 2$ and $r_{new} = 7$ from the first hidden layer. Consequently, we obtain the following output values: $p_{out}(0) = -\infty$, $p_{out}(1) = p_{out}(2) = p_{out}(3) = p_{out}(4) = 7$, $p_{out}(5) = 34$, and $p_{out}(6) = 41$. To illustrate the computing process, let us compute the value of $p_{out}(4)$. According to the second hidden layer of ASS-NN, we obtain $w^{(1)} = 6$ and $w^{(2)} = 0$. Furthermore, we have $p_{in}(w^{(1)}) = 7$ and $p_{in}(w^{(2)}) = -\infty$ from the third hidden layer. This implies that $p_{in}(w^{(1)}) > p_{in}(w^{(2)}) + w_{in}$. It follows that $p_{out}(4) = 7$.
Iteration 3: The inputs of ASS-NN are $-\infty, 7, 7, 7, 7, 34, 41$ and 4, respectively. We obtain $r_{old} = 7$ and $r_{new} = 8$ from the first hidden layer. Thus, we have $p_{out}(0) = -\infty$, $p_{out}(1) = 7$, $p_{out}(2) = p_{out}(3) = p_{out}(4) = 11$, $p_{out}(5) = 38$, and $p_{out}(6) = 45$. Let us consider how to compute the value of $p_{out}(4)$. By analyzing the second hidden layer of ASS-NN, we determine that $w^{(1)} = 4$ and $w^{(2)} = 4$. Furthermore, we have $p_{in}(w^{(1)}) = 7$ and $p_{in}(w^{(2)}) = 7$ from the third hidden layer. Given that $p_{in}(w^{(1)}) < p_{in}(w^{(2)}) + w_{in}$, we conclude that the value of $p_{out}(4)$ is equal to 11.
Iteration 4: The inputs of ASS-NN are $-\infty, 7, 11, 11, 11, 38, 45$ and 12, respectively. Utilizing the first hidden layer, we obtain $r_{old} = 8$ and $r_{new} = 10$. Consequently, we have the following output values: $p_{out}(0) = -\infty$, $p_{out}(1) = 7$, $p_{out}(2) = 19$, $p_{out}(3) = 23$, $p_{out}(4) = 38$, $p_{out}(5) = 50$, and $p_{out}(6) = 57$. To illustrate the computing process, let us compute the value of $p_{out}(4)$. We derive $w^{(1)} = 5$ and $w^{(2)} = 3$ from the second hidden layer of ASS-NN. Furthermore, we conclude that $p_{in}(w^{(1)}) = 38$ and $p_{in}(w^{(2)}) = 11$ from the third hidden layer. Since $p_{in}(w^{(1)}) > p_{in}(w^{(2)}) + w_{in}$, we obtain the value $p_{out}(4)$, which is equal to 38.
Iteration 5: The inputs of ASS-NN are $-\infty, 7, 19, 23, 38, 50, 57$ and 5, respectively. We have $r_{old} = 10$ and $r_{new} = 11$ as determined by the first hidden layer. Consequently, we obtain the following output values: $p_{out}(0) = -\infty$, $p_{out}(1) = 7$, $p_{out}(2) = 19$, $p_{out}(3) = 28$, $p_{out}(4) = 43$, $p_{out}(5) = 55$, and $p_{out}(6) = 62$. To further illustrate this process, let us compute the value of $p_{out}(4)$. According to the second hidden layer of ASS-NN, we obtain $w^{(1)} = 4$ and $w^{(2)} = 4$. Additionally, from the third hidden layer, we determine that $p_{in}(w^{(1)}) = 38$ and $p_{in}(w^{(2)}) = 38$. Given that $p_{in}(w^{(1)}) < p_{in}(w^{(2)}) + w_{in}$, it follows that the value of $p_{out}(4)$ is equal to 43.
Iteration 6: The inputs of ASS-NN are $-\infty, 7, 19, 28, 43, 55, 62$ and 3, respectively. Utilizing the first hidden layer, we obtain $r_{old} = 11$ and $r_{new} = 11$. Consequently, we have the following output values: $p_{out}(0) = -\infty$, $p_{out}(1) = 10$, $p_{out}(2) = 22$, $p_{out}(3) = 31$, $p_{out}(4) = 43$, $p_{out}(5) = 55$, and $p_{out}(6) = 65$. Take $p_{out}(4)$ as an example. By analyzing the second hidden layer of ASS-NN, we determine that $w^{(1)} = 4$ and $w^{(2)} = 3$. Additionally, from the third hidden layer, we have $p_{in}(w^{(1)}) = 43$ and $p_{in}(w^{(2)}) = 28$. Given that $p_{in}(w^{(1)}) > p_{in}(w^{(2)}) + w_{in}$, we obtain the value $p_{out}(4)$, which is equal to 43.
The values $p_{out}(w, i)$ computed by the ASS-NN are shown in Table 2. Given a positive integer $P$, we can identify a subset of $S$ such that the sum of its elements is closest to $P$ without exceeding $P$. For instance, suppose we aim to determine a subset of $S$ whose element sum is closest to 33 but no more than 33. First, we observe that the value closest to 33 without exceeding it in Table 2 is 31, and the corresponding entry is $p_{out}(3, 6)$. The solution process for identifying the subset associated with $p_{out}(3, 6)$ proceeds as follows (a minimal backtracking sketch is given after the list):
1. Given that $p_{out}(3, 6) = \max\{p_{out}(3, 5), \delta(p_{out}(3, 5) + 3)\} = p_{out}(3, 5) + 3$, it follows that 3 belongs to the required subset.
2. Given that $p_{out}(3, 5) = \max\{p_{out}(3, 4), \delta(p_{out}(3, 4) + 5)\} = p_{out}(3, 4) + 5$, it follows that 5 belongs to the required subset.
3. Given that $p_{out}(3, 4) = \max\{p_{out}(4, 3), \delta(p_{out}(4, 3) + 12)\} = p_{out}(4, 3) + 12$, 12 also belongs to the required subset.
4. Given that $p_{out}(4, 3) = \max\{p_{out}(4, 2), \delta(p_{out}(4, 2) + 4)\} = p_{out}(4, 2) + 4$, 4 belongs to the required subset.
5. Given that $p_{out}(4, 2) = \max\{p_{out}(6, 1), \delta(p_{out}(0, 1) + 34)\} = p_{out}(6, 1)$, 34 does not belong to the required subset.
6. Given that $p_{out}(6, 1) = \max\{p_{out}(6, 0), \delta(p_{out}(6, 0) + 7)\} = p_{out}(6, 0) + 7$, we conclude that 7 is included in the required subset.
7. Given that the second argument of $p_{out}(6, 0)$ is zero, the solution process concludes here. Ultimately, we determine that the required subset is $\{3, 5, 12, 4, 7\}$.
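The backtracking above can be automated by retaining the full table $p_{out}(w, i)$ together with the $(w^{(1)}, w^{(2)})$ choices made at each step. The sketch below is our illustrative code (names are ours; it re-implements the rounded DP of Equation (17) rather than the network itself) and reconstructs the subset $\{3, 5, 12, 4, 7\}$ for the target 33.

```python
import math

NEG_INF = float("-inf")

def approx_subset_sum_backtrack(S, M, P):
    """Rounded DP of Eq. (17) with backtracking: returns the approximate best
    subset sum not exceeding P and one subset achieving it (our sketch)."""
    layers = [{w: (NEG_INF if w == 0 else 0) for w in range(M + 1)}]
    choices, total = [], 0
    for w_i in S:
        total += w_i
        r = math.ceil(max(1, total / M))
        p, p_new, ch = layers[-1], {0: NEG_INF}, {}
        for w in range(1, M + 1):
            cap = w * r
            w1 = max(k for k in range(M + 1) if p[k] <= cap)
            c2 = [k for k in range(M + 1) if p[k] + w_i <= cap]
            w2 = max(c2) if c2 else 0
            use = max(p[w2] + w_i, 0) > p[w1]       # does element i improve p_out(w)?
            p_new[w] = p[w2] + w_i if use else p[w1]
            ch[w] = (w1, w2, use)
        layers.append(p_new)
        choices.append(ch)
    last = layers[-1]
    w = max((k for k in range(1, M + 1) if last[k] <= P), key=lambda k: last[k])
    best, subset = last[w], []
    for i in range(len(S), 0, -1):                  # walk the recorded choices backwards
        w1, w2, use = choices[i - 1][w]
        if use:
            subset.append(S[i - 1])
            w = w2
        else:
            w = w1
    return best, subset

print(approx_subset_sum_backtrack([7, 34, 4, 12, 5, 3], 6, 33))
# (31, [3, 5, 12, 4, 7]), matching the worked example above
```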
Since the value $p_{out}(w, i)$ of ASS-NN serves as an approximation to the SSP, there may exist a discrepancy between $p_{out}(w, i)$ and the actual optimal solution. The values $p_{out}(2, 6)$, $p_{out}(3, 6)$, $p_{out}(5, 6)$, and $p_{out}(6, 6)$ are equal to the actual optimal solutions, and their error ratio is zero. The errors between $p_{out}(1, 6)$, $p_{out}(4, 6)$, and the actual optima are 1. Specifically, the error ratio of $p_{out}(1, 6)$ is 0.091, and that of $p_{out}(4, 6)$ is 0.023.

4.3. Error Analysis

In order to assess the effectiveness of the proposed ASS-NN in addressing subset sum problems, we evaluate the degree of error between the approximate solution generated by ASS-NN and the actual optimal solution. Let $w_{NN}$ be the approximate solution obtained by ASS-NN and $w_{OPT}$ be the actual optimal solution. The degree of error $\delta$ is calculated using Equation (31):
$$\delta = \frac{w_{OPT} - w_{NN}}{w_{OPT}}. \tag{31}$$
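For completeness, the error degree of Equation (31) can be evaluated by comparing the ASS-NN output against the exact optimum. A brief sketch follows (helper names are ours); the exact optimum is obtained here by a classical reachability DP, used only for evaluation.

```python
def best_sum_not_exceeding(S, cap):
    """Exact optimum w_OPT: the largest subset sum of S not exceeding cap."""
    reachable = {0}
    for w_i in S:
        reachable |= {v + w_i for v in reachable if v + w_i <= cap}
    return max(reachable)

def error_degree(w_opt, w_nn):
    """Degree of error delta of Eq. (31)."""
    return (w_opt - w_nn) / w_opt

# For the worked example of Section 4.2 with target 33:
w_opt = best_sum_not_exceeding([7, 34, 4, 12, 5, 3], 33)   # 31
w_nn = 31                                                  # ASS-NN value from Table 2
print(error_degree(w_opt, w_nn))                           # 0.0 for this instance
```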
Example 1.
A set of 20 natural numbers was randomly generated within a range from 1 to 50, resulting in the set S = {50, 48, 46, 45, 44, 43, 32, 30, 29, 25, 24, 20, 16, 15, 14, 13, 10, 6, 4, 1}. We analyzed the error degree across four scenarios with M values of 10, 15, 20, and 25. The corresponding box plots are shown in Figure 7. The maximum errors between the approximate solutions derived from ASS-NN and the actual optimal solution were found to be 3%, 3.5%, 2%, and 1%, respectively. In the scenario where M equals 25, only 5 out of 25 approximate solutions deviated from the actual optimal solution. The median difference across all four cases remained within a margin of 2%.
Example 2.
A total of 20 natural numbers were randomly generated within a range from 1 to 100 to form the set S = {90, 72, 65, 61, 59, 58, 57, 56, 47, 44, 43, 38, 37, 34, 30, 29, 22, 20, 13, 6}. We analyzed the error degrees for four different values of M = 10, 15, 20 and 25, and the corresponding box plot is shown in Figure 8. The maximum errors observed between the approximate solutions derived from ASS-NN and those representing actual optimal solutions are recorded as follows: 6.7%, 5%, 4.5%, and 10%, respectively. These findings indicate that while there exists a minimal median difference across all four scenarios examined here, the variability in error tends to escalate with increasing values of M.
Example 3.
We randomly generated 25 natural numbers between 1 and 50, resulting in the set S = {50, 49, 48, 47, 46, 45, 43, 42, 40, 39, 36, 35, 33, 32, 30, 26, 22, 21, 20, 18, 14, 13, 10, 8, 3}. We analyzed the error degree across four cases with M = 10, 15, 20, and 25. The corresponding box plot is illustrated in Figure 9. The maximum errors between the approximate solutions obtained through ASS-NN and the actual optimal solution were found to be 2.7%, 3.5%, 2.7%, and 6.8%, respectively. Among these four cases of analysis for different values of M, it was observed that the median for M = 25 yielded the lowest value. Furthermore, the median differences across all four scenarios remained within a margin of less than or equal to 1%.
Example 4.
A set of 25 natural numbers was randomly generated within a range from 1 to 100, resulting in the set S = {100, 95, 93, 91, 82, 81, 79, 71, 70, 67, 65, 54, 50, 46, 44, 41, 38, 31, 27, 25, 19, 11, 10, 4, 2}. We analyzed the error degree across four scenarios with M values of 10, 15, 20, and 25. The corresponding box plot is illustrated in Figure 10. The maximum errors between the approximate solutions derived from ASS-NN and the actual optimal solution are recorded as 1.5%, 2.3%, 2%, and 3.8%, respectively. Among these cases examined, M = 15 yielded the lowest median value. Furthermore, the median differences across all four scenarios remained within a margin of 0.5%.
Example 5.
We randomly generated 25 natural numbers between 1 and 200, resulting in the set S = {196, 194, 193, 177, 172, 165, 157, 155, 152, 133, 130, 126, 108, 101, 98, 91, 76, 60, 57, 52, 50, 27, 9, 4, 1}. We analyzed the error degree across four cases with M = 10, 20, 30, and 40. The corresponding box plots are illustrated in Figure 11. The maximum errors between the approximate solutions obtained by ASS-NN and the actual optimal solution were found to be 1.9%, 3.5%, 2.5%, and 6.8%, respectively. Among these four cases, the median for M = 20 was the lowest. Furthermore, the median differences among all four cases remained within a range of 0.5%.
Example 6.
We randomly generated 25 natural numbers between 1 and 500, resulting in the set S = {461, 450, 409, 391, 362, 349, 310, 293, 291, 276, 263, 238, 202, 199, 135, 128, 121, 120, 119, 114, 54, 48, 43, 37, 9}. We analyzed the error degree across four scenarios with M = 20, 30, 40, and 50. The corresponding box plots are shown in Figure 12. The maximum errors between the approximate solutions obtained through ASS-NN and the actual optimal solution were found to be 3.8%, 2.6%, 8%, and 9.2%, respectively. Among these cases, the median for M = 40 was the lowest. Furthermore, the median differences across all four scenarios remained within a range of 0.5%.
From the box plots, we observe that larger values of M correspond to a greater number of outliers. However, the average difference in error degree for the same dataset across different M values remains minimal. Furthermore, it is evident that the discrepancy between the approximate solution and the actual solution aligns with the theoretical error degree.
Consider Example 6 again, where M = 50. In this scenario, the approximate solutions generated by ASS-NN alongside the actual optimal solutions are presented in Table 3. Among the fifty solutions evaluated by the ASS-NN method, five correspond exactly with the actual optimal solution. Furthermore, nineteen solutions exhibited an error value ranging from 1 to 10, twelve solutions had an error value from 11 to 20, and fourteen solutions presented an error value exceeding 20. The maximum observed error value was 46, occurring at w = 44, while the highest degree of error reached approximately 9.17% when w = 2. These results indicate that the approximate solution method based on ASS-NN effectively addresses SSPs while maintaining a minimal discrepancy between approximate and actual optimal solutions.

5. Discussion

The SSP is a classical combinatorial optimization issue. It is not merely an abstract computational conundrum but also a juncture where theory and practice converge. Its research consistently promotes algorithmic innovation and offers methodological support for practical engineering problems. As of now, the classical approaches for solving the SSP encompass dynamic programming, genetic algorithms, quantum algorithms, double list algorithms, and others. The merit of the dynamic programming method lies in constructing solutions based on recurrence relations and being capable of attaining the optimal solution. Nevertheless, its time and space complexities escalate rapidly with the scale, thus being suitable for small-scale problems [16]. Quantum algorithms can concurrently explore multiple candidate solutions via superposition states and effectively search for the optimal solution in the solution space. However, they are vulnerable to noise interference, have high implementation complexity, and rely on dedicated quantum devices [26]. Genetic algorithms exhibit strong adaptability to large-scale problems but are sensitive to parameters and prone to getting trapped in local optima [15]. The double list algorithm can significantly reduce time complexity but has a high space complexity [5]. Simultaneously, all these methods encounter the combinatorial explosion problem resulting from traversing all possible combination spaces and a lack of generalization ability for similar problems.
Building on the powerful pattern-learning capability and end-to-end mapping approach of neural network models, and given that an RNN can simulate the process of constructing solutions from the recurrence relations of dynamic programming [17], this paper theoretically constructs RNNs for solving the SSP and proves their correctness. For the question of whether a set contains a subset whose elements sum to a specified value w, recent research [24] solved the problem exactly from a geometric perspective, interpreting the SSP as deciding whether the intersection of the positive unit hypercube with a hyperplane contains at least one vertex. The SS-NN network we constructed also solves this problem exactly. In Section 4.1, we present the solution process of SS-NN. The results indicate that SS-NN not only answers whether a subset exists whose elements sum to a specified value w but also yields the answers for all target values less than w. However, although both methods yield exact solutions, the method in [24] can also be applied to the simultaneous SSP.
For the problem of finding a subset whose element sum is closest to but does not exceed a specified value, constructing a neural network model that computes the exact value would be highly complex and demand considerable computational resources; this paper therefore constructs the ASS-NN model to obtain an approximate solution. Compared with a model for the exact solution, this model is simpler yet determines an approximate solution that is very close to the optimal one. For instance, in Table 3, $w_{NN}$ represents the approximate solution obtained by the ASS-NN model, while $w_{OPT}$ represents the actual optimal solution of the problem. In some cases, the solution obtained by the ASS-NN model is the optimal solution. Experimental results show that the maximum error rate between the approximate solution obtained by the ASS-NN model and the optimal solution is approximately 9.17%. Once it is theoretically ensured that the constructed neural network model can solve the problem, then in practice, by training the model and utilizing its end-to-end mapping, it is possible to bypass explicitly traversing all possible combination spaces, thereby avoiding combinatorial explosion. This distinguishes solving the SSP with an RNN model from traditional heuristic algorithms [11] and dynamic programming [19], and it represents another approach to solving large-scale SSPs. However, the accuracy of the solutions obtained by ASS-NN still needs to be improved.

6. Conclusions

In light of the deficiencies of existing dynamic programming, genetic algorithms, and quantum algorithms when addressing the SSP, and inspired by the work of C. Hertrich et al. [37], we proposed two models, SS-NN and ASS-NN, that simulate dynamic programming to solve the SSP. Rigorous mathematical derivation demonstrates that the proposed neural network models solve the SSP correctly. The SS-NN model determines whether a set contains a subset whose elements sum to a specified value, while the ASS-NN model provides an approximate solution, extremely close to the optimum, for the problem of finding a subset whose element sum is closest to but does not exceed the specified value. Experimental results indicate that the errors between the approximate solutions obtained by the ASS-NN model and the actual optimal solutions are small and highly consistent with theoretical expectations. Our research shows that RNNs offer a novel approach to the SSP: they construct the solution process by emulating the recurrence relations of dynamic programming, use the neural network to learn the latent patterns of the problem, and, through end-to-end mapping, circumvent the combinatorial explosion caused by explicitly traversing all possible combinations. In practical applications, however, solving the SSP with RNNs still faces several challenges: (1) strong data dependence makes high-quality training samples costly to generate; (2) for certain SSPs, obtaining an exact solution requires a complex network model and therefore substantial computing resources; (3) the accuracy of the approximate solutions awaits further improvement. In future research, we will introduce generative adversarial networks [39] to generate high-quality training samples and integrate the attention mechanism [40] to improve the solution accuracy of the model.

Author Contributions

W.L.: Conceptualization, Methodology, Formal analysis, Investigation, and Writing—original draft. Z.W. (Zengkai Wang): Conceptualization, Methodology, Supervision, and Writing—review and editing. Z.W. (Zijia Wang): Conceptualization, Methodology, Supervision, and Writing—review and editing. Y.J.: Validation and Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (62106055), the Guangdong Natural Science Foundation (2025A1515010256), and the Guangzhou Science and Technology Planning Project (2023A04J0388, 2023A03J0662).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon request. There are no restrictions on data availability.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Kang, L.; Wan, L.; Li, K. Efficient parallelization of a two-list algorithm for the subset-sum problem on a hybrid CPU/GPU cluster. In Proceedings of the 2014 Sixth International Symposium on Parallel Architectures, Algorithms and Programming, Beijing, China, 13–15 July 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 93–98.
2. Ghosh, D.; Chakravarti, N. A competitive local search heuristic for the subset sum problem. Comput. Oper. Res. 1999, 26, 271–279.
3. Madugula, M.K.; Majhi, S.K.; Panda, N. An efficient arithmetic optimization algorithm for solving subset-sum problem. In Proceedings of the 2022 International Conference on Connected Systems & Intelligence (CSI), Trivandrum, India, 31 August–2 September 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–7.
4. Wan, L.; Li, K.; Liu, J.; Li, K. GPU implementation of a parallel two-list algorithm for the subset-sum problem. Concurr. Comput. Pract. Exp. 2015, 27, 119–145.
5. Wan, L.; Li, K.; Li, K. A novel cooperative accelerated parallel two-list algorithm for solving the subset-sum problem on a hybrid CPU–GPU cluster. J. Parallel Distrib. Comput. 2016, 97, 112–123.
6. Dutta, P.; Rajasree, M.S. Efficient reductions and algorithms for variants of Subset Sum. arXiv 2021, arXiv:2112.11020.
7. Ye, Y.; Borodin, A. Priority algorithms for the subset-sum problem. J. Comb. Optim. 2008, 16, 198–228.
8. Parque, V. Tackling the Subset Sum Problem with Fixed Size using an Integer Representation Scheme. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Kraków, Poland, 28 June–1 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1447–1453.
9. Li, L.; Zhao, K.; Ji, Z. A genetic algorithm to solve the subset sum problem based on parallel computing. Appl. Math. Inf. Sci. 2015, 9, 921.
10. Kolpakov, R.M.; Posypkin, M.A. Effective parallelization strategy for the solution of subset sum problems by the branch-and-bound method. Discret. Math. Appl. 2020, 30, 313–325.
11. Kolpakov, R.; Posypkin, M. Optimality and complexity analysis of a branch-and-bound method in solving some instances of the subset sum problem. Open Comput. Sci. 2021, 11, 116–126.
12. Thada, V.; Shrivastava, U. Solution of subset sum problem using genetic algorithm with rejection of infeasible offspring method. Int. J. Emerg. Technol. Comput. Appl. Sci. 2014, 10, 259–262.
13. Wang, R.L. A genetic algorithm for subset sum problem. Neurocomputing 2004, 57, 463–468.
14. Bhasin, H.; Singla, N. Modified genetic algorithms based solution to subset sum problem. Int. J. Adv. Res. Artif. Intell. 2012, 1, 38–41.
15. Saketh, K.H.; Jeyakumar, G. Comparison of dynamic programming and genetic algorithm approaches for solving subset sum problems. In Proceedings of the Computational Vision and Bio-Inspired Computing: ICCVBIC 2019, Coimbatore, India, 25–26 September 2019; Springer: Cham, Switzerland, 2020; pp. 472–479.
16. Kolpakov, R.; Posypkin, M. Lower time bounds for parallel solving of the subset sum problem by a dynamic programming algorithm. Concurr. Comput. Pract. Exp. 2024, 36, e8144.
17. Yang, F.; Jin, T.; Liu, T.Y.; Sun, X.; Zhang, J. Boosting dynamic programming with neural networks for solving NP-hard problems. In Proceedings of the 10th Asian Conference on Machine Learning, ACML 2018, Beijing, China, 14–16 November 2018; pp. 726–739.
18. Allcock, J.; Hamoudi, Y.; Joux, A.; Klingelhöfer, F.; Santha, M. Classical and quantum dynamic programming for Subset-Sum and variants. arXiv 2021, arXiv:2111.07059.
19. Fujiwara, H.; Watari, H.; Yamamoto, H. Dynamic Programming for the Subset Sum Problem. Formaliz. Math. 2020, 28, 89–92.
20. Xu, S.; Panwar, S.S.; Kodialam, M.; Lakshman, T. Deep neural network approximated dynamic programming for combinatorial optimization. In Proceedings of the AAAI 2020 Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 1684–1691.
21. Biesner, D.; Gerlach, T.; Bauckhage, C.; Kliem, B.; Sifa, R. Solving subset sum problems using quantum inspired optimization algorithms with applications in auditing and financial data analysis. In Proceedings of the 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA), Nassau, Bahamas, 12–14 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 903–908.
22. Xu, C.; Zhang, G. Learning-augmented algorithms for online subset sum. J. Glob. Optim. 2023, 87, 989–1008.
23. Coron, J.S.; Gini, A. Provably solving the hidden subset sum problem via statistical learning. Math. Cryptol. 2021, 1, 70–84.
24. Costandin, M. On a Geometric Interpretation of the Subset Sum Problem. arXiv 2024, arXiv:2410.19024.
25. Zheng, Q.; Zhu, P.; Xue, S.; Wang, Y.; Wu, C.; Yu, X.; Yu, M.; Liu, Y.; Deng, M.; Wu, J.; et al. Quantum algorithm and experimental demonstration for the subset sum problem. Sci. China Inf. Sci. 2022, 65, 182501.
26. Zheng, Q.; Yu, M.; Zhu, P.; Wang, Y.; Luo, W.; Xu, P. Solving the subset sum problem by the quantum Ising model with variational quantum optimization based on conditional values at risk. Sci. China Phys. Mech. Astron. 2024, 67, 280311.
27. Moon, B. The Subset Sum Problem: Reducing Time Complexity of NP-Completeness with Quantum Search. Undergrad. J. Math. Model. One Two 2012, 4, 2.
28. Bernstein, D.J.; Jeffery, S.; Lange, T.; Meurer, A. Quantum algorithms for the subset-sum problem. In Proceedings of the Post-Quantum Cryptography: 5th International Workshop, PQCrypto 2013, Limoges, France, 4–7 June 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 16–33.
29. Bengio, Y.; Lodi, A.; Prouvost, A. Machine learning for combinatorial optimization: A methodological tour d'horizon. Eur. J. Oper. Res. 2021, 290, 405–421.
30. Thapa, R. A Survey on Deep Learning-Based Methodologies for Solving Combinatorial Optimization Problems. 2020. Available online: https://www.researchgate.net/publication/343876842_A_Survey_on_Deep_Learning-based_Methodologies_for_Solving_Combinatorial_Optimization_Problems (accessed on 1 April 2025).
31. Hopfield, J.J.; Tank, D.W. "Neural" computation of decisions in optimization problems. Biol. Cybern. 1985, 52, 141–152.
32. Tarkov, M.S. Solving the traveling salesman problem using a recurrent neural network. Numer. Anal. Appl. 2015, 8, 275–283.
33. Gu, S.; Cui, R. An efficient algorithm for the subset sum problem based on finite-time convergent recurrent neural network. Neurocomputing 2015, 149, 13–21.
34. Gu, S.; Hao, T. A pointer network based deep learning algorithm for 0–1 knapsack problem. In Proceedings of the 2018 Tenth International Conference on Advanced Computational Intelligence (ICACI), Xiamen, China, 29–31 March 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 473–477.
35. Zhao, X.; Wang, Z.; Zheng, G. Two-phase neural combinatorial optimization with reinforcement learning for agile satellite scheduling. J. Aerosp. Inf. Syst. 2020, 17, 346–357.
36. Kechadi, M.T.; Low, K.S.; Goncalves, G. Recurrent neural network approach for cyclic job shop scheduling problem. J. Manuf. Syst. 2013, 32, 689–699.
37. Hertrich, C.; Skutella, M. Provably good solutions to the knapsack problem via neural networks of bounded size. INFORMS J. Comput. 2023, 35, 1079–1097.
38. Oltean, M.; Muntean, O. Solving the subset-sum problem with a light-based device. Nat. Comput. 2009, 8, 321–331.
39. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2014; Volume 27.
40. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2017; Volume 30.
Figure 1. Network for the δ function.
Figure 2. Network for the or operator.
Figure 3. Recurrent structure based on SS-NN.
Figure 4. The structure of SS-NN.
Figure 5. Recurrent structure based on ASS-NN.
Figure 6. Computing $p_{out}^{w}$ and $w_{out}^{*}$.
Figure 7. Error of Example 1.
Figure 8. Error of Example 2.
Figure 9. Error of Example 3.
Figure 10. Error of Example 4.
Figure 11. Error of Example 5.
Figure 12. Error of Example 6.
Table 1. The value of $s_{out}(w, i)$.

i \ w   −1   0   1   2   3   4   5   6   7
0        0   1   0   0   0   0   0   0   0
1        0   1   0   0   0   0   0   0   0
2        0   1   0   0   0   0   0   0   0
3        0   1   0   0   0   0   0   0   0
4        0   1   0   0   0   1   0   0   0
5        0   1   0   0   0   1   1   0   0
6        0   1   0   0   1   1   1   0   1
Table 2. The value of $p_{out}(w, i)$.

i \ w    0    1    2    3    4    5    6
0             0    0    0    0    0    0
1             0    0    0    7    7    7
2             7    7    7    7   34   41
3             7   11   11   11   38   45
4             7   19   23   38   50   57
5             7   19   28   43   55   62
6            10   22   31   43   55   65
Table 3. Results of Example 6 with M = 50.

w          1     2     3     4     5     6     7     8     9    10    11    12
w_NN     102   198   326   436   540   654   757   862   971  1090  1175  1289
w_OPT    106   218   327   436   545   654   763   872   981  1090  1199  1304
err        4    20     1     0     5     0     6    10    10     0    24    15

w         13    14    15    16    17    18    19    20    21    22    23    24
w_NN    1417  1525  1614  1725  1846  1961  2053  2166  2274  2383  2499  2593
w_OPT   1417  1526  1635  1744  1853  1962  2071  2180  2289  2398  2507  2616
err        0     1    21    19     7     1    18    14    15    15     8    23

w         25    26    27    28    29    30    31    32    33    34    35    36
w_NN    2708  2828  2942  3044  3158  3263  3365  3483  3569  3683  3797  3898
w_OPT   2725  2834  2943  3052  3161  3270  3379  3488  3597  3706  3815  3924
err       17     6     1     8     3     7    14     5    28    23    18    26

w         37    38    39    40    41    42    43    44    45    46    47    48
w_NN    4007  4111  4208  4355  4432  4546  4685  4750  4870  4990  5109  5223
w_OPT   4033  4142  4251  4360  4469  4578  4687  4796  4905  5014  5123  5231
err       26    31    43     5    37    32     2    46    35    24    14     8

w         49    50
w_NN    5308  5422
w_OPT   5377  5422
err       29     0