Article

Large-Number Optimization: Exact-Arithmetic Mathematical Programming with Integers and Fractions Beyond Any Bit Limits

Astronomy Department, University of Florida, Gainesville, FL 32611, USA
Mathematics 2025, 13(19), 3190; https://doi.org/10.3390/math13193190
Submission received: 5 July 2025 / Revised: 28 September 2025 / Accepted: 2 October 2025 / Published: 5 October 2025
(This article belongs to the Special Issue Innovations in Optimization and Operations Research)

Abstract

Mathematical optimization, in both continuous and discrete forms, is well established and widely applied. This work addresses a gap in the literature by focusing on large-number optimization, where integers or fractions with hundreds of digits occur in decision variables, objective functions, or constraints. Such problems challenge standard optimization tools, particularly when exact solutions are required. The suitability of computer algebra systems and high-precision arithmetic software for large-number optimization problems is discussed. Our first contribution is the development of Python implementations of an exact Simplex algorithm and a Branch-and-Bound algorithm for integer linear programming, capable of handling arbitrarily large integers. To test these implementations for correctness, analytic optimal solutions for nine specifically constructed linear, integer linear, and quadratic mixed-integer programming problems are derived. These examples are used to test and verify the developed software and can also serve as benchmarks for future research in large-number optimization. The second contribution concerns constructing partially increasing subsequences of the Collatz sequence. Motivated by this example, we quickly encountered the limits of commercial mixed-integer solvers and instead solved Diophantine equations or applied modular arithmetic techniques to obtain partial Collatz sequences. For any given number J, we obtain a sequence that begins at 2^J - 1 and repeats the pattern ud J times: up via 3x_j + 1, then down via division by 2. Further partially decreasing sequences are designed, which follow the pattern of applying 3x_j + 1 and then dividing by 2^m. The most general J-times increasing patterns (ududd, udududd, …, ududududddd) are constructed using analytic and semi-analytic methods that exploit modular arithmetic in combination with optimization techniques.

1. Introduction

This study investigates the properties and advantages of large-number optimization problems, defined as those involving large numbers—that is, integers with more than N digits or rational numbers whose numerators or denominators exceed N digits—in the values of the decision variables, objective functions, or constraints. It further aims, on the one hand, to develop exact-arithmetic software for their solution and, on the other, to compute partially increasing subsequences of the Collatz sequence. At the time of writing, numbers with N = 100 digits could be considered large, corresponding approximately to 2^329.
Mathematical optimization, in both its continuous and discrete forms, is a mature discipline with a wide range of theoretical foundations and applications. Foundational contributions such as those by Nemhauser and Wolsey [1] and the comprehensive review of global optimization methods in process systems engineering by Floudas and Pardalos [2] have provided essential methodologies that remain influential today. Recent work continues to advance both theory and practice: Werner [3] and Werner [4] emphasize developments in discrete optimization, while Grossmann and Biegler [5] survey advances in mathematical programming techniques for process systems engineering. Broader algorithmic perspectives, such as those discussed in Vishnoi [6]’s work, highlight the interface of optimization with computer science and complexity theory. Applications in emerging fields are also expanding: Qin et al. [7] review real-time optimization in the process industries, while the edited volume of Nikeghbali et al. [8] provides insights into cutting-edge research and methodologies on optimization, discrete mathematics, and  applications to data science. Recent developments focus on bringing machine learning and classical mathematical optimization closer together. For  instance, Wang et al. [9] propose a reinforcement learning-based ranking teaching–learning optimization algorithm for parameter identification in photovoltaic models, while Schnabel and Usul [10] investigate the embedding of neural networks into optimization models, where the network architecture itself appears as constraints in the model. These studies collectively illustrate that optimization remains both a well-established and rapidly evolving area of research with deep methodological roots and continuing impact across disciplines.
However, a search of the existing literature did not reveal any publications explicitly addressing optimization problems involving large numbers. A possible explanation may lie either in a limited perceived relevance of such problems, or in the absence of solvers capable of handling them, potentially due to the following reason: large numbers in the sense defined above are a challenge as they are beyond the limits of 32-bit or 64-bit architectures and, thus, programming languages. Many programming languages (like C, C++, Java, C#), algebraic modeling systems such as GAMS (cf. Bussieck and Meeraus [11] or Bussieck et al. [12]), and commercial mixed-integer programming solvers default to 32-bit signed integers (int) ranging from the smallest number -2^31 = -2,147,483,648 to the largest number 2^31 - 1 = 2,147,483,647 (one bit is reserved for the sign, leaving 31 bits for the value). Modern systems now widely support 64-bit integers (long or int64_t) corresponding to integers up to 2^63 - 1 = 9,223,372,036,854,775,807. In Python, integers can grow dynamically beyond any number of bits; memory is the only limit.
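The contrast between fixed-width and arbitrary-precision integers can be illustrated in a few lines of plain Python (a minimal sketch; the variable names are ours):

```python
# Python integers are arbitrary-precision: values far beyond the 32- or
# 64-bit limits discussed above stay exact, with memory as the only bound.
int64_max = 2**63 - 1            # largest 64-bit signed integer
print(int64_max)                 # 9223372036854775807

big = 2**329                     # roughly the 100-digit regime considered here
print(len(str(big)))             # 100 decimal digits
print(big + 1 - big)             # 1 -- arithmetic remains exact at any size
```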
Numerical challenges in exact arithmetic relate to algorithmic aspects [addition and subtraction are linear in time, O(n); naive multiplication is quadratic, O(n^2); a better approach by Karatsuba and Ofman [13] achieves O(n^1.585); and division and modulo operations are much slower (by a factor of 2 to 10) than multiplication]. In addition, there are memory aspects to be considered; for instance, rational arithmetic requires storing the numerator and denominator separately, and fractions, when added, lead to numerators and denominators growing in the number of digits.
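The growth of numerators and denominators under repeated addition can be observed directly with Python's fractions module (a small illustration of the memory aspect, not taken from the paper's software):

```python
from fractions import Fraction

# Summing 1/1 + 1/2 + ... + 1/30 exactly: the result stays a reduced
# fraction, but its denominator accumulates many prime factors and digits.
s = Fraction(0)
for n in range(1, 31):
    s += Fraction(1, n)

print(s.denominator)             # a large integer, stored exactly
print(len(str(s.denominator)))   # number of digits of the reduced denominator
```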
The computational complexity of large-number optimization is expected to be worse than that of normal-number optimization as the arithmetic briefly addressed above and memory issues cause extra problems and overhead. But when is large-number optimization—or exact-arithmetic optimization—actually needed? The need for large-number optimization or exact-arithmetic optimization arises in various contexts. In 2012, the specific motivation for this study was to identify the partially increasing Collatz sequences described in Section 2. More generally, optimization problems involving large numbers are, or could be, of value for several reasons:
  • Real-World Precision—Applications like cryptography and integer factorization involve large numbers and require exact computations, while quantum computing requires exact calculations or calculations beyond standard floating-point precision but does not necessarily involve large numbers. An application field that remains highly demanding despite not requiring exact solutions is orbital mechanics, also known as astrodynamics, which requires extremely precise calculations—often beyond standard floating-point precision (like 64-bit double)—due to the accumulation of numerical errors over time or sensitivity of the solution with respect to the initial conditions.
  • Practical Aspects and Scaling—If scaling is not possible due to the inherent structure of the optimization problem at hand, then exact arithmetic is a good option. Note that problems related to large floating-point numbers that occur in scientific or optimization problems can often be solved by selecting appropriate units and scaling, but when many-digit numbers are involved and exact solutions are required, this is not possible.
  • Algorithm Robustness—Testing optimization methods on such problems reveals weaknesses in numerical stability, rounding errors, or scalability as in the work of Jarck [14].
  • Theoretical Insights—Large numbers expose hidden computational limits, pushing advancements in arbitrary-precision arithmetic and efficient algorithms.
  • Emerging Needs—As data grows (e.g., blockchain, AI training), handling massive integers or fractions becomes critical for accuracy in optimization tasks. Currently, the focus is more on large-scale problems, i.e., many integers, but maybe the field will also develop a taste for exact solutions.
  • Very Large Weights or Capacity Data in Knapsack Problems—Cryptographic knapsack problems seem to move towards consistently larger input data to resist code-breaking attacks (cf. Wang and Chen [15] or Rizos and Draziotis [16], respectively).
  • Recreational Mathematics—What is the smallest prime number with 100 digits which contains all digits 0 to 9 (each of them occurring ten times) and zero is not in the leading position? The answer is given in Appendix A.
Solving optimization problems involving large numbers might also be considered a niche area that often intersects with computational number theory, cryptography and integer factorization, and high-precision arithmetic. Large integers are central to cryptographic algorithms like RSA, where factoring large semi-primes is a key challenge (cf. Aoki et al. [17], where the authors focus on factoring large integers with over 100 digits). Computational number theory often involves large integers, such as primality testing or solving Diophantine equations with large integer solutions (cf. Lenstra [18]). Several publications focus on developing specialized libraries and software tools for handling large integers and solving related optimization problems; cf. “GMP: The GNU Multiple Precision Arithmetic Library” by Granlund and the GMP Development Team [19], a widely used library for high-precision arithmetic, and also “MPFR: A Multiple-Precision Binary Floating-Point Library with Correct Rounding” by Fousse et al. [20], which focuses on high-precision floating-point arithmetic.
There are limits to expect in terms of the number of variables and constraints which can be handled—and there is a price to pay for exact results in terms of computational speed and the lack of real-time applications. Within the Simplex algorithm, intermediate expressions and fractions grow rapidly during row operations. Why not use existing programs and software? A few options have been explored, including Xcas, which was straightforward to install. Regarding GMP, the prerequisites (Windows is not GMP's favored operating system) and setup intricacy did not appear appealing to us. In the end, writing our own software gave us a greater degree of flexibility and made the work more enjoyable.
The contributions and innovations by the author in this paper are as follows:
  • The construction of a partial subsequence of the Collatz sequence which grows for J steps.
  • The construction of a partial decreasing subsequence of the Collatz sequence which repeats J times the pattern of multiplying by 3 x j + 1 and reaching a number which has 2 m as a divisor, i.e., it can be divided m times by 2, then being odd again.
  • The construction of further partial subsequences of the Collatz sequence following more general patterns repeatedly growing for J steps.
  • The construction of special and generic algorithms to solve large-number optimization problems, although only small ones (not too many variables and constraints), coded in Python (Version 3.6.1) and Fortran: LP-enum.py, simplex.py, BLP-enum.py, and BandB.py or the Fortran equivalents, respectively. Our exact standard Simplex and Branch-and-Bound algorithms are suitable for solving large-number optimization problems (no limit on the number of digits). This work represents an initial step, with the expectation that it will pave the way for a comprehensive library approach addressing a broader range of problems. Fortran was chosen for the existing block structure (supporting exact integers and fractions without bit limits) in my program since the late 1980s used to compute the determinants in the work of Kallrath and Neutsch [21], while modern Python was selected for its inherent support of arbitrary-precision integers without any bit limits and its flexibility.
  • The construction of challenging test examples, for which I derived analytic optimal solutions, to test my algorithms and programs.
  • A brief discussion of the pros and cons of large-number optimization algorithms.
As far as the development of software is concerned, the following scope and thoughts about potential applications should be clear to the reader: This paper addresses optimization problems in which the dominant challenge is the size of the numbers or their number of digits, respectively, not the size of the dataset. Our algorithms and proofs target regimes where coefficients and intermediate quantities exceed standard bit limits, so that exact arithmetic and an arbitrary number of digits is required to guarantee correctness and verifiable optimality. Although this paradigm is distinct from large-scale data processing, it is relevant to models in which extremely large integers or rational numbers arise (e.g., exact mixed-integer formulations with very large coefficients; certification tasks where numerical error cannot be tolerated). Fields such as computational number theory or cryptography routinely manipulate large integers; while many of their core problems are not optimization-driven, certain verification or parameter-selection steps could, in principle, benefit from exact-arithmetic optimization. A systematic exploration of such domain-specific applications lies beyond the scope of the present foundational study and is a valuable direction for future work.
Consequently, the structure and methodology of research in this article are as follows. Section 2 contains the problem definition related to my original motivation and the development of analytic and semi-analytic procedures to construct increasing Collatz sequences defined by given patterns. As this problem leads to large-number optimization problems, an overview of computer algebra systems, high-precision libraries and rational solvers is provided in Section 3, accompanied by the opportunities these techniques offer for the problem at hand. As they do not suffice for the given problem, in Section 4, a brief overview and description of my implementations of algorithmic approaches to optimization involving large numbers follow. Building on these methods, in Section 5, the reader finds the construction of several test problems, for which we have derived analytic optimal solutions for testing the programs, along with the corresponding numerical results. This is followed by the Conclusions section, which includes some suggestions for future work. The appendix has been selected as the location for material that seems necessary or useful for providing additional details, but that would have disrupted the flow of the article if it had remained in the main body. The sections in the appendix are not related.

2. Computing Partially Increasing Sequences Within the Collatz Sequence

The Collatz conjecture (cf. Lagarias [22] or Lagarias [23] for a recent overview), also known as the 3n + 1 problem, involves a recursive sequence generated by iterating a simple function: for any positive integer n, if n is even, divide it by 2; if n is odd, multiply it by 3 and add 1. The Collatz sequence is the recursive sequence defined on ℕ by
c_{n+1} = 3 c_n + 1, if c_n ≡ 1 (mod 2);   c_{n+1} = c_n / 2, if c_n ≡ 0 (mod 2),
i.e., if the number is even, divide it by two, and if the number is odd, triple it and add one. An equivalent closed-form recursion not depending on conditionals is
c_{n+1} = [ (7 c_n + 2) - (-1)^{c_n} (5 c_n + 2) ] / 4,
which may be useful for proofs and numerical computations.
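The two forms of the iteration can be cross-checked mechanically; the following snippet (our own sketch in plain Python, not part of the paper's software) verifies that the conditional definition and the closed-form expression agree:

```python
def collatz_step(c: int) -> int:
    # conditional definition: 3c + 1 for odd c, c/2 for even c
    return 3 * c + 1 if c % 2 == 1 else c // 2

def collatz_step_closed(c: int) -> int:
    # closed form: ((7c + 2) - (-1)^c * (5c + 2)) / 4
    return ((7 * c + 2) - (-1) ** c * (5 * c + 2)) // 4

# both definitions agree on a large test range
assert all(collatz_step(c) == collatz_step_closed(c) for c in range(1, 100_000))
print(collatz_step(7), collatz_step(22))   # 22 11
```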
The Collatz conjecture, named after the Darmstadt (Germany) mathematician Lothar Collatz, is as follows: this sequence will always reach the number 1, regardless of which positive integer is chosen initially. Proving the Collatz conjecture would mean ruling out the existence of cycles different from 1-4-2-1, and ruling out that lim_{n→∞} c_n = ∞.
The motivation for this paper was the question: Is it possible to find a starting number of the Collatz sequence for which the partial subsequence increases for a given number, J, of steps, i.e., can one find a starting number x_1 = c_1 such that c_3 = x_2 = (3 x_1 + 1)/2 ∈ ℕ_odd ⊂ ℕ, where ℕ_odd is the subset of odd natural numbers? In general, this pattern (up by 3x + 1, down by dividing by 2), referred to as ud, repeats J times as
x_{j+1} = (3 x_j + 1)/2 ∈ ℕ_odd;  j ∈ 𝒥 := {1, …, J}.  (2)
Of interest is only the smallest number x_1 fulfilling (2). Note that x_1 needs to be an odd number. The sequence x_j contains only the odd elements of the Collatz sequence c_n. This idea is later generalized in Section 2.5 to the concept of x_j being the header element of block j, where each block follows the same pattern.
As a generalization of (2), although that subsequence is not increasing, one might ask whether it is possible to construct a partial sequence of the Collatz sequence with the property
x_{j+1} = (3 x_j + 1)/2^m ∈ ℕ_odd;  j ∈ 𝒥.
At first, an integer linear programming (ILP) model is constructed in Section 2.1. Its solution establishes the sequence x_j for up to J = 20. The solutions for any J > 20 are obtained by solving the system of Diophantine equations in Section 2.2. For easier reading, the set 𝒥_1 := {1, 2, …, J - 1} ⊂ ℕ is introduced.

2.1. Integer Linear Programming (ILP) Model

Condition (2) translates algebraically into
x_{j+1} = (3 x_j + 1)/2
or equivalently
3 x_j - 2 x_{j+1} = -1.  (4)
If one declares the variables x_j as discrete variables, an MILP solver ensures through its underlying algorithms that x_j ∈ ℕ. Note that this does not necessarily lead to x_{j+1} ∈ ℕ_odd. However, solving the minimization problem C1
min x_1  s.t.  3 x_j - 2 x_{j+1} = -1;  j ∈ 𝒥_1
does yield a solution. As x 1 is minimal, x 2 is as well; the minimal property propagates from x j to x j + 1 . As the values x j grow quickly, the MILP solver GAMS/CPLEX can only solve the problem up to J = 20 . So, as expected, with growing J, the limits of a standard MILP solver are quickly reached when entering the regime of large-number optimization. Therefore, instead, the analytic solutions of linear Diophantine equations developed in Section 2.2 are used. Alternatively, the suitability of the computer algebra systems and other software packages described in Section 3 can be assessed, or a generic MILP solver can be developed; this results in BandB.py, which is presented in Section 4 and tested in Section 5.
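For small J, the smallest admissible start value can also be found by direct enumeration in plain Python, without any MILP solver; this brute-force sketch (our own illustration, not the paper's GAMS model or BandB.py) checks the ud-pattern parity conditions directly:

```python
def smallest_ud_start(J: int) -> int:
    """Smallest odd x1 whose ud-pattern x -> (3x+1)/2 stays odd for the headers."""
    x1 = 1
    while True:
        x, ok = x1, True
        for j in range(1, J + 1):
            x = (3 * x + 1) // 2        # one ud step (3x+1 is even for odd x)
            if j < J and x % 2 == 0:    # intermediate headers must stay odd
                ok = False
                break
        if ok:
            return x1
        x1 += 2                         # only odd candidates are admissible

for J in range(1, 6):
    print(J, smallest_ud_start(J))      # yields 1, 3, 7, 15, 31
```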

2.2. Diophantine System of Equations

Due to the numeric limitations of the MILP solvers when handling large numbers, let us now derive a closed-form analytic solution. The Diophantine Equation (4)
3 x_j - 2 x_{j+1} = -1
can be solved analytically using Euler’s algorithm. Its solutions are
x_j = 2 k_j - 1,  x_{j+1} = 3 k_j - 1,
where k j I N parameterizes all its solutions. If one wants to solve the system of Diophantine equations
3 x_j - 2 x_{j+1} = -1;  j ∈ 𝒥_1,
one needs to ensure compatibility of all k_j, j ∈ 𝒥_1, i.e.,
2 k_{j+1} - 1 = 3 k_j - 1;  j ∈ 𝒥_1,
or
2 k_{j+1} = 3 k_j;  j ∈ 𝒥_1.  (5)
Equation (5) is fulfilled by
k_j = 2^{J-j} · 3^{j-1};  j ∈ 𝒥.
The partial, odd elements of the Collatz sequence are thus
x_j = 2^{J-j+1} · 3^{j-1} - 1;  j ∈ 𝒥,  (7)
while the next even elements—resulting from the operation 3 x_j + 1—are given by
x′_j = 2^{J-j+1} · 3^{j} - 2 = 2 (2^{J-j} · 3^{j} - 1) = 2 x_{j+1};  j ∈ 𝒥.
To illustrate this formula, consider the case J = 4 , leading to the following results:
j    k_j    x_j    x′_j
1      8     15      46
2     12     23      70
3     18     35     106
4     27     53     160
Note that x_5 = 80 can also be derived from k_4, but it is not an odd number any longer. For completeness, the sequence
x_5 = 80, 40, 20, 10, 5, 16, 8, 4, 2, 1
initiated by x_5 is listed; it obviously does not follow the pattern ud any longer.
Note that x_{J+1} is given by
x_{J+1} = 3 k_J - 1 = 3^J - 1.
The starting element given by (7) allows the following interpretation: if one wants the sequence to increase infinitely many times, i.e., J → ∞, one has to start at infinity.
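The closed-form solution can be validated programmatically even for values of J beyond the reach of standard solvers; the following sketch (plain Python, our own check) confirms that the formula for x_j reproduces the ud-pattern:

```python
def ud_headers(J: int) -> list:
    # x_j = 2**(J-j+1) * 3**(j-1) - 1 for j = 1, ..., J
    return [2 ** (J - j + 1) * 3 ** (j - 1) - 1 for j in range(1, J + 1)]

for J in (4, 100):                       # J = 100 is deep in large-number territory
    x = ud_headers(J)
    for j in range(J - 1):
        assert x[j] % 2 == 1                     # header elements are odd
        assert x[j + 1] == (3 * x[j] + 1) // 2   # linked by one ud step

print(ud_headers(4))                     # [15, 23, 35, 53], as in the table above
```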

2.3. Comments on Decreasing Numbers

Let us comment on a few numbers c_j, referred to as downers, for which the Collatz sequence—after a certain 3 c_j + 1 step—will decrease directly to one; note that here j is not treated as a running index (one might also name it j = 1). At first, it is obvious that this is so for numbers c_j with 3 c_j + 1 = 2^n (type 1 downers). If c_j hits 2^n, in n steps, it reaches 1, i.e., c_{j+n} = 1. Thus, all numbers
c_n = (2^n - 1)/3, if c_n ∈ ℕ_odd,
are downers of type 1. Examples (n, c_n): (2, 1), (4, 5), (6, 21), (8, 85).
Second (type 2 downers), if 
3 c_j + 1 = (4^n - 1)/3,
in 2n + 1 steps, it reaches 1, i.e., c_{j+2n+1} = 1. The proof is based on the observation that 4^n can always be represented by the equation
4^n = 3k + 1,  k = 2m - 1,
which can be proven by complete induction. For  n = 1 , the value m = 1 is obtained, initiating the induction. The induction step proceeds as follows:
4^{n+1} = 4 · 4^n = 4 · (6m - 2) = 6 · 4m - 8 = 6 · 4m - 6 - 2 = 6 · (4m - 1) - 2 = 6 m′ - 2  (q.e.d.),  with m′ = 4m - 1.
Thus, all numbers
c_n = (4^n - 4)/9 = (4/9) (4^{n-1} - 1), if c_n ∈ ℕ_odd,
are downers of type 2. This gives only even positive integers, i.e., they cannot be used for the up step 3 c_n + 1.
Third, if
u := u(m, n) = 3 c_j + 1 = 2^m (4^n - 1)/3,
then downers are given by
c_j = c_j(m, n) = (1/3) [ 2^m (4^n - 1)/3 - 1 ], if c_j ∈ ℕ_odd.
In 1 + m + 1 + 2n steps (u - m·d - u - 2n·d), the result 1 is obtained, i.e., c_{j+m+2n+2} = 1. This is based on the second result. From the downer, there is one up step leading to a number which can be divided by 2^m reaching an odd number, from which the up step gives 2^{2n}.
Examples for m = 1 :
n    c_j(m, n)  :  u - u/2 - 2^{2n}
2            3  :  10 - 5 - 2^4
5          227  :  682 - 341 - 2^10
8        14563  :  43690 - 21845 - 2^16
11      932067  :  2796202 - 1398101 - 2^22
Examples for m = 2 :
n    c_j(m, n)  :  u - u/2 - u/4 - 2^{2n}
1            1  :  4 - 2 - 1 - 2^2
4          113  :  340 - 170 - 85 - 2^8
7         7281  :  21844 - 10922 - 5461 - 2^14
10      466033  :  1398100 - 699050 - 349525 - 2^20
Examples for m = 3 :
n    c_j(m, n)  :  u - u/2 - u/4 - u/8 - 2^{2n}
2           13  :  40 - 20 - 10 - 5 - 2^4
5          909  :  2728 - 1364 - 682 - 341 - 2^10
8        58253  :  174760 - 87380 - 43690 - 21845 - 2^16
11     3728269  :  11184808 - 5592404 - 2796202 - 1398101 - 2^22
Examples for m = 4 :
n    c_j(m, n)  :  u - u/2 - u/4 - u/8 - u/16 - 2^{2n}
1            5  :  16 - 8 - 4 - 2 - 1 - 2^2
4          453  :  1360 - 680 - 340 - 170 - 85 - 2^8
7        29125  :  87376 - 43688 - 21844 - 10922 - 5461 - 2^14
10     1864133  :  5592400 - 2796200 - 1398100 - 699050 - 349525 - 2^20
Note that further pairs are given by [n = 3k + 1 + mod(m, 2), c_j(m, n)], k = 0, 1, 2, ….
Alternatively, this result could be expressed as follows. Each second power of two has a downer. If 2^n has a downer k, the following holds:
2^n = 3k + 1.
The next power of two then does not have this property due to
2^{n+1} = 2 (3k + 1) = 6k + 2 = 3v + 2,
which means it cannot be represented in the form 3v + 1. However, 2^{n+2} has a downer as
2^{n+2} = 4 (3k + 1) = 12k + 4 = 12k + 3 + 1 = 3v + 1
with v = 4k + 1. As 4 = 2^2 has downer 1, the powers of two 2^4, 2^6, … also have a downer.

2.4. Modified Diophantine Systems

Unlike the downers above, consider now a slightly different situation: If one wants, after an increase, m subsequent divisions by 2 and then up again (that requires the result of the division by 2^m to be an odd number), the Diophantine Equation (4) just needs to be modified to
3 x_j - 2^m x_{j+1} = -1;  j ∈ 𝒥_1.  (8)
This modified Diophantine equation system (8) has a solution for all its components j and for all m, since gcd(3, 2^m) = 1 and 1 is a divisor of -1. To solve the system, it needs to be evaluated for each m individually. For m = 2, it follows that
x_j = 4 k_j + 1,  x_{j+1} = 3 k_j + 1;  j ∈ 𝒥_1,
and the compatibility equation
4 k_{j+1} + 1 = 3 k_j + 1;  j ∈ 𝒥_1,
with the solution
k_j = 4^{J-j} · 3^{j-1};  j ∈ 𝒥.
To illustrate this formula, consider the simple case J = 4 and m = 2 , leading to the results
j    k_j    x_j    x′_j    x″_j
1     64    257     772     386
2     48    193     580     290
3     36    145     436     218
4     27    109     328     164
For m = 3 , it follows that
x_j = 8 k_j - 3,  x_{j+1} = 3 k_j - 1;  j ∈ 𝒥_1,
and the compatibility equation
8 k_{j+1} - 3 = 3 k_j - 1;  j ∈ 𝒥_1,
or
8 k_{j+1} = 3 k_j + 2;  j ∈ 𝒥_1,  (11)
for which one cannot easily find a closed-form solution due to the appearance of the summand 2. However, the ILP approach easily produces solutions, although only for small J. From these solutions, the values k_j can be derived. Is it possible to prove that a solution exists for any given J ∈ ℕ? As (11) is a linear Diophantine equation with 2/gcd(8, 3) ∈ ℕ, one can always find a feasible pair (k_j, k_{j+1}) ∈ ℕ × ℕ.
k_j = 8^{J-j} · 3^{j-1} + 8^{J-j} · 3^{j-1} · j;  j ∈ 𝒥_1,
k_{j+1} = 8^{J-j-1} · 3^{j} + 8^{J-j-1} · 3^{j} · (j + 1);  j ∈ 𝒥_1,
8 k_{j+1} = 8^{J-j} · 3^{j} + 8^{J-j} · 3^{j} · (j + 1) = 3 k_j + 8^{J-j} · 3^{j};  j ∈ 𝒥_1.
For m = 4 , it follows that
x_j = 16 k_j + 5,  x_{j+1} = 3 k_j + 1,
and the compatibility equation
16 k_{j+1} + 5 = 3 k_j + 1;  j ∈ 𝒥_1,
or
16 k_{j+1} = 3 k_j - 4;  j ∈ 𝒥_1,
for which one cannot easily find a closed-form solution due to the presence of the subtrahend -4. However, the ILP approach easily produces solutions, although only for small J, even for m = 15, using GAMS/CPLEX.
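Since gcd(3, 2^m) = 1 divides every right-hand side, a single congruence produces a feasible pair for each m; the following sketch (our own, using Python's built-in modular inverse pow(3, -1, M)) solves the two compatibility equations above:

```python
def smallest_pair(m: int, add: int):
    """Smallest k >= 1 with 2**m dividing 3*k + add; returns (k, (3*k + add)//2**m)."""
    M = 2 ** m
    k = (-add * pow(3, -1, M)) % M      # k = -add * 3^(-1)  (mod 2**m)
    while k < 1 or (3 * k + add) // M < 1:
        k += M                          # shift to the smallest admissible solution
    return k, (3 * k + add) // M

# m = 3: 8*k' = 3*k + 2 has the smallest solution k = 2, k' = 1.
print(smallest_pair(3, 2))      # (2, 1)
# m = 4: 16*k' = 3*k - 4 has the smallest solution k = 12, k' = 2.
print(smallest_pair(4, -4))     # (12, 2)
```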

2.5. More Patterns

This subsection focuses on patterns constructed from U pairs of up and down moves (operation 3x + 1 or division by 2, respectively) followed by a few additional down moves. For example, the pattern udududd consists of U = 3 up moves and D = 4 down moves, while the pattern ududududdd comprises U = 4 up moves and D = 6 down moves.
Is it possible to find a starting number of the Collatz sequence for which the partial subsequence of numbers x_j increases for a given number, J, of steps? This means that one can find a starting number, C_1 = c_{11} = x_1, following the pattern udududd, i.e.,
x_2 = [3 [3 [3 x_1 + 1]/2 + 1]/2 + 1]/4 = (27/16) x_1 + 19/16 ∈ ℕ_odd ⊂ ℕ,
where x_j is called the header of block j = 1. The connection between the Collatz sequence C_n and the partial Collatz sequence x_j of block headers is given by C_{i+(j-1)(U+D)} = c_{ji} with x_j = c_{j1}, where i ∈ [1, 1 + U + D] indicates the position in the pattern.
In general, this pattern (up by 3x + 1, down by 2, again up by 3x + 1, down by 2, and finally, up by 3x + 1, and down by 4) repeats J times as
x_{j+1} = (3^3/16) x_j + 3^2/16 + 3^1/8 + 3^0/4 = (27/16) x_j + 19/16 ∈ ℕ_odd;  j = 1, …, J.  (14)
Recursion (14) can be formulated as the ILP minimization problem
min x_1  s.t.  27 x_j - 16 x_{j+1} = -19;  j ∈ 𝒥_1.  (15)
The focus is again solely on the smallest number x 1 that satisfies (15). The inhomogeneous system of Diophantine equations
27 x_j - 16 x_{j+1} = -19;  j ∈ 𝒥_1 := {1, …, J - 1},  (16)
can also be solved analytically using Euler’s algorithm, leading to a parameterized solution by k j I N :
x_j = 16 k_j + 7,  x_{j+1} = 27 k_j + 13.
To find an analytic expression for k_j involving only J and j to solve (16), one needs to ensure the compatibility of adjacent k_j, j ∈ 𝒥_1, i.e.,
16 k_{j+1} + 7 = 27 k_j + 13;  j ∈ 𝒥_1,  i.e.,  27 k_j - 16 k_{j+1} = -6;  j ∈ 𝒥_1.  (17)
Equation (17) is again an inhomogeneous linear Diophantine equation system, and its solution cannot—as in the homogeneous case—simply be an expression like (7):
k_j = 16^{J-j} · 27^{j-1};  j ∈ 𝒥_1.
From (17) one can, on the one hand, obtain the recurrence relation
k_{j+1} = (27 k_j + 6)/16;  j ∈ 𝒥_1,
while, on the other hand, it has the parameterized solution
k_j = 16 t_j - 18;  t_j ∈ ℤ,
k_{j+1} = 27 t_j - 30;  t_j ∈ ℤ,
with the compatibility constraint
k_{j+1} = 27 t_j - 30 = 16 t_{j+1} - 18,
or
27 t_j - 16 t_{j+1} = 12;  j ∈ 𝒥_1.
So, via this approach, deriving detailed formulae for k_j and repeating the worked-out formulae J times does not bring us to a closed analytic solution; however, brute-force solving is still possible for J ≤ 13. Our Python implementation, for J = 11, required 9 h 12 min 36 s to compute x_1 = 799644820199, and J = 12 took 14 h 8 min 52 s with x_1 = 3032214854484829.
To obtain a more elegant solution and a general explicit formula for the repeated patterns udududd, ududududd, …, ududududdd (repeated J times), expressions relating Z and N to U and D are derived. For example, in the pattern ududududdd—where U = 4 and D = 6 —the computation starts with x 1 , and  x 2 is obtained according to
x_2 = [3 [3 [3 [3 x_1 + 1]/2 + 1]/2 + 1]/2 + 1]/8 = (81/64) x_1 + 65/64 ∈ ℕ_odd ⊂ ℕ,
leading to
x_2 = (3^U/2^D) x_1 + 3^{U-1}/2^D + 3^{U-2}/2^{D-1} + … + 3^0/2^{D-U+1}
    = (3^U/2^D) x_1 + Σ_{k=0}^{U-1} 3^k / 2^{k+D-U+1}
    = (3^U/2^D) x_1 + (1/2^D) Σ_{k=0}^{U-1} 3^k · 2^{U-1-k}
    = (3^U/2^D) x_1 + (3^U - 2^U)/2^D ∈ ℕ_odd.
The recursion formula for the starting numbers of the blocks of a pattern specified by U and D is
x_{j+1} = (3^U/2^D) x_j + (3^U - 2^U)/2^D ∈ ℕ_odd;  j = 1, …, J - 1,  (19)
which, for U = 4 and D = 6 , gives us
x_{j+1} = (81/64) x_j + 65/64 ∈ ℕ_odd;  j = 1, …, J - 1.
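The block-header recursion above can be cross-checked against the raw Collatz moves; in this sketch (our own, with x = 15 chosen as a small start whose parities happen to fit; it is used purely for illustration), replaying ududududdd reproduces (81 x + 65)/64:

```python
def apply_pattern(x: int, moves: str) -> int:
    # replay raw Collatz moves, insisting on the parity each move requires
    for mv in moves:
        if mv == "u":
            assert x % 2 == 1, "an up move requires an odd number"
            x = 3 * x + 1
        else:
            assert x % 2 == 0, "a down move requires an even number"
            x //= 2
    return x

x = 15                                   # 15 -> 46 -> 23 -> ... -> 40 -> 20
assert apply_pattern(x, "ud" * 4 + "dd") == (81 * x + 65) // 64
print((81 * x + 65) // 64)               # 20
```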
The general explicit formula—proof by complete induction—for repeating ududududdd J times is given by
x_J = (3^U/2^D)^{J-1} x_1 + ((3^U - 2^U)/2^D) Σ_{k=0}^{J-2} (3^U/2^D)^k
    = (3^U/2^D)^{J-1} x_1 + ((3^U - 2^U)/2^D) · (2^D/(3^U - 2^D)) · [(3^U/2^D)^{J-1} - 1]
    = (3^U/2^D)^{J-1} x_1 + ((3^U - 2^U)/(3^U - 2^D)) · [(3^U/2^D)^{J-1} - 1] ∈ ℕ_odd.
For 3^U > 2^D, or B > A, one gets a monotonically increasing sequence of block starting points x_j of the specified pattern
x_J = (3^U/2^D)^{J-1} x_1 + (Z/N) [(3^U/2^D)^{J-1} - 1] ∈ ℕ_odd  (20)
with
Z := 3^U - 2^U,  N := 3^U - 2^D.
To compute solutions of (20), the equation is reformulated as
A α - B β = (Z/N) (B - A),  A = 2^{D(J-1)},  B = 3^{U(J-1)},  α = x_J > β = x_1,
or its equivalent form
A (N α + Z) = B (N β + Z)
with gcd(A, B) = 1 and B > A. Coprimality then allows the deduction of
N α + Z = B k,  (22)
N β + Z = A k,  (23)
for some integer k derived as follows. From (22) and (23), the variables are obtained by solving
α = (B k - Z)/N,  (24)
β = (A k - Z)/N.  (25)
Both numerators must be divisible by N. Exploiting the modular arithmetic techniques described and outlined in Appendix C.1, and using the modular inverse A^{-1} satisfying
A · A^{-1} ≡ 1 (mod N),
it follows that
k_0 = (mod(Z, N) · A^{-1}) mod N.
Finally, the smallest integer m is determined such that k = k_0 + N m satisfies A k > Z, which yields x_1 = β = (A k - Z)/N according to (25).
So far, a computationally efficient method has been established for determining the first element x_j of each block. However, this method only knows how to connect the block header elements; it does not know about the inner structure of the pattern. Therefore, one must check whether the Collatz operations up and down are possible, as these operations depend on whether the Collatz sequence elements c_{ji} [i is the index within the pattern structure] are odd or even. The process begins with m = 0, which establishes the values of k and x_1. This is performed by applying the Collatz sequence definition to c_{11} = x_1 and checking whether x_1 is odd. This is followed by computing c_{12} = 3 c_{11} + 1, checking whether c_{12} is even, computing c_{13} = c_{12}/2, and so on, until the last element—making sure that c_{1,U+D+1} is odd again to connect properly to the next block j = 2. If a test for whether a number is odd or even fails, exit the loop and repeat the test with m + 1. So far, if 3^U > 2^D, a consistent pattern solution has always been found, leading to the conjecture that it is always possible to find a consistent solution with this approach. However, this method will not work if A mod N = 0, e.g., for pattern ududd with U = 2, D = 3, and thus, N = 1. This case is solved differently in Appendix C.2.
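The whole procedure—computing Z, N, A, the modular inverse, k_0, and then searching over m with a parity replay—fits in a short sketch (our own Python rendering of the steps above, yielding a start value that is not necessarily the smallest; pow(A, -1, N) is Python's built-in modular inverse):

```python
def valid(U: int, D: int, J: int, x: int) -> bool:
    # replay the raw Collatz moves of the J-1 block transitions, checking parities
    moves = ("ud" * U + "d" * (D - U)) * (J - 1)
    for mv in moves:
        if mv == "u":
            if x % 2 == 0:
                return False
            x = 3 * x + 1
        else:
            if x % 2 == 1:
                return False
            x //= 2
    return True

def pattern_start(U: int, D: int, J: int) -> int:
    """A start value x1 for an increasing pattern with 3**U > 2**D."""
    Z, N = 3 ** U - 2 ** U, 3 ** U - 2 ** D
    A = 2 ** (D * (J - 1))
    k0 = (Z % N) * pow(A, -1, N) % N     # k = Z * A^(-1)  (mod N)
    m = 0
    while True:
        k = k0 + N * m
        if A * k > Z and valid(U, D, J, (A * k - Z) // N):
            return (A * k - Z) // N
        m += 1                           # a parity test failed: try the next k

print(pattern_start(3, 4, 4))            # 743 for udududd with J = 4
```

Replaying the raw moves from 743 gives the headers 743, 1255, 2119, 3577, which indeed satisfy the recursion x_{j+1} = (27 x_j + 19)/16.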
The following table summarizes the results for a few patterns and J = 4 (m and x_1 are the solutions without enforcing x_J ∈ IN_odd, while m′ and x_1′ follow from enforcing it):
pattern            U  D  2^D   3^U     Z     N    k_0    m   x_1          m′    x_1′
ududd              2  3    8     9     5     1   n.a.   15   8187
udududd            3  4   16    27    19    11
ududududd          4  5   32    81    65    49
ududududdd         4  6   64    81    65    17
ududududududdd     6  8
ududududududddd    6  9
udududududududdd   7  9  512  2187  2059  1675  1353    5   779504511    517   69498981247
Note that if 3^U < 2^D or A > B, the sequence of block starting points is monotonically decreasing and given by
x_j = ( 3^U / 2^D )^{j−1} x_1 + ( Z / N ) ( 1 − ( 3^U / 2^D )^{j−1} ) ∈ IN_odd ; j = 1 , … , J
with
Z := 3^U − 2^U , N := 2^D − 3^U ,
and
A α − B β = ( Z / N ) ( A − B ) , A = 2^{D ( J − 1 )} , B = 3^{U ( J − 1 )} , α = x_J < β = x_1
with gcd ( A , B ) = 1 and B < A . Its equivalent form
A ( N α − Z ) = B ( N β − Z )
and exploiting coprimality, i.e.,  gcd ( A , B ) = 1 , allows us to deduce
N α Z = B k
N β Z = A k
for some integer k. For  U = 5 and D = 8 , one obtains B = 243   < A = 256 and Z = 211 and N = 13 .
The variables α and β are obtained from solving (22) and (23)
α = ( B k + Z ) / N , β = ( A k + Z ) / N ,
where both numerators must be divisible by N.
If the modular conditions are now exploited again, as previously conducted, there is no guarantee in this case of decreasing patterns that it is possible to find such a path, i.e., a sequence of numbers.

3. Computer Algebra Systems, Multi-Precision Libraries and Exact-Rational Solvers

The purpose of this section is to assess the suitability of computer algebra systems and other software packages for solving large-number optimization problems. To begin with, various commercial computer algebra systems (CAS, e.g., Mathematica, Maple) and libraries for high-precision computing are available (some at no cost), which might be useful for solving large-number optimization problems.
The commercial computer algebra systems Mathematica and Maple both provide high-precision arithmetic or symbolic computations, respectively, and optimization functionality. While these systems are widely used and offer robust solvers, their proprietary nature and high licensing costs often limit accessibility in academic and resource-constrained contexts.
There are also some free and open-source computer algebra systems (CASs) used in academia and research. These systems can handle symbolic mathematics, numerical computations, and even optimization problems. In addition, there are a few LP and MILP solvers with exact-rational arithmetic, though not necessarily designed for or capable of handling large numbers. Below, without claiming completeness, is a list of such systems:
  • SageMath is one of the most comprehensive open-source CASs supporting symbolic and numerical computation, graph theory, combinatorics, and number theory as well as linear algebra, calculus, and optimization, e.g., linear programming and mixed-integer linear programming. It integrates many other open-source mathematical libraries (e.g., NumPy, SymPy, GAP, PARI/GP) into a single interface (URL: https://www.sagemath.org (accessed on 20 May 2025)).
  • SymPy is a Python library for symbolic mathematics, e.g., algebra, calculus, and equation solving as well as support for matrices, polynomials, and discrete math. It is lightweight and easy to integrate into Python projects and with NumPy and SciPy for numerical computations (URL: https://www.sympy.org (accessed on 20 May 2025)).
  • Maxima is a classic CAS with a long history in symbolic computation. It is based on the original Macsyma system developed at MIT. Its features include symbolic and numerical computation, e.g., solving equations, calculus, and linear algebra, as well as 2D and 3D plotting. It is mature, supports a large set of mathematical functions, and is known for its stability and robustness (URL: https://maxima.sourceforge.io (accessed on 20 May 2025)).
  • PARI/GP is a specialized CAS for number theory and algebraic computations, especially, fast arithmetic with large integers, support for elliptic curves, modular forms, and algebraic number fields. A scripting language for automation is provided as well (URL: https://pari.math.u-bordeaux.fr (accessed on 20 May 2025)).
  • Axiom is a general-purpose CAS with a strong focus on symbolic computation and algebraic structures, especially advanced symbolic algebra, support for abstract algebra and category theory. It is extensible via a scripting language (URL: http://www.open-axiom.org (accessed on 1 October 2025)).
  • Reduce is a general-purpose CAS designed for symbolic computation, especially symbolic algebra, calculus, and equation solving and support for polynomials, matrices, and tensors. Linear programming problems can be solved by the Simplex algorithm. It is customizable via its own scripting language (URL: https://reduce-algebra.sourceforge.io (accessed on 20 May 2025)).
  • Xcas/Giac is a user-friendly CAS based on the Giac library for symbolic and numerical computation with support for calculus, algebra, and geometry as well as linear programming and mixed-integer linear programming. Its syntax is compatible with Maple syntax and it is easy to use (URL: https://xcas.univ-grenoble-alpes.fr/en.html (accessed on 4 April 2025)).
There are a few rational, and in this sense exact, LP or MILP solvers:
  • PySimplex by Clavero [24] is a lightweight Python implementation of the standard Simplex algorithm for solving linear programming problems, aimed at educational use and algorithmic transparency (URL: https://github.com/carlosclavero/PySimplex (accessed on 11 June 2025)).
  • SCIP by Jarck [14] is a general-purpose solver for mixed-integer linear and nonlinear programming problems. It also has a built-in exact-rational solver based on the GMP library (see below). Data and model input can be provided via the modeling language ZIMPL (URL: https://github.com/scipopt/scip/tree/exact-rational (accessed on 31 March 2025)).
  • SoPlex by Gleixner et al. [25] is an open-source, high-precision linear programming solver developed at ZIB, designed for robustness and accuracy using iterative refinement and rational computations; written in C++. It uses object-oriented design principles and integrates with other C++-based tools like SCIP, while also offering a C-style interface for broader compatibility.
It is not clear what the largest numbers are that these solvers support. As PySimplex is coded in Python, it can likely handle arbitrarily large integers, limited only by the available memory of the computer.
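As a quick illustration of why plain Python suffices here: its integers, and hence the standard-library `fractions.Fraction` type, are arbitrary precision, so exact rational arithmetic on 100-digit data works out of the box:

```python
from fractions import Fraction

# Python integers are arbitrary precision, so exact rational arithmetic on
# 100-digit data works without any external library.
a = Fraction(10**100 + 7, 3)
b = Fraction(2, 10**100 + 7)
assert a * b == Fraction(2, 3)          # exact cancellation, no rounding
assert (10**500 + 1) - 10**500 == 1     # no overflow for big integers
```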
The following libraries are among those relevant to large-number optimization problems:
  • GMP: “The GNU Multiple Precision Arithmetic Library” by Granlund and the GMP Development Team [19], a widely used library for high-precision arithmetic; programming language: C. URL: https://gmplib.org (accessed on 20 May 2025).
  • MPFR: “A Multiple-Precision Binary Floating-Point Library with Correct Rounding” by Fousse et al. [20], with a focus on high-precision floating-point arithmetic; programming language: ISO C, based on the GNU MP library. URLs: https://www.mpfr.org (accessed on 20 May 2025) and https://dl.acm.org/doi/10.1145/1236463.1236468.
  • FMlib: “FM Multiple-Precision Software Package” by Prof. David M. Smith (reachable through dsmith@lmu.edu). It is exact for integers and fractions. Programming language: Fortran95. URL: https://dmsmith.lmu.build (accessed on 7 April 2025).
Several computational tools were evaluated, including Xcas, which offered a straightforward installation process. In contrast, the GNU Multiple Precision Arithmetic Library (GMP) has less favorable prerequisites for Windows platforms, and its installation procedure is comparatively complex. In the preliminary stage of this study, Xcas proved effective for testing small-scale examples involving large numbers but was less suitable for larger or more complex cases. Other freely available computer algebra systems appear to lack support for solving mixed-integer linear programming (MILP) problems. Consequently, we developed our own programs, described in Section 4 (Foundations and Implementations) and Section 5 (Examples and Testing).

4. Algorithmic Approaches to Optimization Involving Large Numbers

In this section, a brief overview of our implementations of algorithmic approaches to large-number optimization is provided. As the focus is rather on exact results and less on numerical efficiency, textbook-style implementations are sufficient, especially as we use exact-number implementations, for instance, in Python. Among the approaches implemented are vertex enumeration for small LP problems (LP-enum.py), a standard-tableau Simplex algorithm (simplex.py), an enumeration scheme for binary linear programs (BLP-Enum.py), a B&B procedure (BandB.py), and some programs tailored to the individual problems explained.
The algorithms have been coded in Fortran95 (exploiting block structures to exceed the standard integer limit), and Python (symbolic as well as exact-fraction arithmetic via SymPy). Fortran was chosen for its existing block structure derived from my program used to compute the determinants in Kallrath and Neutsch [21], while Python was selected for its support of arbitrary-precision integers.
Testing the algorithm is challenging as there are no algebraic modeling languages available which would allow the coding of optimization problems in an elegant way. The results have been compared to the derived analytic optimal solutions, to the results obtained by GAMS/CPLEX up to the limit of numbers supported, and, in the initial phase of this project, to the results obtained by the computer algebra system Xcas. Having an enumeration scheme and a Simplex algorithm implementation is of great help to check both of these. The same is true to check the B&B implementation by the complete enumeration scheme.

4.1. Complete Vertex Enumeration

The vertex enumeration scheme has been useful in the beginning to check the Simplex algorithm described in Section 4.2. It addresses linear programs (problem LP) in the maximization form:
Maximize c T x Subject to A x = b l x u ,
where x ∈ ℚ^n is the decision variable vector, A ∈ ℚ^{m×n} is the constraint matrix (m < n), b ∈ ℚ^m is the right-hand side vector, c ∈ ℚ^n is the objective function coefficient vector, and l, u ∈ ℚ^n are lower and upper bounds of the variables.
The implemented vertex enumeration method exploits exact arithmetic. It starts with inequality constraints that are converted to equalities by adding slack variables.
A_eq = [ A S ] , S_{ij} = 1 if slack variable j belongs to a ≤ constraint i , − 1 if slack variable j belongs to a ≥ constraint i , 0 otherwise
The vertex enumeration algorithm enumerates all possible basis combinations of size m from n + m variables (including original and slack variables). For each basis B, the following steps are performed:
  • Set non-basic variables to their bounds (considering both lower and upper bounds);
  • Solve A B x B = b A N x N for basic variables;
  • Check feasibility for the variable bounds: l x u and the original constraints: A x b with { , = , } ;
  • Compute objective value for feasible vertices.
To avoid numerical errors, exact arithmetic via SymPy is used within Python. The solver provides exact-fraction solutions with the objective function value in exact and decimal forms as well as a constraint verification with slack computation.
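A minimal sketch of such an exact vertex enumerator is given below. It is not the code of LP-enum.py: it assumes the simplified form max c^T x subject to A x ≤ b, x ≥ 0 (general bounds and ≥/= relations omitted), and the function names are ours:

```python
from fractions import Fraction
from itertools import combinations

def solve_exact(M, rhs):
    """Gauss-Jordan elimination over the rationals; None if singular."""
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = next((r for r in range(col, n) if A[r][col] != 0), None)
        if piv is None:
            return None
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
    return [A[r][n] for r in range(n)]

def enumerate_vertices(c, A, b):
    """Brute-force vertex enumeration for max c^T x s.t. Ax <= b, x >= 0,
    in exact Fraction arithmetic: one slack per inequality, then all bases."""
    m, n = len(A), len(A[0])
    Aeq = [[Fraction(v) for v in A[i]]
           + [Fraction(int(i == j)) for j in range(m)] for i in range(m)]
    c = [Fraction(v) for v in c] + [Fraction(0)] * m
    b = [Fraction(v) for v in b]
    best = None
    for basis in combinations(range(n + m), m):
        cols = [[Aeq[i][j] for j in basis] for i in range(m)]
        xB = solve_exact(cols, b)
        if xB is None or any(v < 0 for v in xB):
            continue                      # singular basis or infeasible vertex
        x = [Fraction(0)] * (n + m)
        for j, v in zip(basis, xB):
            x[j] = v
        z = sum(ci * xi for ci, xi in zip(c, x))
        if best is None or z > best[0]:
            best = (z, x[:n])
    return best
```

Applied to LP Example 1 of Section 5.1.1, this returns the exact optimum x_1 = (7 · 10^99 + 77)/4, x_2 = 0.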

4.2. Simplex Algorithm

We have implemented a standard Simplex algorithm in tableau form for solving linear programs in standard form:
Maximize c T x Subject to A x = b x 0
using the same symbols as in Section 4.1 for problem LP. The implementation follows along the lines outlined in Kallrath [26].
For using the concept of basic feasible solution, let B { 1 , , n } be a basis with | B | = m , and let N = { 1 , , n } B be non-basic indices. This leads to partition A = [ A B | A N ] and x = ( x B , x N ) from which the basic solution follows by setting x N = 0 (or its lower and upper bounds) and solving
A B x B = b x B = A B 1 b .
Note that the basis inverse A B 1 is implicitly generated through the tableau. For maximization problems, the optimality condition is
c N T c B T A B 1 A N 0 .
If satisfied, the current solution is optimal.
In pivoting, the non-basic variable x k with the most positive reduced cost is selected as a candidate for entering the basis. Then, we compute the direction d = A B 1 a k and find the leaving variable via the minimum ratio test:
θ = min ( x B ) i d i d i > 0 .
Subsequently, the basis is updated, and the next iteration is performed until all reduced costs are non-positive.
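For illustration, a compact exact tableau Simplex for this standard form can be sketched as follows. It is a simplified stand-in for simplex.py, not the paper's implementation: it assumes A x ≤ b with b ≥ 0 (so the slack basis is feasible and no Phase I is needed) and uses Bland's rule instead of the most-positive-reduced-cost rule:

```python
from fractions import Fraction

def simplex_max(c, A, b):
    """Exact tableau Simplex for  max c^T x  s.t.  A x <= b, x >= 0, b >= 0."""
    m, n = len(A), len(c)
    # tableau rows: [A | I | b]; last row: [-c | 0 | 0] accumulates z
    T = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(int(i == k)) for k in range(m)]
         + [Fraction(b[i])] for i in range(m)]
    T.append([-Fraction(cj) for cj in c] + [Fraction(0)] * (m + 1))
    basis = list(range(n, n + m))            # start from the slack basis
    while True:
        # entering column: first negative entry in the objective row (Bland)
        col = next((j for j in range(n + m) if T[m][j] < 0), None)
        if col is None:
            break                            # no improving column: optimal
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 0]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, row = min(ratios, key=lambda t: (t[0], basis[t[1]]))  # min-ratio test
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and T[i][col] != 0:
                f = T[i][col]
                T[i] = [a - f * p for a, p in zip(T[i], T[row])]
        basis[row] = col
    x = [Fraction(0)] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return T[m][-1], x
```

On LP Example 1 of Section 5.1.1, a single pivot yields the exact optimal fractions.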
For problems with general bounds, as described in Section 4.1 for problem LP, a transformation to standard form is applied. Specifically, for each bounded variable x i , the following procedure is implemented: If the lower bound l i 0 , substitute x i = x i + l i , and if u i < , add slack variable s i leading to
x i + s i = u i , s i 0 .
The resulting problem has only non-negative variables.
With bounds, the ratio test becomes:
θ = min { ( ( x_B )_i − l_i ) / d_i , ( u_i − ( x_B )_i ) / ( − d_i ) | d_i ≠ 0 }
Each variable in the optimal solution is either a basic variable with its value determined by A_B^{-1} b, or a non-basic variable at its lower bound if the reduced cost is < 0, at its upper bound if the reduced cost is > 0, or free if the reduced cost is = 0 (for a maximization problem). Note that a Phase I procedure is used to find an initial feasible solution.

4.3. Complete Enumeration for Binary Linear Programming (BLP) Problems

The binary linear programming problem is formulated as
Maximize ( or minimize ) c^T x subject to A x ∘ b , x_i ∈ { 0 , 1 } , i = 1 , … , n ,
using the same symbols as in Section 4.1, where ∘ represents the constraint relations (≤, =, or ≥ for each constraint) and x ∈ { 0 , 1 }^n is the binary decision vector.
The algorithm implements complete enumeration starting with the initialization, in which all numerical values are converted to exact fractions
c j Fraction ( c j ) , a i j Fraction ( A i j ) , b i Fraction ( b i ) ,
and initialize the tracking variables
x* ← None , z* ← − ∞ if maximization , + ∞ if minimization , feasible_count ← 0
Within the enumeration scheme, for each binary vector x { 0 , 1 } n (generated via Cartesian product), feasibility for all constraints
j = 1 n A i j x j i b i , i { 1 , , m }
is checked, where i is the relation for constraint i. If  feasible, the objective function value is
z = j = 1 n c j x j ,
and the updated best solution follows as
( x , z ) ( x , z ) if z > z ( maximization ) ( x , z ) if z < z ( minimization ) ,
and feasible count is incremented
f e a s i b l e _ c o u n t f e a s i b l e _ c o u n t + 1 .
The procedure terminates after evaluating all 2 n possible solutions: If feasible _ count = 0 , the problem is infeasible. Otherwise, the  optimal solution ( x , z ) is returned; alternatively, one could also inspect other binary feasible solutions.
The algorithm coded in BLP-enum.py has time complexity O ( m · n · 2 n ) and space complexity O ( m · n ) (for storing the problem). Consequently, the method is best applied only when n 20 .
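The scheme above can be sketched in a few lines of Python. This is a simplified stand-in for BLP-enum.py (function and variable names are ours):

```python
from fractions import Fraction
from itertools import product

def blp_enumerate(c, A, rel, b, maximize=True):
    """Complete enumeration for binary linear programs: try all 2^n binary
    vectors, check each constraint exactly, and keep the best feasible one."""
    n = len(c)
    c = [Fraction(v) for v in c]
    ops = {"<=": lambda l, r: l <= r,
           "=":  lambda l, r: l == r,
           ">=": lambda l, r: l >= r}
    best_x, best_z, feasible_count = None, None, 0
    for x in product((0, 1), repeat=n):
        feasible = all(
            ops[rel[i]](sum(Fraction(A[i][j]) * x[j] for j in range(n)),
                        Fraction(b[i]))
            for i in range(len(A)))
        if feasible:
            feasible_count += 1
            z = sum(cj * xj for cj, xj in zip(c, x))
            if best_z is None or (z > best_z if maximize else z < best_z):
                best_x, best_z = x, z
    return best_x, best_z, feasible_count
```

For instance, maximizing 10^98 x_1 + 2 x_2 + 3 x_3 subject to x_1 + x_2 + x_3 ≤ 2 yields (1, 0, 1) with z = 10^98 + 3 among 7 feasible points.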

4.4. Branch and Bound

Our B&B implementation follows along the lines outlined in Nemhauser and Wolsey [1] and Kallrath [26], and has been coded in our Python file BandB.py; it works well for the tests in Section 5.2 and the one described in Appendix B.4 solved by exploring three nodes. The pseudo-code of the implementation is shown below in Algorithm 1.
Algorithm 1 Branch and Bound for Mixed-Integer Linear Programming
1:  Input: Objective coefficients c, constraint matrix A, right-hand side b, integer variables x, bounds B, sense s, relations r
2:  Output: Optimal solution x*, objective value z*
3:  Initialize best solution x* ← null, best value z* ← ∞ (min) or −∞ (max)
4:  Initialize node queue Q with initial node n_0 from LP relaxation with bounds B
5:  Set node counter k ← 1
6:  while Q is not empty do
7:      Pop node n from Q with LP solution x, value z, bounds B_n
8:      if z ≥ z* (min) or z ≤ z* (max) then
9:          Prune node n
10:         continue
11:     end if
12:     if x satisfies integer constraints for all x_i ∈ I then
13:         if z < z* (min) or z > z* (max) then
14:             Update x* ← x, z* ← z
15:         end if
16:         continue
17:     end if
18:     Find integer variable x_i ∈ I with most fractional value x_i
19:     Compute floor f ← ⌊x_i⌋, ceiling c ← f + 1
20:     Create left child n_L with bound x_i ≤ f
21:     Solve LP relaxation for n_L to get solution x_L, value z_L
22:     if n_L is feasible then
23:         Add n_L to Q with priority z_L (min) or −z_L (max)
24:         Increment k
25:     end if
26:     Create right child n_R with bound x_i ≥ c
27:     Solve LP relaxation for n_R to get solution x_R, value z_R
28:     if n_R is feasible then
29:         Add n_R to Q with priority z_R (min) or −z_R (max)
30:         Increment k
31:     end if
32: end while
33: Return x*, z*
Various tests of BandB.py are now performed in Section 5.

5. Sample Problems and Verification

In this section, a collection of test problems is presented to evaluate the correctness of the computational implementation. These problems have been specifically constructed for this study to verify that the Python programs produce accurate results. For each test problem, the analytical optimal solution has been rigorously derived and compared against the numerical solutions obtained from the programs described in Section 4 or Xcas. In addition to Xcas (Version: Giac/Xcas 1.9.x, based on builds from the period 2022–2024), we carried out tests with time-limited evaluation licenses of the commercial computer algebra systems Mathematica (Version 14.3, released on 25 August 2025) and Maple (Version 2025.1). The results obtained with these systems were identical to those of our Python implementation; only in the instance in Section 5.2.4 did Mathematica return an alternate solution with the same objective function value. Computation times of Mathematica, Maple, and Python were in most cases below a second and are thus comparable.
A key challenge in developing effective test cases for this study lies in the absence of an algebraic modeling language capable of handling the large numbers specifically targeted for investigation. The same seems to hold true for standard input formats, such as the MPS- or LP-format used by various commercial LP or MILP solvers. Not surprisingly, these commercial solvers are not set up for dealing with large numbers, either. Therefore, only scalar models with a limited number of variables and constraints are presented. Note that our focus is solely on large numbers and exact solutions, not on computing times or efficiency of our large-number LP or ILP solvers. Unless reported otherwise, the computing times were of the order of seconds. The experiments were run on a 64 bit machine with an Intel(R) Core(TM) i7 CPU 2.8 GHz, 16 GB RAM, and Windows 10.
This section is divided into example groups of increasing complexity, beginning with three linear programming problems. This is followed by example group 2, with five ILP problems (the last one is the original Collatz integer programming problem introduced in Section 2). Finally, example group 3 has only one problem with a bilinear objective function and linear and quadratic constraints.

5.1. Example Group 1: Linear Programming Problems with Large Input Data

To evaluate the algorithms and implemented solvers on these problems, either an analytical optimal solution is derived for each problem or the results are compared against one another for consistency. For instance, a problem is solved using Xcas, LP-enum.py, and simplex.py to verify whether consistent results are obtained across these implementations.

5.1.1. LP Example 1

Now, with  C 1 = 3 + 10 98 and C 2 = 2 , the aim is to solve the following maximization problem:
max z = C 1 x 1 + C 2 x 2
subject to the constraints:
5 x 1 + 3 x 2 7 + 10 100 ,
2 x 1 + x 2 38.5 + 3.5 · 10 99 ,
x 1 0 , x 2 0 .
As C_1 ≫ C_2, one expects x_2 = 0 in the optimal solution, with the approximate values x_1 ≈ 1.75 · 10^99 and z ≈ 1.75 · 10^197. Our Simplex Python code [LP-Ex1.py] produced the exact values as fractions:
x_1 = ( 7 · 10^99 + 77 ) / 4 , x_2 = 0 , z = ( 7 · 10^197 + 287 · 10^98 + 231 ) / 4 .
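This result can be reproduced in a few lines of exact Fraction arithmetic (an illustrative check added here, not part of LP-Ex1.py): the second constraint is binding, the first has slack, and z follows from C_1 x_1.

```python
from fractions import Fraction

# Illustrative exact check of LP Example 1.
C1, C2 = 3 + 10**98, 2
x1, x2 = Fraction(7 * 10**99 + 77, 4), 0
assert 2 * x1 + x2 == Fraction(77, 2) + Fraction(7 * 10**99, 2)   # binding
assert 5 * x1 + 3 * x2 <= 7 + 10**100                             # slack left
assert C1 * x1 + C2 * x2 == Fraction(7 * 10**197 + 287 * 10**98 + 231, 4)
```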

5.1.2. LP Example 2

Another test problem, utilized to evaluate the Python program simplex.py, and incorporating a large number N, such as N = 10 100 , is
max z = ( 3 + N ) x 1 + ( 2 + N ) x 2 ( 1 + N ) x 3
subject to the constraints
( 2 + N ) x 1 + ( 1 + N ) x 2 3 ( 1 + N )
( 1 + N ) x 1 ( 1 + N ) x 3 = 0
0 ≤ x_1 ≤ 1 , 0 ≤ x_2 ≤ 2 , 0 ≤ x_3 ≤ 1 .
This problem has the following analytic solution: The basic variables
x 1 = x 3 = 1 + N 2 + N = 1 ε
are very close to 1 ( ε is a very small number approaching zero as lim N x 1 ( N ) = 1 ), the non-basic variable x 2 = 2 at its upper bound, and the objective function value
z = 2 N + 4 + 2 ( N + 1 ) / ( N + 2 ) .
This example turned out to be a good test case for simplex.py, especially for the logic to handle bounds within the Simplex algorithm.
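The analytic solution can be spot-checked exactly for a concrete large N (an illustrative check; the closed form for z in the last line is our own algebraic simplification):

```python
from fractions import Fraction

# Exact spot-check of LP Example 2 for N = 10**100.
N = 10**100
x1 = x3 = Fraction(1 + N, 2 + N)   # basic variables, just below 1
x2 = 2                             # non-basic variable at its upper bound
assert (2 + N) * x1 + (1 + N) * x2 <= 3 * (1 + N)   # first constraint (tight)
assert (1 + N) * x1 - (1 + N) * x3 == 0             # second constraint
z = (3 + N) * x1 + (2 + N) * x2 - (1 + N) * x3
assert z == 2 * N + 4 + Fraction(2 * (N + 1), N + 2)
```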

5.2. Example Group 2: Integer Linear Programming (ILP) Problems with Large Input Data

To test the ILP algorithms and implemented solvers on these problems, either an analytical optimal solution is derived for each problem or the results are compared against one another for consistency. For instance, a problem is solved using Xcas, BLP-enum.py, and BandB.py to verify whether consistent results are obtained across these implementations.
The following base case is presented, from which various variations are derived: The objective is to maximize
z = 10 98 x + 2 y
subject to
5 x + 3 y ≤ 10^100 , 2 x + y ≤ 10^99
with x , y I N 0 . The analytic solution is obviously x = 5 · 10 98 , y = 0 and z = 5 · 10 196 . Therefore, this base case is used as an example and starting problem from which a few more cases are derived.

5.2.1. ILP Example 1: Integer Linear Programming Problem with Large Input Data

Now, the aim is to solve the following maximization problem:
max z = C 1 x 1 + C 2 x 2 , C 1 = 3 + 10 98 , C 2 = 2
subject to the constraints:
5 x 1 + 3 x 2 7 + 10 100 ,
2 x 1 + x 2 38 + 3 · 10 99 ,
x 1 0 , x 2 0 , x 1 , x 2 I N 0 .
The solution obtained by Xcas (webversion) is
x 1 = 15 · 10 98 + 19 , x 2 = 0 , z = 15 · 10 196 + 64 · 10 98 + 57 .
Now, instead of (34), the inequality
2 x 1 + x 2 38.5 + 3.5 · 10 99
is used. Xcas provides the solution
x 1 = 174999999999999793946100029942110534354159166340759960262438034155588 6439431476216320250651925807104
z = 17499999999999979394610002994211053435415916634075996026243803 4155588643943147621632025065192580715649999999999993818383 0008982633160306247749902 c 22798807873141024667659318294428648960751955777421312
which is wrong. The integer solution can be obtained by noting that
2 x 1 + x 2 38 + 35 · 10 98
is a valid inequality. Now Xcas and also BandB.py return the correct solution
x 1 = 175 · 10 97 + 19 , x 2 = 0 , z = 175 · 10 195 + 715 · 10 97 + 57 .
Key insight: Significant care is required when handling large numbers to ensure computational accuracy.

5.2.2. ILP Example 2: Problem with Large Objective Function Coefficients

With C 1 = 123456789012345678901234567890123 and C 2 = 987654321098765432109876543210987 , the goal is to maximize
max z = C 1 x 1 + C 2 x 2
subject to
2 x 1 + x 2 = 4 x 1 + 2 x 2 = 5
and x_1 , x_2 ∈ IN_0. As C_2 > C_1, one expects x_2 ≥ x_1 in the optimal solution. Due to x_1 + 2 x_2 = 5, the maximum value x_2 can attain is 2, which forces x_1 = 1 via 2 x_1 + x_2 = 4. This is the analytic solution, and our Python code BandB.py returns the correct results
x_1 = 1 , x_2 = 2 , z = 2098765431209876543120987654312097 ≈ 2 · 10^33 .
These values were obtained by BandB.py in less than a second; they are found at the root node, and both constraints are active.
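The arithmetic can be replayed exactly in a few lines (an illustrative check using plain arbitrary-precision integers):

```python
# Exact replay of ILP Example 2: the equality system pins down (x1, x2) = (1, 2),
# and the optimal objective value follows by integer arithmetic.
C1 = 123456789012345678901234567890123
C2 = 987654321098765432109876543210987
x1, x2 = 1, 2
assert 2 * x1 + x2 == 4 and x1 + 2 * x2 == 5
z = C1 * x1 + C2 * x2
assert z == 2098765431209876543120987654312097
```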

5.2.3. ILP Example 3: Evolve a Problem from Small to Large Coefficients in the Constraints and Objective Function

The base problem ILP3 is to maximize
z = x 1 + x 2 + 2 x 3
subject to
7 x 1 + 2 x 2 + 3 x 3 26 5 x 1 + 4 x 2 + 7 x 3 42 2 x 1 + 3 x 2 + 5 x 3 28
with integer variables x_1 , x_2 , x_3 ∈ IN_0. This problem has three optimal solutions, (1, 0, 5), (0, 1, 5), and (1, 2, 4), with z = 11; the following eight nodes need to be explored to prove optimality. BandB.py returns the solution within a second:
node     bound added      x_1    x_2   x_3    z       status
root     –                14/11  0     56/11  126/11  x_1, x_3 fractional
1: 0-1   x_1 ≥ 2          2      0     4      10      integer feasible
2: 0-2   x_1 ≤ 1          1      0     26/5   57/5    x_3 fractional
3: 2-1   x_3 ≥ 6          –      –     –      –       infeasible
4: 2-2   x_3 ≤ 5          1      1/3   5      34/3    x_2 fractional
5: 4-1   x_2 ≥ 1          –      –     –      –       x_3 fractional
6: 4-2   x_2 ≤ 0          1      0     5      11      integer feasible
7: 5-1   x_3 ≥ 5 (= 5)    0      1     5      11      integer feasible
8: 5-2   x_3 ≤ 4          1      2     4      11      integer feasible
The values agree with those obtained with GAMS and GAMS/CPLEX. Note that as long as all integer numbers are within the limits that GAMS/CPLEX can handle, i.e., 2^31, it is, of course, faster and more robust than BandB.py. However, GAMS/CPLEX is only used herein to verify that BandB.py produces the correct results.
Variation 1 is the base problem ILP3 with a modified objective function
z = ( 1 + N ) x 1 + ( 1 + N ) x 2 + ( 2 + N ) x 3
subject to the same constraints and, for instance, N = 10^800. Ten nodes have been sufficient to prove the optimality of (1, 7, 1) with objective function value z = 9N + 10. Note that (0, 1, 5) is feasible but yields only z = 6N + 11, as each variable at zero forfeits one of the N contributions in the objective function.
Variation 2 is the base problem ILP3 with the original objective function but modified inequalities
( 7 + N ) x 1 + ( 2 + N ) x 2 + ( 3 + N ) x 3 26 + 6 N ( 5 + N ) x 1 + ( 4 + N ) x 2 + ( 7 + N ) x 3 42 + 6 N ( 2 + N ) x 1 + ( 3 + N ) x 2 + ( 5 + N ) x 3 28 + 6 N
and, for instance, N = 10^100. This time, six nodes have been sufficient to prove the optimality of (1, 0, 5) with the same objective function value z = 11 as in the base case. Up to N = 10^6, the solution has also been verified with GAMS/CPLEX; beyond this value, the GAMS/CPLEX solution becomes inexact.
Variation 3 is the base problem ILP3 with a modified objective function— C 1 = 2 , C 2 = 3 , C 3 = 5 —and modified inequalities
z = ( C 1 + N ) x 1 + ( C 2 + N ) x 2 + ( C 3 + N ) x 3
subject to
( 7 + N ) x 1 + ( 2 + N ) x 2 + ( 3 + N ) x 3 26 + 7 N ( 5 + N ) x 1 + ( 4 + N ) x 2 + ( 7 + N ) x 3 42 + 7 N ( 2 + N ) x 1 + ( 3 + N ) x 2 + ( 5 + N ) x 3 28 + 7 N
producing, for N = 10^800, within a second and with 12 nodes explored, the solution (1, 2, 4) with z = 7N + 28. This matches the analytic expectation: since C_1 < C_2 < C_3, one expects x_1 ≤ x_2 ≤ x_3 in an optimal solution. As for the Variation 2 problem, the solution has been verified with GAMS/CPLEX up to N = 10^6; beyond this value, the GAMS/CPLEX solution becomes inexact.
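An exact spot-check of Variation 3 (illustrative; it confirms feasibility, with the third constraint tight, and the objective value):

```python
# Exact check of Variation 3 with N = 10**800 for the reported optimum (1, 2, 4).
N = 10**800
x = (1, 2, 4)
for a1, a2, a3, r in ((7, 2, 3, 26), (5, 4, 7, 42), (2, 3, 5, 28)):
    assert (a1 + N) * x[0] + (a2 + N) * x[1] + (a3 + N) * x[2] <= r + 7 * N
z = (2 + N) * x[0] + (3 + N) * x[1] + (5 + N) * x[2]
assert z == 7 * N + 28
```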
So, for these problems with only a few variables and constraints, BandB.py based on B&B using LP relaxation works fine and produces results within seconds.

5.2.4. ILP Example 4: Problem with Large Coefficients in Both the Objective Function and Constraints

With
A = 12345678901234567890 B = 98765432109876543210 C = 123456789012345678901234567890
the goal is to maximize
max z = A x + B y
subject to
A x + B y ≤ C
and x , y ∈ IN_0. To derive an analytic closed-form solution, note that (37) can be divided by 90, leading to the inequality
A′ x + B′ y ≤ C′
with
A = 137174210013717421 , B = 1097393690109739369 , C = 1371742100137174210013717421 .
Now, (38) is transformed into the equality
( A′ x + B′ y ) + v = C′
or
A′ x + B′ y = C′ − v = D′ , x , y , v ∈ IN_0 .
which allows us to apply a modified solution approach for solving linear Diophantine equations, i.e., (39) is interpreted as a Diophantine equation in two integer variables x and y while the slack v is treated as an integer parameter to be minimized (finding the maximum of A x + B y corresponds to finding the minimum of v). The greatest common divisor (gcd) of A and B ,
gcd ( 137174210013717421 , 1097393690109739369 ) = 10^10 + 1 = 10000000001 ,
is not a divisor of C′, which means that v = 0 is not a solution and the Diophantine approach cannot be applied directly. Thus, finding the maximum of A′ x + B′ y corresponds to finding the minimum of v. As (39) is interpreted as a Diophantine equation in two integer variables x and y with a constant D′ = C′ − v, it suffices to find the largest D′ that has 10^10 + 1 as a divisor:
1371742100137174210013717421 / 10000000001 = 1371742100137174210000000000 / 10000000001 + 13717421 / 10000000001 = 137174210000000000 + 13717421 / 10000000001
Therefore, the minimum value of v is 13717421, which corresponds to the minimum value u = 90 · 13717421 = 1234567890 as the slack variable in the original equality ( A x + B y ) + v = C . Knowing the minimal u, we construct two particular solutions of (39), or the original equivalent, respectively.
The largest possible x follows from setting y = 0 and observing
x = ⌊ C′ / A′ ⌋ = 10^10 , y = 0 ,
from which the optimal objective function value
z = 123456789012345678900000000000 = 123456789012345678901234567890 1234567890
follows. The largest possible y follows from setting x = 0 and observing
y = ⌊ C′ / B′ ⌋ = 1249999988 , x = 0
leading to the non-optimal objective function value
z = 123456788952160493693981481480 = 123456789012345678901234567890 60185185207253086410 .
To summarize, instead of maximizing A x + B y subject to A x + B y ≤ C, the slack u is minimized in the equation
A x + B y + u = C .
The minimal u occurs when A x + B y is maximized without exceeding C. Then, a particular solution follows by setting y = 0 and maximizing x:
x = ⌊ C / A ⌋ = 10^10
u = C A × 10 10 = 1234567890
finding that it is optimal. The general solution for all triplets ( x , y , u ) is parameterized as
x = 10^10 − 109739369 s , y = 13717421 s , u = 1234567890
where s I N 0 . Requiring that only non-negative solutions are accepted
x = 10^10 − 109739369 s ≥ 0 ⟹ s ≤ 91 , y = 13717421 s ≥ 0 ⟹ s ≥ 0 ,
there exist 92 optimal solutions ( s = 0 to 91) with identical and minimal slack u = 1234567890 achieved by the particular solution ( 10 10 , 0 , 1234567890 ) , which is also the basis for a family of solutions ( 10 10 109739369 s , 13717421 s , 1234567890 ) for s = 0 to 91.
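The whole argument can be replayed with exact integer arithmetic (an illustrative check; the variable names are ours):

```python
from math import gcd

# Replay of the Diophantine argument: reduced data A', B', C' (division by 90),
# the gcd 10**10 + 1, the minimal slack, and the 92-member solution family.
A, B = 12345678901234567890, 98765432109876543210
C = 123456789012345678901234567890
Ap, Bp, Cp = A // 90, B // 90, C // 90
g = gcd(Ap, Bp)
assert g == 10**10 + 1
v_min = Cp % g                     # smallest v with g dividing (C' - v)
assert v_min == 13717421 and 90 * v_min == 1234567890
for s in range(92):                # the full optimal family
    x, y = 10**10 - (Bp // g) * s, (Ap // g) * s
    assert x >= 0 and y >= 0
    assert C - (A * x + B * y) == 1234567890   # identical minimal slack
```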
These results were confirmed using the computer algebra system Xcas with its Maple-like input syntax, i.e.,
  lpsolve(12345678901234567890*x+98765432109876543210*y,
      [12345678901234567890*x+98765432109876543210*y <= 123456789012345678901234567890],
      assume=integer,maximize)
The solution of the LP relaxation (x, y ≥ 0 instead of x, y ∈ IN_0) is returned in less than a second: z = 123456789012345678901234567890 with
x = 0 , y = 1371742100137174210013717421 / 1097393690109739369 = 1249999988 + 668724280080589849 / 1097393690109739369 ≈ 1249999988.61
However, if the integrality condition x , y Z Z is added, the Branch-and-Bound-based MILP solver in Xcas would need about two hours to solve this problem and give the optimal solution
x = 96017172 , y = 1262002135 , z = 123456789012345141599999946270 .
After modifying the integrality condition to x , y I N 0 and adding the derived bounds x 10 10 and y 1249999988 , Xcas needed only minutes to come up with the result
x = 13707741 , y = 1248286521 , z = 123456789012346767900000108900 .
which, however, violates (37).
BandB.py is too simplistic to solve this case; within 24 h, it did not find an optimal solution, but it found many integer-feasible solutions. It is interesting to see that the B&B algorithm produces feasible solutions moving from the first integer-feasible solution found, ( x , y ) = ( 4 , 1249999988 ) with z = 123456789001543209298919753040, to ( x + 8 , y − 1 ), and so on; each move requires three LP problems to be solved. This can be understood through the observation
B/A = 98765432109876543210 / 12345678901234567890 = 109739369 / 13717421 = 8 + 1/13717421,
i.e., from an integer-feasible solution one can always get to the next one by increasing x by 8 and decreasing y by 1. In fact, this problem is not very well suited for a B&B algorithm at all. It is rather a problem of which combinations (x, y) produce the smallest slack in the inequality. Therefore, we programmed a complete enumeration to approach this problem, starting at (4, 1249999988) and then increasing x by 1 each time. The first optimal solution is found at node 13717418 with (x, y) = (13717421, 1248285311) and z = 123456789012345678900000000000, as expected. Furthermore, Mathematica returned the solution x = 123456790 and y = 1234567890 with the correct optimal objective function value, while Maple only came up with an infeasible point x = 10^10 + 1 and y = 0.
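The ratio observation above can be verified directly in Python, whose integers are unbounded; this is an illustrative check, not part of BandB.py:

```python
from math import gcd

a = 12345678901234567890   # coefficient of x in objective and constraint
b = 98765432109876543210   # coefficient of y in objective and constraint
g = gcd(a, b)

# Reduced ratio B/A = 109739369/13717421 = 8 + 1/13717421
assert (b // g, a // g) == (109739369, 13717421)
assert 109739369 == 8 * 13717421 + 1
```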

5.2.5. ILP Example 5: Solving the Collatz ud Pattern Problem

Let us now apply BandB.py to the MILP problem C1 presented in Section 2.1:
min x1  s.t.  3 x_j − 2 x_{j+1} = −1,  j ∈ J1 := {1, …, J}.
The results for J = 3 to J = 11 obtained by BandB.py agree with the analytic solution x1 = 2^J − 1 and are summarized in Table 1, which shows x1, the number N of nodes explored, and the CPU times t used.
Two key observations emerge from Table 1. First, BandB.py, based on B&B with LP relaxation, is stable enough to handle this problem (for J = 11: 11 integer variables and 10 equality constraints). Second, since BandB.py in its current version lacks sophisticated presolving techniques, as implemented, for instance, in GAMS/CPLEX, it is too simplistic to solve this problem efficiently. The reason is that the lower bound for the objective function x1 moves up only very slowly. A brute-force approach enumerating all possible values of x1 and verifying the feasibility of all constraints would likely achieve comparable computational efficiency. Alternatively, Mathematica yielded the correct solution for J = 30 or J = 100 in less than a second; J = 1000 produced the solution within 20 s.
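The analytic solution can also be checked without any solver. A short Python sketch (the helper name ud_steps is ours) counts how many consecutive ud moves x → (3x + 1)/2 apply, i.e., how long the iterates stay odd; starting from x1 = 2^J − 1, exactly J such moves are possible:

```python
def ud_steps(x):
    """Count how many consecutive ud moves (x odd: x -> (3x + 1)//2) apply."""
    j = 0
    while x % 2 == 1:
        x = (3 * x + 1) // 2
        j += 1
    return j

# Works unchanged for huge J thanks to Python's arbitrary-precision integers,
# e.g. ud_steps(2**1000 - 1) returns 1000.
```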

5.3. Example 3: Bilinear Objective Function and Linear and Quadratic Constraints (MIQP)

Consider the following constrained quadratic mixed-integer optimization (MIQP) problem with very large integers (approximately 100 digits), starting with the continuous case in which the bilinear function
max z = f(x, y) = x y
should be maximized subject to one linear and one quadratic inequality constraint:
x + y ≤ N
x² + y² ≤ M
x, y ≥ 0.
The maximum occurs on the boundary of the feasible region. Consequently, two cases need to be considered:
Case 1: The maximum lies on the line x + y = N. The substitution y = N − x leads to
z = x (N − x) = N x − x².
Taking the derivative and setting it to zero,
dz/dx = N − 2x = 0 ⟹ x = N/2.
Thus y = N/2 and
z = (N/2)² = N²/4.
This point is feasible only if
(N/2)² + (N/2)² = N²/2 ≤ M.
Case 2: The maximum lies on the circle x² + y² = M. By symmetry, the maximum occurs when x = y:
2x² = M ⟹ x = √(M/2).
Thus
z = M/2.
This point is feasible only if
2 √(M/2) = √(2M) ≤ N.
Thus, the maximum value occurs at
x = y = N/2 when N²/2 ≤ M, with z = N²/4,
or
x = y = √(M/2) when N²/2 ≥ M, i.e., √(2M) ≤ N, with z = M/2.
At the boundary where N² = 2M, both solutions coincide.
Consider now the mixed-integer case for maximizing z = x y subject to
x + y ≤ N
x² + y² ≤ M
with x ∈ ℤ and y ∈ ℝ, therefore yielding a mixed-integer quadratic programming problem.
Feasible domain integer bounds for x follow from (42) and yield
x² ≤ M ⟹ x ≤ √M, x ∈ ℤ.
Since z = x y ≥ 0, we consider x ≥ 0 and conduct a full enumeration on x,
x ∈ {0, 1, …, min(⌊N⌋, ⌊√M⌋)},
i.e., for each fixed value of x, a feasible value of y needs to be found. For each integer x, y must satisfy
y ≤ min(N − x, √(M − x²)).
For each feasible integer x, the objective function value is computed as
z(x) = x · min(N − x, √(M − x²)).
When N − x ≤ √(M − x²), the solution is in the linear-dominated region and follows as
z(x) = x (N − x),
which has its maximum at x = ⌊N/2⌋ or ⌈N/2⌉.
When √(M − x²) < N − x, the solution is in the quadratic-dominated region, i.e.,
z(x) = x √(M − x²),
with the maximum near x = √(M/2).
This leads to Algorithm 2.
Algorithm 2 Optimize(N, M)
 1: xmax ← min(floor(N), floor(sqrt(M)))
 2: zmax ← 0
 3: for x = 0 to xmax do
 4:    y ← min(N − x, sqrt(M − x²))
 5:    if x·y > zmax then
 6:       (x*, y*, zmax) ← (x, y, x·y)
 7: return (x*, y*, zmax)
This is illustrated for the small-number example N = 10, M = 50:
x ∈ {0, 1, …, 7}
x = 3: y ≈ 6.403, z ≈ 19.21
x = 4: y ≈ 5.831, z ≈ 23.32
x = 5: y = 5, z = 25 (optimal by inspection)
x = 6: y ≈ 3.742, z ≈ 22.45
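Algorithm 2 translates into a few lines of Python. math.isqrt keeps the bound ⌊√M⌋ exact for arbitrarily large M; y itself is computed as a float here, so this sketch is reliable only for moderate data sizes, not for the 100-digit case:

```python
from math import isqrt

def optimize(N, M):
    """Full enumeration over integer x of max x*y s.t. x + y <= N,
    x^2 + y^2 <= M, x integer, y real (Algorithm 2)."""
    xmax = min(N, isqrt(M))            # exact floor(sqrt(M)), even for big M
    xstar, ystar, zmax = 0, 0.0, 0.0
    for x in range(xmax + 1):
        y = min(N - x, (M - x * x) ** 0.5)
        if x * y > zmax:
            xstar, ystar, zmax = x, y, x * y
    return xstar, ystar, zmax
```

For the small-number example, optimize(10, 50) returns x* = 5, y* = 5, z = 25, in agreement with the enumeration above.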
The maximum occurs at the integer x closest to N/2 when the linear constraint dominates, and closest to √(M/2) when the quadratic constraint dominates. Computing the optimal solution requires full enumeration of the feasible x values. This generalizes the continuous solution to the integer case.
For the example data N = 10^50 + 1 and M = 2 · 10^100 + 1, our Python program finds the optimal solution
x = 50000000000000003814884920545943501647482485473280
and
y = 49999999999999996185115079454056498352517514526721.0,
yielding the maximal objective function value
z = 2499999999999999999999999999999985446653042991170386487993513416619233679558726642285401256891514880.0

6. Conclusions and Outlook

We have constructed partial Collatz sequences that increase for various patterns (ud, ududd, ududududdd, etc.) and repeat the increasing pattern J times for any fixed natural number J. With growing J, this quickly leads to large-number optimization. Beyond the 2^31 integer limits supported by commercial MILP solvers, Diophantine systems of equations and modular arithmetic have been used to derive analytic or semi-analytic solutions.
In addition, we have developed Python and Fortran code to solve linear programming, binary linear programming, and integer linear programming problems exactly, with not too many variables or constraints, suitable for providing exact solutions to large-number optimization problems. To test the correctness of the programs, nine small, specifically constructed examples [LP1-3, ILP1-5, and one mixed-integer quadratic programming (MIQP) problem] are used, for which analytic optimal solutions have been derived. Unless otherwise reported, the computing times were on the order of seconds, and the results agreed with the expected analytic solutions. These test examples can serve as basic benchmarks or a test suite for other researchers. While the algorithms themselves have the same asymptotic complexity as their fixed-precision counterparts, the constant factors introduced by exact large-number arithmetic dominate: as the number of digits or coefficients increases, runtimes and memory usage grow sharply. The scalability trends observed in the Collatz MILP example (Table 1) illustrate this effect concretely, as both the number of explored nodes and the CPU time increase rapidly with the growing size of the decision variables. This explains why the current implementations remain limited to small and, at best, medium-sized instances, while pointing to the need for future advances in algorithm engineering and data structures to address scalability.
Specialized algorithms for mathematical optimization with large numbers can be justified in special situations. General algorithms for large-number optimization are more difficult to justify, as in many cases it can be easier to program simple algorithms in Python, C, or Fortran, or to use computer algebra systems (although computer algebra systems are not specifically designed for optimization problems, Mathematica is always a valuable tool to explore). However, one can argue that generic exact large-number LP and ILP solvers allow us to explore further problems that appear interesting to us.
Python’s capabilities for handling large numbers and exact-fraction arithmetic are robust, making it well suited for solving small- and medium-sized problems involving large numbers, which correspond closely to the problem characteristics considered in this study. Unlike many programming languages that impose fixed-size integer limits, Python’s built-in int type supports arbitrary precision, constrained only by available memory, although extremely large numbers require substantial memory. The current work (version 28 September 2025) is the beginning of large-number optimization (see the GitHub repository: https://github.com/JosefKallrath/Large-number-optimization (accessed on 28 September 2025)), and hopefully it will lead to a library approach that covers a wider range of problems, for instance, prime programming problems. Given our exact B&B solver for large numbers, the plan is to extend the B&B logic to prime numbers, i.e., instead of enforcing variables to be integers, they are enforced to be prime numbers, which would enable us to also solve prime programming problems with large numbers. An important improvement is to simplify problem coding, ideally by employing a GAMS-type syntax enhanced to support large numbers.

Funding

This research received no external funding.

Data Availability Statement

The original data presented in the study are openly available in the GitHub “Large-number optimization” at https://github.com/JosefKallrath/Large-number-optimization (accessed on 28 September 2025).

Acknowledgments

S.A.E.G. Falle (Dept. of Mathematics, Leeds University, UK) made very helpful comments and suggestions that improved the paper. I appreciate the discussions with Siegfried Jetzke (retired from Ostfalia Hochschule für angewandte Wissenschaften, Salzgitter, Germany) on computer algebra systems. Constructive reviews and suggestions by four anonymous referees and the encouraging comments by the two Guest Editors are greatly appreciated.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Smallest 100-Digit Prime Number

The task is to find the smallest 100-digit prime number in which each of the digits 0 to 9 occurs exactly ten times and zero is not in the leading position. This problem was introduced by the author to illustrate how easy it is to fool AI chatbots. They usually apply brute force and come up with solutions which are not even prime numbers. Even worse, they do not detect that every 100-digit number satisfying the constraint formulated above has a digit sum of 450, i.e., it is divisible by 9, and thus definitely not a prime number. Admittedly, no large-number optimization algorithm is needed to solve this problem; a brief analysis suffices.
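The divisibility argument is a two-line check in Python:

```python
# Each of the digits 0..9 occurs exactly ten times, so the digit sum is fixed:
digit_sum = 10 * sum(range(10))        # = 450
# A number is divisible by 9 iff its digit sum is, so no such prime exists.
assert digit_sum == 450 and digit_sum % 9 == 0
```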

Appendix B. Analytic Solutions for Selected LP and BLP Problems

For solver testing, in this appendix, optimal analytic solutions or enumeration methods for various problems with only two or three variables are developed.

Appendix B.1. Maximization LP Problem with Two Variables

General Formulation

Maximize z = C1 x + C2 y
subject to A1 x + A2 y ≤ b, x ≥ 0, y ≥ 0,
where C1, C2, A1 > 0, A2 > 0, and b > 0 are arbitrary constants. We can derive the analytic optimum by solving the equality obtained from the inequality:
A1 x + A2 y = b ⟹ y = (b − A1 x)/A2.
The feasible region of the problem consists of the three vertices
(0, 0), (b/A1, 0), (0, b/A2).
Evaluating the objective function at these vertices gives
z1 = 0, z2 = C1 b/A1, z3 = C2 b/A2.
Thus, the optimal solution is given by
(x, y) = (b/A1, 0) if C1/A1 > C2/A2, and (x, y) = (0, b/A2) if C1/A1 ≤ C2/A2,
with the maximum value of the objective function
z_max = max(C1 b/A1, C2 b/A2).
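This case distinction is easily implemented in exact arithmetic; a minimal Python sketch (the function name is ours) uses Fraction so that arbitrarily large integer data C1, C2, A1, A2, b stay exact:

```python
from fractions import Fraction as F

def lp2_max(C1, C2, A1, A2, b):
    """Analytic optimum of max C1*x + C2*y s.t. A1*x + A2*y <= b, x, y >= 0,
    assuming all data positive; compares the two non-trivial vertices."""
    z2 = F(C1 * b, A1)     # objective value at vertex (b/A1, 0)
    z3 = F(C2 * b, A2)     # objective value at vertex (0, b/A2)
    if z2 > z3:
        return (F(b, A1), F(0)), z2
    return (F(0), F(b, A2)), z3
```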

Appendix B.2. Analytic Solution for a Maximization LP Problem with Three Variables

The following linear programming (LP) problem is considered:
max z = C 1 x 1 + C 2 x 2 + C 3 x 3
Subject to
A11 x1 + A12 x2 + A13 x3 ≤ b1
A21 x1 + A22 x2 + A23 x3 ≤ b2
x1, x2, x3 ≥ 0.
This problem is solved through a complete enumeration of the vertices. Initially, slack variables s 1 and s 2 are introduced to convert the inequalities into equalities:
A 11 x 1 + A 12 x 2 + A 13 x 3 + s 1 = b 1
A 21 x 1 + A 22 x 2 + A 23 x 3 + s 2 = b 2
s1, s2 ≥ 0.
Next, basic and non-basic variables are identified, which is easy for this system with n = 5 variables (x1, x2, x3, s1, s2) and m = 2 equations. For a basic feasible solution, we set three non-basic variables to zero and solve for the remaining two basic variables. In this small example, there are only C(n, m) = C(5, 2) = 10 possible cases, enumerated below:
case | basic variables | non-basic variables
1 | x1 and x2 | x3 = s1 = s2 = 0
2 | x1 and x3 | x2 = s1 = s2 = 0
3 | x2 and x3 | x1 = s1 = s2 = 0
4 | x1 and s1 | x2 = x3 = s2 = 0
5 | x1 and s2 | x2 = x3 = s1 = 0
6 | x2 and s1 | x1 = x3 = s2 = 0
7 | x2 and s2 | x1 = x3 = s1 = 0
8 | x3 and s1 | x1 = x2 = s2 = 0
9 | x3 and s2 | x1 = x2 = s1 = 0
10 | s1 and s2 | x1 = x2 = x3 = 0
For each case, we solve the system of equations for the basic variables and check feasibility (x1, x2, x3, s1, s2 ≥ 0). For example, in Case 1 (x1 and x2 are basic), we set x3 = s1 = s2 = 0, and the system to be solved becomes
A11 x1 + A12 x2 = b1
A21 x1 + A22 x2 = b2,
which leads to x1 and x2:
x1 = (b1 A22 − b2 A12)/(A11 A22 − A21 A12)
x2 = (b2 A11 − b1 A21)/(A11 A22 − A21 A12).
If both x1 and x2 turn out to be non-negative,
x1 ≥ 0, x2 ≥ 0,
we have found a feasible vertex and can compute the objective function value z 1 = C 1 x 1 + C 2 x 2 .
Repeating this process for all 10 cases allows us to pick the optimal solution as the best feasible basic solution, i.e., the one with the maximum value of z_c. While working within the enumeration scheme, we can identify the special cases as being either unbounded (the feasible region is unbounded and the objective function can increase indefinitely, so the problem has no finite solution; zero denominator for x1 or x2), degenerate (one or more basic variables are zero), or infeasible (in all 10 cases, at least one basic variable is negative).
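The full enumeration scheme can be sketched compactly in exact arithmetic (a sketch with hypothetical names, using Cramer's rule exactly as in the Case 1 formulas above):

```python
from fractions import Fraction as F
from itertools import combinations

def lp3_max(C, A, b):
    """Exact vertex enumeration for max C.x s.t. A x <= b, x >= 0
    (3 variables, 2 constraints). Columns 0..2 are x1..x3,
    columns 3..4 the slacks s1, s2. Returns (z, x) or None if infeasible."""
    def col(j, r):                    # coefficient of column j in row r
        return F(A[r][j]) if j < 3 else F(1 if j - 3 == r else 0)

    best = None
    for basis in combinations(range(5), 2):
        a11, a12 = col(basis[0], 0), col(basis[1], 0)
        a21, a22 = col(basis[0], 1), col(basis[1], 1)
        det = a11 * a22 - a21 * a12
        if det == 0:
            continue                  # singular basis, skip
        v1 = (F(b[0]) * a22 - F(b[1]) * a12) / det   # Cramer's rule
        v2 = (F(b[1]) * a11 - F(b[0]) * a21) / det
        if v1 < 0 or v2 < 0:
            continue                  # vertex infeasible
        x = [F(0)] * 3
        for val, j in zip((v1, v2), basis):
            if j < 3:
                x[j] = val
        z = sum(F(C[k]) * x[k] for k in range(3))
        if best is None or z > best[0]:
            best = (z, x)
    return best
```

Since all arithmetic is rational, the same code accepts coefficients with hundreds of digits.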

Appendix B.3. Maximization in a Knapsack Problem

Here, we consider a simple knapsack problem with n = 2 binary variables and m = 1 constraint:
max z = C1 x + C2 y subject to A1 x + A2 y ≤ b, x ∈ {0, 1}, y ∈ {0, 1},
with C1, C2, A1, A2, b > 0. In this case, again, we find the analytic solution by enumerating all possible values of (x, y) ∈ {(0, 0), (0, 1), (1, 0), (1, 1)} and checking the feasibility of each pair under the inequality constraint A1 x + A2 y ≤ b:
(0, 0): 0 ≤ b (unconditionally feasible)
(0, 1): A2 ≤ b (feasible if true)
(1, 0): A1 ≤ b (feasible if true)
(1, 1): A1 + A2 ≤ b (feasible if true)
Then, we evaluate the objective function for each feasible pair
z(0, 0) = 0, z(0, 1) = C2, z(1, 0) = C1, z(1, 1) = C1 + C2
and find the optimal solution:
(x, y) = (1, 1), if A1 + A2 ≤ b
(1, 0), if A1 ≤ b, A2 > b, or (C1 > C2 and A1 ≤ b)
(0, 1), if A2 ≤ b, A1 > b, or (C2 > C1 and A2 ≤ b)
(0, 0), if A1 > b and A2 > b
yielding the maximum value of the objective function:
z max = C 1 x + C 2 y .
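The four-point enumeration is a direct loop in Python (the function name is ours); since Python integers are unbounded, the same code accepts 100-digit coefficients:

```python
from itertools import product

def knapsack2(C1, C2, A1, A2, b):
    """Enumerate all (x, y) in {0,1}^2; return the best feasible pair and z."""
    best = ((0, 0), 0)                 # (0, 0) is always feasible with z = 0
    for x, y in product((0, 1), repeat=2):
        if A1 * x + A2 * y <= b and C1 * x + C2 * y > best[1]:
            best = ((x, y), C1 * x + C2 * y)
    return best
```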

Appendix B.4. Analytic Solution for a Maximization BLP Problem with Three Binary Variables

Now we consider the following binary linear programming (BLP) problem with n = 3 binary variables and m = 2 constraints:
max z = C 1 x 1 + C 2 x 2 + C 3 x 3
subject to
A11 x1 + A12 x2 + A13 x3 ≤ b1
A21 x1 + A22 x2 + A23 x3 ≤ b2
x1, x2, x3 ∈ {0, 1}.
Again, we resort to complete enumeration. Since x1, x2, and x3 are binary variables, there are 2³ = 8 possible combinations of values for (x1, x2, x3). We enumerate all combinations (0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), and (1, 1, 1) and check for each combination (x1, x2, x3) whether it satisfies the constraints
A11 x1 + A12 x2 + A13 x3 ≤ b1
A21 x1 + A22 x2 + A23 x3 ≤ b2.
If a combination satisfies both constraints, it is feasible; otherwise it is infeasible. For each feasible combination, we compute the corresponding value of the objective function
z = C 1 x 1 + C 2 x 2 + C 3 x 3 .
The optimal solution is the feasible combination ( x 1 , x 2 , x 3 ) that maximizes z. If none of the eight combinations satisfies the constraints, then the problem is infeasible. If multiple combinations yield the same maximum value of z, then there are multiple optimal solutions.
With N = 10^800, let us consider the example
C1 = 3 + N, C2 = 2 + N, C3 = 1 + N
A11 = 1, A12 = 2, A13 = 1, b1 = 3
A21 = 2, A22 = 1, A23 = 3, b2 = 4.
We evaluate each combination c and obtain the following results:
c | (x1, x2, x3) | check constraints | status | z
1 | (0, 0, 0) | 0 ≤ 3 and 0 ≤ 4 | feasible | 0
2 | (0, 0, 1) | 1 ≤ 3 and 3 ≤ 4 | feasible | 1 + N
3 | (0, 1, 0) | 2 ≤ 3 and 1 ≤ 4 | feasible | 2 + N
4 | (0, 1, 1) | 3 ≤ 3 and 4 ≤ 4 | feasible | 3 + 2N
5 | (1, 0, 0) | 1 ≤ 3 and 2 ≤ 4 | feasible | 3 + N
6 | (1, 0, 1) | 2 ≤ 3 and 5 > 4 | infeasible | -
7 | (1, 1, 0) | 3 ≤ 3 and 3 ≤ 4 | feasible | 5 + 2N
8 | (1, 1, 1) | 4 > 3 and 6 > 4 | infeasible | -
The unique optimal solution is ( x 1 , x 2 , x 3 ) = ( 1 , 1 , 0 ) with z = 5 + 2 N , which we also obtained by exploring three nodes by using our B&B code implementation BandB.py.
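The same enumeration with the 800-digit example data runs unchanged in plain Python:

```python
from itertools import product

# Example data from above; N has 801 digits, yet needs no special support.
N = 10 ** 800
C = (3 + N, 2 + N, 1 + N)
A1, b1 = (1, 2, 1), 3
A2, b2 = (2, 1, 3), 4

best = None
for x in product((0, 1), repeat=3):
    if (sum(a * v for a, v in zip(A1, x)) <= b1 and
            sum(a * v for a, v in zip(A2, x)) <= b2):
        z = sum(c * v for c, v in zip(C, x))
        if best is None or z > best[0]:
            best = (z, x)
# best == (5 + 2*N, (1, 1, 0))
```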

Appendix C. Details for Solving Diophantine Equations and a Special Collatz Pattern

Appendix C.1. Modular Arithmetic for Solving the Diophantine Equations for Pattern-Defined Collatz Subsequences

We start with the Diophantine system
N α + Z = B k
N β + Z = A k
for some integer k, for which we now want to exploit the modular conditions. From (25), it follows that
A k ≡ Z (mod N) ⟺ A k ≡ mod(Z, N) (mod N).
Let us consider the example case with A = 16^{J−1}, Z = 19, and N = 11. Since 16 ≡ 5 (mod 11), we have A = 16^{J−1} ≡ 5^{J−1} (mod 11).
The modular inverse A^{−1} satisfies
A · A^{−1} ≡ 1 (mod 11), where A ≡ 5^{J−1} (mod 11),
and therefore,
k ≡ 8 A^{−1} (mod 11).
To find minimal solutions, we let k0 = (mod(Z, N) · A^{−1}) mod N and derive
k = k0 + N m for m ≥ 0.
Choose the smallest m such that
A (k0 + N m) ≥ Z.
As we are dealing with large numbers A, m = 0 will work. Finally, we compute
β = (A k − Z)/N, α = (B k − Z)/N.
Implementation in Fortran or Python has some advantages when it comes to large integers:
  • Computing A = 16^{J−1} or mod(A, N)^{J−1}, respectively.
  • Finding A^{−1} mod 11 or mod(A, N)^{−1} mod N via a loop over t from 1 to N − 1.
  • Calculating k0 = (8 A^{−1}) mod N or, in the general case, (mod(Z, N) A^{−1}) mod N.
  • Determining the minimal m such that A (k0 + 11 m) ≥ 19, in general: A (k0 + N m) ≥ Z.
  • Computing β and α using the derived Formula (A7).
Let us demonstrate this for the example. For J = 6, the detailed steps are
  • A = 16^5 = 1048576;
  • 16^5 mod 11 = 5^5 mod 11 = 1 ⟹ A^{−1} = 1, as 1 · 1 = 1;
  • k0 = 8 mod 11 = 8, in general: k0 = (mod(Z, N) A^{−1}) mod N;
  • m = 0, as 1048576 × 8 ≥ 19;
  • β = (1048576 × 8 − 19)/11 = 762599, in general: β = (A × mod(Z, N) − Z)/N;
  • B = 27^5 = 14348907, in general: B = (3^U)^{J−1};
  • α = (14348907 × 8 − 19)/11 = 10435567, in general: α = (B × mod(Z, N) − Z)/N.
We compute the modular inverse of A modulo N, denoted A^{−1} (a number between 1 and N − 1), defined by
A × A^{−1} ≡ 1 (mod N),
by performing a loop over t from 1 to N − 1, exploiting
(A × A^{−1}) mod N = ((A mod N) × A^{−1}) mod N
and A mod 11 = 5^5 mod 11.
! Compute inv_A (modular inverse of A modulo N) via a loop over t:
   inv_A = 0
   do t = 1, N-1
      if (mod(A_modN * t, N) == 1) then
         inv_A = t
         exit
      end if
   end do
Now, we can compute k 0 = ( mod ( Z , N ) A 1 ) mod N and determine the smallest integer m such that k = k 0 + N m satisfies A k Z , which finally gives us β = ( A k Z ) / N .
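In Python, the whole computation collapses to a few lines: since version 3.8, the built-in pow(A, -1, N) returns the modular inverse directly (it requires only gcd(A, N) = 1), so no search loop is needed. The values below reproduce the J = 6 example:

```python
A, B, Z, N = 16 ** 5, 27 ** 5, 19, 11   # example data for J = 6
inv_A = pow(A, -1, N)                   # modular inverse; here 1, since A ≡ 1 (mod 11)
k0 = (Z % N) * inv_A % N                # here 8
k = k0                                  # m = 0 suffices, as A * k0 >= Z
beta = (A * k - Z) // N                 # 762599
alpha = (B * k - Z) // N                # 10435567
```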

Appendix C.2. Computing the Increasing Collatz Sequence for Pattern ududd

Is it possible to find a starting number of the Collatz sequence for which the partial subsequence increases for a given number, J, of steps, i.e., can we find a starting number x1 following the pattern ududd (note: U = 2 up moves and D = 3 down moves)? That is,
x2 = [3 [3 x1 + 1]/2 + 1]/4 = (9/8) x1 + 5/8 ∈ ℕ_odd ⊂ ℕ,
where ℕ_odd is the subset of odd natural numbers. In general, this pattern (up by 3x + 1, down by 2, again up by 3x + 1, and finally, down by 4) repeats J times as
x_{j+1} = (3²/8) x_j + 3¹/8 + 3⁰/4 = (9/8) x_j + 5/8
x_{j+1} = (9/8) x_j + 5/8 ∈ ℕ_odd; j = 1, …, J.
The block headers x j can be computed by solving the MILP minimization problem:
min x1  s.t.  9 x_j − 8 x_{j+1} = −5,  j ∈ J1.
However, the modular arithmetic part for solving the system of Diophantine equations needs some adjustment, as N = 1 and thus A mod N = 0 . This we consider in Appendix C.2.1.

Appendix C.2.1. Approach 1: Modified Modular Arithmetic

Consider the Diophantine system
α + 5 = B k
β + 5 = A k,
where A = 8^{J−1}, B = 9^{J−1}, and gcd(A, B) = 1 for all J ≥ 1. Note that these equations, (A11) and (A12), also follow from (24) and (25) with Z = 5 and N = 1 obtained from (21):
α = B k − 5, β = A k − 5.
As in Section 2.5, the smallest integer m is determined such that k = k0 + N m satisfies A k ≥ Z and leads to a consistent inner pattern structure. This time, k0 follows as k0 = k_min, i.e., occurring as the minimal non-negative solution:
k_min = k0 = ⌈5/A⌉ = ⌈5/8^{J−1}⌉.
For J = 1, one obtains
A = 8⁰ = 1, B = 9⁰ = 1, k_min = ⌈5/1⌉ = 5, β_min = A k_min − Z = 1 × 5 − 5 = 0, α = 1 × 5 − 5 = 0,
which cannot be used, as x1 = β_min = 0 is not odd. For k = k0 + 1 = 6, one gets x1 = 6 − 5 = 1, which leads to the Collatz cycle 1-2-4-1, i.e., useless for our purpose. For k = 10, one obtains x1 = 5, leading to 5-16-8-4-2-1, not consistent with the pattern ududd. k = 15 gives x1 = 10, leading to 5, which is also not good. The first k which is feasible is k = 16, leading to x1 = 11 and the sequence 11-34-17-52-26-13.
For J = 2, the solution is
A = 8¹ = 8, B = 9¹ = 9, k_min = ⌈5/8⌉ = 1, β_min = 8 × 1 − 5 = 3, α = 9 × 1 − 5 = 4 (valid),
and thus 8 k ≥ 5. Interestingly enough, the first feasible k is again k = 16, leading to x1 = 123 and the sequence 123-370-185-556-278-139.
Finally, for J ≥ 3, we note
A = 8^{J−1} ≥ 64, k_min = ⌈5/8^{J−1}⌉ = 1, β_min = 8^{J−1} − 5, α = 9^{J−1} − 5 (always valid),
and again k = 16 produces the first feasible pattern. The smallest β for A = 8^{J−1}, B = 9^{J−1} is given by
β_min = 0 if J = 1; 3 if J = 2; 8^{J−1} − 5 if J ≥ 3.
Table A1. Values of x1 = β_min for small J.
J | x1 = β_min
1 | 0
2 | 3
3 | 59
4 | 507
5 | 4091
These results are subject to checking the inner pattern consistency. Therefore,
x1 = A k − Z = 8^{J−1} k − 5
with
k = k0 + m, k0 = k_min = 5 if J = 1; 1 if J ≥ 2,
where m starts at m = 0 and is then increased by 1 if an inconsistency is detected. In the best case (inner consistency fulfilled for k0), it holds that
x1 = β_min.
It is somewhat unsatisfying that it is necessary to check and increase k to find a feasible solution ( k = 1 never seems to work, but k = 2 does work for J = 3 and J = 4 ). Therefore, in Appendix C.2.2, a different approach is developed.

Appendix C.2.2. Approach 2: A Special Approach for Pattern ududd

For the pattern ududd, as it is small, a different approach turns out to work well. As the last element in each block should be odd, one gets
3 (2 k_j + 1) + 1 = 2 (2 u_j + 1), 3 (2 u_j + 1) + 1 = 4 (2 k_{j+1} + 1), j = 1, …, J,
or in equivalent form
3 k_j + 1 = 2 u_j, 3 u_j = 4 k_{j+1}, j = 1, …, J.
Eliminating u j leads to
9 k_j + 3 = 8 k_{j+1}, j = 1, …, J,
and the recurrence relation
k_{j+1} = (9 k_j + 3)/8 = (9/8) k_j + 3/8, j = 1, …, J.
The recurrence k_{j+1} = (9/8) k_j + 3/8 has the homogeneous solution
k_h(j) = A (9/8)^{j−1}
with an appropriate constant A to be determined, and the particular solution
k_p(j) = −3.
Thus,
k(j) = (k1 + 3) (9/8)^{j−1} − 3.
For k_j to be integer for all j = 1, …, J + 1, k1 + 3 must cancel the denominators:
k1 + 3 = 8^J m (m ∈ ℕ₀).
The minimal solution occurs when m = 1:
k1 = 8^J − 3.
This minimality property can be verified as follows: For j = 1,
k1 = 8^J − 3
u1 = (3 (8^J − 3) + 1)/2 = (3 · 8^J − 8)/2 = 3 · 4 · 8^{J−1} − 4,
which is integer. For j > 1,
k(j) = 8^{J−j+1} 9^{j−1} − 3.
Each term remains integer because 8^{J−j+1} cancels the denominator.
Note that, in terms of uniqueness, any smaller k1 would fail to satisfy 8^J | (k1 + 3), making some k(j) non-integer. Based on the minimal k1,
k1 = 8^J − 3,
one obtains
x1 = 2 · 8^J − 5 = 2^{3J+1} − 5
and, for J = 1, 2, 3, the following results for the x_j:
J | block 1 | block 2 | block 3
1 | 11-34-17-52-26-13 | |
2 | 123-370-185-556-278-139 | 139-418-209-628-314-157 |
3 | 1019-3058-1529-4588-2294-1147 | 1147-3442-1721-5164-2582-1291 | 1291-3874-1937-5812-2906-1453
which show consistent patterns in each block. The results (A14) for J ≥ 3 in this subsection agree with the results (A13) for J + 1 and k = 2 in the previous Appendix C.2.1. For J ≤ 2, they agree for J directly but require k = 16 in the previous subsection.
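Both the closed form x1 = 2 · 8^J − 5 and the inner ududd consistency of each block can be confirmed with a short Python check (the function name is ours):

```python
def ududd_blocks(J):
    """Return the J consecutive ududd blocks starting at x1 = 2*8**J - 5,
    asserting the required parity before every u (odd) and d (even) move."""
    x = 2 * 8 ** J - 5
    blocks = []
    for _ in range(J):
        block = [x]
        for move in "ududd":
            assert x % 2 == (1 if move == "u" else 0)
            x = 3 * x + 1 if move == "u" else x // 2
            block.append(x)
        blocks.append(block)
    return blocks
```

For J = 2, this reproduces the blocks 123-370-185-556-278-139 and 139-418-209-628-314-157 listed above.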

Table 1. Results for J = 3 to J = 11 with J, x 1 , the number N of nodes explored, and the time (t in seconds) needed to solve the problem. These results also illustrate the scalability behavior of exact-arithmetic Branch and Bound: as J increases, both the number of explored nodes and the solution time grow rapidly, highlighting the computational limits of current implementations.
J | x1 | N | t [s]
3 | 3 | 9 | 0.1415
4 | 7 | 24 | 0.6479
5 | 15 | 59 | 2.5507
6 | 31 | 152 | 10.7068
7 | 63 | 394 | 34.7998
8 | 127 | 1017 | 114.9888
9 | 255 | 2492 | 380.8536
10 | 511 | 5983 | 1111.9415
11 | 1023 | 14,295 | 3342.1630
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Kallrath, J. Large-Number Optimization: Exact-Arithmetic Mathematical Programming with Integers and Fractions Beyond Any Bit Limits. Mathematics 2025, 13, 3190. https://doi.org/10.3390/math13193190

