On the Universal Encoding Optimality of Primes

Abstract: The factorial-additive optimality of primes, i.e., the fact that the sum of the prime factors of an integer is always minimal among its factorizations, implies that prime numbers solve an integer linear programming (ILP) encoding optimization problem. The summative optimality of primes follows from Goldbach's conjecture and is viewed as an upper efficiency limit for encoding any integer with the fewest possible additions. A consequence of the above is that primes optimally encode, multiplicatively and additively, all integers. Thus, the set P of primes is the unique irreducible subset of Z, in cardinality and values, that optimally encodes all numbers in Z in a factorial and summative sense. Based on these dual irreducibility/optimality properties of P, we conclude that primes are characterized by a universal "quantum type" encoding optimality that also extends to non-integers.


Introduction
Prime numbers were a subject of mathematical inquiry long before the time of Pythagoras (570-500 BCE), who first attributed to numbers in general, and their relationships, a unique, almost mystical power to express deeper truths about the universe and natural phenomena [1][2][3]. Although Pythagoras and his school did not systematically study primes, we refer to primes that can be expressed as a sum of two squares as "Pythagorean primes"; their squares are themselves sums of two squares, e.g., 5^2 = 3^2 + 4^2 and 17^2 = 8^2 + 15^2. The Greek philosopher Heraclitus (540-480 BCE) also accepted the existence of "powerful hidden harmonies" in nature and worked them into his philosophical theory of the cosmos [4]. The systematic study of primes starts with Euclid (fl. 300 BCE), who famously proved that primes "measure" all other numbers and that there is an infinitude of them [5,6]. Eratosthenes (276-194 BCE) invented a sieve as a practical way to find primes [7] by observing that composite numbers follow repeating patterns. Goldbach (1690-1764), Euler (1707-1783) and Riemann (1826-1866), among other mathematicians, have grappled with important questions related to prime numbers. One of those questions, Goldbach's conjecture, states that every even number ≥ 4 can be expressed as a sum of two primes. It remains unproven despite "twenty-six decades of effort by some of the best minds on the planet" ([7], p. 90), as one of those "conjectural theorems of which no proof is known, although empirical evidence makes their truth seem highly likely" [8].
The rise of digital communications in recent years, along with the concomitant need for enhanced privacy and secure authentication, has rekindled interest in prime numbers. Several encryption methods employ large primes in the construction of secure cryptographic keys, since the prime factorization of large numbers is a computationally hard process. Primes therefore help secure the communication of digital information over public networks such as the internet [9].
Life patterns based on primes also occur in nature. Examples are the cicadas' 13- and 17-year birth cycles, which give them an advantage over predators by making coincidences rarer: a predator emerging on a 5-year cycle will coincide with the two cicada broods only every 65 and 85 years, respectively, giving the cicadas an evolutionary advantage. Prime numbers can also help explain phyllotaxis spirals, the characteristic spiral arrangements of leaves that maximize a plant's exposure to sunlight.
Scientists continue to study the numerical, algebraic, geometric, distributional, asymptotic and other properties of prime numbers-proved or conjectured-and explore their implications on the nature of our universe.
In this paper, we show that the set P ∪ {1} has the smallest cardinality and values for optimally encoding all integers through product or sum operations. We refer to this unique dual irreducibility property of primes as a quantum type encoding optimality. Finally, we show how these encoding properties can be extended to non-integers.

ILP Formulation-Additive Optimality of Prime Factors
Euclid, in his Elements, first proved that any integer, larger than or equal to 2, that is not a prime can be expressed as a product of primes called prime factors ( [5] Book VII Prop. 30-32, pp. 218-219; Book IX Prop. 14, p. 265). This result is of such profound importance in mathematics that it has been called the "Fundamental Theorem of Arithmetic".
The uniqueness of prime factorization implies that an integer cannot be expressed factorially using a smaller number of prime factors. Primes can therefore be thought of as the "universal factorial quanta" of Z ≥2 , and, in that sense, P is irreducible in a quantum sense; i.e., no subset of Z with smaller cardinality or member values exists having the same universally optimal factorial encoding capability in Z.
Irreducibility is a fundamental property in science; for mathematics in particular, irreducibility arises in problems related to computability, incompleteness, decidability, algorithmic information theory and the limits of computation [10]. Generally, when all members of a set A can be expressed as functions of set B, and no set with smaller cardinality than B exists with the same property, then B is called irreducible. If B⊂A and B is irreducible in cardinality and values, i.e., if B contains the fewest and smallest numbers possible, capable of encoding every member of A, we refer to the subset B of A as optimal in a quantum encoding sense.
Prime factors are also optimal in a factorial-additive sense, as we show below.
Definition 1. For s ∈ Z ≥2 , the Prime Counting Function π(s) is equal to the number of primes less than or equal to s.
Notation. The symbols R, Z and Z≥2 are used in this paper to represent the set of real numbers, the integers, and the integers greater than or equal to 2, respectively. The nth prime number, in ascending order, is denoted p_n; P_n is the set of the first n primes, P the set of all primes, P̄ = P ∪ {1}, P_o is the set of odd primes, and P̄_o = P_o ∪ {1}. The integer part of x ∈ R, i.e., the largest integer less than or equal to x, is denoted ⌊x⌋, and the discrete delta function, taking the value 1 for x = 0 and 0 for x ≠ 0, is denoted δ(x). The natural logarithm of x is log(x).
Consider an odd s ∈ Z≥3 with π(s) = n for some n ∈ Z≥1; then s may be expressed by its prime factorization

s = ∏_{i=1}^{n} p_i^{a_i}, a_i ∈ Z≥0. (1)

By taking the natural logarithm on both sides of (1), and since s ≥ p_n > n, the above may be written as

log(s) = ∑_{i=1}^{n} a_i log(p_i), (2)

where the a_i are the prime-factor exponents of s. By applying (2) to each element of the vector log(s) = [log(3), log(5), . . . , log(s)]^T of logarithms of the odd numbers in [3, s], we obtain

log(s) = D_s · log(s), (3)

where D_s ∈ Z≥0^{(s−1)/2 × (s−1)/2} is the augmented log coefficient matrix of s, whose rows hold the prime-factor exponents of the corresponding odd numbers. The following example shows how (3) can be used to compactly express the prime factorizations of all odd numbers in [3, s].

Example 2. Consider s = 9. Using (3), the logarithm of each odd i ∈ [3, 9] can be expressed through prime factorization as a linear combination of the elements of log(s), as follows:

[log(3), log(5), log(7), log(9)]^T = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [2, 0, 0, 0]] · [log(3), log(5), log(7), log(9)]^T.

In the above equation, log(9) is expressed through the prime factorization of 9 = 3^2, or equivalently log(9) = 2·log(3), and not via the identity log(9) = 1·log(9), which would result in D_s = I. This is an important distinction, in terms of the additive optimality of prime factors, as we will see below.
We can rewrite (3) as

(D_s − I) · log(s) = 0, (4)

where I is the (s − 1)/2 × (s − 1)/2 identity matrix and 0 the (s − 1)/2 × 1 null vector. Equation (4) implies that the vector log(s) is in the null space of the matrix D_s − I. As we saw in the previous example, this does not give us immediate insight into the optimality of D_s, since D_s = I trivially satisfies (4). To establish the additive optimality of prime factors we need the following proposition.
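As a numerical sanity check, the augmented log coefficient matrix can be built directly from prime factorizations and verified against (3); the helper names below are ours, not the paper's.

```python
import math

def prime_factors(n):
    """Prime factorization of n as a dict {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def coefficient_matrix(s):
    """Augmented log coefficient matrix D_s for odd s: row k holds the
    prime-factor exponents of the k-th odd number in [3, s], placed in the
    columns of its (odd) prime factors; primes get a 1 on the diagonal."""
    odds = list(range(3, s + 1, 2))
    col = {v: i for i, v in enumerate(odds)}
    D = [[0] * len(odds) for _ in odds]
    for i, v in enumerate(odds):
        for p, a in prime_factors(v).items():
            D[i][col[p]] = a
    return D, odds
```

For s = 9 this reproduces the matrix of Example 2, and each row satisfies log(v) = ∑_j D[i][j]·log(odds[j]).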
Proposition 1. Let D_s, s satisfy (4). If D_s corresponds to the prime factorizations of the odd numbers in [3, s], then every element of the (s − 1)/2 × 1 vector D_s · s, where s = [3, 5, . . . , s]^T, attains its minimum value.
Proof. Since the minimality applies element-wise to the vector D_s · s, we may use (2) to show that the proposition holds for each vector element. Consider any non-prime factorization of a composite number s > 1 and let q be any composite factor. If s is odd, 2 cannot be one of its factors, and it follows that there are a prime p > 2 and an odd integer q > 2 such that s = p · q. Basic algebra shows that p + q < p · q = s, i.e., the sum p + q of two factors is strictly less than their product, if and only if 1/p + 1/q < 1, which in this case is true. Since primes are the smallest factors possible, the proposition is proved. For s even, the same rationale applies, except that in this case we have p ≥ 2 and q ≥ 2, so that p + q ≤ p · q = s. Therefore, the attained additive minimum may not be unique, except for the special case when s/2 is odd.
Remark 1. The minimality condition of Proposition 1 does not extend to the logarithms of composite factors, since log(p) + log(q) = log(p · q) = log(s).

Remark 2.
The reason for losing the strict inequality in the proof of the above proposition for s even, and thus the uniqueness of the minimum, is that the number 2 has the unique property 2·2 = 2 + 2, which means that any factorization having 2^n as a factor, with n ≥ 2, results in different factorizations with equal factor sums. For example, consider s = 2^3, which can be factored either as 2·4 or as 2·2·2, both with a factor sum of 6. Even numbers not divisible by 2^n, for n ≥ 2, are not subject to this relaxation: in that case s/2 is odd, since the multiplicity of 2 in the prime factorization of s is equal to one, and the conditions for the strict inequality in the proof of Proposition 1 hold.
The additive minimality of prime factors, established by Proposition 1, gives us additional insight into how efficiently primes encode integers in a factorial-additive sense.
Example 4. Suppose we are asked to encode s = 450, using multiplication only, in a way that minimizes the "additive cost" of the factors used to generate it.
We are told that each factor has a cost equal to its value and are asked to allocate our factor-cost budget in such a way that 450 is generated factorially at minimum cost. The unique solution to this encoding optimization problem is to generate 450 via its prime factorization 450 = 2^1 · 3^2 · 5^2, at a total additive cost of 2 + 3 + 3 + 5 + 5 = 18. The additive cost of every alternative non-trivial factorization is higher, ranging from 19 (e.g., 450 = 3·5·5·6) up to 227 (450 = 2·225).
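The claim is easy to check by exhaustive search; a minimal brute-force sketch (function names are ours):

```python
def factorizations(n, min_factor=2):
    """All non-decreasing lists of integer factors >= min_factor whose
    product is n, including the trivial factorization [n]."""
    result = []
    f = min_factor
    while f * f <= n:
        if n % f == 0:
            for rest in factorizations(n // f, f):
                result.append([f] + rest)
        f += 1
    result.append([n])  # the trivial factorization n = n
    return result

# additive cost of each non-trivial factorization of 450
costs = {tuple(f): sum(f) for f in factorizations(450) if len(f) > 1}
```

The minimum cost, 18, is attained only by the prime factorization (2, 3, 3, 5, 5); all other non-trivial factorizations cost between 19 and 227.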
In summary, the subset P̄ ⊂ Z uniquely encodes all members of Z factorially and has the lowest cardinality and the smallest possible member values, with each factorial encoding also optimal in an additive sense.
From (4) and Proposition 1, it follows that prime factorization is the solution to the ILP-type problem

min_{D_s} D_s · s (elementwise)
subject to: (D_s − I) · log(s) = 0, (5)

where D_s ∈ Z≥0^{(s−1)/2 × (s−1)/2}. Equation (5) describes a minimization problem with a linear objective function, linear constraints, and integer decision variables. Here, the decision variables are the elements of the augmented log coefficient matrix D_s.

Remark 3.
The last row of D s corresponds to the prime factorization of s, expressed in log form, as in (2).
Additional constraints (6) may be imposed to further reduce the size of the ILP's decision space. Through the ILP formulation described by (5) and (6), we can employ a rich array of computational optimization methods to solve large-scale prime factorization problems.

Remark 4.
The above formulation can be used to determine prime factors larger than a given number n. One way to accomplish this is to construct a number s that cannot have any prime factor ≤ n and then solve the corresponding prime factorization ILP. A candidate that meets this condition is

M_n = 2^m − n̄!, (7)

where n̄! is the product of all odd integers up to n and m is the smallest integer with 2^m > n̄!. If M_n, given by (7), had any prime factor ≤ n, it would follow that 2^m would have at least one of those as a prime factor; this is false, since the only prime factor of 2^m is 2, and M_n itself is odd.
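Since the displayed formula did not survive typesetting, the sketch below assumes the candidate M_n = 2^m − n̄!, with n̄! the product of all odd integers up to n and m the smallest exponent making the difference positive; this matches the divisibility argument in the remark.

```python
def odd_factorial(n):
    """Product of all odd integers up to n (written n-bar! in the text)."""
    prod = 1
    for k in range(3, n + 1, 2):
        prod *= k
    return prod

def candidate(n):
    """M_n = 2**m - odd_factorial(n). Every odd prime p <= n divides
    odd_factorial(n), so p | M_n would force p | 2**m; and M_n is odd,
    so 2 is excluded too. Hence all prime factors of M_n exceed n."""
    f = odd_factorial(n)
    m = f.bit_length()  # smallest m with 2**m > f
    return 2**m - f

def smallest_prime_factor(x):
    d = 2
    while d * d <= x:
        if x % d == 0:
            return d
        d += 1
    return x
```

For example, candidate(7) = 128 − 105 = 23, which is already prime and larger than 7.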
For much larger n, finding the prime factors requires more sophisticated search and optimization techniques.

Remark 5.
Instead of multiplying all odd numbers up to n, as in the previous example, we can multiply the number 3 with all integers not exceeding n, of the form 6·k ± 1 with k ≥ 1, since all primes > 3 may be expressed that way. This results in a smaller candidate compared to M n . If we know the first m primes and want to search for a prime larger than p m , we can opt for an even smaller product given by ∏ p i , where i = 2, 3, . . . , m. We explore additional search techniques in the next section, where we discuss partitions and the summative optimality of primes.

Remark 6.
Given that any number s ∈ Z≥2 may be written as s = ∏_{i=1}^{n} p_i^{a_i}, where the a_i ∈ Z≥0, i ∈ [1, n], are the factor exponents, and since, by definition, all p_i ≥ 2, it follows that ∑_i a_i p_i ≤ s; i.e., the vector s is an (elementwise) upper bound on the factor-cost vector D_s · s given by (5).

Remark 7.
The prime counting function π(·) may be expressed in algebraic form. By using such an expression, we can express Goldbach's Conjecture (GC) algebraically. In Appendix A, we derive algebraic expressions for π(·) and GC and discuss their properties.

Summative Optimality of Primes
Of the many remarkable properties prime numbers possess, perhaps the most intriguing is a "mirroring" property inherent in their distribution on the number line. To visualize this, consider an even number s ≥ 6, and write the odd numbers in [3, s/2], in ascending order, on the left, and those in [s/2, s − 3], in descending order, on the right. By construction, the sum of the two elements in any row is equal to s. In this otherwise unremarkable arrangement, at least one of the pairs consists of two prime numbers, suggesting that any even number ≥ 6 may be expressed as a sum of two odd primes.
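The arrangement is easy to scan programmatically; a minimal check (helper names ours):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_pairs(s):
    """All pairs (a, s - a) of odd primes with a <= s/2, for even s >= 6."""
    return [(a, s - a) for a in range(3, s // 2 + 1, 2)
            if is_prime(a) and is_prime(s - a)]
```

For instance, prime_pairs(16) yields [(3, 13), (5, 11)], and every even s tested up to a few thousand has at least one such pair.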
Considering that the density of primes π(s)/s decreases as s is increased, the above conjectured "mirroring" property becomes even more surprising: that for any even number s, there is always an odd prime in [s/2, s] that "aligns" itself with another odd prime in [3, s/2], in such a way as to partition s. We will explore the implications of this later in the paper.
Remarkably, after removing all repeating number patterns, such as the prime factors of s and their multiples, there is always an odd prime in [s/2, s − 3] that aligns itself with another odd prime in [3, s/2] to form a partition of s. This persistent pattern seems to be a fitting example of a Heraclitean "hidden universal harmony".
An interesting ramification of the above is that, given an even s > 6 with m prime factors, at least one of the π(s/2) − m numbers given by s − p_i, where p_i is any of the primes in [3, s/2] that are not factors of s, is prime. This facilitates finding large primes, as it dramatically reduces the search for prime candidates in [s/2, s − 3]. Since a prime in [s/2, s − 3] is always paired with a prime in [3, s/2] that is not a prime factor of s, if we want to find a prime larger than s/2, we need only perform at most π(s/2) − m primality tests. When we have no a priori knowledge of the primes in [1, s/2], we may still use the algebraic properties of pairs in a partition to make the search for P_o × P_o pairs more efficient. We discuss this in Appendix B. Note that a consequence of the prime density decreasing with s is that, even for large values of s, primes are not evenly distributed in [1, s/2] and [s/2, s]; this is discussed in Appendix C.
This prime "mirroring" (or "pairing") property is a result of the Goldbach conjecture (GC), named after the German mathematician Christian Goldbach (1690-1764), who mentioned it in a 1742 letter to the eminent Swiss mathematician, physicist, astronomer, geographer, logician and engineer Leonhard Euler, to which Euler responded: "That every even integer is a sum of two primes, I regard as a completely certain theorem, although I cannot prove it." [11]. While Goldbach considered 1 to be prime, the statement "every even number > 2 can be expressed as the sum of two primes" is now the proper formulation of the GC, commensurate with our current definition of primes.
The GC is easily seen to imply that "every odd number > 5 is the sum of three primes", since every odd number > 5 can be expressed as sum of an odd prime and some even number, which in turn is a sum of two primes. This is often referred to as the "weak" or "odd" Goldbach Conjecture (oGC).
In the more than two and a half centuries since the GC was formulated, and despite considerable efforts by mathematicians towards a proof, the conjecture remains open. The oGC is considered "relatively easier" and has been proved for numbers larger than 10^43000 [12]. The GC has been computationally verified for numbers up to 4·10^18 [13].
Rather than considering the GC or oGC as binary questions, it is helpful to think of them, respectively, as encoding efficiency limits, in the sense that, in general, an even number cannot be written as a sum of fewer than two odd primes and an odd number cannot be written as a sum of fewer than three primes. In that sense, significant progress has been made towards attaining these encoding limits. Vinogradov (1937) showed that every sufficiently large odd integer can be written as the sum of three primes, and so every sufficiently large integer is the sum of at most four primes. One consequence of Vinogradov's work is that the GC holds for almost all even integers. In 1966, Chen proved that every sufficiently large even integer is the sum of a prime and a number with no more than two prime factors. The GC has also been linked to the Riemann Hypothesis (RH), in the sense that if we accept the RH as true, then every odd integer > 5 is a sum of three primes, which implies that every even integer > 8 is a sum of four primes. The interested reader can find more information in [14].
Ultimately, this might be the first instance where an important proof relies on showing a mathematical proposition to be true for all numbers above some lower bound L, while computationally confirming its validity for all numbers up to L.
In this paper, our purpose is to explore the universal optimality property of primes at its limit; we do so by assuming that the optimal encoding efficiency implied by the GC is true.
Note that, even at suboptimal encoding efficiencies, i.e., assuming that even (odd) integers can be expressed as sums of more than two (three) primes, the set of primes P remains the subset of Z with the fewest and smallest members that can most efficiently encode all integers. For example, consider the summative encoding efficiency, suboptimal relative to the GC, resulting from Bertrand's postulate ([8], p. 455), which asserts that there is always a prime p ∈ [n, 2n] for n ≥ 1. Therefore, there is always a prime p ≥ s/2 in the interval [2, s] for s ≥ 3; the postulate implies that this prime spans at least half of s. By successively applying this process, we establish the existence of a sequence of decreasing primes p̄_i with s as its sum. For example, let s = 100 and let p̄_1 = 61 be the first prime in the sequence, chosen in [s/2, s]. Applying the postulate to the remainder r̄_1 = 100 − 61 = 39 and selecting p̄_2 = 23 ∈ [⌊39/2⌋, 39] as the second term, the new remainder becomes r̄_2 = 100 − 61 − 23 = 16. Since the new remainder is small (< 20), we can easily express it as a sum of two primes, e.g., 13 + 3. Generalizing, we can show that any number s ≥ 3 can be written as a sum of at most ⌊log(s)/log(2)⌋ + 1 primes. Given the sparsity of primes and, hence, the low value of the ratio n/p_n as n increases, i.e., the ratio of the cardinality n of P_n to its largest member p_n, this demonstrates the remarkable summative encoding capability of primes.
We therefore accept as true that every even integer > 4 is a sum of two odd primes. Equivalently, we accept as true that every integer > 0 may be expressed as a sum of up to three numbers in P̄_o. From the above discussion, it follows that by accepting the validity of the GC, we establish the significance of P̄_o and its subsets as the most efficient, in an encoding sense, "building blocks" of all integers, including the primes themselves.
As with the factorial case, the optimal summative encoding implied by the GC, can be represented as the solution of an optimization problem. In this case however, the formulation is nonlinear, i.e., gives rise to a nonlinear programming (NLP) formulation. To see this, consider how a set of even numbers in [6, 2m], for some integer m ≥ 3, can be generated by using one addition operation over the smallest possible subset of odds within [3, 2m − 3]. The solution is the summative optimal encoding implied by the GC, i.e., that every even number ≥ 6, is the sum of two primes.
Note that additional constraints may be introduced to refine this solution when multiple summative encodings of an even number e exist. One approach is to select the encoding with the smallest (or largest) deviation from the midpoint e/2, as discussed in Appendix B. For example, in the case of e = 16, this would result in 13 + 3 = (8 + 5) + (8 − 5) being less (or more) preferable than 11 + 5 = (8 + 3) + (8 − 3), in the sense that in the first encoding the distance from the midpoint is 5, while in the second it is 3.
Let w_e be the vector of even numbers in [6, 2m], for some integer m ≥ 3, and w_o the vector of odd numbers in [3, 2m − 3]. The NLP may then be formulated as follows:

max_A ∑_j δ(∑_i a_ij)
subject to: A · w_o = w_e and ∑_j a_ij = 2 ∀ i, (9)

where A is the square decision-variable matrix, with elements a_ij ∈ {0, 1, 2}, specifying the elements of w_o to be added in order to optimally generate the output vector w_e. From the GC, we know that the prime elements of w_o are necessary and sufficient for an optimal NLP solution to exist, i.e., to satisfy the objective function and the constraints in (9).
Ongoing research is focused on alternate formulations, e.g., one based on an objective function that minimizes the rank of the decision matrix A.

Quantum Encoding Optimality
As we showed in Section 2, prime factorizations are optimal in a factorial-additive sense, since for any integer the sum of its prime factors is minimal. Prime factors are irreducible in value, since there can be no factors smaller than prime factors. Thus, P is an irreducible (in cardinality) subset of Z that can factorially encode all integers > 1 using factors that are also irreducible (in value) and have a minimal sum (factorial-additive optimality). From Section 3, it follows that the set P̄_o has an additional optimality property: it is the smallest (in cardinality) subset of Z whose members can summatively optimally encode any integer in Z, i.e., with no more than two addition operations.
Combined, the factorial-additive and summative optimality of P̄ endow its members with a unique "quantum-type encoding optimality". The corollary below summarizes this result.

Corollary 1. P̄ is optimal in a quantum encoding sense for all numbers in Z.
Proof. In Section 2, we showed that prime factorization is the optimal quantum factorial-additive encoding for any integer; that is, a factorial encoding that is also factorial-additively optimal uses the smallest possible factors, which are members of an irreducible subset P̄ of Z, i.e., one with the smallest possible cardinality. In Section 3, we discussed the summative optimality of primes and explained how the GC expresses a summative encoding efficiency limit, with any integer encoded as a sum of up to three members of P̄. Thus, P̄ is the unique irreducible subset (minimal cardinality) of Z, having members with irreducible (minimal) values, capable of optimally encoding any number in Z in a factorial-additive and summative sense.
Note that, for a number s with multiple summative prime decompositions, these may be ordered according to the product of the summands; e.g., for s = 14 we have 7 + 7 and 3 + 11, with 7 × 7 = 49 and 3 × 11 = 33. We call each such product the prime summand product (PSP). Every s has summative decompositions where the PSP attains its minimum/maximum values. This is analogous to the factorial-additive minimum discussed in Section 2. See Appendix B for a discussion of the extrema of the PSP.
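Since a pair at distance f from the midpoint s/2 multiplies to (s/2)² − f², the PSP shrinks as the distance grows; a quick check (names ours):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def psp_by_distance(s):
    """(distance from s/2, prime summand product) for each prime pair of
    even s, listed with the distance decreasing."""
    return [(s // 2 - p, p * (s - p)) for p in range(3, s // 2 + 1, 2)
            if is_prime(p) and is_prime(s - p)]
```

For s = 14 this gives [(4, 33), (0, 49)]: the pair farthest from the midpoint attains the minimum PSP, the pair nearest to it the maximum.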
In natural systems, redundancy is usually an evolutionary "insurance policy" to ensure survival, that is, continued performance under high uncertainty or risk. Fault tolerance in AI systems is likewise a direct result of robust design. How much do the summative encoding capabilities of P̄ degrade if some of its members are eliminated? The answer may provide insights that lead to improvements in the design of robust, fault-tolerant systems.

Conclusions
The factorial-additive and summative optimality properties of primes, combined with their dual irreducibility ("quantumness") in cardinality and value, provide unique insights into their ability to efficiently encode the seemingly innumerable patterns of quantifiable variability within our universe. This universal capability is all the more striking considering how low the density of primes π(x)/x is: for x ≈ 10^3 the density is approximately 17%; for x ≈ 10^13 it decreases to about 3%; and for x ≈ 10^27 it drops below 2%. Thus, over 98% of the integers in [1, 10^27] can be generated optimally, in a factorial and summative sense, by using fewer than 2% of them! Prime numbers seem to be nature's optimal encoding quanta of choice.
The set P_o may also be used to encode irrational and transcendental numbers in a summative sense. For example, consider π = 3.14159 . . . Using the Leibniz formula for π, written as an Euler product, π may be expressed as a product of ratios of odd primes divided by their nearest multiple of 4:

π/4 = (3/4) · (5/4) · (7/8) · (11/12) · (13/12) · (17/16) · · ·,

where each denominator is the multiple of 4 nearest to the corresponding odd prime. Since P_o can summatively optimally encode every integer, it follows that it can also encode any product, ratio, sum or difference of integers.
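A partial product over the first odd primes illustrates the convergence (the sieve helper is ours):

```python
import math

def odd_primes_up_to(n):
    """Simple sieve of Eratosthenes, returning the odd primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(3, n + 1) if sieve[i]]

def pi_from_euler_product(limit):
    """4 * (3/4)(5/4)(7/8)(11/12)...: each odd prime up to `limit` divided
    by its nearest multiple of 4 (p - 1 when p = 1 mod 4, else p + 1)."""
    prod = 1.0
    for p in odd_primes_up_to(limit):
        nearest4 = p - 1 if p % 4 == 1 else p + 1
        prod *= p / nearest4
    return 4 * prod
```

The convergence is slow, mirroring the Leibniz series, but with primes up to 10^5 the product already lands close to π.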
Additional research is needed to more fully explore the scope of the universal quantum-encoding optimality of prime numbers, which enables this sparse subset of Z to quantum-optimally encode any measurable quantity in the universe to any desired degree of accuracy.
Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.

Data Availability Statement: Not applicable.
Acknowledgments: The author expresses his gratitude to his doctoral advisor, Stelios C. A. Thomopoulos, Head of the Integrated Systems Laboratory and past Director of the Institute of Informatics & Telecommunications at the National Center for Scientific Research "Demokritos", Greece, for his continued and invaluable mentorship. The author thanks Igal Sason, Professor at the Technion-Israel Institute of Technology, for his helpful comments at an early stage of this work.

Conflicts of Interest: The author declares no conflict of interest.

Appendix A. Two Closed-Form Expressions for the Prime Counting Function and an Algebraic Formulation of the Goldbach Conjecture
There exist several formulas for the prime counting function π(·), some containing a finite or infinite number of terms ([8], p. 593), [11,15]. Closed-form formulas, i.e., expressions with a finite number of terms, typically employ the modulo function mod(·) and/or the factorial function (·)!. Below, we derive two expressions for π(·) that use the integer-part operation, which may be approximated by smooth functions. The quantity T_s defined in (A1) has the properties of a "prime blocking filter": in y = T_s · s, composite inputs pass through (T_s = 1 for composite s) while prime-valued inputs are blocked (T_s = 0 for prime s). T_s also expresses a closed-form primality test for s, so we can apply it iteratively and obtain a closed-form expression for π(s).
Proposition A2. Consider any integer s ∈ Z>3. The prime counting function π(s) is given by

π(s) = ∑_{j=2}^{s} ⌊ 1/(1 + T_j) ⌋. (A2)

Proof. Equation (A2) is an iterative application of the primality test given by (A1) for each j ∈ [2, s] and factors i ∈ [2, ⌊j/2⌋]. From Proposition A1, the summand term ⌊1/(1 + T_j)⌋ takes the value 1 if j is prime (T_j = 0), or the value 0 if j is not prime (T_j = 1), and the proposition is proved.
Another closed-form formulation for π(s) follows from the proposition below.
Proposition A3. For any s ∈ Z>3,

π(s) = ∑_{j=2}^{s} [ 1 − δ( ∏_{i=2}^{⌊j/2⌋} ( j/i − ⌊j/i⌋ ) ) ]. (A3)

Proof. Since the term j/i − ⌊j/i⌋ is 0 if and only if i is a factor of j, the product term in (A3) is zero if and only if j is not prime. In that case, δ(0) = 1 and the value of the summand term is 0. Conversely, if j is prime, the product term is nonzero and the summand term is 1, thus counting j in the running sum.
The primality test associated with (A3) is

T̂_s = 1 − δ( ∏_{i=2}^{⌊s/2⌋} ( s/i − ⌊s/i⌋ ) ), (A4)

where T̂_s = 0 if s is divisible by some i ∈ [2, ⌊s/2⌋], i.e., if s is a composite number, and T̂_s = 1 if s is prime. Unlike T_s, T̂_s acts like a binary filtering operator that blocks composite inputs while allowing prime inputs to pass through without distortion, i.e., it performs as a lossless pass-through prime filter with a gain of 1.
Expressions (A2) and (A3) can be coded to programmatically generate π(s) for any s > 3. A difference between the expressions for π(s) in ([8], p. 593), [11,15] and (A2), (A3) is that we can use the latter to detect primes by obtaining analytically smooth approximations of π(s), theoretically of arbitrary numerical accuracy, through smooth approximations of ⌊x⌋ and δ(x). For the integer-part operation ⌊·⌋, we may use Fourier series approximations of the shifted Heaviside step function, while the discrete delta function δ(·) may be approximated via a zero-mean normal density. While these approximations inevitably introduce numerical errors, in certain special cases we may determine upper bounds for them.
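A direct transcription is sketched below, with the vanishing of the product term tested exactly in integer arithmetic (a long product of small fractional parts would underflow in floating point):

```python
def t_hat(s):
    """Primality indicator in the spirit of (A4): 1 if s is prime, 0 if
    composite. The product term over i in [2, floor(s/2)] vanishes exactly
    when some i divides s, which we test with the remainder operator."""
    product_vanishes = any(s % i == 0 for i in range(2, s // 2 + 1))
    return 0 if product_vanishes else 1

def pi_count(s):
    """Prime counting function: sum the indicator over j in [2, s]."""
    return sum(t_hat(j) for j in range(2, s + 1))
```

For example, pi_count(100) returns 25, the number of primes not exceeding 100.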
Given that primes occur at the "spikes" of the otherwise flat-line (zero value) graph of the first derivative of π(s), exploring how well we can "detect" the presence of primes as "bumps" in the derivative of a smooth approximation of π(s) might also be a viable path in computational prime number detection.
We may also use (A4) to express the GC in algebraic form for any even s ≥ 6. Writing

G(x) = 1 − T̂_x, (A5)

so that G(x) is 0 if x is prime and 1 otherwise, Goldbach's conjecture, stating that "there always exist two odd primes adding up to s", is algebraically equivalent to D = 0, where

D = ∏_{q odd, 3 ≤ q ≤ s/2} [ G(q) + G(s − q) ]. (A6)
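With G(x) = 0 for prime x and 1 otherwise, a sketch of the algebraic check, under the assumption that D is the product of G(q) + G(s − q) over the odd q ≤ s/2:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def G(x):
    """0 if x is prime, 1 otherwise."""
    return 0 if is_prime(x) else 1

def D(s):
    """Vanishes iff some odd q <= s/2 has both q and s - q prime, i.e.,
    iff the even number s satisfies the Goldbach condition."""
    prod = 1
    for q in range(3, s // 2 + 1, 2):
        prod *= G(q) + G(s - q)
    return prod
```

Running D over every even s in [6, 1000] returns 0 throughout, as the conjecture predicts.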

Appendix B. Efficient Searches for Prime-Prime (P-P) Type Pairs within a Partition
Consider a partition similar to the one discussed in Section 3, for an even s ≥ 6 and no a priori knowledge of any primes in [1, s − 1]. Since all primes > 3 are of the form 6k ± 1, we can formulate our P-P pair search using modulo algebra in compact notation, as in (A7), where k ≥ 1 is a generic integer constant. From (A7) it follows that q_L and p_R are of the forms (A8) and (A9), where f ∈ [0, (s/2) − 1] is the partition-generating offset. Instead of using (A8) and (A9) to generate the candidate P-P pairs, we use (A10) and (A11), based on the admissible values of the offset f. We proceed on a case-by-case basis. First, consider the case s = 6k, q_L = 6k + 1, which implies s/2 = 3k. From the above expressions and (A10) we obtain, for λ ≥ 0, the admissible values of the offset f in this case. If s/2 is even, given that q_L ∈ P_o and is therefore odd, it follows from (A10) that f is odd, so that, from (A13), we conclude that λ is even. Proceeding in a similar manner, we consider the case s = 6k, q_L = 6k − 1, s/2 even, and conclude that, in that case too, λ is even. Table A1 summarizes the admissible offset values in each case. By using (A10) and (A11) together with Table A1, the search for pairs (q_L, p_R) with q_L ∈ P_o and p_R ∈ P_o becomes computationally more efficient, even in the absence of any information about primes in [1, s]. If such information is available, we may use it to eliminate additional non-P-P pairs. From (A10) and (A11), after multiplication, we have that q_L · p_R = (s/2)² − f², which implies that for any s, the product of the summands attains its minimum when f = f_max, i.e., when the distance of q_L, p_R from the midpoint s/2 is maximized. Therefore, the prime summand product (PSP) is minimized (maximized) for the P-P pair with the largest (smallest) distance from the midpoint.

Appendix C. On the Relative Density of Primes between [1, x] and [x, 2x]
In Section 3 we established the summative optimality of primes and discussed how this property manifests itself in the partitioning of an even number s. We mentioned how this powerful property is amplified if we consider that the density of primes π(x)/x decreases as x increases. For larger values of x, we might expect to find fewer primes in the second half-interval [x, 2x] than in the first, [1, x], and so we examine the ratio R(x) of the primes in [x, 2x] to those in [1, 2x], written as R(x) = 1 − π(x)/π(2x). Figure A1 shows that, as x → 3000, R(x) trends towards 45%, i.e., there is a 10-point (45-55%) split in the prime density between the two half-intervals. Asymptotically, π(x) ~ x/log(x) ([8], p. 463), so that π(x)/π(2x) ~ log(2x)/(2 log(x)) → 1/2, implying that R(x) converges to 50% as x → ∞.
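The ratio is easy to tabulate with a sieve (helper names ours):

```python
def pi_sieve(n):
    """pi(n) via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return sum(sieve)

def R(x):
    """Share of the primes below 2x that lie in the second half-interval [x, 2x]."""
    return 1 - pi_sieve(x) / pi_sieve(2 * x)
```

With π(3000) = 430 and π(6000) = 783, R(3000) ≈ 0.45, matching the split visible in Figure A1.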

Figure A1. Graph of R(x) = 1 − π(x)/π(2x).

References
1. Aristotle. Metaphysics, Book 1, sec. 985b-986a; Perseus Digital Library, Tufts University, Massachusetts, USA. Translated into English by Hugh Tredennick: "they (ref. to the Pythagoreans) assumed the elements of numbers to be the elements of everything, and the whole universe to be a proportion ('Harmony') or number".