A Review of the Asymmetric Numeral System and Its Applications to Digital Images

The Asymmetric Numeral System (ANS) is a new entropy compression method that the industry has highly valued in recent years. ANS is valued precisely because it captures the benefits of both Huffman Coding and Arithmetic Coding. Surprisingly, compared with Huffman and Arithmetic coding, systematic descriptions of ANS are relatively rare. In 2017, JPEG proposed a new image compression standard, JPEG XL, which uses ANS as its entropy compression method. This fact implies that the ANS technique is mature and will play a kernel role in compressing digital images. However, because the realization of ANS involves combinatorial optimization and the process is not unique, only a few members of the academic compression community and the domestic industry have noticed the progress of this powerful entropy compression approach. Therefore, we believe a thorough overview of ANS is beneficial, and this idea motivates the first part of this work. In addition to providing compact representations, ANS has another prominent feature: like its Arithmetic Coding counterpart, ANS exhibits chaotic characteristics. The chaotic behavior of ANS is reflected in two aspects. First, a tiny change in the original input causes a large change in the compressed output; moreover, the reverse also holds. Second, compressing an image with ANS produces two intertwined outcomes: a positive integer (the state) and a bitstream segment. Correct ANS decompression is possible only when both are precisely obtained. Combining these two characteristics helps process digital images, e.g., art collection images and medical images, to achieve compression and encryption simultaneously. In the second part of this work, we explore the characteristics of ANS in depth and develop its applications specific to the joint compression and encryption of digital images.


Introduction
In this review paper, we present the operational details and possible applications of a newly developed lossless compression algorithm, the Asymmetric Numeral System (ANS). ANS is one of the most recently proposed entropy coding methods. Fast execution speed and compression performance close to the theoretical limit are its prominent features; therefore, it has been widely adopted by industry. Jarek Duda first proposed ANS in 2007 [1][2][3], and it was adopted and implemented by Facebook in 2015 as Zstandard [4], which is open-source and used in various fields such as the Linux kernel, Hadoop, MySQL, and FreeBSD. Apple also released its ANS implementation, LZFSE [5], in 2015 and uses it at the bottom layer of iOS and macOS. Google launched its lossless compression standard pik [6] in 2019, whose entropy coding part also uses ANS. Microsoft applied for ANS-related patents [7] in 2019. In addition to the industry giants mentioned above, the JPEG standard committee began drafting the new compression standard JPEG XL [8] in 2017, in which ANS also plays a significant role in entropy coding. In short, in the past five years, ANS has been widely accepted and adopted by the IT giants, but in the academic compression community and the nonexpert IT industry, awareness and adoption of ANS for multimedia compression are still in their infancy.
Its lossless compression feature makes ANS especially suitable for distortion-free compression applications, such as medical and digital art collection images. The prospective property of ANS comes from its chaotic characteristics: if the original input is changed a little, its compressed output changes significantly. Similarly, if we slightly change the compressed representation, the reconstructed version also changes substantially after decompression. This strong sensitivity of a function's output to small input changes is one of the preferred features in cryptography, where it is called the avalanche effect [9]. Recall that, in ANS, encoding an input symbol produces two outputs: a positive integer state and a segment of a bit sequence (we call this the segmentation feature of ANS). As mentioned above, if the input changes a little, both the integer state and the bitstream segment of the output change significantly. Conversely, if we slightly modify the integer state or the bitstream segment of the ANS output, the reconstruction also changes significantly after decompression. The avalanche feature is well suited to providing a compact representation of digital art collection images. A digital art image is now represented by a positive integer state and a bitstream sequence. Art collectors can store the state separately and open it to the public as evidence for claiming ownership of the artwork, while keeping the bitstream sequence private as the verifier if a dispute occurs. Because of these avalanche characteristics, we think there is an excellent opportunity to combine ANS with the recently popular Non-Fungible Token (NFT) [10] to make the intellectual property rights (IPRs) of an artwork much more secure.
With ANS's segmentation feature, we can assign different degrees of protection to various portions of an artwork according to their artistic values. For example, the portrait in the middle of the Mona Lisa certainly has higher artistic value than its corners or other flat regions. An artwork publisher who intends to sell his digital artworks to more than one collector can divide his art collection into different pieces and price them according to their corresponding values. Now, combining all the specific features of ANS, the publisher can generate the state and the bit sequence for each partition. He can disclose the state information to potential customers as a marketing representative of this partition in NFT applications. Moreover, the publisher can send the bit sequence of the same segmented area to the actual buyer as a voucher certifying ownership. We will justify the above postulation through a concrete experiment, with the aid of table-ANS [11], at the end of this paper.
The contributions of this work include:
1. We present an in-depth and systematic discussion of various ANS-related technologies to provide a clear picture of this new lossless compression tool;
2. We address several selected applications of ANS in response to the survey nature of this work;
3. We explore the chaotic property of ANS and apply it to jointly compress and encrypt digital images, which is the desired mechanism for most digital image generators;
4. We present a detailed performance comparison of various lossless compression algorithms in terms of compression ratio and execution speed.
In addition, as application examples, we will explore the feasibility of using ANS to protect IPRs of art collection images and check the integrity of medical images.

Basic Concepts of Asymmetric Numeral Systems
An ANS coder encodes an input into a non-negative integer called the state. Mathematically, the ANS encoding process can be written as C(s, x) = x′, where x is the current state, s is the input symbol, and x′ is the next state. That is, using the language of a Finite State Machine, ANS encoding can be realized as a transition from the current state to the next state, while the ANS decoding process, D(x′) = (s, x), plays the reverse role of the encoding process (cf. Figure 1).

Therefore, as shown in Figure 1, we can regard the ANS encoding and decoding processes as state transitions on a Finite State Machine: each node denotes a legal state (labeled with its integer state value), and each edge transits from one node to another according to the input symbol (e.g., 'a').

Huffman Coding, Arithmetic Coding, and the Asymmetric Numeral Systems
Huffman Coding [12] and Arithmetic Coding [13] are the most well-known and adopted algorithms among the entropy compression methods. As described, ANS is the newest entropy coder that the industry has highly valued in recent years. ANS is valued by the industry precisely because it captures the benefits of both Huffman Coding and Arithmetic Coding [2]. Huffman Coding is known for its fast encoding and decoding but has limitations in compression performance (at least one bit is required to represent a symbol). On the contrary, Arithmetic Coding is characterized by a high compression ratio (the degree of compression can be close to the theoretical optimal value) but has limitations in encoding and decoding speed.
Generally speaking, the slow execution speed of Arithmetic Coding comes from its involvement with floating-point calculations, which complicates the practical realization and slows down the entire compression and decompression process. This shortcoming of arithmetic codes can be understood as follows. Theoretically, the amount of self-information contained in a symbol s with probability p_s is log2(1/p_s) bits. Similarly, in conventional arithmetic coding, the amounts of self-information for two consecutive coding stages x and x′ are −log2(p_x) and −log2(p_x′) bits, respectively. After the transition from stage x to stage x′ by encoding the new symbol s, ideally, we have p_x′ = p_x · p_s. Therefore, in arithmetic codes, the probability range after encoding the incoming symbol shrinks from the previous range by a factor of the probability of the symbol s, which is less than 1. This explains why floating-point numbers are used in arithmetic coding's implementation. To overcome this shortcoming, as one of the anonymous reviewers mentioned, modern Arithmetic Coding implementations use renormalization, which helps avoid floating-point operations. The first such fully integer multi-symbol implementation of Arithmetic Coding was proposed in 1987 in [14]. Nevertheless, the implementation in [14] needs multiplications and divisions; therefore, several look-up-table-based adaptive binary arithmetic coding implementations were proposed, which many video and image compression standards have adopted. Moreover, there is more advanced research related to adaptive range coding (Arithmetic Coding with fast renormalization), for example [15], and multiplication- and division-free multi-symbol Arithmetic Coding [16]. Different from the above speed-up approaches for Arithmetic Coding, ANS instead targets a positive integer state value to speed up processing.
To achieve this goal, instead of shrinking the new state variable's range, Jarek Duda [1] suggested dividing the original state variable's range by the symbol's probability to expand it into integer values, that is, x′ ≈ x / p_s. Therefore, if s ∈ {0, 1}, each state transition doubles the original state range, while if s ∈ {0, 1, 2, . . . , 9}, each state transition enlarges the new state's range tenfold. This kind of assignment, in some sense, makes the behavior of ANS similar to that of conventional weighted number systems, such as the binary and decimal number systems.

Types of the Asymmetric Numeral Systems
According to the distributions of the source symbols and the methods of realization, there are three variants of ANS [1][2][3][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34] (listed in the chronological order of their publication dates): the uniform asymmetric binary system (uABS), the range ANS (rANS), and the table ANS (tANS). We will present the definitions and operating processes of the various ANSs in the rest of this section in a "learning by examples" and "step-by-step" way, explaining the different versions of ANS encoding and decoding procedures in detail through concrete examples. Before going into details, the characteristics of the different types of ANS can be summarized as follows: uABS processes only binary inputs (0 and 1); rANS handles sources with more than two possible symbols; tANS tabularizes the ANS encoding and decoding processes.

The Uniform Asymmetric Binary System (uABS)
uABS is the most basic type; the input it processes has only two possible cases: 0 or 1. Expressed as a mathematical formula, the input set A is A = {0, 1} = {s_0, s_1}, with probability distribution p(s_0) = p_0 = p, p(s_1) = p_1 = 1 − p, and p_0 + p_1 = 1. In uABS, the input is a finite bit sequence consisting of 0s and 1s, such as 010011, and the output is a natural number (i.e., a non-negative integer). For simplicity, we use x to denote the state variable of a node. Therefore, in the encoding process, as mentioned above, state transitions are performed as Enc(input bit, current state) → (next state), or symbolically, C(s, x) = x′. We also use state transitions to realize the decoding process: D(x′) = (s, x).
(a) uABS Constructions for Uniformly Distributed Binary Sources

As described in Section 2.2, the function of a uABS (or an ANS in general) encoder can be represented as

C(s, x) = x′ ≈ x / p_s.    (1)

This arrangement shows that the smaller the probability of the encoded symbol, the larger the new state number (or state-variable range) after the state transition. This implies that if the probability of the current encoding symbol is smaller, then we need more bits to represent its corresponding uABS output.
To give readers a clear picture of the process of ANS encoding, let us examine the following simple example first.
Example 1. Assume s ∈ {0, 1} and p_0 = p_1 = 0.5. According to Equation (1), the best encoding function for 0 or 1 would be C(0, x) = C(1, x) = x / p_0 = 2x. In fact, taking the polarity of the input symbols into account, the encoding function becomes C(s, x) = x′ = 2x + s, and the decoding function is D(x′) = (s, x) = (x′ mod 2, ⌊x′/2⌋). Now, for the input sequence b_1 b_2 b_3 b_4 b_5 = 01111 and the initial state x_0 = 1, the encoding process is conducted in order as follows: x_1 = C(0, 1) = 2, x_2 = C(1, 2) = 5, x_3 = C(1, 5) = 11, x_4 = C(1, 11) = 23, and x_5 = C(1, 23) = 47. That is, for the input b_1 b_2 b_3 b_4 b_5 = 01111, the corresponding uABS output is the positive integer 47, which is also the value (or range) of the state variable x_5.
Similarly, the relevant decoding process is conducted in order as follows: D(47) = (1, 23), D(23) = (1, 11), D(11) = (1, 5), D(5) = (1, 2), and D(2) = (0, 1). Reading the decoded symbols in reverse order, for the input 47, the uABS output is b_1 b_2 b_3 b_4 b_5 = 01111, which is the same as the original input.
For speeding up the whole coding process, table lookup techniques are often used in entropy coding areas. This convention also applies to ANS. In the following, we will use a so-called coding table to illustrate the encoding and decoding processes of uABS with uniformly distributed inputs.
Example 2. Assume s ∈ {0, 1} and p_0 = p_1 = 0.5. Let us consider the following coding table, where the table occupancy of 0 and 1 is the same since they have the same probability distribution.
First, let us take the red 3 in the bottom row of Table 1 as an example to explain the encoding perspective. Assume the red 3 is the current state; then, from the index of the row it belongs to, the symbol to be encoded is s = 1, and the corresponding column denotes the encoded state x′ = 7. According to our previous discussions, mathematically, we have the uABS expression C(s, x) = x′ ⇒ C(1, 3) = 7. Second, let us continue to use the red 3 as an example from the decoding perspective. Now, the red 3 represents the decoded state x, its corresponding row index denotes the decoded symbol s = 1, and the corresponding column shows that the to-be-decoded state is x′ = 7. Mathematically, the uABS expression becomes D(x′) = (s, x) ⇒ D(7) = (1, 3). In this way, as long as we know the coding table, we can describe the encoding and decoding processes completely and efficiently. However, the question is: 'How is the coding table constructed?' We will answer this question later. From Example 2, we can observe an interesting phenomenon: when the symbol to be encoded is 0, the generated next state x′ is an even number, and when the symbol is 1, the next state x′ is odd. The reason comes from the encoding function C(s, x) = 2x + s. Therefore, depending on the polarity of s, we can divide the coding states into two categories: even-numbered and odd-numbered.
This observation reveals that there is an allotting mechanism between a given symbol and its possible mapping states. This mapping mechanism plays an essential role in building efficient and effective realizations of ANS and is called the symbol spread function (SSF) [1]. Simply put, the SSF addresses the mapping relation from states to symbols. Here, we use the notation s to represent the symbol spread function. Mathematically, we have the expression s : N → A ⇒ s(x) = s, where N denotes the set of natural numbers and A is the set of involved source symbols. With this notion, the SSF used in the above two examples can be written as s(x) = x mod 2.
In the physical sense, SSF divides a given state into several subsets and allocates a different symbol to each distinct subgroup. Since ANS is mainly applied to compress data, therefore, the effectiveness of an SSF is judged by its compression performance and execution speed. Unfortunately, finding the best SSF for an ANS construction involves solving complicated combinatorial problems; therefore, sub-optimal heuristic approaches are adopted in most practical use cases.
From our previous discussions, the selection of the SSF is closely related to the symbol probability p_s. Moreover, the encoding function C(s, x) = x′ ≈ x / p_s tells us that, in a tabularized realization of a non-uniformly distributed ABS, the encoded states x′ corresponding to symbol s appear at integer multiples of an interval with spacing 1/p_s, since the input state x here can be any non-negative integer.
In summary, if the probability of symbol s is larger, its occupancy is higher and the symbol appears more times in the coding table; in addition, the interval spacing between neighboring states x′ is smaller, which means more states are allocated to the same symbol s. The physical meaning is that ANS allots more states to symbols that appear more often. To give readers a clear picture of the process of non-uniform ABS encoding, we give a simple illustrative example in Appendix A.
We take the non-uniform input 01111 as another illustrative example to further discuss this scenario. In this case, the probability of 0 is 1/5 and the probability of 1 is 4/5. As we can see in Table 2, 0 appears once out of five states, and 1 appears four times out of five.

The Range Asymmetric Numeral System (rANS)

In an rANS, the set of input symbols to be encoded is A = {s_0, s_1, . . . , s_n}; the number of occurrences of each symbol s_i is L_i; and the total number of occurrences of all symbols is L = ∑ L_i. Assume the probability of symbol s_i is p_i = L_i / L, with ∑ p_i = 1. In the following discussions, we call L_i a sub-cycle and L a cycle, and 'cycle' also stands for 'the range' of an rANS. The major difference between rANS and uABS is whether the number of symbols involved in the process is more than two. To explain the concept of rANS, again, we start with a simple example.
Example 3. Suppose A = {a, b, c}, with probability distribution p_a = 5/8, p_b = 2/8, and p_c = 1/8. Notice that this assumption of symbol distributions is the same as in Figure 3 of [35]; therefore, the same repeating patterns are obtained, as shown in Figure 2. From the above discussions, the ideal SSF for this example should assign symbol s to states x′ consistently according to p_s. So, symbol a should occupy 5/8 of all states, symbol b should occupy 2/8 of all states, and symbol c should occupy 1/8 of all states. According to this concept, we have the following coding table.
Of course, this deduction still applies to cases with other different probability distributions, as shown in Figure 2.
Following the same inference, the proper SSF for Example 3 would be: s(x) = a if (x mod 8) ∈ {0, 1, 2, 3, 4}, s(x) = b if (x mod 8) ∈ {5, 6}, and s(x) = c if (x mod 8) = 7. Or, we can express s(x) as the repeated pattern 'aaaaabbc' with period 8. Similarly, it is easy to find that the naive encoding functions C(a, x) = x / p_a = (8/5)x, C(b, x) = x / p_b = (8/2)x, and C(c, x) = x / p_c = 8x do not work well, since the first of them does not always yield an integer. In the next paragraph, we derive the actual encoding functions for Example 3.
First, let us define the cumulative distribution function, CDF[s] = ∑_{s′<s} L_{s′}; its physical meaning is the sum of the sub-cycle lengths of the symbols s′ before the to-be-encoded symbol s in a cycle. For example, if the to-be-encoded symbol is b, then s′ = a, and CDF[b] = the sub-cycle length of symbol a = 5. Second, since there is more than one sub-cycle in the coding table for the given symbol s, according to the current state x, we can find which sub-cycle the to-be-encoded symbol s belongs to simply by calculating ⌊x / L_s⌋ + 1, where ⌊y⌋ denotes the largest integer not exceeding y. For example, if we are computing C(b, 3), then ⌊3 / L_b⌋ + 1 = 2 tells us that we are now encoding symbol b in its second sub-cycle. Moreover, ⌊x / L_s⌋ × L = ⌊3 / L_b⌋ × 8 = 8 means we should add a bias 8 when calculating the address of the next state x′. Finally, we should find the exact position of the current state x in the sub-cycle it belongs to, and computing x mod L_s can quickly achieve this goal. Therefore, combining all the relevant calculations, we have C(b, 3) = ⌊3 / L_b⌋ × 8 + CDF[b] + (3 mod L_b) = 8 + 5 + 1 = 14. From the above discussions, we can conclude that the proper ANS encoding and decoding functions for a non-binary source with different symbol probability distributions should respectively be

C(s, x) = x′ = ⌊x / L_s⌋ × L + CDF[s] + (x mod L_s),    (4)

D(x′) = (s, x), where s = s(x′ mod L) and x = ⌊x′ / L⌋ × L_s − CDF[s] + (x′ mod L).    (5)

According to Equation (4), the rANS coding table for Example 3 is shown in Table 3. Notice that the main difference between Tables 3 and 4 lies in the number of encoded states x′. Since there are three distinct symbols and the minimum symbol probability is one-eighth, as shown in Table 3, there are 24 states in total. Moreover, we can easily check the correctness of Equation (5) by computing D(14): x = ⌊14/8⌋ × 2 − CDF[b] + (14 mod 8) = 2 − 5 + 6 = 3, and s is the (14 mod 8 =) 6th symbol in the coding table, which is b. Since all the above derivation is based on the symbol's range occupation in the coding table, we think this is why this approach is called the range ANS in the literature.
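Equations (4) and (5) can be sketched directly in Python for the Example 3 source (the variable names Ls, CDF, and spread are ours):

```python
# rANS sketch for Example 3: A = {a, b, c}, L_a = 5, L_b = 2, L_c = 1, L = 8.
Ls     = {'a': 5, 'b': 2, 'c': 1}   # sub-cycle lengths
L      = sum(Ls.values())           # cycle length, L = 8
CDF    = {'a': 0, 'b': 5, 'c': 7}   # CDF[s] = sum of sub-cycles before s
spread = 'aaaaabbc'                 # symbol spread function over one cycle

def rans_encode_step(s, x):
    """Equation (4): x' = floor(x/L_s)*L + CDF[s] + (x mod L_s)."""
    return (x // Ls[s]) * L + CDF[s] + (x % Ls[s])

def rans_decode_step(xp):
    """Equation (5): s = s(x' mod L); x = floor(x'/L)*L_s - CDF[s] + (x' mod L)."""
    s = spread[xp % L]
    x = (xp // L) * Ls[s] - CDF[s] + (xp % L)
    return s, x

print(rans_encode_step('b', 3))   # -> 14, matching C(b, 3) = 14
print(rans_decode_step(14))       # -> ('b', 3), matching D(14)
```

Note that each decode step exactly inverts the corresponding encode step, so an entire sequence can be recovered by repeatedly applying `rans_decode_step` and reading the symbols in reverse order.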



To accelerate the decoding speed, besides the basic coding table construction addressed above, the period of the repeat pattern (or the sum of the sub-cycle lengths), L, is usually selected as an integer power of 2, that is, L = 2^n. With this setting, in the decoding, we can use bit-shifting instead of division to realize ⌊x′ / L⌋ × L_s and use masking instead of the modular operation to implement x′ mod L. In this way, a decoding step needs only one multiplication operation.
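The shift-and-mask trick can be illustrated with the Example 3 values for symbol b (the variable names are ours):

```python
# Decoding D(14) from Example 3 with L = 2**3, using only bit operations.
n = 3
L = 1 << n                # L = 8
L_b, CDF_b = 2, 5         # sub-cycle length and CDF value for symbol b

xp = 14
# floor(x'/L) via a right shift, x' mod L via a mask:
x = (xp >> n) * L_b - CDF_b + (xp & (L - 1))
print(x)                  # -> 3, matching D(14) = (b, 3)
```

The single remaining multiplication is the `* L_b` factor; everything else is a shift, a mask, and additions.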

(b) Streaming ANS Coding and the Renormalization Process
The two ANSs discussed earlier, uABS and rANS, face a common serious problem: the state value becomes larger and larger during the encoding process if a streaming (or continuous) data source is encountered. This unbounded growth of the state range is unacceptable in practice because, in any computer architecture, the representable integers are always limited. For example, in a 64-bit computer, the largest unsigned integer type covers the range [0, 2^64 − 1]. If we want to encode an ultra-long sequence, an overflow will occur even if the largest integer type is adopted. Conversely, during the decoding process, the state value decreases and would eventually fall below zero to a negative integer, and negative integers also have their limits on a computer.
To keep the state values within the computer-representable integer ranges during the encoding and decoding processes, we should derive a dynamic mechanism for adjusting the state ranges during coding. When the state value falls below the allowable range, the mechanism increases the state range accordingly, and vice versa. We call this state range adjusting mechanism the renormalization process of ANS.
Before defining the renormalization process, we note that although both ANS encoding and decoding involve state transitions, they cannot be correctly described by a Finite State Machine because the involved states have unbounded ranges if a streaming source is considered. In other words, the number of possible states becomes finite only after applying the renormalization process. This bounded state range makes the corresponding ANS realizable on a limited-size computational facility. Since the involved states may exceed the allowable ranges of the computing device in both encoding and decoding, we discuss the renormalization mechanisms for the encoding and decoding processes, respectively.
In ANS encoding, when the state value goes out of the designated range, the renormalization process shifts the out-of-range state value one bit to the right (that is, divides the state value by 2), removing the least significant bit (LSB) from the state value and stuffing it into the newly defined 'ANS-bitstream variable.' For example, suppose the designated ANS state range is [15, 29]. Now, if the next encoding state goes to 70, it exceeds the maximum allowable value of 29. Since the binary representation of 70 is 1000110₂, after shifting one bit to the right we have 100011₂ = 35, which is still larger than 29. So, we shift 35 one bit to the right again and obtain 10001₂ = 17, which is within the target range. Of course, the two right-shifted bits, 10, are now stored in the prescribed ANS-bitstream variable, and the renormalization ends. With the aid of renormalization, we can continuously encode the incoming source symbols and guarantee that the state stays within the predefined bound. For ease of understanding the prescribed renormalization mechanism, Appendix B presents the pseudo-codes and illustrative examples for both ANS stream encoding and decoding.
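The encoder-side renormalization can be sketched as follows (a simplified illustration: we assume a state range with upper bound 29, consistent with the worked example, and append the shifted-out bits LSB first; a real implementation would fix the bit order by convention):

```python
def renormalize(x, x_max, bitstream):
    """Shift LSBs out of the state until it falls back to at most x_max."""
    while x > x_max:
        bitstream.append(x & 1)   # move the least significant bit to the bitstream
        x >>= 1                   # divide the state by 2
    return x

bits = []
x = renormalize(70, 29, bits)
print(x, bits)   # -> 17 [0, 1]  (70 -> 35 -> 17; shifted-out bits, LSB first)
```

The decoder performs the mirror operation, shifting bits back into the state until it re-enters the valid range.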
Observing the extreme example presented in the latter part of Appendix B, we can conclude that, to guarantee the proper operation of ANS stream coding, the total number of states in the allowable state range must be larger than the number of involved source symbols. Thus, the compactness consideration gives us the best choice of the upper bound UI_s = b × L_s. Recall that in ANS, to speed up processing, the new state range is expanded by dividing the original state range by the symbol's probability. This tells us that the lower bound of the allowable state range, IL_s, is determined by the smallest probability of the source symbols, which may bring challenges in realization when the source vocabulary is enormous. Fortunately, [32] investigated how to extend ANS's capability to serve situations where the size of the input set is considerably large: thousands or millions of symbols. Under this condition, the table size for realizing tANS will also be huge. This new situation had not been addressed in traditional ANS-related research; most ANS-related studies dealt with unsigned byte (uint8_t) inputs, but [32] deals with unsigned integer (uint32_t) inputs and even higher-precision cases. The most significant contribution of [32] comes from its discussion and investigation of finding a reasonable and realizable alphabet capacity. Moreover, [32] proposes ways to achieve the maximal allowable capacity based on symbol folding and partial alphabet re-ordering. The core idea of symbol folding lies in a particular coding technique, Elias gamma coding, commonly used for coding integers whose upper bound cannot be determined beforehand. This characteristic fits the new condition (i.e., the size of the input set can be enormous) well. The core idea of partial alphabet re-ordering is to supplement the case that symbol folding cannot handle well: the most frequent symbols having high symbol numbers.
[32] adopted the technique of enhancing byte codes with restricted prefix properties, proposed by S. Culpepper and A. Moffat [36] in 2005, to overcome this challenging case. As shown in [32], with the aid of these two existing techniques, we can apply ANS to handle applications with an extensive alphabet set.
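As a side note, Elias gamma coding itself is easy to sketch (a toy version of our own, not the folding scheme of [32]): a positive integer n is written as ⌊log₂ n⌋ zeros followed by the binary form of n, so no upper bound on n is needed in advance.

```python
def elias_gamma_encode(n):
    """Elias gamma code for a positive integer n: (len(bin(n)) - 1) zeros
    followed by the binary representation of n itself."""
    assert n >= 1
    binary = bin(n)[2:]              # e.g. 9 -> '1001'
    return '0' * (len(binary) - 1) + binary

def elias_gamma_decode(bits):
    """Inverse: count the leading zeros, then read that many bits plus one."""
    zeros = len(bits) - len(bits.lstrip('0'))
    return int(bits[zeros:2 * zeros + 1], 2)

print(elias_gamma_encode(9))                      # -> '0001001'
print(elias_gamma_decode(elias_gamma_encode(9)))  # -> 9
```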
For ease of discussion, let us focus on the case of a source with a reasonable number of source symbols in the rest of this work. After understanding why we design the allowable state range in this way, we apply the renormalization process to rANS to make it feasible in practice in the following sub-section.

The Table Asymmetric Numeral System (tANS)
As the name suggests, tANS focuses on realizing ANS with lookup tables. It achieves all encoding and decoding operations through table lookups, making encoding and decoding faster and easing hardware implementation [17,18,31,33]. Since all processes operate on a table, the size of the table must have a limit, which is equivalent to setting an upper bound on the number of states. This design thinking is the same as that of the stream rANS discussed earlier.
Similar to rANS, in tANS, L_s is the number of occurrences of symbol s. Assume the actual probability of symbol s is p_s and let q_s = L_s / L; q_s is designed to be as close to p_s as possible. As with conventional entropy coding, the larger the difference between q_s and p_s, the worse the compression efficiency.
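As a toy numerical illustration (the counts and probabilities below are our own assumptions), the quantized probability q_s = L_s / L only approximates p_s, and the gap is what costs compression efficiency:

```python
from fractions import Fraction

L = 8                                   # total table size (sum of all L_s)
counts = {'a': 5, 'b': 2, 'c': 1}       # assumed occurrence counts L_s
p = {'a': 0.60, 'b': 0.27, 'c': 0.13}   # assumed true probabilities p_s

# Quantized probabilities q_s = L_s / L; they must sum to exactly 1.
q = {s: Fraction(L_s, L) for s, L_s in counts.items()}
for s in counts:
    print(s, q[s], round(float(q[s]) - p[s], 3))  # per-symbol quantization error
```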
Recall from Example 3 that the corresponding SSF is s(x) = aaaaabbc, which is an orderly arrangement. In tANS, s(x) can be arranged in many more ways; for example, s(x) = abaaabaac is another proper choice. Actually, for this particular example, the total number of possible s(x) is 8!/(5! 2! 1!) = 168, and this is only an example with a fairly small number of source symbols. Generally speaking, when an English file is to be compressed, ASCII code is the most often used symbol representation; that is, a symbol has 256 possibilities. In this setting, the number of possible SSFs s(x) is the multinomial coefficient L!/(i_1! i_2! ⋯ i_256!), where i_1 + i_2 + ⋯ + i_256 = L, and the number of possible choices is extremely large. Therefore, the SSFs of tANS provide more possibilities for encoding/decoding, which increases the degree of system chaos and provides stronger cryptographic characteristics. Moreover, the associated broader choice in SSF also offers more room for optimizing the compression performance. It follows that the proper design of an SSF plays the core role in tANS. Due to their similarity in behavior, the design of tANS follows the same principles as the rANS stream encoder. The pseudo-code of ANS stream encoding presented in Appendix B contains a while loop. At first glance, it seems this while loop may run for a long time, but in fact, we can calculate in O(1) time how many iterations are needed in advance, as follows.
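The arrangement count above is just a multinomial coefficient, which we can verify directly (our own quick check):

```python
from math import factorial

def multinomial(counts):
    """Number of distinct arrangements of a multiset with the given counts,
    i.e., (sum of counts)! divided by the product of each count's factorial."""
    total = factorial(sum(counts))
    for c in counts:
        total //= factorial(c)
    return total

print(multinomial([5, 2, 1]))  # -> 168, matching 8!/(5! 2! 1!)
```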
Assume the to-be-encoded symbol is s, and the current encoded state value x is higher than the upper bound of the designated allowable state range. According to the renormalization principle, we must shift the current state x to the right several times to constrain the resulting state value within the target state range.
Let k_s(x) denote how many iterations of the while loop we need to run. For a given target state range I_s := {L_s, L_s + 1, . . . , 2L_s − 1}, it is easy to derive k_s(x) = ⌊log₂(x / L_s)⌋. After knowing k_s(x), we modify the calculation of mod(x, 2) to mod(x, 2^{k_s(x)}) and x = ⌊x/2⌋ to x = ⌊x/2^{k_s(x)}⌋. The pseudo-codes of the tANS encoding and decoding functions are modified accordingly.

The Construction of Coding Tables for tANS

Based on the discussions above, when the current input state is x, the symbol to be encoded is s, and the output next state is x′, we have x′ = C(s, ⌊x/2^k⌋), and the generated bit sequence is mod(x, 2^k). Therefore, Tables 5 and 6 illustrate the forms of the tANS encoding and decoding tables, respectively. For a symbol sequence to be encoded, tANS starts its encoding from the last symbol of the sequence, then the second to last, and so on. The generated bit sequence is stored starting from the LSB of the bitstream variable during the encoding process. When the encoding completes, a state and a bitstream are generated. In the opposite direction, tANS starts its decoding with a state and a bitstream. As pre-described, the tANS decoder extracts bits starting from the MSB of the bitstream variable during the decoding.
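In code, this constant-time renormalization can be sketched as follows (our own illustration; integer arithmetic replaces the floating-point logarithm):

```python
def k_s(x, L_s):
    """k_s(x) = floor(log2(x / L_s)): the number of right shifts that bring x
    into the target range I_s = {L_s, ..., 2*L_s - 1}; assumes x >= L_s."""
    return (x // L_s).bit_length() - 1

def renormalize_fast(x, L_s):
    """One-shot renormalization: emit the k low bits mod(x, 2**k) at once
    and replace x by floor(x / 2**k), instead of looping bit by bit."""
    k = k_s(x, L_s)
    emitted = x % (1 << k)   # mod(x, 2**k)
    return x >> k, emitted, k

print(renormalize_fast(70, 16))  # -> (17, 2, 2): two bits '10' are emitted
```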
In short, we can summarize the whole tANS coding process in the following four steps:
Step 1: Determine the symbol occurrence counts L_s and the allowable state range.
Step 2: Design the symbol spread function (SSF).
Step 3: Determine the encoding and decoding tables according to the SSF determined in Step 2.
Step 4: Perform encoding/decoding through lookups on the constructed tables.
To give readers a clear picture of the operations of tANS encoding and decoding and maintain fluent readability, a concrete and step-by-step example that illustrates the complete tANS processes is given in Appendix C.

The Avalanche Effect of the tANS
As mentioned earlier, tANS encoding processes can be treated as state transitions in a Finite State Machine model. Therefore, as long as the encoding input symbol differs, the encoder will produce (or the model will jump to) different output states, even from the same initial state. Under the same conditions given in Example C-1, assume there are two different inputs: input one is the symbol sequence "cabcaada," while input two is the symbol sequence "cbbcaada". That is, the two inputs differ only at the second symbol. Following the tANS encoding procedure, it is easy to verify that the output corresponding to input one is (State = 16, bitstream variable = "1101111110111111100"), and the result associated with input two is (State = 16, bitstream variable = "110111100010000000"). Notice that the output states are identical, but the bitstreams in the stream variables are dissimilar starting from the second bit, which is where the two input symbol sequences begin to differ. In the opposite direction, in tANS decoding, the output states generated during encoding are used as the starting states of the decoder, and the bitstream stored in the bitstream variable is extracted to conduct the renormalization process. Because of this mutual chaining nature, as long as the operand state or the content of the bitstream variable differs, the decoded result will be completely different as well.
Like arithmetic coding, this functional behavior, in which a tiny change in inputs produces a significant difference in outputs, is one of the preferred features in cryptography, called the avalanche effect. As mentioned above, the avalanche characteristics of tANS make it applicable to data security protection besides its original well-known usage in data compression.
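The avalanche effect is easy to reproduce even with a bare-bones range-ANS coder (a toy of our own, without renormalization or tables; the symbol counts are assumptions): changing only the second symbol scrambles the entire encoded integer.

```python
def build_tables(counts):
    """Cumulative starts B_s for each symbol; L is the total count."""
    starts, acc = {}, 0
    for s in sorted(counts):
        starts[s], acc = acc, acc + counts[s]
    return starts, acc

def rans_encode(symbols, counts):
    """Minimal rANS step C(s, x) = floor(x/L_s)*L + B_s + mod(x, L_s),
    applied from the last symbol to the first; the state is a big integer."""
    starts, L = build_tables(counts)
    x = L  # initial state
    for s in reversed(symbols):
        x = (x // counts[s]) * L + starts[s] + (x % counts[s])
    return x

def rans_decode(x, counts, n):
    """Inverse mapping: the slot mod(x, L) identifies the symbol."""
    starts, L = build_tables(counts)
    slot_to_sym = {starts[s] + r: s for s in counts for r in range(counts[s])}
    out = []
    for _ in range(n):
        slot = x % L
        s = slot_to_sym[slot]
        x = counts[s] * (x // L) + slot - starts[s]
        out.append(s)
    return ''.join(out)

counts = {'a': 4, 'b': 2, 'c': 1, 'd': 1}    # assumed occurrences, L = 8
x1 = rans_encode('cabcaada', counts)
x2 = rans_encode('cbbcaada', counts)          # differs in the 2nd symbol only
print(bin(x1))
print(bin(x2))                                # widely different bit patterns
print(rans_decode(x1, counts, 8))             # -> 'cabcaada'
```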
As for the chaotic behavior of ANS, there is one more thing that is worthy of notice. As addressed in Section 3.2(b) and Section 3.3, the ANS encoding and decoding functions are highly related to the designated SSF, where enormous possible choices exist. In other words, the combinatorial complexity in selecting SSF will lead to a higher degree of chaos, especially for tANS.

Applications of the Asymmetric Numeral Systems
To provide strong evidence of the value of ANS in practical applications, we review in this section some successful and meaningful applications of ANS that have been addressed in the literature so far. Additionally, as a new contribution, the application of ANS to Intellectual Property Rights Management and Integrity Checking of Digital Images will be discussed in detail in Section 4.3.

ANS in Index Compression and Machine Learning-Based Lossless Data Compression
Alistair Moffat and Matthias Petri [22] considered how ANS coding could be used with existing index compression techniques. They showed that ANS could be usefully combined with several index compression approaches to yield improved compression effectiveness within reasonable additional resource costs. By joining ANS with each of byte-based codes, word-based codes, and packed codes, they established new trade-offs for effectiveness and efficiency in index compression. In experiments on an inverted index for the 426 GiB Gov2 collection, the authors of [22] showed that the combination of blocking and ANS-based entropy coding against a set of 16 magnitude-based probability models yields compression effectiveness superior to most previous mechanisms while still providing reasonable decoding speed. Later, the same authors extended their study to examine the task of block-based inverted index compression [23], in which fixed-length blocks of postings data are compressed independently of each other. Instead of using one parameter, [23] proposed using a two-dimensional selector to summarize each block's distribution of values. Ref. [23] also introduced a revised mapping from symbol identifiers to ANS values, requiring less memory and providing byte-friendly output for exception values. Experiments with two extensive document collections demonstrate that the proposed mechanism can achieve substantial compression gain while query throughput speeds remain relatively unaffected.
The field of machine learning has experienced an explosion of activity in recent years. Many papers have examined applications of modern deep learning methods, such as AutoEncoder-based and GAN-like mechanisms, to lossy compression. Comparatively, applying Deep Neural Networks (DNNs) to lossless compression has been less well covered in recent works. Ref. [28] seeks to advance in this direction, focusing on lossless compression using latent variable models. In contrast to implementing bits-back coding [37] with Arithmetic codes, ref. [28] suggested using ANS instead and termed the new coding scheme 'Bits Back with ANS' (BB-ANS). After conducting a series of experiments, ref. [28] found that BB-ANS with a Variational AutoEncoder (VAE) outperforms generic lossless compression algorithms on binarized and raw MNIST, even with a straightforward model architecture. The authors of [28] extrapolate these results to predict that state-of-the-art latent variable models could be used in conjunction with BB-ANS to achieve significantly better lossless compression rates than current methods. However, as pointed out by [29], BB-ANS incurs an overhead that grows with the number of latent variables, restricting the capacity of the VAE and posing difficulties for density estimation performance; hence, the resulting compression rate suffers. Ref. [29] suggested recursively applying bits-back coding and termed the resulting scheme 'Bit-Swap' to overcome this shortcoming. Bit-Swap [29] improves BB-ANS's performance on hierarchical latent variable models with a Markov chain structure. Compared to latent variable models with only one latent layer, these hierarchical latent variable models achieve better density estimation performance on complex high-dimensional distributions. Although connecting ANS with DNNs is outside the focus of this writeup, we think this is one of the future research directions worthy of further exploration and investigation.

ANS in Joint Compression and Encryption of Digital Images
In his earliest works [2,3], Duda mentioned that, as a variation of entropy codes, ANS leaves considerable freedom in choosing a specific implementation table; therefore, we can apply ANS to compress and encrypt a message simultaneously. Duda and Niemiec continued to discuss the applicability of ANS for compression with encryption in [19], pointing out that ANS makes it possible to simultaneously encrypt the encoded message at nearly no additional cost. Moreover, ref. [19] analyzed the security level provided by an ANS-based cipher. The main security feature provided by ANS is the pre-described avalanche effect, which comes from ANS's variable-length coding nature. Any attempt to recover the original symbols from ANS-coded bits has to resolve the error propagation caused by even a single erroneously decoded bit. It is well known that the probability of a successful frame synchronization is negligible even for short sequences of symbols and decreases exponentially with the number of compressed symbols. However, as analyzed in [34], plain ANS can only support applications with low-level security requirements. In the same writeup, Seyit Camtepe et al. investigated the natural properties of ANS that allow incorporation with authenticated encryption using as little cryptography as possible. Moreover, they proposed three joint compression and encryption algorithms to face real applications with much higher security requirements. The first applies a single ANS with state jumps controlled by a pseudorandom bit generator (PRBG). The second uses two copies of ANS, where the PRBG manages the transitions between the two ANSs. The third algorithm deploys encoding function evolution to enhance the obtained security level. The contributions of [34] greatly expanded the applicability of ANS in joint compression and encryption.
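To convey the flavor of the second algorithm (two ANS copies with PRBG-managed transitions), here is a deliberately simplified sketch of our own; the SHA-256-based bit generator and the per-symbol encoder interface are purely illustrative, not the construction of [34]:

```python
import hashlib

def prbg_bits(key: bytes, n: int):
    """Toy pseudorandom bit generator: stretch a key into n bits by hashing
    the key with a counter (illustration only, not a vetted PRBG)."""
    bits, ctr = [], 0
    while len(bits) < n:
        for byte in hashlib.sha256(key + ctr.to_bytes(4, 'big')).digest():
            bits.extend((byte >> i) & 1 for i in range(8))
        ctr += 1
    return bits[:n]

def keyed_encode(symbols, key, encoders):
    """For each symbol, a key-driven bit selects which of the two ANS
    encoders (hypothetical per-symbol encode functions) processes it."""
    selector = prbg_bits(key, len(symbols))
    return [encoders[b](s) for b, s in zip(selector, symbols)]

# Dummy stand-in encoders, just to show the selection mechanics.
encoders = (lambda s: ('ANS0', s), lambda s: ('ANS1', s))
print(keyed_encode('abc', b'secret', encoders))
```

Without the key, an attacker cannot tell which encoder produced which part of the output, on top of the avalanche effect of each encoder.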
As mentioned in [34], though, the randomness of a pure avalanche-effect-based encryption scheme is not enough for high-level security applications. However, there are cases where low-level security may be workable with the aid of other control mechanisms. For example, the distribution of art collections and the verification of medical images are under particular management rules, which is quite different from the communications scenario among IoT sensors or devices considered in [34]. We believe that ANS might still provide a useful joint compression and encryption function for those applications. Therefore, we will investigate the possibility of applying ANS to protect the intellectual property rights (IPRs) of art collection pictures or to check the integrity of medical images in the next section.

ANS in Intellectual Property Rights Management and Integrity Checking of Digital Images
To exactly recover a time signal from its frequency-domain representation, we need to know both the magnitude and phase responses of the signal. Likewise, in ANS, both the correct state value and the content of the bitstream variable are a must for reconstructing a digital image without loss. Based on its avalanche effect, we can apply tANS as a vehicle to protect the intellectual property rights (IPRs) of art collection pictures or to check the integrity of medical images, as described in the following sub-sections.
(a) Some Specific Characteristics of ANS

Before going into the details, let us recall several preferred features provided by ANS.

1. Lossless and Compressive Representation
As pre-described, ANS belongs to the category of entropy coding; lossless compression is undoubtedly one of its profound properties. Therefore, it is quite suitable for digital art collection images or medical images, where compact and distortion-free representation is of top priority.
Moreover, ANS provides compression efficiency close to the Shannon limit, yet relatively little research on applying ANS to image compression exists. The JPEG Standard committee proposed JPEG XL [8] in 2017, in which the entropy coder was changed to rANS. Since JPEG XL includes many pre-processing and optimization techniques, its reported compression efficiency is better than the naive approach adopted in this work.

2. Avalanche and Retrospective Properties

The avalanche effect mentioned above is quite suitable for protecting digital art collection images. We can represent a digital art image by a positive integer state and a bit sequence. Art collectors can open, say, the state to the public as evidence for claiming ownership of the artwork and keep the bit sequence private as the verifier if a dispute occurs. Because of its retrospective and avalanche characteristics, we think there is an excellent opportunity to combine ANS with the recently popular NFT (Non-Fungible Token) [11] to make the IPRs of artworks much more secure. Similarly, we can use these two properties to check the integrity and protect the privacy of medical images at the same time.

3. Severability
We can apply the compactness and lossless properties of ANS mentioned above to digital images in a block-segmented way. With ANS's segmentable feature, we can assign different levels of protection or degrees of integrity checking to various portions of an image according to their importance. An artwork publisher who intends to sell his digital artworks to more than one collector can divide his art collection into different pieces and price them according to their corresponding values. Then, the publisher can generate the state and the bit sequence representing each partition. He can now disclose the state information to potential customers as a marketing representative of this partition in NFT applications, and the bit sequence of the same segmented area can then be sent to the actual buyer as a voucher certifying ownership. Moreover, from the marketing point of view, through the integration of ANS and NFT, a single physical artwork can be distributed, shared, and sold in the virtual world, which enlarges the potential market size and magnifies the market value of a digital artwork substantially. Figure 3 shows the information flow of the proposed ANS-based digital image processing system. A bank of ANS encoders is used to encode a given image, where each encoder generates a state and a bitstream representation for a given portion of the segmented input image. All the generated state values are collected to form a state-map of the image, which is made public and openly distributed in our system as a digital representation of that particular picture. In contrast, we keep the collection of generated bitstreams in the artist's (or a museum official's) hands as proof of ownership of that image (i.e., the digital artwork). Notice that we include a segmentation mask in our system, indicating the geometric pattern and the number of portions into which the input image is partitioned.
With the aid of the mask, we can process different portions of an image with distinct ANS encoders, where different SSFs are adopted to offer various realizations of ANS coding functions. The more complex and erratic the mask, the higher our system's security protection.
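The flow just described can be sketched as follows (our own pseudo-implementation; the block list, mask labels, and encoder interface are hypothetical stand-ins for the actual bank of ANS encoders):

```python
def encode_segmented(blocks, mask, encoders):
    """Encode each image block with the ANS encoder chosen by its mask label.
    Returns the public state-map and the privately kept bitstreams."""
    state_map, private_bits = [], []
    for block, label in zip(blocks, mask):
        state, bits = encoders[label](block)
        state_map.append(state)       # public digital representation
        private_bits.append(bits)     # ownership proof, kept private
    return state_map, private_bits

# Dummy per-label "encoders" standing in for differently configured tANS coders.
encoders = {0: lambda b: (sum(b), list(b)), 1: lambda b: (max(b), list(b))}
states, secrets = encode_segmented([[1, 2], [3, 4]], [0, 1], encoders)
print(states)  # -> [3, 4]
```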

(b) The Proposed Applications of ANS-based Digital Image Processing System
Figure 4 shows the actual encoder we used to enhance our system's security protection capability.
We separate the input image into RGB components and segment each color component into equal-sized blocks (called sub-images) simply for ease of implementation. Additionally, we add a block-based shuffling module to increase the confusion ability of our system. Finally, Figure 5 shows the block diagram of the actual decoder used in our system. Of course, we can treat the key used to conduct the block-based permutation as one of the security parameters of the proposed system.
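The block-based shuffling module can be sketched with a key-seeded permutation (our own illustration; a deployed system would use a cryptographically strong keyed generator instead of Python's `random`):

```python
import random

def shuffle_blocks(blocks, key):
    """Permute the blocks using a permutation derived from the key."""
    order = list(range(len(blocks)))
    random.Random(key).shuffle(order)
    return [blocks[i] for i in order]

def unshuffle_blocks(shuffled, key):
    """Rebuild the same permutation from the key and invert it."""
    order = list(range(len(shuffled)))
    random.Random(key).shuffle(order)
    restored = [None] * len(shuffled)
    for pos, i in enumerate(order):
        restored[i] = shuffled[pos]
    return restored

blocks = list(range(16))              # stand-ins for equal-sized sub-images
assert unshuffle_blocks(shuffle_blocks(blocks, 'key-1'), 'key-1') == blocks
```

Only a holder of the key can undo the permutation before ANS decoding, which is why the key serves as a security parameter of the system.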

Experimental Results
Through a series of experiments, we examine in this section the applicability of the proposed tANS-based system to protecting the IPRs of digital artwork collections and checking the integrity of medical images. The following experiments were conducted on a Darwin MacBook-Pro.local 18.7.0 Darwin Kernel Version 18.7.0; root:xnu-4903.278.44~1/RELEASE_X86_64 x86_64 computer system. For the ANS algorithm, we chose the new-generation entropy codec Finite State Entropy [38], the first implementation of ANS, developed by Yann Collet.

tANS in IPRs Protection of Digital Artwork Collections
This section utilizes the segmentable and retrospective features of tANS to protect the IPRs of an artwork image. To help readers better understand what we are doing, let us examine the related processing flow for the digitized painting picture shown in Figure 6. (We chose a low-resolution picture as the testing benchmark to avoid violating copyrights. ANS coding operations will not affect the processed image quality because they are conducted in the integer domain.)

Different colors in the mask define geometric patterns for segmenting sub-images according to the various degrees of importance of the image's content. As previously addressed, a tANS encodes a sub-image into an output state and an associated bitstream. As shown in Figure 7, our first experiment changes one byte of the state value in the encoded domain to see whether the decoded result will show the so-called avalanche effect. We randomly pick a sub-image defined by one specific color in the mask. Then, we randomly change a byte of the state value of the chosen sub-image.
We observe the corresponding decoded output with the following two questions in mind: (1) Is the damaged area of the decompressed image located in the same area where the state value was changed? (2) Is the degree of contamination severe or not?
There are ten distinct areas with different geometric patterns defined in the mask in our experiments. Figure 8 shows the snapshots corresponding to each sub-image, where one byte of the state value in each sub-image is changed randomly.
We measure our experiment's compression performance using the compression ratio, defined as the ratio of the file size after compression to the file size before compression. The average compression ratio of our experiments is 88%. This ratio is not very impressive compared with conventional entropy coders. The reason behind this not-so-good compression performance is that we did not take into account the many pre-processing and optimization techniques that have proven effective in enhancing compression performance in JPEG XL. Another possible factor is the usage of tANS itself: although tANS is the branch of ANS that provides the best efficiency in realization and processing speed, it is not optimized for image compression. This tells us that there is still considerable room to develop ANS-based approaches providing good performance in both security protection and compression ratio. As for the degree of contamination, the completely black blocks in the sub-images of the decompressed picture show that the avalanche effect contaminates the affected block completely, even though only a one-byte state value was changed.

tANS in Integrity Checking of Digital Medical Images
Enforcing protection of the contents of medical images, such as computed tomography (CT), magnetic resonance (MR), positron emission tomography (PET), mammography, ultrasound, and X-ray, has become a significant issue in computer security. Besides being valuable and essential for the early detection, diagnosis, and treatment of diseases, their more and more widespread distribution makes developing security mechanisms that guarantee their confidentiality, integrity, and traceability in an autonomous way a must. Facing such a demand, researchers proposed Reversible Watermarking (RW) [39,40] schemes for images of sensitive content, e.g., medical images, where any modification may affect their interpretation. However, extra data (the watermark) must be embedded in the protection target, which usually increases the file size. In this section, we suggest using tANS as the representative of the medical image content to achieve medical images' security protection and file size reduction simultaneously. We use the same system given in Figure 7 to test the integrity of medical images. Figures 9 and 10, respectively, show the original input and contaminated output images. Notice that the ability to check the images' integrity comes from tANS's avalanche feature, while the segmentability of tANS contributes to parallelizability in implementation.

Performance Comparison among Various Lossless Compression Algorithms

As suggested by anonymous reviewers, the comparisons of the performance among various lossless compression algorithms, in terms of compression ratio and execution speed, are reported in this section.

Description of Experimental Settings

The environment setting is Darwin MacBook-Pro.local 18.7.0 Darwin Kernel Version 18.7.0; root:xnu-4903.278.44~1/RELEASE_X86_64 x86_64.
The pictures we chose for benchmarking include the all-black, lattice, Lena, fruits, baboon, airplane, and chest images, with sources from [54]. We chose these images for diversity. The all-black and black-and-white lattice images show how the algorithms mentioned above perform on low-entropy images with one color and two colors. Similarly, the Lena, fruits, baboon, and airplane images show how those algorithms perform on classic gray images used in the image processing community. Finally, we chose the chest image to see how these algorithms perform on a medical image. To show how different these pictures are, we also show their histograms. By the way, our experiments did not involve any preprocessing of the testing images; therefore, the compression ratios are not as good as expected. However, we can still see the performance differences among all these algorithms.

Experiment Results
In the following, we take the all-black image as an illustrative example to explain our experimental procedures and results. The leftmost (a) and the middle (b) pictures of Figure 11, respectively, show the all-black image's snapshot and histogram, while the rightmost (c) chart reports the compression ratios of all algorithms we want to compare. Here, the compression ratio is defined as the ratio of the compressed file size to the uncompressed one. The x-axis of the middle picture denotes the image's RGB value, which ranges from 0 (pure black) to 255 (pure white); the corresponding y-axis represents the number of appearances of each RGB value. Histograms help characterize how different colors are distributed within an image. Notice that the higher the bar in the (c) chart, the poorer the compression performance.
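As an aside, the two quantities used throughout this section, the per-value histogram and the compression ratio, can be sketched in a few lines of Python (the helper names `histogram` and `compression_ratio` are ours, purely for illustration; they are not part of any benchmarked codec):

```python
from collections import Counter

def histogram(pixels):
    """Count the appearances of each gray value 0..255 (the y-axis of our histograms)."""
    counts = Counter(pixels)
    return [counts.get(v, 0) for v in range(256)]

def compression_ratio(uncompressed_size, compressed_size):
    """Ratio of the compressed file size to the uncompressed one; lower is better."""
    return compressed_size / uncompressed_size

# An all-black 4x4 toy image: every pixel is 0, so the histogram is a single spike.
pixels = [0] * 16
h = histogram(pixels)
print(h[0], sum(h[1:]))              # 16 0
print(compression_ratio(1000, 125))  # 0.125
```

A single-spike histogram like this one signals a highly skewed distribution, which is exactly the case where entropy coders shine.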
Following the same arguments, Figures 12-17 show the related experimental information associated with the black-and-white lattice, the Lena, the fruits, the baboon, the airplane, and the chest images, respectively.
To provide a clear picture of the relative compression ratio comparison, Table 7 shows the compression ratio of each algorithm in numerals. Moreover, to show the timing performance of all benchmark algorithms, Table 8 reports the time consumption of each tested algorithm in seconds. Notice that, as mentioned above, no preprocessing optimization was included in any of our experiments.

Observations Obtained from Our Experiments
Those rows with gray backgrounds in Table 7 report compression ratios obtained from ANS-related algorithms. We can make some comments on these results:

1. ANS-related algorithms performed well if the data distribution is highly skewed, as inferred from the all-black and the lattice images.
2. ANS-related algorithms performed only moderately if the data distribution is almost uniform, as inferred from the Lena, fruits, and baboon images.
3. ANS-related algorithms performed well for medical images (cf. the chest image) because most of the area in a medical image is of the same color (black or white), which also coincides with our first comment.
4. As for the compression ratio, we found that ANS-related algorithms performed almost the same as arithmetic coding or a little better, as expected from the theoretical point of view.
5. As for time consumption, we found that ANS-related algorithms need almost the least execution time among all algorithms and are comparable to the Huffman code, which is also as expected from the theoretical point of view.
As we said in Section 6.2, we did not employ any image preprocessing before compressing the images. This fact explains why the compression ratios of some images are not as good as expected. In general, some preprocessing steps before applying the entropy coding are a must within a standard image compression algorithm. For example, pik, which Google released and which adopts ANS as its source coding component, involves some image preprocessing techniques to enhance its compression performance.

Conclusions
ANS is valued by the industry precisely because it captures the benefits of both Huffman coding and arithmetic coding. Surprisingly, compared with Huffman and arithmetic coding, the application of ANS to image compression is rare. Therefore, this paper intends to give a self-contained, in-depth review of ANS-related technologies and apply them to compress and encrypt digital images. ANS's lossless compression feature makes it especially suitable for distortion-free applications, such as medical and digital art collection images. The retrospective capability of ANS comes from its avalanche characteristic, which can easily be realized using table-based ANS (tANS). Further, we suggested combining ANS with the recently popular NFT (non-fungible token) to make the intellectual property rights of artwork much more secure.
In addition, as application examples, we explored the feasibility of applying ANS to art collection images and medical images. We thoroughly investigated ANS's avalanche effect, which makes ANS applicable to the lossless compression, segmentation, and retrospection of digital images. Moreover, we successfully applied ANS's avalanche characteristic and segmentability to check the integrity of medical images in parallel.
As ANS is still under development, there is enormous room for future research. We list some topics that we plan to explore shortly: (1) The combinatorial complexity in designing a proper SSF makes developing an optimal ANS codec for a specific target very challenging. Thus, finding a heuristic approach for reaching an effective ANS solution for a given input source is of great interest. (2) Based on the obtained states and bitstreams, developing some post-processing, such as prefix or suffix coding, or going through a hash function to find a unique state representation, is worth doing. (3) Developing an efficient way to combine image recognition and segmentation techniques to automatically find Regions of Interest (ROIs) in a picture, so that the mask does not need to be set manually, is of interest and beneficial to those planning to develop ANS-based image protection applications systematically and automatically. (4) Since one of the tANS coded results is a bitstream, which can indeed be losslessly compressed again to save space, "what is the best combination of all possible entropy coders?" would be an exciting research topic. (5) Before an image enters the ANS coder, it can be processed (transformed) in advance. Since the mask can divide an image at will, applying other image processing techniques to a sub-image with an arbitrary shape becomes challenging. (6) As mentioned at the end of Section 4.1, properly combining ANS with DNNs to produce a fast compression mechanism with a high compression ratio is a research direction worthy of further exploration and investigation.

Informed Consent Statement:
Not applicable.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A
This part of the appendix presents an illustrative example for understanding the process of non-uniform ABS encoding.
Example A-1. Let us consider a non-uniformly distributed binary source with distribution p_0 = 0.25 and p_1 = 0.75. In this example, since p_0/p_1 = 1/3, the corresponding SSF will allocate one quarter of all possible states to the symbol 's = 0' and three quarters of them to the symbol 's = 1'.
To achieve this goal, we detail the construction flow of the lookup table of Example A-1 in the following. According to Equation (1), the encoding function of symbol 0 is C(x, 0) = 4x, and the encoding function of symbol 1 is C(x, 1) = (4/3)x. The physical interpretation is that for every four states, there will be one state corresponding to symbol 0, and for every 4/3 states, there will be one state corresponding to symbol 1. Because the latter ratio is a non-integer, this is synonymous with saying that among every four states, three correspond to symbol 1 and the remaining one to symbol 0. Expressed as a coding table, the result is shown in Table A1. From the coding table, it can be found that the state corresponding to symbol 0 does appear once in every four states, and the states corresponding to symbol 1 do appear three times in every four. Let us extend the above discussions further and consider the situations associated with several different probability distributions. Figure A1 illustrates the even and the odd number distributions associated with different probabilities of the symbol 1 if the ideal ABS coding function, C(x, s) = x' = x/p_s, is applied directly. Since the involved distributions of symbols are the same as those in [35], we obtain the same even-odd distribution patterns, as shown in Figure 1 of [35].
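To make the table construction concrete, the following Python sketch enumerates states under the quarter/three-quarters spread of Example A-1 (the names `symbol_of_state`, `C`, and `D` are ours, and we index appearances from zero starting at state 0, so the exact state values are an illustrative convention rather than the paper's Table A1 verbatim):

```python
def symbol_of_state(x):
    # SSF deduced for Example A-1: one quarter of the states carry symbol 0.
    return 0 if x % 4 == 0 else 1

def C(x, s, limit=64):
    """Encode: return the x-th state (0-indexed) whose label is s."""
    count = -1
    for state in range(limit):
        if symbol_of_state(state) == s:
            count += 1
            if count == x:
                return state
    raise ValueError("limit too small")

def D(state):
    """Decode: recover (x, s) from a state by counting earlier same-label states."""
    s = symbol_of_state(state)
    x = sum(1 for t in range(state) if symbol_of_state(t) == s)
    return x, s

print(C(2, 0))   # 8  (matches the ideal 4x)
print(C(2, 1))   # 3  (the ideal (4/3)*2 = 2.67 is rounded onto the table)
print(D(3))      # (2, 1)
```

Note how `C(2, 1)` returns 3 even though the ideal coding function gives 2.67: the table quantizes the ideal growth rate, which is exactly the mismatch analyzed next.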
Figure A1. In ABS, the even and the odd number distributions associated with different probabilities of the symbol 1.
Take the above figure as an example: in the second row, p_1 = 3/7, symbol 1 (deep-blue block) appears three times with a period of seven; in the third row, where p_1 = 1/3, symbol 1 appears once with a period of three; and, in the fourth row, where p_1 = 3/10, symbol 1 appears three times with a period of ten. However, if one looks at Table A1 in depth, one will find that the ideal encoding function, C(x, 1) = (4/3)x, does not give the matched result presented in the coding table. For example, when x = 2 and s = 1, the third-row-and-fourth-column entry of the coding table shows that the corresponding next state is 3, but according to the ideal encoding function, the result should be C(2, 1) = (4/3) × 2 ≈ 2.67. This mismatch comes from the fact that, as indicated in Equation (1), the state range expansion in ABS (or ANS in general) is only approximately inversely proportional to the symbol's probability. In other words, even for a simple non-uniform binary source, the applicable ABS coding function is not unique. According to the actual probability distributions, one must modify the naive encoding function to provide good compression performance.
Based on the abovementioned design guidelines for SSF and the observations from Table A1, we deduce that a better SSF for Example A-1 would be:

s(x) = 0, if x mod 4 = 0; 1, otherwise.

With this SSF, the renormalization and the corresponding ANS stream encoding processes presented in Section 3.2(b) can then be carried out. Similarly, in ANS decoding, the state value may be smaller than the designated range. In that case, the renormalization process shifts the out-of-range state one bit to its left (i.e., multiplies the state value by 2). We then extract the most significant bit (MSB) from the ANS-bitstream variable and add it to the LSB of the magnified state value.
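Before turning to the decoding example below, the encoder-side renormalization can be sketched as follows (our own Python rendition under the assumption b = 2, not the paper's exact pseudocode): the state is shifted right, emitting LSBs to the stream, until it falls into the symbol's sub-range [L_s, 2·L_s − 1].

```python
def renormalize_encode(x, L_s, bitstream):
    """Shift the state right, emitting LSBs, until it lies in [L_s, 2*L_s - 1]."""
    while x > 2 * L_s - 1:
        bitstream.append(x & 1)   # the emitted LSB is pushed onto the stream
        x >>= 1
    return x

bits = []
print(renormalize_encode(25, 5, bits), bits)   # 6 [1, 0]
```

With L_s = 5, state 25 = 11001 (binary) is shifted right twice, emitting its two LSBs in order, and lands on 6, consistent with the worked tANS example in Appendix C.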
For example, let us suppose the target range of the state is [15, 29]. Assume the content of the ANS-bitstream variable is 110₂ and the current state is 5, which is less than the permissible range's lower bound of 15, so renormalization is a must. Since the binary representation of 5 is 101₂, after shifting one bit to the left, we have 1010₂ = 10. Now, extracting the MSB from the ANS-bitstream variable, which is 1, and adding it to the just-obtained value 10, we have the new current state value 11 = 1011₂, which is still less than the lower bound 15; clearly, we have to conduct the left-shifting operation one more time. After applying the second left bit shift to 11 and adding the second MSB of the ANS-bitstream variable (which is 1) to it, we have the newest state value 10111₂ = 23, which is now within the target state range, and the renormalization process ends.
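The decoder-side renormalization just described can be checked against this worked example (again our own Python rendition, assuming the stream is consumed MSB-first):

```python
def renormalize_decode(x, lower, bitstream):
    """Shift the state left, pulling in MSBs from the stream, until x >= lower."""
    while x < lower:
        x = (x << 1) | bitstream.pop(0)   # pop(0): take the MSB of the stream first
    return x

stream = [1, 1, 0]                         # the ANS-bitstream variable holding 110 (binary)
print(renormalize_decode(5, 15, stream))   # 23
print(stream)                              # [0]  (one unread bit remains)
```

Starting from state 5 with target range [15, 29], the sketch performs exactly the two shift-and-add steps of the example and stops at 23.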
Following the same idea, the corresponding ANS stream decoding processes presented in Section 3.2(b) renormalize by repeatedly taking 1 bit from the MSB of the ANS-bitstream variable. Up to now, we know how to perform ANS stream encoding and decoding if the permissible state range is given; the question is how to determine the proper state range such that the corresponding ANS will provide good compression and execution performance. According to the basic definitions and characteristics of ANS, for each source symbol s_i, there will be an allowable state range I_{s_i} = {L_{s_i}, L_{s_i} + 1, . . . , b·L_{s_i} − 1}, where b is the base of the used number system (i.e., b = 2 and b = 10 for the binary and the decimal number systems, respectively). As for all the involved symbols, it is straightforward to get the following state range bounds: the lower bound of each I_s is at least L, and the upper bound is at most b·L − 1. Generally speaking, if we select L as a power of two and let b = 2, just as we have done in the abovementioned ANS stream coding processes, the associated ANS will be more efficient in practical implementations. A natural question may arise now: does an allowable state value have to be located within the range {L_s, L_s + 1, . . . , b·L_s − 1}? In other words, must there be at least (b − 1)·L_s possible states in I_s? To answer this question, let us take an extreme example that does not conform to the above condition. Suppose b = 2 and assume the range of states is I_s = {5, 6}, which violates the state range constraint (it should be {5, . . . , 2 × 5 − 1} = {5, . . . , 9}), as mentioned earlier. Suppose the current calculated state value is 7, which is greater than the allowable maximum state value of 6. According to the renormalization process, we should shift the state 7 one bit to the right and get the new state value 3. After adding the MSB extracted from the ANS-bitstream variable (assume it is 1), the state value changes from 3 to 4.
That is, in the renormalization process, the target state range {5, 6} has been skipped entirely, meaning there is no way to return to the allowed state range for conducting operations afterward. In the language of finite state machines, the system can never transition back into its set of legal states.

Appendix C
This part of the appendix presents a detailed, step-by-step illustrative example for understanding the tANS encoding and decoding processes.
Example C-1. Suppose the input source has four symbols, A : {a, b, c, d}, and the corresponding probability distribution is p_a = 2/16, p_b = 3/16, p_c = 5/16, p_d = 6/16. Now, let us consider the following to-be-compressed sequence, 'cabcaada.' According to the tANS coding processes summarized in Section 3.3(c), we have
Step 1: Select L = 16 => state range I := {16, 17, . . . , 31}, and the sub-cycle length for each symbol becomes:
I_a = [L_a, 2L_a − 1] = [2, 3] => q_a = L_a/L = 2/16, |I_a| = 2
I_b = [L_b, 2L_b − 1] = [3, 5] => q_b = L_b/L = 3/16, |I_b| = 3
I_c = [L_c, 2L_c − 1] = [5, 9] => q_c = L_c/L = 5/16, |I_c| = 5
I_d = [L_d, 2L_d − 1] = [6, 11] => q_d = L_d/L = 6/16, |I_d| = 6
It can be found in Step 1 that, for each symbol s, q_s = p_s, so there is no compression performance deficiency because L is exactly a power of 2.
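The bookkeeping of Step 1 can be reproduced mechanically (a small Python check; the dictionary layout is our own):

```python
L = 16
counts = {"a": 2, "b": 3, "c": 5, "d": 6}   # L_s for each symbol; they sum to L

# Each symbol's sub-range I_s = [L_s, 2*L_s - 1] and its table share q_s = L_s / L.
subranges = {s: (Ls, 2 * Ls - 1) for s, Ls in counts.items()}
shares = {s: Ls / L for s, Ls in counts.items()}

print(subranges["c"], shares["c"])   # (5, 9) 0.3125
print(subranges["d"], shares["d"])   # (6, 11) 0.375
```

Because L = 16 is a power of two and the counts sum exactly to L, each q_s equals the true probability p_s, which is why Step 1 loses no compression performance.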
Step 2: Determine the SSF, s(x) = s, and its tabularized encoding and decoding functions.
Step 3: Determine the encoding table and decoding table according to the symbol spread function defined in Step 2.
For ease of explanation, we present the resulting encoding table first and then choose an example table entry to verify its correctness. Following the form of the encoding table presented in Table 5, Table A4 shows the complete encoding table associated with Example C-1. In the table, the first row indicates the current input state value, and the first column denotes the to-be-encoded symbol. Each entry of the table consists of two elements: the top element gives the value of the encoded state after renormalization, while the bottom one presents the content of the (ANS-) stream variable. Now, take the gray-colored entry as a benchmark for verification. That is, the current input state is 25, and the symbol to be encoded is c. According to Table A4, C(c, 25) is NOT FOUND in the first step; this is because the legal state range of symbol c (cf. Table A2) would be I_c = [L_c, 2L_c − 1] = [5, 9]. Thus, the pre-described renormalization process has to be applied. According to the renormalization rule mentioned in Section 3.2(b), we should shift 25 to the right by 2 (= ⌊log₂(x/L_c)⌋ = ⌊log₂(25/5)⌋) bits and put the two LSBs (01) of the state 25₁₀ = 11001₂ into the bitstream variable in order. So, the content of the bitstream variable changes from empty to 10₂, and the state value changes from 25₁₀ = 11001₂ to 6₁₀ = 110₂. Since 6₁₀ is within the legal state range of symbol c, the renormalization ends. Finally, according to Table A2, C(c, 6) = 21, which is the next state. As explained above, this entry stores the bit sequence on the bitstream variable and outputs the corresponding next state 21. We can fill in all other entries in similar ways.
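This gray-entry verification can be replayed in code (a sketch under our assumptions; the one-entry `table` stands in for the Table A2 lookup C(c, 6) = 21 and is the only table fragment the example gives us):

```python
def encode_step(x, L_s):
    """One tANS encode step for the gray-colored entry: symbol c, input state 25."""
    bits = []
    while x > 2 * L_s - 1:          # renormalize into I_c = [5, 9]
        bits.append(x & 1)          # push the LSBs of 25 = 11001 (binary) in order
        x >>= 1
    # Fragment of Table A2 given in the text: C(c, 6) = 21 is the next state.
    table = {6: 21}
    return table[x], bits

print(encode_step(25, 5))   # (21, [1, 0])
```

The state drops from 25 to 6 after emitting two bits, and the table lookup then produces the next state 21, exactly as the entry prescribes.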
Following the form of the decoding table presented in Table 6, Table A5 shows the complete decoding table associated with Example C-1. When decoding, let y denote the bit sequence extracted from the bitstream variable. Again, we take the gray-colored entry as a benchmark for verification. That is, the input state to the decoder is 24. According to Table A2, the generated symbol is c, and the corresponding decoded state value would be 7. However, 7 is not in the legal state range I := [16, 31], so we should left-shift 7 by 2 (= R − ⌊log₂(7)⌋ = 4 − 2) bits and add the K (= 2) bits taken from the bitstream variable (denoted as y) to the renormalized result. It is easy to check that the output of the decoder becomes 28 + y. We can fill in all other table entries in similar ways.
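The decoding-side verification admits the same treatment (again a sketch; the hard-coded pair ('c', 7) stands in for the Table A2 lookup at state 24, which is the only fragment the example provides):

```python
import math

R = 4                      # I = [16, 31] = [2**R, 2**(R+1) - 1]

def decode_step(x_next, y):
    """One tANS decode step for the gray-colored entry of Table A5."""
    assert x_next == 24    # the entry under verification
    symbol, x = "c", 7     # Table A2 fragment: state 24 decodes to (c, 7)
    k = R - int(math.log2(x))          # bits to pull back in: 4 - 2 = 2
    x = (x << k) | y                   # renormalize back into I = [16, 31]
    return symbol, x, k

print(decode_step(24, 0b11))   # ('c', 31, 2), i.e., 28 + y with y = 3
```

For any 2-bit y, the output is 28 + y, matching the "28 + y" content of the table entry.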
Step 4: After completing the construction of the coding tables, we start encoding the input symbols one by one. Let us look back at Example C-1, where the input symbol string is "cabcaada". Now, suppose the initial state is 19; then Figure A2 illustrates the ANS encoding process in detail.
Entropy 2022, 24, x FOR PEER REVIEW 32 of 34 Figure A2. The complete tANS encoding process associated with Example C-1, with the initial state 19 and input sequence "cabcaada".
From the above figure, it follows that the encoded state is 16 and the content of the bitstream variable is "1101111110111111100".
Similarly, in the opposite direction and according to the decoding table, we tANS-decode the current state 16 associated with the bitstream "1101111110111111100", as illustrated in Figure A3. It is easy to check that we can recover the correct initial state 19 successfully. Notice that, in decoding, the bitstream extracted from the stream variable is in the reverse order of that of the encoding counterpart.
Figure A3. The complete tANS decoding process associated with Example C-1, with the input state 16 and stored bitstream '1101111110111111100'.