Peer-Review Record

# Compaction of Church Numerals

Algorithms 2019, 12(8), 159; https://doi.org/10.3390/a12080159
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Received: 28 June 2019 / Revised: 5 August 2019 / Accepted: 5 August 2019 / Published: 8 August 2019
(This article belongs to the Special Issue Data Compression Algorithms and their Applications)

Round 1

Reviewer 1 Report

I find the article interesting. It carries some novelty in the easy way it reveals the compaction of \lambda-terms. The idea is very good and well developed. I cannot affirm it is completely original, since non-normal terms in typed lambda calculus are representations of short forms. The upper bound on the length of the normal lambda term is of the same size (tetrations of 2) as the uncompacted term. The use of inverse normal forms to compress general typed lambda terms is an old idea. In the 1970s, some work was done using non-normal forms as a way to shorten the lambda calculus representation. If you search, you will find it. However, the work in this article is not directly derived from these old ideas. It is related, but the relation seems to be natural. I consider the application here original in essence.

Small thing to note:

Page 1, line 26: "for some n" — which n does it denote? There is no existential quantifier related to it in this phrase.

Page 4, line 105: This definition of compaction is too restrictive. You have a more liberal definition:

M' N and M N have the same normal form (when it exists), for every N, and #M' < #M.

What happens if you use this definition of compaction?

Page 5, line 155: should it not be a 2 instead of a \phi?

Congratulations on reporting empirical results.

Author Response

Our responses are explained below.
Note that the revisions made in response to your comments are marked with the notes "IF Note: A[1-3]" in the revised paper.

Point 1:
Page 1, line 26: "for some n" — which n does it denote? There is no existential quantifier related to it in this phrase.

Response 1:
We intended to say that our algorithm achieves this time complexity for some specific numbers.
We have therefore changed the wording from "for some n" to "in some cases".

Point 2:
Page 4, line 105: This definition of compaction is too restrictive. You have a more liberal definition:
M' N and M N have the same normal form (when it exists), for every N, and #M' < #M.
What happens if you use this definition of compaction?

Response 2:
Thank you very much for your suggestion.
We have revised the definition in line with your comment.
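To make the more liberal notion of compaction concrete, here is a minimal sketch in Python (our own illustration, not code from the paper, with Church numerals encoded as higher-order functions): the Church numeral 4 written out directly applies f four times, while the self-application "2 2" reduces to the same numeral from a smaller term.

```python
# Church numerals as higher-order Python functions:
# a numeral n takes a function f and returns its n-fold composition.
two = lambda f: lambda x: f(f(x))           # λf.λx. f (f x)
four = lambda f: lambda x: f(f(f(f(x))))    # λf.λx. f (f (f (f x))): four applications
four_compact = two(two)                     # the term "2 2", which normalizes to 4

# Decode a Church numeral by applying it to successor and zero.
def decode(n):
    return n(lambda k: k + 1)(0)

print(decode(four))          # 4
print(decode(four_compact))  # 4 -- same normal form, but "2 2" is the smaller term
```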

Point 3:
Page 5, line 155: should it not be a 2 instead of a \phi?

Response 3:
Thank you for pointing this out.
We have fixed it.

Reviewer 2 Report

The authors propose a novel compaction algorithm for numbers, using higher-order syntax, Church numerals, and iterated exponentiation (so-called tetration).

I did not find errors and believe the claims are correct.

Remarks and Recommendations:

However, in order to put the paper in a better scientific context, I have several recommendations.

The authors should give more references of related work and discuss this related work.

For example, work on using grammars in the form of straight-line programs:

Markus Lohrey: Algorithmics on SLP-compressed strings: A survey. Groups Complexity Cryptology 4(2): 241-299 (2012)

Wojciech Rytter: Grammar Compression, LZ-Encodings, and String Algorithms with Implicit Input. ICALP 2004: 15-27

and LZ-encodings (Lempel-Ziv compression).

Also mention that, due to information-theoretic arguments, compression cannot be better than binary encoding (on all numbers).

Of course, for certain subsets of numbers, compression may be better; as you mention, 2^(2^n) is a good compression of rare numbers.

Please use standard notation; I think log* is in widespread use, in contrast to slog, and there is no difference if you use it in the O(.)-notation.

Since you make sharp claims (even using log log n), you should state your basic assumptions on the computational model.

For example, in Definition 5 this is visible: your measure ignores the size of names and pointers.

This is ok, but should be mentioned in the assumptions.

You may say something on the time (complexity?) used for compression/decompression, or simplifications of the compression, or using the compressed numbers.

Definition 10: lambda terms with exactly one occurrence of the bound variable are called "linear".

Author Response

We would like to express our appreciation for your efforts to review our paper.
Our responses are explained below.
Note that the revisions made in response to your comments are marked with the notes "IF Note: B[1-5]" in the revised paper.

Point 1:
The authors should give more references of related work and discuss this related work.
For example, work on using grammars in the form of straight-line programs:
Markus Lohrey: Algorithmics on SLP-compressed strings: A survey. Groups Complexity Cryptology 4(2): 241-299 (2012)
Wojciech Rytter: Grammar Compression, LZ-Encodings, and String Algorithms with Implicit Input. ICALP 2004: 15-27
and LZ-encodings (Lempel-Ziv compression).
Also mention that, due to information-theoretic arguments, compression cannot be better than binary encoding (on all numbers).
Of course, for certain subsets of numbers, compression may be better; as you mention, 2^(2^n) is a good compression of rare numbers.

Response 1:
Thank you very much for your suggestion on improving our paper.
We have added two references ([12] and [13]) and an explanation in line with your comment.
We have also mentioned that compression cannot be better than binary encoding on all numbers.

Point 2:
Please use standard notation; I think log* is in widespread use, in contrast to slog, and there is no difference if you use it in the O(.)-notation.

Response 2:
Thank you for pointing out this significant issue.
In the O(.)-notation, the value of slog depends on its base, so strictly speaking it is not equal to log* unless the base is e.
As you mentioned, this representation is a bit confusing, so we have added the following explanation:
"However, note that in the O(.)-notation the size of the super-logarithm depends on its base, that is, O(slog_a n) != O(slog_b n) if a != b."

Point 3:
Since you make sharp claims (even using log log n), you should state your basic assumptions on the computational model.
For example, in Definition 5 this is visible: your measure ignores the size of names and pointers.
This is ok, but should be mentioned in the assumptions.

Response 3:
We have added a reference to the computational model.

Point 4:
You may say something on the time (complexity?) used for compression/decompression or simplifications of the compression,
or using the compressed numbers.

Response 4:
Thank you for pointing out this issue.
We have added a simple analysis of the time complexity for Algorithm 1.
For Algorithm 2, it does not seem easy to analyze its time complexity, but we have mentioned the running time for obtaining \varphi^{\ast}.

Point 5:
Definition 10: lambda terms with exactly one occurrence of the bound variable are called "linear".

Response 5: