Article
Peer-Review Record

Computing One-Bit Compressive Sensing via Alternating Proximal Algorithm

Mathematics 2025, 13(18), 2926; https://doi.org/10.3390/math13182926
by Jin-Jiang Wang 1 and Yan-Hong Hu 2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 30 June 2025 / Revised: 17 August 2025 / Accepted: 8 September 2025 / Published: 10 September 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The manuscript considers the problem of one-bit compressive sensing, where only the sign information of the measurements is retained for use in inference algorithms. In addition, the manuscript assumes that neither the noise level nor the sparsity of the signal is known in advance, conditions designed to reflect many practical scenarios. The authors propose a robust, non-smooth, and non-convex objective model that accounts for unknown noise without requiring prior knowledge of the signal sparsity. To solve this model, they develop an alternating proximal algorithm and provide a rigorous convergence analysis. The authors show that the sequence generated by the algorithm converges to a local minimiser, and further, that it converges to a global minimiser provided the initial estimate is sufficiently close to that global minimiser. Numerical experiments are used to support the theoretical findings and demonstrate the algorithm's practical performance.

The manuscript has several strengths. First, it addresses a timely and realistic problem in compressive sensing, extending applicability to noisy and uncertain settings where existing methods often struggle. Second, the theoretical contributions are significant; the authors provide convergence guarantees that enhance the reliability of the proposed algorithm. Third, the use of an alternating proximal scheme is well-justified given the non-convex and non-smooth nature of the objective. Finally, the inclusion of numerical experiments, although briefly described in the abstract, indicates an effort to validate the proposed method empirically.
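For readers unfamiliar with the setting, the one-bit measurement model described above can be sketched as follows. This is a generic illustration with placeholder dimensions and noise level, not the authors' specific experimental setup; `B` denotes the sensing matrix, and noise is modeled as random sign flips, as in the paper's robust model.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, s = 200, 100, 5          # signal length, number of one-bit measurements, sparsity
flip_prob = 0.05               # fraction of sign flips modeling measurement noise

# s-sparse unit-norm signal (one-bit measurements lose amplitude information,
# so only the direction of x is recoverable)
x = np.zeros(n)
support = rng.choice(n, s, replace=False)
x[support] = rng.standard_normal(s)
x /= np.linalg.norm(x)

B = rng.standard_normal((m, n))      # Gaussian sensing matrix
y = np.sign(B @ x)                   # ideal one-bit measurements

# noise enters as random sign flips of the retained bits
flips = rng.random(m) < flip_prob
y_noisy = np.where(flips, -y, y)

print("flipped bits:", int(flips.sum()), "of", m)
```

Recovery algorithms such as the one proposed here then estimate the direction of `x` from `B` and `y_noisy` alone.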

However, the manuscript also has several areas that would benefit from improvement.

The writing needs significant refinement for clarity and correctness. There are several grammatical errors; for example, “existed methods” should be “existing methods.” Phrasing such as “a robust objective model whose objective function...” is unnecessarily repetitive. The authors should revise the manuscript thoroughly for language and presentation.

In addition, the novelty of the proposed model and algorithm should be made more explicit. The current summary does not clearly distinguish this work from existing robust one-bit CS methods, either in terms of theoretical guarantees or practical performance. A more detailed comparison with recent literature would strengthen the case for the contribution’s originality.

Another concern is that the global convergence proof depends on a good initialisation, yet the paper does not appear to offer guidance on how to choose such an initial estimate in practice. Including heuristics or empirical observations on initialisation strategies would improve the paper’s practical relevance.

Finally, the scope of the experiments is unclear from the abstract; to demonstrate robustness, it is important that the experimental section includes varied scenarios, such as different noise levels, sparsity regimes, and possibly real-world data.

Overall, this paper presents a valuable contribution to the field of compressive sensing by proposing a robust, theoretically grounded approach to signal recovery under uncertainty.

Author Response

Thank you for your comments on our manuscript (mathematics-3759449). These suggestions are invaluable for improving our paper, and we have carefully revised the manuscript accordingly. 

 

  1. The writing needs significant refinement for clarity and correctness. There are several grammatical errors; for example, “existed methods” should be “existing methods.” Phrasing such as “a robust objective model whose objective function...” is unnecessarily repetitive. The authors should revise the manuscript thoroughly for language and presentation.

Response: We apologize for these careless mistakes and thank the reviewer for pointing them out. We have done our best to improve the manuscript and have made changes throughout; these changes do not affect the content or framework of the paper. Rather than listing each change here, we have marked them in red in the revised paper. We sincerely appreciate the Editors' and Reviewers' careful work and hope the corrections meet with approval.

 

  2. In addition, the novelty of the proposed model and algorithm should be made more explicit. The current summary does not clearly distinguish this work from existing robust one-bit CS methods, either in terms of theoretical guarantees or practical performance. A more detailed comparison with recent literature would strengthen the case for the contribution’s originality.

Response: Thank you very much for your suggestion. We have added a comparison with the results of other recent literature in the last paragraph of the introduction on page 3.

 

  3. Another concern is that the global convergence proof depends on a good initialisation, yet the paper does not appear to offer guidance on how to choose such an initial estimate in practice. Including heuristics or empirical observations on initialisation strategies would improve the paper’s practical relevance.

Response: Thank you very much for your valuable feedback. This article focuses on the formulation of the model, the development of the algorithm, and the study of its convergence, and does not delve into the selection of initial points in practical problems. Due to time constraints, we are currently unable to produce such results. We have noted this suggestion and plan to conduct in-depth research on this issue in future studies. Thank you very much for your suggestion.

 

  4. Finally, the scope of the experiments is unclear from the abstract; to demonstrate robustness, it is important that the experimental section includes varied scenarios, such as different noise levels, sparsity regimes, and possibly real-world data.

Response: Thank you very much for your suggestion. We have provided a detailed description of the experimental scope in the fifth section, and a general description of the experimental scope has been added to the abstract.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The manuscript presents many language errors and typos that prevent understanding of all the ideas. Besides, the formulas also need careful revision: in some cases the extent of the min operators is unclear; put differently, the formula notation seems inconsistent across the manuscript. It also seems that (4) is unnecessary for the story the paper tells. In addition, the jump to (10) needs a better explanation. Notation is sometimes confusing as well.

It is unclear how to interpret the second line in the for loop in Algorithm 1.

Theorem 2 seems irrelevant.

Confidence intervals could help in the figures. The problem is that simply presenting the error metrics does not really indicate whether the reconstruction is good or not. Which thresholds would be acceptable? How many measurements (bits) would be required to provide acceptable reconstructions?

Author Response

Thank you for your comments on our manuscript (mathematics-3759449). We have carefully revised the manuscript accordingly. Revised portions are highlighted in red.

 

1. The manuscript presents many language errors and typos that prevent understanding of all the ideas. Besides, the formulas also need careful revision: in some cases the extent of the min operators is unclear; put differently, the formula notation seems inconsistent across the manuscript. It also seems that (4) is unnecessary for the story the paper tells. In addition, the jump to (10) needs a better explanation. Notation is sometimes confusing as well.

Response: Thank you for this suggestion. (1) We have corrected the grammatical errors and polished the English throughout the text. These changes do not affect the content or framework of the paper. Rather than listing each change here, we have marked them in red in the revised paper. We sincerely appreciate the Editors' and Reviewers' careful work and hope the corrections meet with approval.

(2) All formulas have been numbered consecutively, and curly brackets now clarify the extent of the min operators.

The model corresponding to formula (3) is valid for the noiseless case, while the model corresponding to formula (4) suits the case in which the measurement Bx is contaminated by noise (sign flips). They represent models for different situations, and we hope to preserve both.

(3) We have added a description of the algorithm proposed in the reference containing Formula (9); the reference containing Formula (10) improves upon it by removing assumptions on the data and strengthening the convergence results. The new description reflects the relationship between the two references.

 

  2. It is unclear how to interpret the second line in the for loop in Algorithm 1.

Response: Thank you very much for your valuable feedback. The definitions and computations of the two iteration points, between which the algorithm alternates, are given in formulas (24), (25), and (27) on page 5.
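Since Algorithm 1 and formulas (24), (25), and (27) are not reproduced in this record, the following generic sketch of a two-block alternating proximal-gradient scheme may help illustrate what alternating between two iteration points looks like. The objective, the variable names (`x` for the signal block, `u` for an auxiliary block), the penalty `lam`, and the step-size rule are all illustrative assumptions, not the paper's actual updates; the only standard ingredient is soft-thresholding, the proximal operator of the ℓ1 norm.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1 (component-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def alternating_proximal_demo(B, y, lam=0.1, iters=200):
    """Generic two-block proximal-gradient alternation on the illustrative
    objective 0.5*||B x + u - y||^2 + lam*||x||_1 + lam*||u||_1."""
    m, n = B.shape
    x, u = np.zeros(n), np.zeros(m)
    step = 0.9 / (np.linalg.norm(B, 2) ** 2 + 1.0)   # below 1/Lipschitz
    for _ in range(iters):
        # x-block: gradient step on the smooth term, then prox of the l1 penalty
        x = soft_threshold(x - step * (B.T @ (B @ x + u - y)), step * lam)
        # u-block: same pattern with the freshly updated x held fixed
        u = soft_threshold(u - step * (B @ x + u - y), step * lam)
    return x, u
```

Each block is updated by a proximal step while the other block is held fixed, which is the general pattern the response describes.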

 

  3. Theorem 2 seems irrelevant.

Response: Thank you very much for your valuable feedback. Theorem 1 shows that the sequence generated by Algorithm 1 converges to a local minimizer of (20). Theorem 2 shows that, if the initial point of Algorithm 1 is sufficiently close to any one of the global minimizers of the function L given in (20), then the sequence generated by Algorithm 1 converges to a global minimizer of model (20). For the sake of completeness, we hope to preserve it.

 

  4. Confidence intervals could help in the figures. The problem is that simply presenting the error metrics does not really indicate whether the reconstruction is good or not. Which thresholds would be acceptable? How many measurements (bits) would be required to provide acceptable reconstructions?

Response: We agree that confidence intervals would be helpful in the figures. Due to time constraints, we are currently unable to produce such results. We hope to employ this technique to determine acceptable thresholds in the future. Thank you very much for your suggestion.
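As one standard way to implement the reviewer's suggestion in a future revision, a percentile bootstrap over independent trials yields confidence intervals for reported error metrics. The function and parameter names below are illustrative assumptions, not code from the paper.

```python
import numpy as np

def bootstrap_ci(errors, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of
    per-trial error metrics (e.g. reconstruction errors over random trials)."""
    rng = np.random.default_rng(seed)
    errors = np.asarray(errors, dtype=float)
    # resample trials with replacement and record the mean of each resample
    means = np.array([
        rng.choice(errors, size=errors.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return errors.mean(), lo, hi
```

The resulting `(lo, hi)` band can then be drawn in figures, for example with matplotlib's `errorbar` or `fill_between`.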

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

Although the analyses could be improved, this does not prevent the publication of the paper.
