Article
Peer-Review Record

Privacy-Preserving Distributed Learning via Newton Algorithm

Mathematics 2023, 11(18), 3807; https://doi.org/10.3390/math11183807
by Zilong Cao, Xiao Guo and Hai Zhang *
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 19 June 2023 / Revised: 28 July 2023 / Accepted: 30 July 2023 / Published: 5 September 2023
(This article belongs to the Special Issue Data Mining: Analysis and Applications)

Round 1

Reviewer 1 Report

The paper proposes the GDP-LocalNewton algorithm for privacy-preserving and communication-efficient distributed learning. The authors establish theoretical convergence results for GDP-LocalNewton, and the numerical experiments support the theory and demonstrate the efficiency of the algorithm. The motivation is very interesting, and the results obtained are good and reliable. I recommend that the paper be accepted for publication.

1. Can the assumptions on the loss function be weakened?

Some English expressions require minor modifications.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper studies federated learning by introducing a privacy-preserving method that effectively addresses communication costs. GDP-LocalNewton is a privacy-preserving distributed learning algorithm: based on Newton's method, it improves communication efficiency by prioritizing local computations.
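To check my understanding of the method: I read each communication round roughly as several local Newton steps on a worker's own data, followed by Gaussian noise added to the iterate before it is sent to the server. The Python sketch below is my own illustration under that reading (names such as local_newton_round, num_local_steps, and sigma are made up here; this is not the authors' pseudocode):

```python
import numpy as np

def logistic_grad_hess(w, X, y, lam=1e-2):
    """Gradient and Hessian of an l2-regularized logistic loss on one worker's data."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (p - y) / len(y) + lam * w
    D = p * (1.0 - p)
    hess = (X.T * D) @ X / len(y) + lam * np.eye(X.shape[1])
    return grad, hess

def local_newton_round(w_global, X, y, num_local_steps=3, sigma=0.1, rng=None):
    """One worker: a few local Newton steps on its own data, then Gaussian noise for privacy."""
    rng = rng if rng is not None else np.random.default_rng()
    w = w_global.copy()
    for _ in range(num_local_steps):
        g, H = logistic_grad_hess(w, X, y)
        w = w - np.linalg.solve(H, g)              # undamped Newton step on the local loss
    return w + sigma * rng.normal(size=w.shape)    # Gaussian perturbation before communication

# Server side (one communication round): average the workers' noisy local iterates, e.g.
# w_next = np.mean([local_newton_round(w, Xk, yk) for Xk, yk in worker_data], axis=0)
```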

 

The strength of this work is the theoretical analysis. The authors rigorously study the convergence of GDP-LocalNewton and identify two types of error, one related to privacy protection and the other to local computation. These findings are supported by experimental results, bridging the gap between theory and practice and showing the effectiveness of the proposed algorithm. However, these results rest on strong assumptions, such as strong convexity, which is not practical in the federated learning setting. From my optimization background, the strong-convexity requirement comes from the use of Newton's method, which makes GDP-LocalNewton less than ideal. If the authors can remove this assumption or replace it with a weaker one, that would be a major contribution.

Theorems 1-4 should be stated as lemmas: they are cited from other work, not derived in your paper.

Besides the line-search step size, did the authors try to analyze other, simpler step sizes, such as a fixed or decaying step size? Would those work? Why is line search chosen here? Also, in the experiments, can you explain how the step size \alpha^star of GDP-LocalNewton is set as the fixed step size of GDP-GD?
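For concreteness, this is the kind of "easier" alternative I have in mind: a fixed or decaying schedule instead of a backtracking (Armijo) line search along the Newton direction. The sketch below is my own illustration, not code from the paper, and the name backtracking_alpha is hypothetical:

```python
import numpy as np

def backtracking_alpha(loss, w, direction, grad, alpha0=1.0, beta=0.5, c=1e-4):
    """Shrink alpha from alpha0 until the Armijo sufficient-decrease condition holds."""
    alpha = alpha0
    f0 = loss(w)
    while loss(w + alpha * direction) > f0 + c * alpha * float(grad @ direction):
        alpha *= beta
    return alpha

# Tiny example on a quadratic loss, where the Newton direction is exact:
quad = lambda w: 0.5 * float(w @ w)
w0 = np.array([2.0, -1.0])
g0 = w0                       # gradient of the quadratic at w0
d0 = -g0                      # Newton direction (the Hessian is the identity here)
alpha = backtracking_alpha(quad, w0, d0, g0)   # accepts alpha = 1.0 immediately

# Fixed-step alternative:  use a constant alpha (e.g. 0.5) at every iteration.
# Decaying alternative:    alpha_t = alpha0 / (t + 1) at iteration t.
```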

 

In the description of Algorithm 1, what is the "Loss function parameter B and \Tau"? The authors should introduce these quantities before the algorithm, or at least give some explanation.

I suggest the authors use a table of parameters listing all parameters used in this paper; I got lost several times while reading because so many parameters are entangled together.

The references are quite limited; I believe there are many related works in the FL, Newton-method, and privacy areas.

Some typos:

Line 32: showed the improved convergence rate over the distributed first-order competitors;  the ---> an

Theorem 7......Moreover, asssume that the sample size for each work satisfies.... ; work ---> worker
Author Response

Please see the attachment.

Author Response File: Author Response.pdf
