Peer-Review Record

Two-Stage Generator Network for High-Quality Image Inpainting in Future Internet

Electronics 2023, 12(6), 1490; https://doi.org/10.3390/electronics12061490
by Peng Zhao 1,2,3,4,*, Dan Zhang 1,3,4, Shengling Geng 1,3,4 and Mingquan Zhou 1,3,4,*
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 18 January 2023 / Revised: 18 March 2023 / Accepted: 19 March 2023 / Published: 22 March 2023

Round 1

Reviewer 1 Report

The paper introduces a generative model for high-resolution image inpainting. The architecture uses two processing stages in order to generate different levels of detail, with transformers as a central component to enhance efficiency. The overall contribution is incremental to the current state of the art.
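Schematically, such a two-stage coarse-to-fine inpainting pipeline can be sketched as follows (a generic outline, not the authors' exact network; the choice of sub-modules is an assumption):

```python
import torch
import torch.nn as nn

class TwoStageInpainter(nn.Module):
    """Generic coarse-to-fine sketch: stage 1 predicts a coarse
    completion of the masked regions, stage 2 refines the details."""
    def __init__(self, coarse: nn.Module, refine: nn.Module):
        super().__init__()
        self.coarse = coarse    # e.g., an encoder-decoder CNN
        self.refine = refine    # e.g., a transformer-based refiner

    def forward(self, image: torch.Tensor, mask: torch.Tensor):
        # mask is 1 inside holes, 0 elsewhere
        masked = image * (1 - mask)                      # zero out holes
        x1 = self.coarse(torch.cat([masked, mask], 1))   # coarse prediction
        merged = masked + x1 * mask                      # paste coarse result into holes
        x2 = self.refine(torch.cat([merged, mask], 1))   # detail refinement
        return x1, x2
```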

Overall, the paper is well written, with some minor typos and grammatical flaws. The overall structure should be improved: there should be a strict separation between related work and the authors' own contribution. In its current form the contribution is not highlighted very well, and about 50% of the article is devoted to reviews of other work. Furthermore, clear statements about the central contribution and its novelty must be given in the abstract.

Although written quite well, the review section is too extensive. Instead of giving a broad overview of inpainting architectures in general, I would suggest focusing on high-resolution inpainting solutions in detail. Various publications already apply transformers to this problem, and from the paper in its current form it is not easy to see the differences.

The results and the experimental setup seem reasonable, with some shortcomings. First, a justification for the selection of the competing methods should be given. Second, there is no comparison or relation to existing inpainting benchmarks (such as the ETH challenge or the NTIRE 2022 Image Inpainting Challenge report). All results are based on a single dataset; at least a second evaluation on a second domain should be reported in order to prove the generality of the proposed approach.

Some detailed comments:
l. 37: "low pixel density": did you mean "low resolution"?
l. 42/43: this seems more related to denoising or deblurring problems; the relation to inpainting should be made clear.
l. 179: a precise statistical inference problem should be stated.
l. 214: "perceptual loss" may be the better term (a common form of such a loss is sketched after this list).
l. 329: I think these are too many details that are not relevant here. Will the code be made public?
l. 386: the sentence is duplicated.
Image captions should be somewhat more explanatory.
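For clarity, a perceptual loss is commonly defined on the feature maps of a fixed pretrained network such as VGG (a generic form, not necessarily the authors' exact formulation):

\[
\mathcal{L}_{\mathrm{perc}} = \sum_{j} \frac{1}{N_j} \bigl\| \phi_j(I_{\mathrm{out}}) - \phi_j(I_{\mathrm{gt}}) \bigr\|_1 ,
\]

where \(\phi_j\) denotes the activations of the \(j\)-th selected layer, \(N_j\) the number of elements in that layer, \(I_{\mathrm{out}}\) the inpainted output, and \(I_{\mathrm{gt}}\) the ground-truth image.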

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The overall article seems interesting and useful. However, I would suggest the following changes:
Primarily, the comparison of experimental results does not specify whether the same set of test images was used across all the algorithms, i.e., whether Partial, PEN-Net, and GapNet were evaluated on the same test images as the proposed algorithm when the performance comparisons in Tables 1 and 2 were made. It is vital to use the same test dataset, and I urge the authors to clarify this (a minimal sketch of how a fixed split can be shared across methods follows below).
Second, the authors claim speed and convergence as advantages but fail to quantify them. Either present quantitative evidence or remove the assertion completely.
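For illustration, a deterministic test split can be sampled once, saved, and then reused by every compared method (a minimal sketch; file paths and function names are hypothetical):

```python
import hashlib
import json
import random
from pathlib import Path

def make_fixed_test_split(image_dir: str, n_test: int, seed: int = 0,
                          out_file: str = "test_split.json") -> list[str]:
    """Sample a deterministic test split once and save it, so that every
    compared method (Partial, PEN-Net, GapNet, the proposed model, ...)
    is evaluated on exactly the same images."""
    paths = sorted(str(p) for p in Path(image_dir).glob("*.jpg"))
    rng = random.Random(seed)          # fixed seed -> reproducible sample
    split = rng.sample(paths, n_test)
    Path(out_file).write_text(json.dumps(split, indent=2))
    return split

def split_checksum(split: list[str]) -> str:
    """Short checksum that can be reported alongside the results,
    proving all methods consumed the identical test set."""
    return hashlib.sha256("\n".join(split).encode()).hexdigest()[:12]

# Usage (hypothetical paths):
# split = make_fixed_test_split("data/celeba/test", n_test=2000)
# print(split_checksum(split))
```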

Other minor suggestions:
70: spell out the abbreviation GAN at first use.
179-180: this definition of image inpainting is better suited to the introduction section.
260: be consistent about the capitalization of S_tr.
260: what is S in q = t - S?
260: define S_tq.
Section 3.3: although described verbally, provide a mathematical equation for L_c, similar to what is provided for L_r in Eqn. (11).
353: quantify how slow.
395: spell out PSNR and SSIM for first-time readers (the standard definitions are recalled after this list).
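For reference, the standard definitions read:

\[
\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathrm{MAX}_I^{2}}{\mathrm{MSE}}\right),
\qquad
\mathrm{MSE} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(I(i,j)-K(i,j)\bigr)^{2},
\]
\[
\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + c_1)\,(2\sigma_{xy} + c_2)}{(\mu_x^{2}+\mu_y^{2}+c_1)\,(\sigma_x^{2}+\sigma_y^{2}+c_2)},
\]

where \(\mathrm{MAX}_I\) is the maximum possible pixel value, \(I\) and \(K\) are the compared images of size \(m \times n\), \(\mu\), \(\sigma^{2}\), and \(\sigma_{xy}\) are local means, variances, and covariance, and \(c_1\), \(c_2\) are small stabilizing constants.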

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The paper has been improved significantly in most parts. What remains critical is the response to point 5:

"Although, the proposed method has obtained good experimental results on CelebA compared to other methods. The applicability of the proposed method is not well. Thus, there is no comparison or relation to existing inpainting benchmarks (like ETH challenge or the NTIRE 2022 Image Inpainting Challenge: Report). "

The answer given by the authors should at least be stated somewhere in the conclusions. Since the proposed model is more compact and shows promising results under some conditions, it is not a big issue if it lags behind on some other benchmark scores. What remains essential for publication is to allow the reader a balanced and fair assessment of the proposed method. Some more details about the shortcomings, and some concrete suggestions to overcome them, should be added as future steps at the end of the article.

Author Response

Please see the attachment.

Author Response File: Author Response.docx
