Article
Peer-Review Record

Making Japanese Ukiyo-e Art 3D in Real-Time

by Innes Brown and Ognjen Arandjelović *
Reviewer 1:
Reviewer 2:
Submission received: 10 February 2020 / Revised: 18 February 2020 / Accepted: 18 February 2020 / Published: 11 May 2020
(This article belongs to the Special Issue Machine Learning and Vision for Cultural Heritage)
Version 1
DOI: 10.3390/sci2010006

Round 1

Reviewer 1 Report

The paper deals with an interesting topic: the integration of the ukiyo-e style into a 3D rendering scenario. The paper is well written and organised, and the results provided look reliable and significant in terms of novelty and scientific rigour.

Author Response

> The paper deals with an interesting topic: the integration of the ukiyo-e style into a 3D rendering scenario. The paper is well written and organised, and the results provided look reliable and significant in terms of novelty and scientific rigour.

We are very thankful for the reviewer's time and kind comments!

Reviewer 2 Report

This is an interesting paper that presents an innovative framework for real-time ukiyo-e style rendering of 3D objects in an interactive environment. The proposed framework is designed to mimic the ukiyo-e printing method, with the primary objectives of achieving real-time rendering performance and ukiyo-e style image quality.

The proposed framework extracts linear elements from 3D models. The linear element extraction process starts by first detecting important edges, which include boundaries, creases, and silhouettes, followed by the creation of a reference image. The reference image acts as the basis for final image creation. Polylines are built by analyzing the reference image, and polygon strips are created along these polylines. Stroke textures are rendered onto the polygon strips to create the final image. The authors have proposed four methods for applying textures to the polygon strips. The first method renders a flat color together with a noise pattern onto the strips. In the second method, a texture pattern is applied. In the third and fourth methods, two variants of the bokashi gradation effect are applied.
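The extraction stage summarized above can be sketched in a few lines. The sketch below is purely illustrative and is not the authors' code: it works from a hypothetical depth-map input rather than the paper's mesh-based detector, and `detect_edges`, `build_polyline`, and the 0.5 gradient threshold are all stand-ins introduced here for illustration.

```python
import numpy as np

def detect_edges(depth):
    """Flag pixels with large depth discontinuities as silhouette edges.
    (A crude stand-in; the paper's detector also uses mesh boundaries and
    creases, which require connectivity information a depth map lacks.)"""
    gy, gx = np.gradient(depth)
    return np.hypot(gx, gy) > 0.5

def build_polyline(edge_mask):
    """Toy polyline builder: list edge pixels ordered by x coordinate.
    (The paper instead traces connected runs through the reference image.)"""
    ys, xs = np.nonzero(edge_mask)
    order = np.argsort(xs, kind="stable")
    return list(zip(xs[order].tolist(), ys[order].tolist()))

# A 16x16 depth map with a square object in front of a distant background.
depth = np.full((16, 16), 10.0)
depth[4:12, 4:12] = 1.0

edges = detect_edges(depth)       # boolean mask of silhouette pixels
polyline = build_polyline(edges)  # (x, y) vertices along the silhouette
```

In the full pipeline, polygon strips would then be laid along such polylines and stroke textures rendered onto them.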

The proposed framework is evaluated on the basis of performance and temporal coherence using several test scenes. The authors claim to achieve real-time rendering with good image quality. Some comments about the performance evaluation are provided below:

  1. The framework provides an innovative solution for ukiyo-e style rendering. However, it is of limited practical applicability.
  2. The authors claim that the reference image resolution can be reduced independently of the final image in order to reduce computational cost. Fig. 15 shows final images created with two different reference image resolutions. The authors claim that both images look similar. However, they do not provide any quantitative comparison, for example by comparing the two images using the structural similarity index measure (SSIM). Moreover, they do not provide the minimum reference image resolution that can be used to accurately create the final image.
  3. In Fig. 14 (a), the frame rate of different image sets (arranged from left to right with respect to polygon count) is provided, which shows that polygon count is not a major factor affecting performance. It would be better to compare different algorithms based on the polygon count per unit volume (or area) for each dataset.
  4. In Fig. 14 (b), the line method takes the highest processing time. As the authors have stated that they use a 4-core Intel i5-3470S processor with integrated graphics, it would be better to use multiprocessing to increase the frame rate.
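The SSIM comparison suggested in point 2 could be carried out along the following lines. This is a minimal single-window sketch (a simplification of the usual sliding-window SSIM; the constants follow the standard convention), and the two arrays below are synthetic stand-ins for the renders of Fig. 15, not the paper's actual images.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM computed over the whole image."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
# Synthetic stand-ins for a full-resolution render and a degraded one.
full_res = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
degraded = np.clip(full_res + rng.normal(0, 20, size=(64, 64)), 0, 255)

identical_score = global_ssim(full_res, full_res)  # exactly 1.0
degraded_score = global_ssim(full_res, degraded)   # below 1.0
```

Identical images score exactly 1.0; any degradation lowers the score, which would let the minimum usable reference resolution be identified against a chosen threshold.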

Author Response

> This is an interesting paper that presents an innovative framework for real-time ukiyo-e style rendering of 3D objects in an interactive environment.

We are genuinely thankful for the reviewer's kind comments and the thorough feedback, which we found useful. As regards the reviewer's other comments, we address them in turn:

1. We are not quite sure why the reviewer feels this way. While there is no doubt that there is a lot of room for improvement (indeed, one of our goals is to encourage more work in this area), some of which we discuss in the article, our method is certainly practicable within the scope of the functionality we focused on.

2. This is a perfectly reasonable point. The reason why we did not opt for the suggested quantitative assessment lies in its limited value *in the present context*. Since our goal is visual in nature, the primary consideration is indeed human perception, rather than a quantitative measurement which may bear little relevance to our aims.

3. We again entirely see where the reviewer is coming from, but we do slightly disagree. There is no inherent volume quantification to speak of here (e.g. why does it matter whether the skyscraper example depicts a giant physical skyscraper or a toy one?). How internal coordinates are mapped onto the real world, if that is needed, is extrinsic to our work. On the other hand, the polygon count is an objective and clearly relevant measure.

4. We fully agree: had we used more powerful hardware, or focused on optimizations specific to specialized hardware, the absolute performance would undoubtedly have been even better. The important thing for understanding and replication is that we reported the details of the setup we used, allowing the reader to contextualize the results.