Open Access Article | Post Publication Peer Review | Version 2, Approved

Making Japanese Ukiyo-e Art 3D in Real-Time (Version 2, Approved)

School of Computer Science, University of St Andrews, Scotland KY16 9AJ, UK
* Author to whom correspondence should be addressed.
Received: 10 February 2020 / Revised: 18 February 2020 / Accepted: 18 February 2020 / Published: 11 May 2020
(This article belongs to the Special Issue Machine Learning and Vision for Cultural Heritage)
Peer review status: 1st round review

Reviewer 1: Enrico Vezzetti, Department of Management and Production Engineering, Politecnico di Torino, Torino, Italy
Reviewer 2: Unsang Park, Professor, Department of Computer Engineering, Sogang University, Seoul, South Korea
Version history:
- Version 1 (Original): approved by both reviewers; authors' responses provided.
- Version 2: approved.
Version 2, Approved
Published: 11 May 2020
DOI: 10.3390/sci2020032

Version 1, Original
Published: 28 February 2020
DOI: 10.3390/sci2010006
Ukiyo-e is a traditional Japanese painting style most commonly printed using wood blocks. Ukiyo-e prints feature distinct line work, bright colours, and a non-perspective projection. Most previous research on ukiyo-e-styled computer graphics has focused on the creation of 2D images. In this paper we propose a framework for rendering interactive 3D scenes in the ukiyo-e style. The rendering techniques use standard 3D models as input and require minimal additional information to automatically render scenes in a ukiyo-e style. The described techniques are evaluated on their ability to emulate ukiyo-e prints, their performance, and their temporal coherence.
Keywords: rendering; graphics; heritage; Japanese; Asian
Figure 1

MDPI and ACS Style

Brown, I.; Arandjelović, O. Making Japanese Ukiyo-e Art 3D in Real-Time. Sci 2020, 2, 32.



Reviewer 1

Sent on 30 Mar 2020 by Enrico Vezzetti | Approved
Department of Management and Production Engineering, Politecnico di Torino, Torino, Italy

The paper deals with an interesting topic: the integration of a ukiyo-e style in a 3D rendering scenario. The paper is well written and organised, and the provided results look reliable and significant in terms of novelty and scientific rigour.

Response to Reviewer 1

Sent on 10 Jul 2020 by Innes Brown, Ognjen Arandjelovic

We are very thankful for the reviewer's time and kind comments!

Reviewer 2

Sent on 27 Mar 2020 by Unsang Park | Approved
Professor, Department of Computer Engineering, Sogang University, Seoul, South Korea.

This is an interesting paper that presents an innovative framework for real-time ukiyo-e-style rendering of 3D objects in an interactive environment. The proposed framework is designed to mimic the ukiyo-e printing method, with the primary objectives of achieving real-time rendering performance and ukiyo-e-style image quality.

The proposed framework extracts linear elements from 3D models. The linear element extraction process starts by detecting important edges, which include boundaries, creases, and silhouettes, followed by the creation of a reference image. The reference image acts as the basis for final image creation. Polylines are built by analyzing the reference image, and polygon strips are created along these polylines. Stroke textures are rendered onto the polygon strips to create the final image. The authors propose four methods for applying textures to the polygon strips. The first method renders a colour together with a noise pattern onto the strips. In the second method, a texture pattern is applied. In the third and fourth methods, two variants of the bokashi gradation effect are applied.
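The polyline-tracing and strip-building stages summarized above can be sketched in outline. The following Python snippet is an illustrative approximation only, not the authors' implementation: the function names (`trace_polylines`, `strip_quads`), the greedy bidirectional tracing strategy, and the perpendicular-offset strip construction are all assumptions made for the sake of the example.

```python
import numpy as np

# 8-connected neighbourhood offsets, in (row, col) order.
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def trace_polylines(edges):
    """Greedily trace an 8-connected binary edge map into polylines.

    Grows from an arbitrary seed pixel in both directions, so a simple
    connected path is always recovered as a single polyline.
    """
    remaining = {(y, x) for y, x in zip(*np.nonzero(edges))}
    polylines = []
    while remaining:
        line = [remaining.pop()]
        for _ in range(2):                 # grow forward, then backward
            while True:
                y, x = line[-1]
                for dy, dx in NEIGHBOURS:
                    nxt = (y + dy, x + dx)
                    if nxt in remaining:
                        remaining.discard(nxt)
                        line.append(nxt)
                        break
                else:
                    break                  # no unvisited neighbour left
            line.reverse()
        polylines.append(line)
    return polylines

def strip_quads(polyline, width=1.0):
    """Expand a polyline (>= 2 points) into the two rails of a quad strip
    by offsetting each point perpendicular to the local direction; a
    stroke texture could then be mapped across the resulting strip."""
    pts = np.asarray(polyline, dtype=float)
    d = np.gradient(pts, axis=0)                 # local direction
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)    # perpendicular
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-9
    return pts + 0.5 * width * n, pts - 0.5 * width * n

# Usage: a single horizontal edge traces into one polyline of 5 pixels.
edges = np.zeros((5, 7), dtype=bool)
edges[2, 1:6] = True
lines = trace_polylines(edges)
left, right = strip_quads(lines[0])
```

In a real renderer the edge map would come from the boundary/crease/silhouette detection pass, and the two rails would be triangulated into the textured polygon strip; the sketch only shows the geometric skeleton of those steps.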

The proposed framework is evaluated on the basis of performance and temporal coherence using several test scenes. The authors claim to achieve real-time rendering with good image quality. Some comments about the performance evaluation are provided below:

  1. The framework provides an innovative solution for ukiyo-e-style rendering. However, it is of limited practical applicability.
  2. The authors claim that the reference image resolution can be reduced independently of the final image, to reduce computational cost. Fig. 15 shows final images created with two different reference image resolutions. The authors claim that both images look similar. However, they do not provide any quantitative comparison, for example by comparing the images using the structural similarity index measure (SSIM). Moreover, they do not provide the minimum reference-image resolution at which the final image can still be created accurately.
  3. In Fig. 14 (a), the frame rate of different image sets (arranged from left to right by polygon count) is provided, which shows that polygon count is not a major factor affecting performance. It would be better to compare the different algorithms based on the polygon count per unit volume (or area) of each dataset.
  4. In Fig. 14 (b), the line method takes the highest processing time. As the authors state that they use a 4-core Intel i5-3470S processor with integrated graphics, it would be better to use multiprocessing to increase the frame rate.
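The quantitative comparison suggested in point 2 could be carried out along the following lines. This is a simplified single-window SSIM computed over the whole image (the standard formulation averages SSIM over local sliding windows, e.g. as in scikit-image's `structural_similarity`); the function name and constants below are illustrative, not taken from the paper.

```python
import numpy as np

def ssim_global(img_a, img_b, data_range=255.0):
    """Single-window structural similarity between two grayscale images.

    Statistics are taken over the whole image rather than over local
    windows; adequate as a quick sanity check of how close two renders
    produced from different reference-image resolutions are.
    """
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    c1 = (0.01 * data_range) ** 2      # stabilising constants from the
    c2 = (0.03 * data_range) ** 2      # standard SSIM definition
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov_ab = ((a - mu_a) * (b - mu_b)).mean()
    num = (2 * mu_a * mu_b + c1) * (2 * cov_ab + c2)
    den = (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    return num / den

# Identical images score exactly 1.0; a noise-degraded copy scores lower.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0.0, 25.0, size=ref.shape), 0, 255)
score_same = ssim_global(ref, ref)      # 1.0
score_noisy = ssim_global(ref, noisy)   # < 1.0
```

Sweeping the reference-image resolution and plotting such a score against the full-resolution render would give the minimum usable resolution the reviewer asks about.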

Response to Reviewer 2

Sent on 10 Jul 2020 by Innes Brown, Ognjen Arandjelovic

> This is an interesting paper that presents an innovative framework for real-time ukiyo-e style rendering of 3D objects in an interactive environment.

We are genuinely thankful for the reviewer's kind comments and the thorough feedback, which we found useful. As regards the reviewer's other comments, we address them in turn:

1. We are not quite sure why the reviewer feels this way. While there is no doubt that there is a lot of room for improvement (indeed, one of our goals is to encourage more work in this area), some of which we discuss in the article, our method is certainly practicable within the scope of the functionality we focused on.

2. This is a perfectly reasonable point. The reason why we did not opt for the suggested quantitative assessment lies in its limited value *in the present context*. Since our goal is visual in nature, the primary consideration is human perception, rather than a quantitative measurement which may bear little relevance to our aims.

3. We again entirely see where the reviewer is coming from, but we do slightly disagree. There is no inherent volume quantification to speak of here (e.g. why does it matter whether the skyscraper example is of a giant physical skyscraper or a toy one?). How internal coordinates are mapped onto the real world, if that is needed, is extrinsic to our work. On the other hand, polygon count is an objective and clearly relevant measure.

4. We fully agree: had we used more powerful hardware, or focused on optimizations specific to specialized hardware, the absolute performance would undoubtedly have been even better. The important thing for understanding and replication is that we reported the details of the setup we used, allowing the reader to contextualize the results.
