Article

Deep Representation of a Normal Map for Screen-Space Fluid Rendering

1 Department of Computer Science and Engineering, Korea University, Seoul 02841, Korea
2 Interdisciplinary Program in Visual Information Processing, Korea University, Seoul 02841, Korea
* Author to whom correspondence should be addressed.
Academic Editor: Aleksander Mendyk
Appl. Sci. 2021, 11(19), 9065; https://doi.org/10.3390/app11199065
Received: 23 August 2021 / Revised: 17 September 2021 / Accepted: 24 September 2021 / Published: 29 September 2021
(This article belongs to the Special Issue Application of Artificial Intelligence, Deep Neural Networks)
We propose a novel method for efficiently generating a highly refined normal map for screen-space fluid rendering. Because filtering the normal map is crucial to the quality of the final screen-space fluid rendering, we employ a conditional generative adversarial network (cGAN) as a filter that learns a deep normal map representation, thereby refining the low-quality normal map. In particular, we designed a novel loss function dedicated to refining the normal map information, and we use a specific set of auxiliary features to train the cGAN generator to learn features that are more robust with respect to edge details. Additionally, we constructed a dataset of six typical scenes to enable effective demonstrations of multitype fluid simulation. Experiments indicated that our generator inferred clearer and more detailed features on this dataset than a basic screen-space fluid rendering method. Moreover, in some cases, the results generated by our method were even smoother than those generated by the conventional surface reconstruction method. Our method improves the fluid rendering results via the high-quality normal map while preserving the advantages of both screen-space fluid rendering and traditional surface reconstruction: the computation time is independent of the number of simulation particles, and the spatial resolution depends only on the image resolution.
Keywords: screen space rendering; image-based rendering; fluid rendering; machine learning; supervised learning
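To illustrate the kind of low-quality normal map the cGAN refines, the following is a minimal sketch of the conventional screen-space stage: estimating view-space normals from a rendered depth buffer via finite differences. This is the standard baseline technique, not the paper's network; the function name, focal-length parameters `fx`/`fy`, and the example depth ramp are illustrative assumptions.

```python
import numpy as np

def normals_from_depth(depth, fx=500.0, fy=500.0):
    """Estimate a view-space normal map from a depth buffer using central
    finite differences -- the conventional (pre-refinement) stage of
    screen-space fluid rendering. fx and fy are assumed focal lengths."""
    dzdx = np.gradient(depth, axis=1) * fx   # depth slope along x, in view units
    dzdy = np.gradient(depth, axis=0) * fy   # depth slope along y
    # The surface normal is proportional to (-dz/dx, -dz/dy, 1); normalize it.
    n = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n

# Example: a planar depth ramp along x yields a constant tilted normal.
depth = np.tile(np.linspace(1.0, 2.0, 64), (64, 1))
nmap = normals_from_depth(depth)
```

Per-pixel differencing like this is what makes such normal maps noisy near silhouettes and depth discontinuities, which is precisely the artifact the learned filter in the paper is trained to remove.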
MDPI and ACS Style

Choi, M.; Park, J.-H.; Zhang, Q.; Hong, B.-S.; Kim, C.-H. Deep Representation of a Normal Map for Screen-Space Fluid Rendering. Appl. Sci. 2021, 11, 9065. https://doi.org/10.3390/app11199065

AMA Style

Choi M, Park J-H, Zhang Q, Hong B-S, Kim C-H. Deep Representation of a Normal Map for Screen-Space Fluid Rendering. Applied Sciences. 2021; 11(19):9065. https://doi.org/10.3390/app11199065

Chicago/Turabian Style

Choi, Myungjin, Jee-Hyeok Park, Qimeng Zhang, Byeung-Sun Hong, and Chang-Hun Kim. 2021. "Deep Representation of a Normal Map for Screen-Space Fluid Rendering" Applied Sciences 11, no. 19: 9065. https://doi.org/10.3390/app11199065

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
