Machine learning, especially the GAN (Generative Adversarial Network) model, has advanced tremendously in recent years. Since the NVIDIA Machine Learning group presented StyleGAN in December 2018, it has offered designers a new way to have machines learn from different or similar types of architectural photos, drawings, and renderings, and then generate (a) similar fake images, (b) style-mixing images, and (c) truncation-trick images. The author both collected and created input image data, preparing architectural plan and section drawings with a clear design purpose, and then applied StyleGAN to train specific networks on these datasets. Through the training process, we can examine the deep relationships among the input architectural plans or sections and generate serialized transformation images (truncation-trick images) that are stacked to form a 3D (three-dimensional) model at a decent resolution (up to 1024 × 1024 × 1024 voxels). Although the resulting 3D models are difficult to use directly in 3D spatial modeling, these unexpected 3D forms can still inspire new design methods and open up greater possibilities for architectural plan and section design.
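The truncation trick referenced above linearly interpolates a sampled latent toward the average latent of the trained generator, which is how the serialized transformation images can be produced; the resulting image slices can then be stacked into a voxel volume. The sketch below is a minimal illustration of that idea, assuming NumPy; the `truncate` helper, the latent size of 512, and the zero-valued placeholder slices standing in for actual generator output are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def truncate(w, w_avg, psi):
    """Truncation trick: pull a latent w toward the average latent w_avg.
    psi = 1.0 leaves w unchanged; psi = 0.0 collapses it to w_avg."""
    return w_avg + psi * (w - w_avg)

rng = np.random.default_rng(0)
w_avg = rng.normal(size=512)       # assumed 512-dim latent space
w = rng.normal(size=512)

# Sweep psi to get a serialized sequence of latents between w_avg and w.
latents = [truncate(w, w_avg, psi) for psi in np.linspace(0.0, 1.0, 8)]

# Placeholder for generator output: one grayscale section image per latent.
# In practice each slice would come from the trained StyleGAN generator.
slices = [np.zeros((1024, 1024), dtype=np.uint8) for _ in latents]

# Stack the 2D slices along a new axis to form a voxel volume.
volume = np.stack(slices, axis=0)  # shape (8, 1024, 1024)
```

Generating 1024 slices instead of 8 would yield the full 1024 × 1024 × 1024 volume described above.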
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.