Proceeding Paper

Advantages, Critics and Paradoxes of Virtual Reality Applied to Digital Systems of Architectural Prefiguration, the Phenomenon of Virtual Migration †

Department of Architecture, University “G. D’Annunzio”, 65122 Pescara, Italy
Presented at the International and Interdisciplinary Conference IMMAGINI? Image and Imagination between Representation, Communication, Education and Psychology, Brixen, Italy, 27–28 November 2017.
Proceedings 2017, 1(9), 915; https://doi.org/10.3390/proceedings1090915
Published: 17 November 2017

Abstract

This research analyses cross-disciplinary methodologies related to the current adoption and dissemination of new VR tools in architectural design, through an in-depth study of topics such as the relatively smooth learning curve of the software, the degree of emotional and narrative engagement afforded by a well-calibrated digital medium, and, above all, the issues raised by intensive use of such tools, such as the lack of collective testing and possible isolation from the real world. The case study compares the multilevel fruition of a VR environment built in Unreal Engine with the Oculus Rift viewer and on the experimental EyeCad VR platform, a solution designed specifically for architects, documenting how the reconfigured space combines multiple stages of investigation into a vision of objective synthesis and represents, with all its limitations and paradoxes, a powerful and versatile medium of constantly evolving expression.

1. Introduction

The new immersive criteria offered by the active use of HMD virtual devices, in relation to editing and design methodologies, have allowed a stronger empathic link between the virtual visitor and digitally reconfigured spaces, surpassing, thanks to their fluidity of execution and simplicity of use, any other system of interaction with simulated environments. The progressive emergence of such devices in the workplace, and in particular in architectural design, makes it increasingly important to examine the benefits and critical issues associated with their use, in order to improve their usability and expand their adoption in ever more specific areas. Immersion and interactivity are the two peculiarities that synthetically define the functionality of virtual apparatus and, in a more general sense, identify the advances of the Third Digital Revolution under way in recent years: the relatively rapid passage from vector representation in CAD to a three-dimensional reconfiguration that can be manipulated on the desktop, paralleled by the evolution of rendering, which has become increasingly photorealistic thanks to algorithms suited to simulating luminous phenomena and reproducing materials with physically correct behaviour, until reaching a dimension in which it is now possible to explore, and thus intervene from within, the virtual scenario through stereoscopic viewers, obtaining a perception very similar to the real one by stimulating not only sight and hearing but, thanks to special haptic suits, acting directly on the tactile sense (Figure 1). It is interesting to note that interactivity is a quality that has always configured architectural space in relation to its functional distribution and to the uses the environment is able to offer the user, defining architecture precisely in terms of usability, functionality and interactivity. The concept of an interactive environment is therefore not necessarily linked to technological evolution if it is related to the much broader idea of Architecture in its particular relationship with man: architecture has always defined itself in relation to natural elements such as solar radiation and atmospheric agents, the conformation of the land and of neighbouring buildings, and, internally, with respect to the activities, movements and habits of its users. All of these factors tend to affect the "Shape" far more than the designer's taste or the style of the culture of the time. Focusing on how interaction with human needs can affect the definition of architectural space helps in understanding how the perceptive and interactive component, even in a virtual connotative environment, can actually promote that scenario to an innovative and effective form of architectural spatiality, one that can only be grasped through external instruments, yet is defined in all respects as Architecture not because it exists in the real dimension but because it "interacts" with man. In the virtual dimension it is therefore necessary to find the same characteristics as in a real environment, with the user able to intervene not only on the objects contained in the scenario but also, in the design-editing process, on the macro-components that compose it.
This requirement may perhaps be considered the driving force behind the development of these representation systems: tools that move beyond a purely representative purpose to intervene seamlessly on personal creativity and on the possibility of spatial metamorphosis according to the designer's architectural taste. The other distinctive characteristic of virtual devices is Immersion, that is, the ability to fit into a closed context, distinct from the real dimension, through a relationship that does not depend strictly on the degree of similarity between the virtual reconfiguration and the physical world, but on whether the simulation is realistic enough to sustain the experience over time. The virtual evolution of the very concept of spatiality thus coincides with a substantial transformation in the perception of contemporary spaces, coupled with factors relevant to Information Technology, which today characterizes, in a widespread and transversal manner, many of the attitudes of contemporary culture, shaping a connotative landscape that tends to overlap the real one and identifies itself, in its innumerable changes, as a dynamic interconnection between alternative levels of existence. The article therefore provides an in-depth study of the possible phenomenon of "virtual migration", which can be hypothesized concretely once virtual devices are integrated with the new hyper-fast Internet systems, capable of amplifying media coverage, intervening drastically above all in social and global terms, and triggering a process of massive colonization of cyberspace. This event could affect the everyday life of the real world, including the relationship with the existing built environment, positively but in some cases also unfavourably, in relation to phenomena of abandonment of certain building types or changes in the way the built environment is conceived at a functional level.

2. The Third Digital Revolution: Evolution and Development of Virtuality Applied to Representation

Today it is unthinkable to conceive of an existential plane in which the real and the virtual are two separate, unrelated spheres: we have grown accustomed to such a strong interconnection between them that the differences and boundaries that once distinguished them have dissolved and become unrecognizable. Quoting L. Sacchi, if it is true that without the real there would be no virtual, it is just as true that without the virtual there would no longer be the real to which we have become extensively accustomed in recent years [1]. The dynamic interconnection between these two levels of existence operates through concrete dimensional portals that let us move freely from one world to the other via an increasingly common process: access to Augmented and Virtual Reality. A brief historical excursus is needed to clarify their great popularity and consequent almost pandemic diffusion, paradoxically promoted by ever-growing human desires dictated by fantasy and long expressed in the science-fiction imagery of films and comics. Although relatively short, the evolution of interactive virtual environments and their applications appears complicated in the light of the deep involvement of various technologies, from hardware and software to mechanical engineering and, more generally, 3D graphics and rendering. Selecting the most relevant devices, ideas and events, one term fully identifies this cyber revolution: the concept of cyberspace. The term, coined by William Gibson in 1984, appears for the first time in the novel "Neuromancer" to illustrate an imaginary world within a data space on a global digital network. Cyberspace therefore indicates the space of computer data, in other words a connotative dimension belonging to the intangible yet perceived in the same way as the real one. Emotional involvement is the essence of cyberspace, in relation to how a data space can affect the human senses as the perceptual reflection of reality in the virtual sphere. Regarding the evolution of VR instrumentation, interesting results have been achieved within a few decades, thanks also to the scientific and technological evolution supporting these systems. The most significant aspect is surely the illusory component that makes users feel inside a synthetic world, rather than merely perceiving images and events as in a film.
The earliest developments to produce such an effect were stereoscopic views, holographic stereograms and 3D cinematic experiments with large-screen projection. In a 1950s project called the Experience Theater, Morton Heilig introduced the prototype of a first virtual environment, still a mechanical one, built in 1962 under the name "Sensorama" (Figure 2). This device was based on a predetermined perceptual model that did not allow any personalization of the experience, thus lacking interactivity alongside its immersive features [2]. The head-mounted display (HMD) is a stereoscopic binocular device capable of sending different information to each eye; the first true virtual viewer was created by Ivan Sutherland, also the creator of Sketchpad, the precursor of today's point-and-click systems: at MIT in 1966 the researcher designed the "Sword of Damocles", whose imaginative name derives from the fact that the machine was suspended from the ceiling by a number of cables connecting the HMD viewer to the computerized system. The device recorded the movements of the user's head, returning the exact perspective to the simulated stereoscopic vision. The first hypermedia virtual system was built at MIT in the late 1970s and named the "Aspen Movie Map": a virtual reconstruction of the city of Aspen, Colorado, which the user could explore in three different modes, summer, winter and polygonal wireframe. In 1979, NASA adopted the Polhemus "3SPACE", a system of sensors able to detect the location and orientation of points in space, a useful tool for creating complex and realistic virtual environments and a future incentive for the development of polygonal modeling, first in the commercial field of entertainment and then in scientific use across the most diverse sectors. In 1982, Thomas Furness developed the "Super Cockpit", a flight-simulation system designed to assist the training of high-speed aircraft pilots. In 1992, Tom DeFanti and Carolina Cruz-Neira conceived the "CAVE System", one of the most advanced and sophisticated virtual reality displays of its generation: with a projection screen divided into three or more sides, varying according to use, participants could be surrounded by high-resolution projected images and interact with them through the first motion sensors.
Virtual reality does not involve only devices addressing sight and hearing; the hope is to include the participation of the other senses as well, although this experimentation proceeds more slowly than the evolution of visual simulation [3]. An important aspect of this technology is the possibility of offering users sensations of pressure and consistency, strictly aimed at interacting with various objects and materials. A significant example was developed in 1968 by Frederick Brooks, of the University of North Carolina, to manage radioactive materials through a remote manipulator device simulating force feedback. In 1980, Richard Feldman developed a "joystring" capable of generating adequate pressure levels during manipulation, but the tool was very expensive and usable only in certain fields. Only recently, thanks to the commercialization of virtual interactive simulation tools, has this line of research regained momentum, with solutions being developed to recreate artificial haptic sensations alongside the ever-evolving visual simulation. The "Oculus Rift" project started from a 2012 Kickstarter fundraising campaign, undertaken online by a group of young researchers led by Palmer Luckey, demonstrating that costs could be cut while maintaining quality in the creation of an HMD virtual instrument. We are experiencing a virtualization-based phenomenon within workflows that are very dissimilar from one another, together with the creation and dissemination of new professional roles, specific to device testing, to the creation of apps usable on smartphones and tablets, but above all to the architectural and artistic realization of scenarios consisting of buildings, cities, trees, animals, creatures driven by artificial intelligence or avatars impersonated by other human users, universes infinitely more expanded, complex and realistic than in the past, whose creation requires an ever higher level of competence. Another type of contemporary application, based primarily on holograms and hence on AR, is Microsoft HoloLens. Still too expensive to become a common object, it promises to revolutionize the market, radically transforming the way of interacting with the real world: unlike CAVE VR screens it is a real computer in the form of a viewer, equipped with advanced sensors, relatively light and above all without any connecting cable, ensuring maximum freedom. Following the principles of augmented reality, or better, mixed reality, HoloLens blends real spaces and contents with virtual spaces, portions of space and 3D objects that can be manipulated interactively or modified digitally. According to recent studies, the common findings are the extreme perceptual realism between the hologram and the real elements, the lack of side effects such as nausea, the fact that users can continue to see the surrounding reality, and a perfect overlapping of the elements, even complex ones, digitally reconfigured. Immersive devices, whether based on VR technology such as the Oculus Rift or on holographic AR such as HoloLens, have the advantage of allowing a very high sensory involvement for the user-visitor: without following predefined paths, as happens in a cinematic video, the visitor can change the exploratory journey in real time, intervening with unprecedented actions.
The VR system also allows communication between interlocutors from different cultural backgrounds by linking "knowing" and "experimenting" with the topic addressed: knowledge is acquired by experimenting and learning, thanks to a stronger emotional narrative able to fix notions in the deepest memory through images. The experiences analyzed make evident the potential for change offered by these innovative design methods, which are certainly still immature for a capillary and complete overlap with common technical habits, but which are already capable of major advancements in compositional intervention on Architecture (Figure 3).

3. Towards a Non-Physical Migration: The Paradox of Experiencing Reality through Virtual Instruments

As part of the debate on strategies for expanding the use of 3D models in functional design within the most consolidated phases of architectural composition, freeing them from being considered mere tools for the representation and communication of the project, more and more new systems and applications, ranging from 3D photogrammetry to real-time rendering, show how a fully exploited digital model, an architectural system, or a more articulated and complex scenario such as a city can be brought onto a virtual exploration platform. This somehow generates a "virtual migration", or more generally a virtual cloning operation, which identifies the dual possibility of transferring users as avatars into the digital dimension and of transferring copies of scenarios from the real world to the virtual one. It draws data specifically related to architecture into this passage, bringing a number of advantages and disadvantages connected with an upheaval of the natural order of intervention on the real world, whether the operation is the virtualization of an existing entity or a creative design action. Assuming involvement in a Virtual Survey operation as a cognitive intervention on existing architectural fabric, the main prerogative of this alternative lies in the possibility of non-invasive investigation, resulting from a preliminary acquisition of point-cloud data from laser scanners or 3D photogrammetry, together with the opportunity to intervene on spatial modification without affecting the original structure. This creates an alternative in line with current trends in environmental sustainability and in energy and economic savings, which tend to intervene in a cross-disciplinary manner on the recovery of architectural artifacts affected by serious problems of abandonment or inaccessibility. In addition to the transfer, through cloning, of real architectural elements into virtual domains, the migration of professions is even more interesting: professionals use cyberspace as a virgin territory to colonize, through which it is possible to reach any place without limitations of time or space, reaching external users and clients and providing services accessible through virtual devices. This trend is surely a benefit, both in terms of simplification of work and in terms of developing new forms of professionalism alongside architectural design, such as specialized programmers, modelers, virtual architects or even archaeologists specialized in the analysis and study of digitally cloned artifacts.
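As a purely illustrative sketch of the acquisition step described above, the following Python fragment shows how a point cloud obtained from a laser scanner or from 3D photogrammetry could be downsampled and reconstructed into a triangle mesh ready to be taken onto a virtual platform. It is a minimal example under stated assumptions, not the procedure actually used in this study: it assumes the open-source Open3D library and a hypothetical scan file named "scan.ply".

# Minimal sketch, assuming the open-source Open3D library (pip install open3d)
# and a hypothetical laser-scan / photogrammetry point cloud "scan.ply".
import open3d as o3d

# Load the acquired point cloud and thin it out to keep the data manageable.
pcd = o3d.io.read_point_cloud("scan.ply")
pcd = pcd.voxel_down_sample(voxel_size=0.02)  # 2 cm voxel grid (assumed)

# Surface reconstruction needs per-point normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Poisson reconstruction turns the cloud into a triangle mesh (the "virtual clone").
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
mesh.compute_vertex_normals()

# Export in an interchange format that real-time platforms can import.
o3d.io.write_triangle_mesh("virtual_clone.obj", mesh)
print(f"reconstructed mesh with {len(mesh.triangles)} triangles")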
The state of the art of this new concept of digital representation thus implies, in its immateriality, some questions that do not exist in the analogue world, specifically related to the use of virtual visualization techniques and to access to three-dimensional models, both in the exploratory phase of virtual colonization and in editing. The involvement of the VR system in design also extends a feature already inherent in the concept of interactive 3D: the user can operate from within a virtual scenario, amplifying his possibilities, re-evaluating his design proposals in real time, exploiting the new virtual control pads and thus generating matter, evaluating and experimenting with different sets of solids and voids, molding polygonal mesh elements to make them functional for a specific purpose, and changing materials, weather or lighting conditions. With a virtual tool, a designer vastly increases the chances of realizing his ideas, going beyond the tracing of a two-dimensional drawing on paper and overcoming the limited cultural convention that a 3D model can only be generated after a 2D drawing [4]. With the implementation of virtual reality, even more surprising results can be expected. Virtual environments offer the ability to "experience" the space created by the computer. By applying virtual reality in a design studio, architects can understand the spatial qualities of their designs instantly, immediately comprehending the components of the structural system, visualizing the color and consistency of the hypothesized materials, experimenting with modules and proportions of space, and finally appreciating the general aesthetics of the compositional elements (Figure 4). Adherence to VR will make it possible to express and realize ideas that are difficult to represent with traditional means, through a fundamentally revolutionary way of interacting with computers, but at the same time it raises issues that might become generalized, linked transversally to the cultural preservation of projects. While in the past ideas, surveys and, more generally, archival data were transferred and handed down on paper, which could be accessed directly without any instrumentation, today all the phases of design and construction, beyond the final project, are transferred to virtual digital media: the experience certainly flows smoothly, but without any recording of the actions performed on the work one risks losing the cognition of the compositional synthesis, and all the data stored on 3D virtual digital supports can be lost in the long run owing to the evolution of software and devices and their potential future technical incompatibility. This does not mean that the virtual reality experience cannot be adopted in design, but it is necessary to complement these methodologies with traditional design operations, for archival and conservation purposes, in all design stages, as is still done for the final drawings. It is hoped that, in conjunction with the new BIM control and supervision technologies and with the increasingly widespread development of tablet, smartphone and PC communication systems, virtual reality can be used as a useful tool for representing the final project and above all, like CAD tools, for speeding up and simplifying the actual construction process, which today is structured in multiple and difficult phases.

4. Fluid Workflow: An Experimental Application Methodology for the Software Management of Interactive and Explorable Digital Scenarios. Results and Criticism

The digitization of architectural space aimed at interactive exploration requires careful planning of some complex, concatenated elaborative phases, necessarily included in a specific workflow through which the compositional elements are generated and then transferred onto computer platforms suited to immersive virtual tours. Knowing and adopting a weighted method greatly simplifies the three-dimensional modeling approach, optimizing the export of geometric assets to VR systems and reducing the issues that often block the workflow and force continuous corrections of the previous phases. Regardless of the actual target of the virtual cloning and migration, that is, whether the operations relate to a survey of the existing or are aimed at architectural design, two executive methodologies were followed, starting from the same phase of building-element construction and arriving at the experimentation of two different software platforms commonly associated with the Oculus Rift: Unreal Engine 4.15, one of the most widely used virtual platforms in many professional and artistic fields, which offers a wealth of advanced features and includes computing scripts and programming-language bases, and EyeCad VR, a platform specifically aimed at architects, which stands out for its simplicity of use, precisely the quality behind its diffusion in architectural offices where architects, not programmers, work. The case study thus demonstrates the importance of following a precise simplification methodology in structuring the many execution phases, highlighting their progressive ease of execution and their completion speed. It was chosen to reconstruct Le Corbusier's famous Villa Savoye (Figure 5), starting from graphic material, plans, elevations and sections easily found in online architecture archives. Polygonal modeling, easily editable at any time, was used for almost all of the work, partly integrated with NURBS modeling for some structural elements and portions, more suitable for better numerical control of the scale of the elements [5]. Specifically, Rhinoceros 3D (NURBS modeling) and subsequently Cinema 4D (polygonal modeling) were used, where UV mapping was carried out for the subsequent texturing. Any polygonal optimization needed for an ideal export to the virtual platform was carried out in ZBrush, a program renowned in art, film and video-game production for its versatile 3D sculpting, automatic retopology and colorpainting features for photo-based texture generation. Departing largely from the graphical editing methodologies of pre-calculated cinematic sequences and static images, the technical process for obtaining an interactive result requires special arrangements, first of all the control of polygonal density, balancing the optimization level of assets and digital props, which must be lightweight in order to facilitate navigation and not weigh down the subsequent real-time calculation phases: in this respect, the Le Corbusier building was ideally decomposed into more than 30 elements, cladding, load-bearing parts, furnishing items, etc., treated separately for almost the whole pipeline, keeping the meshes at low polygonal density, and then assembled at the end directly on the virtual exploration platform.
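The polygon-budget control mentioned above can be illustrated with a small, hypothetical batch script. Assuming the semantically separated parts of the building have been exported as individual OBJ files (the file names and the per-part triangle target below are invented for the example, and Open3D is used only as a convenient stand-in for the tools named in the text), each part is decimated to a low polygonal density before being assembled on the virtual exploration platform.

# Illustrative sketch only: batch decimation of the semantically separated
# building parts before import into the real-time platform. File names and the
# target triangle count are assumptions, not the values used in the case study.
import open3d as o3d

PARTS = ["envelope.obj", "pilotis.obj", "ramp.obj", "furniture.obj"]  # hypothetical
TARGET_TRIANGLES = 20000  # per-part polygon budget (assumed)

for path in PARTS:
    mesh = o3d.io.read_triangle_mesh(path)
    before = len(mesh.triangles)
    # Quadric decimation lowers polygonal density while preserving the overall shape.
    mesh = mesh.simplify_quadric_decimation(
        target_number_of_triangles=TARGET_TRIANGLES)
    mesh.compute_vertex_normals()
    o3d.io.write_triangle_mesh(path.replace(".obj", "_lowpoly.obj"), mesh)
    print(f"{path}: {before} -> {len(mesh.triangles)} triangles")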
The semantic partitioning process, albeit long, provided a good organizational structure for the building mesh work, allowing more stable tests on the many elements involved, with the ability to return to each of them to correct and verify its configuration. The step of adding digital detail to the model, combined with the creation of bump and displacement maps, was carried out by exploiting the power of the voxel algorithms of the dedicated software, as was the retopology of some elements that had lost definition in the transitions between the various programs, using the versatile ZRemesher tool offered by ZBrush. The same software was also used to build some photo-based texture maps of the existing artifacts, in order to obtain a final result corresponding as closely as possible to the original. In this regard, ZAppLink was used, a very easy-to-use plug-in that connects the sculpting program with Adobe Photoshop, working through the projection of photographs and photoplanes onto the model. At this point, once all the semantic digital elements of the building had been configured in 3D, the model was recomposed on the two virtual platforms compared for experimental purposes in the case study, Unreal Engine and the lesser-known EyeCad VR.
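The idea of encoding fine surface detail in bump or normal maps, handled in the case study through ZBrush, can be sketched in a few generic lines of Python. The fragment below is not the ZBrush procedure: it simply derives a tangent-space normal map from a hypothetical grayscale height image ("detail_height.png"), assuming numpy and Pillow are available.

# Generic illustration, not the ZBrush workflow used in the paper: deriving a
# tangent-space normal map from a grayscale height (bump) image.
import numpy as np
from PIL import Image

# Hypothetical input: a grayscale height map, rescaled to the range [0, 1].
height = np.asarray(Image.open("detail_height.png").convert("L"),
                    dtype=np.float32) / 255.0

# Finite differences of the height field give the surface slope in x and y.
dy, dx = np.gradient(height)
strength = 2.0  # assumed exaggeration factor for the relief

# Build and normalize per-pixel normals, then remap from [-1, 1] to [0, 255].
normals = np.dstack((-dx * strength, -dy * strength, np.ones_like(height)))
normals /= np.linalg.norm(normals, axis=2, keepdims=True)
normal_map = ((normals * 0.5 + 0.5) * 255).astype(np.uint8)

Image.fromarray(normal_map).save("detail_normal.png")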

4.1. Using the Epic Games Unreal Engine Platform

The management of simulated illumination effects and the graphic rendering of physically correct materials have made Unreal Engine 4, now completely free for academic research and non-profit projects, capable, in its latest releases, of a photorealism ever closer to that of the most famous, and notoriously advanced, pre-calculated rendering engines, such as V-Ray and Arnold Render. According to the latest Vision VR/AR 2017, the worldwide conference on the progress of virtual systems in computer graphics held in Hollywood for a few years now, one of the most famous unbiased GPU rendering engines of the moment, OTOY's Octane Render/Brigade, is being implemented within the software. It is able to use real-time Path Tracing, an algorithm similar to Global Illumination, according to which even shadowed parts retain a minimal amount of reflected and diffused light, further enhancing the realism of the graphics and achieving real-time visuals very similar to reality, with quick calculations and without the optimization and light-baking texture techniques currently used alongside the Ray-Tracing algorithm to manage the distribution of light. Unreal still uses other alternative systems, such as Ray Traced Distance Field Shadows and Distance Field Ambient Occlusion, to optimize the simulation, useful for softening sharp shadows and thus simulating rather large or nearby light sources, and Lightmass baked texturing, through which calculation times can be cut down and hardware performance issues properly resolved [6]. These graphic expedients exploit properties related to the geometric data of scenes imported from other programs in order to calculate shadows effectively and apply them to the meshes. During the shadowing phase, in which shadows are projected from a parallel or point light source, only the surfaces intercepted by the rays are considered, ignoring the calculation for all other parts of the 3D model. The softness of the edges of the generated shadow depends on the distance from the radiant element, and the quality of the shadow can be calibrated by increasing or decreasing the number of intercepting rays, involving only the light-affected pixels. The limits of this technique reside in the morphology of the model to be shaded: if a mesh is not uniform or is affected by poor tessellation, it can generate artifacts in the shadow areas. In the case study, all these graphic techniques were used to maximize the interactive visual experience without particular problems, thanks to the geometric linearity of the three-dimensional model and the use of very homogeneous textures. The FBX file, imported in pieces but with precise spatial coordinates, is automatically reassembled according to the original layout, while the effects of light and shadow are imprinted directly onto the previously defined textures, so that the lighting on the scenario does not have to be recalculated at each frame of movement. This baked-texture function for Lightmass, also known as "Build Light" (Figure 6), may require a few hours of calculation depending on the size and complexity of the 3D asset; once the light samples are generated, they are addressed specifically to a single defined portion of the model, in which all the refractions and reflections are calculated, so that they can be imprinted directly onto the texture.
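To clarify in principle what "baking" light into a texture means, the short sketch below pre-computes the diffuse (Lambert) contribution of a single point light for every texel of a flat floor patch and stores it as an image, so that at exploration time the lighting could be read from the texture instead of being recalculated at every frame. It has nothing to do with Unreal's actual Lightmass implementation; the light position, intensity and resolution are invented for the illustration.

# Conceptual sketch only: pre-computing ("baking") diffuse lighting into a
# texture for a flat, upward-facing floor patch. Not Unreal's Lightmass.
import numpy as np
from PIL import Image

RES = 256  # lightmap resolution (assumed)
light_pos = np.array([1.0, 2.5, 1.5])  # hypothetical point light (x, y, z)
intensity = 6.0
normal = np.array([0.0, 1.0, 0.0])  # the floor faces straight up

# World-space positions of the texels of a 4 m x 4 m floor lying at y = 0.
u, v = np.meshgrid(np.linspace(0, 4, RES), np.linspace(0, 4, RES))
points = np.dstack((u, np.zeros_like(u), v))

# Lambert term with inverse-square falloff, evaluated once per texel.
to_light = light_pos - points
dist = np.linalg.norm(to_light, axis=2)
cos_theta = np.clip((to_light @ normal) / dist, 0.0, None)
irradiance = intensity * cos_theta / (dist ** 2)

# Store the baked result; at run time the renderer would simply sample this map.
lightmap = (np.clip(irradiance, 0.0, 1.0) * 255).astype(np.uint8)
Image.fromarray(lightmap, mode="L").save("floor_lightmap.png")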
The hardware can then dedicate itself to the calculation of the much faster ray-tracing component, taking advantage of the graphics card's potential and making fluid exploration possible at a considerable number of frames per second [7]. The experiment carried out on Unreal Engine provided excellent graphical and visual results, although the compositing and light-management phases, despite the very clear and intuitive technical interface, proved somewhat demanding, and the node system used by the program to manage mapping, texturing and interactions requires at least a basic grounding in programming and is not yet within everyone's reach. As regards the Oculus Rift integration, the system responds well to head movements and to the positioning of the optical cone thanks to the motion sensor recognized in real time by the software; as a negative note, I noticed that turning the view too far downward causes a loss of the video signal, a behaviour that may be due to a calibration not performed correctly with respect to the scale of the scene and to hardware performance. Essentially everything works well enough and, since the last two releases, linking the virtual device has also become simpler and less convoluted. The final step is to compile and export the work to the most common external platforms compatible with Windows, macOS and PlayStation 4 systems.
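As a back-of-the-envelope aid to the fluidity remarks above, the following minimal sketch checks measured frame times against the refresh budget of an HMD; the 90 Hz refresh rate and the sample values are assumptions used purely for illustration.

# Minimal sketch (not part of the case-study pipeline): checking whether measured
# frame times stay within the refresh budget of an HMD. All values are assumptions.

def frame_budget_ms(refresh_hz=90.0):
    # Time available per frame, in milliseconds, for a given refresh rate.
    return 1000.0 / refresh_hz

def over_budget(frame_times_ms, refresh_hz=90.0):
    # Frame times that exceed the headset budget are candidates for judder.
    budget = frame_budget_ms(refresh_hz)
    return [t for t in frame_times_ms if t > budget]

if __name__ == "__main__":
    samples = [9.8, 10.5, 11.0, 14.2, 10.9, 12.3]  # hypothetical measurements (ms)
    print(f"budget per frame: {frame_budget_ms():.1f} ms")
    print("frames over budget:", over_budget(samples))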

4.2. Using the EyeCad VR Platform

Among the solutions on the market offering professional services for the creation and management of virtual architectural scenes, conceived to appeal both to developers and to architects while making the latter more independent of the former, is EyeCad VR, a software in constant development that incorporates an independent real-time graphics engine and introduces a very intuitive system for creating interactions without having to resort to technically specialized code writing and programming skills [8]. The interface is designed by architects for architects, in collaboration with digital artists and market analysts, aware of the basic issues inherent in the virtual representation of complex and scenographic architectural digital reconfigurations. Precisely because of these development features, the software stands among the most original proposals on the new frontier of virtual reality applied to the representation of architecture. At a functional level, the Trial version was tested; although it does not offer all the tools of the paid version, it gives a very clear idea of the software's potential. The simplicity of the interface, coupled with the dry parametrization of the visual effects, is also the application's biggest defect: it does not allow any in-depth management of the scene, offering only a few interesting customizations of materials and virtual cameras and, above all, an extreme simplification of the parameters related to the simulation of lighting phenomena, which ought to be widely configurable under any light and shadow condition. The implementation of HDR greatly improves chromatic quality, while the attempt to use a global illumination system without the support of baked textures leads to problematic behaviour on weaker hardware and annoying flickering in the darkest parts of the scene. Compared to Unreal Engine, which as we have seen offers full support for various expedients, it loses in terms of performance, both in the overall quality of the graphics and in the management of the frame rate, undoubtedly less fluid on EyeCad VR. Interesting, however, is the user-friendly connection of the Oculus Rift virtual device, which enabled integration with the scene with a single click, giving almost immediately the possibility of a virtual tour within the previously imported Villa Savoye model. Another positive note lies in the many digital formats that the program can import, making it possible to create a bridge with the most used CG applications, along with a convenient system for interpreting normal and flipped faces in the RTR display; it is a shame that in the specific case study this functionality did not work perfectly, leaving some polygonal gaps even during the virtual tour. As a general rule, EyeCad, considering its few years of development, can be considered a good program for approaching the world of virtual reality: in the immediate future giants such as Apple and Microsoft are expected to be involved in its development and technological implementation pipelines, which promises a clear improvement of the software, especially in its use on commercially deployed devices.

4.3. Comparison between the Two Analyzed Software Platforms

Beyond the good intentions of EyeCad and its extremely easy-to-manage interface, Unreal Engine appears to be a far superior instrument, offering many more features related not only to the visual simulation of lighting but to scene editing itself, resembling much more closely modeling and three-dimensional composition software such as 3ds Max or Cinema 4D (Figure 7). The well-balanced parameters of each element in the scenario allow not only the most elementary geometric and parametric transformations of 3D assets, but also offer interesting internal polygonal optimization, collision management, particle-editing functions for simulating mists, vapors or liquids, a good digital fiber simulation, a complex yet intuitive node system for managing materials and animated interactions, automatic UV-map creation and more. Unreal Engine 4.15 therefore proves to be a program usable both by first-time users and by experienced programmers, always delivering very good visual and graphical results. Negative notes, common however to both of the systems analyzed, lie in the impossibility of using multiple GPUs for the simulations, which would improve performance by increasing the frame rate viewable in rendering, and in the lack of a self-calibration system for the Oculus Rift based on the hardware actually available, which could increase the use of the virtual viewer in architectural editing, still somewhat uncomfortable despite the special ergonomic pads for interactive virtual object control already on the market. As for the economic aspect, both programs can be considered rather cheap, although the system produced by Epic Games, in line with its closest competitors such as CryEngine [9] and Unity [10], has made its virtual interface freely available for freelance work and scientific research, intelligently creating an expanding market of specific plug-ins, compliant 3D assets, preset scenarios and extra paid features accessible to more experienced users.

5. Conclusions

The rapid evolution of the functional features of virtual devices has generated a widespread response in the field of architectural representation and design, where the use of such innovative instruments is beginning to become habitual practice, inevitably bringing with it, as we have seen, a series of shadow points: social issues such as phenomena of mental destabilization in recognizing actual reality, individual disruption and isolation, and health problems such as ocular disturbances, known as eye strain, linked to a mass production launched after tests carried out on only a few thousand users not calibrated for all ages. These problems, still tied to the inevitably experimental phase of the first commercial samples, have already been partially resolved or are progressively being corrected, both from a strictly technological point of view, considering the evolutionary factor, and in the perception of virtual connotative space in correlation with habit. The main problem is the incongruity between the movement perceived in the virtual dimension, through which the explorative experience develops, and the actual staticity of the user: the brain cannot adapt to the simulated movement, generating sickness or dizziness. According to beta-test experiences across multiple applications, it has nevertheless been possible to define a kind of common learning curve, or rather a perceptive education, according to which a short but repeated period of permanence on fully immersive virtual platforms can progressively reduce this kind of disorder. There are numerous reflections on the use of the virtual and its implications in the practical and abstract field, not disconnected from the assumption of a change in the human modus vivendi in relation to the ever-greater combination of man and virtual instrumentation. It must be said that doubts and uncertainties have always accompanied every technological evolution, as already happened with the advent of the web, which progressively expanded until it surpassed the coverage density of any other infrastructure system and which, until some time ago, was believed capable of leading to the extinction of other forms of cultural communication such as paper books and film, or with the use of mobile phones and later smartphones and their apps, feared as a source of negative social influence. Time has denied such pessimistic outcomes, recognizing the value of these technologically advanced tools as an amplification of human capabilities in support of everyday life and especially of sociality. For the new systems the welcome seems much warmer and the times of techno-disclosure far faster, partly due to the spread of countless science-fiction novels, films and early scientific experiments which, since the end of the 1970s, have influenced progress in this direction, but above all thanks to the involvement of large-scale economic investments, driven by accurate market research, which have strongly influenced their development and distribution, so that already from their first releases they are capable of delivering excellent results in versatility of use.

Supplementary Materials

The following are available online at https://mjls.info/vr-architecture/. Figure S1: Ty Hedfan Virtual Reality in Architecture; Figure S2: A brief history of Virtual Reality, http://mycours.es/gamedesign2016/presentations/a-brief-history-of-virtual-reality/; Figure S3: https://www.dezeen.com/2016/05/25/virtual-reality-designing-architects-vrtisan-unreal-engine-htc-vive/, http://www.virtualrealityguide.com/history-of-virtual-reality, http://www.vaps.us/virtual-architect.html; Figure S4: Seattle high-rise to use Microsoft HoloLens for world's first holographic leasing center, https://www.geekwire.com/2016/seattle-high-rise-project-use-microsoft-hololens-worlds-first-holographic-leasing-center/; Figure S5: Le Corbusier Villa Savoye Residential Design at its conception, http://www.ecomanta.com/2011/01/le-corbusier-villa-savoye-residential.html; Figure S6: Portfolio official page by Dovydas Budrys, http://dovydasbudrys.artstation.com/projects/ZonGx.

Conflicts of Interest

The author declares no conflict of interest.

References and Notes

  1. Sacchi, L. Post Script: Città Virtuale come fuga dal Reale? In Atlante dell'Abitare Virtuale; Gangemi Editore: Roma, Italy, 2014; pp. 35–37, ISBN 9788849228298.
  2. The Sensorama, consisting of a seat, a stereoscopic TV screen developed in 1957 (STAIU) and an interactive handlebar, was basically a virtual driving simulator: by sliding the head into the screen and steering through a joystick, the user could have "an impression" of driving through the streets of Manhattan, even perceiving the smell of car exhaust and the wind on the face.
  3. In 1982, within scientific research on haptic interaction, Thomas Zimmerman and L. Young Harvill created the interactive Data Glove to play a virtual guitar. The instrument incorporated ultrasonic and magnetic hand-position tracking technology.
  4. Interactive intervention directly in the design editing phases allows the designer to focus more quickly on the geometric issues of the architecture displayed in 3D, which can now be viewed on computers from various angles and with impressive visual effects, simplifying the editing phases and the subsequent visual performance steps, thanks to the interactive nature of computer technology.
  5. Available online: http://www.archiga.it/progetti-architetti-famosi-villa-savoye-di-le-corbusier-piante-prospetti-e-sezioni/ (accessed on 10 August 2017).
  6. A lightmap is a data structure used in lightmapping, a form of surface caching in which the brightness of surfaces in a virtual scene is pre-calculated and stored in texture maps for later use. Lightmaps are most commonly applied to static objects in real-time 3D graphics applications, such as video games, in order to provide lighting effects such as global illumination at a relatively low computational cost.
  7. Introduction to Ray Tracing: A Simple Method for Creating 3D Images. Available online: https://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-ray-tracing/raytracing-algorithm-in-a-nutshell (accessed on 13 August 2017).
  8. Official Site of the EyeCad VR Software. Available online: http://www.eyecadvr.com/mondo-eyecadvr.html (accessed on 14 August 2017).
  9. Official Site of the CryEngine Software. Available online: http://www.crytek.com/cryengine (accessed on 12 August 2017).
  10. Official Site of Unity Software. Available online: https://unity3d.com/ (accessed on 10 August 2017).
Figure 1. Example of a 360° interactive virtual tour scene for the most common mobile VR viewers, such as the Samsung Gear VR and the VR ONE Plus. Image by Ivr Nation.
Figure 2. Sensorama, the first virtual immersion system: the technical drawing and the pictures shown at the presentation of the experimental product.
Figure 3. Use of current advanced VR technology, including an HMD viewer with pads for handling and editing elements, haptic devices and MOCAP tracking gloves.
Figure 4. Current use of a virtual exploration platform (HoloLens) in an architecture studio.
Figure 5. Analysis and reconfiguration phase from documentary data. (a) Villa Savoye, photos; (b) volumetric data of the project derived from Archweb.
Figure 6. Versatility of the configurations related to lighting phenomena offered by Unreal Engine 4.15. Experiment by Dovydas Budrys.
Figure 7. Comparison between the two analyzed software platforms. (a) Unreal Engine RTR interface; (b) EyeCad VR, intuitive interface, interaction management panel.