1. Introduction
With the gradual fading of urban growth supremacism, community planning that focuses on optimizing existing urban stock assets has come to the fore [1]. Compared with traditional urban planning and design, which focuses on newly developed areas, community planning must confront the limitations imposed by existing environments [1,2]. Furthermore, current community environments generally contain many fine-grained elements, such as old buildings, public facilities, street furniture, and landscapes, which are difficult to measure and integrate accurately into conventional base design drawings, making it hard for designers to form design judgments and decisions [3]. As a result, designers have gradually chosen to return to the design site to perceive and understand environmental details in depth and to imagine and develop design proposals on-site [4]. This on-site design method originates from the “back to the things themselves” concept of the phenomenological movement [5], which encourages designers to immerse their proposals in community environments in order to evoke the spirit of the place [6].
However, limited by available techniques, current on-site design in community planning relies mainly on the designers’ visual imagination [7], which is too abstract to support intuitive design perception, accurate design judgment, and convenient design decisions, and therefore often leads to tedious, repetitive work [8].
To address these limitations in design intuitiveness, accuracy, and convenience, scholars have been committed to introducing rapidly developing computer graphics (CG) technologies. Virtual reality (VR), a typical computer-generated imagery technology, was the earliest of these to be applied in the urban planning and design fields [9]. By assembling individual physical devices such as personal computers (PCs), head-mounted displays (HMDs), display screens, cameras, and sensors, early VR design support systems (VR-DSS), such as GeoWalls, vision domes, and CAVEs, generated immersive virtual environments indoors, allowing designers to make intuitive spatial judgments and exercise their design imagination [10]. Meanwhile, 3D digital design objects were rendered for visualization in these immersive virtual environments [11]. Improvements in computer performance endowed the immersive virtual environments with more vivid details, including sound and animation [12], and the introduction of tangible interactive interfaces allowed designers to interact with the virtual world in real time [13,14]. Nevertheless, the immersive virtual environments generated by VR were detached from the physical world and rarely provided real spatial perception [15]. Additionally, creating models for immersive virtual environments costs considerable time and effort and still might not reproduce all the real environmental details, offering scant support for accurate design judgment [8]. Moreover, VR-DSS generally involve expensive and complicated devices that are usually operated in professional scientific laboratories and are not available to most designers [14,16].
In summary, VR technology has the potential to improve on-site design intuitiveness because it can simulate vivid existing environments. However, simulating the existing environments requires considerable effort and hardly contributes to improving design convenience. Additionally, the simulated virtual environments cannot support real spatial perception and thus do little to improve design accuracy.
Subsequently, a more advanced computer-generated imagery technology, augmented reality (AR), was introduced into the urban planning and design fields to compensate for the shortcomings of VR. AR can create environments in which digital information is inserted into a predominantly real-world view [17]. With portable devices, such as smartphones and mobile tablets, AR can bring virtual design objects into real design scenes for visualization [18,19]. Additionally, portable devices can serve as a tangible input interface that provides real-time manipulation behaviors (e.g., moving, rotating, and zooming) for designers to adjust their virtual design objects in AR environments [4]. At the same time, combined with remote computing, real-time environmental analyses (such as wind, light, and heat) of the virtual design objects can be rendered in AR environments [8,20]. However, since the 2D display on the screen of a portable device barely describes real 3D spatial relationships, it remained difficult for designers to form intuitive design perceptions, even when they returned to the design site [21]. Similarly, the input interface of the AR design support system (AR-DSS) on the portable device’s screen was relatively stiff; such interactions did not support flexible design actions, making it difficult for designers to make accurate judgments [21].
In short, by relying on portable devices, AR technology can bring virtual design objects into real design environments, thus avoiding repeated trips between the design studio and the design site and offering the possibility of improving on-site design convenience. Nevertheless, the visualization and interaction of virtual design objects are limited by the 2D screen display of portable devices, which hardly improves on-site design intuitiveness and accuracy.
Mixed reality (MR) technology combines the advantages of both VR and AR [22], since it can merge real and virtual worlds to produce new visual environments in which physical and digital objects coexist and interact in real time [23]. The rapid development of hardware, especially the advent of the Microsoft HoloLens (Microsoft, Redmond, WA), a head-mounted display, has made MR available to several professional fields. To date, MR has been used in the medical field for 3D visualization in surgical planning [24,25], in the aerospace field for simulating space exploration environments [26], in the tourism field for interactive acoustic and visual navigation [27], in the engineering field to promote communication and safety in on-site construction [28,29], and elsewhere.
Overall, these successful applications demonstrate some important technical characteristics of MR, such as 3D on-site visualization, natural control mode (based on gaze and gesture), accurate spatial mapping, and real depth perception, which offer the potential to improve the current on-site design intuitiveness, accuracy, and convenience.
This study intends to apply mixed reality (MR) to community planning to improve on-site design with regard to intuitive perception, accurate judgment, and convenient decisions. Thus, an on-site design experiment, using an MR design support system (MR-DSS) for a typical community planning and design scene, was performed to examine the effectiveness of MR.
4. Discussion and Conclusions
This study aimed to use mixed reality (MR) technology to improve on-site design experiences in community planning. To test its feasibility, we invited sixteen designers to participate in an on-site design experiment using MR for a typical community planning and design scene. The results confirm that MR can significantly improve the intuitiveness, accuracy, and convenience of on-site design in community planning.
Specifically, MR can create interactive visualization environments in which life-like 3D virtual design objects are displayed in the real world. In MR environments, designers can therefore not only understand specific community environmental details but also visualize potential design objects and make further design deductions intuitively. MR environments present true spatial relationships between virtual objects and the real world, which helps designers make accurate perceptions of position and judgments of size when imagining, deducing, and assessing their design proposals. At the same time, the vivid textures rendered by MR can aid designers in finding suitable design materials within the community context. With the current portable MR-DSS and brief usage instructions, designers can acquire basic design control skills in MR environments. Thus, with MR, they can immerse their design proposals in an actual community scene and make real-time design adjustments conveniently, instead of resorting to repeated observation, recording, sketching, and modeling between their studios and the design site.
Importantly, the results also show that although almost all the participants acknowledged that using MR improved the on-site design of community planning, those with rich design experience gave relatively low ratings in the interviews. More specifically, the experienced designers had cultivated mature design habits relying on their visual imagination through many years of planning and design practice. While they did experience the advantages of MR in this ten-minute on-site design experiment, it was difficult for them to change long-standing design habits in such a short time. They preferred the conventional on-site design methods, however tedious and complicated, to the unfamiliar new technology. In their opinion, MR might serve as an auxiliary to the current on-site design of community planning when necessary.
However, there are still some limitations to our study. First, regarding the hardware, we selected the Microsoft HoloLens, which was the most advanced MR head-mounted display at the time. Its technical characteristics (3D holographic visualization, natural control mode, accurate spatial mapping, and real depth perception) fit our study purposes well. Its weight (579 g), however, makes it unsuitable for wearing over long periods, and its short battery life (less than three hours) makes it less suitable for supporting sustained outdoor design work. Additionally, the narrow field of view (FOV) of the Microsoft HoloLens cannot present the whole MR environment, which can affect environmental perception and design judgment to some extent. Second, with regard to the software, because there were no suitable apps in the official Microsoft Store, we developed the application software, HoloDesigner, in-house. As we are not professional programmers, the appearance, fluency, and stability of HoloDesigner need further optimization. The software also offers only basic design functions (such as moving, zooming, and rotating), which barely support relatively complicated design work. Third, the design data used in the experiment were prepared beforehand, which might have limited the participants’ design imagination. Data preparation is a complex process that involves not only data creation, editing, and uploading but also some coding work. Once familiar with the MR-DSS, designers could prepare the corresponding design data for their specific design tasks by themselves. Fourth, in terms of the participants, some were from the same design institute. Thus, they might have already formed an attitude towards the new technology in their daily work before participating in the study, which might have affected their performance in the experiment.
In addition, the sample size was relatively small; therefore, the results might not represent the attitudes of all designers. Finally, due to the simple experimental tasks and limited hardware power, the on-site design experience time with MR for a single participant was only ten minutes. Participants therefore might have found it difficult to fully understand the MR technology and make reasonable judgments about it.
Future work will examine how MR technology can support design communication between designers and residents in community planning. We intend to further improve the current MR-DSS to achieve a smoother operation, friendlier control, and richer functions, and, most importantly, multi-user sharing services. We then hope to apply it to facilitate the deliberation process of actual community planning projects.