3.2. Methodology and Data
Our study followed four phases of work, encompassing primary data collection and analysis.
Phase 1: Developing a future fictional planning scenario—The project team partnered with the Borough of Glassboro to conduct a community design workshop and gather design ideas for a potential development scenario in the study area. A group of 46 attendees, including municipal officials, residents, business owners, and university students and faculty, discussed the opportunities, barriers, and key design considerations for a potential Arts and Entertainment District: (i) preservation of existing landmarks such as Angelo’s Diner and the Heritage Glass Museum; (ii) modification of existing features, such as relocating the public library for better visibility and access, adding a third floor to existing two-story retail/mixed-use buildings, and implementing green infrastructure such as pocket parks, street trees, and gardens on existing underused green/open spaces; and (iii) innovation, such as a new movie theater, a parking garage, and a landscaped pedestrian plaza with retail, dining, and entertainment options. Based on community feedback, two architects from the project team created a fictional planning scenario, including a site plan and 3D visualizations, using Google Earth, SketchUp, and ArcGIS 3D Analyst software.
Phase 2: Setting up VR simulations—We partnered with the Rowan University VR Center to create a VR version of the 3D models using Autodesk Maya and Google Earth software and to organize four focus group sessions. The VR Center houses a CAVE lab featuring a 7-foot-high by 40-foot-wide curved wall of eight adjoining screens, with room for up to 20 people to explore VR simulations. After creating a basic VR simulation, the project team added human figures; different modes of transportation (e.g., cars, vans, buses, school buses, fire trucks, and bicycles); vegetation (e.g., trees, plants, and vegetable/flower beds); and various street furniture and placemaking features (e.g., lamp posts, street banners, street vendors, seating arrangements, tables, fountains, gazebos, murals, and sculptures). We collected these models from free online libraries, so model quality varied. We intentionally kept some physical or sensory design flaws to see whether they evoked any reactions from participants. The exploration path in our focus group experiments was predetermined, and participants had no control over the path or speed of the exploration. We recorded a 7-min fly-through and walk-through simulation of the study area, based on a written script explaining the design proposals and narrating the walking path (see Figure 1 and Figure 2).
Next, we added auditory and olfactory stimuli. One project team member recorded an audio story following the script; another recorded a song while playing a guitar, a 30-s sound bite for an animated singer performing on the center stage of the pedestrian plaza. We also added ambient sounds, including restaurant chatter in the open-air dining area, an ice cream truck jingle, a lawnmower, a dog barking, traffic sounds near the intersection, and a movie reel near the theater. All sound bites were timed and attached to appropriate locations in the VR simulation. The auditory stimuli were played through the CAVE lab’s surround sound system for an immersive sensory experience. Two distinct olfactory stimuli, freshly cut grass and buttered popcorn, were designed to be released during the 4D IVR simulation, cued with visuals of an outdoor grassy area and the movie theater, respectively. The olfactory cues were introduced via spray bottles placed approximately eight feet high, out of sight of the focus group participants. We finalized three simulations for the focus group sessions: (i) a 7-min recorded 2D video of the 3D simulation with an oral presentation but no IVR (“2D video of 3D simulation”); (ii) a 7-min recorded 3D IVR simulation with the audio story only (“3D IVR simulation”); and (iii) a 7-min recorded IVR simulation with additional auditory and olfactory stimuli (“multi-sensory 4D IVR simulation”). Among these three, the first—a 2D video of a 3D simulation accompanied by an oral presentation—is a commonly used community engagement tool in the USA for demonstrating proposed development scenarios [1]. We compared this “standard” method with the relatively uncommon 3D IVR and 4D IVR simulations.
Phase 3: Conducting focus group sessions—The project team, with the help of municipal officials, recruited participants via email invitations, website announcements, and social media posts. We formed four focus groups with ten participants each and hosted four separate sessions on weekdays from 6 to 8 p.m. in the spring of 2018. Focus Groups 1, 2, and 3 experienced the 2D video of the 3D simulation, the 3D IVR simulation, and the multi-sensory 4D IVR simulation, respectively. Focus Group 1 served as a comparison group: it did not experience any IVR simulation, and its findings were compared with those from Groups 2 and 3. Group 4 experienced both the 2D video of the 3D simulation and the 4D IVR simulation. We allocated USD 30 gift cards to participants in Groups 1 to 3, and USD 40 to Group 4, as they attended two sessions. To keep the focus groups inclusive, it was important not to exclude any participant unable to attend a specific session on a pre-assigned date. Participants therefore chose their groups based on personal convenience and were not filtered or assigned into specific groups. Of the 40 participants, 55% were female and 30% were non-white. Participants’ ages ranged from 21 to 69, but 60% were between 20 and 30. The diversity of participants was consistent with Glassboro’s demographic characteristics. Participants’ familiarity with the study area was low for 5%, moderate for 52.5%, and high for 42.5%; their familiarity with VR technology was low for 5%, moderate for 90%, and high for 5%.
The session with Focus Group 1 included a verbal presentation (using PowerPoint) explaining the project and proposed design ideas; a Google Map of the study area for geographical context; a 2D video of the 3D simulation of the proposed planning scenario; guided activities and discussions; and an open feedback forum. We followed the typical steps taken by many US local governments in public meetings to present and discuss new development proposals. The other two focus group sessions followed the same procedure but replaced the 2D video with 7-min IVR simulations in the CAVE lab (see Figure 3a). The activities and discussions in all sessions were designed to collect data to answer our four research questions. First, we asked participants to draw a mental map [28] on a piece of paper from memory of the planning scenario they had experienced (see Figure 3b). Second, using sticky notes, participants documented the emotional responses they felt throughout the design simulation and adhered those notes to specific locations on their mental maps. Third, we asked participants to write questions—as many as they liked—about the content and visualization techniques of the simulation, as well as the purpose/logistics of the project. Finally, we engaged participants in an open conversation to discuss some of their questions and to understand their views on the strengths and challenges of IVR simulations. Focus Group 4 participants, who experienced both types of simulation, answered additional questions comparing the types, as well as on the effects of the auditory and olfactory stimuli in the multi-sensory 4D IVR. Two project team members took detailed notes of all conversations and counted the number of times participants engaged—by answering questions, asking questions, or offering comments.
Phase 4: Analyzing data and interpreting results—We performed content analysis and descriptive statistics on all the data collected from the focus groups (e.g., group discussion transcripts, written and verbal responses to our questions and prompts, mental maps drawn as sketches with notes, and emotions labeled on sticky notes). Using content analysis, communication artifacts were converted into coded categories based on a consistent and unambiguous coding rule [32]. A reliability analysis was performed to test whether different researchers would consistently recode the same data in the same way [33].
At the end of all focus group sessions, we compared the notes taken by the two team members as a reliability check and created a single transcription document. We deleted comments or questions unrelated to this study or its methodology (for example, a question about restrooms in the VR Center or a comment about guest parking spaces outside it). Next, all comments and questions were coded into three categories (content, visualization, and purpose/logistics); emotional words were coded into two categories (positive and negative); and characteristics of IVR technology were coded into two categories (strengths and limitations). One project team member performed the coding after developing the coding rules and categories; another member recoded the content by following those rules. After comparing the coding results, we calculated a percent reliability for each category: the number of cases identically categorized by the two coders, divided by the total number of cases in the dataset. The mean percent reliability ranged from 97% to 99%, high enough to indicate the dependability of the categories the original coder chose. Finally, descriptive statistics were used to calculate and present the total numbers of comments discussed, questions asked, design elements recalled, and emotional responses offered by each participant in each focus group session. We aggregated individual-level data at the group level and calculated the percentage change between Groups 1, 2, and 3, or between the two sessions of Group 4.
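Both calculations above reduce to simple proportions. A minimal sketch of the percent-reliability and percentage-change computations (with illustrative category labels and coded cases, not the study's actual data) might look like:

```python
def percent_reliability(coder_a, coder_b):
    """Share of cases (%) assigned the same category by both coders."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

def percent_change(group_1, group_2):
    """Percentage change in an aggregated group-level count."""
    return 100 * (group_2 - group_1) / group_1

# Illustrative codes for five comments from two independent coders.
coder_a = ["content", "visualization", "purpose", "content", "visualization"]
coder_b = ["content", "visualization", "purpose", "content", "content"]

print(f"{percent_reliability(coder_a, coder_b):.0f}%")  # 4 of 5 cases agree -> 80%
print(f"{percent_change(10, 15):.0f}%")  # e.g., 10 vs. 15 comments -> 50%
```

In the study itself this was computed per coding category and averaged; the sketch shows a single pooled calculation for brevity.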