One View Is Not Enough: Review of and Encouragement for Multiple and Alternative Representations in 3D and Immersive Visualisation
Abstract
1. Introduction
2. Background: Understanding 3D, Research Questions and Vision
3. Historical and Key Developments
4. Multiple Views, Dataflow and 2D/3D Views
4.1. Understanding Data and Generating Multiple Views
4.2. Mapping 3D Data, and Mapping Data to 3D
4.3. Display and Interaction Technologies
5. Case Studies
5.1. Case Study—Cultural Heritage Data
5.2. Case Study—Oceanographic Visualisation
5.3. Case Study—Immersive Analytics
5.4. Case Study—Handheld Situated Analytics
5.5. Case Study—Haptic Data Visualisation
6. Summary and Lessons Learnt
- Make a plan—perform a design study on your immersive visualisation before implementing it (e.g., through sketching). Developing data visualisation solutions is time-consuming, whether or not they are immersive or contain 3D models. Developers need to make sure that their solutions are suitable and fit for purpose. For any visualisation project, the developer should perform a deep analysis of the data, the purpose of the visualisation, and the way that it will be presented. It is far less time-consuming to sketch ideas, or to develop a low-fidelity prototype, than to complete a full implementation and then realise that it is not fit for purpose. Outline sketches help to confirm ideas, which can then be evaluated with real users. We use the Five Design-Sheets (FdS) method, which leads developers from early potential ideas, through three possible alternative designs, to a final realisation concept [39,70]. The FdS has five panels (five areas of the sheet) dedicated to considering the design from five different viewpoints: a summary of the idea (first panel), what it will look like (sketched in the Big Picture panel), how it operates (discussed in the Components panel), its main purpose (in the Parti panel), and the pros and cons of the idea (in the final panel).
- Understand the purpose of the visualisation. All visualisations have a purpose: there is a reason to display the data and present it to the user. Perhaps the visualisation is to explain something, or to allow the user to explore the data and gain new insights. It is imperative that the developer knows the purpose of the visualisation, otherwise they will not create the right solution. Munzner [83] expresses this in terms of “domain problem characterisation”, and the developer needs to ascertain if “the target audience would benefit from visualisation tool support”. Most methods for understanding the purpose are qualitative: ethnographic studies and interviews with potential users can each help to clarify the situation and need. One method to clarify the purpose of the visualisation is to follow the five Ws method: Who, What, Why, When, Where and Wow [1]. Who is it for? What data will it show? Why is it needed? When and where will it be used? Answering these questions is important for deciding how the visualisation solution will address the given problem, how it fits with the goals of the developers and users, and how it could be created.
- Display alternative views concurrently. Alternative views afford different tasks. There is much benefit in displaying different data tables, alternative visualisation types, and so on, as this allows the user to see the information from different viewpoints. For example, from the heritage scenario (Section 5.1), we learn that each alternative 3D view helps with multivocality. The real standing stones in the field, or depicted virtually on a map, show the lay of the land. The rendered models show the deterioration of the heritage artefacts, and can be stored and compared with captured models from previous years. The physical models become tangible interfaces, and can be passed around a group to engender discussion. There are many possible approaches for achieving this: for instance, several views can be immersed inside one virtual environment, or different visualisations can be displayed across different devices. Gleicher et al. [33] express this idea in terms of views for comparison, while Roberts et al. [6] explore different meanings of the term ‘multiple views’, including juxtaposition, side-by-side and alternative views.
- Link information (through interaction or visual effects to allow exploration). Although displayed in different views, the underlying information is the same. Therefore, with concurrent alternative views, it is important to link information between these complementary views. Linking can be achieved by highlighting objects when they are selected, or by coordinating other interactions between views (such as scaling objects or concurrently filtering data). Many researchers have proposed coordination models of interaction, for instance, North and Shneiderman [84], Roberts et al. [5] and Weaver [85]. Typically, however, researchers have concentrated on side-by-side view displays; it is more complex to coordinate across display devices and display modalities. It is more challenging still to coordinate interaction between a tangible object and virtual ones; where there is no means to link them interactively, the view information should at least be consistent, with the same colours, styles, appearances, and so on. For instance, the Visfer system can transfer visual data across devices [86].
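Such linking is often realised as a shared selection model that every view observes. The following JavaScript sketch illustrates the idea of brushing in one view and highlighting in the others; the class and view names are illustrative, and are not taken from any particular toolkit:

```javascript
// Minimal sketch of a shared selection model for linked views.
class SelectionModel {
  constructor() {
    this.selected = new Set();
    this.listeners = [];
  }
  subscribe(fn) { this.listeners.push(fn); }
  select(ids) {
    this.selected = new Set(ids);
    // Notify every registered view so it can re-render its highlights.
    this.listeners.forEach((fn) => fn(this.selected));
  }
}

// Two hypothetical views share the same model: a selection made in one
// is reflected in the other ("brushing and linking").
const model = new SelectionModel();
const mapHighlights = [];
const chartHighlights = [];
model.subscribe((sel) => mapHighlights.push([...sel]));
model.subscribe((sel) => chartHighlights.push([...sel]));

// e.g., the user brushes two items in the 3D view:
model.select(['stone-07', 'stone-12']);
```

The same pattern extends across devices or modalities by replacing the direct callback with a network or message-passing channel.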
- Address the view occlusion challenge. One of the challenges with 3D, and especially immersive, visualisations arises when volumetric data are displayed, as it can be difficult to ‘see inside’. Volume visualisations create a 3D gel-like image, where transfer functions map different materials to colour and transparency [23]. Similar techniques can be used in immersive visualisation, where transparency allows the user to see through objects to others. Alternative strategies are to split objects into smaller ones, or to separate them from each other [46]. Other solutions include using shadows to help clarify what other viewing angles would look like (as used by Robertson et al. [53] in their cone trees 3D visualisation of hierarchical data). The survey by Elmqvist and Tsigas [87] provides a taxonomy of design spaces to mitigate and manage 3D occlusion.
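The transfer-function idea can be sketched in a few lines. The following JavaScript is a minimal, illustrative 1D transfer function; the control points, value range and material labels are invented for illustration:

```javascript
// Minimal sketch of a 1D transfer function, assuming scalar "density"
// values normalised to [0, 1]; control points are illustrative.
const controlPoints = [
  { value: 0.0, rgba: [0, 0, 0, 0.0] },       // e.g., air: fully transparent
  { value: 0.4, rgba: [255, 180, 120, 0.1] }, // e.g., soft material: faint
  { value: 1.0, rgba: [255, 255, 255, 0.9] }, // e.g., dense material: nearly opaque
];

// Linearly interpolate colour and opacity between the bracketing points.
function transfer(v) {
  for (let i = 1; i < controlPoints.length; i++) {
    const a = controlPoints[i - 1], b = controlPoints[i];
    if (v <= b.value) {
      const t = (v - a.value) / (b.value - a.value);
      return a.rgba.map((c, k) => c + t * (b.rgba[k] - c));
    }
  }
  return controlPoints[controlPoints.length - 1].rgba;
}
```

Each sample along a viewing ray is mapped through `transfer` and composited; making low densities transparent is what lets the viewer ‘see inside’ the volume.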
- Integrate tangibles (for interaction, to elicit different stories, and for inclusion). We used tangible visualisations in the heritage case study (Section 5.1) in three ways: as an interface device, to engender conversation and multivocality, and as a way to make the visualisation more inclusive. The 3D printed models of the heritage standing stones became interaction devices that were placed on the tabletop display; by adding QR codes, we were able to present descriptive information about the standing stones. They also became a ‘talking stick’, where a person can hold a tangible object and talk about their experiences: the person with the tangible is the speaker, and the object is passed around the group to share interpretations and accrue multivocal stories. Finally, we used haptics and tangibles to visualise information for blind and partially sighted users [65]. While no single comprehensive model exists, there are several relevant survey papers: Panëels and Roberts [13] review designs for haptic data visualisation, and Jansen investigates physical and tangible information visualisation [88].
- Make it clear where objects are located. Particularly if the user is immersed inside visual information, it is important for them to be able to navigate and see all the information. Make sure the user is aware that some data may have been visualised and displayed behind them. This could be achieved through navigation, by allowing the user to zoom out to see everything or to turn around, or by adding hints or arrows to explain that there is more information to the left or right. Leveraging proprioception, and the awareness that users have of themselves in the space, users can place and observe visualisation objects that surround them [89], and understand how to select those objects [90].
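One simple way to decide when to show such a hint is to test whether an object lies behind the user’s gaze direction. The JavaScript sketch below uses the sign of a dot product; vectors are plain `[x, y, z]` arrays and the function names are illustrative:

```javascript
// Decide whether to show a "more data behind you" hint, by comparing
// the user's gaze direction with the direction to the object.
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
function norm(a) {
  const len = Math.hypot(a[0], a[1], a[2]);
  return a.map((c) => c / len);
}

// Returns true when the object lies outside a generous 180-degree
// frustum in front of the user (negative dot product).
function isBehind(gazeDir, userPos, objectPos) {
  const toObject = norm(objectPos.map((c, i) => c - userPos[i]));
  return dot(norm(gazeDir), toObject) < 0;
}

isBehind([0, 0, -1], [0, 0, 0], [0, 0, -5]); // object in front → false
isBehind([0, 0, -1], [0, 0, 0], [0, 0, 5]);  // object behind → true
```

A narrower field-of-view threshold (e.g., dot product below cos 60°) would trigger hints for objects that are merely off to the side.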
- Put the visualisation ‘in context’. If the context of the data is not understood, then the data presentation could be meaningless, or hard to understand. One of the challenges with 3D is that it can be difficult to provide contextual information: for instance, in 3D it is not clear where to locate titles, text annotations, photographs that explain the context, and so on. In traditional 2D visualisations, contextual information is achieved by coordinating views. Consequently, dual-view displays are popular, where one view provides the context and the other provides detail [33]. In this way, the detailed information is shown in context, and the user can use the overview display to help them navigate to a specific location. How do we display context in immersive visualisation? There are potentially many solutions: floating descriptive text, popup information, audio descriptions, external descriptions presented before someone becomes immersed, or perhaps information displayed on a movable menu attached to the hand (e.g., view on a bat [6]). From the oceanographic case study (Section 5.2), we understand that quantitative information is better in 2D, but 3D is required to give context and positional information, and to allow users to select specific locations; it is easier to select a transect across the estuary in the 3D map view than on the alternative visualisations. From our work in immersive analytics (Section 5.3), we understand the power of visual embodiments to allow people to innately understand the context of the data: if the 3D view is modelled to look like the real world that it represents, then users can quickly understand the context of the information. We also learn that without suitable contextual information (or contextual scales, legends and other meta-information) the data presentation can be meaningless.
Because of the growth in this area, many phrases are used, including: context-aware, situated, in situ, embodied and embedded visualisations. While no single reference model exists, Bressa et al. [91], for instance, classify the different techniques, and explain that solutions consider the space they are placed within, often include temporal variables or are time sensitive, are embedded into everyday activities, and put emphasis on the community of people who will create or use them.
- Develop using inter-operable tools and platforms. Developers have been making it easier to create 3D and immersive visualisations (see Section 4) by relying on inter-operable tools, and by synthesising capabilities from a wide range of research domains. For example, computer vision-based tools such as AR.js (based on ARToolKit [92]) or Vuforia can be used to provide marker/image-based tracking for web-based augmented reality applications [14,77] (Section 5.4), through integration with the HTML DOM. Our HITPROTO toolkit was developed to help people create haptic data visualisations [65], using a combination of standards such as XML, OpenGL and X3D, through the H3DAPI (h3dapi.org). Likewise, our latest immersive analytics prototyping framework, VRIA [3], relies heavily on Web-based standards, being built with WebXR, A-Frame and React, and uses ‘standardised’ features such as a declarative grammar. The use of standards allows developers to combine capabilities, and therefore to complement visual depictions with capabilities that enhance the comprehension and usefulness of those depictions.
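To illustrate the declarative approach, the sketch below shows a hypothetical JavaScript specification for a 3D bar chart, loosely in the style of web-based visualisation grammars; it is not VRIA’s actual schema, and the field names and data file are invented:

```javascript
// Illustrative declarative specification for a 3D bar chart.
const spec = {
  data: { url: 'data/tide-gauges.json' }, // hypothetical data file
  mark: 'bar',
  encoding: {
    x: { field: 'station', type: 'nominal' },
    y: { field: 'height', type: 'quantitative' },
    z: { field: 'day', type: 'ordinal' },
  },
};

// A toolkit can validate such a spec before attempting to render it.
function validate(s) {
  const errors = [];
  if (!s.data) errors.push('missing data source');
  if (!s.mark) errors.push('missing mark type');
  for (const [channel, enc] of Object.entries(s.encoding || {})) {
    if (!enc.field || !enc.type) errors.push(`incomplete encoding: ${channel}`);
  }
  return errors;
}
```

Because the specification is plain data, the same description can be interpreted by different renderers (a 2D SVG view, or an immersive WebXR scene), which is one reason declarative grammars aid inter-operability.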
- Incorporate multiple senses. With virtual and immersive visualisations, there are many opportunities to incorporate different senses. Different sensory modalities afford different interaction methods, and help the user to understand the information from different viewpoints. In the HITPROTO work (Section 5.5), we developed a haptic data visualisation (HDV) system [13] to visualise data through haptics; other sensory modalities could also be used, such as smell/olfaction [15]. While no single reference model exists, several researchers promote a more integrated approach [59], encourage users to think ‘beyond the desktop’ [9,60] and propose an interaction model [8]. By seeing, hearing, touching and smelling within the virtual environment, it is possible to feel more immersed in the experience.
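As a small illustration of mapping data to a non-visual channel, the JavaScript sketch below normalises a data series into stiffness values that a haptic device might render under the stylus; the device interface is assumed, not a real API:

```javascript
// Minimal sketch: map a data series to haptic stiffness in [0, 1],
// so that higher data values feel "harder" to the touch.
function toStiffness(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  // Guard against a constant series (zero range).
  return values.map((v) => (max === min ? 0.5 : (v - min) / (max - min)));
}

toStiffness([10, 20, 30]); // → [0, 0.5, 1]
```

The same normalisation could equally drive vibration intensity, audio pitch, or scent release rate, which is what makes such mappings reusable across modalities.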
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
3D | Three Dimensions |
AR | Augmented Reality |
FdS | Five Design-Sheets |
IA | Immersive Analytics |
MR | Mixed Reality |
MCV | Multiple Coordinated Views |
VRIA | Virtual Reality Immersive Analytics (tool) |
XR | Immersive Reality, where X = Augmented, Mixed or Virtual |
References
- Roberts, J.C.; Mearman, J.W.; Butcher, P.W.S.; Al-Maneea, H.M.; Ritsos, P.D. 3D Visualisations Should Not be Displayed Alone—Encouraging a Need for Multivocality in Visualisation. In Computer Graphics and Visual Computing (CGVC); Xu, K., Turner, M., Eds.; The Eurographics Association: Geneve, Switzerland, 2021. [Google Scholar] [CrossRef]
- Cutting, J.E.; Vishton, P.M. Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In Perception of Space and Motion; Elsevier: Amsterdam, The Netherlands, 1995; pp. 69–117. [Google Scholar] [CrossRef]
- Butcher, P.W.; John, N.W.; Ritsos, P.D. VRIA: A Web-based Framework for Creating Immersive Analytics Experiences. IEEE Trans. Vis. Comput. Graph. 2021, 27, 3213–3225. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Dixon, S.; Fitzhugh, E.; Aleva, D. Human factors guidelines for applications of 3D perspectives: A literature review. In Display Technologies and Applications for Defense, Security, and Avionics III; Thomas, J.T., Desjardins, D.D., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2009; Volume 7327, pp. 172–182. [Google Scholar]
- Roberts, J.C. State of the Art: Coordinated & Multiple Views in Exploratory Visualization. In Proceedings of the Fifth International Conference on Coordinated and Multiple Views in Exploratory Visualization (CMV 2007), Zurich, Switzerland, 2 July 2007; Andrienko, G., Roberts, J.C., Weaver, C., Eds.; IEEE Computer Society Press: Los Alamitos, CA, USA, 2007; pp. 61–71. [Google Scholar] [CrossRef] [Green Version]
- Roberts, J.C.; Al-maneea, H.; Butcher, P.W.S.; Lew, R.; Rees, G.; Sharma, N.; Frankenberg-Garcia, A. Multiple Views: Different meanings and collocated words. Comp. Graph. Forum 2019, 38, 79–93. [Google Scholar] [CrossRef]
- Mehrabi, M.; Peek, E.; Wuensche, B.; Lutteroth, C. Making 3D work: A classification of visual depth cues, 3D display technologies and their applications. AUIC2013 2013, 139, 91–100. [Google Scholar]
- Jansen, Y.; Dragicevic, P. An Interaction Model for Visualizations Beyond The Desktop. IEEE Trans. Vis. Comp. Graph. 2013, 19, 2396–2405. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Roberts, J.C.; Ritsos, P.D.; Badam, S.K.; Brodbeck, D.; Kennedy, J.; Elmqvist, N. Visualization beyond the Desktop–the Next Big Thing. IEEE Comput. Graph. Appl. 2014, 34, 26–34. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Marriott, K.; Schreiber, F.; Dwyer, T.; Klein, K.; Riche, N.H.; Itoh, T.; Stuerzlinger, W.; Thomas, B.H. Immersive Analytics; Springer: Berlin/Heidelberg, Germany, 2018; Volume 11190. [Google Scholar]
- Büschel, W.; Chen, J.; Dachselt, R.; Drucker, S.; Dwyer, T.; Görg, C.; Isenberg, T.; Kerren, A.; North, C.; Stuerzlinger, W. Interaction for immersive analytics. In Immersive Analytics; Springer: Berlin, Germany, 2018; pp. 95–138. [Google Scholar]
- McCormack, J.; Roberts, J.C.; Bach, B.; Freitas, C.D.S.; Itoh, T.; Hurter, C.; Marriott, K. Multisensory immersive analytics. In Immersive Analytics; Springer: Berlin/Heidelberg, Germany, 2018; pp. 57–94. [Google Scholar]
- Panëels, S.; Roberts, J.C. Review of Designs for Haptic Data Visualization. IEEE Trans. Haptics 2010, 3, 119–137. [Google Scholar] [CrossRef] [Green Version]
- Ritsos, P.D.; Jackson, J.; Roberts, J.C. Web-based Immersive Analytics in Handheld Augmented Reality. In Proceedings of the Posters IEEE VIS 2017, Phoenix, AZ, USA, 3 October 2017. [Google Scholar]
- Patnaik, B.; Batch, A.; Elmqvist, N. Information Olfactation: Harnessing Scent to Convey Data. IEEE Trans. Vis. Comput. Graph. 2019, 25, 726–736. [Google Scholar] [CrossRef]
- Da Vinci, L. Da Vinci Notebooks; Profile Books: London, UK, 2005. [Google Scholar]
- Spence, R. Information Visualization, an Introduction; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar] [CrossRef] [Green Version]
- Straßer, W. Zukünftige Arbeiten. Schnelle Kurven- und Flächendarstellung auf grafischen Sichtgeräten [Fast Curve and Surface Display on Graphic Display Devices]; Technical Report; Technische Universität: Berlin, Germany, 1974. (In German) [Google Scholar]
- Catmull, E.E. A subdivision Algorithm for Computer Display of Curved Surfaces; Technical Report; The University of Utah: Salt Lake City, UT, USA, 1974. [Google Scholar]
- Kajiya, J.T. The rendering equation. In Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques, Dallas, TX, USA, 18–22 August 1986; pp. 143–150. [Google Scholar]
- Glassner, A.S. An Introduction to Ray Tracing; Morgan Kaufmann: Burlington, MA, USA, 1989. [Google Scholar]
- Lorensen, W.E.; Cline, H.E. Marching cubes: A high resolution 3D surface construction algorithm. ACM SIGGRAPH Comput. Graph. 1987, 21, 163–169. [Google Scholar] [CrossRef]
- Drebin, R.A.; Carpenter, L.; Hanrahan, P. Volume rendering. ACM SIGGRAPH Comput. Graph. 1988, 22, 65–74. [Google Scholar] [CrossRef]
- Newman, T.S.; Yi, H. A survey of the marching cubes algorithm. Comput. Graph. 2006, 30, 854–879. [Google Scholar] [CrossRef]
- Cameron, G. Modular Visualization Environments: Past, Present, and Future. ACM SIGGRAPH Comput. Graph. 1995, 29, 3–4. [Google Scholar] [CrossRef]
- Upson, C.; Faulhaber, T.A.; Kamins, D.; Laidlaw, D.; Schlegel, D.; Vroom, J.; Gurwitz, R.; Van Dam, A. The application visualization system: A computational environment for scientific visualization. IEEE Comput. Graph. Appl. 1989, 9, 30–42. [Google Scholar] [CrossRef]
- Cruz-Neira, C.; Sandin, D.J.; DeFanti, T.A.; Kenyon, R.V.; Hart, J.C. The CAVE: Audio Visual Experience Automatic Virtual Environment. Commun. ACM 1992, 35, 64–72. [Google Scholar] [CrossRef]
- Gillilan, R.E.; Wood, F. Visualization, Virtual Reality, and Animation within the Data Flow Model of Computing. ACM SIGGRAPH Comput. Graph. 1995, 29, 55–58. [Google Scholar] [CrossRef]
- Schroeder, W.J.; Avila, L.S.; Hoffman, W. Visualizing with VTK: A tutorial. IEEE Comput. Graph. Appl. 2000, 20, 20–27. [Google Scholar] [CrossRef] [Green Version]
- Bostock, M.; Ogievetsky, V.; Heer, J. D3 Data-Driven Documents. IEEE Trans. Vis. Comput. Graph. 2011, 17, 2301–2309. [Google Scholar] [CrossRef]
- Lai, Z.; Hu, Y.C.; Cui, Y.; Sun, L.; Dai, N.; Lee, H.S. Furion: Engineering High-Quality Immersive Virtual Reality on Today’s Mobile Devices. IEEE Trans. Mob. Comput. 2020, 19, 1586–1602. [Google Scholar] [CrossRef]
- Haber, R.B.; McNabb, D.A. Visualization idioms: A conceptual model for scientific visualization systems. In Visualization in Scientific Computing; IEEE Computer Society Press: Los Alamitos, CA, USA, 1990; pp. 74–93. [Google Scholar]
- Gleicher, M.; Albers, D.; Walker, R.; Jusufi, I.; Hansen, C.D.; Roberts, J.C. Visual Comparison for Information Visualization. Inf. Vis. 2011, 10, 289–309. [Google Scholar] [CrossRef]
- Al-maneea, H.M.; Roberts, J.C. Towards quantifying multiple view layouts in visualisation as seen from research publications. In Proceedings of the 2019 IEEE Visualization Conference (VIS), Vancouver, BC, Canada, 20–25 October 2019. [Google Scholar]
- Chen, X.; Zeng, W.; Lin, Y.; AI-maneea, H.M.; Roberts, J.; Chang, R. Composition and Configuration Patterns in Multiple-View Visualizations. IEEE Trans. Vis. Comput. Graph. 2021, 27, 1514–1524. [Google Scholar] [CrossRef]
- Shneiderman, B. The eyes have it: A task by data type taxonomy for information visualizations. In Proceedings of the 1996 IEEE Symposium on Visual Languages, Boulder, CO, USA, 3–6 September 1996; pp. 336–343. [Google Scholar] [CrossRef] [Green Version]
- Shneiderman, B. Why Not Make Interfaces Better Than 3D Reality? IEEE Comput. Graph. Appl. 2003, 23, 12–15. [Google Scholar] [CrossRef] [Green Version]
- Williams, B.; Ritsos, P.D.; Headleand, C. Virtual Forestry Generation: Evaluating Models for Tree Placement in Games. Computers 2020, 9, 20. [Google Scholar] [CrossRef] [Green Version]
- Roberts, J.C.; Headleand, C.J.; Ritsos, P.D. Five Design-Sheets: Creative Design and Sketching for Computing and Visualisation, 1st ed.; Springer International Publishing AG: Cham, Switzerland, 2017. [Google Scholar]
- Tufte, E.R. The Visual Display of Quantitative Information; Graphics Press: Cheshire, CT, USA, 1983; Volume 2. [Google Scholar]
- Borgo, R.; Abdul-Rahman, A.; Mohamed, F.; Grant, P.; Reppa, I.; Floridi, L.; Chen, M. An Empirical Study on Using Visual Embellishments in Visualization. IEEE Trans. Vis. Comput. Graph. 2012, 18, 2759–2768. [Google Scholar] [CrossRef] [PubMed]
- Roberts, J.C.; Ritsos, P.D.; Jackson, J.R.; Headleand, C. The Explanatory Visualization Framework: An Active Learning Framework for Teaching Creative Computing Using Explanatory Visualizations. IEEE Trans. Vis. Comput. Graph. 2018, 24, 791–801. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Roberts, J.C.; Butcher, P.; Sherlock, A.; Nason, S. Explanatory Journeys: Visualising to Understand and Explain Administrative Justice Paths of Redress. IEEE Trans. Vis. Comput. Graph. 2022, 28, 518–528. [Google Scholar] [CrossRef] [PubMed]
- Leung, Y.K.; Apperley, M.D. A Review and Taxonomy of Distortion-Oriented Presentation Techniques. ACM Trans. Comput. Hum. Interact. 1994, 1, 126–160. [Google Scholar] [CrossRef]
- Mitchell, K.; Kennedy, J. The perspective tunnel: An inside view on smoothly integrating detail and context. In Proceedings of the Visualization in Scientific Computing ’97: Proceedings of the Eurographics Workshop, Boulogne-sur-Mer, France, 28–30 April 1997; Lefer, W., Grave, M., Eds.; Springer: Berlin/Heidelberg, Germany, 1997. [Google Scholar]
- Roberts, J.C. Regular Spatial Separation for Exploratory Visualization. In Visualization and Data Analysis; Erbacher, R., Chen, P., Grohn, M., Roberts, J., Wittenbrink, C., Eds.; Electronic Imaging Symposium; IS&T/SPIE: San Jose, CA, USA, 2002; Volume 4665, pp. 182–196. [Google Scholar] [CrossRef] [Green Version]
- Feiner, S.K.; Beshers, C. Worlds within Worlds: Metaphors for Exploring n-Dimensional Virtual Worlds. In Proceedings of the 3rd Annual ACM SIGGRAPH Symposium on User Interface Software and Technology, Snowbird, UT, USA, 3–5 October 1990; Association for Computing Machinery: New York, NY, USA, 1990; pp. 76–83. [Google Scholar] [CrossRef]
- Robertson, G.G.; Mackinlay, J.D.; Card, S.K. Information Visualization Using 3D Interactive Animation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 27 April–2 May 1991; Association for Computing Machinery: New York, NY, USA, 1991; pp. 461–462. [Google Scholar] [CrossRef]
- Cockburn, A.; McKenzie, B. 3D or Not 3D?: Evaluating the Effect of the Third Dimension in a Document Management System. In Proceedings of the CHI Conference on Human Factors in Computing Systems (ACM CHI 2001), New York, NY, USA, 31 March–5 April 2001; ACM: New York, NY, USA, 2001; pp. 434–441. [Google Scholar] [CrossRef]
- Cockburn, A.; McKenzie, B. Evaluating the Effectiveness of Spatial Memory in 2D and 3D Physical and Virtual Environments. In Proceedings of the CHI Conference on Human Factors in Computing Systems (ACM CHI 2002), Minneapolis, MN, USA, 20–25 April 2002; ACM: New York, NY, USA, 2002; pp. 203–210. [Google Scholar] [CrossRef] [Green Version]
- Cockburn, A. Revisiting 2D vs 3D Implications on Spatial Memory. In Proceedings of the Australasian User Interface Conference—Volume 28; Australian Computer Society, Inc.: Darlinghurst, Australia, 2004; pp. 25–31. [Google Scholar]
- Roberts, J.C.; Yang, J.; Kohlbacher, O.; Ward, M.O.; Zhou, M.X. Novel visual metaphors for multivariate networks. In Multivariate Network Visualization; Springer: Berlin/Heidelberg, Germany, 2014; pp. 127–150. [Google Scholar]
- Robertson, G.G.; Mackinlay, J.D.; Card, S.K. Cone Trees: Animated 3D Visualizations of Hierarchical Information. In Proceedings of the SIGCHI ’91, New Orleans, LA, USA, 27 April–2 May 1991; ACM: New York, NY, USA, 1991; pp. 189–194. [Google Scholar] [CrossRef]
- Balzer, M.; Deussen, O. Hierarchy Based 3D Visualization of Large Software Structures. In Proceedings of the Visualization, Austin, TX, USA, 11–15 October 2004; p. 4. [Google Scholar] [CrossRef] [Green Version]
- Dachselt, R.; Hübner, A. Three-dimensional menus: A survey and taxonomy. Comput. Graph. 2007, 31, 53–65. [Google Scholar] [CrossRef]
- Teyseyre, A.; Campo, M. An Overview of 3D Software Visualization. IEEE Trans. Vis. Comput. Graph. 2009, 15, 87–105. [Google Scholar] [CrossRef] [PubMed]
- Roberts, J.C.; Joseph, W.; Mearman, A.W. Harnessing photogrammetry to create alternative views of Heritage. In Proceedings of the 12th Conference on Culture and Computer Science—Reality and Virtuality, Berlin, Germany, 22–23 May 2014; Culture and Computer Science. Franken-Wendelstorf, R., Lindinger, E., Sieck, J., Eds.; Verlag Werner Hülsbusch: Regensburg, Germany, 2014; pp. 49–62. [Google Scholar]
- Ware, C.; Mitchell, P. Reevaluating Stereo and Motion Cues for Visualizing Graphs in Three Dimensions. In Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization, New York, NY, USA, 26–28 August 2005; Association for Computing Machinery: New York, NY, USA, 2005; pp. 51–58. [Google Scholar] [CrossRef]
- Roberts, J.C.; Walker, R. Using All Our Senses: The need for a Unified Theoretical Approach to Multi-sensory Information Visualization. In Proceedings of the IEEE VisWeek 2010 Workshop: The Role of Theory in Information Visualization, Salt Lake City, UT, USA, 25 October 2010. [Google Scholar]
- Fuchs, H. Beyond the Desktop Metaphor: Toward More Effective Display, Interaction, and Telecollaboration in the Office of the Future via a Multitude of Sensors and Displays. In Proceedings of the Advanced Multimedia Content Processing; Springer: London, UK, 1999; pp. 30–43. [Google Scholar]
- Lee, B.; Isenberg, P.; Riche, N.; Carpendale, S. Beyond Mouse and Keyboard: Expanding Design Considerations for Information Visualization Interactions. IEEE Trans. Vis. Comput. Graph. 2012, 18, 2689–2698. [Google Scholar] [CrossRef] [Green Version]
- Elmqvist, N.; Moere, A.V.; Jetter, H.C.; Cernea, D.; Reiterer, H.; Jankun-Kelly, T. Fluid interaction for information visualization. Inf. Vis. 2011, 10, 327–340. [Google Scholar] [CrossRef]
- Spindler, M.; Tominski, C.; Schumann, H.; Dachselt, R. Tangible Views for Information Visualization. In Proceedings of the Interactive Tabletops and Surfaces, Saarbrücken, Germany, 7–10 November 2010; ACM: New York, NY, USA, 2010; pp. 157–166. [Google Scholar] [CrossRef]
- Kruszyński, K.; van Liere, R. Tangible props for scientific visualization: Concept, requirements, application. Virtual Real. 2009, 13, 235–244. [Google Scholar] [CrossRef]
- Panëels, S.A.; Ritsos, P.D.; Rodgers, P.J.; Roberts, J.C. Special Section on Touching the 3rd Dimension: Prototyping 3D Haptic Data Visualizations. Comput. Graph. 2013, 37, 179–192. [Google Scholar] [CrossRef]
- Batch, A.; Patnaik, B.; Akazue, M.; Elmqvist, N. Scents and Sensibility: Evaluating Information Olfactation. In Proceedings of the CHI Conference on Human Factors in Computing Systems (ACM CHI 2020), Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–14. [Google Scholar] [CrossRef]
- Miles, H.C.; Wilson, A.T.; Labrosse, F.; Tiddeman, B.; Roberts, J.C. A Community-Built Virtual Heritage Collection. In Transactions on Computational Science XXVI—Volume 9550; Springer: Berlin/Heidelberg, Germany, 2015; pp. 91–110. [Google Scholar] [CrossRef] [Green Version]
- Griffiths, S.; Edwards, B.; Wilson, A.; Labrosse, F.; Miles, H.; Roberts, J.; Tiddeman, B. Crowd-sourcing archaeological research: HeritageTogether digital public archaeology in practice. Internet Archaeol. 2015, 40. [Google Scholar] [CrossRef]
- Miles, H.C.; Wilson, A.T.; Labrosse, F.; Tiddeman, B.; Griffiths, S.; Edwards, B.; Ritsos, P.D.; Mearman, J.W.; Möller, K.; Karl, R.; et al. Alternative Representations of 3D-Reconstructed Heritage Data. J. Comput. Cult. Herit. 2015, 9. [Google Scholar] [CrossRef] [Green Version]
- Roberts, J.C.; Headleand, C.; Ritsos, P.D. Sketching Designs Using the Five Design-Sheet Methodology. IEEE Trans. Vis. Comput. Graph. 2016, 22, 419–428. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- George, R.L.; Robins, P.E.; Davies, A.G.; Ritsos, P.D.; Roberts, J.C. Interactive visual analytics of hydrodynamic flux for the coastal zone. Environ. Earth Sci. 2014, 1–14. [Google Scholar] [CrossRef]
- George, R.L.S.F.; Roberts, J.C.; Davies, A.G. Using Processing to Develop iCove: A Tool for Interactive Coastal Oceanographic Visualization. In Theory and Practice of Computer Graphics; Collomosse, J., Grimstead, I., Eds.; The Eurographics Association: Geneve, Switzerland, 2010. [Google Scholar] [CrossRef]
- Ware, C.; Mitchell, P. Visualizing Graphs in Three Dimensions. ACM Trans. Appl. Percept. 2008, 5, 1–15. [Google Scholar] [CrossRef] [Green Version]
- Donalek, C.; Djorgovski, S.G.; Cioc, A.; Wang, A.; Zhang, J.; Lawler, E.; Yeh, S.; Mahabal, A.; Graham, M.; Drake, A.; et al. Immersive and collaborative data visualization using virtual reality platforms. In Proceedings of the IEEE International Conference on Big Data, Washington, DC, USA, 27–30 October 2014; pp. 609–614. [Google Scholar] [CrossRef] [Green Version]
- Butcher, P.W.; John, N.W.; Ritsos, P.D. VRIA—A Framework for Immersive Analytics on the Web. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (ACM CHI 2019), Glasgow, UK, 4–9 May 2019. [Google Scholar] [CrossRef] [Green Version]
- Lee, B.; Brown, D.; Lee, B.; Hurter, C.; Drucker, S.; Dwyer, T. Data Visceralization: Enabling Deeper Understanding of Data Using Virtual Reality. IEEE Trans. Vis. Comput. Graph. 2021, 27, 1095–1105. [Google Scholar] [CrossRef]
- Ritsos, P.D.; Mearman, J.; Jackson, J.R.; Roberts, J.C. Synthetic Visualizations in Web-based Mixed Reality. In Immersive Analytics: Exploring Future Visualization and Interaction Technologies for Data Analytics Workshop; Bach, B., Cordeil, M., Dwyer, T., Lee, B., Saket, B., Endert, A., Collins, C., Carpendale, S., Eds.; IEEE: Piscataway, NJ, USA, 2017. [Google Scholar]
- Willett, W.; Jansen, Y.; Dragicevic, P. Embedded Data Representations. IEEE Trans. Vis. Comput. Graph. 2017, 23, 461–470. [Google Scholar] [CrossRef] [Green Version]
- Azuma, R.T. A Survey of Augmented Reality. Presence Teleoper. Virt. Environ. 1997, 6, 355–385. [Google Scholar] [CrossRef]
- Coles, T.R.; Meglan, D.; John, N.W. The Role of Haptics in Medical Training Simulators: A Survey of the State of the Art. IEEE Trans. Haptics 2011, 4, 51–66. [Google Scholar] [CrossRef]
- Drogemuller, A.; Cunningham, A.; Walsh, J.A.; Baumeister, J.; Smith, R.T.; Thomas, B.H. Haptic and Visual Comprehension of a 2D Graph Layout Through Physicalisation. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; Association for Computing Machinery: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
- Yu, W.; Ramloll, R.; Brewster, S. Haptic graphs for blind computer users. In Haptic Human-Computer Interaction; Brewster, S., Murray-Smith, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2001; pp. 41–51. [Google Scholar]
- Munzner, T. A Nested Process Model for Visualization Design and Validation. IEEE Trans. Vis. Comp. Graph. 2009, 15, 921–928. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- North, C.; Shneiderman, B. Snap-Together Visualization: A User Interface for Coordinating Visualizations via Relational Schemata; ACM: New York, NY, USA, 2000; pp. 128–135. [Google Scholar] [CrossRef]
- Weaver, C. Cross-Filtered Views for Multidimensional Visual Analysis. IEEE Trans. Vis. Comput. Graph. 2010, 16, 192–204. [Google Scholar] [CrossRef] [PubMed]
- Badam, S.K.; Elmqvist, N. Visfer: Camera-based visual data transfer for cross-device visualization. Inf. Vis. 2019, 18, 68–93. [Google Scholar] [CrossRef] [Green Version]
- Elmqvist, N.; Tsigas, P. A Taxonomy of 3D Occlusion Management for Visualization. IEEE Trans. Vis. Comput. Graph. 2008, 14, 1095–1109. [Google Scholar] [CrossRef]
- Jansen, Y. Physical and Tangible Information Visualization; Technical Report 2014PA112039ff; Université Paris Sud: Paris, France, 2014. [Google Scholar]
- Thomas, B.H. Virtual Reality for Information Visualization Might Just Work This Time. Front. Robot. AI 2019, 6. [Google Scholar] [CrossRef] [Green Version]
- Steed, A. Towards a General Model for Selection in Virtual Environments. In Proceedings of the 3D User Interfaces (3DUI’06), Alexandria, VA, USA, 25–26 March 2006; pp. 103–110. [Google Scholar] [CrossRef]
- Bressa, N.; Korsgaard, H.; Tabard, A.; Houben, S.; Vermeulen, J. What’s the Situation with Situated Visualization? A Survey and Perspectives on Situatedness. IEEE Trans. Vis. Comput. Graph. 2022, 28, 107–117. [Google Scholar] [CrossRef]
- Kato, H.; Billinghurst, M. Marker tracking and HMD calibration for a video-based augmented reality conferencing system. In Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR’99), San Francisco, CA, USA, 20–21 October 1999; pp. 85–94. [Google Scholar] [CrossRef] [Green Version]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Roberts, J.C.; Butcher, P.W.S.; Ritsos, P.D. One View Is Not Enough: Review of and Encouragement for Multiple and Alternative Representations in 3D and Immersive Visualisation. Computers 2022, 11, 20. https://doi.org/10.3390/computers11020020