Big Data Visualization and Virtual Reality

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information and Communications Technology".

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 3840

Special Issue Editor

Dr. Chao Peng
School of Interactive Games and Media, Golisano College of Computing and Information Sciences, Rochester Institute of Technology, Rochester, NY, USA
Interests: high-performance graphics; massive model rendering; multimodal interaction; virtual reality; interactive media

Special Issue Information

Dear Colleagues,

We are pleased to announce a Special Issue on “Big Data Visualization and Virtual Reality” that solicits work at the intersection of big data, virtual reality, visualization, and immersive experience design. As data continue to grow in quantity and complexity due to advances in data acquisition and modeling, there is an increasing demand for new platforms and tools that can enhance the effectiveness of data analytics and interaction. The convergence of virtual reality and wearable tracking technologies has helped promote hands-free, heads-up work, allowing people to see and interact with massive amounts of data from a first-person perspective. Data exploration with such emerging technologies has the potential to address the difficulties caused by the growing size and complexity of data, with methods combining real-time visualization techniques, interaction innovations, and in situ simulations.

We welcome researchers, data scientists, designers, and industry professionals to submit original research and review articles that address open questions; provide insightful experiments, evaluations, and case studies; present advances in performance and interaction modality design; or offer insights and guidelines on future designs and challenges.

Dr. Chao Peng
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website, then proceeding to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intuitive visual representations and data placement in VR
  • multimodal interaction for data visualization
  • embodied visualization, navigation, analytics, and data collaboration
  • immersive interfaces and tools for big data operations
  • engagement with immersive data visualization in prolonged use

Published Papers (2 papers)


Research

11 pages, 3287 KiB  
Article
The Spherical Retractable Bubble Space: An Egocentric Graph Visualization throughout a Retractable Visualization Space
by Piriziwè Kobina, Thierry Duval and Laurent Brisson
Information 2023, 14(10), 531; https://doi.org/10.3390/info14100531 - 28 Sep 2023
Viewed by 820
Abstract
In this paper, we present a new egocentric metaphor for graph visualization that consists of positioning a graph between two concentric spheres of different radii. It improves the expansion of nodes in space compared with 3D spatialization algorithms. The edge drawing is optimized by pushing all the edges into the area delimited by our two concentric spheres, so that a user can move freely without being encumbered by edges. Our new metaphor also makes it possible to reduce the display angles in order to obtain a global view of the graph without leaving the egocentric perspective.
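The core placement idea of the abstract — keeping every node's distance from the viewer within the shell bounded by the two concentric spheres — can be sketched as a radius clamp. This is a minimal illustration only, with made-up node data; the paper's actual layout algorithm is more involved.

```python
import math

def project_between_spheres(nodes, r_inner, r_outer):
    """Rescale each node's position so its distance from the viewer
    (assumed at the origin) falls between two concentric spheres.
    Sketch of the idea only, not the authors' algorithm."""
    placed = {}
    for name, (x, y, z) in nodes.items():
        r = math.sqrt(x * x + y * y + z * z) or 1e-9   # avoid division by zero
        scale = min(max(r, r_inner), r_outer) / r      # clamp radius to [r_inner, r_outer]
        placed[name] = (x * scale, y * scale, z * scale)
    return placed

# Two hypothetical nodes: one too close to the viewer, one already inside the shell.
nodes = {"a": (0.5, 0.0, 0.0), "b": (2.0, 0.0, 1.0)}
placed = project_between_spheres(nodes, r_inner=1.0, r_outer=3.0)
```

With edges confined to the same shell, the region around the viewer stays clear, which is what lets the user move freely inside the visualization.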
(This article belongs to the Special Issue Big Data Visualization and Virtual Reality)

25 pages, 24280 KiB  
Article
Automatic 3D Building Model Generation from Airborne LiDAR Data and OpenStreetMap Using Procedural Modeling
by Robert Župan, Adam Vinković, Rexhep Nikçi and Bernarda Pinjatela
Information 2023, 14(7), 394; https://doi.org/10.3390/info14070394 - 11 Jul 2023
Cited by 4 | Viewed by 2452
Abstract
This research is primarily focused on utilizing available airborne LiDAR data and spatial data from the OpenStreetMap (OSM) database to generate 3D models of buildings for a large-scale urban area. The city center of Ljubljana, Slovenia, was selected as the study area due to data availability and the diversity of building shapes, heights, and functions, which presented a challenge for the automated generation of 3D models. To extract building heights, a range of data sources were utilized, including OSM attribute data as well as georeferenced and classified point clouds and a digital elevation model (DEM) obtained from openly available LiDAR survey data of the Slovenian Environment Agency. A digital surface model (DSM) and digital terrain model (DTM) were derived from the processed LiDAR data. Building outlines and attributes were extracted from OSM and processed using QGIS. Spatial coverage of OSM data for buildings in the study area is excellent, whereas only 18% of buildings have attributes describing their external appearance and 6% their roof type. LAStools software (rapidlasso GmbH, Gilching, Germany) was used to derive and assign building heights from the 3D coordinates of the segmented point clouds. Various software options for procedural modeling were compared, and Blender was selected for its ability to process OSM data, its available documentation, and its low computing requirements. Using procedural modeling, a 3D model at level of detail (LOD) 1 was created fully automatically. After analyzing roof types, an LOD2 model was created fully automatically for 87.64% of the buildings; for the remaining buildings, procedural roof modeling was compared with manual roof editing. Finally, the resulting 3D model was visually compared with Google Earth's model. The main objective of this study is to demonstrate an efficient modeling process using open data and free software, resulting in improved accuracy of the 3D building models compared to previous LOD2 iterations.
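The height-extraction step the abstract describes — combining a DSM and a DTM over a building footprint — amounts to aggregating per-cell elevation differences. The sketch below illustrates that idea only; the study itself derives heights with LAStools from classified point clouds, and the toy rasters and the 95th-percentile choice here are assumptions.

```python
def building_height(dsm, dtm, footprint):
    """Estimate building height from the per-cell difference between a
    digital surface model (DSM) and a digital terrain model (DTM),
    aggregated over the cells of the building footprint.

    dsm, dtm: dicts mapping (row, col) -> elevation in metres
    footprint: iterable of (row, col) cells inside the building outline
    """
    diffs = sorted(dsm[cell] - dtm[cell] for cell in footprint)
    # A high quantile is more robust to chimneys/antennas than the maximum;
    # the exact quantile is an assumption for this sketch.
    return diffs[int(0.95 * (len(diffs) - 1))]

# Toy rasters: flat terrain at 300 m, one building whose roof is 12 m above it.
dtm = {(r, c): 300.0 for r in range(4) for c in range(4)}
dsm = dict(dtm)
roof_cells = [(1, 1), (1, 2), (2, 1), (2, 2)]
for cell in roof_cells:
    dsm[cell] += 12.0
print(building_height(dsm, dtm, roof_cells))  # → 12.0
```

The same DSM-minus-DTM difference (often called a normalized DSM) is a common starting point for extruding OSM footprints into LOD1 block models.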
(This article belongs to the Special Issue Big Data Visualization and Virtual Reality)
