Search Results (2)

Search Parameters:
Keywords = 3D point clouds
Page = 2

17 pages, 19642 KB  
Article
MAP3D: An Explorative Approach for Automatic Mapping of Real-World Eye-Tracking Data on a Virtual 3D Model
by Isabell Stein, Helen Jossberger and Hans Gruber
J. Eye Mov. Res. 2022, 15(3), 1-17; https://doi.org/10.16910/jemr.15.3.8 - 31 May 2023
Cited by 4 | Viewed by 357
Abstract
Mobile eye tracking helps to investigate real-world settings in which participants can move freely. This enhances the studies’ ecological validity but poses challenges for the analysis. Often, the 3D stimulus is reduced to a 2D image (reference view) and the fixations are manually mapped to this 2D image. This leads to a loss of information about the three-dimensionality of the stimulus. Using several reference images from different perspectives poses new problems, in particular concerning the mapping of fixations in the transition areas between two reference views. A newly developed approach (MAP3D) is presented that enables generating a 3D model and automatically mapping fixations to this virtual 3D model of the stimulus. This avoids problems with the reduction to a 2D reference image and with transitions between images. The x, y and z coordinates of the fixations are available as a point cloud and as .csv output. First exploratory application and evaluation tests are promising: MAP3D offers innovative ways of post-hoc mapping of fixation data onto 3D stimuli with open-source software and thus provides cost-efficient new avenues for research.

17 pages, 2667 KB  
Article
Uncertainty Estimation in Deep Neural Networks for Point Cloud Segmentation in Factory Planning
by Christina Petschnigg and Jürgen Pilz
Modelling 2021, 2(1), 1-17; https://doi.org/10.3390/modelling2010001 - 4 Jan 2021
Cited by 10 | Viewed by 4413
Abstract
The digital factory undoubtedly provides great potential for future production systems in terms of efficiency and effectiveness. A key aspect on the way to realizing the digital copy of a real factory is the understanding of complex indoor environments on the basis of three-dimensional (3D) data. In order to generate an accurate factory model including the major components, i.e., building parts, product assets and process details, the 3D data collected during digitalization can be processed with advanced methods of deep learning. For instance, the semantic segmentation of a point cloud enables the identification of relevant objects within the environment. In this work, we propose a fully Bayesian and an approximate Bayesian neural network for point cloud segmentation. Both networks are used within a workflow to generate an environment model from raw point clouds. The Bayesian and approximate Bayesian networks allow us to analyse how different ways of estimating uncertainty in these networks improve segmentation results on raw point clouds. We achieve superior model performance for both the Bayesian and the approximate Bayesian model compared to the frequentist one. This performance difference becomes even more striking when incorporating the networks’ uncertainty into their predictions. For evaluation, we use the scientific data set S3DIS as well as a data set collected by the authors at a German automotive production plant. The methods proposed in this work lead to more accurate segmentation results, and the incorporation of uncertainty information also makes the approach especially applicable to safety-critical applications beyond our factory planning use case.
(This article belongs to the Special Issue Feature Papers of Modelling)