Search Results (3)

Search Parameters:
Keywords = out-of-core processing

28 pages, 10517 KB  
Article
Scalable Post-Processing of Large-Scale Numerical Simulations of Turbulent Fluid Flows
by Christian Lagares, Wilson Rivera and Guillermo Araya
Symmetry 2022, 14(4), 823; https://doi.org/10.3390/sym14040823 - 14 Apr 2022
Cited by 7 | Viewed by 3139
Abstract
Military, space, and high-speed civilian applications continue to drive renewed interest in compressible, high-speed turbulent boundary layers. These flows pose complex computational challenges spanning the pre-processing, execution, and subsequent post-processing of large-scale numerical simulations, and exploring more complex geometries at higher Reynolds numbers will demand scalable post-processing. At the same time, the computing hardware available to application developers and scientists has become increasingly diversified and heterogeneous, which significantly complicates the development of performance-portable applications. To address these challenges, we propose Aquila, a distributed, out-of-core, performance-portable post-processing library for large-scale simulations. It is designed to relieve domain experts of the burden of writing applications for heterogeneous, high-performance computers while retaining strong scaling performance. We provide two implementations, in C++ and Python, and demonstrate their strong scaling performance and their ability to reach 60% of peak memory bandwidth and 98% of peak filesystem bandwidth while operating out of core. We also present our approach to optimizing two-point correlations by exploiting symmetry in Fourier space. A key feature of the proposed design is an out-of-core data pre-fetcher that gives the illusion of in-memory file availability, yielding up to a 46% improvement in program runtime. Furthermore, we demonstrate a parallel efficiency greater than 70% for highly threaded workloads.
(This article belongs to the Special Issue Turbulence and Multiphase Flows and Symmetry)
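
As a side note on the two-point-correlation optimization mentioned in the abstract, exploiting Fourier-space symmetry commonly means using the Wiener-Khinchin theorem with a real-input FFT: a real signal has a Hermitian-symmetric spectrum, so only half of the modes need to be computed and stored. The sketch below illustrates that general idea in NumPy; it is not Aquila's API, and the function name and example signal are illustrative assumptions.

import numpy as np

def two_point_correlation(u, axis=-1):
    # Two-point autocorrelation of a real signal along a homogeneous axis,
    # via the Wiener-Khinchin theorem: the autocorrelation is the inverse
    # FFT of the power spectrum. Because u is real, its spectrum is
    # Hermitian-symmetric, so rfft/irfft need only half of the modes.
    # Illustrative sketch only; not the Aquila implementation.
    n = u.shape[axis]
    u = u - u.mean(axis=axis, keepdims=True)    # work with fluctuations
    spec = np.fft.rfft(u, axis=axis)            # half-spectrum (Hermitian symmetry)
    power = spec * np.conj(spec)                # power spectral density
    corr = np.fft.irfft(power, n=n, axis=axis)  # correlation vs. separation
    return corr / corr.take([0], axis=axis)     # normalise by zero separation

# Example: autocorrelation of a synthetic periodic velocity fluctuation
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(4.0 * x) + 0.1 * np.random.default_rng(0).standard_normal(x.size)
R = two_point_correlation(u)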

22 pages, 3205 KB  
Review
Voxelisation Algorithms and Data Structures: A Review
by Mitko Aleksandrov, Sisi Zlatanova and David J. Heslop
Sensors 2021, 21(24), 8241; https://doi.org/10.3390/s21248241 - 9 Dec 2021
Cited by 44 | Viewed by 12311
Abstract
Voxel-based data structures, algorithms, frameworks, and interfaces have been used in computer graphics and many other applications for decades. There is a general need for adequate digital representations, such as voxels, that provide unified data structures, multi-resolution options, robust validation procedures and flexible algorithms for different 3D tasks. In this review, we evaluate the most common properties of and algorithms for voxelisation of 2D and 3D objects. Many voxelisation algorithms and their characteristics are presented for points, lines, triangles, surfaces and solids as geometric primitives. For lines, we identify three groups of algorithms: the first two achieve different voxelisation connectivity, while the third addresses voxelisation of curves. Surface voxelisation is generally preferable to solid voxelisation, as it can be achieved faster and requires less memory when voxels are stored sparsely. We also evaluate the available voxel data structures, splitting them into static and dynamic grids according to how frequently the structure is updated. Static grids are dominated by SVO-based data structures focusing on memory-footprint reduction and attribute preservation, where SVDAG and SSVDAG are the most advanced methods. The state-of-the-art dynamic voxel data structure is NanoVDB, which is superior to the rest in terms of speed as well as support for out-of-core processing and data management, which is key to handling large, dynamically changing scenes. Overall, this is the first review to evaluate the available voxelisation algorithms for different geometric primitives as well as voxel data structures.
(This article belongs to the Section Navigation and Positioning)
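
As a rough illustration of why sparsely stored surface voxels need far less memory than a dense grid, the sketch below voxelises surface sample points into a hash-based sparse occupancy set. This is a deliberately simple stand-in for the hierarchical structures (SVO, SVDAG, SSVDAG, NanoVDB) discussed in the review; the function name and voxel size are illustrative assumptions.

import numpy as np

def voxelise_points(points, voxel_size):
    # Map surface sample points to the set of occupied voxel indices.
    # Only occupied voxels are stored, so memory scales with the surface
    # area rather than with the full bounding volume. Minimal hash-grid
    # sketch, not an SVO/SVDAG or NanoVDB implementation.
    idx = np.floor(points / voxel_size).astype(np.int64)  # integer voxel coords
    return {tuple(v) for v in idx}                        # sparse occupancy set

# Example: points sampled on a unit sphere (a 2D surface embedded in 3D)
rng = np.random.default_rng(0)
p = rng.standard_normal((100_000, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
occupied = voxelise_points(p, voxel_size=0.02)

# A dense grid over the [-1, 1]^3 bounding box at this resolution holds
# roughly 100^3 = 1,000,000 cells; the sparse set keeps only the cells
# the surface actually touches.
print(len(occupied), "occupied voxels")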

20 pages, 9889 KB  
Article
Efficient Calculation Method for Tree Stem Traits from Large-Scale Point Clouds of Forest Stands
by Hiroshi Masuda, Yuichiro Hiraoka, Kazuto Saito, Shinsuke Eto, Michinari Matsushita and Makoto Takahashi
Remote Sens. 2021, 13(13), 2476; https://doi.org/10.3390/rs13132476 - 25 Jun 2021
Cited by 5 | Viewed by 3005
Abstract
With the use of terrestrial laser scanning (TLS) in forest stands, surveys can now obtain dense point cloud data. However, the number of points often reaches the billions or more, exceeding the random access memory (RAM) limits of common computers, and processing time often becomes unacceptably long. In this paper, we present a new method for efficiently extracting stem traits from huge point cloud data obtained by TLS, without subdividing or downsampling the point clouds. In this method, each point cloud is converted into a wireframe model by connecting neighboring points on the same continuous surface, and three-dimensional points on stems are resampled as cross-sectional points of the wireframe model in an out-of-core manner. Since the data size of the section points is much smaller than that of the original point clouds, stem traits can be calculated from the section points on a common computer. With the proposed method, traits of 1381 tree stems were calculated from 3.6 billion points in about 20 minutes on a common computer. To evaluate accuracy, eight target trees were cut down and sliced at 1-m intervals, and the actual stem traits were compared to those calculated from point clouds. The experimental results show that the efficiency and accuracy of the proposed method are sufficient for practical use in various fields, including forest management and forest research.
(This article belongs to the Section Forest Remote Sensing)
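
The out-of-core aspect described in the abstract, i.e. streaming a point cloud far larger than RAM while keeping only a compact derived representation, can be illustrated with a generic chunked-reading pattern. The sketch below bins points into height slices while holding only one chunk in memory at a time; the flat-binary file layout, chunk size, and slice interval are assumptions, and it does not reproduce the paper's wireframe and cross-section method.

import numpy as np
from collections import defaultdict

def stream_height_slices(path, slice_height=1.0, chunk_points=5_000_000):
    # Stream a huge xyz point cloud from disk and count points per height
    # slice. The file is assumed to be a flat binary array of float32
    # (x, y, z) records. Only one chunk is resident at a time, so peak RAM
    # use is bounded by chunk_points regardless of the total file size.
    # Generic out-of-core sketch, not the paper's wireframe-based method.
    counts = defaultdict(int)                    # slice index -> point count
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunk_points * 3 * 4)   # one chunk of float32 triples
            if not buf:
                break
            pts = np.frombuffer(buf, dtype=np.float32).reshape(-1, 3)
            slices = np.floor(pts[:, 2] / slice_height).astype(np.int64)
            for s, c in zip(*np.unique(slices, return_counts=True)):
                counts[int(s)] += int(c)
    return counts

# Usage with a hypothetical file holding billions of points:
# counts = stream_height_slices("forest_stand.xyz.bin", slice_height=1.0)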
