Search Results (31)

Search Parameters:
Keywords = graphics processing units (GPGPU)

37 pages, 9513 KiB  
Article
Parallel Implicit Solvers for 2D Numerical Models on Structured Meshes
by Yaoxin Zhang, Mohammad Z. Al-Hamdan and Xiaobo Chao
Mathematics 2024, 12(14), 2184; https://doi.org/10.3390/math12142184 - 12 Jul 2024
Cited by 1 | Viewed by 1100
Abstract
This paper presents the parallelization of two widely used implicit numerical solvers for the solution of partial differential equations on structured meshes, namely, the ADI (Alternating-Direction Implicit) solver for tridiagonal linear systems and the SIP (Strongly Implicit Procedure) solver for penta-diagonal systems. Both solvers were parallelized using CUDA (Compute Unified Device Architecture) Fortran on GPGPUs (General-Purpose Graphics Processing Units). The parallel ADI solver (P-ADI) is based on the Parallel Cyclic Reduction (PCR) algorithm, while the parallel SIP solver (P-SIP) uses the wave front (WF) method following a diagonal-line calculation strategy. To map the solution schemes onto the hierarchical block–thread framework of CUDA on the GPU, the P-ADI solver adopted two mapping methods, one block thread with iterations (OBM-it) and multi-block threads (MBMs), while the P-SIP solver also used two mappings: a conventional mapping using effective WF lines (WF-e), with matrix coefficients and solution variables defined on the original computational mesh, and a newly proposed mapping using the entire WF mesh (WF-all), on which matrix coefficients and solution variables are defined. Both the P-ADI and the P-SIP solvers have been integrated into a two-dimensional (2D) hydrodynamic model, the CCHE2D (Center for Computational Hydroscience and Engineering) model, developed by the National Center for Computational Hydroscience and Engineering at the University of Mississippi. This study compared, for the first time, these two parallel solvers and their efficiency using examples and applications in complex geometries, which can provide valuable guidance for future uses of these two parallel implicit solvers in computational fluid dynamics (CFD). Both parallel solvers demonstrated higher efficiency than their serial counterparts on the CPU (Central Processing Unit): speedup ratios of 3.73~4.98 for flow simulations and 2.166~3.648 for sediment transport simulations. In general, the P-ADI solver is faster than, but not as stable as, the P-SIP solver; and for the P-SIP solver, the newly developed mapping method WF-all significantly improves on the conventional mapping method WF-e.
(This article belongs to the Special Issue Mathematical Modeling and Numerical Simulation in Fluids)
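The P-ADI solver's core building block, Parallel Cyclic Reduction, eliminates each equation's coupling to its neighbours at doubling strides until every unknown stands alone. Below is a minimal CUDA C sketch of that reduction for one tridiagonal system per thread block; the paper's implementation is in CUDA Fortran inside CCHE2D, so the kernel layout, array names, and guard conventions here are illustrative assumptions only.

```cuda
// Parallel Cyclic Reduction (PCR) sketch: one thread block per tridiagonal
// system, one thread per unknown. Assumes a[0] = c[n-1] = 0 and n <= blockDim.x.
__global__ void pcr_tridiag(const float* a, const float* b, const float* c,
                            const float* d, float* x, int n)
{
    extern __shared__ float s[];                     // dynamic shared memory: 4 * n floats
    float *sa = s, *sb = s + n, *sc = s + 2 * n, *sd = s + 3 * n;
    int i = threadIdx.x;
    int sys = blockIdx.x;                            // which tridiagonal system
    if (i >= n) return;

    sa[i] = a[sys * n + i];  sb[i] = b[sys * n + i];
    sc[i] = c[sys * n + i];  sd[i] = d[sys * n + i];
    __syncthreads();

    for (int stride = 1; stride < n; stride <<= 1) {
        // Eliminate the couplings to rows i - stride and i + stride.
        float k1 = (i - stride >= 0) ? sa[i] / sb[i - stride] : 0.0f;
        float k2 = (i + stride <  n) ? sc[i] / sb[i + stride] : 0.0f;
        float nb = sb[i] - ((i - stride >= 0) ? sc[i - stride] * k1 : 0.0f)
                         - ((i + stride <  n) ? sa[i + stride] * k2 : 0.0f);
        float nd = sd[i] - ((i - stride >= 0) ? sd[i - stride] * k1 : 0.0f)
                         - ((i + stride <  n) ? sd[i + stride] * k2 : 0.0f);
        float na = (i - stride >= 0) ? -sa[i - stride] * k1 : 0.0f;
        float nc = (i + stride <  n) ? -sc[i + stride] * k2 : 0.0f;
        __syncthreads();                             // all threads have read the old rows
        sa[i] = na; sb[i] = nb; sc[i] = nc; sd[i] = nd;
        __syncthreads();                             // all threads have written the new rows
    }
    x[sys * n + i] = sd[i] / sb[i];                  // rows are now fully decoupled
}
```

A launch would use one block per mesh line, at least n threads per block, and 4 * n * sizeof(float) bytes of dynamic shared memory, so each ADI sweep can solve all lines of one direction concurrently.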

19 pages, 38481 KiB  
Article
Dispersion and Radiation Modelling in ESTE System Using Urban LPM
by Ľudovít Lipták, Peter Čarný, Michal Marčišovský, Mária Marčišovská, Miroslav Chylý and Eva Fojciková
Atmosphere 2023, 14(7), 1077; https://doi.org/10.3390/atmos14071077 - 26 Jun 2023
Cited by 1 | Viewed by 1564
Abstract
In cases of accidental or deliberate incidents involving a harmful agent in urban areas, a detailed modelling approach is required to include the building shapes and spatial locations. Simultaneously, when applied to crisis management, a simulation tool must meet strict time constraints. This work presents a Lagrangian particle model (LPM) for computing atmospheric dispersion. The model is implemented in the nuclear decision support system ESTE CBRN, a software tool developed to calculate the atmospheric dispersion of airborne hazardous materials and the radiological impacts in built-up areas. The implemented LPM is based on Thomson's solution of the nonstationary, three-dimensional Langevin equation model for turbulent diffusion. The simulation results are successfully analyzed by testing compatibility with the Briggs sigma functions in the case of continuous release. The implemented LPM is compared with the Joint Urban 2003 Street Canyon Experiment for instantaneous puff releases. We compare the maximum concentrations and peak times measured during two intensive operational periods. The modeled peak times are mostly 10–20% smaller than the measured ones. Except for a few detector locations, the maximum concentrations are reproduced consistently. Finally, we demonstrate, via calculations on single computers utilizing general-purpose computing on graphics processing units (GPGPU), that the implementation is well suited for actual emergency response, since the computational times (including dispersion and dose calculation) at an acceptable level of result accuracy are comparable to the duration of the modeled event itself.
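The LPM referenced above advances each particle with a stochastic velocity increment; Thomson's well-mixed Langevin model, on which such solvers are based, has the generic form below (the notation is the standard one, not taken from the paper):

```latex
\begin{align}
  \mathrm{d}u_i &= a_i(\mathbf{x},\mathbf{u},t)\,\mathrm{d}t
                   + \sqrt{C_0\,\varepsilon}\,\mathrm{d}W_i(t),\\
  \mathrm{d}x_i &= u_i\,\mathrm{d}t,
\end{align}
```

where the drift term a_i is chosen to satisfy the well-mixed condition, C_0 is the Kolmogorov constant, ε the turbulent dissipation rate, and dW_i are independent Wiener increments.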

14 pages, 2988 KiB  
Article
Performance Investigation of the Conjunction Filter Methods and Enhancement of Computation Speed on Conjunction Assessment Analysis with CUDA Techniques
by Phasawee Saingyen, Sittiporn Channumsin, Suwat Sreesawet, Keerati Puttasuwan and Thanathip Limna
Aerospace 2023, 10(6), 543; https://doi.org/10.3390/aerospace10060543 - 7 Jun 2023
Cited by 1 | Viewed by 2175
Abstract
The growing number of space objects leads to increased potential risks of damage to satellites and to the generation of space debris after collisions. Conjunction assessment analysis is one of the keys to evaluating the collision risk of satellites, and satellite operators require the analyzed results as fast as possible to decide on and execute collision avoidance maneuver planning. However, the computation time required to analyze the potential risk of all satellites is proportional to the number of space objects. Conjunction filters and parallel computing techniques can shorten the computation cost of conjunction analysis and deliver the analyzed results sooner. Therefore, this paper investigates the performance (accuracy and computation speed) of the conjunction filters Smart Sieve, CSieve and CAOS-D (a combination of Smart Sieve and CSieve) in both the single-satellite (one vs. all) and all-space-objects (all vs. all) cases. All the screening filters are then implemented in an algorithm that executes general-purpose computing on graphics processing units (GPGPU) by using NVIDIA's Compute Unified Device Architecture (CUDA). The results compare the accuracy of the conjunction screening analysis and the computation times of each filter when implemented with the parallel computation techniques.
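Sieve-type conjunction filters typically begin with an apogee–perigee test that discards secondary objects whose altitude band cannot come close to the primary's. The CUDA sketch below illustrates that first screening stage in one-vs-all form; the struct, field names, and threshold are assumptions for illustration, not the paper's implementation.

```cuda
// One-vs-all apogee/perigee pre-filter: keep a catalog object for finer
// screening only if its radial band approaches the primary's within a
// safety threshold. Units and names are illustrative assumptions.
struct OrbitBand { float perigee_km; float apogee_km; };

__global__ void apogee_perigee_filter(OrbitBand primary, const OrbitBand* catalog,
                                      int n, float threshold_km, int* keep)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    OrbitBand s = catalog[i];
    // Gap between the two altitude bands; negative or zero means they overlap.
    float gap = fmaxf(primary.perigee_km - s.apogee_km,
                      s.perigee_km - primary.apogee_km);
    keep[i] = (gap <= threshold_km) ? 1 : 0;   // 1 = pass to the next filter stage
}
```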

20 pages, 6027 KiB  
Article
3D Numerical Analysis Method for Simulating Collapse Behavior of RC Structures by Hybrid FEM/DEM
by Gyeongjo Min, Daisuke Fukuda and Sangho Cho
Appl. Sci. 2022, 12(6), 3073; https://doi.org/10.3390/app12063073 - 17 Mar 2022
Cited by 8 | Viewed by 2823
Abstract
Recent years have seen an increase in demand for the demolition of obsolete and potentially hazardous structures, including reinforced concrete (RC) structures, using blasting techniques. However, because the risk of failure is significantly higher when applying blasting to demolish RC structures than when using mechanical dismantling, it is critical to achieve the optimal demolition design and conditions for blasting by taking into account the major factors affecting a structure's demolition. To this end, numerical analysis techniques have frequently been used to simulate the progressive failure leading to the collapse of structures. In this study, the three-dimensional (3D) combined finite-discrete element method (FDEM), accelerated by a parallel computation technique incorporating a general-purpose graphics processing unit (GPGPU), was coupled with a one-dimensional (1D) reinforcing bar (rebar) model as a numerical tool for simulating the process of RC structure demolition by blasting. Three-point bending tests on RC beams were simulated to validate the developed 3D FDEM code, including the calibration of the 3D FDEM input parameters to accurately simulate the concrete fracture in the RC beam. The effect of the element size of the concrete part on the RC beam's fracture process was also discussed. The developed 3D FDEM code was then used to model the blasting demolition of a small-scale RC structure. The numerical simulation results for the progressive collapse of the RC structure were compared to the actual experimental results and found to be highly consistent.
(This article belongs to the Special Issue Dynamics of Building Structures)

19 pages, 17299 KiB  
Article
Evaluation of NVIDIA Xavier NX Platform for Real-Time Image Processing for Plasma Diagnostics
by Bartłomiej Jabłoński, Dariusz Makowski, Piotr Perek, Patryk Nowak vel Nowakowski, Aleix Puig Sitjes, Marcin Jakubowski, Yu Gao, Axel Winter and the W7-X Team
Energies 2022, 15(6), 2088; https://doi.org/10.3390/en15062088 - 12 Mar 2022
Cited by 13 | Viewed by 4543
Abstract
Machine protection is a core task of real-time image diagnostics aiming for steady-state operation in nuclear fusion devices. The paper evaluates the applicability of the newest low-power NVIDIA Jetson Xavier NX platform for image plasma diagnostics. This embedded NVIDIA Tegra System-on-a-Chip (SoC) integrates a Graphics Processing Unit (GPU) and a Central Processing Unit (CPU) on a single chip. The hardware differences and features compared to the previous NVIDIA Jetson TX2 are highlighted. The implemented algorithms detect thermal events in real time, utilising the high parallelism provided by the embedded General-Purpose computing on Graphics Processing Units (GPGPU). The performance and accuracy are evaluated on experimental data from the Wendelstein 7-X (W7-X) stellarator. Strike-line and reflection events are primarily investigated, yet benchmarks for overload hotspots, surface layers and visualisation algorithms are also included. Their detection might allow for automating the real-time risk evaluation incorporated in the divertor protection system in W7-X. For the first time, the paper demonstrates the feasibility of complex real-time image processing in nuclear fusion applications on low-power embedded devices. Moreover, GPU-accelerated reference processing pipelines yielding higher accuracy than the literature results are proposed, and a remarkable performance improvement resulting from the upgrade to the Xavier NX platform is attained.
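GPU pipelines for thermal-event detection generally start from per-pixel stages that map naturally onto the embedded GPU. The kernel below is a deliberately minimal sketch of such a stage, an overload-hotspot mask by temperature threshold; the image layout, names, and fixed threshold are assumptions, and the W7-X algorithms evaluated in the paper are considerably more elaborate.

```cuda
// Flag pixels whose temperature exceeds a limit; later stages (e.g., connected
// components, tracking) would operate on this mask. Illustrative sketch only.
__global__ void hotspot_mask(const float* temperature, unsigned char* mask,
                             int width, int height, float limit_celsius)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int idx = y * width + x;
    mask[idx] = (temperature[idx] > limit_celsius) ? 1 : 0;
}
```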

28 pages, 1957 KiB  
Article
Information Fusion in Autonomous Vehicle Using Artificial Neural Group Key Synchronization
by Mohammad Zubair Khan, Arindam Sarkar, Hamza Ghandorh, Maha Driss and Wadii Boulila
Sensors 2022, 22(4), 1652; https://doi.org/10.3390/s22041652 - 20 Feb 2022
Cited by 8 | Viewed by 3651
Abstract
Information fusion in automated vehicles for various data types emanating from many sources is the foundation for making choices in intelligent transportation and autonomous cars. To facilitate data sharing, a variety of communication methods have been integrated to build a diverse V2X infrastructure. However, information fusion security frameworks are currently intended for specific application instances, which are insufficient to fulfill the overall requirements of Mutual Intelligent Transportation Systems (MITS). In this work, a data fusion security infrastructure has been developed with varying degrees of trust. Furthermore, this paper offers an efficient and effective information fusion security mechanism for multiple-source, multiple-type data sharing in V2X heterogeneous networks. An area-based PKI architecture accelerated by a Graphics Processing Unit (GPU) is given, in particular for artificial neural synchronization-based quick group key exchange. A parametric test is performed to ensure that the proposed data fusion trust solution meets the stringent delay requirements of V2X systems. The efficiency of the suggested method is tested, and the results show that it surpasses similar strategies already in use.
(This article belongs to the Special Issue Sensors and Sensor Fusion in Autonomous Vehicles)
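Artificial neural key synchronization of this kind is commonly built on the Tree Parity Machine, in which the parties exchange only an output bit and update their weights whenever the outputs agree, until the weight vectors coincide and can serve as the shared group key. Assuming that standard formulation (the symbols below are generic, not the paper's):

```latex
\begin{equation}
  \tau = \prod_{k=1}^{K} \sigma_k, \qquad
  \sigma_k = \operatorname{sgn}\!\left(\sum_{j=1}^{N} w_{kj}\,x_{kj}\right), \qquad
  w_{kj} \in \{-L,\dots,L\}, \quad x_{kj} \in \{-1,+1\}.
\end{equation}
```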

14 pages, 2830 KiB  
Article
A Shader-Based Ray Tracing Engine
by Sukjun Park and Nakhoon Baek
Appl. Sci. 2021, 11(7), 3264; https://doi.org/10.3390/app11073264 - 6 Apr 2021
Cited by 4 | Viewed by 5875
Abstract
Recently, ray tracing techniques have been widely adopted to produce high-quality images and animations. In this paper, we present our design and implementation of a real-time ray-traced rendering engine. We achieved real-time capability for triangle primitives, based on ray tracing techniques implemented in GPGPU (general-purpose graphics processing unit) compute shaders. To accelerate the ray tracing engine, we used a set of acceleration techniques, including a bounding volume hierarchy, its roped representation, joint up-sampling, and bilateral filtering. Our current implementation shows remarkable speed-ups with acceptable error values. Experimental results show 2.5–13.6 times acceleration and error values of less than 3% for the 95% confidence range. Our next step will be enhancing the bilateral filter behavior.
(This article belongs to the Collection Big Data Analysis and Visualization Ⅱ)
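The bounding volume hierarchy mentioned above is traversed by testing each ray against axis-aligned boxes before any triangle test. The "slab" test at the heart of that traversal is sketched below as a CUDA device function; the paper's engine runs in graphics-API compute shaders, so this port and the Ray structure are illustrative assumptions.

```cuda
// Ray vs. axis-aligned bounding box "slab" test, the inner loop of BVH
// traversal in a GPU ray tracer. inv_dir holds the precomputed reciprocal
// of the ray direction components.
struct Ray { float3 origin; float3 inv_dir; float t_max; };

__device__ bool intersect_aabb(const Ray& r, float3 bmin, float3 bmax)
{
    float t1 = (bmin.x - r.origin.x) * r.inv_dir.x;
    float t2 = (bmax.x - r.origin.x) * r.inv_dir.x;
    float tmin = fminf(t1, t2), tmax = fmaxf(t1, t2);

    t1 = (bmin.y - r.origin.y) * r.inv_dir.y;
    t2 = (bmax.y - r.origin.y) * r.inv_dir.y;
    tmin = fmaxf(tmin, fminf(t1, t2));
    tmax = fminf(tmax, fmaxf(t1, t2));

    t1 = (bmin.z - r.origin.z) * r.inv_dir.z;
    t2 = (bmax.z - r.origin.z) * r.inv_dir.z;
    tmin = fmaxf(tmin, fminf(t1, t2));
    tmax = fminf(tmax, fmaxf(t1, t2));

    // Hit if the entry point lies before both the exit point and the ray limit.
    return tmax >= fmaxf(tmin, 0.0f) && tmin <= r.t_max;
}
```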

29 pages, 1686 KiB  
Article
GPGPU Task Scheduling Technique for Reducing the Performance Deviation of Multiple GPGPU Tasks in RPC-Based GPU Virtualization Environments
by Jihun Kang and Heonchang Yu
Symmetry 2021, 13(3), 508; https://doi.org/10.3390/sym13030508 - 20 Mar 2021
Cited by 4 | Viewed by 4760
Abstract
In remote procedure call (RPC)-based graphics processing unit (GPU) virtualization environments, GPU tasks requested by multiple user virtual machines (VMs) are delivered to the VM owning the GPU and are processed in a multi-process form. However, because a thread executing computation on a general GPU cannot arbitrarily stop the task or trigger context switching, GPU monopolization may be prolonged owing to a long-running general-purpose computing on graphics processing units (GPGPU) task. Furthermore, when scheduling tasks on the GPU, the time for which each user VM uses the GPU is not considered. Thus, in cloud environments that must provide fair use of computing resources, equal use of the GPU among user VMs cannot be guaranteed. We propose a GPGPU task scheduling scheme based on thread division processing that supports even GPU use by multiple VMs processing GPGPU tasks in an RPC-based GPU virtualization environment. Our method divides the threads of a GPGPU task into several groups and controls the execution time of each thread group to prevent a specific GPGPU task from monopolizing the GPU for a long time. The efficiency of the proposed technique is verified through an experiment in an environment where multiple VMs simultaneously perform GPGPU tasks.
(This article belongs to the Section Computer)
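The thread-division idea, splitting a task's threads into groups whose execution time the scheduler can control, can be illustrated on the host side by launching a grid in slices of thread blocks instead of all at once, leaving points where other tasks' slices could be interleaved. The kernel, its arguments, and the group size below are assumptions, not the paper's mechanism.

```cuda
#include <algorithm>
#include <cuda_runtime.h>

// Stand-in for a guest VM's GPGPU work; block_offset shifts each slice.
__global__ void user_kernel(float* data, int n, int block_offset)
{
    int i = (blockIdx.x + block_offset) * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

// Launch the grid in groups of thread blocks so a scheduler can interleave
// other tasks' groups between slices instead of letting one task monopolize.
void launch_in_groups(float* d_data, int n, int threads, int blocks_per_group)
{
    int total_blocks = (n + threads - 1) / threads;
    for (int offset = 0; offset < total_blocks; offset += blocks_per_group) {
        int blocks = std::min(blocks_per_group, total_blocks - offset);
        user_kernel<<<blocks, threads>>>(d_data, n, offset);
        cudaDeviceSynchronize();   // a real scheduler would yield to other tasks here
    }
}
```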

30 pages, 21731 KiB  
Article
Wave Propagation Studies in Numerical Wave Tanks with Weakly Compressible Smoothed Particle Hydrodynamics
by Samarpan Chakraborty and Balakumar Balachandran
J. Mar. Sci. Eng. 2021, 9(2), 233; https://doi.org/10.3390/jmse9020233 - 22 Feb 2021
Cited by 4 | Viewed by 3750
Abstract
Generation and propagation of waves in a numerical wave tank constructed using Weakly Compressible Smoothed Particle Hydrodynamics (WCSPH) are considered here. Numerical wave tank simulations have been carried out with implementations of different Wendland kernels in conjunction with different numerical dissipation schemes. The simulations were accelerated by using General-Purpose Graphics Processing Unit (GPGPU) computing to exploit the massively parallel nature of the simulations and thus improve process efficiency. Numerical experiments with short domains have been carried out to validate the dissipation schemes used. The wave tank experiments consist of piston-type wavemakers and appropriate passive absorption arrangements to facilitate comparisons with theoretical predictions. The comparative performance of the different numerical wave tank experiments was assessed on the basis of the hydrostatic pressure and wave surface elevations. The effect of numerical dissipation with the different kernel functions was also studied on the basis of an energy analysis. Finally, the observations and results were used to arrive at the best possible numerical setup for the simulation of waves over medium and long propagation distances, which can play a significant role in the study of extreme waves and energy localizations observed in oceans through such numerical wave tank simulations.
(This article belongs to the Special Issue Dynamic Instability in Offshore Structures)
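For context, a commonly used member of the Wendland family in WCSPH is the C2 kernel below, written with smoothing length h and q = r/h; the paper compares several such kernels, so this is background rather than its specific choice.

```latex
\begin{equation}
  W(r,h) = \alpha_d \left(1 - \frac{q}{2}\right)^{4}\!\left(2q + 1\right),
  \qquad 0 \le q \le 2, \qquad
  \alpha_d = \frac{7}{4\pi h^{2}} \ \text{(2D)}, \quad
  \alpha_d = \frac{21}{16\pi h^{3}} \ \text{(3D)}.
\end{equation}
```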

9 pages, 1918 KiB  
Article
Performance Analysis of Thread Block Schedulers in GPGPU and Its Implications
by KyungWoon Cho and Hyokyung Bahn
Appl. Sci. 2020, 10(24), 9121; https://doi.org/10.3390/app10249121 - 20 Dec 2020
Cited by 2 | Viewed by 3135
Abstract
A GPGPU (General-Purpose Graphics Processing Unit) consists of hardware resources that can execute tens of thousands of threads simultaneously. However, in reality, the parallelism is limited, as resource allocation is performed by the base unit called a thread block, which is not managed judiciously in current GPGPU systems. To schedule threads in GPGPU, a specialized hardware scheduler allocates thread blocks to the computing units called SMs (Streaming Multiprocessors) in a Round-Robin manner. Although scheduling in hardware is simple and fast, we observe that Round-Robin scheduling is not efficient in GPGPU, as it does not consider the workload characteristics of threads or the resource balance among SMs. In this article, we present a new thread block scheduling model that is able to analyze and quantify the performance of thread block scheduling. We implement our model as a GPGPU scheduling simulator and show that the conventional thread block scheduling provided in GPGPU hardware does not perform well as the workload becomes heavy. Specifically, we observe that the performance degradation of Round-Robin can be eliminated by adopting DFA (Depth First Allocation), which is simple but scalable. Moreover, as our simulator is built in modular form on the framework and is publicly available for other researchers to use, various scheduling policies can be incorporated into it for evaluating the performance of GPGPU schedulers.
(This article belongs to the Special Issue Recent Advances in Sustainable Process Design and Optimization)
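The difference between the two policies can be sketched with a toy host-side assignment routine: Round-Robin cycles blocks across SMs, whereas Depth First Allocation fills one SM before moving to the next. The capacities and the handling of unplaced blocks below are simplifying assumptions, not the paper's simulator.

```cuda
#include <vector>

// Assign thread blocks to SMs under two policies. Blocks that do not fit are
// left unassigned (-1); a full simulator would queue them until an SM frees up.
std::vector<int> assign_blocks(int num_blocks, int num_sms,
                               int blocks_per_sm, bool depth_first)
{
    std::vector<int> sm_of_block(num_blocks, -1);
    std::vector<int> load(num_sms, 0);
    for (int b = 0; b < num_blocks; ++b) {
        int sm = depth_first ? b / blocks_per_sm   // DFA: fill SM 0, then SM 1, ...
                             : b % num_sms;        // Round-Robin: cycle over SMs
        if (sm < num_sms && load[sm] < blocks_per_sm) {
            sm_of_block[b] = sm;
            ++load[sm];
        }
    }
    return sm_of_block;
}
```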

22 pages, 7313 KiB  
Article
Three-Dimensional Combined Finite-Discrete Element Modeling of Shear Fracture Process in Direct Shearing of Rough Concrete–Rock Joints
by Gyeongjo Min, Daisuke Fukuda, Sewook Oh, Gyeonggyu Kim, Younghun Ko, Hongyuan Liu, Moonkyung Chung and Sangho Cho
Appl. Sci. 2020, 10(22), 8033; https://doi.org/10.3390/app10228033 - 12 Nov 2020
Cited by 15 | Viewed by 3999
Abstract
A three-dimensional combined finite-discrete element method (FDEM), parallelized on a general-purpose graphics processing unit (GPGPU), was applied to identify the fracture process of rough concrete–rock joints under direct shearing. The development of shear resistance under the complex interaction between the rough concrete–rock joint surfaces, i.e., asperity dilatation, sliding, and degradation, was numerically simulated for various asperity roughnesses under constant normal confinement. It was found that joint roughness significantly affects the development of the overall joint shear resistance. The main mechanism of joint shear resistance was identified as asperity sliding in the case of smoother joint roughness and asperity degradation in the case of rougher joint asperities. Moreover, it was established that the bulk internal friction angle in the Mohr–Coulomb criterion increased with increasing asperity angle, and these results follow Patton's theoretical model. Finally, the friction coefficient in FDEM appears to be an important parameter for simulating the direct shear test, because the friction coefficient affects the bulk shear strength as well as the bulk internal friction angle. In addition, the friction coefficient of the concrete–rock joints contributes more to the variation of the internal friction angle for the smooth joint than for the rough joint.
(This article belongs to the Special Issue Fracture Mechanics – Theory, Modeling and Applications)
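For reference, Patton's bilinear model relates the peak shear strength of a rough joint to the basic friction angle φ_b and the asperity angle i, switching from asperity sliding to asperity shearing above a transition normal stress σ_T; the symbols are the standard ones, not values reported in the paper.

```latex
\begin{equation}
  \tau =
  \begin{cases}
    \sigma_n \tan(\phi_b + i), & \sigma_n < \sigma_T \quad \text{(asperity sliding)},\\
    c_j + \sigma_n \tan\phi_r, & \sigma_n \ge \sigma_T \quad \text{(asperity shearing)},
  \end{cases}
\end{equation}
```

where c_j is the apparent cohesion of the sheared asperities and φ_r the residual friction angle.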

18 pages, 5144 KiB  
Article
Prediction-Based Error Correction for GPU Reliability with Low Overhead
by Hyunyul Lim, Tae Hyun Kim and Sungho Kang
Electronics 2020, 9(11), 1849; https://doi.org/10.3390/electronics9111849 - 5 Nov 2020
Cited by 3 | Viewed by 3329
Abstract
Scientific and simulation applications are continuously gaining importance in many fields of research and industry. These applications require massive amounts of memory and substantial arithmetic computation. Therefore, general-purpose computing on graphics processing units (GPGPU), which combines the computing power of graphics processing units (GPUs) and general CPUs, has been used for computationally intensive scientific and big data processing applications. Because current GPU architectures lack hardware support for error detection in computation logic, GPGPU has low reliability. Unlike in graphics applications, errors in GPGPU can lead to serious problems in general-purpose computing applications. These applications are often intertwined with human life, meaning that errors can be life-threatening. Therefore, this paper proposes a novel prediction-based error correction method, called Prediction-based Error Correction (PRECOR), for GPU reliability, which detects and corrects errors in GPGPU platforms with a focus on errors in computational elements. The proposed architecture needs only a small number of checkpoint buffers to fix errors in computational logic. The PRECOR architecture has prediction buffers and controller units for predicting erroneous outputs before performing a rollback. Following a rollback, the architecture confirms the accuracy of its predictions. The proposed method effectively reduces the hardware and time overheads required to correct errors. Experimental results confirm that PRECOR fixes errors efficiently with low hardware and time overheads.

29 pages, 2722 KiB  
Article
Simulation of Fire with a Gas Kinetic Scheme on Distributed GPGPU Architectures
by Stephan Lenz, Martin Geier and Manfred Krafczyk
Computation 2020, 8(2), 50; https://doi.org/10.3390/computation8020050 - 26 May 2020
Cited by 4 | Viewed by 4304
Abstract
The simulation of fire is a challenging task due to its occurrence on multiple space-time scales and the non-linear interaction of multiple physical processes. Current state-of-the-art software such as the Fire Dynamics Simulator (FDS) implements most of the required physics, yet a significant drawback of this implementation is its limited scalability on modern massively parallel hardware. The current paper presents a massively parallel implementation of a Gas Kinetic Scheme (GKS) on General-Purpose Graphics Processing Units (GPGPUs) as a potential alternative modeling and simulation approach. The implementation is validated for turbulent natural convection against experimental data. Subsequently, it is validated for two simulations of fire plumes, including a small-scale tabletop setup and a fire on the scale of a few meters. We show that the present GKS achieves accuracy comparable to the results obtained by FDS. Yet, due to its parallel efficiency on dedicated hardware, our GKS implementation reduces wall-clock times by more than an order of magnitude. This paper demonstrates the potential of explicit local schemes in massively parallel environments for the simulation of fire.
(This article belongs to the Section Computational Engineering)
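Gas kinetic schemes build their numerical fluxes from a local solution of the BGK relaxation model, shown below in generic notation with particle distribution function f, local Maxwellian equilibrium g, particle velocity u, and relaxation time τ; this is standard GKS background rather than the paper's specific discretization.

```latex
\begin{equation}
  \frac{\partial f}{\partial t} + \mathbf{u} \cdot \nabla_{\mathbf{x}} f
  = \frac{g - f}{\tau}.
\end{equation}
```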

32 pages, 9964 KiB  
Article
Free-Surface Effects on the Performance of Flapping-Foil Thruster for Augmenting Ship Propulsion in Waves
by Evangelos S. Filippas, George P. Papadakis and Kostas A. Belibassakis
J. Mar. Sci. Eng. 2020, 8(5), 357; https://doi.org/10.3390/jmse8050357 - 19 May 2020
Cited by 32 | Viewed by 4443
Abstract
Flapping foils located beneath or to the side of the hull of a ship can be used as unsteady thrusters, augmenting ship propulsion in waves. The basic setup is composed of a horizontal wing, which undergoes an induced vertical motion due to the ship's responses in waves, while the self-pitching motion of the wing is controlled. Flapping foil thrusters can achieve high levels of thrust, as indicated by measurements and numerical simulations. Due to the relatively small submergence of these biomimetic ship thrusters, free-surface effects become significant. In the present work, the effect of the free surface on the performance of the flapping foil thruster is assessed by means of two in-house developed computational models. On the one hand, a cost-effective time-domain boundary element method (BEM) solver exploiting parallel programming techniques and general-purpose computing on graphics processing units (GPGPU) is employed, while on the other hand a higher-fidelity RANSE finite volume solver implemented for high-performance computing (HPC) is used, and comparative results are presented. The BEM and RANSE calculations present quite similar trends with respect to mean submergence depth, with differences of 12%, 28%, and 18% in the mean values of the lift, thrust, and moment coefficients, respectively. The latter differences become very small after enhancement of the BEM model to include viscous corrections. Useful information and data are derived supporting the design of the considered biomimetic thrusters for moderate submergence depths and conditions characterized by minor flow separation effects.
(This article belongs to the Special Issue Propulsion of Ships in Waves)
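In idealized form, flapping-foil thrust studies describe the motion by a heave and a phase-shifted pitch oscillation and characterize the operating point by the Strouhal number; in the paper the heave is induced by the ship's wave responses rather than prescribed, so the harmonic form and symbols below are a simplifying assumption.

```latex
\begin{equation}
  h(t) = h_0 \sin(\omega t), \qquad
  \theta(t) = \theta_0 \sin(\omega t + \psi), \qquad
  St = \frac{2 f h_0}{U},
\end{equation}
```

where ψ is the pitch–heave phase (typically near 90°), f = ω/2π, and U the forward speed.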

23 pages, 3659 KiB  
Article
A Pragmatic Approach to the Design of Advanced Precision Terrain-Aided Navigation for UAVs and Its Verification
by Jungshin Lee, Chang-Ky Sung, Juhyun Oh, Kyungjun Han, Sangwoo Lee and Myeong-Jong Yu
Remote Sens. 2020, 12(9), 1396; https://doi.org/10.3390/rs12091396 - 28 Apr 2020
Cited by 9 | Viewed by 3987
Abstract
Autonomous unmanned aerial vehicles (UAVs) require highly reliable navigation information. Generally, navigation systems combining the inertial navigation system (INS) and the global navigation satellite system (GNSS) have been widely used. However, the GNSS is vulnerable to jamming and spoofing. The terrain referenced navigation (TRN) technique can be used to solve this problem. In this study, to obtain reliable navigation information even if a GNSS is unavailable or the degree of terrain roughness is not determined, we propose a federated-filter-based INS/GNSS/TRN integrated navigation system. We also introduce a TRN system that combines batch processing and an auxiliary particle filter to ensure stable flight of UAVs even in a long-term GNSS-denied environment. As the altimeter sensor for the TRN system, an interferometric radar altimeter (IRA) is used to obtain reliable navigation accuracy in high-altitude flight. In addition, a parallel computing technique with general-purpose computing on graphics processing units (GPGPU) is applied to process a high-resolution terrain database and a nonlinear filter in real time on board. Finally, the performance of the proposed system is verified through software-in-the-loop (SIL) tests and captive flight tests in a GNSS-unavailable environment.
(This article belongs to the Section Engineering Remote Sensing)
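A TRN particle filter evaluates, for every hypothesized position, how well the clearance predicted from the terrain database matches the radar-altimeter measurement, which is an embarrassingly parallel per-particle operation and the natural target for the GPGPU acceleration mentioned above. The kernel below is a minimal sketch of that weight update; the names, the Gaussian likelihood, and the pre-sampled terrain heights are assumptions, not the paper's implementation.

```cuda
// Per-particle measurement update: weight each hypothesis by how well the
// predicted ground clearance (altitude minus terrain height at the particle's
// position) matches the IRA measurement. Illustrative sketch only.
__global__ void trn_weight_update(const float* terrain_elev,  // terrain height sampled at each particle
                                  float* weight, int num_particles,
                                  float vehicle_altitude,      // e.g., from the INS solution
                                  float measured_clearance,    // IRA ground clearance
                                  float sigma)                 // measurement noise std. dev.
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= num_particles) return;
    float predicted = vehicle_altitude - terrain_elev[i];
    float r = measured_clearance - predicted;                  // innovation
    weight[i] *= expf(-0.5f * r * r / (sigma * sigma));        // Gaussian likelihood
}
```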
