Search Results (41)

Search Parameters:
Keywords = computer graphics pipeline

23 pages, 2255 KB  
Article
Design and Implementation of a YOLOv2 Accelerator on a Zynq-7000 FPGA
by Huimin Kim and Tae-Kyoung Kim
Sensors 2025, 25(20), 6359; https://doi.org/10.3390/s25206359 - 14 Oct 2025
Abstract
You Only Look Once (YOLO) is a convolutional neural network-based object detection algorithm widely used in real-time vision applications. However, its high computational demand leads to significant power consumption and cost when deployed on graphics processing units. Field-programmable gate arrays offer a low-power alternative. However, their efficient implementation requires architecture-level optimization tailored to limited device resources. This study presents an optimized YOLOv2 accelerator for the Zynq-7000 system-on-chip (SoC). The design employs 16-bit integer quantization, a filter reuse structure, an input feature map reuse scheme using a line buffer, and tiling parameter optimization for the convolution and max pooling layers to maximize resource efficiency. In addition, a stall-based control mechanism is introduced to prevent structural hazards in the pipeline. The proposed accelerator was implemented on the Zynq-7000 SoC board, and a system-level evaluation confirmed a negligible accuracy drop of only 0.2% compared with the 32-bit floating-point baseline. Compared with previous YOLO accelerators on the same SoC, the design achieved up to 26% and 15% reductions in flip-flop and digital signal processor usage, respectively. These results demonstrate the feasibility of deployment on the XC7Z020, with 57.27% DSP and 16.55% FF utilization. Full article
(This article belongs to the Special Issue Object Detection and Recognition Based on Deep Learning)
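The 16-bit integer quantization step mentioned in the abstract can be made concrete with a minimal sketch (Python with NumPy; the fixed scale factor and example weights are assumptions for illustration, not details from the article):

```python
import numpy as np

def quantize_int16(x, scale):
    """Quantize float32 values to 16-bit integers with a fixed scale factor."""
    return np.clip(np.round(x / scale), -32768, 32767).astype(np.int16)

def dequantize_int16(q, scale):
    """Map 16-bit integers back to floating point for accuracy comparison."""
    return q.astype(np.float32) * scale

# Example: quantize a small weight tensor and measure the round-trip error.
weights = np.array([0.5, -1.25, 0.003, 1.99], dtype=np.float32)
scale = 2.0 / 32768  # hypothetical scale chosen for a [-2, 2) dynamic range
restored = dequantize_int16(quantize_int16(weights, scale), scale)
max_err = float(np.max(np.abs(weights - restored)))
```

With a well-chosen scale, the round-trip error stays below one quantization step, which is the intuition behind the reported 0.2% accuracy drop versus the 32-bit baseline.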

16 pages, 5738 KB  
Article
Image-Processing-Driven Modeling and Reconstruction of Traditional Patterns via Dual-Channel Detection and B-Spline Analysis
by Xuemei He, Siyi Chen, Yin Kuang and Xinyue Yang
J. Imaging 2025, 11(10), 349; https://doi.org/10.3390/jimaging11100349 - 7 Oct 2025
Viewed by 290
Abstract
This study aims to address the research gap in the digital analysis of traditional patterns by proposing an image-processing-driven parametric modeling method that combines graphic primitive function modeling with topological reconstruction. The image is processed using a dual-channel image processing algorithm (Canny edge detection and grayscale mapping) to extract and vectorize graphic primitives. These primitives are uniformly represented using B-spline curves, with variations generated through parametric control. A topological reconstruction approach is introduced, incorporating mapped geometric parameters, topological combination rules, and geometric adjustments to output topological configurations. The generated patterns are evaluated using fractal dimension analysis for complexity quantification and applied in cultural heritage imaging practice. The proposed image processing pipeline enables flexible parametric control and continuous structural integration of the graphic primitives and demonstrates high reproducibility and expandability. This study establishes a novel computational framework for traditional patterns, offering a replicable technical pathway that integrates image processing, parametric modeling, and topological reconstruction for digital expression, stylistic innovation, and heritage conservation. Full article
(This article belongs to the Section Computational Imaging and Computational Photography)
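The B-spline representation used for the graphic primitives can be sketched with a single uniform cubic segment (a simplified stand-in for the article's method; the control points are invented for illustration):

```python
import numpy as np

def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one segment of a uniform cubic B-spline at t in [0, 1]."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    return b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3

# Moving one control point deforms the curve only locally, which is what
# makes parametric variation of a vectorized primitive cheap.
pts = [np.array(p, dtype=float) for p in [(0, 0), (1, 2), (3, 2), (4, 0)]]
mid = cubic_bspline_point(*pts, t=0.5)
```

The four basis weights sum to 1 for every t (partition of unity), so the curve always stays inside the convex hull of its control points.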

20 pages, 21741 KB  
Article
SegGen: An Unreal Engine 5 Pipeline for Generating Multimodal Semantic Segmentation Datasets
by Justin McMillen and Yasin Yilmaz
Sensors 2025, 25(17), 5569; https://doi.org/10.3390/s25175569 - 6 Sep 2025
Viewed by 1104
Abstract
Synthetic data has become an increasingly important tool for semantic segmentation, where collecting large-scale annotated datasets is often costly and impractical. Prior work has leveraged computer graphics and game engines to generate training data, but many pipelines remain limited to single modalities and constrained environments or require substantial manual setup. To address these limitations, we present a fully automated pipeline built within Unreal Engine 5 (UE5) that procedurally generates diverse, labeled environments and collects multimodal visual data for semantic segmentation tasks. Our system integrates UE5’s biome-based procedural generation framework with a spline-following drone actor capable of capturing both RGB and depth imagery, alongside pixel-perfect semantic segmentation labels. As a proof of concept, we generated a dataset consisting of 1169 samples across two visual modalities and seven semantic classes. The pipeline supports scalable expansion and rapid environment variation, enabling high-throughput synthetic data generation with minimal human intervention. To validate our approach, we trained benchmark computer vision segmentation models on the synthetic dataset and demonstrated their ability to learn meaningful semantic representations. This work highlights the potential of game-engine-based data generation to accelerate research in multimodal perception and provide reproducible, scalable benchmarks for future segmentation models. Full article
(This article belongs to the Section Sensing and Imaging)
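Pixel-perfect semantic labels are typically exported as color-coded masks and then mapped to integer class ids for training; a minimal sketch (the palette and class meanings are assumptions, not SegGen's actual encoding):

```python
import numpy as np

# Hypothetical class palette; the real pipeline would define one per biome.
PALETTE = {(0, 0, 0): 0, (0, 255, 0): 1, (255, 0, 0): 2}

def mask_to_ids(rgb_mask):
    """Convert a color-coded segmentation mask (H, W, 3) to class ids (H, W)."""
    ids = np.zeros(rgb_mask.shape[:2], dtype=np.int64)
    for color, cls in PALETTE.items():
        ids[np.all(rgb_mask == np.array(color), axis=-1)] = cls
    return ids

mask = np.zeros((2, 2, 3), dtype=np.uint8)
mask[0, 0] = (0, 255, 0)   # e.g. vegetation
mask[1, 1] = (255, 0, 0)   # e.g. obstacle
ids = mask_to_ids(mask)
```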

33 pages, 19016 KB  
Article
Multitask Learning-Based Pipeline-Parallel Computation Offloading Architecture for Deep Face Analysis
by Faris S. Alghareb and Balqees Talal Hasan
Computers 2025, 14(1), 29; https://doi.org/10.3390/computers14010029 - 20 Jan 2025
Cited by 1 | Viewed by 2145
Abstract
Deep Neural Networks (DNNs) have been widely adopted in several advanced artificial intelligence applications due to their accuracy, which is competitive with that of the human brain. Nevertheless, the superior accuracy of a DNN is achieved at the expense of intensive computations and storage complexity, requiring custom expandable hardware, i.e., graphics processing units (GPUs). Interestingly, leveraging the synergy of parallelism and edge computing can significantly improve CPU-based hardware platforms. Therefore, this manuscript explores levels of parallelism techniques along with edge computation offloading to develop an innovative hardware platform that improves the efficacy of deep learning computing architectures. Furthermore, the multitask learning (MTL) approach is employed to construct a parallel multi-task classification network. These tasks include face detection and recognition, age estimation, gender recognition, smile detection, and hair color and style classification. Additionally, both pipeline and parallel processing techniques are utilized to expedite complicated computations, boosting the overall performance of the presented deep face analysis architecture. A computation offloading approach, on the other hand, is leveraged to distribute computation-intensive tasks to the server edge, whereas lightweight computations are offloaded to edge devices, i.e., Raspberry Pi 4. To train the proposed deep face analysis network architecture, two custom datasets (HDDB and FRAED) were created for head detection and face-age recognition. Extensive experimental results demonstrate the efficacy of the proposed pipeline-parallel architecture in terms of execution time. It requires 8.2 s to provide detailed face detection and analysis for an individual and 23.59 s for an inference containing 10 individuals. Moreover, a speedup of 62.48% is achieved compared to the sequential-based edge computing architecture. Meanwhile, a 25.96% acceleration is realized when implementing the proposed pipeline-parallel architecture only on the server edge, compared to the server sequential implementation. Considering classification efficiency, the proposed classification modules achieve an accuracy of 88.55% for hair color and style classification and a remarkable prediction outcome of 100% for face recognition and age estimation. To summarize, the proposed approach can assist in reducing the required execution time and memory capacity by processing all facial tasks simultaneously on a single deep neural network rather than building a CNN model for each task. Therefore, the presented pipeline-parallel architecture can be a cost-effective framework for real-time computer vision applications implemented on resource-limited devices. Full article
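The pipeline idea, overlapping successive processing stages on different workers, can be sketched with two threaded stages joined by a queue; the stage functions below are stubs standing in for the paper's detection and attribute-classification tasks:

```python
import queue
import threading

def run_pipeline(items, stage1, stage2):
    """Run two processing stages concurrently, pipeline-style."""
    q = queue.Queue()
    results = []

    def producer():
        for item in items:
            q.put(stage1(item))
        q.put(None)  # sentinel: no more work

    def consumer():
        while True:
            item = q.get()
            if item is None:
                break
            results.append(stage2(item))

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    return results

# Stub stages: stage 1 might crop a face, stage 2 classify its attributes.
out = run_pipeline(range(5), lambda x: x * 2, lambda x: x + 1)
```

While the consumer classifies item k, the producer is already working on item k+1, which is the source of the pipeline speedup over a purely sequential run.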

34 pages, 9890 KB  
Article
Synchronized Delay Measurement of Multi-Stream Analysis over Data Concentrator Units
by Anvarjon Yusupov, Sun Park and JongWon Kim
Electronics 2025, 14(1), 81; https://doi.org/10.3390/electronics14010081 - 27 Dec 2024
Viewed by 1732
Abstract
Autonomous vehicles (AVs) rely heavily on multi-modal sensors to perceive their surroundings and make real-time decisions. However, the increasing complexity of these sensors, combined with the computational demands of AI models and the challenges of synchronizing data across multiple inputs, presents significant obstacles for AV systems. In the AV domain, these challenges often introduce latency that delays decision-making and can cause major traffic accidents. The data concentrator unit (DCU) concept addresses these issues by optimizing data pipelines and implementing intelligent control mechanisms to process sensor data efficiently. Identifying and addressing bottlenecks that contribute to latency can enhance system performance, reducing the need for costly hardware upgrades or advanced AI models. This paper introduces a delay measurement tool for multi-node analysis, enabling synchronized monitoring of data pipelines across connected hardware platforms, such as clock-synchronized DCUs. The proposed tool traces the execution flow of software applications and assesses time delays at various stages of the data pipeline in clock-synchronized hardware. The various stages are represented with an intuitive graphical visualization, simplifying the identification of performance bottlenecks. Full article
(This article belongs to the Special Issue Advancements in Connected and Autonomous Vehicles)
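At its core, a delay measurement tool of this kind timestamps each pipeline stage on a common clock and differences consecutive marks; a minimal single-node sketch (stage names and the sleep standing in for decoding work are illustrative, not the paper's tool):

```python
import time

class StageTracer:
    """Record monotonic timestamps per pipeline stage and derive inter-stage delays."""
    def __init__(self):
        self.marks = []  # (stage_name, timestamp) pairs in arrival order

    def mark(self, stage):
        self.marks.append((stage, time.monotonic()))

    def delays(self):
        """Delay between consecutive stages, keyed by 'stage_a->stage_b'."""
        out = {}
        for (a, ta), (b, tb) in zip(self.marks, self.marks[1:]):
            out[f"{a}->{b}"] = tb - ta
        return out

tracer = StageTracer()
tracer.mark("capture")
time.sleep(0.01)   # stand-in for sensor decoding work
tracer.mark("decode")
tracer.mark("publish")
d = tracer.delays()
```

Extending this across DCUs is exactly where the clock synchronization the abstract mentions becomes necessary: timestamps from different nodes are only comparable if their clocks agree.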

14 pages, 3424 KB  
Article
Directorial Editing: A Hybrid Deep-Learning Approach to Content-Aware Image Retargeting and Resizing
by Elliot Dickman and Paul Diefenbach
Electronics 2024, 13(22), 4459; https://doi.org/10.3390/electronics13224459 - 14 Nov 2024
Viewed by 1316
Abstract
Image retargeting is a common computer graphics task which involves manipulating the size or aspect ratio of an image. This task often presents a challenge to the artist or user, because manipulating the size of an image necessitates some degree of data loss as pixels need to be removed to accommodate a different image size. We present an image retargeting framework which implements a confidence map generated by a segmentation model for content-aware resizing, allowing users to specify which subjects in an image to preserve using natural language prompts much like the role of an art director conversing with their artist. Using computer vision models to detect object positions also provides additional control over the composition of the retargeted image at various points in the image-processing pipeline. This object-based approach to energy map augmentation is incredibly flexible, because only minor adjustments to the processing of the energy maps can provide a significant degree of control over where seams—paths of pixels through the image—are removed, and how seam removal is prioritized in different sections of the image. It also provides additional control with techniques for object and background separation and recomposition. This research explores how several different types of deep-learning models can be integrated into this pipeline in order to easily make these decisions, and provide different retargeting results on the same image based on user input and compositional considerations. Because this is a framework based on existing machine-learning models, this approach will benefit from advancements in the rapidly developing fields of computer vision and large language models and can be extended for further natural language directorial controls over images. Full article
(This article belongs to the Special Issue Deep Learning in Image Processing and Computer Vision)
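The energy-map augmentation described above can be sketched in a few lines: compute a gradient-magnitude energy map, then add a large penalty under the segmentation confidence mask so seams avoid the protected subject (the boost constant and toy image are assumptions for illustration):

```python
import numpy as np

def augment_energy(image_gray, protect_mask, boost=1000.0):
    """Gradient-magnitude energy map, with protected pixels made expensive to remove."""
    gy, gx = np.gradient(image_gray.astype(float))
    energy = np.abs(gx) + np.abs(gy)
    # Seam carving removes low-energy paths first, so raising the energy
    # under the confidence/protection mask steers seams away from the subject.
    return energy + boost * protect_mask

img = np.tile(np.arange(8, dtype=float), (8, 1))  # simple horizontal ramp
mask = np.zeros((8, 8))
mask[2:5, 2:5] = 1.0  # region the "art director" asked to preserve
e = augment_energy(img, mask)
```

The same mechanism works in reverse: subtracting energy inside a mask prioritizes removal there, which is how object deletion variants of seam carving are usually built.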

16 pages, 3298 KB  
Article
The “Dry-Lab” Side of Food Authentication: Benchmark of Bioinformatic Pipelines for the Analysis of Metabarcoding Data
by Gabriele Spatola, Alice Giusti and Andrea Armani
Foods 2024, 13(13), 2102; https://doi.org/10.3390/foods13132102 - 1 Jul 2024
Cited by 1 | Viewed by 1981
Abstract
Next Generation Sequencing Technologies (NGS), particularly metabarcoding, are valuable tools for authenticating foodstuffs and detecting possible fraudulent practices such as species substitution. This technique, mostly used for the analysis of prokaryotes in several environments (including food), is in fact increasingly applied to identify eukaryotes (e.g., fish, mammals, and birds) in multispecies food products. Besides the “wet-lab” procedures (e.g., DNA extraction, PCR, and amplicon purification), the metabarcoding workflow includes a final “dry-lab” phase in which sequencing data are analyzed using a bioinformatic pipeline (BP). BPs play a crucial role in the accuracy, reliability, and interpretability of the metabarcoding results. Choosing the most suitable BP for the analysis of metabarcoding data could be challenging because it might require greater informatics skills than those needed in standard molecular analysis. To date, studies comparing BPs for metabarcoding data analysis in foodstuff authentication are scarce. In this study, we compared the data obtained from two previous studies in which fish burgers and insect-based products were authenticated using a customizable, ASV-based, and command-line interface BP (BP1) by analyzing the same data with a customizable but OTU-based and graphical user interface BP (BP2). The final sample compositions were compared statistically. No significant difference in sample compositions was highlighted by applying BP1 and BP2. However, BP1 was considered more user-friendly than BP2 with respect to data analysis streamlining, cost of analysis, and computational time consumption. This study can provide useful information for researchers approaching the bioinformatic analysis of metabarcoding data for the first time. In the field of food authentication, an effective and efficient use of BPs could be especially useful in the context of official controls performed by the Competent Authorities and companies’ self-control in order to detect species substitution and counterfeit frauds. Full article
(This article belongs to the Section Food Analytical Methods)

31 pages, 10402 KB  
Article
Virtual Experience Toolkit: An End-to-End Automated 3D Scene Virtualization Framework Implementing Computer Vision Techniques
by Pau Mora, Clara Garcia, Eugenio Ivorra, Mario Ortega and Mariano L. Alcañiz
Sensors 2024, 24(12), 3837; https://doi.org/10.3390/s24123837 - 13 Jun 2024
Cited by 2 | Viewed by 2930
Abstract
Virtualization plays a critical role in enriching the user experience in Virtual Reality (VR) by offering heightened realism, increased immersion, safer navigation, and newly achievable levels of interaction and personalization, specifically in indoor environments. Traditionally, the creation of virtual content has fallen into one of two broad categories: manual methods crafted by graphic designers, which are labor-intensive and sometimes lack precision; or traditional Computer Vision (CV) and Deep Learning (DL) frameworks, which frequently result in semi-automatic and complex solutions that lack a unified framework for both 3D reconstruction and scene understanding, often miss a fully interactive representation of the objects, and neglect their appearance. To address these diverse challenges and limitations, we introduce the Virtual Experience Toolkit (VET), an automated and user-friendly framework that utilizes DL and advanced CV techniques to efficiently and accurately virtualize real-world indoor scenarios. The key features of VET are the use of ScanNotate, a retrieval and alignment tool that enhances the precision and efficiency of its precursor, supported by upgrades such as a preprocessing step that makes it fully automatic and a preselection of a reduced list of CAD models that speeds up the process, and the implementation of a user-friendly and fully automatic Unity3D application that guides users through the whole pipeline and concludes in a fully interactive and customizable 3D scene. The efficacy of VET is demonstrated using a diversified dataset of virtualized 3D indoor scenarios, supplementing the ScanNet dataset. Full article

13 pages, 3960 KB  
Article
Visualization Program Design for Complex Piping Systems in Marine Engine Simulation Systems
by Xiaoyu Wu, Zhibin He, Zhenghao Wei, Qi Zhang and Zhibo Fan
Appl. Sci. 2024, 14(6), 2497; https://doi.org/10.3390/app14062497 - 15 Mar 2024
Viewed by 1668
Abstract
This study is dedicated to the development of an advanced ship piping network programming tool to address the challenges faced by traditional text-based design and computation methods when dealing with complex and large-data-volume piping systems, such as burdensome programming tasks, high error rates, and difficulty in troubleshooting faults. Leveraging Microsoft’s WPF technology and the C# language, combined with Excel as a data input platform, this tool provides an intuitive graphical user interface, allowing users to intuitively build and analyze ship piping network models by dragging and dropping controls. The tool not only simplifies the design process of complex piping systems but also significantly improves efficiency and accuracy through automated data processing and calculations. It supports user customization of key pipeline characteristics, such as maximum flow and direction, further enhancing the applicability and accuracy of the piping network model. In addition, with optimized interaction design and data management methods, the tool significantly reduces the learning difficulty for users, while improving the reliability of design and efficiency of troubleshooting. The results of this study show the tool not only technically outperforms traditional methods but also provides a new efficient, intuitive, and user-friendly tool for the teaching and engineering applications of ship piping networks, paving a new path for the design and optimization of ship piping network systems, with significant practical application value and theoretical significance. Looking forward, this tool is expected to play a broader role in the instruction and industrial practices associated with ship piping networks, moving the field toward more efficient and intelligent development. Full article
(This article belongs to the Section Marine Science and Engineering)

17 pages, 17840 KB  
Article
User-Centered Pipeline for Synthetic Augmentation of Anomaly Detection Datasets
by Alexander Rosbak-Mortensen, Marco Jansen, Morten Muhlig, Mikkel Bjørndahl Kristensen Tøt and Ivan Nikolov
Computers 2024, 13(3), 70; https://doi.org/10.3390/computers13030070 - 8 Mar 2024
Viewed by 2651
Abstract
Automatic anomaly detection plays a critical role in surveillance systems but requires datasets comprising large amounts of annotated data to train and evaluate models. Gathering and annotating these data is a labor-intensive task that can become costly. A way to circumvent this is to use synthetic data to augment anomalies directly into existing datasets. This far more diverse scenario can be created and come directly with annotations. This however also poses new issues for the computer-vision engineer and researcher end users, who are not readily familiar with 3D modeling, game development, or computer graphics methodologies and must rely on external specialists to use or tweak such pipelines. In this paper, we extend our previous work of an application that synthesizes dataset variations using 3D models and augments anomalies on real backgrounds using the Unity Engine. We developed a high-usability user interface for our application through a series of RITE experiments and evaluated the final product with the help of deep-learning specialists who provided positive feedback regarding its usability, accessibility, and user experience. Finally, we tested if the proposed solution can be used in the context of traffic surveillance by augmenting the train data from the challenging Street Scene dataset. We found that by using our synthetic data, we could achieve higher detection accuracy. We also propose the next steps to expand the proposed solution for better usability and render accuracy through the use of segmentation pre-processing. Full article
(This article belongs to the Special Issue Selected Papers from Computer Graphics & Visual Computing (CGVC 2023))
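The core augmentation step, compositing a rendered anomaly onto a real background frame, reduces to plain alpha blending; the patch, mask, and placement below are invented for illustration and are not taken from the paper's Unity pipeline:

```python
import numpy as np

def composite(background, anomaly, alpha_mask, top, left):
    """Alpha-blend a rendered anomaly patch onto a real background frame."""
    out = background.astype(float).copy()
    h, w = anomaly.shape[:2]
    region = out[top:top + h, left:left + w]
    a = alpha_mask[..., None]  # broadcast alpha over the color channels
    out[top:top + h, left:left + w] = a * anomaly + (1 - a) * region
    return out.astype(np.uint8)

bg = np.full((10, 10, 3), 100, dtype=np.uint8)      # stand-in background frame
patch = np.full((4, 4, 3), 200, dtype=np.uint8)     # stand-in rendered anomaly
alpha = np.ones((4, 4))                              # opaque anomaly silhouette
frame = composite(bg, patch, alpha, top=3, left=3)
```

Because the anomaly's silhouette is known at render time, the same mask doubles as a free pixel-accurate annotation, which is the main economic argument for synthetic augmentation.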

16 pages, 26655 KB  
Article
VSpipe-GUI, an Interactive Graphical User Interface for Virtual Screening and Hit Selection
by Rashid Hussain, Andrew Scott Hackett, Sandra Álvarez-Carretero and Lydia Tabernero
Int. J. Mol. Sci. 2024, 25(4), 2002; https://doi.org/10.3390/ijms25042002 - 7 Feb 2024
Cited by 1 | Viewed by 2077
Abstract
Virtual screening of large chemical libraries is essential to support computer-aided drug development, providing a rapid and low-cost approach for further experimental validation. However, existing computational packages are often for specialised users or platform limited. Previously, we developed VSpipe, an open-source semi-automated pipeline for structure-based virtual screening. We have now improved and expanded the initial command-line version into an interactive graphical user interface: VSpipe-GUI, a cross-platform open-source Python toolkit functional in various operating systems (e.g., Linux distributions, Windows, and Mac OS X). The new implementation is more user-friendly and accessible, and considerably faster than the previous version when AutoDock Vina is used for docking. Importantly, we have introduced a new compound selection module (i.e., spatial filtering) that allows filtering of docked compounds based on specified features at the target binding site. We have tested the new VSpipe-GUI on the Hepatitis C Virus NS3 (HCV NS3) protease as the target protein. The pocket-based and interaction-based modes of the spatial filtering module showed efficient and specific selection of ligands from the virtual screening that interact with the HCV NS3 catalytic serine 139. Full article
(This article belongs to the Special Issue Drug Discovery of Compounds by Structural Design)
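Spatial filtering of docked compounds reduces to a distance test between pose atoms and a point at the target binding site; a minimal pocket-style sketch (the coordinates and the 4 Å cutoff are assumptions, not values from the article):

```python
import numpy as np

def spatial_filter(poses, anchor, cutoff=4.0):
    """Keep docked poses with at least one atom within `cutoff` of `anchor`.

    poses: list of (n_atoms, 3) coordinate arrays; anchor: (3,) site atom.
    """
    kept = []
    for pose in poses:
        dists = np.linalg.norm(np.asarray(pose) - np.asarray(anchor), axis=1)
        if dists.min() <= cutoff:
            kept.append(pose)
    return kept

# Hypothetical coordinates standing in for, e.g., a catalytic-residue atom.
anchor = np.array([0.0, 0.0, 0.0])
near = np.array([[1.0, 1.0, 1.0], [8.0, 0.0, 0.0]])
far = np.array([[9.0, 9.0, 9.0]])
hits = spatial_filter([near, far], anchor)
```

An interaction-based mode would replace the single anchor with a set of site atoms and per-interaction cutoffs, but the filtering logic stays the same.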

16 pages, 2416 KB  
Article
MD–Ligand–Receptor: A High-Performance Computing Tool for Characterizing Ligand–Receptor Binding Interactions in Molecular Dynamics Trajectories
by Michele Pieroni, Francesco Madeddu, Jessica Di Martino, Manuel Arcieri, Valerio Parisi, Paolo Bottoni and Tiziana Castrignanò
Int. J. Mol. Sci. 2023, 24(14), 11671; https://doi.org/10.3390/ijms241411671 - 19 Jul 2023
Cited by 31 | Viewed by 5599
Abstract
Molecular dynamics simulation is a widely employed computational technique for studying the dynamic behavior of molecular systems over time. By simulating macromolecular biological systems consisting of a drug, a receptor and a solvated environment with thousands of water molecules, MD allows for realistic ligand–receptor binding interactions (lrbi) to be studied. In this study, we present MD–ligand–receptor (MDLR), a state-of-the-art software designed to explore the intricate interactions between ligands and receptors over time using molecular dynamics trajectories. Unlike traditional static analysis tools, MDLR goes beyond simply taking a snapshot of ligand–receptor binding interactions (lrbi), uncovering long-lasting molecular interactions and predicting the time-dependent inhibitory activity of specific drugs. With MDLR, researchers can gain insights into the dynamic behavior of complex ligand–receptor systems. Our pipeline is optimized for high-performance computing, capable of efficiently processing vast molecular dynamics trajectories on multicore Linux servers or even multinode HPC clusters. In the latter case, MDLR allows the user to analyze large trajectories in a very short time. To facilitate the exploration and visualization of lrbi, we provide an intuitive Python notebook (Jupyter), which allows users to examine and interpret the results through various graphical representations. Full article
(This article belongs to the Special Issue Research on Molecular Dynamics)
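A long-lasting interaction, as opposed to a single-snapshot contact, can be quantified as the fraction of trajectory frames in which a ligand and receptor atom pair sits within a cutoff; a toy sketch (the 3.5 Å cutoff and coordinates are assumptions):

```python
import numpy as np

def contact_persistence(traj_lig, traj_rec, cutoff=3.5):
    """Fraction of frames in which the ligand atom is within `cutoff` of the receptor atom."""
    traj_lig = np.asarray(traj_lig, dtype=float)
    traj_rec = np.asarray(traj_rec, dtype=float)
    d = np.linalg.norm(traj_lig - traj_rec, axis=1)  # per-frame distance
    return float(np.mean(d <= cutoff))

# Toy 4-frame trajectory of one ligand atom vs. one receptor atom.
lig = [[0, 0, 0], [0, 0, 1], [0, 0, 5], [0, 0, 2]]
rec = [[0, 0, 0]] * 4
frac = contact_persistence(lig, rec)
```

Scaling this over every atom pair and millions of frames is exactly the part that motivates the multicore and HPC optimization the abstract describes.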

23 pages, 4534 KB  
Article
VanityX: An Agile 3D Rendering Platform Supporting Mixed Reality
by Ivan Zoraja, Mirjana Bonkovic, Vladan Papic and Vaidy Sunderam
Appl. Sci. 2023, 13(9), 5468; https://doi.org/10.3390/app13095468 - 27 Apr 2023
Viewed by 3773
Abstract
VanityX is a prototype, low-level, real-time 3D rendering and computing platform. Unlike most XR solutions, which integrate several commercial and/or open-source products, such as game engines, XR libraries, runtimes, and services, VanityX is a platform ready to adapt to any business domain, including anthropology and medicine. The design, architecture, and implementation are presented, which are based on CPU and GPU asymmetric multiprocessing with explicit synchronization and collaboration of parallel tasks and a predictable transfer of pipeline resources between processors. The VanityX API is based on DirectX 12 and the native programming languages C++20 and HLSL 6, which, in conjunction with explicit parallel processing, the asynchronous loading and explicit managing of graphic resources, and effective algorithms, result in great performance and resource utilization close to the metal. Surface-based rendering, direct volume rendering (DVR), and mixed reality (MR) on the HoloLens 2 immersive headset are currently supported. Our MR applications are directly compiled and deployed to HoloLens 2, allowing for better programming experiences and software engineering practices such as testing, debugging, and profiling. The VanityX server provides various computational and rendering services to its clients running on HoloLens 2. Use and test cases span many business domains, including anthropology and medicine. Our future research will focus primarily, via the MetaverseMed project, on opening new opportunities for implementing innovative MR-based scenarios in medical procedures, especially in education, diagnostics, and surgical operations. Full article
(This article belongs to the Topic Virtual Reality, Digital Twins, the Metaverse)

20 pages, 2753 KB  
Article
A Machine Learning Pipeline for Gait Analysis in a Semi Free-Living Environment
by Sylvain Jung, Nicolas de l’Escalopier, Laurent Oudre, Charles Truong, Eric Dorveaux, Louis Gorintin and Damien Ricard
Sensors 2023, 23(8), 4000; https://doi.org/10.3390/s23084000 - 14 Apr 2023
Cited by 5 | Viewed by 3459
Abstract
This paper presents a novel approach to creating a graphical summary of a subject’s activity during a protocol in a Semi Free-Living Environment. Thanks to this new visualization, human behavior, in particular locomotion, can now be condensed into an easy-to-read and user-friendly output. As time series collected while monitoring patients in Semi Free-Living Environments are often long and complex, our contribution relies on an innovative pipeline of signal processing methods and machine learning algorithms. Once learned, the graphical representation is able to sum up all activities present in the data and can quickly be applied to newly acquired time series. In a nutshell, raw data from inertial measurement units are first segmented into homogeneous regimes with an adaptive change-point detection procedure, then each segment is automatically labeled. Then, features are extracted from each regime, and lastly, a score is computed using these features. The final visual summary is constructed from the scores of the activities and their comparisons to healthy models. This graphical output is a detailed, adaptive, and structured visualization that helps better understand the salient events in a complex gait protocol. Full article
(This article belongs to the Special Issue Body Sensor Networks and Wearables for Health Monitoring)
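Change-point detection on inertial signals can be illustrated with the simplest possible case: find the split that minimizes the summed within-segment squared deviation (a single-change sketch, not the adaptive multi-change procedure used in the paper):

```python
import numpy as np

def best_changepoint(x):
    """Index splitting x into two segments with minimal summed squared deviation."""
    x = np.asarray(x, dtype=float)
    best_k, best_cost = None, np.inf
    for k in range(1, len(x)):
        left, right = x[:k], x[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Signal switching regimes (say, rest -> walking) at sample 20.
signal = np.concatenate([np.zeros(20), np.ones(15) * 5.0])
cp = best_changepoint(signal)
```

Real pipelines apply this idea recursively (or with a penalized global criterion) so the number of regimes does not need to be known in advance.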

15 pages, 4444 KB  
Article
Parallelization of Runge–Kutta Methods for Hardware Implementation
by Petr Fedoseev, Konstantin Zhukov, Dmitry Kaplun, Nikita Vybornov and Valery Andreev
Computation 2022, 10(12), 215; https://doi.org/10.3390/computation10120215 - 7 Dec 2022
Viewed by 3435
Abstract
Parallel numerical integration is a valuable tool used in many applications requiring high-performance numerical solvers, which is of great interest nowadays due to the increasing difficulty and complexity of differential problems. One of the possible approaches to increasing the efficiency of ODE solvers is to parallelize recurrent numerical methods, making them more suitable for execution in hardware with natural parallelism, e.g., field-programmable gate arrays (FPGAs) or graphical processing units (GPUs). Some of the simplest and most popular ODE solvers are explicit Runge–Kutta methods. Despite the high implementability and overall simplicity of the Runge–Kutta schemes, recurrent algorithms remain weakly suitable for execution in parallel computers. In this paper, we propose an approach for parallelizing classical explicit Runge–Kutta methods to construct efficient ODE solvers with a pipeline architecture. A novel technique to obtain parallel finite-difference models based on Runge–Kutta integration is described. Three test initial value problems are considered to evaluate the properties of the obtained solvers. It is shown that the truncation error of the parallelized Runge–Kutta method does not change significantly relative to its known recurrent version. The possible speedup in calculations, estimated using Amdahl’s law, is approximately 2.5–3 times. Block diagrams of fixed-point parallel ODE solvers suitable for hardware implementation on FPGA are given. Full article
(This article belongs to the Special Issue Mathematical Modeling and Study of Nonlinear Dynamic Processes)
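Two pieces of the abstract are easy to make concrete: the classical RK4 step (the recurrent baseline being parallelized) and an Amdahl's-law speedup estimate; the test problem y' = -y and the 80% parallel fraction are assumptions for illustration, not the paper's figures:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def amdahl_speedup(parallel_fraction, n_units):
    """Amdahl's law: upper bound on speedup with n parallel units."""
    return 1.0 / ((1 - parallel_fraction) + parallel_fraction / n_units)

# Integrate y' = -y, y(0) = 1 over [0, 1]; the exact solution is exp(-1).
y, h = 1.0, 0.01
for i in range(100):
    y = rk4_step(lambda t, v: -v, i * h, y, h)
```

The sequential dependence of k1 through k4 is exactly why naive RK schemes pipeline poorly, and why the paper restructures them into parallel finite-difference models instead.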
