Search Results (31)

Search Parameters:
Keywords = corrupted data recovery

21 pages, 335 KB  
Review
AI-Driven Motion Capture Data Recovery: A Comprehensive Review and Future Outlook
by Ahood Almaleh, Gary Ushaw and Rich Davison
Sensors 2025, 25(24), 7525; https://doi.org/10.3390/s25247525 - 11 Dec 2025
Abstract
This paper presents a comprehensive review of motion capture (MoCap) data recovery techniques, with a particular focus on the suitability of artificial intelligence (AI) for addressing missing or corrupted motion data. Existing approaches are classified into three categories: non-data-driven, data-driven (AI-based), and hybrid methods. Within the AI domain, frameworks such as generative adversarial networks (GANs), transformers, and graph neural networks (GNNs) demonstrate strong capabilities in modeling complex spatial–temporal dependencies and achieving accurate motion reconstruction. Compared with traditional methods, AI techniques offer greater adaptability and precision, though they remain limited by high computational costs and dependence on large, high-quality datasets. Hybrid approaches that combine AI models with physics-based or statistical algorithms provide a balance between efficiency, interpretability, and robustness. The review also examines benchmark datasets, including CMU MoCap and Human3.6M, while highlighting the growing role of synthetic and augmented data in improving AI model generalization. Despite notable progress, the absence of standardized evaluation protocols and diverse real-world datasets continues to hinder generalization. Emerging trends point toward real-time AI-driven recovery, multimodal data fusion, and unified performance benchmarks. By integrating traditional, AI-based, and hybrid approaches into a coherent taxonomy, this review provides a unique contribution to the literature. Unlike prior surveys focused on prediction, denoising, pose estimation, or generative modeling, it treats MoCap recovery as a standalone problem. It further synthesizes comparative insights across datasets, evaluation metrics, movement representations, and common failure cases, offering a comprehensive foundation for advancing MoCap recovery research. Full article

18 pages, 443 KB  
Article
Low-Rank Matrix Completion via Nonconvex Rank Approximation for IoT Network Localization
by Nana Li, Ling He, Die Meng, Chuang Han and Qiang Tu
Electronics 2025, 14(19), 3920; https://doi.org/10.3390/electronics14193920 - 1 Oct 2025
Abstract
Accurate node localization is essential for many Internet of Things (IoT) applications. However, incomplete and noisy distance measurements often degrade the reliability of the Euclidean Distance Matrix (EDM), which is critical for range-based localization. To address this issue, a Low-Rank Matrix Completion approach based on nonconvex rank approximation (LRMCN) is proposed to recover the true EDM. First, the observed EDM is decomposed into a low-rank matrix representing the true distances and a sparse matrix capturing noise. Second, a nonconvex surrogate function is used to approximate the matrix rank, while the l1-norm is utilized to model the sparsity of the noise component. Third, the resulting optimization problem is solved using the Alternating Direction Method of Multipliers (ADMM). This enables accurate recovery of a complete and denoised EDM from incomplete and corrupted measurements. Finally, relative node locations are estimated using classical multi-dimensional scaling, and absolute coordinates are determined based on a small set of anchor nodes with known locations. The experimental results show that the proposed method achieves superior performance in both matrix completion and localization accuracy, even in the presence of missing and corrupted data. Full article
(This article belongs to the Section Networks)
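The final step of the pipeline above, recovering relative coordinates from a completed EDM via classical multi-dimensional scaling, can be sketched in a few lines. This is a generic illustration of classical MDS, not the authors' LRMCN solver; the toy data and function names below are ours.

```python
import numpy as np

def classical_mds(D2, dim=2):
    """Recover relative coordinates from a squared Euclidean distance matrix.

    D2[i, j] = ||x_i - x_j||^2. Double-centering turns D2 into a Gram
    matrix whose top eigenvectors give coordinates (up to rotation
    and translation).
    """
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ D2 @ J                    # Gram matrix of centered points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Demo: 2-D sensor nodes, rebuild the geometry from pairwise distances.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(6, 2))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Y = classical_mds(D2, dim=2)
D2_hat = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
assert np.allclose(D2, D2_hat, atol=1e-8)   # pairwise distances preserved
```

In the paper's setting, anchor nodes with known positions would then fix the remaining rotation and translation.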

17 pages, 3606 KB  
Article
Kalman–FIR Fusion Filtering for High-Dynamic Airborne Gravimetry: Implementation and Noise Suppression on the GIPS-1A System
by Guanxin Wang, Shengqing Xiong, Fang Yan, Feng Luo, Linfei Wang and Xihua Zhou
Appl. Sci. 2025, 15(17), 9363; https://doi.org/10.3390/app15179363 - 26 Aug 2025
Cited by 1
Abstract
High-dynamic airborne gravimetry faces critical challenges from platform-induced noise contamination. Conventional filtering methods exhibit inherent limitations in simultaneously achieving dynamic tracking capability and spectral fidelity. To overcome these constraints, this study proposes a Kalman–FIR fusion filtering (K-F) method, which is validated through engineering implementation on the GIPS-1A airborne gravimeter platform. The proposed framework employs a dual-stage strategy: (1) An adaptive state-space framework employing calibration coefficients (Sx, Sy, Sz) continuously estimates triaxial acceleration errors to compensate for gravity anomaly signals. This approach resolves aliasing artifacts induced by non-stationary noise while preserving low-frequency gravity components that are traditionally attenuated by conventional FIR filters. (2) A window-optimized FIR post-filter explicitly regulates cutoff frequencies to ensure spectral compatibility with downstream processing workflows, including terrain correction. Flight experiments demonstrate that the K-F method achieves a repeat-line internal consistency of 0.558 mGal at 0.01 Hz—a 65.3% accuracy improvement over standalone FIR filtering (1.606 mGal at 0.01 Hz). Concurrently, it enhances spatial resolution to 2.5 km (half-wavelength), enabling the recovery of data segments corrupted by airflow disturbances that were previously unusable. Implemented on the GIPS-1A system, K-F enables precision mineral exploration and establishes a noise-suppressed paradigm for extreme-dynamic gravimetry. Full article
(This article belongs to the Special Issue Advances in Geophysical Exploration)
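The dual-stage idea, a Kalman filter for dynamic tracking followed by an FIR post-filter for spectral control, can be illustrated with a deliberately simplified scalar sketch. This is not the GIPS-1A implementation; the signal, noise levels, and tuning constants below are invented for illustration.

```python
import numpy as np

def kalman_1d(z, q=1e-3, r=0.4):
    """Scalar Kalman filter: track a slowly varying level in noise."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p += q                      # predict: process noise inflates variance
        g = p / (p + r)             # Kalman gain
        x += g * (zk - x)           # update with measurement zk
        p *= (1 - g)
        out[k] = x
    return out

def fir_lowpass(x, ntaps=31):
    """Post-filter: Hamming-windowed averaging FIR, edge-padded."""
    h = np.hamming(ntaps)
    h /= h.sum()
    pad = ntaps // 2
    return np.convolve(np.pad(x, pad, mode="edge"), h, mode="valid")

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 400)
truth = np.sin(2 * np.pi * t)                       # low-frequency "gravity" signal
noisy = truth + 0.4 * rng.standard_normal(t.size)   # platform-induced noise
smoothed = fir_lowpass(kalman_1d(noisy))
assert np.mean((smoothed - truth) ** 2) < np.mean((noisy - truth) ** 2)
```

The two stages play the same roles as in the abstract: the recursive filter tracks the non-stationary signal, and the FIR stage fixes an explicit cutoff for downstream processing.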

14 pages, 2520 KB  
Article
Non-Iterative Recovery Information Procedure with Database Inspired in Hopfield Neural Networks
by Cesar U. Solis, Jorge Morales and Carlos M. Montelongo
Computation 2025, 13(4), 95; https://doi.org/10.3390/computation13040095 - 10 Apr 2025
Abstract
This work establishes a simple algorithm to recover an information vector from a predefined database that is available at all times. It is considered that the information analyzed may be incomplete, damaged, or corrupted. The algorithm is inspired by Hopfield Neural Networks (HNN), which allow the recursive reconstruction of an information vector through an energy-minimizing optimal process, but this paper presents a procedure that generates results in a single iteration. Images have been chosen for the information recovery application to build the information vector. In addition, a filter is added to the algorithm to focus on the most important information when reconstructing data, allowing it to work with damaged or incomplete vectors without losing its non-iterative character. A brief theoretical introduction and a numerical validation of information recovery are presented using an example database containing 40 images. Full article
(This article belongs to the Section Computational Engineering)
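As a rough illustration of single-pass, database-backed recovery in the spirit of the paper (the actual algorithm and its filter are more elaborate), a masked nearest-pattern recall can run in one shot:

```python
import numpy as np

def one_shot_recall(query, database, mask=None):
    """Single-pass recovery: return the stored vector closest to the query.

    `mask` marks entries of `query` that are trusted (1) vs missing (0),
    so damaged positions do not influence the match -- a non-iterative
    stand-in for the energy descent of a Hopfield network.
    """
    if mask is None:
        mask = np.ones_like(query)
    diffs = (database - query) * mask          # ignore untrusted entries
    scores = (diffs ** 2).sum(axis=1)
    return database[np.argmin(scores)]

rng = np.random.default_rng(2)
db = rng.choice([-1.0, 1.0], size=(40, 64))    # 40 stored +-1 "images"
original = db[7].copy()
corrupted = original.copy()
corrupted[:20] = 0.0                           # first 20 entries lost
mask = np.ones(64)
mask[:20] = 0.0
recovered = one_shot_recall(corrupted, db, mask)
assert np.array_equal(recovered, original)
```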

19 pages, 368 KB  
Article
ClusteredLog: Optimizing Log Structures for Efficient Data Recovery and Integrity Management in Database Systems
by Mariha Siddika Ahmad and Brajendra Nath Panda
Electronics 2024, 13(23), 4723; https://doi.org/10.3390/electronics13234723 - 29 Nov 2024
Abstract
In modern database systems, efficient log management is crucial for ensuring data integrity and facilitating swift recovery from potential data corruption or system failures. Traditional log structures, which store operations sequentially as they occur, often lead to significant delays in accessing and recovering specific data objects due to their scattered nature across the log. ClusteredLog addresses the limitations of traditional logging methods by implementing a novel logical organization of log entries. Instead of simply storing operations sequentially, it groups related operations for each data item into clusters. As a result, ClusteredLog enables faster identification and recovery of damaged data items and thus reduces the need for extensive log scanning, improving overall efficiency in database recovery processes. We introduce data structures and algorithms that facilitate the creation of these clustered logs, which also track dependencies and update operations on data items. Simulation studies demonstrate that our clustered log method significantly accelerates damage assessment and recovery times compared to traditional sequential logs, particularly as the number of transactions and data items increases. This optimization is pivotal for maintaining data integrity and operational efficiency in databases, especially in scenarios involving potential malicious modifications. Full article
(This article belongs to the Special Issue Current Trends on Data Management)
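The core idea, keeping a per-item clustered view alongside the sequential log so damage assessment need not scan the whole log, can be sketched as follows. This is a toy with our own data layout, not the paper's data structures, which also track inter-transaction dependencies.

```python
from collections import defaultdict

class ClusteredLog:
    """Toy log that clusters operations by data item."""

    def __init__(self):
        self.sequential = []                 # traditional append-only log
        self.clusters = defaultdict(list)    # item -> its operations

    def append(self, txn_id, item, op, value=None):
        entry = (txn_id, item, op, value)
        self.sequential.append(entry)        # order of occurrence
        self.clusters[item].append(entry)    # same entry, clustered view

    def history(self, item):
        """Proportional to the item's own operations, not the whole log."""
        return self.clusters[item]

log = ClusteredLog()
log.append(1, "x", "write", 10)
log.append(1, "y", "write", 20)
log.append(2, "x", "write", 30)
log.append(3, "z", "read")

# Recovering a damaged item touches only its own cluster:
assert log.history("x") == [(1, "x", "write", 10), (2, "x", "write", 30)]
assert len(log.sequential) == 4
```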

27 pages, 4016 KB  
Article
Symmetrical Data Recovery: FPGA-Based Multi-Dimensional Elastic Recovery Acceleration for Multiple Block Failures in Ceph Systems
by Fan Lei, Yong Wang, Junqi Chen and Sijie Yang
Symmetry 2024, 16(6), 672; https://doi.org/10.3390/sym16060672 - 30 May 2024
Abstract
In the realm of Ceph distributed storage systems, ensuring swift and symmetrical data recovery during severe data corruption scenarios is pivotal for data reliability and system stability. This paper introduces an innovative FPGA-based Multi-Dimensional Elastic Recovery Acceleration method, termed AMDER-Ceph. Utilizing FPGA technology, this method is a pioneer in accelerating erasure code data recovery within such systems symmetrically. By harnessing the parallel computing power of FPGAs and optimizing Cauchy matrix binary operations, AMDER-Ceph significantly enhances data recovery speed and efficiency symmetrically. Our evaluations in real-world Ceph environments show that AMDER-Ceph achieves up to 4.84 times faster performance compared with traditional methods, especially evident in the standard 4 MB block size configurations of Ceph systems. Full article
(This article belongs to the Section Computer)
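For intuition about erasure-coded recovery, here is the simplest special case: a single block failure under XOR parity. Ceph's Cauchy Reed-Solomon codes generalize this to multiple simultaneous failures; this sketch is ours, not AMDER-Ceph.

```python
def make_parity(blocks):
    """XOR parity across equal-sized data blocks (the k+1 erasure case)."""
    parity = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            parity[i] ^= b
    return bytes(parity)

def recover_block(surviving, parity):
    """Rebuild the one missing block: XOR of the parity with all survivors."""
    missing = bytearray(parity)
    for blk in surviving:
        for i, b in enumerate(blk):
            missing[i] ^= b
    return bytes(missing)

data = [b"object-A", b"object-B", b"object-C"]
p = make_parity(data)
rebuilt = recover_block([data[0], data[2]], p)   # block 1 is lost
assert rebuilt == b"object-B"
```

Replacing the XOR with multiplications by a Cauchy matrix over GF(2^8), decomposed into binary operations as in the paper, is what allows several blocks to fail at once.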

21 pages, 2369 KB  
Article
Weighted Robust Tensor Principal Component Analysis for the Recovery of Complex Corrupted Data in a 5G-Enabled Internet of Things
by Hanh Hong-Phuc Vo, Thuan Minh Nguyen and Myungsik Yoo
Appl. Sci. 2024, 14(10), 4239; https://doi.org/10.3390/app14104239 - 16 May 2024
Cited by 2
Abstract
Technological developments coupled with socioeconomic changes are driving a rapid transformation of the fifth-generation (5G) cellular network landscape. This evolution has led to versatile applications with fast data-transfer capabilities. The integration of 5G with wireless sensor networks (WSNs) has rendered the Internet of Things (IoTs) crucial for measurement and sensing. Although 5G-enabled IoTs are vital, they face challenges in data integrity, such as mixed noise, outliers, and missing values, owing to various transmission issues. Traditional methods such as the tensor robust principal component analysis (TRPCA) have limitations in preserving essential data. This study introduces an enhanced approach, the weighted robust tensor principal component analysis (WRTPCA), combined with weighted tensor completion (WTC). The new method enhances data recovery using tensor singular value decomposition (t-SVD) to separate regular and abnormal data, preserve significant components, and robustly address complex data corruption issues, such as mixed noise, outliers, and missing data, with the globally optimal solution determined through the alternating direction method of multipliers (ADMM). Our study is the first to address complex corruption in multivariate data using the WRTPCA. The proposed approach outperforms current techniques. In all corrupted scenarios, the normalized mean absolute error (NMAE) of the proposed method is typically less than 0.2, demonstrating strong performance even in the most challenging conditions in which other models struggle. This highlights the effectiveness of the proposed approach in real-world 5G-enabled IoTs. Full article
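The two proximal operators at the heart of (T)RPCA-style solvers, singular value thresholding for the low-rank part and entrywise soft thresholding for the sparse part, can be sketched on a small matrix toy. The weighted, tensor t-SVD machinery of the paper is not reproduced here.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink singular values by tau.
    This is the low-rank proximal step inside RPCA/TRPCA-style solvers;
    the weighted variants reweight the shrinkage per component."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft threshold: the sparse (l1) proximal step."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

# A matrix with singular values 10 and 0.5: thresholding at tau=1 keeps
# only the dominant direction, i.e. the "clean" low-rank part.
M = np.diag([10.0, 0.5])
L = svt(M, tau=1.0)
assert np.linalg.matrix_rank(L) == 1
assert np.allclose(np.linalg.svd(L, compute_uv=False), [9.0, 0.0])
assert np.allclose(soft(np.array([[2.0, -0.3]]), 0.5), [[1.5, 0.0]])
```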

12 pages, 1390 KB  
Article
Distributed Consensus for Global Matrix Formation in the Principal Component Pursuit Scenario
by Gustavo Suárez and Juan David Velásquez
Appl. Sci. 2024, 14(9), 3619; https://doi.org/10.3390/app14093619 - 25 Apr 2024
Cited by 1
Abstract
The aim behind principal component pursuit is to recover a low-rank matrix and a sparse matrix from a noisy signal which is the sum of both matrices. This optimization problem is, a priori, non-convex and is useful in signal processing, data compression, image processing, machine learning, fluid dynamics, and more. Here, a distributed scheme described by a static undirected graph, where each agent only observes part of the noisy or corrupted matrix, is applied to achieve a consensus; then, a robust approach that can also handle missing values is applied using alternating directions to solve the convex relaxation problem, which actually solves the non-convex problem under some weak assumptions. Some examples of image recovery are shown, where the network of agents achieves consensus exponentially fast. Full article
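A minimal single-agent sketch of principal component pursuit solved by alternating directions follows; the paper distributes this across a graph of agents via consensus, and the parameter defaults below are common choices we assume for illustration.

```python
import numpy as np

def pcp_admm(M, lam=None, mu=None, iters=500):
    """Principal component pursuit by ADMM: split M into low-rank L and
    sparse S, minimizing ||L||_* + lam * ||S||_1 subject to L + S = M."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or m * n / (4.0 * np.abs(M).sum())
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        # L-step: singular value thresholding
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # S-step: entrywise soft threshold
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # dual ascent on the constraint L + S = M
        Y += mu * (M - L - S)
    return L, S

rng = np.random.default_rng(6)
L0 = np.outer(rng.standard_normal(20), rng.standard_normal(20))  # rank 1
S0 = np.zeros((20, 20))
S0[(2, 7, 11, 16), (5, 1, 18, 9)] = (6.0, -7.0, 5.0, -6.0)       # sparse spikes
L_hat, S_hat = pcp_admm(L0 + S0)
assert np.linalg.norm(L_hat - L0) / np.linalg.norm(L0) < 0.1
assert np.linalg.norm(L0 + S0 - L_hat - S_hat) < 1e-3
```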

19 pages, 1358 KB  
Article
Advanced Dual Reversible Data Hiding: A Focus on Modification Direction and Enhanced Least Significant Bit (LSB) Approaches
by Cheonshik Kim, Luis Cavazos Quero, Ki-Hyun Jung and Lu Leng
Appl. Sci. 2024, 14(6), 2437; https://doi.org/10.3390/app14062437 - 14 Mar 2024
Cited by 11
Abstract
In this study, we investigate advances in reversible data hiding (RDH), a critical area in the era of widespread digital data sharing. Recognizing the inherent vulnerabilities such as unauthorized access and data corruption during data transmission, we introduce an innovative dual approach to RDH. We use the EMD (Exploiting Modification Direction) method along with an optimized LSB (Least Significant Bit) replacement strategy. This dual method, applied to grayscale images, has been carefully developed to improve data hiding by focusing on modifying pixel pairs. Our approach sets new standards for achieving a balance between high data embedding rates and the integrity of visual quality. The EMD method ensures that each secret digit in a 5-ary notational system is hidden by 2 cover pixels. Meanwhile, our LSB strategy finely adjusts the pixels selected by EMD to minimize data errors. Despite its simplicity, this approach has been proven to outperform existing technologies. It offers a high embedding rate (ER) while maintaining the high visual quality of the stego images. Moreover, it significantly improves data hiding capacity. This enables the full recovery of the original image without increasing file size or adding unnecessary data, marking a significant breakthrough in data security. Full article
(This article belongs to the Special Issue Deep Learning for Data Analysis)
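The 2-pixel EMD rule described above, each 5-ary digit carried by a pair of cover pixels with at most one ±1 change, can be sketched directly. Boundary pixel values (0 and 255) are ignored in this toy, and the paper's LSB refinement stage is omitted.

```python
def emd_extract(p1, p2):
    """Extraction function of the 2-pixel EMD scheme: a value in 0..4."""
    return (p1 + 2 * p2) % 5

def emd_embed(p1, p2, digit):
    """Hide one 5-ary digit in a pixel pair by changing at most one
    pixel by +-1: +1/-1 on p1 shifts f by +1/+4, on p2 by +2/+3 (mod 5)."""
    d = (digit - emd_extract(p1, p2)) % 5
    if d == 0:
        return p1, p2
    if d == 1:
        return p1 + 1, p2
    if d == 2:
        return p1, p2 + 1
    if d == 3:
        return p1, p2 - 1
    return p1 - 1, p2          # d == 4

for digit in range(5):
    q1, q2 = emd_embed(100, 57, digit)
    assert emd_extract(q1, q2) == digit
    assert abs(q1 - 100) + abs(q2 - 57) <= 1   # at most one +-1 change
```

Reversibility and the dual-image bookkeeping of the paper sit on top of this basic embedding rule.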

22 pages, 2721 KB  
Article
Modelling Interrelationships of the Factors Impeding Community Engagement in Risk-Sensitive Urban Planning: Evidence from Sri Lanka
by Devindi Geekiyanage, Terrence Fernando and Kaushal Keraminiyage
Sustainability 2023, 15(20), 14662; https://doi.org/10.3390/su152014662 - 10 Oct 2023
Cited by 3
Abstract
During the last two decades, global disasters have impacted over 5.2 billion people, with economic losses exceeding USD 2.97 trillion. This underscores the critical need for inclusive risk-sensitive urban planning (RSUP) that integrates community insights. Community-based disaster risk reduction (CBDRR) has demonstrated a potential reduction of up to 40% in mortality rates during disasters and cost savings in disaster response and recovery. However, research has shown that only 20% felt they were involved in decisions related to neighborhood planning, despite communities’ lived experience in surviving local hazards. This highlights a gap where practitioners dominate the development of mitigation and development plans, sidelining local perspectives. Using Sri Lanka as a case study, this study investigated the barriers to effective community participation in the decision-making of RSUP and thereby developed an interpretive logic model to establish an understanding of why they occur and how each barrier is interlinked. The data gathered from a sample of 44 experts and community representatives revealed 19 factors that impede community inclusion in the decision-making of RSUP in Sri Lanka. The Total Interpretive Structural Modelling (TISM) analysis identified that the absence of legal provisions for inclusive development, political dynamics, and corruption are the most significant barriers. The Matrix Impact of Cross Multiplication Applied to Classification (MICMAC) further revealed that limited financial provisions and the absence of an institutional framework for community engagement are the linking barriers to the other 17 barriers. This study not only extends the theoretical debate on barriers to community engagement for risk-responsive and equitable development but also helps urban planners, disaster management practitioners, and strategy policymakers focus on critical areas that need major reforms. Full article
(This article belongs to the Special Issue Sustainability of Post-disaster Recovery)

42 pages, 19037 KB  
Article
On Recovery of a Non-Negative Relaxation Spectrum Model from the Stress Relaxation Test Data
by Anna Stankiewicz, Monika Bojanowska and Paweł Drozd
Polymers 2023, 15(16), 3464; https://doi.org/10.3390/polym15163464 - 18 Aug 2023
Cited by 5
Abstract
The relaxation spectra, from which other material functions used to describe mechanical properties of materials can be uniquely determined, are important for modeling the rheological properties of polymers used in chemistry, food technology, medicine, cosmetics, and many other industries. The spectrum, not being directly accessible by measurement, is recovered from relaxation stress or oscillatory shear data. Only a few models and identification methods take into account the non-negativity of the real spectra. In this paper, the problem of recovery of non-negative definite relaxation spectra from discrete-time noise-corrupted measurements of the relaxation modulus obtained in the stress relaxation test is considered. A new hierarchical identification scheme is developed, applicable to both relaxation time and relaxation frequency spectra. Finite-dimensional parametric classes of models are assumed for the relaxation spectra, described by a finite series of power-exponential and square-exponential basis functions. The related models of the relaxation modulus are given by compact analytical formulas, described by the products of powers of time and the modified Bessel functions of the second kind for the time spectrum, and by recurrence formulas based on products of powers of time and complementary error functions for the frequency spectrum. The basis functions are non-negative. As a result, the identification task is reduced to a finite-dimensional linear-quadratic problem with non-negative unknown model parameters. To stabilize the solution, an additional smoothing constraint is introduced. A dual approach is used to solve the stated optimal identification task, resulting in a hierarchical two-stage identification scheme. In the first stage, the dual problem is solved on two levels and the vector of non-negative model parameters is computed to provide the best fit of the relaxation modulus to the experimental data. Next, in the second stage, the optimal non-negative spectrum model is determined. A complete scheme of the hierarchical computations is outlined; it can be easily implemented in available computing environments. The model smoothness is analytically studied, and the applicability ranges are numerically examined. The numerical studies have proved that, using the developed models and algorithm, it is possible to determine non-negative definite unimodal and bimodal relaxation spectra for a wide class of polymers. However, the examples also demonstrated that if the basis functions are non-negative and the model is properly selected for a given type of real spectrum (unimodal, multimodal), the optimal model determined without the non-negativity constraint can be non-negative in the dominant range of its arguments, especially in a wide neighborhood of the spectrum peaks. Full article
(This article belongs to the Special Issue Time-Dependent Mechanical Behavior of Polymers and Polymer Composites)

22 pages, 1122 KB  
Article
CRBF: Cross-Referencing Bloom-Filter-Based Data Integrity Verification Framework for Object-Based Big Data Transfer Systems
by Preethika Kasu, Prince Hamandawana and Tae-Sun Chung
Appl. Sci. 2023, 13(13), 7830; https://doi.org/10.3390/app13137830 - 3 Jul 2023
Abstract
Various components are involved in the end-to-end path of data transfer. Protecting data integrity from failures in these intermediate components is a key feature of big data transfer tools. Although most of these components provide some degree of data integrity, they are either too expensive or inefficient in recovering corrupted data. This problem highlights the need for application-level end-to-end integrity verification during data transfer. However, the computational, memory, and storage overhead of big data transfer tools can be a significant bottleneck for ensuring data integrity due to the large size of the data. This paper proposes a novel framework for data integrity verification in big data transfer systems using a cross-referencing Bloom filter. This framework has three advantages over state-of-the-art data integrity techniques: lower computation and memory overhead and zero false-positive errors for a limited number of elements. This study evaluates the computation, memory, recovery time, and false-positive overhead for the proposed framework and compares them with state-of-the-art solutions. The evaluation results indicate that the proposed framework is efficient in detecting and recovering from integrity errors while eliminating false positives in the Bloom filter data structure. In addition, we observe negligible computation, memory, and recovery overheads for all workloads. Full article
(This article belongs to the Special Issue Secure Integration of IoT & Digital Twins)
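A minimal Bloom filter conveys the memory/accuracy trade-off the framework builds on; the cross-referencing construction that drives false positives to zero for a bounded element count is the paper's contribution and is not reproduced in this sketch.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter using double hashing over SHA-256."""

    def __init__(self, m_bits=1024, k_hashes=5):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        d = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1   # odd step
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] >> (p % 8) & 1
                   for p in self._positions(item))

bf = BloomFilter()
for chunk in ("chunk-000", "chunk-001", "chunk-002"):
    bf.add(chunk)
assert "chunk-001" in bf    # Bloom filters never give false negatives
```

Membership queries cost k bit probes regardless of how much data the chunks represent, which is why the overhead stays low for large transfers.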

34 pages, 6846 KB  
Article
Two-Level Scheme for Identification of the Relaxation Time Spectrum Using Stress Relaxation Test Data with the Optimal Choice of the Time-Scale Factor
by Anna Stankiewicz
Materials 2023, 16(9), 3565; https://doi.org/10.3390/ma16093565 - 6 May 2023
Cited by 9
Abstract
The viscoelastic relaxation spectrum is vital for constitutive models and for insight into the mechanical properties of materials, since, from the relaxation spectrum, other material functions used to describe rheological properties can be uniquely determined. The spectrum is not directly accessible via measurement and must be recovered from relaxation stress or oscillatory shear data. This paper deals with the problem of the recovery of the relaxation time spectrum of linear viscoelastic material from discrete-time noise-corrupted measurements of a relaxation modulus obtained in the stress relaxation test. A two-level identification scheme is proposed. In the lower level, the regularized least-square identification combined with generalized cross-validation is used to find the optimal model with an arbitrary time-scale factor. Next, in the upper level, the optimal time-scale factor is determined to provide the best fit of the relaxation modulus to the experimental data. The relaxation time spectrum is approximated by a finite series of power–exponential basis functions. The related model of the relaxation modulus is proved to be given by compact analytical formulas as the products of powers of time and the modified Bessel functions of the second kind. The proposed approach merges the technique of an expansion of a function into a series of independent basis functions with the least-squares regularized identification and the optimal choice of the time-scale factor. Optimality conditions, approximation error, convergence, noise robustness and model smoothness are studied analytically. Applicability ranges are numerically examined. These studies have proved that, using the developed model and algorithm, it is possible to determine the relaxation spectrum model for a wide class of viscoelastic materials. The model is smoothed and noise robust; small model errors are obtained for the optimal time-scale factors. The complete scheme of the hierarchical computations is outlined, which can be easily implemented in available computing environments. Full article
(This article belongs to the Special Issue Modelling of Viscoelastic Materials and Mechanical Behavior)
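The lower-level step, regularized least-squares fitting of a modulus model built from basis functions, can be illustrated with a generic Tikhonov solve on an ill-conditioned exponential dictionary. The basis and data below are a toy of our own, not the paper's Bessel-function model or its generalized cross-validation parameter choice.

```python
import numpy as np

def tikhonov(G, y, lam):
    """Regularized least squares: argmin ||G g - y||^2 + lam * ||g||^2,
    solved via the normal equations (adequate at this toy size)."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ y)

# Ill-posed toy: fit noisy "relaxation modulus" samples with a small
# dictionary of exp(-t / tau_i) basis functions (nearly collinear columns).
t = np.linspace(0.0, 10.0, 50)
taus = np.array([0.5, 0.7, 1.0, 1.4, 2.0])
G = np.exp(-t[:, None] / taus[None, :])
g_true = np.array([0.0, 2.0, 0.0, 1.0, 0.0])
rng = np.random.default_rng(4)
y = G @ g_true + 0.01 * rng.standard_normal(t.size)

g_reg = tikhonov(G, y, lam=1e-2)
assert np.linalg.norm(G @ g_reg - y) < 0.5     # still fits the data
assert np.abs(g_reg).max() < 5.0               # coefficients stay tame
```

Without the `lam` term the nearly collinear columns make the fit numerically unstable; the penalty trades a small bias for a stable, bounded solution, which is the role regularization plays in the two-level scheme.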

28 pages, 2605 KB  
Article
An Effective and Efficient Approach for 3D Recovery of Human Motion Capture Data
by Hashim Yasin, Saba Ghani and Björn Krüger
Sensors 2023, 23(7), 3664; https://doi.org/10.3390/s23073664 - 31 Mar 2023
Cited by 4
Abstract
In this work, we propose a novel data-driven approach to recover missing or corrupted motion capture data, either in the form of 3D skeleton joints or 3D marker trajectories. We construct a knowledge-base that contains prior existing knowledge, which makes it possible to infer missing or corrupted information in the motion capture data. We then build a kd-tree in parallel fashion on the GPU for fast search and retrieval of this knowledge, in the form of nearest neighbors, from the knowledge-base. We exploit the concept of histograms to organize the data and use an off-the-shelf radix sort algorithm to sort the keys within a single GPU processor. We query the motion with missing joints or markers and, as a result, fetch a fixed number of nearest neighbors for the given input query motion. We employ an objective function with multiple error terms that substantially recovers 3D joint or marker trajectories in parallel on the GPU. We perform comprehensive experiments to evaluate our approach quantitatively and qualitatively on publicly available motion capture datasets, namely CMU and HDM05. From the results, it is observed that the recovery of boxing, jumptwist, run, martial arts, salsa, and acrobatic motion sequences works best, while the recovery of kicking and jumping sequences results in slightly larger errors. On average, however, our approach produces outstanding results. Generally, our approach outperforms all the competing state-of-the-art methods in most test cases with different action sequences and delivers reliable results with minimal errors and without any user interaction. Full article
(This article belongs to the Special Issue Sensor-Based Motion Analysis in Medicine, Rehabilitation and Sport)
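The retrieval-and-fill idea can be sketched with a brute-force nearest-neighbor search over the observed coordinates. The paper's GPU kd-tree, histograms, and multi-term objective are omitted here, and the data sizes below are invented.

```python
import numpy as np

def recover_pose(query, knowledge_base, missing):
    """Fill missing coordinates of `query` from its nearest neighbor in a
    pose knowledge base, matching only on the observed coordinates.
    (Brute force here; the paper builds a parallel GPU kd-tree instead.)"""
    observed = np.setdiff1d(np.arange(query.size), missing)
    d = ((knowledge_base[:, observed] - query[observed]) ** 2).sum(axis=1)
    neighbor = knowledge_base[np.argmin(d)]
    filled = query.copy()
    filled[missing] = neighbor[missing]         # copy in the missing joints
    return filled

rng = np.random.default_rng(5)
kb = rng.standard_normal((500, 15))             # 500 poses, 5 joints x (x,y,z)
truth = kb[123].copy()
query = truth.copy()
missing = np.array([3, 4, 5])                   # one joint's coordinates lost
query[missing] = 0.0
out = recover_pose(query, kb, missing)
assert np.allclose(out, truth)
```

Blending several retrieved neighbors under an objective with temporal-smoothness terms, as the paper does, improves on this single-neighbor copy.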

35 pages, 4215 KB  
Article
A Class of Algorithms for Recovery of Continuous Relaxation Spectrum from Stress Relaxation Test Data Using Orthonormal Functions
by Anna Stankiewicz
Polymers 2023, 15(4), 958; https://doi.org/10.3390/polym15040958 - 15 Feb 2023
Cited by 10
Abstract
The viscoelastic relaxation spectrum provides deep insights into the complex behavior of polymers. The spectrum is not directly measurable and must be recovered from oscillatory shear or relaxation stress data. The paper deals with the problem of recovery of the relaxation spectrum of linear viscoelastic materials from discrete-time noise-corrupted measurements of the relaxation modulus obtained in the stress relaxation test. A class of robust algorithms of approximation of the continuous spectrum of relaxation frequencies by finite series of orthonormal functions is proposed. A quadratic identification index, which refers to the measured relaxation modulus, is adopted. Since the problem of relaxation spectrum identification is an ill-posed inverse problem, Tikhonov regularization combined with generalized cross-validation is used to guarantee the stability of the scheme. It is proved that the accuracy of the spectrum approximation depends on the measurement noise, the regularization parameter, and the proper selection of the basis functions. The series expansions using the Laguerre, Legendre, Hermite and Chebyshev functions were studied in this paper as examples. The numerical realization of the scheme by the singular value decomposition technique is discussed and the resulting computer algorithm is outlined. Numerical calculations on model data and the relaxation spectrum of a polydisperse polymer are presented. Analytical analysis and numerical studies proved that by choosing an appropriate model through selection of orthonormal basis functions from the proposed class of models and using the developed least-squares regularized identification algorithm, it is possible to determine the relaxation spectrum model for a wide class of viscoelastic materials. The model is smoothed and robust to measurement noise; small model approximation errors are obtained. The identification scheme can be easily implemented in available computing environments. Full article
(This article belongs to the Special Issue Time-Dependent Mechanical Behavior of Polymers and Polymer Composites)
