Search Results (16)

Search Parameters:
Keywords = memory corruption detection

24 pages, 502 KB  
Article
Exception-Driven Security: A Risk-Aware Permission Adjustment for High-Availability Embedded Systems
by Mina Soltani Siapoush and Jim Alves-Foss
Mathematics 2025, 13(20), 3304; https://doi.org/10.3390/math13203304 - 16 Oct 2025
Viewed by 619
Abstract
Real-time operating systems (RTOSs) are widely used in embedded systems to ensure deterministic task execution, predictable responses, and concurrent operations, which are crucial for time-sensitive applications. However, the growing complexity of embedded systems, increased network connectivity, and dynamic software updates significantly expand the attack surface, exposing RTOSs to a variety of security threats, including memory corruption, privilege escalation, and side-channel attacks. Traditional security mechanisms often impose additional overhead that can compromise real-time guarantees. In this work, we present a Risk-aware Permission Adjustment (RPA) framework, implemented on CHERIoT RTOS, which is a CHERI-based operating system. RPA aims to detect anomalous behavior in real time, quantify security risks, and dynamically adjust permissions to mitigate potential threats. RPA maintains system continuity, enforces fine-grained access control, and progressively contains the impact of violations without interrupting critical operations. The framework was evaluated through targeted fault injection experiments, including 20 real-world CVEs and 15 abstract vulnerability classes, demonstrating its ability to mitigate both known and generalized attacks. Performance measurements indicate minimal runtime overhead while significantly reducing system downtime compared to conventional CHERIoT and FreeRTOS implementations. Full article
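As a rough illustration of the risk-aware idea (a minimal sketch only, not the paper's CHERIoT implementation; the exception kinds, severity weights, and thresholds below are invented for the example), permissions can be revoked progressively as a compartment's risk score grows:

```python
# Hypothetical sketch: each security exception raises a compartment's risk
# score, and permissions are progressively revoked instead of halting the
# system outright, preserving continuity for critical operations.
from dataclasses import dataclass, field

SEVERITY = {"bounds_violation": 3, "invalid_capability": 2, "tag_fault": 1}
THRESHOLDS = [(6, {"write", "execute"}), (3, {"execute"})]  # risk -> revoked perms

@dataclass
class Compartment:
    name: str
    permissions: set = field(default_factory=lambda: {"read", "write", "execute"})
    risk: int = 0

    def on_exception(self, kind: str) -> None:
        """Raise the risk score and drop permissions past the highest threshold."""
        self.risk += SEVERITY.get(kind, 1)
        for threshold, revoked in THRESHOLDS:
            if self.risk >= threshold:
                self.permissions -= revoked
                break  # highest matching threshold wins

comp = Compartment("sensor_task")
comp.on_exception("bounds_violation")   # risk 3 -> loses execute
comp.on_exception("bounds_violation")   # risk 6 -> loses write too
print(comp.risk, sorted(comp.permissions))  # 6 ['read']
```

The point of the progressive scheme is that a compartment keeps serving requests with reduced permissions rather than being terminated, matching the abstract's emphasis on system continuity.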

25 pages, 1670 KB  
Article
Reliability of LEON3 Processor’s Program Counter Against SEU, MBU, and SET Fault Injection
by Afef Kchaou, Sehmi Saad, Hatem Garrab and Mohsen Machhout
Cryptography 2025, 9(3), 54; https://doi.org/10.3390/cryptography9030054 - 27 Aug 2025
Cited by 2 | Viewed by 1405
Abstract
This paper presents a comprehensive register transfer-level (RTL) fault injection study targeting the program counter (PC) of the LEON3 processor, a SPARC V8-compliant core widely used in safety-critical and radiation-prone embedded applications. Using the enhanced NETFI+ framework, over four million faults, including single-event upsets (SEUs), multiple-bit upsets (MBUs), and single-event transients (SETs), were systematically injected into the PC across all pipeline stages. The analysis reveals that early stages, particularly Fetch (FE), Decode (DE), Register Access (RA), and Execute (EX), are highly sensitive to SEU and MBU faults. Errors injected in the two earliest pipeline stages (FE and DE) propagated widely, with a large share of outcomes classified as halted executions and timeout traps. Intermediate stages, such as RA and EX, exhibited a higher incidence of silent data corruption and halted execution, while the Memory (ME) and Exception (XC) stages demonstrated greater resilience through fault masking. SET faults were mostly transient and masked, though they occasionally resulted in control flow anomalies. In addition to error classification, detailed trap and exception analysis was performed to characterize fault-induced failure mechanisms. The findings underscore the need for pipeline-stage-specific hardening strategies and highlight the value of simulation-based fault injection for early design validation in safety-critical embedded processors. Full article
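Conceptually, SEU and MBU injection amounts to flipping bits of a register value and classifying the outcome. The sketch below is a toy software model, not the NETFI+ framework; the outcome classifier in particular is a deliberately crude assumption:

```python
import random

def inject_seu(pc: int, width: int = 32) -> int:
    """Single-event upset: flip one random bit of the program counter."""
    return pc ^ (1 << random.randrange(width))

def inject_mbu(pc: int, nbits: int = 2, width: int = 32) -> int:
    """Multiple-bit upset: flip several distinct bits at once."""
    for bit in random.sample(range(width), nbits):
        pc ^= 1 << bit
    return pc

def classify(original: int, faulty: int, text_start: int, text_end: int) -> str:
    """Crude outcome model: a PC outside the text segment traps or halts,
    an unchanged PC is masked, anything else corrupts control flow."""
    if faulty == original:
        return "masked"
    if not (text_start <= faulty < text_end):
        return "trap/halt"
    return "control-flow corruption"

random.seed(0)
pc = 0x4000_1000
print(classify(pc, inject_seu(pc), 0x4000_0000, 0x4001_0000))
```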

19 pages, 1619 KB  
Article
A Structured Method to Generate Self-Test Libraries for Tensor Cores
by Robert Limas Sierra, Juan David Guerrero Balaguera, Josie E. Rodriguez Condia and Matteo Sonza Reorda
Electronics 2025, 14(11), 2148; https://doi.org/10.3390/electronics14112148 - 25 May 2025
Viewed by 1377
Abstract
Modern computing systems increasingly rely on specialized hardware accelerators, such as Graphics Processing Units (GPUs), to meet growing computational demands. GPUs are essential for accelerating a wide range of applications, from machine learning and scientific computing to safety-critical domains like autonomous systems and aerospace. To enhance performance, modern GPUs integrate dedicated in-chip units, such as Tensor Cores (TCs), which are designed for efficient mixed-precision matrix operations. However, as semiconductor technologies scale down, reliability challenges emerge. Permanent hardware faults caused by aging, process variations, or environmental stress can lead to Silent Data Corruptions, which compromise computation results without any visible failure indication. To detect such faults, self-test libraries (STLs) are widely used: suitably crafted pieces of code that activate faults, propagate their effects to visible points (e.g., memory), and possibly signal their occurrence. This work introduces a structured method for generating STLs to detect permanent hardware faults that may arise in TCs. By leveraging the parallelism and regular structure of TCs, the method facilitates the creation of effective STLs for in-field fault detection without hardware modifications and with minimal requirements in terms of test time and memory. The proposed approach was validated on an NVIDIA GeForce RTX 3060 Ti GPU, installed in a Hewlett-Packard Z2 G5 workstation with an Intel Core i9-10800 CPU and 32 GB RAM, available at the Department of Control and Computer Engineering (DAUIN), Politecnico di Torino, Turin, Italy. This setup was used to address stuck-at faults in the arithmetic units of TCs. The results demonstrate that the methodology offers a practical, scalable, and non-intrusive solution for enhancing GPU reliability, applicable in both high-performance and safety-critical environments. Full article
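The core STL idea is compact: feed crafted operand patterns through the unit under test and compare against golden results computed once on known-good hardware. A minimal sketch (plain NumPy standing in for the Tensor Core matrix units, with a hypothetical fault model):

```python
import numpy as np

def run_stl_test(matmul, patterns, golden):
    """Run each crafted test pattern through the (possibly faulty) matmul
    unit and flag any mismatch against the precomputed golden results."""
    for idx, (a, b) in enumerate(patterns):
        if not np.array_equal(matmul(a, b), golden[idx]):
            return f"fault detected by pattern {idx}"
    return "all patterns passed"

# Patterns chosen to exercise many datapath bits (identity, saturated values).
rng = np.random.default_rng(0)
patterns = [(np.eye(4, dtype=np.int32), rng.integers(0, 128, (4, 4), dtype=np.int32)),
            (np.full((4, 4), 127, dtype=np.int32), np.full((4, 4), 1, dtype=np.int32))]
golden = [a @ b for a, b in patterns]  # computed once on known-good hardware

def faulty_matmul(a, b):               # hypothetical unit whose output bit 6 flips
    out = a @ b
    out.flat[0] ^= 0x40
    return out

print(run_stl_test(faulty_matmul, patterns, golden))  # fault detected by pattern 0
```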

22 pages, 687 KB  
Article
Performance and Scalability of Data Cleaning and Preprocessing Tools: A Benchmark on Large Real-World Datasets
by Pedro Martins, Filipe Cardoso, Paulo Váz, José Silva and Maryam Abbasi
Data 2025, 10(5), 68; https://doi.org/10.3390/data10050068 - 5 May 2025
Cited by 4 | Viewed by 9920
Abstract
Data cleaning remains one of the most time-consuming and critical steps in modern data science, directly influencing the reliability and accuracy of downstream analytics. In this paper, we present a comprehensive evaluation of five widely used data cleaning tools—OpenRefine, Dedupe, Great Expectations, TidyData (PyJanitor), and a baseline Pandas pipeline—applied to large-scale, messy datasets spanning three domains (healthcare, finance, and industrial telemetry). We benchmark each tool on dataset sizes ranging from 1 million to 100 million records, measuring execution time, memory usage, error detection accuracy, and scalability under increasing data volumes. Additionally, we assess qualitative aspects such as usability and ease of integration, reflecting real-world adoption concerns. We incorporate recent findings on parallelized data cleaning and highlight how domain-specific anomalies (e.g., negative amounts in finance, sensor corruption in industrial telemetry) can significantly impact tool choice. Our findings reveal that no single solution excels across all metrics; while Dedupe provides robust duplicate detection and Great Expectations offers in-depth rule-based validation, tools like TidyData and baseline Pandas pipelines demonstrate strong scalability and flexibility under chunk-based ingestion. The choice of tool ultimately depends on domain-specific requirements (e.g., approximate matching in finance and strict auditing in healthcare) and the magnitude of available computational resources. By highlighting each framework’s strengths and limitations, this study offers data practitioners clear, evidence-driven guidance for selecting and combining tools to tackle large-scale data cleaning challenges. Full article
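As an example of the chunk-based ingestion that the benchmark credits for the scalability of baseline Pandas pipelines, a cleaning pass might look like the sketch below (the CSV layout and the finance-style "amounts must be positive" rule are assumptions for illustration):

```python
import pandas as pd

def clean_chunked(csv_path: str, chunksize: int = 1_000_000) -> pd.DataFrame:
    """Baseline pandas pipeline with chunk-based ingestion: each chunk is
    validated and cleaned independently so peak memory stays bounded."""
    cleaned = []
    for chunk in pd.read_csv(csv_path, chunksize=chunksize):
        chunk = chunk.drop_duplicates()          # note: deduplicates within a chunk only
        chunk = chunk.dropna(subset=["amount"])  # assumed column name
        # Domain-specific rule from the finance scenario: amounts must be positive.
        chunk = chunk[chunk["amount"] > 0]
        cleaned.append(chunk)
    return pd.concat(cleaned, ignore_index=True)
```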
(This article belongs to the Section Information Systems and Data Management)

20 pages, 938 KB  
Review
IoT Firmware Emulation and Its Security Application in Fuzzing: A Critical Revisit
by Wei Zhou, Shandian Shen and Peng Liu
Future Internet 2025, 17(1), 19; https://doi.org/10.3390/fi17010019 - 6 Jan 2025
Cited by 4 | Viewed by 7346
Abstract
As IoT devices with microcontroller (MCU)-based firmware become more common in our lives, memory corruption vulnerabilities in their firmware are increasingly targeted by adversaries. Fuzzing is a powerful method for detecting these vulnerabilities, but it poses unique challenges when applied to IoT devices. Direct fuzzing on these devices is inefficient, and recent efforts have shifted towards creating emulation environments for dynamic firmware testing. However, unlike traditional software, firmware interacts with a significantly more diverse set of peripherals, which presents new challenges for achieving scalable full-system emulation and effective fuzzing. This paper reviews 27 state-of-the-art works in MCU-based firmware emulation and its applications in fuzzing. Instead of classifying existing techniques based on their capabilities and features, we first identify the fundamental challenges faced by firmware emulation and fuzzing. We then revisit recent studies, organizing them according to the specific challenges they address and discussing how each challenge is tackled. We compare the emulation fidelity and bug detection capabilities of various techniques to clearly demonstrate their strengths and weaknesses, aiding users in selecting or combining tools to meet their needs. Finally, we highlight the remaining technical gaps and point out important future research directions in firmware emulation and fuzzing. Full article
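A recurring theme across the surveyed emulators is servicing unmodeled peripheral reads with fuzzer-controlled data so the firmware keeps executing. The toy model below illustrates only that idea and is not the API of any surveyed tool:

```python
# Toy model of peripheral-aware emulation: MMIO reads that would normally
# block on real hardware are serviced from fuzzer-controlled input instead.
class FuzzedPeripheralBus:
    def __init__(self, fuzz_input: bytes):
        self.fuzz = iter(fuzz_input)

    def read(self, address: int) -> int:
        """Return the next fuzzer byte for any unmodeled peripheral register."""
        return next(self.fuzz, 0)  # default to 0 once input is exhausted

bus = FuzzedPeripheralBus(b"\xff\x41\x00")
status = bus.read(0x4002_1000)   # firmware polls a status register
data = bus.read(0x4002_1004)     # then reads a data register
print(hex(status), hex(data))
```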
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)

23 pages, 1499 KB  
Article
A Finite State Automaton for Green Data Validation in a Real-World Smart Manufacturing Environment with Special Regard to Time-Outs and Overtaking
by Simon Paasche and Sven Groppe
Future Internet 2023, 15(11), 349; https://doi.org/10.3390/fi15110349 - 26 Oct 2023
Cited by 2 | Viewed by 2509
Abstract
Since data are the gold of modern business, companies put a huge effort into collecting internal and external information, such as process, supply chain, or customer data. To leverage the full potential of gathered information, data have to be free of errors and corruptions. Thus, data quality and data validation approaches become increasingly relevant. At the same time, the impact of information and communication technologies has been increasing for several years. This leads to increasing energy consumption and the associated emission of climate-damaging gases such as carbon dioxide (CO2). Since these gases cause serious problems (e.g., climate change) and lead to climate targets not being met, it is a major goal for companies to become climate neutral. Our work focuses on quality aspects in smart manufacturing lines and presents a finite automaton to validate an incoming stream of manufacturing data. Through this process, we aim to achieve a sustainable use of manufacturing resources. In the course of this work, we investigate ways to implement data validation in a resource-saving manner. Our automaton enables the detection of errors in a continuous data stream and reports discrepancies directly. By making inconsistencies visible and annotating affected data sets, we are able to increase the overall data quality. Further, we build up a fast feedback loop, allowing us to quickly intervene and remove sources of interference. Through this fast feedback, we expect lower consumption of material resources on the one hand, because we can intervene in case of error and optimize our processes. On the other hand, our automaton decreases the immaterial resources needed, such as the energy consumption required for data validation, due to more efficient validation steps. These more efficient validation steps result from the automaton structure itself. Furthermore, we reduce the response time through additional recognition of overtaking data records. In addition, we implement an improved check for complex inconsistencies. Our experimental results show that we are able to significantly reduce memory usage and thus decrease the energy consumption for our data validation task. Full article
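To make the time-out and overtaking handling concrete, the following minimal sketch (with an invented record format of a bare sequence number per record) buffers records that overtake their predecessors and reports a gap only once it outlives a timeout:

```python
import time

class StreamValidator:
    """Tiny automaton-style validator: out-of-order records are treated as
    overtaking rather than lost, and a persistent gap becomes a timeout error."""
    def __init__(self, timeout_s: float = 5.0):
        self.expected = 0
        self.pending = {}          # seq -> arrival time of out-of-order records
        self.timeout_s = timeout_s

    def feed(self, seq: int) -> list[str]:
        events = []
        if seq != self.expected:
            self.pending[seq] = time.monotonic()
            events.append(f"overtaking: got {seq}, expected {self.expected}")
        else:
            self.expected += 1
            while self.expected in self.pending:  # drain records that overtook
                del self.pending[self.expected]
                self.expected += 1
        if self.pending and time.monotonic() - min(self.pending.values()) > self.timeout_s:
            events.append(f"timeout: record {self.expected} still missing")
        return events

v = StreamValidator()
for seq in (0, 2, 1):              # record 2 overtakes record 1
    print(seq, v.feed(seq))
```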
(This article belongs to the Section Internet of Things)

22 pages, 1122 KB  
Article
CRBF: Cross-Referencing Bloom-Filter-Based Data Integrity Verification Framework for Object-Based Big Data Transfer Systems
by Preethika Kasu, Prince Hamandawana and Tae-Sun Chung
Appl. Sci. 2023, 13(13), 7830; https://doi.org/10.3390/app13137830 - 3 Jul 2023
Viewed by 2100
Abstract
Various components are involved in the end-to-end path of data transfer. Protecting data integrity from failures in these intermediate components is a key feature of big data transfer tools. Although most of these components provide some degree of data integrity, they are either too expensive or inefficient in recovering corrupted data. This problem highlights the need for application-level end-to-end integrity verification during data transfer. However, the computational, memory, and storage overhead of big data transfer tools can be a significant bottleneck for ensuring data integrity due to the large size of the data. This paper proposes a novel framework for data integrity verification in big data transfer systems using a cross-referencing Bloom filter. This framework has three advantages over state-of-the-art data integrity techniques: lower computation overhead, lower memory overhead, and zero false-positive errors for a bounded number of elements. This study evaluates the computation, memory, and recovery-time overheads and the false-positive behavior of the proposed framework and compares them with state-of-the-art solutions. The evaluation results indicate that the proposed framework is efficient in detecting and recovering from integrity errors while eliminating false positives in the Bloom filter data structure. In addition, we observe negligible computation, memory, and recovery overheads for all workloads. Full article
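The building block is an ordinary Bloom filter over per-object checksums: a membership miss at the receiver proves the object's hash was never registered by the sender, so the object must be retransmitted. The sketch below shows that building block only (the cross-referencing of two filters that eliminates false positives is the paper's contribution and is omitted here):

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter; the paper's framework cross-references two such
    filters to cancel out false positives."""
    def __init__(self, m: int = 1 << 20, k: int = 4):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# Receiver-side integrity check: hash each received object and test membership
# against the filter built from sender-side hashes.
sent = BloomFilter()
for obj in (b"chunk-0 payload", b"chunk-1 payload"):
    sent.add(hashlib.sha256(obj).digest())
received = b"chunk-1 payloaX"  # corrupted in transit
print(hashlib.sha256(received).digest() in sent)  # False: corruption detected
```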
(This article belongs to the Special Issue Secure Integration of IoT & Digital Twins)

31 pages, 5140 KB  
Article
A Family of Developed Hybrid Four-Term Conjugate Gradient Algorithms for Unconstrained Optimization with Applications in Image Restoration
by Eltiyeb Ali and Salem Mahdi
Symmetry 2023, 15(6), 1203; https://doi.org/10.3390/sym15061203 - 4 Jun 2023
Cited by 5 | Viewed by 2544
Abstract
The most important advantages of conjugate gradient (CG) methods are their low memory requirements and fast convergence. This paper contains two main parts that deal with two application problems, as follows. In the first part, three new parameters of the CG methods are designed and then combined by employing a convex combination. The search direction is a four-term hybrid form for modified classical CG methods with some newly proposed parameters. The result of this hybridization is a newly developed hybrid CGCG method containing four terms. The proposed CGCG method satisfies the sufficient descent property. The convergence analysis of the proposed method is considered under some reasonable conditions. A numerical investigation is carried out for an unconstrained optimization problem. The comparison between the newly suggested algorithm (CGCG) and five other classical CG algorithms shows that the new method is competitive with, and in all cases superior to, the five methods in terms of efficiency, reliability, and effectiveness in solving large-scale, unconstrained optimization problems. The second main part of this paper discusses the image restoration problem. By using the adaptive median filter method, the noise in an image is detected, and then the corrupted pixels of the image are restored by using a new family of modified hybrid CG methods. This new family has four terms: the first is the negative gradient; the second one consists of either the HS-CG method or the HZ-CG method; and the third and fourth terms are taken from our proposed CGCG method. Additionally, a change in the size of the filter window, according to the noise level, plays a key role in improving the performance of this family of CG methods. Four famous images (test problems) are used to examine the performance of the new family of modified hybrid CG methods. The outstanding clarity of the restored images indicates that the new family of modified hybrid CG methods has reliable efficiency and effectiveness in dealing with image restoration problems. Full article
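For orientation, a four-term hybrid CG search direction generically takes the form below (illustrative only; the abstract does not state the paper's exact parameter choices):

```latex
d_{k+1} = -g_{k+1} + \beta_k^{(1)} d_k + \beta_k^{(2)} y_k + \beta_k^{(3)} s_k,
\qquad y_k = g_{k+1} - g_k, \quad s_k = x_{k+1} - x_k,
```

with the $\beta_k^{(i)}$ parameters blended through a convex combination, as the abstract describes.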
(This article belongs to the Section Mathematics)

23 pages, 7655 KB  
Article
Image Edge Detection Based on Fractional-Order Ant Colony Algorithm
by Xinyu Liu and Yi-Fei Pu
Fractal Fract. 2023, 7(6), 420; https://doi.org/10.3390/fractalfract7060420 - 23 May 2023
Cited by 5 | Viewed by 2585
Abstract
Edge detection is a highly researched topic in the field of image processing, with numerous methods proposed by previous scholars. Among these, ant colony algorithms have emerged as a promising approach for detecting image edges. These algorithms have demonstrated high efficacy in accurately identifying edges within images. In this paper, owing to the long-term memory, nonlocality, and weak singularity of fractional calculus, a fractional-order ant colony algorithm combined with a fractional differential mask and the coefficient of variation (FACAFCV) is proposed for image edge detection. If we set the order of the fractional-order ant colony algorithm and fractional differential mask to v=0, the edge detection method we propose becomes an integer-order edge detection method. We conduct experiments on images that are corrupted by multiplicative noise, as well as on an edge detection dataset. Our experimental results demonstrate that our method is able to detect image edges while also mitigating the impact of multiplicative noise. These results indicate that our method has the potential to be a valuable tool for edge detection in practical applications. Full article
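Fractional differential masks of this kind are typically built from Grünwald–Letnikov coefficients. A small sketch (the standard construction, not necessarily the paper's exact mask) shows how order v = 0 collapses to the identity, consistent with the abstract's remark:

```python
import numpy as np

def gl_coefficients(v: float, n: int) -> np.ndarray:
    """Grünwald–Letnikov coefficients c_k = (-1)^k C(v, k), which define a
    1-D fractional differential mask of order v; v = 0 gives the identity."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (k - 1 - v) / k   # recurrence for (-1)^k C(v, k)
    return c

print(gl_coefficients(0.5, 4))  # [ 1.     -0.5    -0.125  -0.0625]
print(gl_coefficients(0.0, 4))  # [1. 0. 0. 0.] -> integer-order (identity) case
```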

18 pages, 658 KB  
Article
Defending against OS-Level Malware in Mobile Devices via Real-Time Malware Detection and Storage Restoration
by Niusen Chen and Bo Chen
J. Cybersecur. Priv. 2022, 2(2), 311-328; https://doi.org/10.3390/jcp2020017 - 26 May 2022
Cited by 5 | Viewed by 5846
Abstract
Combating OS-level malware is a very challenging problem, as this type of malware can compromise the operating system, obtain kernel privileges, and subvert almost all existing anti-malware tools. This work aims to address this problem in the context of mobile devices. As real-world malware is very heterogeneous, we narrow the scope of our work by focusing on a special type of OS-level malware that always corrupts user data. We have designed mobiDOM, the first framework that can combat OS-level data corruption malware for mobile computing devices. Our mobiDOM contains two components, a malware detector and a data repairer. The malware detector can securely and timely detect the presence of OS-level malware by fully utilizing the existing hardware features of a mobile device, namely, flash memory and Arm TrustZone. Specifically, we integrate the malware detection into the flash translation layer (FTL), a firmware layer embedded into the flash storage hardware, which is inaccessible to the OS; in addition, we run a trusted application in the Arm TrustZone secure world, which acts as a user-level manager of the malware detector. The FTL-based malware detection and the TrustZone-based manager can communicate with each other stealthily via steganography. The data repairer can restore the external storage to a healthy historical state by taking advantage of the out-of-place-update feature of flash memory and our malware-aware garbage collection in the FTL. Security analysis and experimental evaluation on a real-world testbed confirm the effectiveness of mobiDOM. Full article
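The storage-restoration side leans on the fact that flash performs out-of-place updates, so old block versions survive until garbage collection reclaims them. A loose Python model of that retention idea (all names and the time-stamped API are invented for illustration):

```python
# Toy model of out-of-place updates in an FTL: every write appends a new
# version instead of overwriting, so earlier states remain recoverable
# until garbage collection discards them.
class VersionedStore:
    def __init__(self):
        self.versions = {}                 # logical block -> [(t, data), ...]

    def write(self, block: int, t: int, data: bytes):
        self.versions.setdefault(block, []).append((t, data))

    def restore(self, block: int, before_t: int) -> bytes:
        """Return the newest version written strictly before time before_t."""
        candidates = [(t, d) for t, d in self.versions.get(block, []) if t < before_t]
        return max(candidates)[1] if candidates else b""

s = VersionedStore()
s.write(7, t=1, data=b"clean")
s.write(7, t=9, data=b"corrupted by malware")
print(s.restore(7, before_t=9))   # b'clean' -- roll back to the pre-infection state
```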
(This article belongs to the Special Issue Secure Software Engineering)

19 pages, 1336 KB  
Article
Machine Learning Meets Compressed Sensing in Vibration-Based Monitoring
by Federica Zonzini, Antonio Carbone, Francesca Romano, Matteo Zauli and Luca De Marchi
Sensors 2022, 22(6), 2229; https://doi.org/10.3390/s22062229 - 14 Mar 2022
Cited by 40 | Viewed by 5731
Abstract
Artificial Intelligence applied to Structural Health Monitoring (SHM) has provided considerable advantages in the accuracy and quality of the estimated structural integrity. Nevertheless, several challenges still need to be tackled in the SHM field, which extend the monitoring process beyond mere data analytics and structural assessment. In particular, one of the open problems in the field relates to the communication layer of the sensor networks, since the continuous collection of long time series from multiple sensing units rapidly consumes the available memory resources and requires complicated protocols to avoid network congestion. In this scenario, the present work presents a comprehensive framework for vibration-based diagnostics, in which data compression techniques are first introduced as a means to shrink the dimension of the data to be managed through the system. Then, neural network models solving binary classification problems were implemented for damage detection, also encompassing the influence of environmental factors in the evaluation of the structural status. Moreover, the potential degradation induced by the usage of low-cost sensors on the adopted framework was evaluated: additional analyses were performed in which experimental data were corrupted with the noise characterizing MEMS sensors. The proposed solutions were tested with experimental data from the Z24 bridge use case, proving that the amalgam of data compression, optimized (i.e., low complexity) machine learning architectures and environmental information allows high classification scores to be attained, i.e., accuracy and precision greater than 96% and 95%, respectively. Full article
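The compression step can be as simple as projecting each vibration window through a random sensing matrix before transmission; downstream classifiers then operate on the shorter measurement vector. A minimal sketch with assumed dimensions (1024 samples compressed to 128, an 8x ratio):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 128                                 # original and compressed lengths
phi = rng.normal(size=(m, n)) / np.sqrt(m)       # random sensing matrix

signal = np.sin(2 * np.pi * 5 * np.arange(n) / n)  # stand-in vibration trace
compressed = phi @ signal                        # what the sensor node transmits

# Downstream, lightweight binary classifiers can be trained directly on the
# compressed measurements, avoiding full reconstruction on the edge device.
print(signal.shape, "->", compressed.shape)      # (1024,) -> (128,)
```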

15 pages, 4120 KB  
Article
The Application of Deep Learning Algorithms for PPG Signal Processing and Classification
by Filipa Esgalhado, Beatriz Fernandes, Valentina Vassilenko, Arnaldo Batista and Sara Russo
Computers 2021, 10(12), 158; https://doi.org/10.3390/computers10120158 - 25 Nov 2021
Cited by 39 | Viewed by 13014
Abstract
Photoplethysmography (PPG) is widely used in wearable devices due to its convenience and cost-effective nature. From this signal, several biomarkers can be collected, such as heart and respiration rate. In the usual acquisition scenarios, PPG is an artefact-ridden signal, which requires the designated classification algorithms to be robust to the effect of the noise component on the classification. Within the selected classification algorithm, the adjustment of the hyperparameters is of utmost importance. This study aimed to develop a deep learning model for robust PPG wave detection, which includes finding each beat's temporal limits, from which the peak can be determined. A study database consisting of 1100 records was created from experimental PPG measurements performed in 47 participants. Different deep learning models were implemented to classify the PPG: Long Short-Term Memory (LSTM), Bidirectional LSTM, and Convolutional Neural Network (CNN). The Bidirectional LSTM and the CNN-LSTM were investigated, using the PPG Synchrosqueezed Fourier Transform (SSFT) as the models' input. Accuracy, precision, recall, and F1-score were evaluated for all models. The CNN-LSTM algorithm, with an SSFT input, was the best performing model, with accuracy, precision, and recall of 0.894, 0.923, and 0.914, respectively. This model has been shown to be competent in PPG detection and delineation tasks under noise-corrupted signals, which justifies the use of this innovative approach. Full article
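A CNN-LSTM over a time-frequency input can be sketched in a few Keras lines. All shapes below are assumptions for illustration (the abstract does not give the SSFT dimensions), and the per-window binary output is a simplification of the paper's beat-delineation task:

```python
import tensorflow as tf

# Hypothetical shapes: 256 time frames x 64 SSFT frequency bins per window;
# output is the probability that the window lies inside a PPG beat.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(256, 64)),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
model.summary()
```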
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2021)

24 pages, 7102 KB  
Article
Automated Memory Corruption Detection through Analysis of Static Variables and Dynamic Memory Usage
by Jihyun Park, Byoungju Choi and Yeonhee Kim
Electronics 2021, 10(17), 2127; https://doi.org/10.3390/electronics10172127 - 1 Sep 2021
Cited by 2 | Viewed by 3999
Abstract
Various methods for memory fault detection have been developed through continuous study. However, many memory defects remain that are difficult to resolve. Memory corruption is one such defect; it can cause system crashes, making debugging important. However, the location of the system crash and the actual source of the memory corruption often differ, which makes it difficult to solve these defects using existing methods. In this paper, we propose a method that detects memory defects in which the location causing the defect differs from the location where the fault manifests, providing useful information for debugging. This study presents a method for the real-time detection of memory defects in software based on data obtained through static and dynamic analysis. The data we used for memory defect analysis were (1) information on static global variables (data, address, size) derived through the analysis of executable binary files, and (2) dynamic memory usage information obtained by tracking memory-related functions that are called during the real-time execution of the process. We implemented the proposed method as a tool and applied it to applications running on Linux. The results indicate the defect-detection efficacy of our tool for these applications. Our method accurately detects defects whose cause and detected-fault locations differ, and it incurs very low overhead for fault detection. Full article
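The dynamic half of such an approach reduces to maintaining a table of live address ranges, fed by traced malloc/free calls and seeded with the static-global ranges recovered from the binary, then checking accesses against it. A conceptual Python model (addresses and sizes invented for the example):

```python
# Conceptual model: record every allocation's [addr, addr + size) range as
# malloc/free are traced; static globals from binary analysis pre-populate
# the same table. An access outside every live range flags a defect.
class MemoryTracker:
    def __init__(self, static_globals: dict[int, int]):
        self.live = dict(static_globals)   # base address -> size

    def on_malloc(self, addr: int, size: int):
        self.live[addr] = size

    def on_free(self, addr: int):
        self.live.pop(addr, None)

    def check_access(self, addr: int) -> bool:
        """True if addr lies inside any live allocation or static variable."""
        return any(base <= addr < base + size for base, size in self.live.items())

t = MemoryTracker(static_globals={0x1000: 64})   # one 64-byte global
t.on_malloc(0x8000, 32)
t.on_free(0x8000)
print(t.check_access(0x1010))   # True: within the static global
print(t.check_access(0x8010))   # False: use-after-free candidate
```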
(This article belongs to the Special Issue Software Verification and Validation for Embedded Systems—Volume 2)

18 pages, 1071 KB  
Article
Lightweight Microcontroller with Parallelized ECC-Based Code Memory Protection Unit for Robust Instruction Execution in Smart Sensors
by Myeongjin Kang and Daejin Park
Sensors 2021, 21(16), 5508; https://doi.org/10.3390/s21165508 - 16 Aug 2021
Cited by 2 | Viewed by 3136
Abstract
Embedded systems typically operate in harsh environments, such as those with external shock, insufficient power, or sensors left in service past their replacement cycle. Despite these harsh environments, embedded systems require data integrity for accurate operation. Unintended data changes can cause a serious error in reduced instruction set computer (RISC)-based small embedded systems. For instance, if communication is performed on an edge device with an insufficient power supply, the peak threshold is not reached, resulting in failed or incorrect data transmission. To ensure data integrity, we use an error-correcting code (ECC), which can detect and correct errors. The ECC parity bits and data are stored together in additional ECC memory, and the original data are extracted through the ECC decoding process. The original data are extracted during the instruction fetch stage, which is a bottleneck in the RISC-based structure. Executing the ECC decoding process in this bottleneck increases the instruction fetch time and significantly reduces overall performance. In this study, we attempt to minimize the effect of ECC on execution speed by executing the ECC decoding process in parallel, easing the bottleneck. To evaluate the performance of the parallelized ECC decoding block, we applied the proposed method to the tiny processing unit (TPU) with a RISC-based von Neumann structure and compared memory usage, speed, and reliability under different transmission success rates in each model. The experiment was conducted using a benchmark that repeatedly executed several 3×3 matrix calculations, and the reliability improvement was assessed by corrupting stored random data under different transmission success rates. As a result, in the proposed model, which uses additional parity bits for parallel processing, memory usage increased by 10 bits per instruction, reducing the data rate from 80% to 61%. However, the model showed an improvement in overall reliability and a 7% increase in speed. Full article
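The encode/decode cycle the abstract describes is the classic single-error-correcting Hamming scheme; a Hamming(7,4) sketch shows the syndrome-based correction that the paper's hardware parallelizes (a software model only, not the paper's circuit):

```python
def hamming74_encode(d: list[int]) -> list[int]:
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c: list[int]) -> list[int]:
    """Correct any single-bit error via the syndrome, then return data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3          # 1-based position of the flipped bit
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                                 # single-event upset in memory
print(hamming74_decode(word))                # [1, 0, 1, 1] -- corrected
```

Because each syndrome bit depends only on its own subset of codeword bits, the three XOR trees are independent, which is what makes the decode step amenable to the parallelization the paper pursues.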
(This article belongs to the Section Electronic Sensors)

18 pages, 1840 KB  
Article
Recurrent Neural Network for Human Activity Recognition in Embedded Systems Using PPG and Accelerometer Data
by Michele Alessandrini, Giorgio Biagetti, Paolo Crippa, Laura Falaschetti and Claudio Turchetti
Electronics 2021, 10(14), 1715; https://doi.org/10.3390/electronics10141715 - 17 Jul 2021
Cited by 60 | Viewed by 6256
Abstract
Photoplethysmography (PPG) is a common and practical technique to detect human activity and other physiological parameters and is commonly implemented in wearable devices. However, the PPG signal is often severely corrupted by motion artifacts. The aim of this paper is to address the human activity recognition (HAR) task directly on the device, implementing a recurrent neural network (RNN) on a low-cost, low-power microcontroller, ensuring the required performance in terms of accuracy and low complexity. To reach this goal, (i) we first develop an RNN, which integrates PPG and tri-axial accelerometer data, where these data can be used to compensate for motion artifacts in PPG in order to accurately detect human activity; (ii) then, we port the RNN to an embedded device, Cloud-JAM L4, based on an STM32 microcontroller, optimizing it to maintain an accuracy of over 95% while requiring modest computational power and memory resources. The experimental results show that such a system can be effectively implemented on a resource-constrained system, allowing the design of a fully autonomous wearable embedded system for human activity recognition and logging. Full article
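The model side can be sketched as a compact LSTM over fused PPG-plus-accelerometer windows. Every shape and the activity count below are assumptions for illustration, and an actual MCU port (such as the paper's Cloud-JAM L4 deployment) would additionally require conversion with a tool such as TFLite:

```python
import tensorflow as tf

# Illustrative shapes: 200-sample windows of 1 PPG channel + 3 accelerometer
# axes, classified into a hypothetical set of 5 activities.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(200, 4)),             # (time steps, PPG + x/y/z accel)
    tf.keras.layers.LSTM(32),                   # kept small for an MCU-class target
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```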
(This article belongs to the Section Artificial Intelligence Circuits and Systems (AICAS))
