Search Results (140)

Search Parameters:
Keywords = bit representation

36 pages, 1259 KiB  
Article
A Survey of Printable Encodings
by Marco Botta, Davide Cavagnino, Alessandro Druetto, Maurizio Lucenteforte and Annunziata Marra
Algorithms 2025, 18(8), 504; https://doi.org/10.3390/a18080504 - 12 Aug 2025
Abstract
The representation of binary data in a compact, printable, efficient, and often human-readable format is essential in numerous computing applications, mainly driven by the limitations of systems and communication protocols not designed to handle arbitrary 8-bit binary data. This paper provides a comprehensive survey and an extensive characterization of printable encoding schemes, tracing their evolution from historical methods to contemporary solutions for representing, storing, and transmitting binary data using restricted character sets. The review includes a foundational analysis of fundamental character encodings, proposes a layered model for the classification of printable encodings, and examines various schemes based on their numerical bases, alphabets, and functional characteristics. Algorithms, key design trade-offs, the impact of relevant standards, security implications, performance considerations, and human factors are systematically discussed, aiming to offer a detailed understanding of the current context and open challenges.
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
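
The survey's subject is easy to ground with the standard library: the sketch below (illustrative only, not drawn from the paper) encodes the same binary payload with four common printable encodings and reports the size overhead each alphabet incurs.

```python
import base64

# Encode arbitrary 8-bit data with progressively larger printable alphabets;
# a bigger alphabet packs more bits per character and lowers the overhead.
payload = bytes(range(256))  # arbitrary binary data covering all byte values

for name, codec in [("Base16", base64.b16encode),
                    ("Base32", base64.b32encode),
                    ("Base64", base64.b64encode),
                    ("Base85", base64.b85encode)]:
    encoded = codec(payload)
    overhead = (len(encoded) - len(payload)) / len(payload) * 100
    print(f"{name}: {len(payload)} bytes -> {len(encoded)} chars "
          f"({overhead:.0f}% overhead)")
```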

34 pages, 1448 KiB  
Article
High-Fidelity Image Transmission in Quantum Communication with Frequency Domain Multi-Qubit Techniques
by Udara Jayasinghe, Thanuj Fernando and Anil Fernando
Algorithms 2025, 18(8), 501; https://doi.org/10.3390/a18080501 - 11 Aug 2025
Abstract
This paper proposes a novel quantum image transmission framework to address the limitations of existing single-qubit time domain systems, which struggle with noise resilience and scalability. The framework integrates frequency domain processing with multi-qubit (1 to 8 qubits) encoding to enhance robustness against quantum noise. Initially, images are source-coded using JPEG and HEIF formats with rate adjustment to ensure consistent bandwidth usage. The resulting bitstreams are channel-encoded and mapped to multi-qubit quantum states. These states are transformed into the frequency domain via the quantum Fourier transform (QFT) for transmission. At the receiver, the inverse QFT recovers the time domain states, followed by multi-qubit decoding, channel decoding, and source decoding to reconstruct the image. Performance is evaluated using bit error rate (BER), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and universal quality index (UQI). Results show that increasing the number of qubits enhances image quality and noise robustness, albeit at the cost of increased system complexity. Compared to time domain processing, the frequency domain approach achieves superior performance across all qubit configurations, with the eight-qubit system delivering up to a 4 dB maximum channel SNR gain for both JPEG and HEIF images. Although single-qubit systems benefit less from frequency domain encoding due to limited representational capacity, the overall framework demonstrates strong potential for scalable and noise-robust quantum image transmission in future quantum communication networks.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
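
For intuition about the transform at the core of the framework: on n qubits the QFT is mathematically the unitary DFT over the 2^n amplitudes, so a noise-free round trip can be checked numerically. The toy below is an assumption-laden stand-in (NumPy state vectors, a single basis state standing in for an encoded bit pattern), not the authors' pipeline.

```python
import numpy as np

def qft_matrix(n_qubits: int) -> np.ndarray:
    """Unitary QFT matrix on n qubits, i.e. the normalized DFT matrix."""
    N = 2 ** n_qubits
    omega = np.exp(2j * np.pi / N)
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return omega ** (j * k) / np.sqrt(N)

n = 3                                   # 3-qubit register, 8 amplitudes
state = np.zeros(2 ** n, dtype=complex)
state[0b101] = 1.0                      # encode bit pattern 101 as a basis state

F = qft_matrix(n)
freq_state = F @ state                  # "transmit" in the frequency domain
recovered = F.conj().T @ freq_state     # inverse QFT at the receiver

assert np.allclose(recovered, state)    # lossless in the noise-free case
print(np.argmax(np.abs(recovered)))     # -> 5, i.e. bit pattern 101
```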

21 pages, 766 KiB  
Article
Edge AI Trustworthiness: Revisiting Bit-Flip Impacts and Criticality Conditions
by Régis Leveugle, Ahmed Al-Kaf and Mounir Benabdenbi
Electronics 2025, 14(16), 3186; https://doi.org/10.3390/electronics14163186 - 11 Aug 2025
Abstract
Neural networks (NNs) are increasingly used in embedded systems driven by artificial intelligence (AI) features. Such systems are now developed for many critical applications with strong dependability constraints and trustworthiness requirements. The effect of bit-flips occurring during inferences in the field has therefore become a major concern. Dedicated design methods have been proposed to increase the robustness of NNs and enforce trustworthiness, while minimizing implementation overheads in the context of Edge AI. Such approaches are usually based on criticality criteria characterizing the most impactful bit-flips. In this work, the two main criteria are revisited for bit-flips in data that are independent of the hardware/software implementation. Extensive statistical fault injection results are analyzed for a case study based on a quantized version of LeNet. They first demonstrate that each criterion holds only under certain perturbation conditions, depending on the type of perturbed data and on the perturbed layer. The data representation format also has a significant impact. Surprising outcomes of the case study reveal a parameter overlooked in the literature: the definition of the activation functions. Another important finding, mostly neglected in the literature, is the large number of bit-flips that have a positive impact on inferences by correcting misclassifications of the nominal NN. In the presented case study, almost 40% of the misclassifications due to bit-flips are compensated for by the positive effects of other bit-flips, leading to only 0.06% global accuracy loss when bit-flips occur. These outcomes indicate that mitigations must be carefully tailored to each layer and data subset of the NN in order to accurately limit the effect of critical bit-flips, while avoiding suppressing the benefit of bit-flips with positive effects. Although derived from a specific case study, these findings have wide significance for design practice.
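
A minimal sketch of the fault-injection mechanism such studies rely on, assuming int8-quantized data and uniformly random single-bit flips (the fault model here is generic, not the paper's exact campaign):

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_random_bit(values: np.ndarray, rng) -> np.ndarray:
    """Flip one uniformly chosen bit in one uniformly chosen element."""
    out = values.copy()
    flat = out.view(np.uint8).reshape(-1)   # reinterpret the bits as unsigned
    idx = rng.integers(flat.size)
    bit = rng.integers(8)
    flat[idx] ^= np.uint8(1 << bit)
    return out

activations = rng.integers(-128, 128, size=1000, dtype=np.int8)
for _ in range(5):
    faulty = flip_random_bit(activations, rng)
    delta = faulty.astype(np.int16) - activations.astype(np.int16)
    # high-order bit flips drift the value far more than LSB flips
    print("max perturbation:", int(np.abs(delta).max()))
```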

32 pages, 6134 KiB  
Article
Nonlinear Dynamic Modeling and Analysis of Drill Strings Under Stick–Slip Vibrations in Rotary Drilling Systems
by Mohamed Zinelabidine Doghmane
Energies 2025, 18(14), 3860; https://doi.org/10.3390/en18143860 - 20 Jul 2025
Abstract
This paper presents a comprehensive study of torsional stick–slip vibrations in rotary drilling systems through a comparison between two lumped-parameter models of differing complexity: a simple two-degree-of-freedom (2-DOF) model and a complex high-degree-of-freedom (high-DOF) model. The two models are developed under identical boundary conditions and consider an identical nonlinear friction torque dynamic involving the Stribeck effect and dry friction phenomena. The high-DOF model is computed with the Finite Element Method (FEM) to enable accurate simulation of the dynamic behavior of the drill string and accurate representation of wave propagation, energy build-up, and torque response. Field data obtained from an Algerian oil well with Measurement While Drilling (MWD) equipment are used to guide the modeling and parameterize the simulations. According to the findings, the FEM-based high-DOF model demonstrates better performance in simulating the fundamental stick–slip dynamics, such as drill bit velocity oscillation, nonlinear friction torque formation, and transient bit-to-surface contacts. In contrast, the 2-DOF model cannot represent these effects accurately, which can lead to inappropriate control actions and inadequate mitigation of vibration severity. This study highlights the importance of model fidelity in building reliable real-time rotary drilling control systems. By quantifying the performance difference between the low- and high-resolution models, the findings offer valuable insights for further optimizing drilling efficiency, minimizing non-productive time (NPT), and improving the rate of penetration (ROP). This contribution points to the need for high-fidelity models, such as FEM-based ones, in facilitating smart and adaptive well control strategies in modern petroleum drilling engineering.
(This article belongs to the Section H: Geo-Energy)
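
The 2-DOF torsional model discussed above is compact enough to sketch. All parameters below are invented for illustration (not the Algerian well data): top-drive and bit inertias coupled by a torsional spring/damper, with a Stribeck-type friction torque at the bit.

```python
import numpy as np
from scipy.integrate import solve_ivp

Jt, Jb = 900.0, 400.0   # kg*m^2, top-drive and bit inertias (assumed)
k, c = 470.0, 30.0      # N*m/rad, N*m*s/rad: drill-pipe stiffness/damping (assumed)
Tm = 9500.0             # N*m, constant motor torque (assumed)

def friction(omega, Tc=6000.0, Ts=9000.0, vs=1.0):
    """Stribeck curve: static peak Ts decaying to Coulomb level Tc."""
    return np.sign(omega) * (Tc + (Ts - Tc) * np.exp(-abs(omega) / vs))

def rhs(t, y):
    phi_t, om_t, phi_b, om_b = y
    twist = k * (phi_t - phi_b) + c * (om_t - om_b)  # torque through the string
    return [om_t, (Tm - twist) / Jt,
            om_b, (twist - friction(om_b)) / Jb]

sol = solve_ivp(rhs, (0, 60), [0, 0, 0, 0], max_step=0.01)
# stick-slip shows up as the bit speed repeatedly dropping to ~0 then spiking
print("bit speed min/max (rad/s):", sol.y[3].min(), sol.y[3].max())
```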

18 pages, 1956 KiB  
Article
Two Novel Quantum Steganography Algorithms Based on LSB for Multichannel Floating-Point Quantum Representation of Digital Signals
by Meiyu Xu, Dayong Lu, Youlin Shang, Muhua Liu and Songtao Guo
Electronics 2025, 14(14), 2899; https://doi.org/10.3390/electronics14142899 - 20 Jul 2025
Abstract
Currently, quantum steganography schemes utilizing the least significant bit (LSB) approach are primarily optimized for fixed-point data processing, yet they encounter precision limitations when handling extended floating-point data structures owing to quantization error accumulation. To overcome these precision constraints in quantum data hiding, this study constructs the EPlsb-MFQS and MVlsb-MFQS quantum steganography algorithms based on the LSB approach. The multichannel floating-point quantum representation of digital signals (MFQS) model enhances information hiding by augmenting the number of available channels, thereby increasing the embedding capacity of the LSB approach. Firstly, we analyze the limitations of fixed-point signal steganography schemes and propose a conventional quantum steganography scheme based on the LSB approach for the MFQS model, achieving enhanced embedding capacity. The enhanced embedding efficiency of the EPlsb-MFQS algorithm primarily stems from the superposition probability adjustment of the LSB approach. Then, to prevent an unauthorized person from easily extracting secret messages, we utilize channel qubits and position qubits as novel carriers during quantum message encoding. The secret message is encoded into the transmitted signal's qubits using a particular modulo value rather than through sequential embedding, thereby enhancing security and reducing time complexity in the MVlsb-MFQS algorithm. However, this algorithm in the spatial domain has low robustness and security. Therefore, an improved method that transfers the steganographic process to the quantum Fourier transform domain is also proposed to further enhance security. This scheme establishes essential building blocks for quantum signal processing, paving the way for advanced quantum algorithms. Compared with available quantum steganography schemes, the proposed schemes achieve significant improvements in embedding efficiency and security. Finally, we delineate the quantum circuit design and operation process in detail.
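
For readers unfamiliar with the LSB approach itself, a classical (non-quantum) analogue is a few lines. This only illustrates the embedding idea; the MFQS quantum circuits are not reproduced here.

```python
import numpy as np

def embed(cover: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide message bits in the least significant bit of each cover sample."""
    stego = cover.copy()
    stego[: len(bits)] = (stego[: len(bits)] & ~np.uint8(1)) \
        | np.array(bits, dtype=np.uint8)
    return stego

def extract(stego: np.ndarray, n: int) -> list[int]:
    """Read the first n LSBs back out."""
    return [int(b) for b in stego[:n] & 1]

cover = np.random.default_rng(1).integers(0, 256, 64, dtype=np.uint8)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, message)
assert extract(stego, len(message)) == message
# each cover sample changes by at most 1, which is why LSB hiding is subtle
print("max sample change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))
```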

18 pages, 3451 KiB  
Article
Switched 32-Bit Fixed-Point Format for Laplacian-Distributed Data
by Bojan Denić, Zoran Perić, Milan Dinčić, Sofija Perić, Nikola Simić and Marko Anđelković
Information 2025, 16(7), 574; https://doi.org/10.3390/info16070574 - 4 Jul 2025
Abstract
The 32-bit floating-point (FP32) format has many useful applications, particularly in computing and neural network systems. The classic 32-bit fixed-point (FXP32) format often delivers lower quality of representation (i.e., precision), making it unsuitable for real deployment, despite offering faster computation and reduced computational cost, which positively impacts energy efficiency. In this paper, we propose a switched FXP32 format able to compete with or surpass the widely used FP32 format across a wide variance range. The format switches between possible values of its key parameters according to the variance level of the data, modeled with the Laplacian distribution. Precision analysis is performed using the signal-to-quantization noise ratio (SQNR) as a performance metric, introduced based on the analogy between digital formats and quantization. Theoretical SQNR results over a wide range of variance confirm the design objectives. Experimental and simulation results obtained using neural network weights further support the approach. The strong agreement between experiment, simulation, and theory indicates the efficiency of this proposal for encoding Laplacian data, as well as its potential applicability in neural networks.
(This article belongs to the Special Issue Signal Processing and Machine Learning, 2nd Edition)
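
A toy version of the switching idea, under assumed parameters (a sign bit plus a variable integer/fraction split of a 32-bit word; not the paper's exact format): for each variance level, choose the fraction-bit count that maximizes SQNR on Laplacian samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def sqnr_db(x: np.ndarray, frac_bits: int, total_bits: int = 32) -> float:
    """SQNR of fixed-point quantization with the given fraction-bit count."""
    step = 2.0 ** -frac_bits
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(x / step), lo, hi) * step   # round, saturate, rescale
    noise = np.mean((x - q) ** 2)
    return 10 * np.log10(np.mean(x ** 2) / noise)

for sigma in [1e-4, 1e-2, 1.0, 1e2]:
    # Laplace with scale b has variance 2*b^2, so scale = sigma/sqrt(2)
    x = rng.laplace(scale=sigma / np.sqrt(2), size=100_000)
    best = max(range(32), key=lambda f: sqnr_db(x, f))
    print(f"std {sigma:g}: best fraction bits = {best}, "
          f"SQNR = {sqnr_db(x, best):.1f} dB")
```

The point of the sketch is that the optimal split moves with the data variance, which is exactly why a single fixed FXP32 parameterization cannot cover a wide variance range.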

18 pages, 1067 KiB  
Article
Fine-Grained Fault Sensitivity Analysis of Vision Transformers Under Soft Errors
by Jiajun He, Yi Liu, Changqing Xu, Xinfang Liao and Yintang Yang
Electronics 2025, 14(12), 2418; https://doi.org/10.3390/electronics14122418 - 13 Jun 2025
Abstract
Over the past decade, deep neural networks (DNNs) have revolutionized the fields of computer vision (CV) and natural language processing (NLP), achieving unprecedented performance across a variety of tasks. The Vision Transformer (ViT) has emerged as a powerful alternative to convolutional neural networks (CNNs), leveraging self-attention mechanisms to capture long-range dependencies and global context. Owing to their flexible architecture and scalability, ViTs have been widely adopted in safety-critical applications such as autonomous driving, where system reliability is paramount. However, ViTs' reliability issues induced by soft errors in large-scale digital integrated circuits have generally been overlooked. In this paper, we present a fine-grained fault sensitivity analysis of ViT variants under bit-flip fault injections, focusing on different ViT models, transformer encoder layers, weight matrix types, and attention-head dimensions. Experimental results demonstrate that the first transformer encoder layer is susceptible to soft errors due to its essential role in local and global feature extraction. Moreover, in the middle and later layers, the Multi-Layer Perceptron (MLP) sub-blocks dominate the computational workload and significantly influence representation learning, making them critical points of vulnerability. These insights highlight key reliability bottlenecks in ViT architectures when deployed in error-prone environments.
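
A hedged illustration of the injection primitive (not the authors' framework): flipping one bit of an fp32 weight through its integer view, which shows why exponent bits (positions 23-30) dominate sensitivity.

```python
import numpy as np

def flip_bit_fp32(w: float, bit: int) -> float:
    """Flip one bit of a float32 value via its uint32 bit pattern."""
    arr = np.array([w], dtype=np.float32).view(np.uint32)
    arr ^= np.uint32(1 << bit)
    return float(arr.view(np.float32)[0])

w = 0.0312                        # a typical small weight magnitude (assumed)
for bit in [0, 10, 23, 30, 31]:   # mantissa LSB, mid-mantissa, exponent, sign
    print(f"bit {bit:2d}: {w} -> {flip_bit_fp32(w, bit)}")
```

Exponent flips can scale a weight by enormous factors, while mantissa-LSB flips are nearly invisible; this asymmetry is what fine-grained per-layer, per-matrix analyses quantify.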

23 pages, 1475 KiB  
Article
Learning Online MEMS Calibration with Time-Varying and Memory-Efficient Gaussian Neural Topologies
by Danilo Pietro Pau, Simone Tognocchi and Marco Marcon
Sensors 2025, 25(12), 3679; https://doi.org/10.3390/s25123679 - 12 Jun 2025
Abstract
This work devised an on-device learning approach to self-calibrate Micro-Electro-Mechanical Systems-based Inertial Measurement Units (MEMS-IMUs), integrating a digital signal processor (DSP), an accelerometer, and a gyroscope in the same package. The accelerometer and gyroscope stream their data in real time to the DSP, which runs artificial intelligence (AI) workloads. The real-time sensor data are subject to errors such as time-varying bias and thermal stress. The traditional calibration method based on a linear model can compensate for these drifts but does not work with nonlinear errors. The algorithm devised by this study to reduce such errors adopts Radial Basis Function Neural Networks (RBF-NNs) and does not rely on the classical backpropagation algorithm. Due to its low complexity, it is deployable within kibibytes of memory and runs in software on the DSP, performing interleaved in-sensor learning and inference by itself. This avoids using any off-package computing processor. The learning process is performed periodically to achieve consistent sensor recalibration over time. The devised solution was implemented in both a 32-bit floating-point data representation and a 16-bit quantized integer version. Both were deployed on the Intelligent Sensor Processing Unit (ISPU) integrated into the LSM6DSO16IS Inertial Measurement Unit (IMU), a programmable 5–10 MHz DSP on which the programmer can compile and execute AI models. It integrates 32 KiB of program RAM and 8 KiB of data RAM; no permanent memory is integrated into the package. The two (fp32 and int16) RBF-NN models occupied less than 21 KiB of the 40 available, working in real time and independently within the sensor package. The models compensated for between 46% and 95% of the accelerometer measurement error and between 32% and 88% of the gyroscope measurement error, respectively. Finally, the solution was also used for attitude estimation of a micro aerial vehicle (MAV), achieving an error of only 2.84°.
(This article belongs to the Special Issue Sensors and IoT Technologies for the Smart Industry)
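
A backprop-free RBF network in the spirit of the paper can be sketched with fixed Gaussian centers and a closed-form least-squares solve for the output weights. The drift model and data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_features(x: np.ndarray, centers: np.ndarray, width: float) -> np.ndarray:
    """Gaussian RBF design matrix: one column per fixed center."""
    d2 = (x[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2 * width ** 2))

temp = rng.uniform(-20, 60, 200)               # deg C operating range (assumed)
true_bias = 0.02 * temp + 0.0005 * temp ** 2   # toy nonlinear sensor drift
measured = true_bias + rng.normal(0, 0.05, temp.size)

centers = np.linspace(-20, 60, 12)             # fixed RBF centers
Phi = rbf_features(temp, centers, width=8.0)
w, *_ = np.linalg.lstsq(Phi, measured, rcond=None)  # "training" without backprop

residual = measured - Phi @ w                  # drift left after compensation
print(f"bias std before: {measured.std():.3f}, after: {residual.std():.3f}")
```

The closed-form solve is what makes such models attractive for periodic in-sensor relearning: it is a small, bounded-cost computation rather than an iterative gradient loop.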

21 pages, 739 KiB  
Article
Bit-Parallel Implementations of Neural Network Activation Functions in Onboard Computing Systems
by Mikhail Khachumov
Electronics 2025, 14(12), 2348; https://doi.org/10.3390/electronics14122348 - 8 Jun 2025
Abstract
This study generalizes and further develops methods for efficiently implementing artificial neural networks (ANNs) in the onboard computers of mobile robotic systems with limited resources, including unmanned aerial vehicles (UAVs). The neural networks are sped up by constructing a new unbounded activation function called "s-parabola", which is twice differentiable and reduces computational complexity compared with sigmoid-based functions. An additional contribution to this acceleration comes from activation functions based on bit-parallel computational circuits. A comprehensive review of modern publications in this subject area is provided. For autonomous problem-solving using ANNs directly on board an unmanned aerial vehicle, a trade-off between the speed and accuracy of the resulting solutions must be achieved. For this reason, we propose using fast bit-parallel circuits with limited digit capacity. The special representation and calculation of activation functions is based on transforming Jack Volder's CORDIC iterative algorithms for trigonometric functions and Georgy Pukhov's bit-analog calculations. Two statements are formulated, the proofs of which rest on the equivalence of the results obtained using the two approaches. We also provide theoretical and experimental estimates of the computational complexity of the algorithms achieved with different operand summation schemes.
(This article belongs to the Section Computer Science & Engineering)
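
The CORDIC rotation-mode iteration the paper builds on uses only shifts and adds, so it is easy to sketch in fixed point. This is the classic Volder algorithm, not the author's modified version.

```python
import math

ITER = 16
SCALE = 1 << 14                                    # Q2.14 fixed point
ANGLES = [round(math.atan(2.0 ** -i) * SCALE) for i in range(ITER)]
K = 1.0
for i in range(ITER):                              # cumulative CORDIC gain
    K /= math.sqrt(1 + 2.0 ** (-2 * i))
K_FIX = round(K * SCALE)

def cordic_sincos(theta: float):
    """(sin, cos) of theta (|theta| < pi/2) using only shifts and adds."""
    x, y, z = K_FIX, 0, round(theta * SCALE)
    for i in range(ITER):
        if z >= 0:                                 # rotate toward zero residual
            x, y, z = x - (y >> i), y + (x >> i), z - ANGLES[i]
        else:
            x, y, z = x + (y >> i), y - (x >> i), z + ANGLES[i]
    return y / SCALE, x / SCALE

s, c = cordic_sincos(0.6)
print(f"CORDIC: sin={s:.5f} cos={c:.5f} | math: {math.sin(0.6):.5f} {math.cos(0.6):.5f}")
```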

17 pages, 1535 KiB  
Article
Attention-Based Multi-Scale Graph Fusion Hashing for Fast Cross-Modality Image–Text Retrieval
by Jiayi Li and Gengshen Wu
Symmetry 2025, 17(6), 861; https://doi.org/10.3390/sym17060861 - 1 Jun 2025
Abstract
In recent years, hashing-based algorithms have garnered significant attention as vital technologies for cross-modal retrieval tasks. They leverage the inherent symmetry between different data modalities (e.g., text, images, or audio) to bridge their semantic gaps by embedding them into a unified representation space. This symmetry-preserving approach greatly enhances retrieval performance. However, challenges persist in mining and enriching multi-modal semantic feature information. Most current methods use pre-trained models for feature extraction, which limits information representation during hash code learning. Additionally, these methods map multi-modal data into a unified space, but this mapping is sensitive to variations in feature distribution, potentially degrading cross-modal retrieval performance. To tackle these challenges, this paper introduces a novel method called Attention-based Multi-scale Graph Fusion Hashing (AMGFH). The approach first enhances the semantic representation of image features through multi-scale learning via an image feature enhancement network. Graph convolutional networks (GCNs) are then employed to fuse multi-modal features, with a self-attention mechanism incorporated to enhance feature representation by dynamically adjusting the weights of less relevant features. By optimizing a combination of loss functions and addressing the distinct requirements of image and text features, the proposed model demonstrates superior performance across various dimensions. Extensive experiments on public datasets confirm its outstanding performance. For instance, AMGFH exceeds the most competitive baseline by 3% and 4.7% in mean average precision (MAP) for image-to-text and text-to-image retrieval, respectively, at 32 bits on the MS COCO dataset.
(This article belongs to the Section Computer)
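
The retrieval step shared by such hashing methods is simple to sketch (the learned AMGFH network itself is not reproduced): sign-binarize embeddings into 32-bit codes, then rank by Hamming distance via XOR and popcount. The paired embeddings below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_codes(embeddings: np.ndarray) -> np.ndarray:
    """Sign-binarize an (n, 32) float embedding into packed uint32 codes."""
    bits = (embeddings > 0).astype(np.uint32)
    return (bits << np.arange(32, dtype=np.uint32)).sum(axis=1, dtype=np.uint32)

def hamming(query: np.uint32, db: np.ndarray) -> np.ndarray:
    """Hamming distance of one code against a database of codes."""
    x = np.bitwise_xor(query, db)
    return np.array([bin(v).count("1") for v in x])

image_emb = rng.normal(size=(1000, 32))                         # stand-in features
text_emb = image_emb + rng.normal(scale=0.3, size=(1000, 32))   # paired modality

db = to_codes(image_emb)
query = to_codes(text_emb[:1])[0]                 # text query against image database
ranking = np.argsort(hamming(query, db))
print("true match ranked at position:", int(np.where(ranking == 0)[0][0]))
```

Compact binary codes make retrieval fast precisely because the distance computation reduces to XOR and bit counting, which is the practical motivation behind the 32-bit setting reported above.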

19 pages, 6004 KiB  
Article
Remote Sensing Image Change Detection Based on Dynamic Adaptive Context Attention
by Yong Xie, Yixuan Wang, Xin Wang, Yin Tan and Qin Qin
Symmetry 2025, 17(5), 793; https://doi.org/10.3390/sym17050793 - 20 May 2025
Abstract
Although some progress has been made in deep learning-based remote sensing image change detection, the complexity of scenes and the diversity of changes in remote sensing images lead to challenges related to background interference. For instance, remote sensing images typically contain numerous background regions, while the actual change regions constitute only a small proportion of the overall image. To address these challenges, this paper proposes a Dynamic Adaptive Context Attention Network (DACA-Net) based on an exchanging dual encoder–decoder (EDED) architecture. The core innovation of DACA-Net is a novel Dynamic Adaptive Context Attention Module (DACAM), which learns attention weights and automatically adjusts to the appropriate scale according to the features present in remote sensing images. By fusing multi-scale contextual features, DACAM effectively captures information about changes within these images. In addition, DACA-Net adopts an EDED architectural design in which the conventional convolutional modules of the EDED framework are replaced by DACAM modules. Unlike the original EDED architecture, DACAM modules are embedded after each encoder unit, enabling dynamic recalibration of T1/T2 features and cross-temporal information interaction. This design facilitates the capture of fine-grained change features at multiple scales. The architecture not only facilitates the extraction of discriminative features but also promotes a form of structural symmetry in the processing pipeline, contributing to more balanced and consistent feature representations. To validate the applicability of the proposed method in real-world scenarios, we constructed an Unmanned Aerial Vehicle (UAV) remote sensing dataset named the Guangxi Beihai Coast Nature Reserves (GBCNR) dataset. Extensive experiments on three public datasets and our GBCNR dataset demonstrate that DACA-Net achieves strong performance across various evaluation metrics. For example, it attains an F1 score (F1) of 72.04% and a precision (P) of 66.59% on the GBCNR dataset, representing improvements of 3.94% and 4.72% over state-of-the-art methods such as the semantic guidance and spatial localization network (SGSLN) and the bi-temporal image Transformer (BIT), respectively. These results verify that the proposed network significantly enhances the ability to detect critical change regions and improves generalization performance.
(This article belongs to the Section Computer)
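
A loose conceptual sketch of dynamic multi-scale attention fusion (the real DACAM is a trained module; the pooled scores and softmax weighting here only illustrate the mechanism):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(v: np.ndarray) -> np.ndarray:
    e = np.exp(v - v.max())
    return e / e.sum()

H = W = 32
scales = [rng.normal(size=(H, W)) for _ in range(3)]   # toy multi-scale features

# Per-scale score from global average pooling; in a trained module this
# would be a learned projection rather than a plain mean.
scores = np.array([f.mean() for f in scales])
weights = softmax(scores)                               # adaptive attention weights
fused = sum(w * f for w, f in zip(weights, scales))     # weighted multi-scale fusion

print("attention weights:", np.round(weights, 3), "| fused shape:", fused.shape)
```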

16 pages, 6927 KiB  
Article
Estimation of Missing DICOM Windowing Parameters in High-Dynamic-Range Radiographs Using Deep Learning
by Mateja Napravnik, Natali Bakotić, Franko Hržić, Damir Miletić and Ivan Štajduhar
Mathematics 2025, 13(10), 1596; https://doi.org/10.3390/math13101596 - 13 May 2025
Abstract
Digital Imaging and Communications in Medicine (DICOM) is a standard format for storing medical images, which are typically represented at higher bit depths (10–16 bits), enabling detailed representation but exceeding the display capabilities of standard displays and human visual perception. To address this, DICOM images are often accompanied by windowing parameters, analogous to tone mapping in High-Dynamic-Range image processing, which compress the intensity range to enhance diagnostically relevant regions. This study evaluates traditional histogram-based methods and explores the potential of deep learning for predicting window parameters in radiographs where such information is missing. A range of architectures, including MobileNetV3Small, VGG16, ResNet50, and ViT-B/16, were trained on high-bit-depth computed radiography images using various combinations of loss functions, including structural similarity (SSIM), perceptual loss (LPIPS), and an edge-preservation loss. Models were evaluated on multiple criteria, including pixel entropy preservation, the Hellinger distance between pixel value distributions, and peak signal-to-noise ratio after 8-bit conversion. The tested approaches were further validated on the publicly available GRAZPEDWRI-DX dataset. Although histogram-based methods showed satisfactory performance, especially scaling based on identifying peaks in the pixel value histogram, deep learning-based methods were better at selectively preserving clinically relevant image areas while removing background noise.
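
The transform the predicted parameters feed into is the standard linear windowing mapping. A simplified version (DICOM PS3.3 defines a slightly offset formula with center - 0.5 and width - 1) looks like this:

```python
import numpy as np

def apply_window(img: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map a high-bit-depth image to 8-bit using (window center, window width)."""
    lo = center - width / 2
    hi = center + width / 2
    out = (np.clip(img, lo, hi) - lo) / (hi - lo)   # normalize the windowed range
    return (out * 255).astype(np.uint8)

# stand-in for a 12-bit radiograph; real pixel data would come from the DICOM file
raw = np.random.default_rng(0).integers(0, 4096, (256, 256))
display = apply_window(raw, center=2048, width=1024)
print(display.dtype, display.min(), display.max())   # uint8, full display range
```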

26 pages, 442 KiB  
Article
Improving the Fast Fourier Transform for Space and Edge Computing Applications with an Efficient In-Place Method
by Christoforos Vasilakis, Alexandros Tsagkaropoulos, Ioannis Koutoulas and Dionysios Reisis
Software 2025, 4(2), 11; https://doi.org/10.3390/software4020011 - 12 May 2025
Abstract
Satellite and edge computing designers develop algorithms that restrict resource utilization and execution time. Among these design efforts, optimizing the Fast Fourier Transform (FFT), key to many tasks, has led mainly to in-place FFT-specific hardware accelerators. Aiming to improve FFT performance on processors and computing devices with limited resources, this paper enhances the efficiency of the radix-2 FFT by exploring the benefits of an in-place technique. First, we present the advantages of organizing the single memory bank of processors to store two FFT elements in each memory address, providing parallel load and store of each FFT data pair. Second, we optimize the floating-point (FP) and block floating-point (BFP) configurations to improve the FFT signal-to-noise ratio (SNR) performance and resource utilization. The resulting techniques halve the memory requirements and significantly improve the time performance of the prevailing BFP representation. Executions on inputs ranging from 1K to 16K FFT points, using 8-bit or 16-bit FP or BFP numbers, on the space-proven Atmel AVR32 and the Vision Processing Unit (VPU) Intel Movidius Myriad 2, the edge device Raspberry Pi Zero 2W, and a low-cost accelerator on a Xilinx Zynq 7000 Field Programmable Gate Array (FPGA) validate the method's performance improvement.
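
The baseline such optimizations start from is the textbook iterative in-place radix-2 FFT, sketched below (the paper's paired-element memory layout and BFP handling are not reproduced):

```python
import cmath

def fft_inplace(a: list[complex]) -> None:
    """Iterative radix-2 FFT, overwriting the input list in place."""
    n = len(a)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    # bit-reversal permutation so the butterflies can work in place
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # butterfly stages of doubling length
    length = 2
    while length <= n:
        w_len = cmath.exp(-2j * cmath.pi / length)
        for start in range(0, n, length):
            w = 1.0
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w
                a[k], a[k + length // 2] = u + v, u - v
                w *= w_len
        length <<= 1

a = [complex(v) for v in [1, 1, 1, 1, 0, 0, 0, 0]]
fft_inplace(a)
print([round(abs(v), 3) for v in a])
```

In-place operation is the property that matters on constrained devices: no second buffer of N complex values is ever allocated, which is the starting point for the paper's further halving of the memory footprint.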

21 pages, 5433 KiB  
Article
Efficient Implementation of Matrix-Based Image Processing Algorithms for IoT Applications
by Sorin Zoican and Roxana Zoican
Appl. Sci. 2025, 15(9), 4947; https://doi.org/10.3390/app15094947 - 29 Apr 2025
Abstract
This paper analyzes implementation approaches for matrix-based image processing algorithms. As an example, an image processing algorithm that provides both image compression and image denoising using random sample consensus and the discrete cosine transform is analyzed. Two implementations are illustrated: one using the Blackfin processor with 32-bit fixed-point representation, and a second using the convolutional neural network (CNN) accelerator in the MAX78000 microcontroller. The Blackfin implementation can be considered a classic approach, written in C and possible on virtually all existing microcontrollers; it is improved by using two cores. The proposed implementation with the CNN accelerator is a new approach that effectively uses the accelerator dedicated to convolutional neural networks, with better results than a classical implementation. The execution time of matrix-based image processing algorithms can be reduced by using the CNN accelerators already integrated into some modern microcontrollers for artificial intelligence workloads. The proposed method uses the CNN accelerator in a different way: not for artificial intelligence algorithms, but for matrix calculations, using CNN resources effectively while maintaining the accuracy of the calculations. The two implementations are compared and validated against MATLAB with 64-bit floating-point representation. The obtained performance is good both in terms of reconstructed image quality and execution time, and the performance differences between the infinite-precision and finite-precision implementations are small. The CNN accelerator implementation, based on matrix multiplication implemented using the CNN, performs better and is well suited to Internet of Things applications.
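
The key observation behind reusing a CNN accelerator for matrix work is that a matrix product is exactly a 1x1 convolution, with one operand's rows acting as pixels and its columns as input channels. The NumPy check below shows the equivalence; mapping it onto the MAX78000's accelerator involves that part's own toolchain and is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))   # 6 "pixels", 4 input channels
B = rng.normal(size=(4, 3))   # 4 input channels, 3 output filters

def conv1x1(x: np.ndarray, filters: np.ndarray) -> np.ndarray:
    """1x1 convolution over a (pixels, in_ch) map with (in_ch, out_ch) filters."""
    out = np.zeros((x.shape[0], filters.shape[1]))
    for p in range(x.shape[0]):             # each spatial position independently
        for o in range(filters.shape[1]):   # each output channel
            out[p, o] = np.dot(x[p], filters[:, o])
    return out

assert np.allclose(conv1x1(A, B), A @ B)
print("1x1 convolution reproduces the matrix product exactly")
```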

32 pages, 15292 KiB  
Article
Compression Ratio as Picture-Wise Just Noticeable Difference Predictor
by Nenad Stojanović, Boban Bondžulić, Vladimir Lukin, Dimitrije Bujaković, Sergii Kryvenko and Oleg Ieremeiev
Mathematics 2025, 13(9), 1445; https://doi.org/10.3390/math13091445 - 28 Apr 2025
Cited by 1
Abstract
This paper presents results on applying the compression ratio (CR) to predict the boundary between visually lossless and visually lossy compression, which is of particular importance in perceptual image compression. The prediction is carried out through the objective quality (peak signal-to-noise ratio, PSNR) and the image representation in bits per pixel (bpp). In this analysis, the results of subjective tests from four publicly available databases are used as ground truth for comparison with the results obtained using the compression ratio as a predictor. Through a wide analysis of color and grayscale infrared JPEG and Better Portable Graphics (BPG) compressed images, values are proposed for the parameters that control these two types of compression and for which CR is calculated. It is shown that PSNR and bpp predictions can be significantly improved by using CR calculated with these proposed values, regardless of the type of compression and whether color or infrared images are used. In this paper, CR is used for the first time to predict the boundary between visually lossless and visually lossy compression for images from the infrared part of the electromagnetic spectrum, as well as for BPG-compressed content. The paper indicates the great potential of CR, so that future research can use it in joint prediction based on several features or through the CR curve obtained for different values of the parameters controlling the compression.
(This article belongs to the Section E1: Mathematics and Computer Science)
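
The predictor's raw ingredients are straightforward to compute. The sketch below is illustrative only, using Pillow and a synthetic image rather than the paper's databases or its proposed control-parameter values: for a few JPEG quality settings it derives CR, bpp, and PSNR.

```python
import io
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
x = np.linspace(0, 255, 256)
# smooth synthetic grayscale image with mild noise, as a compressible stand-in
img = (np.add.outer(x, x) / 2 + rng.normal(0, 4, (256, 256)))
img = img.clip(0, 255).astype(np.uint8)

for quality in (30, 60, 90):
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    decoded = np.asarray(Image.open(buf), dtype=np.float64)
    cr = img.nbytes / buf.getbuffer().nbytes          # compression ratio
    mse = np.mean((img - decoded) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse)
    print(f"q={quality}: CR={cr:.1f}, bpp={8 / cr:.2f}, PSNR={psnr:.1f} dB")
```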
