Advances in Digital Signal and Image Processing, Techniques, and Computations with Multidisciplinary Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Circuit and Signal Processing".

Deadline for manuscript submissions: closed (31 December 2024) | Viewed by 25363

Special Issue Editor


Guest Editor
Centre for Life-Cycle Engineering and Management, School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield MK43 0AL, UK
Interests: signals and systems; digital filter design; digital image processing; medical image processing; pattern recognition

Special Issue Information

Dear Colleagues,

Image processing is a rapidly evolving field applied across many areas of research. Digital signal processing, a central objective in many scientific domains, can be achieved through image processing approaches, encompassing the analysis, classification, and manipulation of signals using operations such as filtering, compression, feature extraction, enhancement, and spectral analysis.

This Special Issue aims to highlight innovative ideas and algorithms for treating different types of discrete signals using image processing algorithms.

We welcome original contributions, including research papers and comprehensive reviews, addressing the impact and relevance of electronic signal processing using image processing techniques.

We welcome submissions detailing new theories and evolutionary methods for digital signal processing using image processing approaches. A non-exhaustive list of topics is as follows:

  • Digital signal processing using machine learning;
  • Deep learning for digital signal processing;
  • Image restoration and noise reduction;
  • Image classification, segmentation, and clustering;
  • Object detection and tracking;
  • Medical imaging for EEG/ECG signal processing;
  • Feature selection, extraction, and learning;
  • Digital signal detection and recognition using image processing techniques;
  • Motion analysis of digital signals.

Dr. Honarvar Shakibaei Asli
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, please go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • digital signal processing
  • image processing
  • machine learning
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (15 papers)


Research


22 pages, 4409 KiB  
Article
A Method for Reducing White Noise in Partial Discharge Signals of Underground Power Cables
by Jifang Li and Qilong Zhang
Electronics 2025, 14(4), 780; https://doi.org/10.3390/electronics14040780 - 17 Feb 2025
Viewed by 398
Abstract
Online partial discharge (PD) detection for power cables is one reliable means of monitoring their health. However, strong interference by white noise poses a major challenge in the process of collecting information on partial discharge signals. To solve the problems that sample-entropy-based wavelet threshold estimation falls into local optima and that wavelet noise reduction struggles to preserve detailed information, we propose a partial discharge signal noise reduction method based on a combination of improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) and discrete wavelet transform (DWT) with multiscale sample entropy (MSE). Firstly, the ICEEMDAN method was used to decompose the original sequence into multiple intrinsic mode components. The intrinsic mode function (IMF) components were grouped using the mutual information method, and high-frequency noise was eliminated using the kurtosis criterion. Next, an MSE model was established to optimize the wavelet threshold, and wavelet noise reduction was applied to the effective component. The ICEEMDAN-MSE-DWT method can retain effective information while achieving complete denoising, which alleviates the problem of information loss that occurs after denoising using the wavelet method. Lastly, as shown by our simulation and experimental results, the proposed method can effectively realize noise reduction for power cable partial discharge signals. Full article
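The multiscale sample entropy (MSE) measure that drives the threshold optimization can be sketched in a few lines of NumPy. This is an illustrative implementation of the standard MSE definition, not the authors' code; the template length m = 2 and tolerance r = 0.2·std are common defaults rather than values from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series (Chebyshev distance, tolerance r * std)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(length):
        # all templates of the given length, compared pairwise
        tpl = np.array([x[i:i + length] for i in range(len(x) - length)])
        dist = np.abs(tpl[:, None, :] - tpl[None, :, :]).max(axis=2)
        return (dist <= tol).sum() - len(tpl)   # drop self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, scale):
    """Non-overlapping window averages used to build the multiple scales."""
    n = len(x) // scale
    return np.asarray(x[:n * scale], dtype=float).reshape(n, scale).mean(axis=1)

def multiscale_sample_entropy(x, scales=(1, 2, 3), m=2, r=0.2):
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```

A regular signal (e.g., a sinusoid) yields a much lower sample entropy than white noise, which is what makes the measure usable for separating signal-dominated from noise-dominated wavelet coefficients.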

14 pages, 3017 KiB  
Article
Investigation of Blind Deconvolution Method with Total Variation Regularization in Cardiac Cine Magnetic Resonance Imaging
by Kyuseok Kim and Youngjin Lee
Electronics 2025, 14(4), 743; https://doi.org/10.3390/electronics14040743 - 13 Feb 2025
Viewed by 542
Abstract
Various studies have been conducted to reduce the blurring caused by movement in cine magnetic resonance imaging (MRI) of the heart. This study proposed a blind deconvolution method using a total variation regularization algorithm to remove blurring in cardiac cine magnetic resonance (MR) images. The MR data were acquired using a rat cardiac cine sequence in an open format. We investigated a blind deconvolution method with total variation regularization, incorporating a 3-dimensional point-spread function, on cardiac cine MRI. The gradient of magnitude (GM) and perceptual sharpness index (PSI) were used to evaluate the usefulness of the proposed deblurring method. We confirmed that the proposed method reduces temporal blur more efficiently than the generalized variation-based deblurring algorithm. In particular, the GM and PSI values of the cardiac cine MR image corrected using the proposed method were improved by approximately 7.59 and 4.18 times, respectively, compared with the degraded image. We achieved improved image quality by validating a blind deconvolution method using a total variation regularization algorithm on the cardiac cine MR images of small animals. Full article
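The total variation regularizer at the core of this deconvolution can be illustrated in isolation with a plain gradient-descent TV denoising step. This is a smoothed-TV sketch in NumPy, not the authors' 3-D blind deconvolution; the step size, weight, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def total_variation(img, eps=1e-8):
    """Isotropic total variation of a 2-D image."""
    dx = np.diff(img, axis=1)[:-1, :]
    dy = np.diff(img, axis=0)[:, :-1]
    return np.sqrt(dx ** 2 + dy ** 2 + eps).sum()

def tv_denoise(noisy, lam=0.4, step=0.2, iters=80, eps=1e-6):
    """Minimize 0.5 * ||x - y||^2 + lam * TV(x) by gradient descent on a
    smoothed TV term; the TV gradient is minus the divergence of the
    unit gradient field."""
    x = noisy.astype(float).copy()
    for _ in range(iters):
        gy, gx = np.gradient(x)
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        curv = np.gradient(gx / mag, axis=1) + np.gradient(gy / mag, axis=0)
        x -= step * ((x - noisy) - lam * curv)
    return x
```

On a noisy step-edge image this suppresses oscillations while keeping the edge, which is the behavior that makes TV attractive as a deblurring prior.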

20 pages, 10253 KiB  
Article
FPGA Implementation of Image Encryption by Adopting New Shimizu–Morioka System-Based Chaos Synchronization
by Cheng-Hsiung Yang, Jian-De Lee, Lap-Mou Tam, Shih-Yu Li and Shyi-Chyi Cheng
Electronics 2025, 14(4), 740; https://doi.org/10.3390/electronics14040740 - 13 Feb 2025
Cited by 1 | Viewed by 413
Abstract
This study presents an innovative approach utilizing the new Shimizu–Morioka chaotic system. By integrating adaptive backstepping control with GYC partial region stability theory, we successfully achieve synchronization of a slave system with the proposed Shimizu–Morioka chaotic system. The architecture, encompassing the chaotic master system, synchronized slave system, adaptive backstepping controllers, and parameter update laws, has been implemented on an FPGA platform. Comparative analysis demonstrates that the synchronization convergence times (e1, e2, e3, and e4) are significantly reduced compared to conventional adaptive backstepping control methods, exhibiting speed enhancements of approximately 3.42, 3.55, 5.89, and 9.23 times for e1, e2, e3, and e4, respectively. Furthermore, the synchronization results obtained from continuous-time, discrete-time systems, and FPGA implementations exhibit consistent outcomes, validating the effectiveness of the proposed model and controller. Leveraging this validated synchronization framework, chaotic synchronization and secure image encryption are successfully implemented on the FPGA platform. The chaotic signal circuits are meticulously designed and integrated into the FPGA to facilitate a robust image encryption algorithm. In this system, digital signals generated by the synchronized slave chaotic system are utilized for image decryption, while the master chaotic system’s digital signals are employed for encryption. This dual-system architecture highlights the efficacy of the chaotic synchronization method based on the novel Shimizu–Morioka system for practical applications in secure communication. Full article
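The master-slave synchronization idea can be prototyped numerically. The sketch below uses the classical Shimizu-Morioka equations (x' = y, y' = (1 - z)x - λy, z' = -αz + x²) rather than the paper's new variant, and a simple proportional coupling instead of the adaptive backstepping controller; the parameters and gain are assumptions for illustration only.

```python
import numpy as np

def shimizu_morioka(state, lam=0.75, alpha=0.45):
    """Classical Shimizu-Morioka vector field (the paper uses a new variant)."""
    x, y, z = state
    return np.array([y, (1.0 - z) * x - lam * y, -alpha * z + x ** 2])

def synchronize(steps=8000, dt=0.005, k=8.0):
    """Forward-Euler simulation of a master system driving a slave through
    proportional coupling u = -k * (slave - master); returns the error norms.
    An FPGA realization would discretize the flow in a similar fixed-step way."""
    master = np.array([0.1, 0.2, 0.3])
    slave = np.array([1.0, -1.0, 2.0])
    errs = []
    for _ in range(steps):
        e = slave - master
        errs.append(np.linalg.norm(e))
        master = master + dt * shimizu_morioka(master)
        slave = slave + dt * (shimizu_morioka(slave) - k * e)
    return errs
```

With a sufficiently large gain the synchronization error contracts toward zero, after which the slave's state can be used as the decryption keystream, mirroring the dual-system encryption/decryption architecture described above.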

19 pages, 11770 KiB  
Article
PDS-YOLO: A Real-Time Detection Algorithm for Pipeline Defect Detection
by Ke Zhang, Longxiao Qin and Liming Zhu
Electronics 2025, 14(1), 208; https://doi.org/10.3390/electronics14010208 - 6 Jan 2025
Viewed by 926
Abstract
Regular inspection of urban drainage pipes can effectively maintain the reliable operation of the drainage system and the safety of residents. To address the shortcomings of the CCTV inspection method used in the drainage pipe defect detection task, namely the inefficiency of manual inspection and the possibility of errors and omissions, a PDS-YOLO algorithm that can be deployed in a pipe defect detection system is proposed. First, the C2f-PCN module was introduced to reduce model complexity and the size of the model weight file. Second, to enhance the model's capability in detecting pipe defect edges, we incorporate the SPDSC structure within the neck network. By introducing a hybrid local channel MLCA attention mechanism and a Wise-IoU loss function based on a dynamic focusing mechanism, the model improves the precision of segmentation without adding extra computational cost and enhances the extraction and expression of pipeline defect features. The experimental outcomes indicate that the mAP, F1-score, precision, and recall of the PDS-YOLO algorithm are improved by 3.4%, 4%, 4.8%, and 4.0%, respectively, compared to the original algorithm. Additionally, the model reduces the parameter count and GFLOPs by 8.6% and 12.3%, respectively. It saves computational resources while improving detection accuracy, providing a more lightweight model for defect detection systems with tight computing budgets. Finally, the PDS-YOLOv8n model is deployed to the NVIDIA Jetson Nano, the central console of the mobile embedded system, and the weight files are optimized using TensorRT. The test results show that the model's inference speed on the embedded device improves from 5.4 FPS to 19.3 FPS, which can basically satisfy the requirements of real-time pipeline defect detection assignments in mobile scenarios. Full article
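The Wise-IoU loss mentioned above builds on the standard intersection-over-union measure between predicted and ground-truth boxes. A minimal IoU for axis-aligned (x1, y1, x2, y2) boxes is shown below; this is the plain IoU only, not the paper's Wise-IoU with its dynamic focusing mechanism.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```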

10 pages, 8466 KiB  
Article
Investigation of a Robust Blind Deconvolution Algorithm Using Extracted Structures in Light Microscopy Images of Salivary Glands: A Pilot Study
by Kyuseok Kim, Jae-Young Kim and Ji-Youn Kim
Electronics 2024, 13(24), 4940; https://doi.org/10.3390/electronics13244940 - 14 Dec 2024
Viewed by 766
Abstract
Although light microscopy (LM) images are widely used to observe various bodily tissues, including salivary glands, reaching a satisfactory spatial resolution in the final images remains a major challenge. The objective of this study was to model a robust blind deconvolution algorithm using the extracted structure and analyze its applicability to LM images. Given LM images of the salivary glands, the proposed robust blind deconvolution method performs non-blind deconvolution after estimating the structural map and kernel of each image. To demonstrate the usefulness of the proposed algorithm for LM images, the perceptual sharpness index (PSI), Blanchet’s sharpness index (BSI), and natural image quality evaluator (NIQE) were used as evaluation metrics. We demonstrated that when the proposed algorithm was applied to salivary gland LM images, the PSI and BSI were improved by 7.95% and 7.44%, respectively, compared with those of the conventional TV-based algorithm. When the proposed algorithm was applied to an LM image, we confirmed that the NIQE value was similar to that of a low-resolution image. In conclusion, the proposed robust blind deconvolution algorithm is highly applicable to salivary gland LM images, and we expect that further applications will become possible. Full article

13 pages, 3195 KiB  
Article
Application and Optimization of a Fast Non-Local Means Noise Reduction Algorithm in Pediatric Abdominal Virtual Monoenergetic Images
by Hajin Kim, Juho Park, Jina Shim and Youngjin Lee
Electronics 2024, 13(23), 4684; https://doi.org/10.3390/electronics13234684 - 27 Nov 2024
Viewed by 768
Abstract
In this study, we applied and optimized a fast non-local means (FNLM) algorithm to reduce noise in pediatric abdominal virtual monoenergetic images (VMIs). To analyze various contrast agent concentrations, we produced contrast agent concentration samples (20, 40, 60, 80, and 100%) and inserted them into a phantom model of a one-year-old pediatric patient. Single-energy computed tomography (SECT) and dual-energy computed tomography (DECT) images were acquired from the phantom, and 40 kilo-electron-volt (keV) VMI was acquired based on the DECT images. For the 40 keV VMI, the smoothing factor of the FNLM algorithm was applied from 0.01 to 1.00 in increments of 0.01. We derived the optimized value of the FNLM algorithm based on quantitative evaluation and performed a comparative assessment with SECT, DECT, and a total variation (TV) algorithm. As a result of the analysis, we found that the average contrast to noise ratio (CNR) and coefficient of variation (COV) of each concentration were most improved at a smoothing factor of 0.02. Based on these results, we derived the optimized smoothing factor value of 0.02. Comparative evaluation shows that the optimized FNLM algorithm improves the CNR and COV results by approximately 3.14 and 2.45 times, respectively, compared with the DECT image, and the normalized noise power spectrum result shows a 101 mm2 improvement. The main contribution of this study is to demonstrate the effectiveness of an optimized FNLM algorithm in reducing noise in pediatric abdominal VMI, allowing high-quality images to be acquired while reducing contrast dose. This advancement has significant implications for minimizing the risk of contrast-induced toxicity, especially in pediatric patients. Our approach addresses the problem of limited datasets in pediatric imaging by providing a computationally efficient noise reduction technique and highlights the clinical applicability of the FNLM algorithm. In addition, effective noise reduction enables high-contrast imaging with minimal radiation and contrast exposure, which is expected to be suitable for repeat CT examinations of pediatric liver cancer patients and other abdominal diseases. Full article
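The evaluation above relies on the contrast-to-noise ratio and coefficient of variation. The common definitions (CNR = |μ_roi - μ_bg| / σ_bg, COV = σ / μ) are sketched below; the paper's exact region-of-interest conventions may differ.

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: ROI/background mean difference over background noise."""
    return abs(np.mean(roi) - np.mean(background)) / np.std(background)

def cov(region):
    """Coefficient of variation: relative dispersion of intensities in a region."""
    return np.std(region) / np.mean(region)
```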

23 pages, 12210 KiB  
Article
Mixed Reality-Based Concrete Crack Detection and Skeleton Extraction Using Deep Learning and Image Processing
by Davood Shojaei, Peyman Jafary and Zezheng Zhang
Electronics 2024, 13(22), 4426; https://doi.org/10.3390/electronics13224426 - 12 Nov 2024
Viewed by 1399
Abstract
Advancements in image processing and deep learning offer considerable opportunities for automated defect assessment in civil structures. However, these systems cannot work interactively with human inspectors. Mixed reality (MR) can be adopted to address this by involving inspectors in various stages of the assessment process. This paper integrates You Only Look Once (YOLO) v5n and YOLO v5m with the Canny algorithm for real-time concrete crack detection and skeleton extraction with a Microsoft HoloLens 2 MR device. The YOLO v5n demonstrates superior mean average precision (mAP@0.5) and speed, while YOLO v5m achieves the highest mAP@0.5:0.95 among the other YOLO v5 structures. The Canny algorithm also outperforms the Sobel and Prewitt edge detectors with the highest F1 score. The developed MR-based system could not only be employed for real-time defect assessment but also be utilized for the automatic recording of the location and other specifications of the cracks for further analysis and future re-inspections. Full article
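The edge detectors above are ranked by F1 score. A pixel-wise F1 between binary edge maps can be computed as below; note that edge-detection benchmarks often allow a small localization tolerance band, which this strict sketch omits.

```python
import numpy as np

def edge_f1(pred, truth):
    """Pixel-wise F1 score between two binary edge maps (no tolerance band)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / truth.sum() if truth.sum() else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```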

12 pages, 5283 KiB  
Article
Polarization-Based Two-Stage Image Dehazing in a Low-Light Environment
by Xin Zhang, Xia Wang, Changda Yan, Gangcheng Jiao and Huiyang He
Electronics 2024, 13(12), 2269; https://doi.org/10.3390/electronics13122269 - 10 Jun 2024
Cited by 1 | Viewed by 1322
Abstract
Fog, as a common weather condition, severely affects the visual quality of images. Polarization-based dehazing techniques can effectively produce clear results by utilizing the atmospheric polarization transmission model. However, current polarization-based dehazing methods are only suitable for scenes with strong illumination, such as daytime scenes, and cannot be applied to low-light scenes. Due to insufficient illumination at night and the difference between its polarization characteristics and those of sunlight, polarization images captured in a low-light environment can suffer from loss of polarization and intensity information. Therefore, this paper proposes a two-stage low-light image dehazing method based on polarization. We first construct a polarization-based low-light enhancement module to remove noise interference in polarization images and improve image brightness. Then, we design a low-light polarization dehazing module, which combines the polarization characteristics of the scene and objects to remove fog, thereby restoring the intensity and polarization information of the scene and improving image contrast. For network training, we generate a simulation dataset for low-light polarization dehazing. We also collect a low-light polarization hazy dataset to test the performance of our method. Experimental results indicate that our proposed method can achieve the best dehazing effect. Full article
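Polarization dehazing builds on the Stokes parameters estimated from images taken behind polarizers at different angles. The standard relations for 0°, 45°, and 90° acquisitions are sketched below; the paper's pipeline is learned on top of such quantities, and this only illustrates the underlying physics.

```python
import numpy as np

def stokes_from_angles(i0, i45, i90):
    """Linear Stokes parameters from intensities behind 0/45/90-degree polarizers."""
    s0 = i0 + i90            # total intensity
    s1 = i0 - i90            # horizontal/vertical preference
    s2 = 2.0 * i45 - s0      # +45/-45 preference
    return s0, s1, s2

def degree_of_linear_polarization(s0, s1, s2):
    """DoLP in [0, 1]; hazy airlight is typically partially polarized."""
    return np.sqrt(s1 ** 2 + s2 ** 2) / s0
```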

22 pages, 624 KiB  
Article
Improving Detection of DeepFakes through Facial Region Analysis in Images
by Fatimah Alanazi, Gary Ushaw and Graham Morgan
Electronics 2024, 13(1), 126; https://doi.org/10.3390/electronics13010126 - 28 Dec 2023
Cited by 1 | Viewed by 5910
Abstract
In the evolving landscape of digital media, the discipline of media forensics, which encompasses the critical examination and authentication of digital images, videos, and audio recordings, has emerged as an area of paramount importance. This heightened significance is predominantly attributed to the burgeoning concerns surrounding the proliferation of DeepFakes, which are highly realistic and manipulated media content, often created using advanced artificial intelligence techniques. Such developments necessitate a profound understanding and advancement in media forensics to ensure the integrity of digital media in various domains. Current research endeavours are primarily directed towards addressing a common challenge observed in DeepFake datasets, which pertains to the issue of overfitting. Many suggested remedies centre around the application of data augmentation methods, with a frequently adopted strategy being the incorporation of random erasure or cutout. This method entails the random removal of sections from an image to introduce diversity and mitigate overfitting. Generating disparities between the altered and unaltered images serves to inhibit the model from excessively adapting itself to individual samples, thus leading to more favourable results. Nonetheless, the stochastic nature of this approach may inadvertently obscure facial regions that harbour vital information necessary for DeepFake detection. Due to the lack of guidelines on specific regions for cutout, most studies use a randomised approach. However, in recent research, face landmarks have been integrated to designate specific facial areas for removal, even though the selection remains somewhat random. Therefore, there is a need to acquire a more comprehensive insight into facial features and identify which regions hold more crucial data for the identification of DeepFakes. In this study, the investigation delves into the data conveyed by various facial components through the excision of distinct facial regions during the training of the model. The goal is to offer valuable insights to enhance forthcoming face removal techniques within DeepFake datasets, fostering a deeper comprehension among researchers and advancing the realm of DeepFake detection. Our study presents a novel method that uses face cutout techniques to improve understanding of key facial features crucial in DeepFake detection. Moreover, the method combats overfitting in DeepFake datasets by generating diverse images with these techniques, thereby enhancing model robustness. The developed methodology is validated against publicly available datasets like FF++ and Celeb-DFv2. Both face cutout groups surpassed the Baseline, indicating cutouts improve DeepFake detection. Face Cutout Group 2 excelled, with 91% accuracy on Celeb-DF and 86% on the compound dataset, suggesting external facial features' significance in detection. The study found that eyes are most impactful and the nose is least in model performance. Future research could explore the augmentation policy's effect on video-based DeepFake detection. Full article
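The face cutout technique studied here is a guided variant of random erasing. A minimal random cutout on an image array looks like the sketch below; here the region is chosen at random, whereas the paper directs it at specific landmark-defined facial regions.

```python
import numpy as np

def cutout(image, size, rng=None, fill=0):
    """Erase a size x size square at a random location (random-erasing style)."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    out = image.copy()                      # keep the original image intact
    out[top:top + size, left:left + size] = fill
    return out
```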

33 pages, 125054 KiB  
Article
Seismic Image Identification and Detection Based on Tchebichef Moment Invariant
by Andong Lu and Barmak Honarvar Shakibaei Asli
Electronics 2023, 12(17), 3692; https://doi.org/10.3390/electronics12173692 - 31 Aug 2023
Cited by 6 | Viewed by 2594
Abstract
The research focuses on the analysis of seismic data, specifically targeting the detection, edge segmentation, and classification of seismic images. These processes are fundamental in image processing and are crucial in understanding the stratigraphic structure and identifying oil and natural gas resources. However, there is a lack of sufficient resources in the field of seismic image detection, and interpreting 2D seismic image slices based on 3D seismic data sets can be challenging. In this research, image segmentation involves image preprocessing and the use of a U-net network. Preprocessing techniques, such as Gaussian filter and anisotropic diffusion, are employed to reduce blur and noise in seismic images. The U-net network, based on the Canny descriptor, is used for segmentation. For image classification, the ResNet-50 and Inception-v3 models are applied to classify different types of seismic images. In image detection, Tchebichef invariants are computed using the Tchebichef polynomials' recurrence relation. These invariants are then used in an optimized multi-class SVM network for detecting and classifying various types of seismic images. The promising results of the SVM model based on Tchebichef invariants suggest its potential to replace Hu moment invariants (HMIs) and Zernike moment invariants (ZMIs) for seismic image detection. This approach offers a more efficient and dependable solution for seismic image analysis in the future. Full article
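The Hu moment invariants (HMIs) that the Tchebichef invariants are compared against come from normalized central moments. The first two Hu invariants and a 90-degree rotation-invariance check are sketched below; this is the classical HMI baseline only, not the paper's Tchebichef construction.

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a grayscale image about its intensity centroid."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return ((x - xc) ** p * (y - yc) ** q * img).sum()

def hu_first_two(img):
    """First two Hu invariants from normalized central moments eta_pq."""
    img = img.astype(float)
    mu00 = central_moment(img, 0, 0)
    def eta(p, q):
        return central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```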

17 pages, 1908 KiB  
Article
Design and Implementation of an Orbitrap Mass Spectrometer Data Acquisition System for Atmospheric Molecule Identification
by Wei Wang and Yongping Li
Electronics 2023, 12(11), 2387; https://doi.org/10.3390/electronics12112387 - 25 May 2023
Viewed by 2288
Abstract
Orbitrap mass spectrometers have gained widespread popularity in ground-based environmental component analysis. However, their application in atmospheric exploration for space missions remains limited. Existing data acquisition solutions for Orbitrap instruments primarily rely on commercial systems and computer-based spectrum analysis. In this study, we developed a self-designed data acquisition solution specifically tailored for atmospheric molecule detection. The implementation involved directly integrating a spectrum analysis algorithm onto a field programmable gate array (FPGA), enabling miniaturization, real-time performance, and meeting the desired requirements. The system comprises signal conditioning circuits, analog-to-digital conversion (ADC) circuits, programmable logic circuits, and related software. These components facilitate real-time spectrum analysis and signal processing on hardware, enabling high-speed acquisition and analysis of signals generated by the Orbitrap. Experimental results demonstrate that the system can sample front-end analog signals at a rate of 25 MHz and differentiate signal spectra with an error margin of less than 7 kHz. This establishes the viability of the designed data acquisition system for atmospheric mass spectrometry analysis. Full article
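The core of the spectrum analysis (sampling at 25 MHz and resolving spectral lines to within 7 kHz) can be prototyped as an FFT peak search. The tone frequency and record length below are illustrative assumptions, not values from the firmware; note that an 8192-point FFT at 25 MHz gives a bin width of about 3 kHz, inside the stated 7 kHz error margin.

```python
import numpy as np

def peak_frequency(signal, fs):
    """Frequency of the strongest spectral line via an FFT magnitude peak search."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0                      # ignore the DC bin
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs = 25e6                                   # 25 MHz sampling, as in the paper
n = 8192                                    # bin width fs/n ~ 3.05 kHz
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 1.234e6 * t)      # hypothetical image-current tone
```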

25 pages, 9122 KiB  
Article
Four-Term Recurrence for Fast Krawtchouk Moments Using Clenshaw Algorithm
by Barmak Honarvar Shakibaei Asli and Maryam Horri Rezaei
Electronics 2023, 12(8), 1834; https://doi.org/10.3390/electronics12081834 - 12 Apr 2023
Cited by 4 | Viewed by 1798
Abstract
Krawtchouk polynomials (KPs) are discrete orthogonal polynomials associated with the Gauss hypergeometric functions. These polynomials and their generated moments in 1D or 2D formats play an important role in information and coding theories, signal and image processing tools, image watermarking, and pattern recognition. In this paper, we introduce a new four-term recurrence relation to compute KPs compared to their ordinary recursions (three-term) and analyse the proposed algorithm speed. Moreover, we use Clenshaw’s technique to accelerate the computation procedure of the Krawtchouk moments (KMs) using a fast digital filter structure to generate a lattice network for KPs calculation. The proposed method confirms the stability of KPs computation for higher orders and their signal reconstruction capabilities as well. The results show that the KMs calculation using the proposed combined method based on a four-term recursion and Clenshaw’s technique is reliable and fast compared to the existing recursions and fast KMs algorithms. Full article
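Clenshaw's technique evaluates a series in any basis that obeys a three-term recurrence without forming the basis polynomials explicitly. The sketch below applies it to Chebyshev polynomials (recurrence T_{k+1} = 2x T_k - T_{k-1}) as a stand-in; the paper's Krawtchouk recurrence coefficients are not reproduced here.

```python
import numpy as np

def clenshaw_chebyshev(coeffs, x):
    """Evaluate sum_k coeffs[k] * T_k(x) by Clenshaw's backward recurrence:
    b_k = c_k + 2x * b_{k+1} - b_{k+2}, result = c_0 + x * b_1 - b_2."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2 * x * b1 - b2 + c, b1
    return coeffs[0] + x * b1 - b2
```

The same backward pass, with the Chebyshev coefficients replaced by those of the Krawtchouk recurrence, is what yields the digital-filter (lattice) structure the paper exploits for fast moment computation.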

15 pages, 934 KiB  
Article
A Semi-Fragile, Inner-Outer Block-Based Watermarking Method Using Scrambling and Frequency Domain Algorithms
by Ahmet Senol, Ersin Elbasi, Ahmet E. Topcu and Nour Mostafa
Electronics 2023, 12(4), 1065; https://doi.org/10.3390/electronics12041065 - 20 Feb 2023
Cited by 7 | Viewed by 1842
Abstract
Image watermarking is most often used to prove that an image belongs to someone and to verify that the image is the same as was originally produced. Watermarking used to detect originality and tampering is known as authentication watermarking. In this paper, a blind semi-fragile authentication watermarking method is introduced. Although the main concern is authenticating the image, watermarking for proving ownership is additionally implemented. The method treats the image as two parts, an inner part and an outer part, each divided into non-overlapping blocks of different sizes. The outer blocks cover a greater area than the inner blocks, so their watermark-holding capacity is greater, providing enough robustness for semi-fragility. The method is semi-fragile, and the watermarked image is authenticated even after JPEG compression to 75% quality. The embedded watermark also survives innocent image operations such as intensity adjustment, histogram equalization, and gamma correction. Semi-fragile and selectively fragile authentication is valuable and in high demand precisely because it survives these innocent operations while detecting ill-intentioned tampering. In this work, we embed a binary watermark into the inner and outer parts of images using a scrambling algorithm, the discrete wavelet transform (DWT), and the discrete cosine transform (DCT) applied to the blocks. The proposed method yields high image quality after watermarking, with a PSNR of 40.577 dB, and high quality is also retained after JPEG compression. The embedding process provides acceptable image quality after tamper attacks, including JPEG compression, Gaussian noise, average filtering, and scaling, with PSNR values greater than 29 dB. Experimental results show that the proposed semi-fragile watermarking algorithm is more robust, secure, and resistant than other algorithms in the literature.
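The paper's full pipeline (scrambling plus DWT and DCT over inner and outer blocks) is not reproduced here, but a common primitive behind such block-based schemes can be sketched: embedding one bit per 8×8 block by enforcing an order between two mid-band DCT coefficients. The coefficient positions and margin below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Mid-band coefficient pair used for embedding (positions are an
# assumption for illustration, not the pair used in the paper).
P1, P2 = (3, 1), (1, 3)
MARGIN = 8.0  # separation enforced between the pair; larger = more robust

def embed_bit(block, bit):
    """Embed one bit into an 8x8 pixel block by ordering two mid-band
    DCT coefficients, a typical semi-fragile embedding primitive."""
    c = dctn(block.astype(float), norm='ortho')
    a, b = c[P1], c[P2]
    if bit:   # bit 1: force c[P1] >= c[P2] + MARGIN
        if a < b + MARGIN:
            c[P1], c[P2] = (a + b) / 2 + MARGIN / 2, (a + b) / 2 - MARGIN / 2
    else:     # bit 0: force c[P1] <= c[P2] - MARGIN
        if a > b - MARGIN:
            c[P1], c[P2] = (a + b) / 2 - MARGIN / 2, (a + b) / 2 + MARGIN / 2
    return idctn(c, norm='ortho')

def extract_bit(block):
    """Blind extraction: compare the same coefficient pair."""
    c = dctn(block.astype(float), norm='ortho')
    return int(c[P1] > c[P2])

rng = np.random.default_rng(0)
blocks = rng.integers(0, 256, size=(16, 8, 8))
bits = rng.integers(0, 2, size=16)
marked = [embed_bit(b, bit) for b, bit in zip(blocks, bits)]
recovered = [extract_bit(m) for m in marked]
assert all(r == b for r, b in zip(recovered, bits))
```

Because extraction only compares the sign of a coefficient difference, the mark tends to survive mild, energy-preserving operations (the "innocent" ones above) while breaking under aggressive tampering, which is what makes the primitive semi-fragile.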

15 pages, 1853 KiB  
Article
Increasing the Speed of Multiscale Signal Analysis in the Frequency Domain
by Viliam Ďuriš, Sergey G. Chumarov and Vladimir I. Semenov
Electronics 2023, 12(3), 745; https://doi.org/10.3390/electronics12030745 - 2 Feb 2023
Cited by 4 | Viewed by 1709
Abstract
In the Mallat algorithm, calculations are performed in the time domain, and the number of wavelet coefficients is halved at each successive level to speed up the signal conversion. This paper presents an algorithm that increases the speed of multiscale signal analysis by using the fast Fourier transform. In this algorithm, calculations are performed in the frequency domain, which is why the authors call it multiscale analysis in the frequency domain. At each decomposition level, the wavelet coefficients are determined from the signal itself and can be computed in parallel, which reduces the conversion time; in addition, the scale factor between levels can be less than two. The Mallat algorithm uses non-symmetric wavelets, and increasing the reconstruction accuracy requires wavelets of large order, which increases the transformation time. In our algorithm, by contrast, the wavelets are symmetric (depending on the sample length), and the inverse wavelet transform can be faster by 6–7 orders of magnitude than direct numerical calculation of the convolution. At the same time, the quality of the analysis and the accuracy of signal reconstruction increase because the wavelet transform is strictly orthogonal.
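The identity this approach rests on is that convolution with a scaled wavelet becomes a pointwise product with the signal's FFT, and the products for different scales are independent of one another. A minimal sketch, assuming an unnormalised Mexican-hat wavelet defined directly by its Fourier transform (not the authors' wavelets):

```python
import numpy as np

def freq_domain_wavelet_transform(x, scales):
    """Wavelet coefficients at several scales computed entirely in the
    frequency domain: one FFT of the signal, then an independent
    (parallelisable) pointwise product per scale.  Uses a Mexican-hat
    wavelet given by its Fourier transform psi_hat(w) = w^2 exp(-w^2/2)."""
    X = np.fft.fft(x)
    w = 2 * np.pi * np.fft.fftfreq(len(x))
    coeffs = []
    for s in scales:    # each scale is independent -> trivially parallel
        psi_hat = (s * w)**2 * np.exp(-(s * w)**2 / 2)
        coeffs.append(np.fft.ifft(X * psi_hat).real)
    return np.array(coeffs)

rng = np.random.default_rng(1)
x = rng.standard_normal(256)
scales = [2.0, 4.0, 8.0]   # note: scale steps need not be powers of two
C = freq_domain_wavelet_transform(x, scales)

# The frequency-domain product equals circular convolution in time:
s = scales[0]
w = 2 * np.pi * np.fft.fftfreq(len(x))
h = np.fft.ifft((s * w)**2 * np.exp(-(s * w)**2 / 2)).real  # time-domain kernel
direct = np.array([sum(x[m] * h[(n - m) % len(x)] for m in range(len(x)))
                   for n in range(len(x))])
assert np.allclose(C[0], direct)
```

The direct convolution costs O(N^2) per scale, while the frequency-domain product costs O(N) after a single O(N log N) FFT, which is the source of the speedup the paper quantifies.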

Review

28 pages, 1601 KiB  
Review
Methods and Approaches for User Engagement and User Experience Analysis Based on Electroencephalography Recordings: A Systematic Review
by Christos Bellos, Konstantinos Stefanou, Alexandros Tzallas, Georgios Stergios and Markos Tsipouras
Electronics 2025, 14(2), 251; https://doi.org/10.3390/electronics14020251 - 9 Jan 2025
Cited by 2 | Viewed by 1038
Abstract
This review explores the intersection of user engagement and user experience studies with electroencephalography (EEG) analysis by investigating the existing literature in this field. User engagement describes the immediate, session-based experience of using interactive products and is commonly used as a metric to assess the success of games, online platforms, applications, and websites, while user experience encompasses the broader, longer-term aspects of user interaction. This review focuses on the use of EEG as a precise and objective method for gaining insight into user engagement. EEG recordings capture brain activity as waves that can be categorized into different frequency bands; by analyzing patterns of brain activity associated with attention, emotion, mental workload, and user experience, EEG provides valuable insights into user engagement. The review follows the PRISMA statement. The search process involved an extensive exploration of multiple databases, resulting in the identification of 74 relevant studies. The review covers the entire information flow of the experiments, including data acquisition, pre-processing, feature extraction, and analysis. By examining the current literature, it provides a comprehensive overview of the algorithms and processes used in EEG-based systems for studying user engagement and identifies potential directions for future research.
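A typical feature-extraction step in the systems this review surveys is computing relative power in the conventional EEG frequency bands from a Welch power spectral density. A minimal sketch with assumed band boundaries and window settings, not drawn from any specific study in the review:

```python
import numpy as np
from scipy.signal import welch

# Conventional EEG bands in Hz (boundaries vary across studies).
BANDS = {'delta': (0.5, 4), 'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30)}

def relative_band_power(eeg, fs):
    """Relative power per EEG band from a Welch power spectral density,
    a common feature in EEG-based engagement analysis."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)  # 2 s windows (assumed)
    total = psd[(freqs >= 0.5) & (freqs < 30)].sum()
    return {band: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for band, (lo, hi) in BANDS.items()}

# Synthetic "relaxed" EEG: a dominant 10 Hz alpha rhythm plus noise.
fs = 256
t = np.arange(30 * fs) / fs
rng = np.random.default_rng(2)
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))
rp = relative_band_power(eeg, fs)
assert max(rp, key=rp.get) == 'alpha'
```

Ratios of such band powers (for example beta over alpha plus theta) are among the engagement indices that recur across the reviewed studies.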
