Advances in Signal, Image and Video Processing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 February 2022) | Viewed by 12798

Special Issue Editor

School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Be’er-Sheva 84105001, Israel
Interests: image and video processing/compression; deep learning in various emerging applications in computer vision

Special Issue Information

Dear Colleagues,

Over the last few decades, we have witnessed rapid advancements in digital technologies and their applications in areas such as telecommunications, entertainment, medicine, and the automotive industry. These rapid developments have been made possible by advances in solid-state semiconductor technology.

These rapid technological developments also require new algorithms that support the new hardware and the transition from analog to digital signals. They have, in turn, driven the development of more powerful digital signal processors, parallelized central processing units, and graphics processing units.

In this Special Issue, we will focus on recent advances in signal, image and video processing. A special focus will be given to recent developments in deep neural networks and their applications in signal, image and video processing. We encourage prospective authors to submit their recent work on advancements in the applications of deep neural networks to multi-dimensional signal processing. Possible topics of interest include, but are not limited to, the following:

  • Noise reduction in signals, images and video;
  • Reconstruction from sparse measurements;
  • Signal, image and video coding and compression;
  • Feature extraction;
  • Fusion of the signals from heterogeneous sensors;
  • Upsampling and super-resolution;
  • Perception-based quality metrics;
  • Depth image and video denoising;
  • Depth video super-resolution;
  • Medical image processing;
  • New deep neural network architectures for processing images and video.

The goal of this Special Issue is to demonstrate advances in the above areas enabled by deep neural networks over conventional processing methods. Prospective authors are also encouraged to submit any work that presents a new application of deep neural networks in signal, image and video processing.

Dr. Ofer Hadar
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • signal processing
  • image processing
  • video processing
  • deep neural networks
  • depth processing
  • sensor fusion

Published Papers (6 papers)


Research


18 pages, 4492 KiB  
Article
Object Detection-Based Video Compression
by Myung-Jun Kim and Yung-Lyul Lee
Appl. Sci. 2022, 12(9), 4525; https://doi.org/10.3390/app12094525 - 29 Apr 2022
Cited by 4 | Viewed by 1974
Abstract
Video compression is designed to provide good subjective image quality, even at a high compression ratio, and video quality metrics have been used to show that the results can maintain a high Peak Signal-to-Noise Ratio (PSNR) even at high compression. However, object recognition on the decoder side is difficult because of the low image quality caused by high compression. Accordingly, to utilize object detection in a video decoder, good image quality must be provided for the detected objects within the given total bitrate. In this paper, object detection-based video compression is proposed in which the encoder allocates lower quantization parameters to the detected-object regions and higher quantization parameters to the background, so that better image quality is obtained for the detected objects on the decoder side. The proposed scheme combines two components: Versatile Video Coding (VVC) and object detection. The decoder performs decompression by receiving bitstreams in both the object-detection decoder and the VVC decoder, and the VVC encoder and decoder operate based on the information obtained from object detection. In a random access (RA) configuration, the average Bjøntegaard Delta (BD)-rates of Y, Cb, and Cr increased by 2.33%, 2.67%, and 2.78%, respectively. In an All Intra (AI) configuration, the average BD-rates of Y, Cb, and Cr increased by 0.59%, 1.66%, and 1.42%, respectively. In an RA configuration, the averages of ΔY-PSNR, ΔCb-PSNR, and ΔCr-PSNR for the object-detected areas improved to 0.17%, 0.23%, and 0.04%, respectively; in an AI configuration, they improved to 0.71%, 0.30%, and 0.30%, respectively. Subjective image quality was also improved in the object-detected areas.
(This article belongs to the Special Issue Advances in Signal, Image and Video Processing)
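The core idea of the paper, spending more bits on detected objects than on the background, can be sketched as a per-region quantization-parameter map. The function name, box format, and QP values below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def qp_map_from_detections(height, width, boxes, base_qp=37, roi_qp_offset=-6):
    """Build a per-pixel quantization-parameter map.

    `boxes` is a list of (x0, y0, x1, y1) detected-object rectangles.
    Pixels inside any box get a lower QP (finer quantization, better
    quality); the background keeps the base QP.
    """
    qp = np.full((height, width), base_qp, dtype=np.int32)
    for x0, y0, x1, y1 in boxes:
        qp[y0:y1, x0:x1] = base_qp + roi_qp_offset
    return qp

# One detected object in a 64x64 frame: its region gets QP 31,
# the background stays at QP 37.
qp = qp_map_from_detections(64, 64, [(8, 8, 24, 24)])
```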

13 pages, 3407 KiB  
Article
Efficient Reversible Data Hiding Scheme for AMBTC-Compressed Images
by Chia-Chen Lin, Thai-Son Nguyen, Chin-Chen Chang and Wen-Chi Chang
Appl. Sci. 2021, 11(15), 6741; https://doi.org/10.3390/app11156741 - 22 Jul 2021
Cited by 2 | Viewed by 1407
Abstract
Reversible data hiding has attracted significant attention from researchers because it can extract an embedded secret message correctly and recover the cover image without distortion. In this paper, a novel, efficient reversible data hiding scheme is proposed for absolute moment block truncation coding (AMBTC)-compressed images. The proposed scheme exploits the high correlation of neighboring values in the two mean tables of AMBTC-compressed images to further losslessly encode these values and create free space for containing a secret message. Experimental results demonstrated that the proposed scheme obtained a high embedding capacity and guaranteed the same PSNRs as the traditional AMBTC algorithm. In addition, the proposed scheme achieved a higher embedding capacity and a higher efficiency rate than some previous schemes while maintaining an acceptable bit rate.
(This article belongs to the Special Issue Advances in Signal, Image and Video Processing)
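For readers unfamiliar with AMBTC, the compression step the hiding scheme builds on can be sketched as follows. This is a minimal illustration of plain AMBTC, not the paper's data-hiding scheme:

```python
import numpy as np

def ambtc_encode_block(block):
    """AMBTC-encode one grayscale block.

    Returns (low_mean, high_mean, bitmap): pixels at or above the block
    mean map to 1 and are reconstructed with high_mean; the rest with
    low_mean. The two means are what the paper's mean tables store.
    """
    mean = block.mean()
    bitmap = block >= mean
    q = bitmap.sum()
    if q == 0 or q == block.size:      # flat block: one level suffices
        v = int(round(mean))
        return v, v, bitmap
    high = int(round(block[bitmap].mean()))
    low = int(round(block[~bitmap].mean()))
    return low, high, bitmap

def ambtc_decode_block(low, high, bitmap):
    """Rebuild the block from the two quantized means and the bitmap."""
    return np.where(bitmap, high, low)

block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 208],
                  [ 9, 14, 199, 212],
                  [12, 10, 202, 209]])
low, high, bm = ambtc_encode_block(block)
rec = ambtc_decode_block(low, high, bm)
```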

18 pages, 10820 KiB  
Article
Edge-Preserving Image Denoising Based on Lipschitz Estimation
by Bushra Jalil, Zunera Jalil, Eric Fauvet and Olivier Laligant
Appl. Sci. 2021, 11(11), 5126; https://doi.org/10.3390/app11115126 - 31 May 2021
Cited by 1 | Viewed by 2087
Abstract
The information transmitted in the form of signals or images is often corrupted with noise. Noise can arise from relative motion, noisy channels, measurement errors, and environmental conditions (rain, fog, changes in illumination, etc.), and results in the degradation of images acquired by a camera. In this paper, we address these issues, focusing mainly on the edges that correspond to abrupt changes in the signal or image. Preserving important structures such as edges, transitions, and textures is of both theoretical and perceptual significance, since the most significant information about the structure of the image or the type of the signal is often hidden inside these transitions. This paper introduces a method to reduce noise while preserving edges when performing Non-Destructive Testing (NDT). The method computes Lipschitz exponents of transitions to identify the level of discontinuity: continuous wavelet transform-based multi-scale analysis highlights the modulus maxima of the respective transitions, and the Lipschitz values estimated from these maxima are used as a measure to preserve edges in the presence of noise. Experimental results show that the smoothness-based heuristic approach in the spatial domain restored noise-free images from noisy data samples while preserving edges.
(This article belongs to the Special Issue Advances in Signal, Image and Video Processing)
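A minimal numerical sketch of the Lipschitz-exponent idea, assuming an L1-normalized derivative-of-Gaussian wavelet (an illustration, not the authors' exact implementation): the modulus maxima of a clean step edge stay roughly constant across scales (exponent ≈ 0), while noise maxima decay, which is what lets such methods separate edges from noise:

```python
import numpy as np

def wavelet_maxima(signal, scales):
    """Max modulus of an L1-normalized derivative-of-Gaussian wavelet
    transform at each scale: W(s) = s * d/dx (signal * gaussian_s)."""
    maxima = []
    for s in scales:
        x = np.arange(-4 * s, 4 * s + 1)
        g = np.exp(-x**2 / (2.0 * s * s))
        g /= g.sum()                      # L1-normalized smoothing kernel
        smooth = np.convolve(signal, g, mode="same")
        w = s * np.gradient(smooth)
        maxima.append(np.abs(w).max())
    return np.array(maxima)

def lipschitz_exponent(signal, scales=(2, 4, 8, 16)):
    """Slope of log2 |W(s)| versus log2 s estimates the Lipschitz
    exponent of the strongest singularity: about 0 for a clean step
    edge, negative for noise-like irregularities."""
    m = wavelet_maxima(signal, scales)
    return np.polyfit(np.log2(scales), np.log2(m), 1)[0]

step = np.concatenate([np.zeros(256), np.ones(256)])  # ideal edge
alpha = lipschitz_exponent(step)                      # close to 0
```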

19 pages, 4147 KiB  
Article
Hybrid Encoding Scheme for AMBTC Compressed Images Using Ternary Representation Technique
by Tung-Shou Chen, Jie Wu, Kai Sheng Chen, Junying Yuan and Wien Hong
Appl. Sci. 2021, 11(2), 619; https://doi.org/10.3390/app11020619 - 10 Jan 2021
Cited by 1 | Viewed by 1647
Abstract
Absolute moment block truncation coding (AMBTC) is a lossy image compression technique aimed at low computational cost, and it has been widely studied. Previous studies have improved the performance of AMBTC; however, they often over-describe the details of image blocks during encoding, causing an increase in bitrate. In this paper, we propose an efficient method that improves compression performance by classifying image blocks into flat, smooth, and complex blocks according to their complexity. Flat blocks are encoded by their block means, while smooth blocks are encoded by a pair of adjusted quantized values and an index pointing to one of the k representative bitmaps. Complex blocks are encoded by three quantized values and a ternary map obtained by a clustering algorithm. Ternary indicators are used to specify the encoding cases. In our method, the details of most blocks can be retained without significantly increasing the bitrate. Experimental results show that, compared with prior works, the proposed method achieves higher image quality at a better compression ratio for all of the test images.
(This article belongs to the Special Issue Advances in Signal, Image and Video Processing)
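The flat/smooth/complex block classification at the heart of the method can be sketched as below. The max-minus-min complexity measure and the threshold values are illustrative placeholders; the paper's actual criteria may differ:

```python
import numpy as np

def classify_block(block, flat_thresh=2.0, smooth_thresh=15.0):
    """Three-way complexity split of an image block.

    flat    -> encode with the block mean only
    smooth  -> two quantized values + index into k representative bitmaps
    complex -> three quantized values + ternary map
    """
    spread = block.max() - block.min()   # simple complexity measure
    if spread <= flat_thresh:
        return "flat"
    if spread <= smooth_thresh:
        return "smooth"
    return "complex"

flat_block = np.full((4, 4), 128.0)
smooth_block = flat_block + np.linspace(0, 8, 16).reshape(4, 4)
complex_block = np.arange(16, dtype=float).reshape(4, 4) * 16
```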

23 pages, 5486 KiB  
Article
Block Compressive Sensing Single-View Video Reconstruction Using Joint Decoding Framework for Low Power Real Time Applications
by Mansoor Ebrahim, Syed Hasan Adil, Kamran Raza and Syed Saad Azhar Ali
Appl. Sci. 2020, 10(22), 7963; https://doi.org/10.3390/app10227963 - 10 Nov 2020
Cited by 1 | Viewed by 1316
Abstract
Several real-time visual monitoring applications, such as surveillance, mental state monitoring, driver drowsiness detection, and patient care, require equipping high-quality cameras with wireless sensors to form visual sensor nodes, which creates an enormous amount of data that must be managed and transmitted at the sensor node. Moreover, as the sensor nodes are battery-operated, power utilization is one of the key concerns that must be considered. One solution to this issue is to reduce the amount of data to be transmitted using specific compression techniques. Conventional compression standards are based on complex encoders (which require high processing power) and simple decoders, and are thus not well suited to battery-operated visual sensor network (VSN) applications with primitive hardware. In contrast, compressive sensing (CS), a distributed source coding mechanism, has transformed the standard coding mechanism: it is based on the idea of a simple encoder (transmitting fewer data with low processing requirements) and a complex decoder, and is considered a better option for VSN applications. In this paper, a CS-based joint decoding (JD) framework using frame prediction (based on keyframes) and residual reconstruction for single-view video is proposed. The idea is to exploit the redundancies present in the key and non-key frames to produce side information that refines the quality of the non-key frames. The proposed method consists of two main steps, frame prediction and residual reconstruction, with the final reconstruction obtained by adding the residual frame to the predicted frame. The proposed scheme was validated on various arrangements, the association between correlated frames and compression performance was analyzed, and various arrangements of the frames were studied to select the one that produces better results. The comprehensive experimental analysis proves that the proposed JD method performs notably better than the independent block compressive sensing scheme at different subrates for various video sequences with low, moderate, and high motion content. The proposed scheme also outperforms conventional CS video reconstruction schemes at lower subrates. Further, the proposed scheme was quantized and compared with conventional video codecs (DISCOVER, H.263, H.264) at various bitrates to evaluate its efficiency (rate-distortion, encoding, decoding).
(This article belongs to the Special Issue Advances in Signal, Image and Video Processing)
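The encoder-light/decoder-heavy split the abstract describes can be illustrated with the basic block-CS measurement step. The minimum-norm (pseudoinverse) reconstruction below is only the simplest baseline standing in for the paper's joint decoder, which additionally exploits inter-frame correlation and sparsity:

```python
import numpy as np

rng = np.random.default_rng(0)

def bcs_measure(block, phi):
    """Block compressive sensing at the encoder: y = Phi @ x, where Phi
    is a random Gaussian measurement matrix of shape (m, n), m < n.
    The encoder only does one matrix-vector product per block."""
    return phi @ block.reshape(-1)

def bcs_min_norm_reconstruct(y, phi):
    """Minimum-norm reconstruction via the pseudoinverse. Practical CS
    decoders instead solve a sparsity-regularized inverse problem."""
    return np.linalg.pinv(phi) @ y

n = 64                                   # 8x8 block, vectorized
m = 32                                   # subrate m/n = 0.5
phi = rng.standard_normal((m, n)) / np.sqrt(m)
x = rng.standard_normal(n)               # stand-in block content
y = bcs_measure(x.reshape(8, 8), phi)
x_hat = bcs_min_norm_reconstruct(y, phi)
```

Any reconstruction consistent with the measurements satisfies `phi @ x_hat == y`; the quality gap between decoders lies in how well they pick a solution within that constraint set.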

Review


32 pages, 1751 KiB  
Review
A Comparative Analysis of Arabic Text Steganography
by Reema Thabit, Nur Izura Udzir, Sharifah Md Yasin, Aziah Asmawi, Nuur Alifah Roslan and Roshidi Din
Appl. Sci. 2021, 11(15), 6851; https://doi.org/10.3390/app11156851 - 26 Jul 2021
Cited by 11 | Viewed by 3237
Abstract
Protecting sensitive information transmitted via public channels is a significant issue faced by governments, militaries, organizations, and individuals. Steganography protects secret information by concealing it in a transferred object such as video, audio, image, text, network, or DNA. As text uses low bandwidth, it is commonly used by Internet users in their daily activities, resulting in a vast amount of text messages sent daily as social media posts and documents. Accordingly, text is an ideal object for steganography, since hiding a secret message in text makes it difficult for an attacker to detect the hidden message among the massive text content on the Internet. Text steganography exploits the characteristics of a language. Despite the richness of the Arabic language in linguistic characteristics, only a few studies have been conducted on Arabic text steganography. To draw further attention to the prospects of Arabic text steganography, this paper reviews the classifications of these methods from their inception. For analysis, this paper presents a comprehensive study based on the key evaluation criteria (i.e., capacity, invisibility, robustness, and security). It opens new areas for further research based on the trends in this field.
(This article belongs to the Special Issue Advances in Signal, Image and Video Processing)
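As a concrete, if toy, instance of format-based text steganography (not one of the surveyed Arabic-specific methods, which instead exploit language features such as kashida and diacritics), secret bits can be hidden in invisible zero-width Unicode characters:

```python
# Zero-width space encodes bit 0, zero-width non-joiner encodes bit 1.
ZW0, ZW1 = "\u200b", "\u200c"

def hide(cover, secret):
    """Append each bit of the secret as an invisible zero-width
    character; the cover text renders unchanged on screen."""
    bits = "".join(f"{ord(c):08b}" for c in secret)
    return cover + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def reveal(stego):
    """Filter out the zero-width characters and reassemble the bytes."""
    bits = "".join("1" if c == ZW1 else "0"
                   for c in stego if c in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2))
                   for i in range(0, len(bits), 8))

stego = hide("an innocuous cover sentence", "hi")
recovered = reveal(stego)
```

Capacity here is fixed at 1 bit per inserted character, which is why the surveyed evaluation criteria (capacity, invisibility, robustness, security) trade off differently for each hiding method.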
